text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
I needed to implement Devise and JWT using Rails (Rails 5), and I thought, how hard could this be? Boy was I naive... Now there is a lot of information out there on how to do this, but each resource was using a different method and nothing really seemed to work. Well, I've finally figured it out and I want to share it with the world for 2 reasons:
- It may save someone days of researching and trial-and-error.
- Selfishly, I want to know where I can go to look it up for next time
Warning, this post will assume some knowledge of Rails and a few popular gems, it's a little bit more advanced than my normal stuff so far. So here we go.
How does it work?
First thing's first, there are a few ways this can be handled. There is (was?) a Devise-JWT gem that integrated JWT and worked very similarly to Devise's regular flow. When I tried to go that route, it did not work and I wasted many, many hours troubleshooting. I did eventually succeed in registration, but the sign_in functionality was still not working. It's very probable that this was do to user error, but regardless, I found my way to be much simpler.
So basically, you can really think about this in two steps. Step 1 is the standard devise-driven authentication. Step 2 is passing the JSON Web Token back and forth.
Implementation
Project Generation
First, let's build our project. Since we don't need the full Rails functionality because we'll be setting up a separate front-end, we can use the --api flag
rails new example-project --api. One of the effects of this flag is that the project will be set up without rails sessions - this is important.
Gemfile
Once we've built our project, first thing we'll do is build out the Gemfile. For the purposes of our authentication flow, we'll need 3 gems
devisefor actual authentication
jwtfor handling the JSON Web Tokens we'll be passing back and forth.
bcryptfor password-related unit testing - this only needs to be included in the test environment because otherwise it's included in Devise.
- BONUS: I pretty much always add
pryto help with debugging, and it comes in real handy when I need to check what params are coming over.
Devise Initializer
To configure Devise, we'll run
rails generate devise:install from our console to create an initializer file: config/initializers/devise.rb. The good news is that we can largely keep the default configuration; the only special thing we need to do is to set
config.skip_session_storage = [:http_auth] (about quarter way down the file).
User Model
Now we need to set up our user model. Devise has a special way to do this by running
rails generate devise User. This command creates a User model and prefills it with some Devise functionality, it also creates a database 'devise_create_users' migration, and adds a line to the routes file:
devise_for :users which creates routes to the default Devise Controllers.
Once the User model is created, we can finish configuring Devise by selecting which modules we want and adding it after the
devise macro. For my app, I just used the basic defaults:
devise :database_authenticatable, :registerable
One last thing before we can call the User model ready. Since a given JSON Web Token (JWT) will be associated to a given user, it makes sense to think of a user "creating" their token. Additionally, the goal is to get as much of the app's logic in the models, so to address both of these concerns we will place the logic of creating a JWT in the User model. Here we use the JWT gem to encode a token containing only the user's id. How can the id be the only thing we need you ask? Thinking back to our "How Does It Work" Diagram above, remember that the user will need to pass in their credentials as parameters at the sign-in page and, if successful, the server will issue an encrypted token for them. This is that token, so it will only be used to authenticate that the user is who they say they are once they've already logged in and they try to make a subsequent call to the API. Thus, we only need a way to identify the user: their unique
id attribute works perfectly for this purpose.
def generate_jwt JWT.encode({id: id, exp: 60.days.from_now.to_i}, Rails.application.secrets.secret_key_base) end
Routes
As stated above, the
rails generate devise User generator will create a route for us automatically that looks like this:
devise_for :users. For our purposes, the default controllers aren't going to work on their own because they are meant to operate via sessions, which we will not have in our api-only implementation. So, we'll need to overwrite some of the default functionality - to do this, we need to point to custom registrations and sessions controllers:
devise_for :users, controllers: { registrations: :registrations, sessions: :sessions }
Database
Also stated above, the
rails generate devise User generator will create our database migration for us, so the only change we need to make is uncommenting any non-default modules you added in your User model, as well as adding any custom fields you may need. Once you're done, run
rake db:migrate and we're done here.
Intermission (Coffee Break)
We've gotten through a lot already, but there's quite a bit more to come, so before we get into the controllers, which contain most of our logic and functionality, take a quick breather and grab a fresh cup of coffee. If you're following along, this is a good time to double check that everything is correct in your app so far...
Ready to continue? Okay, let's do this!
Controllers
There are three controllers that we're going to be concerned with for this, and each of these 3 controllers will have a specific job from the diagram at the top of this article.
- The Application Controller is where we will process a JWT when a user sends a request to our API. It's vital to keep in mind that the Application Controller is not concerned with credentials - it simply checks for a valid JWT.
- The Registrations Controller is where a user will create his/her credentials, and it will assign the JWT to the user once complete.
- The Sessions Controller is where a user will authenticate his/her credentials and it will assign the JWT to the user if successful.
Application Controller < ActionController::API
We will set up our JWT processing functionality first because, once a JWT is assigned, we'll want to check to make sure it's working correctly. Since we know that we will be passing in JSON, we will start off the Application Controller with the following line
respond_to :json. Since all other controllers inherit from the Application Controller, we only need to do this for this controller - it will automatically be passed down to the rest. This is also where we'll want to provide our app with similar private methods to what the standard Devise implementation would give us, so let's set up our authentication method
authenticate_user! as well as a
signed_in? and
current_user method, then we'll look at how to get them to work.
For our
authenticate_user!, we know that we want this to reject a user as unauthorized unless they are correctly signed in. We also know we'll eventually have a
signed_in? method available, so let's go ahead and proceed using that:
def authenticate_user!(options = {}) head :unauthorized unless signed_in? end
But for this to work, of course, we need to define
signed_in?. Default Devise does this by checking the session for the presence of a user_id. We won't have a session for this, but what we will have is a JWT. We now know that we need a method to somehow pull a user's
id out of the JWT and return it. Let's call it
@current_user_id and use that future value in our signed_in? method like so:
def signed_in? @current_user_id.present? end
While we're at it, since we know that we'll have a
@current_user_id to work with, let's use it to define our current_user method too. We need this to take the id and search our database for a corresponding user record:
def current_user @current_user ||= super || User.find(@current_user_id) end
That's easy enough, essentially just copying the Devise methods, now we just have to find a way to extract that id from a passed JWT. One final reminder: remember that this controller is NOT meant to make sure that the user authenticates against his/her credentials, it's just to see whether they are signed in or not by looking at the JWT. If a user HAS a valid JWT, it means that they have correctly authenticated their credentials and the server gave them one. With that in mind, this is actually super simple using the
jwt gem:
def process_token jwt_payload = JWT.decode(request.headers['Authorization'].split(' ')[1], Rails.application.secrets.secret_key_base).first @current_user_id = jwt_payload['id'] end
That will work, assuming that there IS an Auth header, and that it has a valid JWT. I'm not willing to bet that either of these are always going to happen, so let's put some error handling around it. We want to throw an error if an invalid JWT is sent, but not if there is no Auth header sent at all:
def process_token if request.headers['Authorization'].present? begin jwt_payload = JWT.decode(request.headers['Authorization'].split(' ')[1].remove('"'), Rails.application.secrets.secret_key_base).first @current_user_id = jwt_payload['id'] rescue JWT::ExpiredSignature, JWT::VerificationError, JWT::DecodeError head :unauthorized end end end
There! Now there's just one last step. We need to make sure that the token is processed before we try to take any other action. To do this, we just need to add
before_action :process_token underneath
respond_to :json. Now whenever our app is called, it will process the token (if provided) and then take whatever action is required.
Registrations Controller < Devise::RegistrationsController
Okay, next step is to provide our app the ability to register a new user and assign them a JWT to be passed to our Application Controller for processing. As long as we're just using the default attributes for Devise (and calling them "sign_up_params", we don't need to worry about whitelisting parameters because Devise is already doing it for us. The reason we need to have our own controller is so that we can have the user instance build its token for the controller to deliver it. On the client side, we would use this returned token to store in a
httpOnly cookie, (or whatever other storage option you prefer).
def create user = User.new(sign_up_params) if user.save token = user.generate_jwt render json: token.to_json else render json: { errors: { 'email or password' => ['is invalid'] } }, status: :unprocessable_entity end end
Sessions Controller < Devise::SessionsController
Finally, the last step in our implementation! Just gotta set up the Sessions Controller so that a user can return and sign back in, and it works the same way as the Registrations Controller. The user will submit params through the front-end, including their email, which our API will use to query the database and return our user instance. Then we'll validate that the password they provided matches the stored password and, if successful, we will distribute a JWT:
def create user = User.find_by_email(sign_in_params[:email]) if user && user.valid_password?(sign_in_params[:password]) token = user.generate_jwt render json: token.to_json else render json: { errors: { 'email or password' => ['is invalid'] } }, status: :unprocessable_entity end end
Wrap-Up
So there it is. This is how I was finally able to get JWT working with server-side authentication using Devise, the de-facto standard for Rails. Once I realized that JWT is really a separate process from authenticating credentials, it wasn't so bad to figure out. Let me know what you think in the comments. Is there a better way to combine these two gems? Are there major issues with this implementation? If you've successfully used
devise-jwt, what is the secret??
Thanks so much for reading and hanging in there to the end! Below this is just the final code (minus Gemfile and Initializer), in case you want to see it all in one place.
Full Code:
# User.rb class User < ApplicationRecord # Include default devise modules. Others available are: # :confirmable, :recoverable, :rememberable, :validatable, :lockable, :timeoutable, :trackable and :omniauthable devise :database_authenticatable, :registerable def generate_jwt JWT.encode({id: id, exp: 60.days.from_now.to_i}, Rails.application.secrets.secret_key_base) end end # Routes.rb Rails.application.routes.draw do devise_for :users, controllers: { registrations: :registrations, sessions: :sessions } root to: "home#index" end # Database Schema create_table "users", force: :cascade do |t| t.string "email", default: "", null: false t.string "encrypted_password", default: "", null: false t.datetime "created_at", null: false t.datetime "updated_at", null: false t.index ["email"], name: "index_users_on_email", unique: true end # ApplicationController.rb class ApplicationController < ActionController::API respond_to :json before_action :process_token private # Check for auth headers - if present, decode or send unauthorized response (called always to allow current_user) def process_token if request.headers['Authorization'].present? begin jwt_payload = JWT.decode(request.headers['Authorization'].split(' ')[1], Rails.application.secrets.secret_key_base).first @current_user_id = jwt_payload['id'] rescue JWT::ExpiredSignature, JWT::VerificationError, JWT::DecodeError head :unauthorized end end end # If user has not signed in, return unauthorized response (called only when auth is needed) def authenticate_user!(options = {}) head :unauthorized unless signed_in? end # set Devise's current_user using decoded JWT instead of session def current_user @current_user ||= super || User.find(@current_user_id) end # check that authenticate_user has successfully returned @current_user_id (user is authenticated) def signed_in? @current_user_id.present? end end # RegistrationsController.rb class RegistrationsController < Devise::RegistrationsController def create user = User.new(sign_up_params) if user.save token = current_user.generate_jwt render json: token.to_json else render json: { errors: { 'email or password' => ['is invalid'] } }, status: :unprocessable_entity end end end # SessionsController.rb class SessionsController < Devise::SessionsController def create user = User.find_by_email(sign_in_params[:email]) if user && user.valid_password?(sign_in_params[:password]) token = current_user.generate_jwt render json: token.to_json else render json: { errors: { 'email or password' => ['is invalid'] } }, status: :unprocessable_entity end end end
Discussion (5)
Your article will help me a lot, because I have to add JWT handling to a Rails application that already uses Devise. Thank you! Just one thing:
It is strongly discouraged to save the token in localStorage due to XSS attacks. Read more about it here or search for articles on that topic on dev.to (there are a few). A better solution is to use an
httpOnlycookie.
Glad it's helpful! It's worked for me twice so far, but if you run into any problems and have to solve around them, please add another comment about it.
Also, thanks for the suggestion - I've gone ahead and made the change in the article. I haven't had the chance to dig as much into client-side storage strategies as I'd like, so I'm really glad you called that out.
One other callout:
In real-world apps, you may need to look into more securely logging out a user.
It's on my radar to research as soon as I get the chance, and I'll post about it once I do. But as an example for the mean-time, I've briefly read about adding a database table for blacklisted tokens so that the user can't make calls with an old token without logging back in, or conversely, adding a whitelisted token column to your users table. A simpler option may be to just set the JWT to expire after a much shorter time (like 1 day or less).
hey Daniel! I was just going through this last week and went through a tutorial that really helped out. I made a git repo with a detailed README describing what I did differently from the tutorial and then beyond it how you could store tokens client side: github.com/dakotalmartinez/rails-d.... As far as localStorage goes for storing tokens, from what I've seen there's actually quite a bit of debate there. Some people say it's totally bad and should be avoided, others say that storing the token in a cookie only makes it slightly more difficult for an attacker to exploit XSS vulnerabilities. If an attacker can run JS on your domain, they can use the cookie to send requests to your API whether or not they can access it via the JS it can be included with a fetch request. Moral of the story, XSS is bad, so don't take user input and put it straight into innerHTML = without encoding/escaping it. portswigger.net/web-security/cross...
Hi Dakota, thank you for posting this link!!
Your tutorial looks great. I haven't had a chance to follow along with my own code yet, but it seems to be exactly what I needed about 8 months ago when I was trying to implement Devise-JWT 😆
A lot of the content looks very familiar, so it will be interesting to dig in and see where I went wrong. Could even be due to Rails version (I'm still on 5)...maybe it's time for me to finally update. | https://practicaldev-herokuapp-com.global.ssl.fastly.net/dhintz89/devise-and-jwt-in-rails-2mlj | CC-MAIN-2021-10 | refinedweb | 2,907 | 63.09 |
This guide covers the basics of navigation within a React Storefront app.
Use the
react-storefront/Link component to render all links, including those that to point to pages outside of the PWA.
By default, clicking a Link element results in client-side navigation. Here's an example:
import Link from 'react-storefront/Link' <Link to="/about">About Us</Link>
You can override this behavior and cause the browser to reload by adding a
server prop:
<Link to="/some/non/pwa/page" server>Some page outside the PWA</Link>
By default all links are processed client-side in React Storefront. If you want to override this behavior, for example when displaying
<a> tags from CMS content,
you can force any link to reload the page by adding a
data-reload="on" attribute.
Sometimes, when porting an existing app to a PWA, you need to preserve a URL scheme that doesn't indicate the type of page being rendered. For example, does
/shirts point to a category or subcategory? It would be ideal if we could change the URL to something like
/category/shirts, but often times this is not possible because of the negative effects on SEO. We can get around this by setting app.page using Link's
state prop:
<Link to="/shirts" state={{ page: 'Category' }}>Shirts</Link>
When clicked, the object specified in the
state prop will be immediately applied to the app state. This is essential for displaying the correct skeleton while data is being fetched from the server.
You can inject the
history object into your component to navigate programmatically:
import React, { Component } from 'react' import Button from '@material-ui/core/Button' import { inject } from 'mobx-react' @inject('history') export default class MyComponent extends Component { render() { <Button onClick={this.onClick}>Go to Home</Button> } onClick = () => { this.props.history.push('/') } }
You can also access
history via
window.moov.history. This is helpful for changing the location from within a model action. | https://pwa.moovweb.com/v6.9.1/guides/navigation | CC-MAIN-2019-22 | refinedweb | 324 | 52.09 |
In this problem, we are given a number n. Our task is to create a program to find Star number in C++.
Star Number is a special number that represents a centered hexagram (sixpoint star).
Some start numbers are 1, 13, 37, 73, 121.
Let’s take an example to understand the problem
n = 5
121
To find the nth star number we will use the formula.
Let’s see the general formula for the star number.
n = 2 -> 13 = 12 + 1 = 6(2) + 1 n = 3 -> 37 = 36 + 1 = 6(6) + 1 n = 4 -> 73 = 72 + 1 = 6(12) + 1 n = 5 -> 121 = 120 + 1 = 6(20) + 1
For the above terms we can derive nth term.
Nth term = 6(n * (n-1)) + 1.
Validating it,
For n = 5, 6( 5 * 4) + 1 = 121
Program to illustrate the working our solution
#include <iostream> using namespace std; int findStarNo(int n){ int starNo = ( 6*(n*(n - 1)) + 1 ); return starNo; } int main(){ int n = 4; cout<<"The star number is "<<findStarNo(n); return 0; }
The star number is 73 | https://www.tutorialspoint.com/program-to-find-star-number-in-cplusplus | CC-MAIN-2021-43 | refinedweb | 179 | 78.18 |
In this tutorial, we'll learn how to create a basic blog using Vue CLI, Apollo Client and GraphCMS. The complete code for this example is available here
This guide assumes you have some knowledge about Vue and GraphQL. If you don't yet, we highly recommend you check out this to learn about GraphQL and this to learn about Vue.
What we'll need to get started is the
Vue CLI
npm i -g vue-cli
Next up, we're going create our awesome app
vue init webpack-simple graphcms-starter-blog
Vue will ask us a couple of questions about our app's name, description, license type and author. You can go ahead and just
enter through them, then answer
no to the last
Use sass? question.
Let's
cd to our app and install all dependencies
cd graphcms-starter-blog yarn
Now open up the code in the editor of your choice. You can see the main entry point to our application, which is
src/main vue-apollo apollo-cache-inmemory apollo-link-http graphql-tag
That's quite a lot of packages, don't you worry though, we're gonna look at what each of them does.
apollo-clientis our main hero here, we'll use it to create our GraphQL client using ApolloClient.
vue-apollowill be used to install Apollo plugin in our Vue app and create a provider which provides the Apollo functionality to all the other components in the application without passing it explicitly.
vue-router for routing and
vue-markdown to parse the markdown we get from GraphCMS post's
content field.
yarn add vue-router vue-markdown
Alright, we now have everything we need to start hacking! Let's come back to our
main.js and add the following lines at the top of it:
import { ApolloClient } from 'apollo-client' import { HttpLink } from 'apollo-link-http' import { InMemoryCache } from 'apollo-cache-inmemory' import VueApollo from 'vue-apollo' import router from './router.js'
Don't worry about the
router import not existing, we'll get to it in a moment.
Now we can initialize our Apollo Client! We will also tell Vue to install the
vue-apollo plugin for us. To do so, add this piece after the imports and replace
YOUR_GRAPHCMS_API with your
Endpoint's URI which you can find under
Dashboard -> Endpoint in your GraphCMS project.
const GRAPHCMS_API = 'YOUR_GRAPHCMS_API' const apolloClient = new ApolloClient({ link: new HttpLink({ uri: GRAPHCMS_API }), cache: new InMemoryCache(), }) Vue.use(VueApollo)
Next, we need to create the
apolloProvider and include it in our root components. A provider holds the apollo client instances that can then be used by all the child components. We will also add the imported
router to use our
vue-router.
This is how the rest of our
main.js should look like after the modification
const apolloProvider = new VueApollo({ defaultClient: apolloClient, }) new Vue({ el: '#app', provide: apolloProvider.provide(), router, template: '<App/>', components: { App }, })
Styling is the least important part of it all and we've prepared the most basic set of styles to get you started. They will be included at the end of each component's overview.
router.js
Since the only purpose of this file is to provide the routing for our application, we won't be digging too much into it, just pasting the following in the
router.js will suffice:
import Vue from 'vue' import Router from 'vue-router' import Home from './components/Home.vue' import About from './components/About.vue' import Post from './components/Post.vue' Vue.use(Router) export default new Router({ mode: 'history', routes: [ { path: '/', name: 'home', component: Home, }, { path: '/about', name: 'about', component: About, }, { path: '/post/:slug', name: 'post', component: Post, }, ], })
If you want to read on how
vue-router works, you can find the documentation here.
App.vue
In our example, the purpose of
App is mainly related to routing and displaying a header at the top of our application so we won't go into details here aswell. Just go ahead and replace it's content with this:
<template> <div id="app"> <app-header /> <main> <router-view /> </main> </div> </template> <script> import AppHeader from './components/AppHeader.vue'; export default { name: 'app', components: { AppHeader }, }; </script> >
Let's make a
components folder in our
src directory and create 4 components:
AppHeader.vue
Home.vue
About.vue
Post.vue
AppHeader.vue
Similar to our
App,
AppHeader is only here to provide routing for our application. We can go ahead and paste the code below into our file:
<template> <header> <h1>GraphCMS Starter blog</h1> <nav> <router-link exactHome</router-link> <router-linkAbout</router-link> </nav> </header> </template> <script> export default { name: 'AppHeader', }; </script>
Styles for
App >
Home.vue
This is the homepage of our application also responsible for showing the list of posts and a
Load more pagination button. Let's go through it bit by bit.
First, let's create our main
Home template
<template> <div> <section v- <ul> <li v- <router-link : <div class="placeholder"> <img : </div> <h3>{{post.title}}</h3> </router-link> </li> </ul> <button v- {{loading ? 'Loading...' : 'Show more'}} </button> </section> <h2 v-else> Loading... </h2> </div> </template>
Now, we add the component's logic:
<script> import gql from 'graphql-tag'; const POSTS_PER_PAGE = 2; const posts = gql` query posts($first: Int!, $skip: Int!) { posts(orderBy: dateAndTime_DESC, first: $first, skip: $skip) { id slug title dateAndTime coverImage { handle } } } `; export default { name: 'HomePage', data: () => ({ loading: 0, posts: null, postCount: null, }), apollo: { $loadingKey: 'loading', posts: { query: posts, variables: { skip: 0, first: POSTS_PER_PAGE, }, }, postCount: { query: gql` { postsConnection { aggregate { count } } } `, update: ({ postsConnection }) => postsConnection.aggregate.count, }, }, methods: { loadMorePosts() { this.$apollo.queries.posts.fetchMore({ variables: { skip: this.posts.length, }, updateQuery: (previousResult, { fetchMoreResult }) => { if (!fetchMoreResult) { return previousResult; } return Object.assign({}, previousResult, { posts: [...previousResult.posts, ...fetchMoreResult.posts], }); }, }); }, }, }; </script>
Your project has to have more posts than the
POSTS_PER_PAGE indicates or you wont see the
Load more button.
There are quite a few things going on in here, let's go through them bit by bit:
We start with importing the
gql module and using it to define the query we'd like to fetch the data with. We also create a
POSTS_PER_PAGE constant to specify how many posts we'd like to have on every page.
import gql from 'graphql-tag' const POSTS_PER_PAGE = 2 const posts = gql` query posts($first: Int!, $skip: Int!) { posts(orderBy: dateAndTime_DESC, first: $first, skip: $skip) { id slug title dateAndTime coverImage { handle } } } `
Next up, we name our component and tell Vue what data do we want it to have.
export default { name: 'HomePage', data: () => ({ loading: 0, posts: null, postCount: null }), apollo: { $loadingKey: 'loading', posts: { query: posts, variables: { skip: 0, first: POSTS_PER_PAGE } }, postCount: { query: gql`{ postsConnection { aggregate { count } } }`, update: ({ postsConnection }) => postsConnection.aggregate.count } }
Here we say that we'd like:
As initial data:
a variable
From apollo:
a variable
allPosts produced from getting the allPosts
query data with
variables
a variable
postCount produced from getting the _allPostsMeta
query data and extracting the
count of our posts with
update.
You can think of
loading as a simple conditional that tells us what's the current state of the data fetching process. When
loading is true, loading message will be rendered. As soon as the loading is finished, the message will be replaced with our component.
The last part of our default export is the list of methods:
methods: { loadMorePosts () { this.$apollo.queries.posts.fetchMore({ variables: { skip: this.posts.length }, updateQuery: (previousResult, { fetchMoreResult }) => { if (!fetchMoreResult) { return previousResult } return Object.assign({}, previousResult, { posts: [...previousResult.posts, ...fetchMoreResult.posts] }) } }) } }
In our case, the list contains only 1 method
loadMorePosts, which we'll use to load more posts with our
<button />. The function returns the result of calling the fetchMore method on our
posts query result.
fetchMore allows us to do a new GraphQL query and merge the result into the original result.
In this case we want our button click to change the number of posts we
skip in the new query by the amount _reasons*. (The video talks about React but those rules apply here aswell)
And boom, we now have a complete
Home component with a neat pagination button! Also, if you haven't already, we strongly encourage you read more about pagination in Apollo and GraphQL in general. This post are great places to start.
Styles for
Home >
Post.vue
If you managed to follow what happened in the
Home component, this one is much simpler and requires little more explaining. You can go ahead and paste this into our
Post.vue file:
<template> <h2 v- Loading... </h2> <div v-else> <article> <h1>{{post.title}}</h1> <div class="placeholder"> <img : </div> <vue-markdown>{{post.content}}</vue-markdown> </article> </div> </template> <script> import gql from 'graphql-tag'; import VueMarkdown from 'vue-markdown'; const post = gql` query post($id: ID!) { post(where: { id: $id }) { id slug title coverImage { handle } content dateAndTime } } `; export default { name: 'PostPage', data: () => ({ loading: 0, }), apollo: { $loadingKey: 'loading', post: { query: post, variables() { return { id: this.$route.params.slug, }; }, }, }, components: { VueMarkdown }, }; </script>
As you can see, it only gets simpler now. All we do is pass the data from apollo to our Vue component and pass
slug that we get from
vue-router's
params to the query. This is because when we enter the page
/post/:slug we'd like our
post variable to be the post matching the
slug which in our case is the
post ID.
You can read more about
vue-router route params here
Styles for
<style scoped> .placeholder { height: 366px; background-color: #eee; } </style>
About.vue
Last piece of the puzzle is the
About component that will display the list of blog authors. Go ahead and paste this in:
<template> <h2 v- Loading... </h2> <div v-else> <div v- <div class="author"> <div class="info-header"> <img : <h1>Hello! My name is {{author.name}}</h1> </div> <p>{{author.bibliography}}</p> </div> </div> </div> </template> <script> import gql from 'graphql-tag'; export const authors = gql` query authors { authors { id name bibliography avatar { handle } } } `; export default { name: 'AboutPage', data: () => ({ loading: 0, authors: null, }), apollo: { $loadingKey: 'loading', authors: { query: authors, }, }, }; </script>
As you can see, there's nothing new going on in here, the
About component is the simplest of them all.
Styles for
About:
<style scoped> .author { margin-bottom: 72px; } .info-header { text-align: center; } img { height: 120px; width: auto; } </style>
With all of this in place, we can go ahead and launch our application with
yarn dev
Congratulations! Our basic Vue Apollo blog is now ready, hack away! | https://graphcms.com/docs/tutorials/beginners-guide-with-vue/ | CC-MAIN-2019-43 | refinedweb | 1,742 | 62.58 |
I've found this piece of code in a pull request someone made to one of my gems:
source = HTTParty.get(PoliticosBR::DEPUTADOS_URL)
tempfile = Tempfile.new('deputados.xls').tap do |f|
f.write(source.to_s.force_encoding('UTF-8'))
end
#tap
#tap
#tap is defined on
Object
It was introduced in Ruby 1.9. It yields self to the block and then returns self. I think an illustrative example is when it's used to return an object from a method.
You could do this.
def foo a = [] a.push(3) a end def foo [].tap do |a| a.push(3) end end
In the first example the array a is returned explicitly and in the second tap is being used to yield the block to self and then return self. | https://codedump.io/share/mvaxonwGFhxc/1/ruby-tempfiletap-what-class-defines-this-method-and-what-is-it-for | CC-MAIN-2016-44 | refinedweb | 130 | 78.25 |
Run this program why does C4 = 4 if you input 2 4 0, and not 5 as it should be?Run this program why does C4 = 4 if you input 2 4 0, and not 5 as it should be?Code:#include <iostream> #include <stdio.h> #include <math.h> using namespace std; int main() { int A, B, C, A1, B1, C1, A2, B2, C2, A3, B3, C3, A4, B4, C4 ; cout << "Welcome to Maths Cheat. A program developed by Kej. \nI hope it helps! \n"; cout << "Pythag mode has been initialized. \nInput the variables in the form of A B C, with either B or C as 0 dependent\n on whether the hypotenuse is known. \n"; cout << "Now enter your variables in the form A B C Then press enter."; cin >> A >> B >> C; cout << "A = " << A <<"\n"; cout << "B = " << B <<"\n"; cout << "C = " << C <<"\n"; if (C == 0){ C1 = pow (A,2); C2 = pow (B,2); C3 = C1 + C2; C4 = sqrt (C3); cout << "Calculation in progess \nC = " << C4; } }
Only just learning about math functions. | http://cboard.cprogramming.com/cplusplus-programming/119752-cplusplus-pythag-calculator.html | CC-MAIN-2014-52 | refinedweb | 176 | 89.79 |
How do I check that multiple keys are in a dict in a single pass?
Well, you could do this:
if all (k in foo for k in ("foo","bar")): print "They're there!"...They're there!
if {"foo", "bar"} <= myDict.keys(): ...
If you're still on Python 2, you can do
if {"foo", "bar"} <= myDict.viewkeys(): ...
If you're still on a really old Python <= 2.6, you can call
set on the dict, but it'll iterate over the whole dict to build the set, and that's slow:
if set(("foo", "bar")) <= set(myDict): ...
Simple benchmarking rig for 3 of the alternatives.
Put in your own values for D and Q
from timeit import Timer setup='''from random import randint as R;d=dict((str(R(0,1000000)),R(0,1000000)) for i in range(D));q=dict((str(R(0,1000000)),R(0,1000000)) for i in range(Q));print("looking for %s items in %s"%(len(q),len(d)))'''Timer('set(q) <= set(d)','D=1000000;Q=100;'+setup).timeit(1)looking for 100 items in 6324990.28672504425048828#This one only works for Python3Timer('set(q) <= d.keys()','D=1000000;Q=100;'+setup).timeit(1)looking for 100 items in 6320842.5987625122070312e-05Timer('all(k in d for k in q)','D=1000000;Q=100;'+setup).timeit(1)looking for 100 items in 6322191.1920928955078125e-05 | https://codehunter.cc/a/python/how-do-i-check-that-multiple-keys-are-in-a-dict-in-a-single-pass | CC-MAIN-2022-21 | refinedweb | 231 | 76.52 |
ASP.NET presentation/business/data layer
I've been working with ASP.NET for several months now, but I'm a little uncomfortable with managing the separation between the business, data and presentation layers.
It *seems* like there's a clear divide between the .aspx page design (presentation), code behind (business logic) and data layers (data adapters & data sets). However, there's quite a bit of 'leakage' between all three layers.
For instance, the validators are in the page design, and manipulation of the interface is done in the code behind part.
What's the best way to keep everything as separate as possible?
Colm O'Connor
Friday, April 22, 2005
I don't there is a "best" way, but there are options. MSDN lists a few here:
Personally, I have found that using the standard ASP.NET data binding model makes it difficult to write good code. "Leaks" spring out all over the place and eventually the dam breaks.
Two pieces of advice:
1) You don't have to use the same model in every page. I have a lot of web apps that use MVC , PageController, _and_ FrontController.
2) Don't get caught up in the patterns. Do the simplest thing that works and make sure it is maintainable in the long run.
Jeff Mastry
Friday, April 22, 2005
I agree with Jeff to so the simplest thing.
Most of the time, for simple apps, I have the following:
An aspx with the markup and script
An aspx.cs with presentation logic (not business logic)
A 'logic' namespace containing classes to perform whatever it is the app does. The presentation logic tells the logic class what to do.
A 'data' namespace containing classes to serve data to the logic classes.
I find this provides organization without too much overhead. For simple apps, sometimes patterns are overkill.
Rick Childress (intellithought, inc.)
Friday, April 22, 2005
Hi, Colm.
Lower levels are expected to have no knowledge of higher ones, so it looks like you really have only 2 layers in your design: you keep business logic and presentation (code behind) together. This layer model, I believe, works best for smaller teams and applications in terms of development time (and probably maintenance as well).
In bigger teams/apps (and especially when UI and BL are developed by different teams) I find myself using the separation Rick described. Plus, business entities (typed datasets or classes for domain object) go to separate namespace.
namespace Product.Domain
Holds domain object data structure, object-level logic.
namespace Product.DataAccess
Holds DB communication code, ORM.
Uses Product.Domain.
namespace Product.BL
Holds business logic.
Uses Product.DataAccess, Product.Domain.
namespace Product.Test
Does BL unit testing.
Uses Product.BL, Product.Domain.
namespace Product.UI
Holds presentaion logic.
Uses Product.BL, Product.Domain.
In some cases it makes sense to introduce even more layers to keep big projects well organized or satisfy business requirements. Downside of it is that some logic (probably, the worst example is validation) becomes scattered and to some extent duplicated on multiple layers.
May I suggest a book, that Jeff Mastry recommends here:
It covers multi-layer architecture very well, it is nicely written and... and I just love it ;) (thanks a lot, Jeff, for the reference)
DK
Friday, April 22, 2005
The CodeBehind shouldn't be your business layer but the UI Controller (as in MVC).
Business objects (entities and process) should be put in separate classes from the page classes.
And I second the suggestion of reading the Microsoft Patterns and Practices. Specialy the topic on 'Designing Applications and Services', it gives some heuristics for assigning responsabilities to classes in the context of a three-layered app.
.NET Developer
Friday, April 22, 2005
This is a decision you need to take depending on the size/complexity of your project. If the project is small and there isn't much of "business layer" as such - you would probably mix presentation and business processing within the code-behind.
But a cleaner way is to create different classes for the business objects, expose them as API's for the code-behind to use. With some thinking, it is quite possible to make it clean so that the code behind focuses on "how to display" and the backend classes deal with the rest. A good way to go about it is
-Bus. Layers must provide API's only for the primitive values (i.e. values that have no UI attributes, but on which processing logic is based)
-code-behind should validate params, and pass them to the bus. API's.
-Bus. layers should only return primitive values and let the code-behind figure out how to render it.
You should take a look at the Microsoft Enterprise Library layers, it will simplify many of the "usual" things you do as part of design.
v
Saturday, April 23, 2005
>The CodeBehind shouldn't be your business layer
>but the UI Controller (as in MVC).
I'm starting to come around to this point.
>Business objects (entities and process) should be
>put in separate classes from the page classes.
If the logic is complex then it makes much more sense to put it in a class, yes. I think if the logic is simple, though, creating new classes for it adds unnecessary complexity.
Perhaps pragmatically speaking, a little leakage is necessary.
Colm O'Connor
Tuesday, April 26, 2005
Recent Topics
Fog Creek Home | https://discuss.fogcreek.com/dotnetquestions/default.asp?cmd=show&ixPost=6091&ixReplies=6 | CC-MAIN-2018-26 | refinedweb | 906 | 64.3 |
23 USC § 110 - Revenue aligned budget authority..)
References in Text
Section 251 of the Balanced Budget and Emergency Deficit Control Act of 1985, referred to in subsecs. (a)(1) and (2), is section 251 ofPub. L. 99–177, title II, Dec. 12, 1985, 99 Stat. 1063, which is classified to section 901 of Title 2, The Congress. Section 251 ofPub. L. 99–177was amended generally by Pub. L. 112–25, title I, § 101,Aug. 2, 2011, 125 Stat. 241, and as so amended, par. (1) no longer contains a subpar. (B).
Section 1102(h) of the SAFETEA–LU, referred to in subsec. (a)(2), is section 1102(h) ofPub. L. 109–59, which is set out as a note under section 104 of this title.
The SAFETEA–LU, referred to in subsecs. (a)(2) and (b)(1)(A), is Pub. L. 109–59, Aug. 10, 2005, 119 Stat. 1144, also known as the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users. Title VIII of the Act amended sections 900 and 901 of Title 2, The Congress, and enacted provisions set out as a note under section 901 of Title 2. For complete classification of this Act to the Code, see Short Title of 2005 Amendments note set out under section 101 of this title and Tables.
Title III of Public Law 105–178, referred to in subsec. (f), is title III of Pub. L. 105–178, June 9, 1998, 112 Stat. 338, as amended, known as the Federal Transit Act of 1998. Sections 3037 and 3038 of title III of Pub. L. 105–178are set out as notes under sections 5309 and 5310, respectively, of Title 49, Transportation. For complete classification of title III to the Code, see Short Title of 1998 Amendment note set out under section 5101 of Title 49 and Tables.
Codification
Prior Provisions
A prior section 110,Pub. L. 85–767, Aug. 27, 1958, 72 Stat. 894, related to project agreements, prior to repeal by Pub. L. 105–178, title I, § 1105(a),June 9, 1998, 112 Stat. 130.
Amendments
2005—Subsec. (a)(1). Pub. L. 109–59, § 1105(a), substituted “2007” for “2000” and inserted “and the succeeding fiscal year” after “allocate for such fiscal year”.
Subsec. (a)(2). Pub. L. 109–59, § 1105(b), substituted “2007” for “2000” and “October 15 of such” for “October 1 of the succeeding”, inserted “for such fiscal year and the succeeding fiscal year” after “Account)”, and inserted at end “No reduction under this paragraph and no reduction under section 1102(h), and no reduction under title VIII or any amendment made by title VIII, of the SAFETEA–LU shall be made for a fiscal year if, as of October 1 of such fiscal year the balance in the Highway Trust Fund (other than the Mass Transit Account) exceeds $6,000,000,000.”
Subsec. (b)(1)(A). Pub. L. 109–59, § 1105(c), (e), struck out “for” before “Federal-aid highway” and substituted “equity bonus” for “minimum guarantee” and “SAFETEA–LU” for “Transportation Equity Act for the 21st Century”.
Subsec. (c). Pub. L. 109–59, § 1105(d), inserted “the highway safety improvement program,” after “the surface transportation program,”.
1999—Subsec. (a)(2). Pub. L. 106–159, § 102(a)(2)(A), inserted “and the motor carrier safety grant program” after “relief)”.
Subsec. (b)(1)(A). Pub. L. 106–159, § 102(a)(2)(B), inserted “and the motor carrier safety grant program” after “program)”, substituted “title,” for “title and”, and inserted “, and subchapter I of chapter 311 of title 49” after “21st Century”.
Subsecs. (e) to (g). Pub. L. 106–113, which directed amendment of section 110 by adding subsecs. (e) to (g) at the end, was executed to this section to reflect the probable intent of Congress. See Codification note above.
1998—Subsec. (a). Pub. L. 105–178, § 1105(c)(1), as added by Pub. L. 105–206, § 9002(e), substituted “In general” for “Determination of amount” in heading and amended text of subsec. (a) generally. Prior to amendment, text read as follows: “On October 15 of fiscal year 1999, and each fiscal year thereafter, the Secretary shall allocate an amount of funds equal to the amount determined pursuant to section 251(b)(1)(B)(I)(cc) of the Balanced Budget and Emergency Deficit Control Act of 1985 (2 U.S.C. 901 (b)(2)(B)(I)(cc)).”
Subsec. (b)(2), (4). Pub. L. 105–178, § 1105(c)(2), as added by Pub. L. 105–206, § 9002(e), substituted “subsection (a)(1)” for “subsection (a)”.
Subsec. (c). Pub. L. 105–178, § 1105(c)(3), as added by Pub. L. 105–206, § 9002(e), substituted “the Interstate and National Highway System program” for “the Interstate Maintenance program, the National Highway System program”..
Special Rule
Pub. L. 109–59, title I, § 1105(f),Aug. 10, 2005, 119 Stat. 1166, provided that: “If the amount available pursuant to section 110 of title 23, United States Code, for fiscal year 2007 is greater than zero, the Secretary [of Transportation] shall—
“(1) determine the total amount necessary to increase each State’s rate of return (as determined under section 105 (b)(1)(A) of title 23, United States Code) to 92 percent, excluding amounts provided under this paragraph;
“(2) allocate to each State the lesser of—
“(A) the amount computed for that State under paragraph (1); or
“(B) an amount determined by multiplying the total amount calculated under section 110 of title 23, United States Code, for fiscal year 2007 by the ratio that—
“(i) the amount determined for such State under paragraph (1); bears to
“(ii) the total amount computed for all States in paragraph (1);. | http://www.law.cornell.edu/uscode/text/23/110 | CC-MAIN-2013-48 | refinedweb | 945 | 63.59 |
Here's my +1. It should also go our default page at the top ().
Thanks,
dims
--- giacomo <giacomo@apache.org> wrote:
> On Wed, 21 Nov 2001, Gianugo Rabellino wrote:
>
> > Stefano Mazzocchi wrote:
> >
> > >?
> >
> > I think that Carsten is the right man for the job and that he's right
> > when he says that this task should be in the "release manager" hands.
> >
> > As a side note, I must confess that I don't like that much the "About"
> > part, I think that it can and should be rephrased to reflect what Cocoon
> > really is.
> >
> > This is the original content:
> >
> > About: Cocoon is a pure Java publishing framework servlet that relies on
> > DOM, SAX, XML, and XSL to provide web content. Web content generation is
> > mostly based on HTML. Cocoons changes this view allowing content to be
> > written in XML, style on XSL stylesheets and logic in another XSL
> > stylesheet that converts the whole thing to the XSP namespace. This
> > allows the complete separation of the three layers used to create
> > content. The Cocoon framework creates web content by processing these
> > layers with the ability to create, for example, valid HTML as output.
> >
> > I see at least the following issues with it:
> >
> > 1. Cocoon is not a servlet, this is severely limiting the power of Cocoon.
> >
> > 2. "Web content generation is mostly based on HTML"??? 'nuff said...
> >
> > 3. "Cocoons changes this view allowing content to be written in XML,
> > style on XSL stylesheets and logic in another XSL stylesheet that
> > converts the whole thing to the XSP namespace". Ugly, to say the least.
> >
> > I think that such description is poor, wrong and troublesome. I think
> > also that it's worth the effort to spend some time investigating juicy
> > alternatives that might be attractive to newcomers and people willing to
> > know what Cocoon is. The same snippet should be placed in the project
> > home page in place of the negative presentation that it's present now.
> >
> > Here is a small suggestion for a change:
> >
> > "Apache Cocoon is a XML publishing framework leveraging the power of the
> > latest XML technologies. Designed for performance and scalability around
> > the SAX event model, Cocoon offers a flexible and powerful environment
> > based on the separation of concerns between content, logic and style and
> > on a central configuration file which drives the whole processing.
> > Cocoon is able to interact with most data sources (from filesystems to
> > RDBMS, from LDAP to native XML databases) and to serve the content to
> > different devices in different formats (HTML, WML, PDF, SVG, RTF just to
> > name a few). Cocoon can be used both as a servlet or in a standalone
> > fashion."
>
> Awesome, +1!!
>
> Giacomo
>
> >
> > Please, consider this a quick note written in five minutes, and feel
> > free to elaborate on it, steal from it or just throw it away. But
> > please, let's change that "About" snippet ASAP :)
> >
> > Ciao,
> >
> >
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: cocoon-dev-unsubscribe@xml.apache.org
> For additional commands, email: cocoon-dev-help@xml.apache.org
>
=====
Davanum Srinivas -
__________________________________________________
Do You Yahoo!?
Yahoo! GeoCities - quick and easy web site hosting, just $8.95/month.
---------------------------------------------------------------------
To unsubscribe, e-mail: cocoon-dev-unsubscribe@xml.apache.org
For additional commands, email: cocoon-dev-help@xml.apache.org | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200111.mbox/%3C20011121164842.26777.qmail@web12806.mail.yahoo.com%3E | CC-MAIN-2018-30 | refinedweb | 534 | 62.98 |
hey guys, I have 2 programs I am trying to figure out and the generic void pointers are really throwing me off. I realize the values are being swapped in the function but I don't understand how a=4 and b=7 are being sent in and then there in an array. When the numbers are passed in are they being converted to chars? If so what is 4 becoming?
Code:// Generic Pointers #include <iostream> #include <iomanip> using namespace std; void q(void *a, void *b, int n) { unsigned char *ca = (unsigned char *)a; unsigned char *cb = (unsigned char *)b; unsigned char c; for (int i = 0; i < n; i++) { c = ca[i]; ca[i] = cb[i]; cb[i] = c; } } int main() { int a = 4, b = 7; float x = 2.5f, y = 3.5f; q(&a, &b, sizeof(int)); q(&x, &y, sizeof(float)); cout << setprecision(1) << fixed; cout << "(a,b) = (" << a << ',' << b << ')' << endl; cout << "(x,y) = (" << x << ',' << y << ')' << endl; } | http://cboard.cprogramming.com/cplusplus-programming/141698-problem-generic-pointers.html | CC-MAIN-2016-07 | refinedweb | 161 | 75.95 |
Az..
For the Azure Cosmos DB core SQL API, we offer a JavaScript library which works in both Node.js and browser environments. This library can now take advantage of CORS support. There is no client-side configuration needed to use this feature. Now that the browser can talk directly to Cosmos DB, you can get even higher performance for web scenarios by avoiding the extra hop through a web app. In the sample we link to below, we’re able to directly listen for changes from Cosmos DB, rather than needing to set up an intermediate server using something like websockets.
import {CosmosClient} from "@azure/cosmos"; const client= new CosmosClient({ endpoint: "https://<your-cosmos-account>documents.azure.com", auth: {} /* Permission token or read only key. Avoid master Key */ }); const todos = client.database("TodoApp").container("Todo").items; todos.readAll().toArray() .then(({result})=>{ for(const item of result) { const e = document.createElement("div") e.innerText = item.text; document.body.prepend(e); } });
Here is a simple sample of getting TypeScript and Webpack working with the @azure/cosmos library to build an anonymous bulletin app with real-time updates across all the clients, powered entirely by Cosmos DB.
Enabling CORS
To enable CORS, you can use the portal or an ARM template. You can use a wildcard “*” to allow all origins or specify fully qualified domains which are separated by a comma, (i.e.,,,). Today, you cannot use wildcards as part of the domain name (aka https://*.mydomain.net).
To enable CORS in the portal, navigate to your Cosmos DB Account, and select the CORS option from the settings list. From there, you can specify your allowed origins and then select Save to update your account.
Using the @azure/cosmos library in a browser
Today, the @azure/cosmos only has the CommonJS version of the library shipped in its package. To use the library in the browser, you’ll need to use a tool like Rollup or Webpack to create a browser compatible library. Certain Node libraries need to have browser mocks provided for them. Below is an example of a webpack config file which has the necessary mock settings.
const path = require("path"); module.exports = { entry: "./src/index.ts", devtool: "inline-source-map", node: { net: "mock", tls: "mock" }, output: { filename: "bundle.js", path: path.resolve(__dirname, "dist") } }; Another thing to consider in the browser is that you don’t want to use your master key for most situations. It is best to use Resource Tokens or Readonly keys instead. You can refer to this sample on Github to get started understanding how Resource Tokens work and how you can use something like Azure Functions to authenticate and authorize your users before giving them a Resource Token. We will have more blogs soon about how to use these more advanced authentication patterns with your browser based applications.
To get started, take a look at our @azure/cosmos library on npm and start using it in your browser-based apps.! | https://azure.microsoft.com/it-it/blog/azure-cosmos-now-supports-cross-origin-resource-sharing-cors/ | CC-MAIN-2020-10 | refinedweb | 496 | 55.13 |
Ah, tricky.
We could have
imglyb check that
imagej has been initialized before it is used.
Ah, tricky.
We could have
imglyb check that
imagej has been initialized before it is used.
First of all thanks @thewtex for your efforts, this looks great!
@bnorthan imglyb expects certain environment variables in your environment to be set, one of them
PYJNIUS_JAR. I assume that the variables do not get set on install but only when you activate the environment. Re-activating:
conda install -c hanslovsky imglib2-imglyb source deactivate source activate /* conda environment */
I am no conda expert, though.
We could have imglyb check that imagej has been initialized before it is used.
That is not possible, unfortunately. ImageJ looks like a higher level wrapper around imglyb that sets up the environment appropriately before starting imglyb. imglyb is a lower compatibility layer between numpy and imglib2 and as such should not be aware of ImageJ.
There are efforts to make setting up the imglyb environment easier with run-time class loading and dependency resolution but I do not know about the current progress. Maybe @ctrueden can comment.
Hi @thewtex, @ctrueden, @hanslovsky
This notebook is my l attempt to run the YacuDecu GPU Deconvolution wrapper Op I’ve been working on in a notebook.
It works, but I ran into a few minor issues a long the way, which I’ve commented on in the notebook.
The biggest issue (see Cell 10) is that I wasn’t able to get
ij.op().run(...) to work. Some googling indicates other people have trouble calling functions with variable number of inputs with pyjnius. Although perhaps I messed up the call in some other way.
Any ideas??
I had similar issues with calling overloaded Java methods through PyJNIus. Other than writing helper methods that reduce this “ambiguity” (quotations mark because it is no real ambiguity), I have not found any solution yet.
Yes, agreed. We do not want
imglyb to have a package dependency on
imagej, for example. But, it would help newcomers understand what is happening if there was a check that ensures
PYJNIUS_JAR is set before it is used. And, if it is not set, then an informative error is thrown that 1) explains the issue and 2) suggests the most common approach to resolve it, whether that is
imagej.init,
scyjava, …
That is awesome, @bnorthan! We are approaching a beautiful place, which @ctrueden described, where we can mix open source image analysis technologies, whether they are Java-based, Python-based, C++ based, or JavaScript-based. The true power of open source is unleashed when efforts are combined. Now, I have access to an super-speedy deconvolution implementation
.
I ran into these issues, also.
It already throws an error that explains the issue:
Path to pyjnius.jar not defined! Use environment variable PYJNIUS_JAR to define it.
I could add a statement like
If you are using a framework that sets the environment for you (e.g. imagej) make sure to import and set up that framework before importing imglyb.
Frameworks that use imglyb (and initialize the environment) should explicitly state that the framework needs to be set up before any call to
import imglyb and demonstrate that in their usage examples, and make sure those usage examples work (imagej usage example does not import imglyb on PyPI ->). Use bold letters for that so everybody can see it.
Or, even better, add the imglyb package as a member to your own namespace, and use it consistently in your examples, so people will never have to run
import imglyb.
For imagej, this might look like this:
import imagej ij = imagej.init('/Applications/Fiji.app') imglyb = ij.imglyb
Good idea – that improvement should help many folks get started.
Cool!
ij currently does not have the
.imglyb attribute. @ctrueden do you think we should add this?
Hello!
First of all thank to @thewtex for this useful integration. I have been looking for something like this long time ago.
Sorry if the question is too obvious or it shouldn’t be place here, but it is related. I’m having problems with this first part.
I’m working on Windows with an environment in Anaconda2
!conda install --yes --prefix {sys.prefix} -c hanslovsky imglib2-imglyb !conda install --yes --prefix {sys.prefix} requests !{sys.executable} -m pip install imagej
Getting this error, however it says that All requested packages already installed.
Solving environment: ...working... failed UnsatisfiableError: The following specifications were found to be in conflict: - imglib2-imglyb - sphinx==1.6.3=py35heeac824_0 Use "conda info <package>" to see the dependencies for each package. Solving environment: ...working... done # All requested packages already installed. Requirement already satisfied: imagej in c:\users\User\anaconda2\envs\py35\lib\site-packages (0.2.0)
After that I load the modules and initialize Imagej (which I store in Dropbox) and I get this error:
import shutil import os import requests import itk import numpy as np from itkwidgets import view fiji_path = 'D:/Dropbox/fiji-win64/Fiji.app/' import imagej
ImportError: No module named 'jnius_config'
import imglyb
ImportError: No module named 'imglyb'
I tried to install it into the environment from the terminal but I get the same error for imglyb and Imagej seems to be installed.
(py35) C:\Users\User>pip install imagej Requirement already satisfied: imagej in c:\users\User\anaconda2\envs\py35\lib\site-packages (0.2.0)
Thanks!
Can you try installing imglyb with python 3.6?
Example command:
conda create -n imglyb -c hanslovsky python=3.6 imglib2-imglyb <any-other-dependencies>
I did not upload any (Windows) packages for python 3.5:
Update: Packages for Python 3.5 and Python 2 for other architectures (osx, Linux) are somewhat obsolete and I do not have plans to update packages for Python <= 3.5 for any architecture.
Thanks @hanslovsky, that worked!
But now I’m having another issue.
fiji_path = 'D:/Dropbox/fiji-win64/Fiji.app/' import imagej ij = imagej.init(fiji_path) import imglyb
please set the java enviroment manully by call set_java_env() command Java can not be found, it might not be correctly installed. Added 390 JARs to the Java classpath.
So I set it,
imagej.set_java_env("C:\\Program Files\\Java\\jre1.8.0_162\\lib\\rt.jar")
But still raises the same problem.
I don’t know much about Java so maybe I’m choosing the wrong java environment.
Thank you in advance!
I am not too familiar with Python path separator conventions but the forward slashes in
fiji_path might be a problem. You can try to use this instead:
fiji_path = os.path.normpath('D:/Dropbox/fiji-win64/Fiji.app/')
Also pinging @ctrueden for this imagej.py question.
Normally to take paths in Python from Ubuntu is with forward slashes “/” and from windows can be either forward or 2 backwards “\\”. I tried with:
fiji_path = 'D:/Dropbox/fiji-win64/Fiji.app/' fiji_path = 'D:\\Dropbox\\fiji-win64\\Fiji.app\\' fiji_path = os.path.normpath('D:/Dropbox/fiji-win64/Fiji.app/') fiji_path = os.path.normpath('D:\\Dropbox\\fiji-win64\\Fiji.app\\')
But any of them works.
Using os.path.join is a good practice that can help make the commands cross platform, work around string escape characters, etc.
Thanks @thewtex for os.path.join command.
However, I’m still having an issue with Java recognition.
fiji_path = os.path.join('D:/Dropbox/fiji-win64/Fiji.app/') import imagej fiji_java_env = os.path.join("D:/Dropbox/fiji-win64/Fiji.app/java/win64/jdk1.8.0_66") # Java from ImageJ fiji_java_env2 = os.path.join("C:/Program Files/Java/jre1.8.0_162") # Java from OS imagej.set_java_env(fiji_java_env) # or fiji_java_env2 ij = imagej.init(fiji_path)
please set the java enviroment manully by call set_java_env() command Java can not be found, it might not be correctly installed. Added 390 JARs to the Java classpath. --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-33-73c7f0b08f53> in <module>() 9 imagej.set_java_env(fiji_java_env2) 10 ---> 11 ij = imagej.init(fiji_path) 12 import imglyb ~\Anaconda2\envs\imglyb\lib\site-packages\imagej\imagej.py in init(ij_dir) 202 return 203 print("Added " + str(num_jars + 1) + " JARs to the Java classpath.") --> 204 import imglyb 205 from jnius import autoclass 206 ImageJ = autoclass('net.imagej.ImageJ') ~\Anaconda2\envs\imglyb\lib\site-packages\imglyb\__init__.py in <module>() 48 49 ---> 50 from .util import \ 51 to_imglib, \ 52 to_imglib_argb, \ ~\Anaconda2\envs\imglyb\lib\site-packages\imglyb\util.py in <module>() 5 from collections import defaultdict 6 ----> 7 from jnius import autoclass, PythonJavaClass, java_method 8 9 import numpy as np ~\Anaconda2\envs\imglyb\lib\site-packages\pyjnius-1.1.2.dev0-py3.6-win-amd64.egg\jnius\__init__.py in <module>() 10 __version__ = '1.1.2-dev' 11 ---> 12 from .jnius import * # noqa 13 from .reflect import * # noqa 14 ImportError: DLL load failed: The specified module could not be found.
Thanks in advance!
Can you successfully
import imglyb (without importing any imagej) in that conda environment?
No, I can’t. It raises the same error.
ImportError: DLL load failed: The specified module could not be found.
I tried reinstalling from the terminal but still doesn’t work.
activate imglyb (imglyb) C:\Users\User>conda install -c hanslovsky imglib2-imglyb
The issue is with the conda build of PyJNIus. I tried on a Windows laptop and I got the same error. I tried to build the conda package for PyJNIus again on Windows but it failed with the same error. Unfortunately, my windows knowledge and debugging skill is very limited. It already took me hours to get conda to (try and) build, so I cannot help here, unfortunately.
Until we find maintainers for the conda packages for Windows, I will not be able to provide PyJNIus on conda, unfortunately. The only other option would be to try and build PyJNIus yourself. My hope is that you could run imglyb then.
As stated in the other thread (Imglyb and PyJNIus conda package maintainers needed for Windows and OSX) it seems like I was able to fix the build issues on windows with the help of @jakirkham and @jjahanip
PyJNIus (and eventually imglyb) will be available on conda-forge (instead of my personal channel):
@malj390 I will also update my anaconda channel tonight (when I have access to a Windows machine) with an updated Windows conda package, which should hopefully fix your particular issue for now.
@malj390 I just updated the
imglib2-imglyb and
imglyb-examples packages on my conda channel
hanslovsky. I confirmed on a Windows 10 machine that it works. Please let me know if there are any issues. | http://forum.imagej.net/t/analysis-with-imagej-and-visualization-in-the-jupyter-notebook/11052/24 | CC-MAIN-2018-26 | refinedweb | 1,742 | 58.58 |
In this article, you will learn in depth about Python modules from their creation to the different ways of importing them to use different functions defined in them in your program.
Python Modules: Introduction
Python modules are nothing but files that consist of different statements and functions defined inside.
A module can define functions, classes, and variables. Modules help in organizing the code making it easier to use and understand. Modules provide reusability of the code.
Any file with extension
.py can be referred as a module and the functions defined inside the module can be used in another program by simply using the import statement.
Suppose we need a function to find factorial in many programs. So, instead of defining a function to find the factorial in each program, what we can do is create a module with a function to find factorial and use that function in every program by simply importing the module.
How to create a Python Module?
Creating a Python module is as simple as defining a function and saving it as a
.py file so that we can use this function later by just importing this module.
For example, let’s create a module
findfact.py which contains a function to find the factorial of any number and a function to check positive or negative number. We will use recursion to find factorial.
#findfact.py #function to find factorial def fact(n): """ Function to find factorial """ if n == 1: return 1 else: return (n * fact(n-1)) #function to check positive/Negative def check_num(a): """ Function to check positive/Negative number """ if a > 0: print (a ," is a positive number.") elif a == 0: print ("Number is zero.") else: print (a ," is negative number.")
Save this file as
findfact.py and there you have created your first ever Python module.
Now let’s see how to import this module in other programs and use the function defined in it.
How to import Python modules?
There are tons of standard modules that are included in our local machine inside the folder Lib in the directory we installed Python.
To import a Python module be it standard or user-defined, the keyword
import is used followed by the module name.
For example, let’s import the module
findfact.py and use the function to find factorial.
>>> import findfact
Importing a module is as simple as mentioned above.
Now to use the functions defined inside this module we use
(.) operator in following way.
>>> import findfact >>> findfact.fact(5) #calling factorial function inside module 120
Note: In Python, a module name is stored within a module available as the global
variable __name__ (note the double underscores).
>>> import findfact >>> findfact.__name__ 'findfact'
This was a simple demonstration to import modules in Python using the
import statement. There are a couple of other ways to import Python modules using different forms of
import statements.
Python from .. import statement
Imagine you have multiple functions defined inside a module like we have two functions defined inside
findfact.py.
Python from
.. import statement allows us to import particular function from the module. Here is the example.
>>> #importing only check_num function from findfact.py >>> from findfact import check_num >>> findfact.check_num(2) 2 is a positive number.
Import module as object
Python modules can be imported as objects. In such case, instead of
module_name.function_name( ) we use
object.function_name( ).
Here is the example.
>>> #importing findfact.py as f >>> import findfact as f >>> f.fact(5) 120 >>> f.check_num(0) Number is zero.
Import everything from module
Besides importing certain functions, we can import everything defined inside the module and use the functions directly in the program.
Here is the example.
>>> #importing every functions from findfact.py >>> from findfact import * >>> check_num(2) 2 is a positive number. >>> fact(5) 120
This imports all names except those beginning with an underscore (
_).
Importing everything from a module using
(*) might seem easy to use, but this can lead to duplicate methods and definitions. The methods name in the module and the main program may have same names. So it’s better to import certain functions or import module as an object.
Reloading a Python Module
Sometimes we may feel need to change the functions or statements in the module that we have already imported in our Python Shell.
We can make changes easily to the module but the changed won’t be effective in our program.
This is because Python imports modules once in a session. We will have to re-start the interpreter and import again for the changes to be effective.
Doesn’t seem good practice to re-start interpreter again just to make changes effective, right?
Thankfully, Python has
reload( ) function to address this problem. Here is the example.
>>> #for python version < 3.4 >>> import imp >>> import findfact >>> imp.reload(findfact) >>> #for Python version >3.3 >>> import importlib >>> import findfact >>> importlib.reload(findfact)
Note:
imp is depreciated from Python version 3.3.
importlib has replaced
imp in newer versions.
The dir( ) Function
The
dir( ) is a built-in Python function used to find the name defined in a Python module.
If the object is supplied as an argument to this function it is a module and will return a sorted list of functions, classes, and variables defined inside that module.
If the argument is not supplied, then the function will return the list of names from the current module.
For example, if we used
dir( ) function to find the names defined in the module we created
findfact.py, it will list the names defined inside as following.
This is the sorted list of names defined inside our module
findfact.py.
Notice the functions we defined
check_num and fact listed as well. Other names with underscores
(_) are the default attributes associated with the module.
Python module search path
When a module is imported, Interpreter first searches for a built-in module with that name. If it is not found in the list of built-in modules, then the interpreter searched for the module in the following locations in order.
- The current working directory.
- The directories of
PYTHONPATH. It is an environment variable with the list of directories.
- The standard installation path of Python – the installation dependent default. | http://www.trytoprogram.com/python-programming/python-modules | CC-MAIN-2019-30 | refinedweb | 1,041 | 59.19 |
#include <wx/mdi.h>
An MDI (Multiple Document Interface) parent frame is a window which can contain MDI child frames in its client area which emulates the full desktop.
MDI is a user-interface model in which all the window reside inside the single parent window as opposed to being separate from each other. It remains popular despite dire warnings from Microsoft itself (which popularized this model in the first model) that MDI is obsolete.
An MDI parent frame always has a wxMDIClientWindow associated with it, which is the parent for MDI child frames. In the simplest case, the client window takes up the entire parent frame area but it is also possible to resize it to be smaller in order to have other windows in the frame, a typical example is using a sidebar along one of the window edges.
The appearance of MDI applications differs between different ports. The classic MDI model, with child windows which can be independently moved, resized etc, is only available under MSW, which provides native support for it. In Mac ports, multiple top level windows are used for the MDI children too and the MDI parent frame itself is invisible, to accommodate the native look and feel requirements. In all the other ports, a tab-based MDI implementation (sometimes called TDI) is used and so at most one MDI child is visible at any moment (child frames are always maximized).
Although it is possible to have multiple MDI parent frames, a typical MDI application has a single MDI parent frame window inside which multiple MDI child frames, i.e. objects of class wxMDIChildFrame, can be created.
This class supports the following styles:
Constructor, creating the window.
Notice that if you override virtual OnCreateClient() method you shouldn't be using this constructor but the default constructor and Create() as otherwise your overridden method is never going to be called because of the usual C++ virtual call resolution rules.
Under wxMSW, the client window will automatically have a sunken border style when the active child is not maximized, and no border style when a child is maximized.
Destructor.
Destroys all child windows and menu bar if present.
Activates the MDI child following the currently active one.
The MDI children are maintained in an ordered list and this function switches to the next element in this list, wrapping around the end of it if the currently active child is the last one.
Activates the MDI child preceding the currently active one.
Arranges the MDI child windows in a cascade.
This method is only implemented in MSW MDI implementation and does nothing under the other platforms.
Used in two-step frame construction.
See wxMDIParentFrame() for further details.
Returns a pointer to the active MDI child, if there is one.
If there are any children at all this function returns a non-NULL pointer.
Returns a pointer to the client window.
Returns the current MDI Window menu.
Unless wxFRAME_NO_WINDOW_MENU style was used, a default menu listing all the currently active children and providing the usual operations (tile, cascade, ...) on them is created automatically by the library and this function can be used to retrieve it. Notice that the default menu can be replaced by calling SetWindowMenu().
This function is currently not available under macOS.
Returns whether the MDI implementation is tab-based.
Currently only the MSW port uses the real MDI. In Mac ports the usual SDI is used, as common under this platforms, and all the other ports use TDI implementation.
TDI-based MDI applications have different appearance and functionality (e.g. child frames can't be minimized and only one of them is visible at any given time) so the application may need to adapt its interface somewhat depending on the return value of this function.
Override this to return a different kind of client window.
If you override this function, you must create your parent frame in two stages, or your function will never be called, due to the way C++ treats virtual functions called from constructors. For example:
You might wish to derive from wxMD.
Replace the current MDI Window menu.
Ownership of the menu object passes to the frame when you call this function, i.e. the menu will be deleted by it when it's no longer needed (usually when the frame itself is deleted or when SetWindowMenu() is called again).
To remove the window completely, you can use the wxFRAME_NO_WINDOW_MENU window style but this function also allows doing it by passing NULL pointer as menu.
The menu may include the items with the following standard identifiers (but may use arbitrary text and help strings and bitmaps for them):
wxID_MDI_WINDOW_CASCADE
wxID_MDI_WINDOW_TILE_HORZ
wxID_MDI_WINDOW_TILE_VERT
wxID_MDI_WINDOW_ARRANGE_ICONS
wxID_MDI_WINDOW_PREV
wxID_MDI_WINDOW_NEXTAll of which are handled by wxMDIParentFrame itself. If any other commands are used in the menu, the derived frame should handle them.
This function is currently not available under macOS.
Tiles the MDI child windows either horizontally or vertically depending on whether orient is
wxHORIZONTAL or
wxVERTICAL.
This method is only implemented in MSW MDI implementation and does nothing under the other platforms. | https://docs.wxwidgets.org/3.1.5/classwx_m_d_i_parent_frame.html | CC-MAIN-2021-31 | refinedweb | 846 | 53.51 |
FPARSELN(3) BSD Programmer's Manual FPARSELN(3)
fparseln - return the next logical line from a stream
#include <stdio.h> #include <util.h> char * fparseln(FILE *stream, size_t *len, size_t *lineno, const char delim[3], int flags);
The fparseln() function returns a pointer to the next logical line from the stream referenced by stream. This string is null terminated and dynamically allocated on each invocation. It is the responsibility of the caller to free the pointer. By default, if a character is escaped, both it and the preceding escape character will be present in the returned string. Various flags alter this behaviour. The meaning of the arguments is as follows: stream The stream to read from. len If not NULL, the length of the string is stored in the memory lo- cation referenced by len. lineno If not NULL, the value of the memory location to which lineno references is incremented by the number of lines actually read from the file. delim Contains the escape, continuation, and comment characters. If a character is NUL then processing for that character is disabled. If NULL, all characters default to values specified below. The contents of delim is as follows: delim[0] The escape character, which defaults to '\', is used to remove any special meaning from the next character. delim[1] The continuation character, which defaults to '\', is used to indicate that the next line should be con- catenated with the current one if this character is the last character on the current line and is not escaped. delim[2] The comment character, which defaults to '#', if not escaped indicates the beginning of a comment that ex- tends until the end of the current line. flags If non-zero, alter the operation of fparseln(). The various flags, which may be OR'ed together, are: FPARSELN_UNESCCOMM Remove escape preceding an escaped comment. FPARSELN_UNESCCONT Remove escape preceding an escaped continua- tion. FPARSELN_UNESCESC Remove escape preceding an escaped escape. FPARSELN_UNESCREST Remove escape preceding any other character. FPARSELN_UNESCALL All of the above.
Upon successful completion a pointer to the parsed line is returned; oth- erwise, NULL is returned. Internally, the fparseln() function uses fgetln(3), so all error condi- tions that apply to fgetln(3) apply to fparseln() as well. In addition fparseln() may set errno to ENOMEM and return NULL if it runs out of memory.
fgetln(3) MirOS BSD #10-current December. | http://mirbsd.mirsolutions.de/htman/sparc/man3/fparseln.htm | crawl-003 | refinedweb | 397 | 56.55 |
neosemantics is a plugin that enables the use of RDF in Neo4j. RDF is a W3C standard model for data interchange. Some key features of n10s are:
Other features in NSMNTX include model mapping and inferencing on Neo4j graphs.
⇨ Check out the complete user manual with examples of use. ⇦
⇨ Blog on neosemantics (and more). ⇦
You can either download a prebuilt jar from the releases area or build it from the source. If you prefer to build, check the note below.
dbms.unmanaged_extension_classes=n10s.endpoint=/rdf
call dbms.procedures(). The list of procedures should include a number of them prefixed by n10s.
Checking that the logs show the following line on startup:
YYYY-MM-DD HH:MM:SS.000+0000 INFO Mounted unmanaged extension [n10s.endpoint] at [/rdf]
You can also test the extension is mounted by running
:get on the neo4j browser and this should return the following message
{"ping":"here!"}
CREATE CONSTRAINT n10s_unique_uri ON (r:Resource) ASSERT r.uri IS UNIQUE
Before any RDF import operation a
GraphConfig needs to be created. Here we define the way the RDF data is persisted in Neo4j.
We'll find things like
Most of them are the same (expect some changes) as in previous versions (see 3.5 manual for reference).
You can create a graph config with all the defaults like this:
call n10s.graphconfig.init()
Or customize it by passing a map with your options:
call n10s.graphconfig.init( { handleMultival: "ARRAY", multivalPropList: ["", ""], keepLangTag: true })
Once the Graph config is created we can import data from a url using
fetch:
call n10s.rdf.import.fetch( "", "Turtle")
Or pass it as a parameter using
inline:
with ' @prefix neo4voc: <> . @prefix neo4ind: <> . neo4ind:nsmntx3502 neo4voc:name "NSMNTX" ; a neo4voc:Neo4jPlugin ; neo4voc:runsOn neo4ind:neo4j355 . neo4ind:apoc3502 neo4voc:name "APOC" ; a neo4voc:Neo4jPlugin ; neo4voc:runsOn neo4ind:neo4j355 . neo4ind:graphql3502 neo4voc:name "Neo4j-GraphQL" ; a neo4voc:Neo4jPlugin ; neo4voc:runsOn neo4ind:neo4j355 . neo4ind:neo4j355 neo4voc:name "neo4j" ; a neo4voc:GraphPlatform , neo4voc:AwesomePlatform . ' as payload call n10s.rdf.import.inline( payload, "Turtle") yield terminationStatus, triplesLoaded, triplesParsed, namespaces return terminationStatus, triplesLoaded, triplesParsed, namespaces
It is possible to pass some request specific parameters like headerParams, commitSize, languageFilter... (also found in the 3.5 manual)
Same naming scheme applies...
call n10s.onto.import.fetch(...)
Use autocompletion to discover the different procedures.
Full documentation will be available soon. In the meantime, please share your feedback in the Neo4j community portal.
Thanks! | https://awesomeopensource.com/project/neo4j-labs/neosemantics | CC-MAIN-2020-50 | refinedweb | 399 | 51.95 |
Question:
Site drove her car the Fune Mall. As she reaches the parking lot she gets a token from the machine. But she’s not sure where she wants to park her car. She sees a display board that gives the status of he occupancy in the parking lot.
There are 4 basements: B1, B2, B3, and B4. The values of token numbers are displayed in the giant display board as follows. Help Sita to find the basement number to park her car.
- If token number is between 1 and 50 (both inclusive) then the output B1.
- If token number is between 51 and 100 (both inclusive) then the output B2.
- If token number is between 101 and 120 (both inclusive) the the output B3.
- If token number is between 121 and 180 (both inclusive) the the output B4.
- Otherwise, print “Invalid Input ” and terminate the program.
Input Format:
Input consist of 1 integer corresponding to N
Output Format:
Output consists of a string which is either “B1″or”B2″or”B3″or”B4”
Please do not use System,exit(0) to terminate the application
Sample Input 1:
120
Sample Output 1:
B3
Sample Input 1:
180
Sample Output 1:
B4
Code:
Main.java
import java.util.Scanner; public class fune_mall_vehicle_parking { public static void main(String[] args) { Scanner sc = new Scanner(System.in); int num = sc.nextInt(); if(num>=1 && num<=50){ System.out.println("B1"); } else if(num>=51 && num<=100){ System.out.println("B2"); } else if(num>=101 && num<=120){ System.out.println("B3"); } else if(num>=121 && num<=180){ System.out.println("B4"); } else{ System.out.println("Invalid Input"); } } }
Output:
125 B4
1 thought on “Java Program for Fune Mall Vehicle Parking”
It is appropriate time to make some plans for the future and it’s time to be happy.
I’ve read this post and if I may just I wish to counsel you
few fascinating things or tips. Maybe you can write subsequent articles referring
to this article. I desire to learn more things approximately it! | https://quizforexam.com/java-program-for-fune-mall-vehicle-parking/ | CC-MAIN-2021-21 | refinedweb | 341 | 67.96 |
Anyway, patronising over...
If your form uses method='GET' then parameters will be stored in
If your form uses method='POST' then input will be stored in a string in the same format, but fed into STDIN instead. The length is
You'll need to unescape the output, and also change '+' into ' '.
I have used it on occasion as I can't work out how to process multipart forms for file uploads.
As you said, personal preference, but either way works.
[perlmonks.org...]
use CGI qw/:cgi/;
I couldn't get the article about file uploads unfortunately, it 404ed.
Thanks again for the info.
Back to the point though, Plum please let us know how you get on.
You are right, but if you look at the source code of the CGI module (or other modules) I'll bet you can learn a thing or two and incorporate that into your own code. Like how to parse multi-valued form fields:
'SplitParam' => <<'END_OF_FUNC', sub SplitParam { my ($param) = @_; my (@params) = split ("\0", $param); return (wantarray? @params : $params[0]); } END_OF_FUNC
again thanks everyone | http://www.webmasterworld.com/perl/3312599.htm | CC-MAIN-2014-10 | refinedweb | 182 | 72.56 |
Python-based EDM analysis » History » Version 7
Version 7/23
(diff) -
Current version
Gleb Lukicov, 10/27/2019 05:56 PM
Python-based EDM analysis¶
Python is awesome! You can start using it for your analysis in two ways: 1) writing Python-based ROOT macros (pyROOT), or interact with ROOT files directly in a JupyerLab environment (JupyROOT) 2) Convert ROOT file into Numpy/HDF/etc. format and fit using Python tools directly (e.g. scipy-optimize).
JupyROOT/pyROOT¶
To install JupyterLab on your laptop follow the instructions here:
Make sure to go via the pip installation route (not anaconda!).
Many tutorials to get started with Jupyter are also available (e.g.)
Finally, test that you can import ROOT into your JupyteLab notebook (e.g.)
If you have ROOT installed via homebrew, you might need to import the path to ROOT explicitly i.e. (in the very first cell)
import sys # add brew ROOT (Mac) to the system path sys.path.append("/usr/local/Cellar/root/6.18.04/lib/root") import ROOT as r
Alternatively, add "export JUPYROOT=/usr/local/Cellar/root/6.18.04/lib/root" to your .bash_profile and
import sys # add brew ROOT (Mac) to the system path sys.path.append(os.environ["JUPYROOT"]) import ROOT as r
Then you only need to change the path to ROOT in a single place, if you decide to update ROOT on your laptop.
Blinding
To get started you need to set-up the official blinding tools on your laptop here:
Then go to the bottom of that page, and follow "Using the code in Python3". There are already examples provided for a 5 parameter fit by Kim!¶
Here is one for the tracker fit for an EDM-based analysis
That's it! The rest is the same as using a ROOT C macro, but being in a Jupyer environment one can execute things interactively and use native Python plotting tools. | https://cdcvs.fnal.gov/redmine/projects/gm2analyses/wiki/Python-based_EDM_analysis/7 | CC-MAIN-2020-29 | refinedweb | 321 | 53.81 |
Create Spark Project in Scala With Eclipse Without Maven
1. Objective – Spark Scala Project
This step by step tutorial will explain how to create a Spark project in Scala with Eclipse without Maven and how to submit the application after the creation of jar. This Guide also briefs about the installation of Scala plugin in eclipse and setup spark environment in eclipse. Learn how to configure development environment for developing Spark applications in Scala in this tutorial.
If you are completely new to Apache Spark, I recommend you to read this Apache Spark Introduction Guide.
Create Spark project in Scala with Eclipse without Maven
Stay updated with latest technology trends
Join DataFlair on Telegram!!
2. Steps to Create the Spark Project in Scala
To create Spark Project in Scala with Eclipse without Maven follow the steps given below-
i. Platform Used / Required
- Operating System: Windows / Linux / Mac
- Java: Oracle Java 7
- Scala: 2.11
- Eclipse: Eclipse Luna, Mars or later
ii. Install Eclipse plugin for Scala
Open Eclipse Marketplace (Help >> Eclipse Marketplace) and search for “scala ide”. Now install the Scala IDE. Alternatively, you can download Eclipse for Scala.
Install Eclipse plugin for Scala
iii. Create a New Spark Scala Project
To create a new Spark Scala project, click on File >> New >> Other
Create a New Spark Scala Project
Select Scala Project:
Select Scala Project
Supply Project Name:
Supply Project Name
iv. Create New Package
After creating the project, now create a new package.
Create New Package
Supply Package Name:
Supply Package Name
v. Create a New Scala Object
Now create a new Scala Object to develop Scala program for Spark application
Create a new Scala Object to develop Scala program for Spark application
Select Scala Object:
Select Scala Object
Supply Object Name:
Supply Object Name:
vi. New Scala Object in Editor
Scala object is ready now we can develop our Spark wordcount code in Scala-
New Scala Object in Editor to create Spark Application
vii. Copy below Spark Scala Wordcount Code in Editor
[php]
package com.dataflair.spark
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
object Wordcount {
def main(args: Array[String]) {
//Create conf object
val conf = new SparkConf()
.setAppName(“WordCount”)
//create spark context object
val sc = new SparkContext(conf)
//Check whether sufficient params are supplied
if (args.length < 2) {
println(“Usage: ScalaWordCount <input> <output>”)
System.exit(1)
}
//Read file and create RDD
val rawData = sc.textFile(args(0))
//convert the lines into words using flatMap operation
val words = rawData.flatMap(line => line.split(” “))
//count the individual words using map and reduceByKey operation
val wordCount = words.map(word => (word, 1)).reduceByKey(_ + _)
//Save the result
wordCount.saveAsTextFile(args(1))
//stop the spark context
sc.stop
}
}[/php]
Spark Scala WordCount Code in Editor
You will see lots of error due to missing libraries.
viii. Add Spark Libraries
Configure Spark environment in Eclipse: Right click on project name >> build path >> Configure Build Path
Configure Spark environment in Eclipse
Add the External Jars:
Add the External Jars
ix. Select Spark Jars and insert
You should have spark setup available in developing environment, it will be needed for spark libraries.
Select Spark Jars and insert
Go to “Spark-Home >> jars” and select all the jars:
select all the jars
Import the selected jar:
Import the selected jar
x. Spark Scala Word Count Program
After importing the libraries all the errors will be removed.
Spark WordCount Program in Scala
We have successfully created Spark environment in Eclipse and developed Spark Scala program. Now let’s deploy the Spark job on Linux, before deploying/running the application you must have Spark Installed.
Follow this links to install Apache Spark on single node cluster or on the multi-node cluster.
xi. Create the Spark Scala Program Jar File
Before running created Spark word count application we have to create a jar file. Right click on project >> export
Create the Spark Scala Program Jar File
Select Jar-file Option to Export:
Select Jar-file Option to Export
Create the Jar file:
Create the Jar file
The jar file for the Spark Scala application has been created, now we need to run it.
xii. Go to Spark Home Directory
Login to Linux and open terminal. To run Spark Scala application we will be using Ubuntu Linux. Copy the jar file to Ubuntu and create one text file, which we will use as input for Spark Scala wordcount job.
cd spark home directory
xiii. Submit Spark Application using spark-submit script
To submit the Spark application using below command:
bin/spark-submit --class <Qualified-Class-Name> --master <Master> <Path-Of-Jar-File> <Input-Path> <Output-Path>
bin/spark-submit --class com.dataflair.spark.Wordcount --master local ../sparkJob.jar ../wc-data output
Let’s understand above command:
- bin/spark-submit: To submit Spark Application
- –class: To specify the class name to execute
- –master: Master (local / <Spark-URI> / yarn)
- <Jar-Path>: The jar file of application
- <Input-Path>: Location from where input data will be read
- <Output-Path>: Location where Spark application will write output
Submit Spark Application using spark-submit script
Submit Spark Application using spark-submit script
The application has been completed successfully, now browse the result.
xiv. Browse the result
Browse the output directory and open the file with name part-xxxxx which contains the output of the application.
spark wordcount job success
We have successfully created Spark project in Scala and deployed on Ubuntu.
To play with Spark First learn RDD, DataFrame, DataSet in Apache Spark and then refer this Spark shell commands tutorial to practically implements Spark functionalities.
See Also-
What I like about this article is that it has not missed a step (in my eyes, at least). If hand-holding the hesitant starters in the world of Apache Spark, has been the objective, then you have achieved it. Good job!
Hi Nirmalya,
Thanks for such nice words. The feedback comes from our loyal readers, build our confidence and inspire us to bring you even better content.
We hope you are exploring other Spark blogs as well.
Regards,
Data-Flair
I’m gonna say to my little brother, that he should also visit this web site on regular basis to obtain updates from hottest reports.
Hii Becky,
Thanks for visiting Data Flair. We are happy to hit the mark for you and your brother. It seems that you liked and understood how to create the Spark Project in Scala. You should share this Spark Knowledge but not only with your brother, with all who wants to explore the career in Spark. Our Site Data Flair is continuously working for the sake of our loyal readers.
I am not able to create jar. it says
JAR creation failed. See details for additional information.
Class files on classpath not found or not accessible for: ‘SparkApplication/src/com/spark/employee/Maxwages.scala’
Dear Udit,
It seems there is compilation error in your program. Spark is not able to create class file when compilation error is there.
Excellent page to setup the Scala + Spark + Eclipse
Gr8 Work
Glad to see your review on Spark Project in Scala. Our team is continuously working for readers like you. You must read more Spark blogs on our website and let us know if the content helps you.
getting error while running with spark-submit
exception in thread main java.lang.nosuchmethodexception
Make sure you use scala version compatible with your spark version.
Great Post
Dileep thanks a lot, for taking time to post the review on Spark Scala Project. Your words are valuable to us. Update yourself with our new Spark blogs. Keep learning, keep sharing.
Great Post described in simple steps
Hii Dileep
Thank you for sharing such a positive experience. Keep learning and keep visiting Data Flair.
Maven should be given the preference as it is the preferred way.
Hii Mohit,
Thank you for catching this Maven query for Spark Scala Project.
Maven is quite a popular way, will post another article detailing the steps: how to create a Spark Scala project using Maven. Till then keep visiting Data Flair and keep learning.
Great! helped a lot.
Thanks Nikita for such nice words. We glad to see that our explanation of the process of creating a Spark Scala Project helps you. We want you to learn more about Spark. Here we are providing the best Spark guide for you:
This Spark scala Project process is very helpful and very detailed.
Thanks very much.
Aliaa, Thank you so much for taking the time to write this excellent review. We regularly post the simply written helpful articles on Spark. You can select the spark category for more on Spark. Good luck with the site.
clearly explained how to create a jar in eclipse, if possible pls explain in intellij as well
Hii Venu,
Glad you understand our explained tutorial of creating Jar in Spark Scala Project. Soon, we will post another tutorial about the project creation in IntelliJ for a Spark & Scala project. Get notified with our new blogs, keep checking the site. You can also check our new blog, hope this helps to clear your Spark Concept
Do the same commands apply when running spark-shell on Windows cmd? If not how do I run this jar file using Spark-submit on Windows cmd?
Yes Vaibhav, all the mentioned steps/commands in Spark Scala Project work on Windows. You can set up the complete Spark Scala environment on Windows. If you want to explore more in Spark check this link:
after importing the libraries stiil errors are there….please help
Pravin, please post the error, will look into it.
Excellent post.
Just a small correction for creation of conf object.
val conf = new SparkConf().setAppName(“WordCount”).setMaster(“local[*]”)
Hello sir
Your this tutorial is very good but in last step we past jar file in spark home dir then use spark submit command for run the program. Here i am facing the problem. sir plz can you this make this tutorial on video. By video we can understand where we doing something wrong.
Please sir help the student. You Blog is very good
hello sir
Your tutorials are very good.This is also a good tutorial. When i was creating project in scala without using maven in the last step where we using jar file does not conf. So please sir can you convert this tutorial in video in this way we can more understand how to conf spark scala through eclipse to ubuntu
Thanks if you can help us
Hii Alka, you commented on this Spark Scala Project, so we are grateful for your loyal feedback telling us that our current approach on Spark requires a video tutorial to explain the things to our readers. We will work on it so that you can get an easy understanding of Spark Scala. I can see that you are very curious to learn more about Spark, just follow our site for best Spark Tutorials.
Used below command for execution:
spark-submit –class WordCount –master local /vinyas/Jars/WordCount.jar
But getting error:
java.lang.ClassNotFoundException: WordCount
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
What might be the reason?
Hii Vinyas, check the below solutions your error on Spark Scala Project
There could be the following 2 issues:
Either you are mentioning the wrong class name, make sure to mention the fully qualified class name. Or there are compile-time issues in your program. Check this and still if you are getting any error do let us know.
Hi,
I have also got the same error. On compiling in eclipse i am getting below error:
Error: Main method not found in class com.dataflair.spark.Wordcount, please define the main method as:
public static void main(String[] args)
or a JavaFX application class must extend javafx.application.Application
The shown program is in Scala. public static void main(String[] args) is Java style of coding.
If you are getting the same error, look for following issues:
– Use Scala 2.11 (by default Scala 2.12 is shipped with Eclipse)
– Check the package name and class name
– look at the problems tab in Eclipse (next to console)
This one is great. Could you please explain how to run the above spark program in hadoop? The same .jar file made, not using spark-shell.
Rajat Saha, thanks a lot for such loyal comment and good words.
You can run the Spark program on Hadoop, you need to mention the input and output path of HDFS URI and mention the master as yarn. If you are satisfied with this, leave a remark.
Hi, I did everything like above but get the next problem: Failed to load com.dataflair.spark.Wordcount
I already tried to clean and rebuild the project but still doesn’t working. Where can the Problem lie? Thanks in advence
Hi Sir ,
its great tutorial which helped me in learning spark -scala-eclipse as a beginner.
I am getting an error as output directory already exists.
can you please help me with that.
spark-submit –class com.smruti.scala.Wordcount –master local C:\Users\irhake\Desktop\ApacheSpark_POC\jarFol
der\SparkJob.jar C:\Users\irhake\Desktop\ApacheSpark_POC\sample.txt C:\Users\irhake\Desktop\ApacheSpark_POC\output
“main” org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory file:/C:/Users/irhake/Deskto
utput/op.txt already exists
Please supply a new directory name (path).
For each application / job we need to supply a new output directory which must not exist, alternatively, you can delete the existing directory.
Hi, Thank you for the nice tutorial, I am able to make the sample working perfectly.
But the completed application shown in master:8080 is always zero. How to make them working?
Hi JN,
Thanks for commenting on the Spark Scala project. I think you have installed Spark Standalone Mode and running the application on Local Mode. It is recommended to run the application in Standalone mode to listed application on –master spark://IP-ADDRESS:PORT.
Hope, it will help you!
Regards,
DataFlair
Hi DataFlair, Thanks for the fast reply . I tried to ran as you told me, but it is showing FileNotFound Excception.
I executed like this –> spark-submit –class com.dataflair.spark.Wordcount –master spark://172.31.38.56:7077 test1.jar /home/ubuntu/scalaapp/wc-data output
I am getting this error–> Caused by: java.io.FileNotFoundException: File file:/home/ubuntu/scalaapp/wc-data does not exist
wc-data is available in the specified path and the created output folder is empty
Hello JN
We have tested, it’s working fine:
spark-submit –master spark://ubuntu:7077 –class com.dataflair.spark.Wordcount ../sparkwc.jar /home/dataflair/inp /home/dataflair/out
Your error is clearly saying, the file wc-data doesn’t exist, please give the correct path.
Hi,
I followed the example step-by-step and it was nicely written. New to spark and having some issues if you can help,
I am doing this in windows, and I installed Spark — I can start spark-shell, and it is giving me as below:
Spark context Web UI available at ip-address : port,
When I’m doing as below for above example:
spark-submit –class com.dataflair.spark.Wordcount –master spark: //: SparkJob.jar wc-data.txt output
It is giving me as below:
WARN NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
log4j:WARN No appenders could be found for logger (org.apache.spark.deploy.SparkSubmit$$anon$2).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See for more info.
no output file is getting generated.
I started spark-shell in different command prompt window, it starts spark and ends with scala shell.
and I opened separate cmd prompt to run spark-submit.
Please if you can help. thanks
Hi Ravi,
The above example has been tested on Ubuntu, I recommend you to run on Ubuntu.
BTW, it’s supported on Windows as well, if no output file is generated, it seems there is some issue, please scan the logs and post the same.
Thank you for the article.Your content has been helpful in many cases. | https://data-flair.training/blogs/create-spark-scala-project/ | CC-MAIN-2021-31 | refinedweb | 2,702 | 65.52 |
Montgomery Multiplication
July 29, 2014
We will work with 64-bit unsigned integers in C rather than unlimited-precision integers in Scheme, because the use of unlimited-precision integers makes the exercise trivial. We are following the text and code of Henry Warren’s description of Montgomery multiplication. We assume that long long integers are 64 bits, and select r = 264, which isn’t representable in a 64-bit integer, though that doesn’t matter:
typedef unsigned long long ull;
typedef signed long long sll;
We begin with the gcd calculation. We studied the binary gcd algorithm in a previous exercise, but we can simplify and speed that algorithm because we know that r is a power of 2 and m is odd, so there are no powers of 2 to eliminate. Note that r is half what it “should” be, so this function computes
rinv and
mprime such that (2r) *
rinv − m *
mprime = 1 rather than the expected r *
rinv + m *
mprime = 1. Here’s the code:
void gcd(ull r, ull m, ull *r_inv, ull *m_prime) {
ull r_saved, m_saved, u, v;
r_saved = r; m_saved = m; u = 1; v = 0;
while (r > 0) {
r = r >> 1;
if ((u & 1) == 0) {
u = u >> 1; v = v >> 1; }
else {
u = ((u ^ m_saved) >> 1) + (u & m_saved);
v = (v >> 1) + r_saved; } }
*r_inv = u; *m_prime = v;
return; }
Montgomery multiplication needs the 128-bit product of two 64-bit numbers. We keep the high bits (most significant bits) and the low bits (least significant bits) of the product separate, using the grade-school multiplication algorithm on 32-bit “digits” in an algorithm given at Knuth 4.3.1 M:
void mul_ull(ull x, ull y, ull *xy_hi, ull *xy_lo) {
ull x0, x1, y0, y1, t, p0, p1, p2;
x1 = x >> 32; x0 = x & 0xFFFFFFFF;
y1 = y >> 32; y0 = y & 0xFFFFFFFF;
t = x0 * y0;
p0 = t & 0xFFFFFFFF;
t = x1 * y0 + (t >> 32);
p1 = t & 0xFFFFFFFF;
p2 = t >> 32;
t = x0 * y1 + p1;
*xy_hi = x1 * y1 + p2 + (t >> 32);
return; }
The modulus operator divides (x || y) by the modulus m, returning the remainder. This function assumes x < m, which is enforced by the
mulmod_ull function given below. When the loop finished, y is the quotient and x is the remainder:
ull mod_ull(ull x, ull y, ull m) {
sll i, t;
for (i = 1; i <= 64; i++) {
t = (sll) x >> 63;
x = (x << 1) | (y >> 63);
y = y << 1;
if ((x | t) >= m) {
x = x - m; y = y + 1; } }
return x; }
We are ready now for the Montgomery step, which takes ā, b̄, m and m′ and returns a b r mod m. The code follows the three-step process described above:
ull mont_mul(ull a_bar, ull b_bar, ull m, ull m_prime) {
ull t_hi, t_lo, tm, tmm_hi, tmm_lo, u_hi, u_lo, ov;
mul_ull(a_bar, b_bar, &t_hi, &t_lo);
tm = t_lo * m_prime;
mul_ull(tm, m, &tmm_hi, &tmm_lo);
u_lo = t_lo + tmm_lo; u_hi = t_hi + tmm_hi;
if (u_lo < t_lo) u_hi = u_hi + 1;
ov = (u_hi < t_hi) | ((u_hi == t_hi) & (u_lo < t_lo));
u_lo = u_hi; u_hi = 0;
u_lo = u_lo - (m & -(ov | (u_lo >= m)));
return u_lo; }
Finally we are ready to specify the modular multiplication, following the four-step algorithm given above; we return 1 if the operation succeeded or 0 if its arguments are out of bounds, and return the product in the fourth argument:
int mulmod_ull(ull a, ull b, ull m, ull *p) {
ull half_r = 0x8000000000000000LL;
ull r_inv, m_prime, a_bar, b_bar, p_hi, p_lo;
if ((m & 1) == 0) { return 0; } /* must be odd */
if (m <= a) { return 0; } /* must be less than m */
if (m <= b) { return 0; } /* must be less than m */
gcd(half_r, m, &r_inv, &m_prime);
a_bar = mod_ull(a, 0, m); b_bar = mod_ull(b, 0, m);
*p = mont_mul(a_bar, b_bar, m, m_prime);
mul_ull(*p, r_inv, &p_hi, &p_lo);
*p = mod_ull(p_hi, p_lo, m);
return 1; }
Now the payoff for all our hard work. Given modular multiplication, it is easy to perform modular exponentiation using the square-and-multiply algorithm. The expensive calculations of r_inv and m_prime and the translations to and from Montgomery space are performed only once, and only Montgomery steps are taken inside the loop:
int powmod_ull(ull x, ull y, ull m, ull *x_to_y) {
ull half_r = 0x8000000000000000LL;
ull r_inv, m_prime, x_bar, hi, lo;
if ((m & 1) == 0) { return 0; } /* must be odd */
if (m <= x) { return 0; } /* must be less than m */
gcd(half_r, m, &r_inv, &m_prime);
x_bar = mod_ull(x, 0, m);
*x_to_y = mod_ull(1, 0, m);
while (y > 0) {
if ((y & 1) == 1) {
*x_to_y = mont_mul(*x_to_y, x_bar, m, m_prime);
y = y - 1; }
else {
x_bar = mont_mul(x_bar, x_bar, m, m_prime);
y = y / 2; } }
mul_ull(*x_to_y, r_inv, &hi, &lo);
*x_to_y = mod_ull(hi, lo, m);
return 1; }
Here is an example:
#include <stdio.h/gt;
int main(void) {
ull z; int t;
ull x = 34721908534901LL;
ull y = 72193687003295LL;
ull m = 9412345678901731LL;
t = mulmod_ull(x, y, m, &z);
if (t) { printf("%lld\n", z); }
else { printf("ERROR"); }
t = powmod_ull(x, y, m, &z);
if (t) { printf("%lld\n", z); }
else { printf("ERROR"); }
return 0; }
If the program is stored in file
monty.c, compiling and running the program looks like this; both results shown below are confirmed by Wolfram|Alpha:
> cc -o monty monty.c
> chmod +x monty
> ./monty
3751384291706939
7001634529421238
You can run the program at.
Re bar: One theory seems to be to put a code called a combining
diacritic in Unicode after the character. A bar above would be
̄. The following characters looked somewhat usable
locally in w3m and firefox: ā, b̄, c̄,
d̄, ē.
(The above text is copied and pasted from my local test document source code. In the comment editor, at the moment of this writing, I see the hex codes as HTML character entities or whatever they are called.)
Hm. The code entity above should have been ampersand hash x 0304 semicolon. The software interpreted it as the actual code. The sample characters look to me as they should. | http://programmingpraxis.com/2014/07/29/montgomery-multiplication/2/ | CC-MAIN-2014-52 | refinedweb | 981 | 50.74 |
Details
Description
Add the ability to include screen widgets, form widgets, menu widgets, and simple methods in a single XML file. This approach could be used in situations where the widgets share a logical grouping - so they can be kept in one place.
Issue Links
- depends upon
OFBIZ-6978 Refactor Quote Screen to use Compound Screen Widget feature
- Closed
- is depended upon by
OFBIZ-6990 Add Example for Compound Screen Widget
- Closed
Activity
- All
- Work Log
- History
- Activity
- Transitions
Updated patch file. XML parsing errors have been fixed with new schemas. Modifying the existing schemas didn't work because that caused parsing errors on existing screens (~1MB log entries per request).
This patch also adds the ability to include a controller <site-conf> element in the compound widget file - an idea suggested by David.
It all works and it is ready to commit if the community agrees it is a worthwhile improvement.
I would prefer a more descriptive name than "root" for the root element. What about "widgets" or "ofbiz-widgets"?
Shouldn't there be a schema for the root element, whatever name it has, specifying the valid name for the root element and valid subelements within it? It would be a very simple schema compared to the ones for screen widgets etc, but should be there for completeness, I think. Any reason why not?
The root element could be called anything. "widgets" sounds good to me.
I created a schema for the root element, but I couldn't get it to work. The file would parse fine, but in Eclipse all of the elements were shifted down one level - meaning auto-complete would insert values from child elements, not the selected element. Maybe it's just a problem with Eclipse - I don't know.
If anyone can improve upon the patch, they are welcome to do so.
Is it easy for you to post a screenshot showing the problem? I don't use Eclipse, so I can't see what you're seeing.
Here is the root schema I tried:
<xs:schema xmlns: <xs:element <xs:complexType> <xs:all <xs:element <xs:element <xs:element <xs:element <xs:element <xs:element </xs:all> </xs:complexType> </xs:element> </xs:schema>
In Eclipse, you can right-click on an XML element and a popup menu appears with valid selections for that element - based on the schema. I can't provide a screen shot because now the popup menu doesn't show any suggestions from the schema. I'm almost certain it's a bug in Eclipse's XML editor.
The implementation basically worked, but the Eclipse editor wasn't working well with the changes.
The patch is very outdated. It will need to be updated.
I think this is a very interesting feature and we should try to commit it before the patch becomes unusable. So I have created a new patch w/o the simple-methods-v2.xsd because it now already exists
BTW I think we should remove
<system systemId="" uri="simple-methods.xsd" />
from minilang-catalog.xml. It's not used anymore.
Mmm, it seems more work should be done to create an updated patch. I'll have a look...
Here a new and last patch, nothing tested for now. I had to do some changes manually in SimpleMethod.java but they sounds OK
I had to do some changes manually in ConfigXMLReader.java as well. I did not include the simple-methods-v2.xsd file, we have one already and they don't compare. So maybe things are missing in the new one
This feature is useful. I am interested to refactor some screens like EditQuote with it.
That would be great, but I'm not sure of the status currently. I think at least it misses some lines in (now in trunk) simple-methods.xsd and even if we get the same than Adrian did, he said he crossed issues with auto-completion in Eclipse. Anyway trying will enlight us for sure!
Any problem if I refactor the Quote Screen to use the compound feature?
Also thinking of creating sub tasks to make it easier to review the smaller patches.
If you are sure about what you do, why not indeed.
But remember that the current simple-methods.xsd must miss the elements which were initially provided by the simple-methods-v2.xsd in the CompoundWidgetFiles.patch from 03/Jan/11 02:27. I try to find them but gave up. I think It should not be that hard to find and put them in our current simple-methods.xsd, but I did not :/.
Regarding splitting patches it's easier, for me at least, to have only 1 patch. Or if it's really hard to review several smaller patches in only one Jira issue. This at least if it makes sense from a functional perspective. Else better to create several Jira issues (seems not the case here).
It's fun to notice that by splitting issues and patches we would work contrary to what the compound screen feature try to avoid (opening, searcing in more files, etc.)
And last but not least, Adrian said that, even with his complete changes, the Eclipse auto-completion feature was not working properly. So It's interesting but I feel there are more work in your plate that maybe you envision
Thanks to try anyway!
Thanks for the pointers.
I think the consensus is to work things out by moving forward with the Quote Screen refactoring.
So a new issue at
OFBIZ-6978.
From what I have seen I think I will keep this issue open for now and we will close it once the work on
OFBIZ-6978 will be done, thanks!
The change for ConfigXMLReader.java is missing in the current patch file.
I found that it is ok to use the original xsd files. So the template can be something like
<root xmlns: <site-conf <!-- Insert controller entries here --> </site-conf> <simple-methods <!-- Insert simple methods here --> </simple-methods> <menus xmlns="" xsi: <!-- Insert menu widgets here --> </menus> <forms xmlns="" xsi: <!-- Insert form widgets here --> </forms> <screens xmlns="" xsi: <!-- Insert screen widgets here --> </screens> </root>
Now I think the Example would be a better place to start with the Compound Widget Screen feature. But what should I do with
OFBIZ-6978?
Actually I wanted to get further than what Adrian did but I did not. So you can find the change in his latest patch.
The attached patch demonstrates the concept.
There is an issue preventing it from being committed: The XML validator reports errors because the schemas don't contain a target namespace. If I add a target namespace to the schemas, the validator reports errors because it was expecting the target namespace to be empty. I don't know how to solve that problem and I'm hoping an XML expert can figure it out. | https://issues.apache.org/jira/browse/OFBIZ-4090 | CC-MAIN-2017-47 | refinedweb | 1,150 | 75.2 |
Brandon Fosdick wrote:
>
So maybe I should explain the code a bit. First off, its all C++. Now we wait a sec for the
C folks to run away screaming...ok, good.
All of the database magic happens in class ServerConfig. It's an enourmous mess of a class.
Cleaning it up is on the ToDo list, after switching to prepared statements. (really, I have
a list, it's in OmniOutliner on my pbook) As the name implies, this class is created by the
usual create server config handler. All of the config directive handlers call set_X() methods
of ServerConfig.
repository.cc is what you think it is if you're familiar with mod_dav, but it mostly passes
stuff off to class resource_t, which then passes database requests to ServerConfig.
For those not familiar with mod_dav, every request starts with a call to get_resource(), which
is responsible for creating a structure that represents each DAV resource involved in the
request. In this case that structure is resource_t.
deliver() is where the downloading happens. It creates a bucket brigade and dumps all of the
blocks from the database into the brigade. I have no idea if I did that right.
hooks.cc is where all of the hooks are registered. Nothing fancy there.
Locks, properties, etc are handled by the appropriately named files. Most of the handlers
pass through to a class method of some sort. That sounds like it would be slow, but almost
all of the class methods are inline, so its no worse than C. Most of these classes are fairly
straightforward. Locks are poorly implemented ATM.
stream_t in stream.h is where the upload magic happens. It serves as a buffer between mod_dav
and the database. Incoming bytes are buffered into 64K blocks before being written to MySQL.
This would be a great place to use prepared statements, but I haven't gotten around to it.
When I started this project I had never used them outside of PHP, and my focus was on getting
something working as quickly as possible.
apr_pool_base.h has a base class and a new() operator that helps with using pools. I can't
tell if the destructors are being called properly, but I haven't had any problems with memory
leaks, so maybe its working.
The sharp eyed will notice that I have a copy of mod_dav.h in the source. That's because the
official copy uses the namespace token in two places, and therefore barfs in C++. I sent an
email to the list about this several months ago and didn't get a response. So, I use a modified
copy.
I should point out that the usernames are constrained to be positive integers, but only because
that's what Terran Bank needs. At one point I was maintaining a fork that allowed real usernames,
but it fell by the side. Mainly because it was pointless, I think the only difference was
two functions in resource_t. Some day I'll add a compile-time config option.
That's the high level overview. Let me know if you want more. | http://mail-archives.apache.org/mod_mbox/httpd-dev/200510.mbox/%3C4363025B.7080502@bfoz.net%3E | CC-MAIN-2015-22 | refinedweb | 520 | 75.61 |
Visual Studio 2013 Released 198
jones_supa writes "Final releases of Visual Studio 2013, .NET 4.5.1, and Team Foundation Server 2013 are now available. As part of the new release, the C++ engine implements variadic templates, delegating constructors, non-static data member initializers, uniform initialization, and 'using' aliases. The editor has seen new features, C++ improvements and performance optimizations. Support for Windows 8.1 has been enhanced and the new XAML UI Responsiveness tool and Profile Guided Optimization help to analyze responsiveness in Windows Store apps. Graphics debugging has been furthered to have better C++ AMP tools and a new remote debugger (x86, x64, ARM). As before, MSDN and DreamSpark subscribers can obtain the releases from the respective channels, and the Express edition is available zero cost for all."
Who cares? (Score:2, Insightful)
Re: Who cares? (Score:3, Informative)
Both VS and TFS 2012 were massive improvements over the 2010 editions for what its worth. 2013 seems more iterative and superfluous.
Re: Who cares? (Score:4, Insightful)
I disagree
VS2012 was massive improvement in terms of features. Unfortunately, those features consumed A LOT of resources, to the point it was completely unusable on my computer (on start, after a few minutes, VS2012 would show a message saying "your computer is too slow for VS2012").
VS2013 is as feature rich (actually, more) than VS2012 *and* it consumes LESS resources than 2010. I have been using it since the Preview (with ReSharper and a few more plugins) and it's great.
Re: (Score:2)
I've been using VS2012 on a 5 year old laptop, that was midrange at best when new. The requirements don't seem that steep.
Re: (Score:2)
Re: Who cares? (Score:4, Informative)
My experience was the opposite. VS2012 was night-and-day faster than VS2010 on my work machine, if only because it was much better at multi-threading. My peers had a similar experience. Perhaps my experience was different due to the fact that I don't run that many plug-ins.
VS2013 is an improvement as well, so I am curious to see how quickly I can get an upgrade approved.
Re: (Score:3)
It was the UI that made me hate 2012. The largely black and white themed icons slowed me down in finding the file I wanted in solution explorer in larger projects which was fucking annoying. It took some used to having the menu bars shouting at you all the time too.
I also hate the fact that it's a step backwards feature wise in some ways also, no more automated generation of unit tests for a class when using MSTest for example. I've also found NuGet can be quite annoying with it breaking once or twice and m
Re: (Score:2)
Re: (Score:2)
If you're using C++, VS 2013 comes with a much better compiler in terms of standards compliance, with more C++11 features (notably, variadic templates), and even some small pieces of C++14.
Re: (Score:2)
I find it easier on the eyes. There is a dark theme that makes my eyes feel less tired after hours of use.
Re: Who cares? (Score:2)
Thank you. I had tried a while back to fix that in office, and failed. This will make Visual Studio 2012 usable for me.
Re: (Score:2)
The real question is, why can't the Visual Studio programmers just use Windows Forms or XAML or whatever the Hell it is every other Windows application developer is "supposed" to use these days, so that VS looks and works like a "normal" application?
Re: (Score:2)
Actually, the real question is... WTF are you talking about?
Re: (Score:3)
I guess I don't know... I was working off the premise that Windows applications ought to (and do) have a "standard," consistent look-and-feel, but then I just looked through the UIs of the 10 or so applications I have open right now and pretty much every single one of them is different.
Maybe the follow-up question should be "why can't Microsoft be less schizophrenic about UI standards?"
Re: (Score:2)
Re:Who cares? (Score:4, Interesting)
VS2012 doesn't support XP as far as I know since
.Net 4.5 doesn't run there and the main thing with VS2012 was support for Metro. So that ship has sailed.
I don't think it is vendor lock in to expect developers to be using a OS that is less than 10 years old.
Re: (Score:2, Informative)
Yes, VS2012 and VS2013 still support XP. I'm running some stuff on Server 2003 right now, that I compiled with VS2013RC.
Here's how it's done:
Windows XP Targeting with C++ in Visual Studio 2012 [msdn.com]
Works exactly the same in VS2013 also.
Re: (Score:2)
Sorry I should have been more specific VS 2012 doesn't run on XP as far as I know, you can target the platform but you can't run on it. You also give up features in
.Net > 4 when targeting downwards which kind of sucks (async is your friend).
Programs! (Score:4, Insightful)
I look back with fondness for the times when a program was a set of instructions and declarations written in a programming language, rather than am odd derivative of C++ tied to a billion files in various XML schemas.
Re: (Score:2)
I look forward to the time when I can tell my computer, in plain English, what I need it to do and it just does it without having to program a specific application to do a specific function.
Re: (Score:3)
Be careful what you ask for. Computers are vindictive. One that has free reign to misinterpret what you are asking for it going to be nothing but trouble.
Re: (Score:2)
Re: (Score:2)
That's why I like my Apple 2e
Re: (Score:2)
Re:Programs! (Score:4, Funny)
I look back with fondness for the times when a program was a set of instructions and declarations written in a programming language, rather than am odd derivative of C++ tied to a billion files in various XML schemas.
Yeah and I remember hand crafting make files in order to build systems from all that carefully written C code.
.. progress sucks
I mean I really hate myself for clicking on the NuGet package manager that I installed in VS, browsing a huge number of open source solutions and downloading and installing libraries and libraries of useful code with almost a single click. Yeah
Re:Programs! (Score:5, Insightful)
Using lots of libraries and components is great... when it all works. When your app won't build and you get an obscure error message from some package that you didn't even know you were using, it's not so much fun. I handcrafted make files as well. At least then, I knew what was going on, and what depended on what.
Re: (Score:2)
I'd worry about a developer that doesn't even know what packages he is using.
It's not like NuGet provides a list of installed packages or anything. Oh wait.
You still know what's going on now. Scrap that. Competent developers still know what's going on now. The configs are still open and human readable, most people are aware of what dependencies they've added to their projects by simply not shutting down their brain whilst installing dependencies but I don't see how if you can't keep track of what you instal
Re: (Score:2)
What better way to expand your attack surface.
Truly, in the Age of Information, the Hackers shall inherit the Earth.
Re: (Score:2)
The tension between KISS and DRY has always been there. Both are fundamental principles and yet at some level they are incompatible, since writing reusable code necessarily involves increasing its complexity. And the less you want to RY, the more complexity you have to build in.
The C++ STL is a shining example of this. Everyday developers shouldn't be writing their own lists and array and hashmaps. They definitely shouldn't write their own string utilities. And they shouldn't have to change those implementa
Where is the RPM? (Score:3, Funny)
I tried to do
yum localinstall visualstudio-2013.exe
but it wouldn't load on any of my Fedora or CentOS boxes. Tried the same with aptitude on my Debian boxes, same story.
Is someone gonna repackage this for our favorite distro? Really, these guys are worse than Canonical when it comes to supporting the community.
Re: (Score:2)
I'm really sorry. We tried to build an RPM and a DEB, but for some reason no distro provides kernel32.dll in its repositories, and we need it as a dependency. I hope they fix that soon. ~
Still half-assed C++11 support (Score:4, Insightful)
(sigh)
Oh well... maybe next year they'll catch up. Oh wait, that's when C++14 is supposed to be standardized.
[double facepalm]
Re: (Score:2)
First thing that comes to mind? compile-time hashes used as case labels.
constexpr unsigned crc32_table(unsigned c,unsigned k=8)
...
{
return (k==0)?c:crc32_table((((c&1)?0xedb88320u:0)^(c>>1)),k-1);
}
constexpr unsigned crc32(const char *str, std::size_t len)
{
return (len==0)?0xffffffffu:((crc32(str,len-1)>>8) ^ crc32_table((crc32(str,len-1) ^ str[len-1]) & 0xFF));
}
constexpr unsigned operator "" _hash(const char *str, std::size_t len)
{
return crc32(str,len)^0xffffffffu;
}
w
Re: (Score:2)
Re: (Score:2)
Because it's a pain in the ass, that's why.
Also, I don't like wasting my time writing tools to "fix" somebody else's partially complete implementation of something... in this case, C++11.
Yes, I'm lazy. I'm a computer programmer.
Re: (Score:2)
Re: (Score:2)
Speaking for myself, there's some truth in that.
Only faster runtime.
Re: (Score:3)
In your example "show"_hash and "fill"_has
Re: (Score:2)
Re: (Score:2)
If used on a context which requires a constant, a constexpr will *always* be evaluated at runtime.
If used in any other context, it's possible that it will instead output code to compute the value instead of evaluating it.
Re: (Score:2)
Looking at gcc 4.7.1's output with constexpr, I've found that putting it in a context which *requires* a constant, such as a case label in a switch statement, the compiler will faithfully output an evaluated value, as expected.
If it is used in more general contexts, however, it doesn't always work. My experience is that tail recursive constexpr's, in particular, did not always output the evaluated constant directly, but instead would often output the code to compute it per the algorithm described in th
Re: (Score:2)-ty
Re: (Score:3)
C++ standard is evolving fast
Does no one else find it funny that saying that about five years ago would have been met with "WTF?!!"
However, I do have to agree, VS still has half-baked C++ support period. It's neat that they have their own
.NET stuff for C++, but I think they tend to think about that .NET stuff first and ISO C++ second. That's a shame really because I know quite a few (and maybe it's just the area I'm in) places wanting to hire those with C++11 skills.
so we can't expect all the compilers to offer an implementation in less than 6 months
Well the thing about it is that they've had longer than six months
Re: (Score:2)
Actually, yes, that's one of the notable changes in VC++ 2013 - it now supports C99 _Bool, compound literals, designated initializers, and most of C99 new headers. Apparently, that's sufficient to compile ffmpeg, which was the point of the exercise. Still no VLAs, though.
TFS... (Score:2)
Re: (Score:3)
Re: (Score:3)
TFS2010 very good? Oh, my.
I've seen: check-ins transpose lines on check out; complete failures to update to actual latest versions of code; and random check-outs of code with no local changes.
Other fun aspects: can't unshelve to anything but the changeset that the shelf came from; industry worst? merge and diff tool; no non-connected way of getting changeset info for automatic version information; despite being a centralized model, local workspaces can't be moved (say, in the advent of hardware failure on a
Re: (Score:2)
If you can reproduce any of those things, please email me.
This isn't really legible, but I think you meant to say "branch" or "workspace". But in any case, it isn't true. You can use tfpt.exe (tf power tools) and force all kinds of "unsafe" things, like unshelving an add/edit shelvese
Re: (Score:2)
I'm willing to believe things are going great in your environment — we have been plagued by problems. (Some of the gripes in my post may have been specific to TFS2008, though the mind-boggling line transposition was just two months ago.) We will almost certainly be upgrading TFS when we move to VS2013, though given some of the egregious compiler bugs present in the new release, we will probably wait until the first SP. In the meantime, we're migrating projects over to git, and ultimately we will proba
Re: (Score:2)
Ummm... How can you on one hand talk about your giddiness of moving to Git, and then complain about how things aren't accessible in VS? You have to drop to the git command line for a lot of things...
Just downgraded something to .NET 2.0 (Score:2)
Re: (Score:3, Funny)
I thought
.NET was dead and the Microsoft future was HTML5 now?
Re: (Score:2)
What have *you* been huffing?
.NET, in one form or another, is *the* main development framework Microsoft has been pushing the last few years, honestly.
Windows desktop pre-Win8: Native code or
.NET. .NET (via the subset usable in WinRT), native code (same caveat), or HTML5/JS. .NET (via Silverlight) or .NET (via XNA). .NET (via WinRT subset for phone) or native code (WinRT). .NET (via XNA).
Win8 / Windows RT apps:
Windows Phone 7:
Windows Phone 8:
Xbox 360 indie games:
This goes back even further, actually, but
Re: (Score:2, Interesting)
Apparently you missed the renewed interest in C++.
.NET is still very popular, but the .NET team never sold the Windows development team on .NET, who went off in their own direction with Metro and additions to WinAPI. So, if we're talking the past two years, then .NET is definitely not *the* main development framework, it's C++ (i.e. native code). How have you missed this? There have been a ton of articles over the past couple of years analyzing Microsoft's schizophrenia.
Perhaps you were just working really
Re: (Score:2)
The C++ frameworks have always had the cutting edge features first. MFC received ribbon support before the managed frameworks for example. This is because the managed frameworks usually just wrap around that anyway - i.e. WinForms wasn't much more than a wrapper around Win32 API.
But that doesn't mean they're the preferred, main, or recommended development framework. The Windows development team use C++ because they're doing OS development and it's the best tool for the job, coupled with the fact it's all bu
Re: (Score:2)
Microsoft has been pushing the idea that native code development is back (not that I noticed it was gone, I just kept writing C++). This may not be the best idea (I find your suggestion about pushing
.NET instead very plausible), but MS is at least publicly changing direction. Not as bad as Apple used to do, but it isn't pretty.
Re: (Score:2)
This is because the managed frameworks usually just wrap around that anyway - i.e. WinForms wasn't much more than a wrapper around Win32 API.
This actually hasn't been true for a while. You're right on WinForms, but that has been de facto deprecated from
.NET 3 onwards, with WPF taking its place - and WPF doesn't wrap any OS API, it does everything down to rendering on its own (and that is directly on top of Direct3D). Similarly, WPF Ribbon does not wrap the OS ribbon, it reimplements it.
Also, MFC received ribbon support first simply because Microsoft has bought it from a third party company which implemented it already. I believe it is also a fr
Re: (Score:2)
I write equal amounts native and managed code. I'll grant that managed ha seen a resurgence, but the only way you could call it the "main" framework is to note that the last few tool versions have added more updates to native-oriented tools than managed-oriented ones... which sounds good until you realie that the native tools were left to languish for so long that these updates have been almost entirely a matter of catching up, while
.NET has still gotten a bunch of cool new stuff like async.
Re: (Score:2)
LOL. We're waiting for Microsoft to catch up. It ain't 2008, bitch.
As a new user of Visual Studio (Score:2)
I'd like to ask - what am I missing?
Until recently, I hadn't programmed in anything apart from Matlab in Linux (which has a crappy "IDE") in over ten years (the last version of VS I ever used in any way was VS6.0). Anyway, I started to work on Python and C++, and have so far found a lot of positives with the IDE (Ultimate VS2012 - free from my organization).
VsVim and PTVS let me use a vim like editing features, and Python Tools for VS has also performed well (interactive debugging, autcomplete and comman
Re:As a new user of Visual Studio (Score:4, Informative)
Missing relative to other tools? Not terribly much, honestly; I wouldn't use VS for Java (by preference, I'd use NetBeans) or for POSIX native code, but both are possible. Some VS extensions are very handy; there's a tool for finding, installing and updating them called NuGet (should be built into current versions of VS, I think); you may want to check them out although it sounds like you've already found some plugins that you like. The git integration will probably improve over time; there has already been an update or two. Eclipse has slightly more refactoring power than is built into VS, but there are plugins for that and the Eclipse UI drives me nuts when I try to use it. The only major thing that comes to mind is that VS isn't going to run on anything except Windows (unless Wine support for it is a lot better than I remember) so, although there are Linux-compatible IDEs that can read its project files, it might not be the ideal tool for mixed environments.
Re: (Score:2)
Out of curiosity, did you have a chance to look at the Python/C++ mixed mode debugging [codeplex.com], and if so, how did you find it?
(I'm the PTVS developer who implemented it, and I'm always looking for feedback from users who use the feature on real-world applications, especially in terms of use cases, scenarios etc - i.e. what can be added or rearranged to improve the typical or not-so-typical workflow or make it more convenient. Bug reports are also always welcome, of course!)
Re: (Score:2)
if you're doing C++ development then it's great, but Python? I'd imagine not so much
You'd be surprised. I dare say that we're neck to neck with PyCharm, and doing some things better than them - e.g. type inference for code completion (try some of the code snippets in this video [youtube.com] in your favorite Python IDE, and see how it fares...). And no other Python IDE has anything like this [codeplex.com], to the best of my knowledge.
I can fully understand where you're coming from - it's true that, historically, Microsoft developer tools have focused on supporting pretty much only Microsoft languages and frameworks.
from demos (Score:2)
It seems that the editor changes are mainly a roll in of the powertools (I don't do client side web dev so javascript and ASP side of things don't matter to me). Makes me wonder: what will the next power tools be as it seems to be the only way I'll be getting new editor features?
I can't remember if VS2012 added it or not as my work developes mainly in 2010 but a big one I'd like to see is coding time checks on stored procedures for database projects. It annoys me that I have to migrate my database and run u
Re: (Score:2)
What are you talking about? I'm talking hand written sql files not ORM. The test project in the solution targets a specific version of SQL Server so VS should know the semantics of the TSQL we are writing but it seems (at least VS2010) to only consider stuff in the current file and names from other files. Say you have a proc called dumb and another called dumber which takes @cust int as an argument.
If dumb tries to call dumper
... declaration/initialization code
exec dumber @customer = @bob
You can compile th
Mandatory registration (Score:4, Informative)
Writing a program in Visual Studio requires mandatory registration, or the program will refuse to start up. This also gives Microsoft to arbitrarily deny specific programmers the ability to publish a program.
Oh, and this, from the VS 2010 Privacy Policy [microsoft.com],.
Re: (Score:3)
Microsoft can remotely target your computer
When additional data is requested, you can review the data and choose whether or not to send it.
Interesting use of "target."
Re: (Score:2)
"It's somewhat disappointing that Slashdot is used to advertise software like this. Fuck that, I'll stick with free (as in freedom) compilers like GCC, MinGW, LLVM etc. and free IDEs."
Yeah and free browsers like Firefox.
Oh wait, guess what Mozilla does when Firefox crashes? It does the following:
In rare cases, such as problems that are especially difficult to solve, Mozilla may request additional data, including sections of memory (which may include memory shared by any or all applications running at the ti
Microsoft is making it easy... (Score:2)
... to quit. It's because of Microsoft that I haven't coded in C++ for fifteen years. Really, is there a single developer on
/. that prefers this environment?
Got to taunt: A C++ developer is only useful when he knows how to code in C.
Re: (Score:2)
Given that most C++ projects I've received since forever from academic code on EdX through to proprietary game engine code contain a Visual Studio project I'd wager there's an awful lot of developers that prefer Visual Studio, and yes, I suspect a number of those are on Slashdot.
Though I find it a lot odd you say you haven't coded in C++ in 15 years.
Right, well aren't you just Mr Overqualified when it comes to judging then?
I haven't used a BSD distribution in about 15 years so it'd seem a little odd if I sa
Re: (Score:2)
Apparently, about the time you stopped coding in C++ was the time I started professional C++ development (I'd been teaching myself for a few years before that).
You should take a look at what C++ 11 can do for C++ code. It's rather dramatic. Nowadays, I almost never use raw pointers or call new or delete, and follow RAII practices. Result? Memory management is nearly automatic (it feels almost like garbage collection), and leaks are pretty much forgotten. The class with an actual destructor in it is fai
One major deficiency (Score:2)
The installer that was removed at the introduction of VS2012 has not been re-introduced. That means that now the Nullsoft alternative is more attractive.
The hope that Microsoft would adopt ADA is of course futile.
Windows SDK, VS Express, etc (Score:2)
I basically just want C/C++ libraries, compilers and build tools. But not the GUI of Visual Studio.
It used to be possible to Download the Windows SDK/Platform SDK for no charge, and it contained all the command line tools and libraries need to build applications. Now: directly from the download page [microsoft.com]: "The Windows SDK no longer ships with a complete command-line build environment. You must install a compiler and build environment separately. If you require a complete development environment that includes com
Re: (Score:2)
Yes, all Express editions that include C++ ship with a 64-bit compiler [slashdot.org] from VS 2012 onward.
(I still wish it was a separate download, though. A lot of people don't need to write code, just to compile downloaded stuff - e.g. when installing Python packages from PyPI)
Re:zero cost (Score:4, Insightful)
Apple, for instance, only charges $100 to develop on the iPad, giving the tools away.
Sure, and the dealership just GAVE ME the car I'm driving after charging me money for it! Wow that was nice of them.
Re: (Score:2)
Apple, for instance, only charges $100 to develop on the iPad, giving the tools away.
Sure, and the dealership just GAVE ME the car I'm driving after charging me money for it! Wow that was nice of them.
Ignorance is bliss... Xcode is still free even if you don't want to pay $100 for a developer account.
Actually you had to choose between two possible interpretations of what I said. 1) I am being facetious and am simply making a joke about the way he worded that, and 2) I was making a factual statement about developing software on (or for) the iPad. Because there was no additional context, you had to pick one. Naturally you chose the one that lets you make a smug comment while judging yourself smarter than me.
Is that bliss? Seems the product of a deep-seated (and horribly widespread) insecurity to m
Re: (Score:2)
Nah. It ain't "deep-seated". We just hate the same old bullshit by lousy "programmers". Apple likes developers and gives its tools away for free. Actually, the only company that doesn't respect its developers is Microsoft. But, oh wait, there are no real developers on the Microsoft platform. Apple and *nix has all the developers.
Ouch. Burn.
Prove me wrong. Show me any tool that is coded for or coded by Microsoft that is:
1) desirable 2) practical 3) intuitive
Waiting...........
If you're looking for a fan of Microsoft to defend the merits of their software, you're barkin' up the wrong tree, friend. I've been a Linux user since around 1996 or so and have no interest in Microsoft products.
... after you pay. That's all. I have no idea how it can be so difficult to appreciate (or dislike) a simple jest.
I simply found it amusing the way that guy worded his sentence, saying that something was free
Re:zero cost (Score:4, Insightful)
"I can't even begin to comprehend why MS feels it needs to charge for the product"
I know, right? I don't know why the grocery store charges for hot dogs either. It's just a product.
More apps for the iPad means more app sales, which Apple takes a cut of, so that's a pretty bad example. Microsoft does give away the Express version, which is pretty decent for most non-commercial software.
Re: (Score:2)
Huh? No, what you mean is that a Microsoft grocery store would charge for the hotdogs and buns just so I can buy the ketchup. From there, I can only sell their own brand hotdog in a square full of Microsoft employees.
Did I mention the hotdogs were five-years-old?
Re: (Score:2)
I don't think you know what you're talking about. Developing for Windows / Win Phone is $19 and the express version does do everything most people will need.
Most people who pay for VS do so via MSDN which gets you a lot more than just VS.
Re: (Score:3, Informative)
The Express editions have a bunch of arbitrary limitations in them.
The two that bit me were:
1. You can't install plugins. I don't currently use any I can't live without, but several features in VS2013 -- e.g. NuGET, the thumbnail view replacing the scroll bar, better refactoring, visual indent level indication -- started out as plugins. Even if you take the view that eventually, all third-party plugin features eventually make it into the retail version, you're opting into being years behind the current stat
Re: (Score:2)
That's mostly a problem of team, not tools. Lots of open source developers are shit at bug tracking too. TFS isn't my first choice of tool, but it works.
Also, to the extent that the tools are the problem, that's largely because you're using tools that are 3-5 years old. Updating to newer versions won't make them any more familiar to you (as if ability to adapt to tools isn't a vital skill for a professional coder...) but it will add a lot of functionality that you may be looking for.
It won't fix team stupid
Re: (Score:2)
Just stepped into an organisation running TFS '08 & VS '10.
Coming from a background in open source, using Eclipse, SVN, Bugzilla & TRAC this MS stuff seems like absolute dross to me but I'm not in the position to change it yet.
Anyone have any advice regarding getting up to speed on this stuff. In particular the team I'm working with have NO concept of bug tracking which seems like madness. Is this side of TFS really so terrible?
If you are referring to Bugzilla as your "bug tracker", god help you. What a nightmare of a user interface. It would be easier to track bugs by chiseling them into granite than to use Bugzilla.
Give Visual Studio a chance. I haven't used it for a few years but it's clean and works well. Don't pine for your old environment till you've tried the new.
Re: (Score:2) b
Re: (Score:2)
"Graphics debugging has been furthered"
I don't believe that 'further' is a verb.
Not sure if I'm missing the joke, but further is a verb and furthered is its past participle.
Re: (Score:2)
You're not missing the joke. Everyone else is just missing English class.
Re:WOW (Score:5, Interesting)
All this value free for the express edition! gotta thank GNU, if it weren't for them we'd be milked for way less stuff.
Actually, you can thank the Microsoft's own Platform SDK for all this free value. This included a free C++ compiler, and was released at the start of this century. It was originally for MSDN subscribers, but it was released to the public for anyone to download. If you want to thank anyone for this inital free release, I think it would be Watcom C++ which was released as open source in 2000 after commercial development stopped. At the time that was a much bigger competitor to Microsoft's dev kits than any GNU software.
Re: (Score:2)
There's a free plugin for VS2010 and on that replaces the editor with a vim-style one. It's not quite as nice as using gvim itself but really is fantastic. I don't know how developers can live with the standard editor's find tool.
The author orginally wrote it to teach himself F# - [microsoft.com]
Re: (Score:2)
I vaguely recollect someone years ago wrote an BASIC interpreter in Excel. It would even generate ASCII graphics. It wasn't fast but...
Visual Studio? Released? (Score:5, Funny)
On bond, or recognizance?
Re:Link to.Net 4.5.1 ? (Score:4, Funny)
Where is the real link to the final release of
.Net 4.5.1 ???
Here [microsoft.com]. At a labour rate of $100/h, that would be a charge of $0.01.
Re: (Score:2)
Maybe I'm more irritated by this than most, but I liked the VS2010 GUI; colorful icons, a relatively smart professional image. With VS2013 they appear to have tried to "geek it up" or something by making all the tool menus have CAPITAL headings which looks fucking retarded, and making most of the items monochrome (what is that, retro?) Apparently they're trying to 'draw my attention' to the code without distracting me with icons that are nice looking and, ya know, give you a clue what the fuck they do. It just looks like a trainwreck. If there's a VS2010 skin, that's the first thing to install.
There is a registry hack to get rid of the dreaded ALL CAPS.
2012 Full: HKCU:\Software\Microsoft\VSWinExpress\11.0\General\\SuppressUppercaseConversion DWORD 1
2012 Express: HKCU:\Software\Microsoft\VWDExpress\11.0\Genera\\SuppressUppercaseConversion DWORD 1
For 2013 replace 11.0 with 12.0.
Re: (Score:2)
For those of you who have switched because of the dreaded Windows registry, say Amen!
I said Amen!!!!
Can I have an Amen??
Re: (Score:3)
Flat, minimalist 'design' (And I use that word loosely) is all the rage these days. Take a look at google+
...it looks fucking hideous. There are plenty of other websites following this shitty trend, miles of brilliant whitespace everywhere, no borders around anything to give it some context, It gives me a headache and ensures I won't visit again. Office 2013 is just as awful; NOT ONLY DO THE RIBBON MENUS SHOUT AT YOU, it's a bland wasteland of empty ideas, with only three colour schemes - brilliant white,
Re: (Score:2)
Just wait until tomorrow. What will happen with
/. scheduled downtime?.. will they replace the current site with their Wordpress beta theme?
Re: (Score:2)
The prompt also had a link to skip logging in. You should pay more attention.
Re: (Score:2).
Re: (Score:3)
What? one minute you're complaining about having to have an account to download, the next you're complaining that they might delete your account that you don't want in the first place.
Then you're jumping to some nonsense conclusion that by terminating your account they'll somehow hack into your computer and delete your software too?
This isn't Google apps. It's not a web based tool.
Re: (Score:2)
what's a good free code editor or IDE for C# or F# that still does projects/solutions
SharpDevelop?
MSbuild that MS offers right now is for 4.5.1x only
It requires 4.5.1 to run (but then why wouldn't you want to upgrade?), but it can build applications for any version of
.NET from 2.0 up.
What if I want to compile a single source file and I don't want the stupid dev command line or MSbuild to do it for me?
You don't need any special command line, or MSBuild, to compile a single file. Just add csc.exe to your path and run it directly: csc foo.cs.
Re: (Score:2)
This version is a somewhat better at C99: it has compound literals, designated initializers, _Bool, and most C99 headers. Still not full support for the standard, but it should be much easier to compile a lot of code from the Linux land now. | http://developers.slashdot.org/story/13/10/17/2142241/visual-studio-2013-released?sbsrc=developers | CC-MAIN-2015-48 | refinedweb | 6,005 | 72.56 |
Welcome to the React for Beginners guide. It's designed to teach you all the core React concepts that you need to know to start building React applications in 2021.
I created this resource to give you the most complete and beginner-friendly path to learn React from the ground up.
By the end you will have a thorough understanding of tons of essential React concepts, including:
- The Why, What, and How of React
- How to Easily Create React Apps
- JSX and Basic Syntax
- JSX Elements
- Components and Props
- Events in React
- State and State Management
- The Basics of React Hooks.
React Basics
What is React, really?
React is officially defined as a "JavaScript library for creating user interfaces," but what does that really mean?
React is a library, made in JavaScript and which we code in JavaScript, to build great applications that run on the web.
What do I need to know to learn React?
In other words, you do need to have a basic understanding of JavaScript to become a solid React programmer?
The most basic JavaScript concepts you should be familiar with are variables, basic data types, conditionals, array methods, functions, and ES modules.
How do I learn all of these JavaScript skills? Check out the comprehensive guide to learn all of the JavaScript you need for React.
If React was made in JavaScript, why don't we just use JavaScript?
React was written in JavaScript, which was built from the ground up for the express purpose of building web applications and gives us tools to do so.
JavaScript is a 20+ year old language which was created for adding small bits of behavior to the browser through scripts and was not designed for creating complete applications.
In other words, while JavaScript was used to create React, they were created for very different purposes.
Can I use JavaScript in React applications?
Yes! You can include any valid JavaScript code within your React applications.
You can use any browser or window API, such as geolocation or the fetch API.
Also, since React (when it is compiled) runs in the browser, you can perform common JavaScript actions like DOM querying and manipulation.
How to Create React Apps
Three different ways to create a React application
- Putting React in an HTML file with external scripts
- Using an in-browser React environment like CodeSandbox
- Creating a React app on your computer using a tool like Create React App
What is the best way to create a React app?
Which is the best approach for you? The best way to create your application depends on what you want to do with it.
If you want to create a complete web application that you want to ultimately push to the web, it is best to create that React application on your computer using a tool like Create React App.
If you are interested in creating React apps on your computer, check out the complete guide to using Create React App.
The easiest and most beginner-friendly way to create and build React apps for learning and prototyping is to use a tool like CodeSandbox. You can create a new React app in seconds by going to react.new!
JSX Elements
JSX is a powerful tool for structuring applications
JSX is meant to make creating user interfaces with JavaScript applications easier.
It borrows its syntax from the most widely used programming language: HTML. As a result, JSX is a powerful tool to structure our applications.
The code example below is the most basic example of a React element which displays the text "Hello World":
<div>Hello React!</div>
Note that to be displayed in the browser, React elements need to be rendered (using
ReactDOM.render()).
How JSX is different from HTML
We can write valid HTML elements in JSX, but what differs slightly is the way some attributes are written.
Attributes that consist of multiple words are written in the camel-case syntax (like
className) and have different names than standard HTML (
class).
<div id="header"> <h1 className="title">Hello React!</h1> </div>
JSX has this different way of writing attributes because it is actually made using JavaScript functions (more on this later).
JSX must have a trailing slash if it is made of one tag
Unlike standard HTML, elements like
input,
img, or
br must close with a trailing forward slash for it to be valid JSX.
<input type="email" /> // <input type="email"> is a syntax error
JSX elements with two tags must have a closing tag
Elements that should have two tags, such as
div,
main or
button, must have their closing, second tag in JSX, otherwise it will result in a syntax error.
<button>Click me</button> // <button> or </button> is a syntax error
How JSX elements are styled
Inline styles are written differently as well as compared to plain HTML.
- Inline styles must not be included as a string, but within an object.
- Once again, the style properties that we use must be written in the camel-case style.
<h1 style={{ color: "blue", fontSize: 22, padding: "0.5em 1em" }}> Hello React! </h1>;
Style properties that accept pixel values (like width, height, padding, margin, etc), can use integers instead of strings. For example,
fontSize: 22 instead of
fontSize: "22px".
JSX can be conditionally displayed
New React developers may be wondering how it is beneficial that React can use JavaScript code.
One simple example if that to conditionally hide or display JSX content, we can use any valid JavaScript conditional, like an if statement or switch statement.
const isAuthUser = true; if (isAuthUser) { return <div>Hello user!</div> } else { return <button>Login</button> }
Where are we returning this code? Within a React component, which we will cover in a later section.
JSX cannot be understood by the browser
As mentioned above, JSX is not HTML, but is composed of JavaScript functions.
In fact, writing
<div>Hello React</div> in JSX is just a more convenient and understandable way of writing code like the following:
React.createElement("div", null, "Hello React!")
Both pieces of code will have the same output of "Hello React".
To write JSX and have the browser understand this different syntax, we must use a transpiler to convert JSX to these function calls.
The most common transpiler is called Babel.
React Components
What are React components?
Instead of just rendering one or another set of JSX elements, we can include them within React components.
Components are created using what looks like a normal JavaScript function, but it's different in that it returns JSX elements.
function Greeting() { return <div>Hello React!</div>; }
Why use React components?
React components allow us to create more complex logic and structures within our React application than we would with JSX elements alone.
Think of React components as our custom React elements that have their own functionality.
As we know, functions allow us to create our own functionality and reuse it where we like across our application.
Components are reusable wherever we like across our app and as many times as we like.
Components are not normal JavaScript functions
How would we render or display the returned JSX from the component above?
import React from 'react'; import ReactDOM from 'react-dom'; function Greeting() { return <div>Hello React!</div>; } ReactDOM.render(<Greeting />, document.getElementById("root));
We use the
React import to parse the JSX and
ReactDOM to render our component to a root element with the id of "root."
What can React components return?
Components can return valid JSX elements, as well as strings, numbers, booleans, the value
null, as well as arrays and fragments.
Why would we want to return
null? It is common to return
null if we want a component to display nothing.
function Greeting() { if (isAuthUser) { return "Hello again!"; } else { return null; } }
Another rule is that JSX elements must be wrapped in one parent element. Multiple sibling elements cannot be returned.
If you need to return multiple elements, but don't need to add another element to the DOM (usually for a conditional), you can use a special React component called a fragment.
Fragments can be written as
<></> or when you import React into your file, with
<React.Fragment></React.Fragment>.
function Greeting() { const isAuthUser = true; if (isAuthUser) { return ( <> <h1>Hello again!</h1> <button>Logout</button> </> ); } else { return null; } }
Note that when attempting to return a number of JSX elements that are spread over multiple lines, we can return it all using a set of parentheses () as you see in the example above.
Components can return other components
The most important thing components can return is other components.
Below is a basic example of a React application contained with in a component called
App that returns multiple components:
import React from 'react'; import ReactDOM from 'react-dom'; import Layout from './components/Layout'; import Navbar from './components/Navbar'; import Aside from './components/Aside'; import Main from './components/Main'; import Footer from './components/Footer'; function App() { return ( <Layout> <Navbar /> <Main /> <Aside /> <Footer /> </Layout> ); } ReactDOM.render(<App />, document.getElementById('root'));
This is powerful because we are using the customization of components to describe what they are (that is, the Layout) and their function in our application. This tells us how they should be used just by looking at their name.
Additionally, we are using the power of JSX to compose these components. In other words, to use the HTML-like syntax of JSX to structure them in an immediately understandable way (like the Navbar is at the top of the app, the Footer at the bottom, and so on).
JavaScript can be used in JSX using curly braces
Just as we can use JavaScript variables within our components, we can use them directly within our JSX as well.
There are a few core rules to using dynamic values within JSX, though:
- JSX can accept any primitive values (strings, booleans, numbers), but it will not accept plain objects.
- JSX can also include expressions that resolve to these values.
For example, conditionals can be included within JSX using the ternary operator, since it resolves to a value.
function Greeting() { const isAuthUser = true; return <div>{isAuthUser ? "Hello!" : null}</div>; }
Props in React
Components can be passed values using props.
ReactDOM.render( <Greeting username="John!" />, document.getElementById("root") ); function Greeting(props) { return <h1>Hello {props.username}</h1>; }
Props cannot be directly changed
Props must never be directly changed within the child component.
Another way to say this is that props should never be mutated, since props are a plain JavaScript object
// We cannot modify the props object: function Header(props) { props.username = "Doug"; return <h1>Hello {props.username}</h1>; }
Components are considered pure functions. That is, for every input, we should be able to expect the same output. This means we cannot mutate the props object, only read from it.
Special props: the children prop
The children prop is useful if we want to pass elements / components as props to other components
The children prop is especially useful for when you want the same component (such as a Layout component) to wrap all other components.
function Layout(props) { return <div className="container">{props.children}</div>; } function IndexPage() { return ( <Layout> <Header /> <Hero /> <Footer /> </Layout> ); } function AboutPage() { return ( <Layout> <About /> <Footer /> </Layout> ); }
The benefit of this pattern is that all styles applied to the Layout component will be shared with its child components.
Lists and Keys in React
How to iterate over arrays in JSX using map
How do we displays lists in JSX using array data? We use the
.map() function to convert lists of data (arrays) into lists of elements.
const people = ["John", "Bob", "Fred"]; const peopleList = people.map((person) => <p>{person}</p>);
You can use
.map() for components as well as plain JSX elements.
function App() { const people = ["John", "Bob", "Fred"]; return ( <ul> {people.map((person) => ( <Person name={person} /> ))} </ul> ); } function Person({ name }) { // we access the 'name' prop directly using object destructuring return <p>This person's name is: {name}</p>; }
The importance of keys in lists> {people.map((person) => ( <Person key={person.id} name={person.name} /> ))} </ul> ); }
State and Managing Data in React
What is state in React?
State is a concept that refers to how data in our application changes over time.
The significance of state in React is that it is a way to talk about our data separately from the user interface (what the user sees).
We talk about state management, because we need an effective way to keep track of and update data across our components as our user interacts with it.
To change our application from static HTML elements to a dynamic one that the user can interact with, we need state.
Examples of how to use state in React
We need to manage state often when our user wants to interact with our application.
When a user types into a form, we keep track of the form state in that component.
When we fetch data from an API to display to the user (such as posts in a blog), we need to save that data in state.
When we want to change data that a component is receiving from props, we use state to change it instead of mutating the props object.
Introduction to React hooks with useState
The way to "create" state is React within a particular component is with the
useState hook.
What is a hook? It is very much like a JavaScript function, but can only be used in a React function component at the top of the component.
We use hooks to "hook into" certain features, and
useState gives us the ability to create and manage state.
useState is an example of a core React hook that comes directly from the React library:
React.useState.
import React from 'react'; function Greeting() { const state = React.useState("Hello React"); return <div>{state[0]}</div> // displays "Hello React" }
How does
useState work? Like a normal function, we can pass it a starting value (like "Hello React").
What is returned from useState is an array. To get access to the state variable and its value, we can use the first value in that array:
state[0].
There is a way to improve how we write this, however. We can use array destructuring to get direct access to this state variable and call it what we like, such as
title.
import React from 'react'; function Greeting() { const [title] = React.useState("Hello React"); return <div>{title}</div> // displays "Hello React" }
What if we want to allow our user to update the greeting they see? If we include a form, a user can type in a new value. However, we need a way to update the initial value of our title.
import React from "react"; function Greeting() { const [title] = React.useState("Hello React"); return ( <div> <h1>{title}</h1> <input placeholder="Update title" /> </div> ); }
We can do so with the help of the second element in the array that useState returns. It is a setter function, to which we can pass whatever value we want the new state to be.
In our case, we want to get the value that is typed into the input when a user is in the process of typing. We can get it with the help of React events.
What are events in React?
Events are ways to get data about a certain action that a user has performed in our app.
The most common props used to handle events are
onClick (for click events),
onChange (when a user types into an input), and
onSubmit (when a form is submitted).
Event data is given to us by connecting a function to each of these props listed (there are many more to choose from than these three).
To get data about the event when our input is changed, we can add
onChange on input and connect it to a function that will handle the event. This function will be called
handleInputChange:
import React from "react"; function Greeting() { const [title] = React.useState("Hello React"); function handleInputChange(event) { console.log("input changed!", event); } return ( <div> <h1>{title}</h1> <input placeholder="Update title" onChange={handleInputChange} /> </div> ); }
Note that in the code above, a new event will be logged to the browser's console whenever the user types into the input
Event data is provided to us as an object with many properties which are dependent upon the type of event.
How to update state in React with useState
To update state with useState, we can use the second element that useState returns to us in its array.
This element is a function that will allow us to update the value of the state variable (the first element). Whatever we pass to this setter function when we call it will be put in state.
import React from "react"; function Greeting() { const [title, setTitle] = React.useState("Hello React"); function handleInputChange(event) { setTitle(event.target.value); } return ( <div> <h1>{title}</h1> <input placeholder="Update title" onChange={handleInputChange} /> </div> ); }
Using the code above, whatever the user types into the input (the text comes from
event.target.value) will be put in state using
setTitle and displayed within the
h1 element.
What is special about state and why it must be managed with a dedicated hook like useState is because a state update (such as when we call
setTitle) causes a re-render.
A re-render is when a certain component renders or is displayed again based off the new data. If our components weren't re-rendered when data changed, we would never see the app's appearance change at all!
What's Next
I hope you got a lot of out this guide.
If you want a copy of this cheatsheet to keep for learning purposes, you can download a complete PDF version of this cheatsheet here.
Once you have finished with this guide, there are many things you can learn to advance your skills to the next level, including: | https://www.freecodecamp.org/news/react-for-beginners-cheatsheet/ | CC-MAIN-2021-25 | refinedweb | 2,982 | 62.68 |
Opened 7 years ago
Closed 4 years ago
#12231 closed Bug (worksforme)
The project path is incorrectly build, it wipe the namespace
Description (last modified by )
If by example your django is in an egg like
my.site.project
If you call the django core execution manager with:
mod = __import__('my.site.project') django.core.management.execute_manager(mod)
Your project will not be found cause the django core is declaring the main DJANGO_SETTINGS_MODULE with:
django.core.management.init:
... project_directory, settings_filename = os.path.split(p) if project_directory == os.curdir or not project_directory: project_directory = os.getcwd() project_name = os.path.basename(project_directory) ... os.environ['DJANGO_SETTINGS_MODULE = '%s.%s' % (project_name, settings_name) ...
The project_name variable represent only the first level of the project egg. It could not work with a multilevel package and we now have to use the variable 'original_settings_path' which is just optional.
The solution maybe to use the special package variable of the settings_mod module.
I attach a patch with this ticket whick correct this problem.
Attachments (1)
Change History (8)
Changed 7 years ago by
comment:1 Changed 7 years ago by
Please use preview.
comment:2 Changed 7 years ago by
comment:3 Changed 7 years ago by
1.2 is feature-frozen, moving this feature request off the milestone.
(and egg support is most definitely a feature request)
comment:4 Changed 6 years ago by
comment:5 Changed 5 years ago by
Change UI/UX from NULL to False.
comment:6 Changed 5 years ago by
Change Easy pickings from NULL to False.
comment:7 Changed 4 years ago by
Most of this code is now obsolete. Reopen with current code references if still valid.
django.core.management.init.py patch | https://code.djangoproject.com/ticket/12231 | CC-MAIN-2017-09 | refinedweb | 281 | 50.02 |
In this article, we'll look at interacting with your Java programs remotely by taking advantage of the Apache Web Server that ships with Mac OS X. We'll write a quick CGI script to compile a Java program and then write a slightly more complicated script to process HTML files. Many thanks to Herb Schilling of NASA's Glenn Research Center in Cleveland and O'Reilly's Nat Torkington for their help getting the CGI scripts to do what I want. The example is taken from an upcoming book that I co-wrote with Dan Palmer on Extreme Software Engineering from Prentice Hall.
The Fit framework is another great idea from Ward Cunningham. You can read more about it and download it at fit.c2.com . Before looking at the CGI scripts, I want to talk briefly about the Fit framework. It's such a cool idea and much simpler than the solutions that many of us have been playing with for writing and automatically running acceptance tests. As always, suggestions for future articles and comments can be sent to me at DSteinberg@core.com.
At least once a day someone ducks into the office of one of my colleagues and makes some comment about the two Macs on his desk. He teaches computer science at a local university and used to run Windows and Linux but now has a TiBook and an iMac. Once each day he listens as someone derides his choice of a "toy machine."
I ask him what his answer is, and he shrugs and responds, "What's the point?"
"What's the point?" I exclaim. "The point is that your Mac ships with Java, Ruby, Python, and Perl. The point is that you can open up a Terminal window and edit using vi or emacs. You can set up sendmail or use lynx. You can enable the Apache Web server that ships with every Mac by checking a check box. That's the point."
"I usually tell them that," he says, "but then they ask me if they can play the latest version of some game on it like they can on a Windows box."
"Well then," I reply, "which one is the toy machine?"
I've been thinking a lot about toys lately, and I've reconsidered my position. The Mac is a toy. Mac OS X is a totally decked out toy for developers. In the old days I would go to conferences and see the latest technology and know that it would be years until I'd see a Mac version. In fact, I first got into the business of covering Java on the Mac because Apple's releases always lagged so far behind the corresponding Windows release. Sure, Java 1.4 still isn't final, but you can download regular releases of the Developers Preview for free at the ADC site . We'll talk more about why the upcoming Java release is special in a future article.
Last month I went to Seattle for the Ruby developers' conference and for the annual OOPSLA (Object Oriented Programming something something something ) conference and then on to Dallas for the Lonestar Software Symposium. The first thing that struck me at all of these conferences was the growing percentage of Macs that were around. If Macs still represent only about five percent of the market, then they are way over-represented at developers shows these days. Ruby inventor Matz said that he is pleased that Ruby is distributed as part of Jaguar but that no one from Apple ever contacted him. In Dallas, different developers used different IDEs from IDEA and Eclipse to ProjectBuilder and command line tools. At OOPSLA, Martin Fowler mentioned a new framework available for supporting acceptance testing.
The Fit framework allows business people to specify what software should do or how it will behave in simple tables. These tables are meant to be embedded in HTML or Wiki pages. The tables are then tied into the application that they are testing by fixtures that extend and use the classes provided in the framework. What makes this framework even cooler is that a customer can run the acceptance tests remotely just by clicking a hyperlink in a browser. The focus of this article will be setting up the CGI scripts to remotely process the HTML pages. In the next section, you'll get a taste of what Fit is all about.
First, let's imagine that you're building a cash register program. If you enter the unit price in pennies and the number of items purchased, then you'd like to verify that the total cost for that particular item is correct. Your table might look something like this: (You can view source if you need an HTML refresher.)
For example, in this table the items in all rows except the last one are priced at $8 each. The first row helps you confirm that the price of one item is $8, and the second row helps you confirm that the price of two items is $40. The third row doesn't seem right at first. Your client is trying to verify that a 5 percent discount is given for each set of twelve purchased.
One of the advantages of the Fit framework is that the tables are easy for the clients to create and for the developers to process. In this case, the developers have indicated that the Java class that will be used to process it is called CaseDiscountFixture and that it is inside of the register package. This particular fixture extends the ColumnFixture class that is part of the Fit framework. In this case, each column of the table will map to a different member of the CaseDiscountFixture class. The first two columns will be inputs and correspond to variables, the third column is the expected return value from a method call. The CaseDiscountFixture class will need to contain variables named unitPrice and numberPurchased of type int and a method named itemTotal() that takes no arguments and returns an int .
CaseDiscountFixture
ColumnFixture
unitPrice
numberPurchased
int
itemTotal()
Here's a look at one possible example of register.CaseDiscountFixture .
register.CaseDiscountFixture
package register;
import fit.ColumnFixture;
public class CaseDiscountFixture extends ColumnFixture{
public int unitPrice;
public int numberPurchased;
public int itemTotal(){
Item item = new Item(unitPrice);
item.addToOrder(numberPurchased);
return item.totalItemCost();
}
}
Notice this creates a new instance of type Item and sends it messages. If you'd like, for now you can replace the body of itemTotal() with this return statement.
Item
return unitPrice * numberPurchased;
In practice, you would use the first approach, but this second, stubbed out version will allow you to easily experiment with the framework without creating all of the supporting classes.
Related Reading
Learning Unix for Mac OS X
By Dave Taylor, Brian Jepson
As a second example, let's look at a table that models how you might control a GUI. Here you'll model pressing buttons, entering information, and checking the results. Continuing the cash register example, this might represent manually entering the price for an item that can't be scanned.
In this case there are four keywords recognized in the fit.ActionFixture class: start , press , enter , and check . The press keyword is used to simulate a button press. In this example you can see that there is a miscButton , enterButton , timesButton , and doneButton . These will correspond to method calls -- not in the ActionFixture class but in the register.MiscItemFixture that is instantiated with the start keyword. The enter keyword also calls methods with names specified in the second column with values passed in the third column. Finally, the check keyword calls the method in the second column and compares the return value with the value in the third column.
fit.ActionFixture
start
enter
check
miscButton
enterButton
timesButton
doneButton
ActionFixture
register.MiscItemFixture. | http://www.macdevcenter.com/pub/a/mac/2002/12/10/osx_java.html | CC-MAIN-2014-15 | refinedweb | 1,310 | 62.27 |
At a former job I developed telephony applications that took advantage of speech recognition and text to speech. Since android has built in text to speech functionality I figured I would expose the functionality to PhoneGap developers via a plugin. Maybe some other folks can take advantage of this plugin and provide more accessible Android applications.
In order to use the TTS plugin wait until you get the deviceready event and then call:
window.plugins.tts.startup(startupWin, fail);This will start the TTS service on your device. However, the service will take a bit of time to start so the startup success callback is first called with a value of TTS.INITIALIZING which tells you that the service is initializing. Once the service is completely started the success callback is executed again with a value of TTS.STARTED which means we are ready to synthesize text.
function startupWin(result) { // When result is equal to STARTED we are ready to play if (result == TTS.STARTED) { //Ready to go } }To have your device play some text you simply call the speak method:
window.plugins.tts.speak("The text to speech service is ready");If you want to have some silence between utterances you can call the silence method:
window.plugins.tts.speak("Let me think."); window.plugins.tts.silence(2000); window.plugins.tts.speak("I do not know");In the above example the TTS service will pause for 2 seconds between the two speak calls. Both the speak and silence methods allow you to provide optional success and error callbacks if you need that kind of information.
Of course all of the above examples assume you are using the English (American) package to speak your utterances. If want to find out what the currently assigned TTS language is you call the getLanguage method:
window.plugins.tts.getLanguage(win, fail);The success callback has one parameter and that will be a text representation of the Locale currently set for the TTS service.
Now, Android by default supports the following languages for the TTS service English (American), English (UK), French, German, Italian and Spanish. However, it is not guaranteed that each package will be installed on your device. So before you set the TTS language you will want to check to see if it is available by calling the isLanguageAvailable method and in the success callback then you set the TTS language you want to use by executing the setLanguage method:
window.plugins.tts.isLanguageAvailable("fr", function() { window.plugins.tts.setLanguage("fr"); }, fail);And of course if you no longer need the TTS service than call the shutdown method to free up the resources:
window.plugins.tts.shutdown(win, fail);If you don't call shutdown on the TTS service the service will be shutdown when your device exits.
The plugin is available at the Android PhoneGap Plugin repository. Please give it a download, try it out and let me know what you think.
The following listing shows a full example of using the TTS Plugin:
65 comments:
Hi,
Thanks for sharing this cool plugin. What do you thin about speech recognition? Is it possible to extend your plugin this way? I'm inexperienced with the native Android API so any suggestion is welcome :)
Regards,
Chris
@Chris79 Speech recognition is possible but I will probably create a whole new plugin to support it.
@simon... i cant find tts.js..where i can download that.?? .. did you create new plugin for "text to speech". it would be great if you release that. Thanks in advance
Am trying to find tts for my phonegap application. i found your plugin and try to see the demo in my local. i cant able to find the tts.js. i added phonegap.js and the speech.html. if you provide that it would be great for me to work.
Thanks in advance
@abdulrahman You can get all the code from GitHub.
Hi, Simon.This seems like a great plugin. Unfortunately I have been unable to get it to work even after following the instructions on github. This should work fine in phonegap 1.4.1 shouldnt it? Also any phone running android 1.6 or above should be able to run the plugin correct?
@denhosi1
It should work fine on PhoneGap 1.4.1 but you'll need Android 2.1 or better.
For people using Phonegap 1.5, I let eclipse edit line #69 in the tts.java file from mTts = new TextToSpeech(ctx, this); to mTts = new TextToSpeech((Context) ctx, this);. I tried to learn what (context) means exactly, but it was a very abstract concept. I think it is similar to using this in a function but in a broader scale Everything is working well now!
Thank you so much for writing this library, this will allow me to make a better art application, and for others to make more accessible/safer/entertaining apps. Thanks!
@gregcoleinfo
Actually if you are using 1.5 you should take a look at the change I pushed up to Github yesterday. Instead of casting ctx to a Context you call ctx.getContext().
And finally, you're welcome.
Hi, Simon.This seems like a great plugin. Unfortunately I have been unable to get it to work even after following the instructions on github. TTS.java is a java class or is an android activity. How do i link TTS.java to the HTML. Sorry i just started using phonegap, so i might asked a stupid question
@BK-Stalker
TTS.java is a Java source file that gets compiled into a class when you build your app. Check out:
For installation instructions into your project.
Hi Simon,
I want to highlight the text with speech recognition.Suppose I have a text file so i want to highlight the text which is reading by speech recognition like if it reads Hello so it should highlight Hello.
I hope you got my point.
Can you help me with this please.
Thanks A lot.
@Sarika
Yeah, that shouldn't be too hard to do. You'd need to break it down so that each word is sent to the speak command and as you do that change the CSS of the element in the web view to show the highlight.
Hi Simon,
Great plugin.
FYI: in my testing on a phone with the "Pico TTS" speech engine installed and "English (United Kingdom)" chosen as the language, your plugin reports "en_GBR" (with an "R") as the result of getLanguage().
Using isLanguageAvailable(), it finds three types of English: en_US, en_GB, and en_GBR. However, en_GB has exactly the same voice as en_US (i.e., with an American accent). Only en_GBR sounds British.
Keep up the good work!
Hi Simon, I have three important questions:
The TTS plugin works offline?
Is posible to install other languages, like Portuguese?
The response is as fast as lowlatency plugin, or is very slow in comparison?
Thanks as always!
@nicoprofe
Yes
Yes, you can install additional languages. I should update the plugin to use the ACTION_INSTALL_TTS_DATA which takes you to the market to get the new language.
I don't know.
Hi Simon,
The plugin works great. I have 2 questions :
1. If i want to stop the plugin from speaking, how can i do add. Is there a command for stopping the speech.
2. Are you planning to make TTS for iOS devices?
@Vishal
Yes, you can call stop to cancel the playback or interrupt which will stop it and start playing new text you pass in. Check out the tts.js file for all the methods.
Hi Simon, I am using your wonderful plugin in one project.
Is possible to get a better voice that the one that comes with android (pico TTS)? For example something with the quality of the voice commands app that come with android devices...
Thanks!
@nicoprofe
Yes, it is possible to install extra voices and even different TTS engines on Android from version 2.2 or later.
I still have that "ACTION_INSTALL_TTS_DATA" on my to do list.
Hi, do you know if voice engines like (Ivonna, At&t, iSpeech, DragonMobile, etc) are Phonegap and TTS plugin friendly?
About the ACTION_INSTALL_TTS_DATA, I am a kind of javascript developer with no java skills. I'll wait and maybe, if I invite you a coofee :)
Thanks again Simon!
Hi Simon,
very nice plugin of you!
But i have one question:
if i try let the app read a paragraph with the id="asd" and call this with
function speak() {
window.plugins.tts.speak(document.getElementById('muh').value);
}
i only get null, the same happens, if i try to let javascript write the text before the speak function.
Can you tell me how to let the plugin some text, would be very awesome.
Thanks.
Regards,
Alex
@Alex 921
Can you get the plugin to read any static text? If you can't then the plugin is not installed correctly. My next step would be for you to check for problems in "adb logcat".
@nicoprofe
I put it on my to do list for when I updated the TTS plugin.
Hi Simon.... i used your TTs plugin but TTS.java i am getting error only in this line [ mTts = new TextToSpeech(cordova.getActivity().getApplicationContext(), this);] which says cordova cannot be resolved.. i am using phonegap-1.3.0.js and phonegap-1.3.0.jar...
can you please tell me how to correct it....
Hi simon....finally i found out the issue in last post of mine...Now code works fine but when i entered in something in editbox and clicking speak it doesn't speak....can u tell me why....any solution....
@shamsheer
I'm sure your first issue was caused by the fact you were trying to use the updated plugin with an older version of PhoneGap.
As for your second problem, what do you see in "adb logcat"?
Hi @Simon....thanks for your god response....In Logcat i am getting like this "No keyboard for id 0" and "Using default keymap: /system/usr/keychars/qwerty.kcm.bin".....I am not getting idea how to solve it...
Hi Simon.. I am developing Phonegap iOS apps by using Xcode 4.5.2. Can you tell me how the TTS plugin add in the Xcode. Thanks in advance
can we have chances to change the voice setting , this voice not clear
Hi Simon.....Finally i got solution .Now it is working fantastically....super cool awesome plugin. My mistake was that i used old version of TTS plugin.... when i use new version TTS plugin. its works clearly..... thanks for your help and your plugin tutorial......
@Test Apps
You can adjust the speed and pitch of the voice. Look at tts.js for the docs on those methods.
@chakri kasam
I dont' think anyone has posted one for iOS but if you look at this SO answer you'll be able to find out how to begin creating one:
Hi
This plugin works really well, but so far i have only been able to test it on english, My dropdown box does not get populated with languages and i am not sure how to make this appear. Can you please help.
@Hayden Sookchand
That's weird. You can go to the Play store and install more languages though.
Thanks for discussing this awesome plug-in. What do you slim about conversation recognition? Is it possible to boost your plug-in this way? I'm unskilled with local Android operating system API so any . I heard about some tts online that they also have some cool features too.
@Willard Catalan
For speech recognition use this plugin:
Great plugin. One question. I am calling this on a setInterval to speak an array of items in turn. But I need to know if TTS is still speaking before I interrupt it with the next item. Is it possible to know when the TTS engine has finished speaking?
@John Blessing
Well the speak method has a success callback which will be invoked when the text is done being spoken. What you should be doing instead of setTimeout is to push all your text into an array then each call of the success callback pops text off the array and reads it to the user.
Sorry to bug you. I have just downloade v 2.2 of the tts plugin.
I am now calling the speak and specifying a success/fail function, but I never see the success function entered. My code:
...
window.plugins.tts.speak("This is a long piece of text ",ttsSuccess,ttsFail);
...
function ttsSuccess(result) {
//called after speech has finished
console.log("ttsSuccess " + result);
}
function ttsFail(result) {
console.log("Error = " + result);
}
Any help would be appreciated.
@John Blessing
Yeah, I see that too. It is an open issue on the plugin. Working on it when I can.
Hello Simon,
is there a way to create a stop during the reading without shutting down the whole tts service?
Thanks for your answer.
Regards
Ales
@Alex 921
Call stop to stop any playing tts or interrupt to stop what is currently being played and replace it with new text.
@John Blessing
Bug has been fixed and code pushed to GitHub.
Fantastic. Works great now. Thanks.
Actually...
Success function is not reached when speaking silence
window.plugins.tts.speak(10000ttsSuccess); works fine, but
window.plugins.tts.silence(10000, ttsSuccess); never sees triggers ttsSuccess
Thanks in advance.
@John Blessing
Can you raise an issue on the TTS github repo for me? That way I won't forget to fix it.
Hi Simon,
I am running Cordova 1.5. I tried to use the version 2.2 TTS plugin as per following the instructions at to add the plugin to my project.
Eclipse is reporting many errors in TTS.java:
- CallbackContext cannot be resolved to a type
- cordova cannot be resolved
- CordovaPlugin cannot be resolved to a type
- The import org.apache.cordova.api.CallbackContext cannot be resolved
- The import org.apache.cordova.api.CordovaPlugin cannot be resolved
Is TTS version 2.2 supposed to be compatible with Cordova 1.5? If not, which version of TTS should I use with Cordova 1.5.
Thanks,
David
@David Wong
You are using the wrong version. The 2.2.0 directory includes a Plugin that works from Cordova 2.2.0 and higher. If you are using 1.5.0 then try the 1.8.1 directory.
Thanks Simon for your fast response. I got errors with TTS 1.8.1 and Cordova 1.5. I will try the other lineup of TTS and Cordova. I appreciate your help with the information.
Hi Simon,
I tried Cordova 2.0.0 with TTS 2.0 as well as Cordova 2.2.0 with TTS 2.2. In both cases, window.plugins.tts.speak("The text to speech service is ready") produces no speech on my Android 2.2.1 smartphone. getLanguage() succeeds with a null string in the callback result. isLanguageAvailable() for "en_US" succeeds and a second call to getLanguage() succeeds with a null string in the callback result.
I am running the code as in your example. The window.plugins.tts.startup() succeeds with two invokations of the callback with result 1 and 2 (TTS.STARTED).
Do you have any thoughts on what might be wrong?
Thanks,
David
Hi Simon,
I posted my last message a little too soon! The PhoneGap TTS plugin works after I installed Android's speech to text data (SpeechSynthesis Data Installer) from Google Play.
David
Hi Simon,
I am using Cordova 2.2.0 with TTS 2.2. When I launch my app, my index.html calls window.plugins.tts.startup() after device is ready - I get TTS.STARTED - window.plugins.tts.speak() works. When I transition from my base index.html to a second help.html page, and transition back to index.html, I do the same window.plugins.tts.startup() after device is ready - now I do not get TTS.STARTED - fortunately window.plugins.tts.speak() continues to work.
When I transition from index.html to help.html, I added a call to window.plugins.tts.shutdown() which succeeds. Upon transition from help.html to index.html, I no longer get TTS.STARTED and window.plugins.tts.speak() no longer utters any sound.
Is there a need to call shutdown() in the way I've described above? Are resources being allocated but not released by not calling shutdown()? However, if I call shutdown() on transition from index.html to help.html, speak() no longer works after re-entry to index.html.
Thanks,
David
@David Wong
Since you are going to continue using the TTS service when you get back to index.html I wouldn't bother calling shutdown.
Thanks for your information Simon. The TTS plugin is working very well in a new app I am developing. Since I target my apps to Android as well as iOS, I wish for a similar Cordova plugin for the latter. Are you aware of any good options for iOS?
Thanks for great plugin,This plugin works with SpeechSynthesis Data Installer from android market
Is there any way to do it without the use of SpeechSynthesis Data Installer..
@rajan singh
Yeah, if your device does not come pre-installed with Google TTS then you will need to install a 3rd party TTS.
@David Wong
Apple does not provide an SDK for TTS. You'd have to use a third party solution and wrap it in a plugin like:
Thanks for the lead, Simon. I'll investigate.
For version 3.0 of cordova , the package name seems to have changed.
from
import org.apache.cordova.api.CallbackContext;
import org.apache.cordova.api.CordovaPlugin;
import org.apache.cordova.api.PluginResult;
to
import org.apache.cordova.*;
after making this change I am able to compile with cordova 3.0
@Bhavesh Patel
Use this repo for 3.0.0 support.
I will be doing a blog post on it soon.
Hello ...
Is there someone that might have a link to a simple test app (in phonegap) that uses the above tts code ...I could download and test on my android Device ...
Thank you in advance ...
Michael T.
@Michael
This plugin is very out of date. I've switched over to basing things off of the W3C Speech Synthesis spec. Here is a demo repo:
How to get end of speak event.
I want to speak next string after end of currently playing one.
is there any event which give me end event?
End of speak event for auto play next item after complete current one
this plugin can provide end event for current playing item is end to speak. | http://simonmacdonald.blogspot.com/2011/05/text-to-speech-plugin-for-phonegap.html | CC-MAIN-2019-09 | refinedweb | 3,119 | 77.33 |
twisted.python.compatmodule documentation
twisted.pythonView Source
Compatibility module to provide backwards compatibility for useful Python features.
This is mainly for use of internal Twisted code. We encourage you to use the latest version of Python directly from your code, if possible.
Returns whether or not we should enable the new-style conversion of
old-style classes. It inspects the environment for
TWISTED_NEWSTYLE, accepting an empty string,
no,
false,
False, and
0 as falsey values
and everything else as a truthy value.
Returns whether or not we should enable the new-style conversion of
old-style classes. It inspects the environment for
TWISTED_NEWSTYLE, accepting an empty string,
no,
false,
False, and
0 as falsey values
and everything else as a truthy value.
In Python 3,
inspect.currentframe
does not take a stack-level argument. Restore that functionality from
Python 2 so we don't have to re-implement the
f_back-walking
loop in places where it's called.
Emulator of
socket.inet_pton.
Execute a Python script in the given namespaces..
Compare two objects.
Returns a negative number if
a < b, zero if they are
equal, and a positive number if
a > b.
Class decorator that ensures support for the special
__cmp__ method.
On Python 2 this does nothing.
On Python 3,
_
unicode to the native
unicode)
and convert it to the same type as the input string.
constantString should contain only characters from ASCII; to
ensure this, it will be encoded or decoded regardless..
Coerce ASCII-only byte strings into unicode for Python 2.
In Python 2
unicode(b'bytes') returns a unicode string
'bytes'. In Python 3, the equivalent
str(b'bytes') will return
"b'bytes'"
instead. This function mimics the behavior for Python 2. It will decode the
byte string as ASCII. In Python 3 it simply raises a
TypeError
when passing a byte string. Unicode strings are returned as-is. | http://twistedmatrix.com/documents/current/api/twisted.python.compat.html | CC-MAIN-2017-26 | refinedweb | 314 | 60.01 |
Exception Handling - Java vs .NET
When I began working with .NET, one thing that struck me as odd was that a class which threw a specific error was not required to specify that it threw it, nor were enclosing classes required to catch it. This is different to me from the Java world where if you are using, say, an IO class, you have to be able to catch the IOException it could throw.
To me this means that when you are working on a large project with .NET you can't count on your exceptions being caught or handled at all.
I was thinking to myself this morning that perhaps it wasn't so odd, and was wondering what you JOS'ers thought about the difference in exception handling.
CF
Tuesday, July 6, 2004
Exceptions propogate to WinMain() which is just hidden by the VB or C# equivalents.
int WinAPI WinMain(HINSTANCE..)
{
HandledWinMain(hInstance...);
}
int WINAPI HandledWinMain(HINSTANCE)
{
try {
while (GetMessage())
{
}
}
catch(...)
{
}
}
I'm a jackass
Tuesday, July 6, 2004
There's a big discussion of why this is so somewhere on the Microsoft website, among their design of C# interviews.
What it boils down to is this:
Throw specifiers (in the opinion of C#'s designers) impose an unreasonable overhead in terms of extending existing code and an unreasonable dependency of code on what it calls and what inherits from it.
I have to say I found their arguments compelling, but then I have no real experience of throw specifiers. Although they exist in C++ their implementation is such that they are practically useless if you ever interface with any libraries.
Mr Jack
Tuesday, July 6, 2004
Actually, if you were to be pedantic, Java also has exceptions that don't need to be caught, or unchecked exceptions (see RunTimeException)
Making all exceptions unchecked was a conscious decision by Anders Hejlsberg, and he mentions his justifications in the following interview:
I don't necessarily agree with his reasoning but I understand & respect his thoughts on the matter. BTW, Bruce Eckel, the interviewer and author of Thinking in Java, is also against checked exceptions as well.
SC
Tuesday, July 6, 2004
BTW, I program in both Java and C#. However, I see the argument between checked and unchecked exceptions being reminiscent of Joel's argument against Custom Fields in FogBugz.
Ultimately, it's a philosophical decision on whether the language should insulate you against bad habits.
SC: Thanks for the article. It was very interesting, though I agree with you that some of his points weren't good arguments. Saying that checked exceptions are bad because most programmers are lazy and can't be bothered with catching them doesn't seem like a good argument IMHO.
He also seems to contradict himself a bit - he says to make things as simple as possible, but then talks about creating a class that has 40 checked exceptions because of all the various classes it inherits from.
It seems to me that if checked exceptions were created properly - using an inheritance chain - then handling them shouldn't be as difficult in your calling code. For example, if I am inheriting from a class which can throw a File not Found, a File Read Only, and things like Array Out of Bounds, etc, I am violating design patterns in which you should only do one thing in your classes.
Perhaps you are right that it is a philosophocal question, but it just seems like you can write better code if you know what in the world the classes above you are going to do. Otherwise you end up just doing a try/catch Exception which defeats the whole purpose of throwing well-defined exceptions.
CF
Tuesday, July 6, 2004
Speaking of this, lately I've been using FxCop to validate my .NET code per Microsoft standards, and it contantly bitches about my use of
try
{
// do something to get a value, for instance
}
catch (Exception)
{
// set the value to the default
}
I do this ultimately because I have no clue what specialized exception this particular class throws, and I don't really care to do trial and error just to learn it. Is there any documention or method to know what normal exceptions may result from an operation?
.
Tuesday, July 6, 2004
only approach I've seen that works here is decompiling (google for anakrino), although that can be painful if the codepath goes through many different methods/classes.
schmoe
Tuesday, July 6, 2004
I agree with . and schmoe... It'd be *really* nice if VS.NET intellisense had some way of showing you what exceptions a method call could throw.
I did find it a pain to declare all my exceptions w/ "throws" in Java, but I'm not sure the .NET way is any better since it just leaves me digging through sparse documentation.
In the end, I also end up resorting to catch(Exception ex) style quite a bit. In certain situations, I think it's perfectly appropriate to insulate the users of your code from its inner workings. That's what OO is all about, right?
Of course, if you're swallowing exceptions that you didn't really know could/would occur, that can cause strange behavior...but sometimes it's enough just to know the operation failed, log the message & stack trace, and get on with life without some ugly melt down right in the user's face. None of which requires knowing what specific subclass of Exception you are catching.
Perhaps what we really need in .NET is a "throws" keyword and a project/assembly-level compiler option specifying whether or not to enforce it. Or would that be giving us lazy programmers too much choice? ;-)
Joe
Tuesday, July 6, 2004
"Perhaps what we really need in .NET is a "throws" keyword and a project/assembly-level compiler option specifying whether or not to enforce it. "
That is exactly what I would look for that would make me happy. Unfortunately it would probably have to be enforced all the way up the stack, and hence, since Microsoft themselves didn't use it, would cause most everything to break. But I wouldn't mind having a choice in the matter, kind of like compiling with the Strict option.
Sure, except that (as the interview with Anders says) a solution like this would break because of polymorphism and versioning. What exceptions does a virtual method throw? When compiling with "strict", either the virtual method throws all possible exceptions, or else you could be missing exceptions. And what happens if you always compile with "strict", but compile a newer version that throws more exceptions? What happens to the callers?
Unfortunately, the split really needs to be there - in Java, the checked exceptions thrown by a method are a part of its signature, so a subclass or interface implementor can't throw new exceptions. Either you live with this limitation or you don't - either one has its own costs.).
If you add exceptions to the signature, the callers must be updated to handle them. Like any part of the method signature of a public-facing API, it should be carefully considered and only modified when absolutely necessary.
Also, just because you compile your assembly with "Option Strict Exceptions" doesn't mean that the users of your API also need to compile their apps with it.
Furthermore, the strict exception setting should be able to have a warning level attached to it, so that you have a choice of whether to generate a full error or just a warning at compile time.
Oh, one more thing...I think the story for updating an existing API is actually better with this proposed extension. If you change a method today, and it now throws new exceptions, you are relying on your consumers to read your updated docs (which isn't very likely) and then update their code. If they don't read the docs and handle the new exception, they may get seemingly unexplained errors crashing the program when they drop in your new .dll, even though everything compiles fine.
At least with "Strict Exceptions" turned on, there is a compile-time warning or error raised. Consumers still have the same choices - catch the new exception, or let it rise to the top of the stack if it happens to occur (by turning Strict Exceptions down or off for a time in the interest of getting something out the door on a deadline). But at least you can be reasonably assured that they are aware of it.
The problem with virtual is that callers may have been compiled against my base class. If the base class' definition of a method throws different exceptions than my override of that method, callers' code will not be warned about the correct exceptions. They'll assume that only FooException can be thrown, since that's what's declared by the base class, but a subclass can throw BarException, which is not a subclass of FooException.
)."
That doesn't work, clearly, because the code that's compiled against the base class (or interface) doesn't ever have any contact with the derived class. It doesn't know the derived class exists (in fact, it might not yet exist when the client code is written). Since this is support to be a compile-time thing and not a runtime thing, it's impossible to allow.
Therefore, a virtual method cannot extend the throws clause.
Brad Wilson (dotnetguy.techieswithcats.com)
Tuesday, July 6, 2004
Ahh, of course, virtual methods can't change the signature, that makes sense, sorry. I still think in general it's a good idea though.
There is one way to change a base's method signature though -- method hiding ('new' in C#). Clearly it's very different than overriding the method and doesn't present the same kind of flexibility.
But since we're talking about an add-on that is completely optional, I wonder what kind of viability it would have given the restriction that you couldn't change the throws clause of a base method?
If nothing else, you could create a custom exception type and declare it in the throws clause of your base method...then any virtual methods would just have to wrap their exceptions appropriately. Of course this isn't ideal either, but it seems workable.
That smells like nothing different than saying "throws(Exception)", if you ask me.
"If nothing else, you could create a custom exception type and declare it in the throws clause of your base method...then any virtual methods would just have to wrap their exceptions appropriately. Of course this isn't ideal either, but it seems workable."
Yep, Java does this (InvocationTargetException, SAXException, etc.), as do several projects I've worked on. Generally not a bad idea - the idea here is that this exception-wrapping becomes part of the interface contract. The (minor) problem here is that the interface contract is basically dictating that the callers shouldn't care about which exceptions get thrown. Of course, if they do care, they can catch the wrapped exception, unwrap it, and throw it to themselves - I've never seen this actually be useful, but it's possible. And at least it allows for some documentation. :)
(Of course, in Java, the method in question could still throw a non-wrapped unchecked exception. Convention holds that even unchecked exceptions should be wrapped in this case, but this isn't enforced.)
I spoke recently with someone who looks closely at pre-release versions of Java. Apparently the situation is that checked exceptions weren't even a serious issue, until someone pushed hard for it, and in testing it helped the team find some compiler bugs. So on the strength of one test case, it was thrown in at the last minute.
Interpret that how you wish, you could probably find very old versions and check if you're inclined.
The problem with checked exceptions for me is people don't talk about handling errors. They talk about how to handle the exceptions machinery. If it really was a virtually untested feature, I don't think it should've been in. (Many things could've found bugs in prerelease code.)
In practice, it's not clear to me that the sky will fall as a result of C#/Ruby/Python/lisp code, where exceptions are unchecked. In real Java code you will find "catch (Exception e) {e.printStackTrace();}".
Also, how else do you write many kinds of reusable code, except by generalizing methods to throw Exception, instead of some specific exception? Like Java Generic Library's old map(). (Keeping in mind the JGL hadn't been obsoleted by Collections, like the PR said.)
Tayssir John Gabbour
Tuesday, July 6, 2004
Brad, it is a step above "throws Exception" in that by saying "throws MyExceptionType" you need to explicitly wrap any exception you intend to throw. This is different because it prevents unexpected exceptions from lazily propagating up the stack. You might also define multiple exception types to categorize the exceptions that various implementations may throw in the context of the common operation being performed.
Of course, there is nothing stopping anyone from writing a big try{...} catch(Exception ex) {throw new MyExType(ex);} clause around their subclass's virtual method and wrapping any-and-all exceptions, but that would just be bad style, inappropriate use of the tools, and a violation of the inheritance contract.
schmoe -- there's an example in the .NET world of that too. Web Service calls always wrap any exceptions thrown on the server inside a SoapException for delivery to the client.
I don't really think it means that the contract is saying you shouldn't care which exception gets thrown though. What the contract says is that you should do this:
try
{
...
}
catch(MyExType ex)
{
if(ex.InnerException is MySpecificExType)
{ ... }
else if(ex.InnerException is MyOtherExType)
{ ... }
}
Of course that doesn't work on SoapExceptions because the contents are just plain text rather than wrapped Exception objects, but that's a specific case dealing with passing exceptions over a platform-independent remoting boundary :)
[ documentation is very poor, and the class designers have in many cases not followed their own guidelines. For example, Sysytem.Web.Mail, which is basically wrapping CDO, has no documented exceptions. If you use Roeder's .NET reflector (well worth having), classes in the namespace are quite clearly throwing HttpExceptions. As an example of extremely poor design, look at the MailAttachment constructor. It includes a call to a private method:
private void VerifyFile()
{
try
{
File.Open(this._filename, FileMode.Open, FileAccess.Read, FileShare.Read).Close();
}
catch
{
throw new HttpException(HttpRuntime.FormatResourceString("Bad_attachment", this._filename));
}
}
This is absolutely the pits in coding design:
1. All exceptions are caught. Any class designer who thinks they know how to handle and recover from ALL exceptions,
including non-CLS complient exceptions (which is what this catch block does) is living in cloud-cuckoo land.
2. Any original exception raised by File.Open which provides information on why the call didn't succeed is lost. It should be either wrapped or rethrown so the class library user can decide how or if they want to recover from this exception.
3. The thrown HttpException exception is not even documented.
el
Wednesday, July 7, 2004
This was also one of my main beefs with my (admittedly shallow) .NET experience. I want to know what exceptions I can expect without having to resort to a decompiler.
Just me (Sir to you)
Wednesday, July 7, 2004
Wow, I hadn't noticed that about .NET, that all exceptions are unchecked. That's absolute crap, reducing C# to VB with squigglies.
Oh well... another language for the dustbin of mediocrity.
Bob
Wednesday, July 7, 2004
Look, the Java trolls are getting agitated. Microsoft must be doing something right!
Chris Nahr
Thursday, July 8, 2004
Recent Topics
Fog Creek Home | https://discuss.fogcreek.com/joelonsoftware5/default.asp?cmd=show&ixPost=159599&ixReplies=24 | CC-MAIN-2018-17 | refinedweb | 2,665 | 62.48 |
A11yTests is an extension to
XCTestCase that adds tests for common accessibility issues that can be run as part of an XCUI Test suite.
Tests can either be run separately or integrated into existing XCUI Tests.
Good accessibility is not about ticking boxes and conforming to regulations and guidelines, but about how your app is experienced. You will only ever know if your app is actually accessible by letting real people use it. Consider these tests as hints for where you might be able to do better, and use them to detect regressions.
Failures for these tests should be seen as warnings for further investigation, not strict failures. As such i'd recommend always having
continueAfterFailure = true set.
add
import A11yUITests to the top of your test file.
Tests can be run individually or in suites.
func test_allTests() { XCUIApplication().launch() a11yCheckAllOnScreen() }
To specify elements and tests use
a11y(tests: [A11yTests], on elements: [XCUIElement]) passing an array of tests to run and an array of elements to run them on. To run all interactive element tests on all buttons:
func test_buttons() { let buttons = XCUIApplication().buttons.allElementsBoundByIndex a11y(tests: a11yTestSuiteInteractive, on: buttons) }
To run a single test on a single element pass arrays with the test and element. To check if a button has a valid accessibility label:
func test_individualTest_individualButton() { let button = XCUIApplication().buttons["My Button"] a11y(tests: [.buttonLabel], on: [button]) }
A11yUITests contains 4 pre-built test suites with tests suitable for different elements.
a11yTestSuiteAll Runs all tests.
a11yTestSuiteImages Runs tests suitable for images.
a11yTestSuiteInteractive runs tests suitable for interactive elements.
a11yTestSuiteLabels runs tests suitable for static text elements.
Alternatively you can create an array of
A11yTests enum values for the tests you want to run.
minimumSize or checks an element is at least 18px x 18px.
Note: 18px is arbitrary.
minimumInteractiveSize checks tappable elements are a minimum of 44px x 44px.
This satisfies WCAG 2.1 Success Criteria 2.5.5 Target Size Level AAA
Note: Many of Apple's controls fail this requirement. For this reason, when running a suite of tests with
minimumInteractiveSize only buttons and cells are checked. This may still result in some failures for
UITabBarButtons for example.
For full compliance, you should run
a11yCheckValidSizeFor(interactiveElement: XCUIElement) on any element that your user might interact with, eg. sliders, steppers, switches, segmented controls. But you will need to make your own subclass as Apple's are not strictly adherent to WCAG.
labelPresence checks the element has an accessibility label that is a minimum of 2 characters long.
Pass a
minMeaningfulLength argument to
a11yCheckValidLabelFor(element: XCUIElement, minMeaningfulLength: Int ) to change the minimum length.
This counts towards WCAG 2.1 Guideline 1.1 Text Alternatives but does not guarantee compliance.
buttonLabel checks labels for interactive elements begin with a capital letter and don't contain a period or the word button. Checks the label is a minimum of 2 characters long.
Pass a
minMeaningfulLength argument to
a11yCheckValidLabelFor(interactiveElement: XCUIElement, minMeaningfulLength: Int ) to change the minimum length.
This follows Apple's guidance for writing accessibility labels.
Note: This test is not localised.
imageLabel checks accessible images don't contain the words image, picture, graphic, or icon, and checks that the label isn't reusing the image filename. Checks the label is a minimum of 2 characters long.
Pass a
minMeaningfulLength argument to
a11yCheckValidLabelFor(image: XCUIElement, minMeaningfulLength: Int ) to change the minimum length.
This follows Apple's guidelines for writing accessibility labels. Care should be given when deciding whether to make images accessible to avoid creating unnecessary noise.
Note: This test is not localised.
labelLength checks accessibility labels are <= 40 characters.
This follows Apple's guidelines for writing accessibility labels.
Ideally, labels should be as short as possible while retaining meaning. If you feel your element needs more context consider adding an accessibility hint.
header checks the screen has at least one text element with a header trait.
Headers are used by VoiceOver users to orientate and quickly navigate content.
buttonTrait checks that a button element has the Button or Link trait applied.
imageTrait checks that an image element has the Image trait applied.
disabled checks that elements aren't disabled.
Disabled elements can be confusing if it is not clear why the element is disabled. Ideally keep the element enabled and clearly message if your app is not ready to process the action.
duplicated checks all elements provided for duplication of accessibility labels.
Duplicated accessibility labels are not an accessibility failure - but can make your screen confusing to navigate with VoiceOver, and make Voice Control fail. Ideally you should avoid duplication if possible.
To run the example project, clone the repo, and run
pod install from the Example directory first.
A11yUITests_ExampleUITests.swift contains example tests that show a fail for each test above.
iOS 11
Swift 5
A11yUITests is available through CocoaPods. To install add the pod to your target's test target in your podfile. eg
target 'My_Application' do target 'My_Application_UITests' do pod 'A11yUITests' end end
value(forUndefinedKey:)method on NSObject to guard against potential crashes if Apple changes their private API in future. Any calls to this function will return
nilafter running any tests which access accessibility traits. This affects your test suite only, not your app.
If two elements of the same type have the same identifier this will cause the tests to crash on iOS 13+. eg, two buttons both labeled 'Next'.
Rob Whitaker, rw@rwapp.co.uk
A11yUITests is available under the MIT license. See the LICENSE file for more info.
Swiftpack is being maintained by Petr Pavlik | @ptrpavlik | @swiftpackco | API | https://swiftpack.co/package/rwapp/A11yUITests | CC-MAIN-2021-21 | refinedweb | 925 | 50.94 |
NumLock switched off when launching Notepad++
- Theo Fondse
Every time I launch Notepad++ on my Windows 10 laptop, NumLock is Switched OFF.
I have used the Computer\HKEY_USERS.DEFAULT\Control Panel\Keyboard\InitialKeyboardIndicators=2 registry hack to switch Numlock ON when the machine boots, and it will stay on only until the point where I start Notepad++.
This is driving me up the walls!
Is there a way to prevent the NumLock from getting set to OFF when launching Notepad++?
- Scott Sumner
There is probably a better solution, but until you find it you could script turning Num Lock back on after Notepad++ starts up with the Pythonscript plugin.
Put the following code in startup.py and make sure Pythonscript’s “Initialisation” option is set to ATSTARTUP (under the Configuration menu option for the Pythonscript plugin). Note that Pythonscript is only 32-bit right now, so it will only work with 32-bit Notepad++.
I really hope you find a better solution and don’t have to resort to this !! :-D
If this (or ANY posting on the Notepad++ Community site) is useful, don’t reply with a “thanks”, simply up-vote ( click the
^in the
^ 0 varea on the right ).
import ctypes def turn_on_numlock(): VK_NUMLOCK = 0x90 KEYEVENTF_EXTENDEDKEY = 0x01 KEYEVENTF_KEYUP = 0x02 GetKeyState = ctypes.windll.user32.GetKeyState keybd_event = ctypes.windll.user32.keybd_event if not GetKeyState(VK_NUMLOCK): keybd_event(VK_NUMLOCK, 0x45, KEYEVENTF_EXTENDEDKEY | 0, 0) keybd_event(VK_NUMLOCK, 0x45, KEYEVENTF_EXTENDEDKEY | KEYEVENTF_KEYUP, 0) turn_on_numlock() | https://notepad-plus-plus.org/community/topic/14476/numlock-switched-off-when-launching-notepad | CC-MAIN-2017-43 | refinedweb | 238 | 64.3 |
See also: IRC log
<shadi>
saz: group is a bit behind the schedule
... EARL implementations are required for Q3 1009
<shadi>
ci: section 1.1: namespace decision is pending,
could be
... section 2.3.1.1.1: XMLNamespace class based on Namespaces 1.0 or 1.1?
... section 2.3.3: make charNumber in LineCharPointer optional, because some tools don't provide only line numbers
... section 2.3.4 what to do with HTMLPointer?
... what to do with conformance requrements? harmonize with our other publications
... what to do with tables in appendix A?
... otherwise document seems quite stable
saz: namespace will probably be .../2009/...
ms: some specs have conformance sections in
appendix, some within the content
... ask for comments when documents are published
saz: ms, please summarize your observations
about conformance in a mail
... group should think about the use cases section
... section 2.3.1.1.1: no objections to using namespaces 1.1 (IRI instead of just URI)
RESOLUTION: use "Namespaces 1.1" for XMLNamespace class
section 2.3.3 LineCharPointer Class ...
RESOLUTION: make charNumber optional
section 2.3.4 HTMLPointer Class ...
saz: didn't get any information about fuzzy pointer idea
tables in appendix A
saz: like them for quickly looking up things
... requred/optional is not RDF terms but to be seen as conforming to this specification; make this clear in some additional text
... group, please read pointers document and send comments until next week
saz: next week's meeting tentative
... next sure meeting is Jan 21st 2009 | http://www.w3.org/2009/01/07-er-minutes.html | CC-MAIN-2016-40 | refinedweb | 252 | 61.83 |
panda3d.core.HTTPEntityTag¶
from panda3d.core import HTTPEntityTag
- class
HTTPEntityTag¶
A container for an “entity tag” from an HTTP server. This is used to identify a particular version of a document or resource, particularly useful for verifying caches.
Inheritance diagram
__init__(copy: HTTPEntityTag) → None
__init__(weak: bool, tag: str) → None
This constructor accepts an explicit weak flag and a literal (not quoted) tag string.
__init__(text: str) → None
This constructor accepts a string as formatted from an HTTP server (e.g. the tag is quoted, with an optional W/ prefix.)
isWeak() → bool¶
Returns true if the entity tag is marked as “weak”. A consistent weak entity tag does not guarantee that its resource has not changed in any way, but it does promise that the resource has not changed in any semantically meaningful way.
getString() → str¶
Returns the entity tag formatted for sending to an HTTP server (the tag is quoted, with a conditional W prefix).
strongEquiv(other: HTTPEntityTag) → bool¶
Returns true if the two tags have “strong” equivalence: they are the same tag, and both are “strong”.
weakEquiv(other: HTTPEntityTag) → bool¶
Returns true if the two tags have “weak” equivalence: they are the same tag, and one or both may be “weak”.
compareTo(other: HTTPEntityTag) → int¶
Returns a number less than zero if this HTTPEntityTag sorts before the other one, greater than zero if it sorts after, or zero if they are equivalent. | https://docs.panda3d.org/1.10/python/reference/panda3d.core.HTTPEntityTag | CC-MAIN-2020-05 | refinedweb | 234 | 52.49 |
.
The Exchange 2003 Migration Tool Kit.
Coexistence of Exchange 2013 and earlier versions of Exchange Server: the latest CU or Service Pack (whichever is newer) for Exchange 2013 to support hybrid functionality with Office 365.
For a complete listing of Exchange Server and Office 365 for enterprises tenant hybrid deployment compatibility, see the requirements listed in the following table for Exchange 2013-based and Exchange 2010-based hybrid deployments.
Notes:
1 – Blocked in Exchange 2013 setup
2 – Tenant upgrade notification provided in Exchange Management Console
3 – Requires at least one on-premises Exchange 2010 SP2 server
4 – Requires at least one on-premises Exchange 2010 SP3 server
5 – Requires at least one on-premises Exchange 2013 CU1 or greater (recommended) server
Office 365 for Enterprises
An Office 365 for enterprises tenant and administrator account and user licenses must be available on the tenant service to configure a hybrid deployment. The Office 365 tenant version must be 15.0.000.0 or greater to configure a hybrid deployment with Exchange 2013. Additionally, your Office 365 tenant status must not be transitioning between service versions. For a complete summary, see the preceding table. To verify your Office 365 tenant version and status, see Verify Office 365 tenant version and status here.
For Microsoft Premier customers who are interested in a more hands-on assisted approach, Microsoft has a workshop offering (EMRA – Exchange Migration Readiness Assessment). We know that your company relies on Microsoft for reliable and secure business communication and collaboration. To ensure your users experience optimal levels of performance and availability during the migration period, Microsoft Services has created the Migrations Readiness Assessments.
Customized for your business, the Exchange Migration Readiness Assessment will provide an in-depth analysis of Active Directory and current Exchange environment for focus on readiness for a transition Exchange Server 2010 deployment.
A final report will be provided summarizing the key findings as well as key metrics collected from the environment, capturing the state of the current environment and its overall deployment readiness.
Focused on your IT needs, the Exchange Server 2010 Migration Readiness Assessments make up a 4 day on-site engagement that prepares both your data center and IT staff for a successful migration. We created the assessment to meet your specific needs focusing on the following areas:
- Proactive analysis of your on-premises messaging environment.
- Focused and tailored knowledge transfer to enable and prepare your IT staff.
- A detailed report of your current on-premises deployment.
- Key planning metrics to accelerate migration and deployment.
- Documented roadmap to enable worry free migration.
- Optional component to evaluate readiness to support Exchange Online features.
- Accelerate Migration and Maximize Adoption
Microsoft Services delivers a comprehensive assessment of your current environment in preparation for a smooth migration. Exchange Server 2010 Migrations Readiness Assessment significantly reduces your exposure to costly mistakes and delays that can impact your end users’ experience with Microsoft Exchange Server 2010.
For more information about consulting and support offerings from Microsoft, contact your Microsoft Services representative or visit.
References
Understanding
For all of the YouTube fans, there are numerous Exchange MVP and community contributors who have published hundreds of videos with guided walk-through for Exchange 2003 migration scenarios which may be helpful to review:
Premier Field Engineers
Great to see detailed information, and migration advice for people still using Exchange 2003. We offer specialist consulting for email migration projects (), and have migrated many customers off Exchange 2003, so we know there are plenty of instances still out there. Exchange 2003 has provided a reliable email platform for a long time know (thanks to Microsoft) but it is finally time to migrate to a newer platform.
Hey Chris, Good Notes…But why didn’t Microsoft can launch connector tool or transporter tool to get migrated from Exchange 2003 to 2013 Onpremises …i mean like lotus notes connector or quest connector or Blackberry transporter…
Detailed Explanation Chris. Thank You for the efforts.
Exchange 2003 migration toolkit
well done
not that these days aren’t good
but nothing compared to ADC :)
We did the Exchange Server 2003 > Exchange Server 2013, using 3rd party migration tool.
You can find several different 3rd party migration tools in the market today, below is Code Two.
After moving user,s account to O365 most of the users are not able to see archive option in outlook 2013.
@ Exchange Queries
Because there can only be 1 exchange enviroment in a forest. If you want to do a migration to a new exchange with a new forest you can simply route the smtp to the new forest. The only implication you will get is if both forests use the same namespace autodiscover won`t fully work for free/Buzzy.
Best way stays to do a proper migration to 2010 and then do a upgrade to 2013. If you build all in one servers based on building blocks of 4 servers per DAG you can easily do a LCM upgrade to 2013 afther your enviroment has been migrated to 2010.
Thanks for the Great Post!!! One more point to consider do we really need to have 2013 server in place if we are planning for Hybrid from now on can’t we just have a 2010 Sp3server in place and do things.. Just want to confirm is there any future Roadmap from Microsoft where in customer using Hybrid deployment should at least have a 2013 server in place in mere future.. :)
@Chris Lineback – Is it possible to migrate Exchange Server 2003 > Exchange Server 2013 with a 3rd party tool?
What is with the advertisements disguised as complements? I’ve seen your posts all over this forum. The only thing you accomplished in your comment is to get people to turn to your company rather than try to use this excellent information for themselves. It would be great if the comments were moderated to remove all advertising.
As someone who held on to Exchange 5.5 until 2009…good luck to those who are about to embark on the adventure of custom support/no support. After a while, you learn to get by!
Why are people allowed to advertise in their posts? It really takes away from the discussion of services and cheapens the thread. Posts like Rob’s should never have gotten through. | https://blogs.technet.microsoft.com/exchange/2014/03/10/exchange-2003-migration-toolkit/ | CC-MAIN-2017-26 | refinedweb | 1,048 | 51.18 |
Automatic Code Documentation with javadoc
Writing API documentation for a system has to be one of the most unpleasant jobs a developer will ever face. Sure, maintenance programming and debugging are chores, but documentation is the kind of job that could drive you to despair. While others are cutting code, and designing cool systems, you're stuck writing code documentation that people may only glance at, if they ever read it at all. Yet producing high-quality documentation is an important task, particularly if the system is to be maintained in the future by other developers. They'll come into a new system, and without your polished guide to the project they'll be totally baffled.
So if you do need to produce code documentation, it's worth doing it right, and starting early. Leaving documentation till after the system is constructed makes it a nightmare to complete. Ideally, you need to start from day one of coding, and produce daily or weekly updates of the class documentation so that developers working on different parts of the system have easy access to information, such as the various methods and fields each class provides. But manually producing code documentation as frequently as you do a build would be a full-time job. You need a tool to automate the process and cut down some of the laborious and time-consuming tasks associated with producing code documentation. By using an automated tool, you can keep your design documentation, application code, and API documentation in sync. So what tool should you use?
The Solution Is
When it comes to documenting code, the free tool that ships with JDK makes life simple. Most readers will be familiar with the Java API documentation, which covers all the various packages and classes that make up the ever-growing core API. This is a set of beautifully rendered HTML pages. HTML is a convenient and platform-independent way of distributing documentation (either from a Web site or an intranet server, or as an archive in ZIP or TAR format that can be downloaded for offline reading). Individual classes, member variables, and methods are covered in rich detail, and hyperlinks within the documentation make it easy to jump from one section to another. You may be surprised to know that the tool was used to create this documentation. You'll probably be even more surprised to know that you too can produce the same style of documentation for all your code.
The tool works by reading your original Java source files (ending in a .java extension), and producing a set of HTML pages. It works automatically, and will include all of the methods and fields of a class (though by default, anything marked as private is omitted). However, unless you have meaningful method and parameter names, the quality of information produced is fairly limited. It is up to you, as the programmer, to give some assistance and provide comments for all of the important methods and member variables.
What's that? You already have comments in your code. Yes, comments are good, and help those reading the code understand what is going on but to produce documentation using you need to use a special type of comment. People reading your documentation won't want to look inside your source code files to understand how things work and if you distribute the binaries for your system, they may not even have the source code. So a special type of comment is used, one that distinguishes comments within code meant for developers creating and modifying the system from comments meant for developers who are using your classes. These comments, referred to as -style comments, will be used as the text description in the HTML documentation.
Javadoc-style Comments Make Documentation Easy
Javadoc comments give developers good control over how documentation is produced. The comments may be plain text, or include HTML code for finer control over formatting. They start with the following character sequence and end with a . You may even be using comments already but note that comments start with a double asterisk. As a general rule, you should place comments before every class, to offer a brief description of what the purpose of the class is, and before every non-private member variable and method.
In order to understand comments, you need to see how they are written. Let's look at a sample class, MyClass, which has a public member variable, a constructor, and two methods.
public class MyClass { public static final String MESSAGE = "Hello World!"; public MyClass() { // some initialization code would go here } public void printMessage() { System.out.println (MESSAGE); } public void printMessage(String prefix, String postfix) { if (prefix == null) throw new NullPointerException(); if (postfix == null) throw new NullPointerException();
System.out.print ( prefix ); System.out.print ( " " + MESSAGE ); System.out.println ( " " + postfix ); } }
Listing 1. MyClass.java.
When is run on this source file, the following HTML page is produced. The documentation generated by the tool for this class is a good start, but there is no indication of how the constructor works, or the methods, or even the meaning of the MESSAGE member variables. To fill in the blanks, comments need to be added. Examine the comments carefully, and see how they work.
/** * MyClass is a sample class that demonstrates how javadoc HTML pages are * produced. It doesn't really do much, but is a helpful guide to good * documentation generation. */ public class MyClass { /** Message to be displayed */ public static final String MESSAGE = "Hello World!"; /** Creates a MyClass object, and initializes it */ public MyClass() { // some initialization code would go here } /** Prints a message to standard output */ public void printMessage() { System.out.println (MESSAGE); } /** * Prints a message to standard output, preceded and * followed by the specified strings. * * @param prefix String to be printed before message * @param postfix String to be printed after message * @throws NullPointerException Thrown if either prefix or postfix strings are null */ public void printMessage(String prefix, String postfix) { if (prefix == null) throw new NullPointerException(); if (postfix == null) throw new NullPointerException();
System.out.print ( prefix ); System.out.print ( " " + MESSAGE ); System.out.println ( " " + postfix ); } }
Listing 2. MyClass.java with javadoc comments.The results of running again over MyClass.java reveals an impressive difference in the resulting HTML output . If you look closely at the comments, you'll see that there is more information passed to than just text comments. Special parameters can also be passed, to allow tighter control over documentation. For example, in the method, there are two parameter descriptions and an exception description specified, by using the and tags. These illustrate a fraction of the many tags that supports, however. The following table shows list of common javadoc tags supported by Java. Coverage of further tags can be found in Resources below.
Table 1. Javadoc comment tags.
If you want to have good documentation, you need comments for every non-private member variable, method and class. This means that you should adopt the practice of putting in comments as you write code, so that it doesn't become a big job towards the end of the project. In addition, this practice gives you the opportunity to produce documentation part-way through a project, which can assist other developers, and even you. As you work with code, you'll have easy-to-navigate HTML documentation of all the classes in your project.
Generating the Documentation
The easiest part of is producing the HTML pages. It's a fairly fast process, but slows down a little as the number of classes in a project increases. You can do documentation builds as frequently as you require them. Since it is an automated process, there's no need to rewrite documentation as changes to a class are made. Simply generate the documentation again, and the changes will be incorporated.
The tool can be found in your JDK\bin directory, so if you have a path statement set you can run the tool from your source code directory. Simply specify all of the source files (using wildcards if possible) and any optional parameters you require. There are plenty of parameters, and if you use frequently, you'll probably want to familiarize yourself with their features. To start, though, let's look at the command-line parameters to generate the documentation for MyClass.
javadoc -link -nonavbar MyClass.java
You'll notice that I've specified a ' -link ' parameter. This links documentation to an existing set of HTML pages (in this case, the core Java API). Without this parameter, references to external classes (like String in the case of MyClass) won't be hyperlinked. To see the final documentation, click here .
Summary
Automatic generation of code documentation doesn't have to be a long and arduous process. Using a free tool that all developers using JDK will have at their disposal, you can create sophisticated looking HTML pages for all your classes. Not only will they look impressive, they're also handy during development. Javadoc produced documentation is particularly useful if you're working with other developers who are changing and "enhancing" classes by adding new methods and member variables. This way the entire development team can understand any changes.
Resources
JavaDoc Tool Documentation:
JavaDoc Comment Tags: August 31, 2000 | https://www.developer.com/java/ent/article.php/629281/Automatic-Code-Documentation-with-javadoc.htm | CC-MAIN-2020-50 | refinedweb | 1,538 | 54.63 |
On Oct 3, 12:20 am, Paul Pluzhnikov <address@hidden> wrote: > jeremy barrett <address@hidden> writes: > > > int* A::x = new int(5); > > Note that 'A::x' is initialized to a non-constant value. That means > 'gcc' has to initialize it dynamically. In effect, 'gcc' writes a > new function (called static_initialization_and_destruction()), > which is called via exactly the same mechanism as your initAx(). > > The order of static_init...() and initAx() is not specified, > gcc-3.3.3 for Linux/x86 calls initAx() after static_init...(), > while gcc-4.3.0 calls them in reverse order. Thanks for this insight. I discovered (anecdotally) the same thing last night, while trying to reproduce this error on a different platform (gcc-4.1.3 on Linux/x86 at work, gcc-4.0.1 on Mac/x86 at home). ((snip)) > > It should be enlightening for you to compile the following separate > snippets into assembly and understand the result: > > int x = 5; // compile-time constant > > int x; > int *px = &x; // link-time constant > > int foo(); > int x = foo(); // runtime intialization required I'll perform this exercise. Thanks. > > Finally, it is entirely pointless to provide initAx() and initBx(). > If you want to initialize A::x and B::x to some values, just do > so directly: Here's the point. A and B are very generic classes that are included in a number of different libraries I've built. So the static initializers set x to something suitably generic for use in standard cases. In order keep data appropriately hidden (and save myself some bookkeeping), I add this constructor function to various of the shared libraries to initialize x to something meaningful (conceptual zero) for the given application/environment. Any (portable) ideas about how I can accomplish my goal? I understand that __attribute__ ((constructor)) is not portable, but is there at least a method that will work with the generic condition: #ifdef __GNUC__ //some brilliant idea #endif ? Thanks! jeremy | http://lists.gnu.org/archive/html/help-gplusplus/2008-10/msg00002.html | CC-MAIN-2017-17 | refinedweb | 318 | 56.05 |
When.
Difference between abstract class and interface in Java
1) Interface in Java can only contains declaration. You can not declare any concrete methods inside interface. On the other hand abstract class may contain both abstract and concrete methods, which makes abstract class an ideal place to provide common or default functionality. I suggest reading my post 10 things to know about interface in Java to know more about interfaces, particularly in Java programming language.
2) Java interface can extend multiple interface also Java class can implement multiple interfaces, Which means interface can provide more Polymorphism support than abstract class . By extending abstract class, a class can only participate in one Type hierarchy but by using interface it can be part of multiple type hierarchies. E.g. a class can be Runnable and Displayable at same time. One example I can remember of this is writing GUI application in J2ME, where class extends Canvas and implements CommandListener to provide both graphic and event-handling functionality..
3) In order to implement interface in Java, until your class is abstract, you need to provide implementation of all methods, which is very painful. On the other hand abstract class may help you in this case by providing default implementation. Because of this reason, I prefer to have minimum methods in interface, starting from just one, I don't like idea of marker interface, once annotation is introduced in Java 5. If you look JDK or any framework like Spring, which I does to understand OOPS and design patter better, you will find that most of interface contains only one or two methods e.g. Runnable, Callable, ActionListener etc.
I haven't included all syntactical difference between abstract class and interface in Java here, because focus here to learn when to use abstract class and interface and choosing one over other. Nevertheless you can see difference between interface and abstract class to find all those syntactical differences.
When to use interface and abstract class in Java
As I said earlier,. Also familiarity with coupling and cohesion is important. You at least should know that effort of designing should lead to reduce coupling and increased cohesion, ease of maintenance etc. In this part, we will see some scenarios, guidelines, rules which can help you to decide when to use abstract class and interface in Java.
1) In Java particularly, decision between choosing Abstract class and interface may influence by the fact that multiple inheritance is not supported in Java. One class can only extend another class in Java. If you choose abstract class over interface than you lost your chance to extend another class, while at the same time you can implement multiple interfaces to show that you have multiple capability. One of the common example, in favor of interface over abstract class is Thread vs Runnable case. If you want to execute a task and need run() method it's better to implement Runnable interface than extending Thread class.
2) Let's see another case where an abstract class suits better than interface. Since abstract class can include concrete methods, it’s great for maintenance point of view, particularly when your base class is evolving and keep changing. If you need a functionality across all your implementation e.g. a common method, than, you need to change every single implementation to include that change if you have chosen interface to describe your base class. Abstract class comes handy in this case because you can just define new functionality in abstract super class and every sub class will automatically gets it. In short, abstract class are great in terms of evolving functionality. If you are using interface, you need to exercise extra care while defining contracts because its not easy to change them once published.
3) Interface in Java is great for defining Types. Programming for interfaces than implementation is also one of the useful Object oriented design principle which suggests benefit of using interface as argument to function, return type etc.
4) One more general rule of when to use abstract class and interface is to find out whether a certain class will form a IS-A hierarchy or CAN-DO-THIS hierarchy. If you know that you will be creating classes e.g. Circle, Square than it's better to create an abstract class Shape which can have area() and perimeter() as abstract method, rather than defining Shape as interface in Java. On the other hand if you are going to create classes which can do thinks like, can fly, you can use interface Flyable instead of abstract class.
5) Interface generally define capability e.g. Runnable can run(), Callable can call(), Displayable can display(). So if you need to define capability, consider using interface. Since a class can have multiple capabilities i.e. a class can be Runnable as well as Displayable at same time. As discussed in first point, Since java does not allow multiple inheritance at class level, only way to provide multiple capability is via interfaces.
6) Let's see another example of where to use Abstract class and Interface in Java, which is related to earlier point. Suppose you have lot of classes to model which are birds, which can fly, than creating a base abstract class as Bird would be appropriate but if you have to model other things along with Birds, which can fly e.g. Airplanes, Balloons or Kites than it's better to create interface Flyable to represent flying functionality. In conclusion, if you need to provide a functionality which is used by same type of class than use Abstract class and if functionality can be used by completely unrelated classes than use interface.
7) Another interesting use of Abstract class and interface is defining contract using interface and providing skeletal using abstract class. java.util.List from Java collection framework is a good example of this pattern. List is declared as interface and extends Collection and Iterable interface and AbstractList is an abstract class which implements List. AbstractList provides skeletal implementation of List interface. Benefit of using this approach is that it minimize the effort to implement this interface by concrete class e.g. ArrayList or LinkedList. If you don't use skeletal implementation e.g. abstract class and instead decide to implement List interface than not only you need to implement all List methods but also you might be duplicating common code. Abstract class in this case reduce effort to implement interface.
8) Interface also provide more decoupling than abstract class because interface doesn't contain any implementation detail, while abstract class may contain default implementation which may couple them with other class or resource.
9) Using interface also help while implementing Dependency Injection design pattern and makes testing easy. Many mock testing framework utilize this behavior.
That's all on When to use Abstract class and interface in Java. Though discussion here is centered around Java but given concept of abstract class and interface goes beyond Java and also applicable to other Object oriented language, some of the tips are also applicable to other OOPS languages. Key thing to remember here is there definition of abstract class and interface e.g. in C++ and C# it varies a lot like in C++ and Java. Single most difference is multiple inheritance. We have also discussed some key differences between abstract class and interface in Java, which influence decision of choosing abstract class over interface or vice-versa. Last thing to remember is that interface is extremely difficult to evolve, so put extra care while designing interfaces.
PS: Effective Java, which is one of the best book on Java programming also has couple of items on interface and abstract class. Joshua Bloch has advised to prefer interface over abstract class in some scenario, which is worth reading.
Further Learning
Design Pattern Library
SOLID Principles of Object Oriented Design
Head First Design Pattern
15 comments :
When should you use abstract classes and interfaces? Always. Subclassing makes testing and reasoning about your code more difficult.
Think the difference between abstract class and interface may be given point wise. More discussion on their restrictions should be attached.
Great Post Javin. I have faced this question couple of times on interviews, but not able to give proper example, when it comes to choosing interface over abstract class or vice-versa. They always looks similar to me, but I love the way you explain things, If you don't mind, can you take an example and walk though us of using Interface over abstract class, and another example in which abstract class is more suitable than interface? Anyway, learn so many things on object-oriented programming from you, Thank you.
Well detailed notes about legendary diferrence, thx... my point of view, abstract classes are best fit in low levels of system architecture: persistency, network, web, ui to keep common things under control and reduce redundancy. Interfaces are as is above:)
@Oguzhan Acargil, I sort of agree with you. Interface in my opinion can represent highest level of abstraction, and abstract class may be with slightly lower than, given they have some sort o implementation, which as you said good for low levels of system architecture.
My reason of using Abstract class is simple, avoid using interfaces until Java 8 comes with default method implementation. I don't like empty methods and no functional code, which is the case with Java interface.
Hello guys, in state design pattern, should we use an abstract class or interface for modelling State abstraction? this was asked to me in a recent java interview. I said, concrete class because we definitely wants to provide default implementation of State and let's specific state only overrides methods, which make sense on those state. Please le t me know, if my answer is correct?
One should always use interface for declaring Types and always code against interface than any particular implementation. Recently I had to refactor lot of code, which was dependent on class which implements interface than the interface itself. Programmers was type casting into specific class, even if they are calling public methods of interface, this code was fragile and broke once I have added another implementation of interface. So use interface on declaring member variable, local variable, method parameters, return type or anywhere you need a Type. Code written using interface are much more flexible than otherwise. Never type cast object into implementation, instead if you need to, cast into interface or abstract class as shown in following code :
/*
* interface
*/
public interface PaymentGateway{ }
/*
* original implementation
*/
public class VisaPaymentGateway implements PaymentGateway{}
/*
* added later
*/
public class MasterPaymentGateway implements PaymentGateway{}
public class CreditCardProcessor{
public void process(CreditCard cc){
PaymentGateway gateway = (PaymentGateway) getComponent("VisaPaymentGateway"); //Ok
VisaPaymentGateway gateway = (VisaPaymentGateway) getComponent("VisaPaymentGateway"); //bad, could have broken
gateway.process();
}
}
@Javin perfect article , agreed that is most hot question in java world.!! :) ...lem me add few conclusions points to it...
Interface :-
-------------
In general, interfaces should be used to define contracts
(what is to be achieved, not how to achieve it).
Abstract classes should be used for (partial) implementation.
They can be a mean to restrain the way API contracts should be implemented.
@Javin on e more short example I want to add ..
7 down vote
I will give you an example first :
public interface LoginAuth{
public String encryptPassword(String pass);
public void checkDBforUser();
}
Now suppose you have 3 databases in your application then each and every implementation for that db need to define the above 2 methods where encryptPassword() is not dependent on db. Means
public class DBMySQL implements LoginAuth{
// Need to implement both method
}
public class DBOracle implements LoginAuth{
// Need to implement both method
}
public class DBAbc implements LoginAuth{
// Need to implement both method
}
But here in our scenario encryptPassword is not db dependent. So we can use below approach :
public abstract class LoginAuth{
public String encryptPassword(String pass){
//Implement default behavior here
}
public void checkDBforUser();
}
Now in our child class, we need to define only one method which is db dependent.
I tried my best and Hope this will clear your doubts.
Great post, very clear and well explained use scenarios and examples. Thanks a bunch dude!
Fantastic difference !!! A clear distinction between abstract class and interface from design perspective.
This is one of the good OOP Interview question for Java developers, more important for beginners.
"Programming for interfaces than implementation is also one of the useful Object oriented design principle which suggests benefit of using interface as argument to function, return type etc."
I didn't actually get this. Can you please explain a bit on this?
what is the difference between abstract class and interface in java8? | http://javarevisited.blogspot.co.uk/2013/05/difference-between-abstract-class-vs-interface-java-when-prefer-over-design-oops.html | CC-MAIN-2018-05 | refinedweb | 2,111 | 52.39 |
I have the following setup in Django. A text input validated by CharField and a FileField for an image upload. The desired response for when a field is empty should be that the data originally on the form is present and all the user needs to do is fill in the missing data. I've listed the two situations that might require validation and the current state of how the app responds:
How does one hold onto the file data after validation?
view.py
def signin(request, template="signin.html"):
c['form'] = SignInForm()
if request.method == 'POST':
c['form'] = SignInForm(request.POST, request.FILES)
if c['form'].is_valid():
#TODO: Commit data
return redirect("somwhere_else.html")
return render(request, template, c)
forms.py
class SignInForm(forms.Form):
name = forms.CharField(max_length=50,required=True)
photo_input = forms.FileField(required=True)
Here I've found:
"You can't specify any value in FileUpload control due to security restriction.
Imagine that you e.g. have such ability, you're specifying file path on the web server and after user has submited the page you are downloading a file which in fact wasnt selected by the user. So, in this case use are stealing file from user computer. Thus, browser limits capabilities of FileUpload control on a client side to just have ability for user to select the file and confirm the upload to a server.
So that, you should make a file selection the last action the user able to do before any submit.
Or, use AJAX approach to not submit entire page when selecting smth." | http://m.dlxedu.com/m/askdetail/3/f49afae970dd0c0e30f195f93e6837ed.html | CC-MAIN-2018-30 | refinedweb | 262 | 59.7 |
Function Performance Update
Above all others, there is one article I refer back to most: 2009’s Function Performance. It was updated for Flash Player 10.1 and 10.2, but not 10.3, 11.0, 11.1, or 11.2. Today I’m updating this article for Flash Player 11.2, adding some missing function types, and including a set of graphs to make for the ultimate function performance reference.
Essentially, the goal is to test every type of function there is in AS3 and every way you’d want to call them. I already had the basics like static and non-static methods, interfaces, and local functions. Now I’m making some updates:
- Added tests for calling static functions via a class (i.e.
MyClass.foo())
- Split apart “dynamic” function tests: local function, function variable, “plain” function (i.e. in the package but not in the class). Thanks to Skyboy for the tip.
- Split function variable testing in two: calling a function variable backed by a plain function and backed by a private method
- Added calls to a plain function and a private method via the
calland
applymethods of
Function
Here’s the source code for the updated performance test:
I ran this test on the following environment:
- Flex SDK (MXMLC) 4.6.0.23201, compiling in release mode (no debugging or verbose stack traces)
- Release version of Flash Player 11.2.202.229
- 2.4 Ghz Intel Core i5
- Mac OS X 10.7.3
And got these results:
Here are the graphs of these results. The first is for all functions, the second for just the “fast” functions (there is an obvious divide), and the third for just the “slow” functions.
While much of these results remains roughly the same as back in Flash Player 10.0-10.2, much has changed as well:
- Even since Flash Player 11.1 in January, we now see calling static methods via a class name (i.e.
MyClass.foo()is just as quick as not using the class name.
- Using a
Function-typed variable (e.g. as with most callbacks) or using the
callor
applymethods of
Functiondramatically slows down the function call (by at least 2x).
- Most of the “fast” functions are about the same speed. There is no appreciable difference between any of the following:
- Access specifier (e.g.
private,
public
finalor not
final
- Calling through the
thisobject
- Getters/Setters vs. other methods
- Overriding functions vs. functions that don’t override
- Static vs. non-static (through a class name or otherwise)
- Calling through the
superobject
- While the “slow” class of function types are all indeed slow, calling non-dynamic functions like private methods via
callor
applyare much (4x) slower.
I hope this article serves as a good reference when you’re considering the performance of a certain type of function; I know its predecessor served me well.
If you’ve spotted a bug or have a suggestion, please feel free to leave a comment.
#1 by Pavel fljot on April 23rd, 2012 · | Quote
Hei. May I “request” function calls via custom namespaces (which might be a common for libraries/frameworks)? Function call in a custom namespace on specific (strong typed) object and on interface-typed object.
#2 by jackson on April 23rd, 2012 · | Quote
Good catch! While I’ve covered that elsewhere, I didn’t add it to this test. It’s on my list for “Function Performance Update… Update”. :)
#3 by Rackdoll on April 23rd, 2012 · | Quote
Nice one.
Am glad they pumped up the perfomance on static function calls.
Good work Jackson.
Keep on rockin! :)
Rackdoll | http://jacksondunstan.com/articles/1820 | CC-MAIN-2017-22 | refinedweb | 600 | 75 |
11 June 2009 07:46 [Source: ICIS news]
(Revises fourth paragraph; adds pricing information in paragraphs five and six; adds quote in paragraph seven)
SINGAPORE (ICIS news)--Rising crude oil prices over the past three weeks have boosted sentiments in the ailing Asian biodiesel market despite limited trading, market sources said on Thursday.
Trade in Asian biodiesel had been muted so far this year as market conditions were made unfeasible due to rising feedstock vegetable oil values and tepid crude oil prices. This made vegetable oil-based biodiesel an unattractive alternative to traditional fossil fuels.
Most biodiesel plants in Indonesia and Malaysia were idled or were operating at very low rates as poor demand and thin margins capped production in the region.
However, the uptrend in crude oil prices in recent weeks, as well as relative stability in palm oil prices, might provide a much-needed boost to trade in the struggling Asian biodiesel industry, market sources said, as the spread between prices of crude oil derivatives and vegetable oils was narrowing.
Forward month ICE gasoil prices rose $20/tonne from last week to $560-570/tonne on Thursday due to high crude values.
Conversely, healthy vegetable oil production and poor export volumes weighed on crude palm oil (CPO) prices, with June delivery futures trading at ringgit (M$) 2,500/tonne ($714/tonne), falling M$75/tonne from 4 June.
A seller based in ?xml:namespace>
"The numbers seem workable, but there are no buyers yet," the marketing executive of a major Malaysian plant said, adding that most traders were biding their time to see if the situation was sustainable.
Biodiesel manufacturers in Asia include Vance Bioenergy, Carotino, Mission Biotechnology and Wilmar.
($1 = M$3.50) | http://www.icis.com/Articles/2009/06/11/9224032/higher-crude-oil-prices-boost-asian-biodiesel-sentiment.html | CC-MAIN-2013-48 | refinedweb | 286 | 52.43 |
Using Convolutional and Long Short-Term Memory Neural Networks to Classify IMDB Movie Reviews as Positive or Negative
We will explore combining the CNN and LSTM along with Word Embeddings to develop a classification model with Python and Keras. The data we will look at is the IMDB Movie Review dataset. The data consists of a review (free text) and the sentiment, whether positive or negative.
We will not go in depth on how to deal with text data and preprocess it for modeling. The article focuses on increasing the accuracy of the model. Of course, with more text preprocessing we will achieve better results and it is the best practice.
Introduction
We will tackle our problem with three different techniques. Word Embeddings, Convolutional and LSTM neural networks. Each technique can fit in a book, or even books, on its own.
Word Embedding is a technique of natural language processing where the model maps words or phrases to real numbers. Essentially, embeddings are the representation that the model learns from text. Similar words that may have the same meaning, will have a similar representation (with numbers).
Convolutional Neural Network is a type of deep neural networks. Famously known for its capabilities in image processing and computer vision. They are excellent at learning the data spatial structure and extracting the characteristics in data.
Long Short-Term Memory neural network is a special type of Recurrent neural networks. LSTMs are great in capturing and learning the intrinsic order in sequential data as they have internal memory. That’s why they are famous in speech recognition and machine translation.
Now, it is time to get into the data and code!
Data Preparation
The data we have consists of 50K reviews and their sentiment. We will first read it and look quickly at it.)
N.B. It is a good practice to have all your import statements at the beginning of the script. However, here we want to highlight what library every class and function belong to.
Although we won’t do text processing, but the data seems to have some issues. HTML tags. We can easily spot them. However, we need also to remove the words that have no meaning but exist heavily. For example, The, and, or, become, is, be etc. These words can have an undesired effect on the model. So we will remove them.
We will write a function that does the steps and returns a cleaned string.) ## before cleaning text = reviews.review[0] print(text[:200]) # One of the other reviewers has mentioned that after watching just 1 Oz episode you'll be hooked. They are right, as this is exactly what happened with me.<br /><br />The first thing that struck me abo ## after cleaning cleaned_text = clean_review(text, stop_words) print(cleaned_text[:200]) # One reviewers mentioned watching just 1 Oz episode you'll hooked. They right, exactly happened me.The thing struck Oz brutality unflinching scenes violence, set right word GO. Trust me, faint hearted ## cleaning the review column reviews["cleaned_review"] = reviews["review"].apply(lambda x: clean_review(x, stop_words))
Great! Now we need to convert the text into a sequence of numbers to feed it into the neural network model. We will use Keras’ Tokenizer Class to achieve this. Tokenizer vectorizes a text corpus, by turning each text into a sequence of integers.
from keras.preprocessing.text import Tokenizer ## maximum words to keep based on frequency max_features = 5000 ## replace out-of-vocab words with this oov = "OOV" tokenizer = Tokenizer(num_words = max_features, oov_token = oov) tokenizer.fit_on_texts(reviews["cleaned_review"]) ## convert text into integers tokenized = tokenizer.texts_to_sequences(reviews["cleaned_review"])
Let’s now change the “sentiment” column into integers as well. Here we will use LabelEncoder from Scikit-Learn.
from sklearn.preprocessing import LabelEncoder def sentiment_encode(df, column, le): le.fit(df[column]) sentiment_le = le.transform(df[column]) return sentiment_le, le le = LabelEncoder() sentiment_le, le = sentiment_encode(reviews, "sentiment", le) print(len(le.classes_)) # 2 le.classes_ # array(['negative', 'positive'], dtype=object)
Perfect!
N.B. We are fitting all transformers on all data, which may not be the best thing to do as we would need to fit on train data then transform test data. So we would need to split data into train and test before all this.
Now we will make sure that all sequences have the same length and truncate them into a maximum length of 500 words only.
from keras.preprocessing import sequence max_len = 500 Xtrain = sequence.pad_sequences(tokenized, maxlen = max_len)
Now, we split our data into train and test. We will split the data into two halves. 25K train and 25K test.
from sklearn.model_selection import train_test_split ## we will do the splitting using a random state to ensure same splitting every time X_train, X_test, y_train, y_test = train_test_split(Xtrain, sentiment_le, test_size = .5, random_state = 13)
Model Development
Now we have everything ready for the model.
The first layer of our model is the Embedding Layer which will try to learn the text representation and represent it in the specified number of vectors. Next, we add a one-dimensional CNN to capture the invariant features of a sentiment. Then we pass the learned features to an LSTM so that it learns them as sequences. We may add some dropout to avoid overfitting and a Bidirectional LSTM to improve performance.
## importing from keras.models import Sequential from keras.layers import Dense, LSTM, Bidirectional, Dropout from keras.layers.embeddings import Embedding from keras.layers.convolutional import Conv1D, MaxPooling1D ## model parameters vocab_size = max_features #5000 embedding_dims = 128 # dimensions to which text will be represented num_epochs = 3 noutput = len(le.classes_) #2 (binary) ## model model = Sequential() # embedding layer (vocab_size is the total number of words in data, # then the embedding dimensions we specified, then the maximum length of one review) model.add(Embedding(vocab_size, embedding_dims, input_length = max_len)) # CNN model.add(Conv1D(128, kernel_size = 4, input_shape = (vocab_size, embedding_dims), activation = "relu")) # max pooling layer model.add(MaxPooling1D(pool_size = 3)) # bidirectional LSTM model.add(Bidirectional(LSTM(64, return_sequences = True))) # LSTM and droput model.add(LSTM(32, recurrent_dropout = 0.4)) model.add(Dropout(0.2)) # 1 neuron output layer and sigmoid activation (binary 0 or 1) model.add(Dense(noutput - 1, activation = "sigmoid")) # model summary and layout model.summary() # Model: "sequential" # _________________________________________________________________ # Layer (type) Output Shape Param # # ================================================================= # embedding (Embedding) (None, 500, 128) 640000 # _________________________________________________________________ # conv1d (Conv1D) (None, 497, 128) 65664 # _________________________________________________________________ # max_pooling1d (MaxPooling1D) (None, 165, 128) 0 # _________________________________________________________________ # bidirectional (Bidirectional (None, 165, 128) 98816 # _________________________________________________________________ # lstm_1 (LSTM) (None, 32) 20608 # _________________________________________________________________ # dropout (Dropout) (None, 32) 0 # _________________________________________________________________ # dense (Dense) (None, 1) 33 # ================================================================= # Total params: 825,121 # Trainable params: 825,121 # Non-trainable params: 0 # _________________________________________________________________
Now it is time to fit and run the model:
# adam optimizer and binary crossentropy model.compile(loss = "binary_crossentropy", metrics = ["accuracy"], optimizer = "adam") model.fit(X_train, y_train, epochs = num_epochs, batch_size = 32, validation_data = (X_test[:1000], y_test[:1000]), verbose = 1) # Epoch 1/3 # 782/782 [==============================] - 184s 223ms/step - loss: 0.4652 - accuracy: 0.7526 - val_loss: 0.3489 - val_accuracy: 0.8070 # Epoch 2/3 # 782/782 [==============================] - 163s 208ms/step - loss: 0.2253 - accuracy: 0.9151 - val_loss: 0.3334 - val_accuracy: 0.8600 # Epoch 3/3 # 782/782 [==============================] - 185s 237ms/step - loss: 0.1525 - accuracy: 0.9496 - val_loss: 0.3093 - val_accuracy: 0.8660
Very Good! Pretty good results with fairly simple implementation. Of course with more data transformation especially text and adjusting the model parameters, the model can achieve better results.
Model Evaluation
Let’s now evaluate the performance of our model on the test data.
results = model.evaluate(X_test[1000:], y_test[1000:]) # 750/750 [==============================] - 51s 65ms/step - loss: 0.3550 - accuracy: 0.8637 print("test loss: %.2f" % results[0]) # test loss: 0.36 print("test accuracy: %.2f%%" % (results[1] * 100)) # test accuracy: 86.37%
The model is performing very well on the test data.
P.S. your result may somewhat vary. | https://minimatech.org/sentiment-prediction-using-cnn-lstm-keras/ | CC-MAIN-2021-31 | refinedweb | 1,291 | 51.55 |
IRC log of xmlsec on 2007-05-15
Timestamps are in UTC.
10:56:35 [RRSAgent]
RRSAgent has joined #xmlsec
10:56:35 [RRSAgent]
logging to
10:56:47 [jcc]
Zakim, this will be XMLSEC
10:56:47 [Zakim]
I do not see a conference matching that name scheduled within the next hour, jcc
11:11:19 [tlr]
tlr has joined #xmlsec
11:11:56 [tlr]
zakim, this will be xmlsec
11:11:56 [Zakim]
I do not see a conference matching that name scheduled within the next hour, tlr
11:47:43 [tlr]
zakim, this will be xmlsec
11:47:43 [Zakim]
I do not see a conference matching that name scheduled within the next hour, tlr
12:05:02 [tlr]
zakim, this will be xmlsec
12:05:02 [Zakim]
ok, tlr; I see T&S_XMLSEC()9:00AM scheduled to start in 55 minutes
12:13:51 [jcc]
jcc has joined #xmlsec
12:14:23 [jcc]
Zakim, this will be XMLSEC
12:14:23 [Zakim]
ok, jcc; I see T&S_XMLSEC()9:00AM scheduled to start in 46 minutes
12:14:34 [tlr]
hi Juan Carlos
12:15:04 [jcc]
Chair: Frederick Hirsch
12:15:19 [jcc]
Scribe: Juan Carlos Cruellas
12:19:30 [PeterL]
PeterL has joined #xmlsec
12:19:42 [jcc]
RRSAgent, make log public
12:20:13 [PeterL]
hi jcc, practising ;)
12:26:51 [tlr]
hallo Peter
12:32:45 [jcc]
jcc has joined #xmlsec
12:34:06 [PeterL]
hallo thomas
12:36:26 [tlr]
Peter, ich habe gerade Deine Mail re Workshop-Hosting beantwortet
12:37:38 [PeterL]
habs gesehen. JCC hat ja auch was angeboten.
12:38:00 [jcc]
I certainly must learn German ;-)
12:38:06 [tlr]
sorry
12:38:18 [tlr]
Peter had asked about specific requirements for the workshop, and I said I had just sent him e-mail.
12:38:26 [tlr]
He then said you had also offered hosting.
12:38:26 [jcc]
No problem... it is the natural way of comunicating among yourselves
12:38:26 [PeterL]
sure juan carlos, then I'd learn more espagnol
12:39:02 [PeterL]
;-)
12:39:33 [jcc]
One question Thomas, is there any command to record that an action has been closed?
12:39:54 [tlr]
not at this point.
12:40:07 [tlr]
Just record ACTION-NNN closed, and Frederick or I need to do it through the Web interface.
12:40:46 [jcc]
I see... just wondering if there was some command for doing it automatically...thanks
12:42:40 [jcc]
Thomas, do you know if Frederick has uploaded the agenda of this meeting to the W3C server?
12:42:53 [jcc]
if so I would record the URL for the minutes...
12:43:11 [tlr]
12:43:20 [tlr]
it's just in the public mailing list archive
12:44:02 [jcc]
Ah! I see, when the scribe's instructions mention a URL to the agenda, in fact they mean to the email where the agenda was distributed... thanks
12:45:03 [jcc]
Agenda:
12:45:22 [jcc]
Meeting: XMLSEC
12:45:37 [jcc]
Chair: Frederick Hirsch
12:45:48 [jcc]
Scribe: Juan Carlos Cruellas
12:46:01 [jcc]
RRSAgent, make log public
12:48:01 [jh]
jh has joined #xmlsec
12:48:43 [jh]
zakim, who is here?
12:48:43 [Zakim]
T&S_XMLSEC()9:00AM has not yet started, jh
12:48:44 [Zakim]
On IRC I see jh, jcc, PeterL, tlr, RRSAgent, Zakim, klanz2, trackbot-ng
12:51:10 [jh]
zakim, this will be xmlsecc
12:51:10 [Zakim]
I do not see a conference matching that name scheduled within the next hour, jh
12:51:15 [jh]
zakim, this will be xmlsec
12:51:15 [Zakim]
ok, jh; I see T&S_XMLSEC()9:00AM scheduled to start in 9 minutes
12:51:30 [jh]
Meeting: XML Security Specifications Maintenance WG Conference Call
12:51:40 [jh]
Chair: Frederick Hirsch
12:51:48 [jcc]
Scribe: Juan Carlos Cruellas
12:52:30 [jcc]
Agenda:
12:52:51 [Zakim]
T&S_XMLSEC()9:00AM has now started
12:52:58 [Zakim]
+Frederick_Hirsch
12:53:14 [jh]
zakim, who is here+
12:53:14 [Zakim]
I don't understand 'who is here+', jh
12:53:19 [jh]
zakim, who is here?
12:53:19 [Zakim]
On the phone I see Frederick_Hirsch
12:53:21 [Zakim]
On IRC I see jh, jcc, PeterL, tlr, RRSAgent, Zakim, klanz2, trackbot-ng
12:54:37 [jh]
Regrets: Donald Eastlake, Gregory Berezowsky
12:54:48 [tlr]
zakim, who is on the phone?
12:54:48 [Zakim]
On the phone I see Frederick_Hirsch
12:55:30 [jcc]
thomas, the command /me zakim, ??p9 is [handle] does not seem to work... I guess that I must substitute ?? by something?
12:55:31 [rdm]
rdm has joined #xmlsec
12:55:59 [tlr]
zakim, who is on the phone?
12:55:59 [Zakim]
On the phone I see Frederick_Hirsch
12:56:06 [EdS]
EdS has joined #xmlsec
12:56:07 [tlr]
juan carlos, there is no unidentified party on the phone...
12:57:00 [tlr]
zakim, call thomas-781
12:57:00 [Zakim]
ok, tlr; the call is being made
12:57:01 [Zakim]
+Thomas
12:57:28 [sean]
sean has joined #xmlsec
12:58:15 [PHB]
PHB has joined #xmlsec
12:58:17 [Zakim]
+[IPcaller]
12:58:31 [tlr]
zakim, IPcaller is JuanCarlosCruellas
12:58:31 [Zakim]
+JuanCarlosCruellas; got it
12:58:38 [tlr]
zakim, nick jcc is JuanCarlosCruellas
12:58:38 [Zakim]
ok, tlr, I now associate jcc with JuanCarlosCruellas
12:58:52 [Zakim]
+ +1.781.442.aaaa
12:59:07 [tlr]
zakim, nick fjh is FrederickHirsch
12:59:07 [Zakim]
ok, tlr, I now associate fjh with Frederick_Hirsch
12:59:21 [tlr]
zakim, aaaa is SeanMullen
12:59:21 [Zakim]
+SeanMullen; got it
12:59:25 [tlr]
zakim, nick sean is SeanMullen
12:59:25 [Zakim]
ok, tlr, I now associate sean with SeanMullen
12:59:55 [tlr]
zakim, who is on the phone?
12:59:55 [Zakim]
On the phone I see Frederick_Hirsch, Thomas, JuanCarlosCruellas, SeanMullen
13:00:04 [hal]
hal has joined #xmlsec
13:00:11 [Zakim]
+ +1.650.380.aabb
13:00:20 [klanz2]
I'm having trouble with my machine, I'll dial in ASAP
13:00:23 [Zakim]
+EdSimon
13:00:26 [Zakim]
+ +1.443.695.aacc
13:00:42 [tlr]
zakim, aabb is GregWhitehead
13:00:42 [Zakim]
+GregWhitehead; got it
13:00:56 [tlr]
zakim, aacc is RobMiller
13:00:56 [Zakim]
+RobMiller; got it
13:00:59 [fjh]
zakim, who is here?
13:00:59 [Zakim]
On the phone I see Frederick_Hirsch, Thomas, JuanCarlosCruellas, SeanMullen, GregWhitehead, EdSimon, RobMiller
13:01:01 [Zakim]
On IRC I see hal, PHB, sean, EdS, rdm, fjh, jcc, PeterL, tlr, RRSAgent, Zakim, klanz2, trackbot-ng
13:01:01 [tlr]
zakim, nick rdm is RobMiller
13:01:01 [Zakim]
ok, tlr, I now associate rdm with RobMiller
13:01:09 [Zakim]
+Hal_Lockhart
13:01:17 [tlr]
zakim, nick EdS is EdSimon
13:01:17 [Zakim]
ok, tlr, I now associate EdS with EdSimon
13:01:22 [tlr]
zakim, nick hal is Hal_Lockhart
13:01:22 [grw]
grw has joined #xmlsec
13:01:26 [Zakim]
ok, tlr, I now associate hal with Hal_Lockhart
13:01:26 [tlr]
zakim, who is on the phone?
13:01:28 [Zakim]
+ +30281039aadd
13:01:36 [Zakim]
On the phone I see Frederick_Hirsch, Thomas, JuanCarlosCruellas, SeanMullen, GregWhitehead, EdSimon, RobMiller, Hal_Lockhart, +30281039aadd
13:01:39 [PHB]
zakim, what is the dial in
13:01:44 [tlr]
zakim, nick grw is GregWhitehead
13:01:46 [tlr]
zakim, code?
13:01:51 [Zakim]
I don't understand 'what is the dial in', PHB
13:01:53 [tlr]
zakim, aadd is GilesHogben
13:01:57 [Zakim]
ok, tlr, I now associate grw with GregWhitehead
13:01:59 [Zakim]
the conference code is 965732 (tel:+1.617.761.6200 tel:+33.4.89.06.34.99 tel:+44.117.370.6152), tlr
13:02:05 [Zakim]
+GilesHogben; got it
13:02:11 [rdm2]
rdm2 has joined #xmlsec
13:02:21 [tlr]
zakim, nick rdm2 is RobMiller
13:02:22 [Zakim]
ok, tlr, I now associate rdm2 with RobMiller
13:02:34 [fjh]
zakim, who is here?
13:02:34 [Zakim]
On the phone I see Frederick_Hirsch, Thomas, JuanCarlosCruellas, SeanMullen, GregWhitehead, EdSimon, RobMiller, Hal_Lockhart, GilesHogben
13:02:37 [Zakim]
On IRC I see rdm2, grw, hal, PHB, sean, EdS, rdm, fjh, jcc, PeterL, tlr, RRSAgent, Zakim, klanz2, trackbot-ng
13:02:39 [Zakim]
+ +1.781.306.aaee
13:02:43 [ghogben3]
ghogben3 has joined #xmlsec
13:02:49 [Zakim]
- +1.781.306.aaee
13:03:10 [jcc]
OPIC: 1) Administrative: Scribe confirmation, Attendance, Agenda review (9:00 am Eastern)
13:03:17 [Zakim]
+PHB
13:03:17 [tlr]
zakim, nick ghogben3 is GilesHogben
13:03:18 [Zakim]
ok, tlr, I now associate ghogben3 with GilesHogben
13:03:32 [tlr]
Topic: Administrative
13:03:36 [tlr]
Scribe for next week: PHB
13:03:51 [jcc]
fjh: confirming next week scribe Phillp Hallman for the week after, not next week
13:03:52 [tlr]
Scribe for 29 May: Giles Hogben
13:03:58 [tlr]
s/Hallam/Hallam-Baker/
13:04:12 [jcc]
fjh: asks Giles Hogben to do it for the week after
13:04:20 [jcc]
TOPIC: 1a) Regrets: Donald Eastlake, Gregory Berezowsky
13:04:23 [fjh]
13:04:43 [tlr]
Agenda:
13:04:49 [jcc]
TOPIC: 2) Review and Approval of WG minutes
13:05:04 [tlr]
13:05:08 [tlr]
(minutes from last week)
13:05:23 [jcc]
are posted changes to the canonicalizations. They were not in the minutes.
13:05:47 [tlr]
s/are posted/hal: there were/
13:05:47 [jcc]
fjh: minutes approved
13:06:16 [jcc]
action: fjh to post the changes to canonicalization process
13:06:16 [trackbot-ng]
Sorry, couldn't find user - fjh
13:06:19 [Zakim]
+??P20
13:06:58 [fjh]
ACTION: Frederick to post red-line link for C14N11
13:06:58 [trackbot-ng]
Created ACTION-25 - Post red-line link for C14N11 [on Frederick Hirsch - due 2007-05-22].
13:07:15 [fjh]
zakim, who is here?
13:07:15 [Zakim]
On the phone I see Frederick_Hirsch, Thomas, JuanCarlosCruellas, SeanMullen, GregWhitehead, EdSimon, RobMiller, Hal_Lockhart, GilesHogben, PHB, ??P20
13:07:17 [Zakim]
On IRC I see ghogben3, rdm2, grw, hal, PHB, sean, EdS, rdm, fjh, jcc, PeterL, tlr, RRSAgent, Zakim, klanz2, trackbot-ng
13:07:45 [klanz2]
My audio device has problems
13:07:52 [klanz2]
yes
13:07:56 [tlr]
zakim, ??P20 is klanz2
13:07:56 [Zakim]
+klanz2; got it
13:08:08 [klanz2]
and the IRC runs via vpn on aother machine
13:08:31 [jcc]
resolution: minutes of May 2nd and 3rd face to face meeting were approved
13:09:25 [fjh]
zakim, where am i?
13:09:25 [Zakim]
I don't understand your question, fjh.
13:09:30 [tlr]
zakim, bookmark?
13:09:30 [Zakim]
I don't understand your question, tlr.
13:09:33 [tlr]
rrsagent, bookmark?
13:09:33 [RRSAgent]
See
13:09:34 [fjh]
rrsagent, where am i?
13:09:34 [RRSAgent]
See
13:09:46 [jcc]
TOPIC: 3) Future WG Meetings
13:09:59 [jcc]
fjh: will be out, Thomas will chair the two next meetings
13:10:19 [jcc]
ACTION-3: closed
13:10:28 [klanz2]
zakim, unmute me
13:10:28 [Zakim]
klanz2 was not muted, klanz2
13:11:03 [jcc]
ACTION-4: closed; fjh updated the homepage.
13:11:32 [jcc]
ACTION-5: open for finishing.
13:12:09 [jcc]
ACTION-6: open. Konrad will complete in the next week
13:12:23 [jcc]
ACTION-8: closed as part of the editorial update
13:12:28 [klanz2]
zakim, mute me
13:12:28 [Zakim]
klanz2 should now be muted
13:13:07 [jcc]
ACTION-9: closed. Sent email to the list
13:13:46 [fjh]
13:13:52 [jcc]
tlr: asks Sean to pass the link of the message
13:14:23 [jcc]
ACTION-12: open. fjh has been working on it...almost done
13:14:32 [jcc]
ACTION-13: closed
13:15:47 [jcc]
ACTION-15: closed; done it 2007-05-14 call.
13:15:56 [tlr]
q+
13:16:23 [jcc]
the coordination group will take care of security issues
13:17:25 [jcc]
... fjh: when a charter is created it should include security considerations and how this will be done...
13:17:35 [tlr]
s/the coord/fjh: the coord/
13:17:42 [tlr]
s/... fjh: when/... when/
13:17:45 [jcc]
fjh: ... and the coordination group would take care of this.
13:17:53 [tlr]
s/fjh: .../... /
13:17:58 [tlr]
q?
13:18:22 [fjh]
ed: should have permanent security group to review materials
13:18:27 [tlr]
hal: +1 to ed
13:18:50 [tlr]
... also errata handling ...
13:18:50 [jcc]
hal: seconds the idea, and also to be in the position of receiving errata of security specifications
13:19:19 [jcc]
ed: the group should be the place where the policies and processes are reviewed
13:19:56 [jcc]
tlr: this is a useful proposal and this could be part of the outcome to be produced by the group
13:20:21 [jcc]
tlr: question to Frederick, what documentation?
13:20:53 [jcc]
what documentation should be managed? only minutes or others?
13:21:24 [jcc]
fjh: we should draft a note...
13:21:41 [jcc]
tlr: we could capture text from minutes and generate the note.
13:21:43 [Zakim]
+??P27
13:21:58 [PeterL]
zakim, ??P27 is peter_Lipp
13:21:58 [Zakim]
+peter_Lipp; got it
13:22:34 [jcc]
fjh: the group should start indicating what the issues are and then we will receive indications of what to do.
13:23:01 [tlr]
ACTION: thomas to draft CG note draft for submission to XML CG - due 2007-06-20
13:23:01 [trackbot-ng]
Created ACTION-26 - draft CG note draft for submission to XML CG [on Thomas Roessler - due 2007-06-20].
13:23:02 [PeterL]
zakim, nick PeterL is peter_Lipp
13:23:02 [Zakim]
ok, PeterL, I now associate you with peter_Lipp
13:23:48 [jcc]
ACTION-16: closed
13:23:58 [jcc]
ACTION-17: open
13:24:13 [klanz2]
zakim, unmute me
13:24:13 [Zakim]
klanz2 should no longer be muted
13:24:15 [jcc]
ACTION-18: open
13:24:22 [jcc]
ACTION-19: open
13:24:25 [tlr]
zakim, mute me
13:24:25 [Zakim]
sorry, tlr, I do not know which phone connection belongs to you
13:24:27 [klanz2]
ongoing
13:24:30 [jcc]
ACTION-20: done
13:24:30 [tlr]
zakim, mute thomas
13:24:30 [Zakim]
Thomas should now be muted
13:24:46 [klanz2]
zakim, mute me
13:24:46 [Zakim]
klanz2 should now be muted
13:25:01 [jcc]
fjh: we will indicate when we can meet.
13:25:06 [tlr]
zakim, unmuteme
13:25:06 [Zakim]
I don't understand 'unmuteme', tlr
13:25:13 [tlr]
zakim, unmute thomas
13:25:13 [Zakim]
Thomas should no longer be muted
13:25:49 [jcc]
fjh: when we will know when we will meet in November plenary
13:26:03 [jcc]
tlr: in the next few months...+
13:26:11 [jcc]
ACTION-21: closed
13:26:20 [jcc]
ACTION-22: open
13:27:02 [jcc]
ACTION-23: proposal for qnames. Frederick was not sure of the action on that issue
13:27:15 [jcc]
fjh: it is a timing issue ....
13:27:19 [tlr]
q+
13:27:53 [PHB]
qnames should not be used as data
13:28:23 [jcc]
qnames are prefixed or unprefixed, this introduces some ambiguity
13:28:26 [PHB]
The prefix namespaces do not work within the data space
13:28:30 [hal]
q+
13:28:39 [PHB]
There is an AB finding on the topic
13:28:55 [fjh]
s/AB/TAG/
13:29:41 [jcc]
qnames may be prefixed or unprefixed, and the issue is on prefixed qnames
13:29:43 [tlr]
EdSimon: Said qnames are prefixed or unprefixed; didn't talk about ambiguity. The concern is about prefixed qnames in data space. It's an issue I thought about during the last week WRT c14n
13:29:44 [PHB]
The point here was that there should be a note in the C18N section to the effect that prefixes will break, and protocols should avoid them per the TAG
13:29:58 [tlr]
hal: Don't agree that only prefixed names are a problem
13:30:08 [tlr]
s/C18N/C14N/
13:30:28 [jcc]
fjh: does not think this affects canonicalization
13:30:41 [fjh]
s/does not think/asks whether/
13:30:49 [tlr]
q+ to ask whether this is considered critical path for C14N 1.1
13:31:09 [fjh]
greg: treat as best practice
13:31:10 [jcc]
Greg: provide a best practice document
13:31:32 [klanz2]
zakim, unmute me
13:31:32 [Zakim]
klanz2 should no longer be muted
13:31:35 [klanz2]
q+
13:31:49 [klanz2]
zakim, mute me
13:31:49 [Zakim]
klanz2 should now be muted
13:32:02 [tlr]
ack tlr
13:32:02 [Zakim]
tlr, you wanted to ask whether this is considered critical path for C14N 1.1
13:32:11 [hal]
q -
13:32:37 [hal]
q-
13:33:10 [fjh]
q+
13:33:12 [jcc]
tlr: we should advice core group as soon as we can on the issues we identify
13:33:14 [klanz2]
zakim, unmute me
13:33:14 [Zakim]
klanz2 should no longer be muted
13:33:27 [EdS]
q+
13:33:37 [tlr]
nah... we can always ask politely.
13:33:51 [fjh]
konrad suggests only formal objection possible now
13:34:04 [klanz2]
zakim, mute me
13:34:04 [Zakim]
klanz2 should now be muted
13:34:25 [tlr]
fjh, speaking as self: don't think we need to do more, rather do best practice approach
13:34:30 [jcc]
fjh: as Frederick, would prefer the best practice approahc
13:34:48 [fjh]
q?
13:34:51 [tlr]
EdSimon: proposed changes to c14n would need to be broader; rather thinking of C14N 2.0
13:34:51 [fjh]
ack klanz
13:34:52 [jcc]
ed: canonicalization is so broad that he is thinking in canonicalization 2.0
13:34:57 [tlr]
... don't expect resolution near-term ...
13:34:58 [fjh]
ack Frederick_Hirsch
13:35:02 [fjh]
ack EdSimon
13:35:04 [fjh]
q?
13:35:04 [klanz2]
zakim, mute e
13:35:05 [Zakim]
EdSimon should now be muted
13:35:10 [tlr]
q+ to note that this sounds like a proposal for further work
13:35:14 [klanz2]
zakim, mute me
13:35:14 [Zakim]
klanz2 should now be muted
13:35:27 [jcc]
fjh: can we agree on that?
13:35:35 [Zakim]
+ +1.514.861.aaff
13:35:43 [tlr]
zakim, aaff is DonEastlake
13:35:43 [Zakim]
+DonEastlake; got it
13:36:10 [jcc]
fjh: can we agree on the best practice issue?
13:36:14 [tlr]
q=
13:36:15 [tlr]
q?
13:36:26 [tlr]
q-
13:36:39 [tlr]
RESOLUTION: We are not going to bring qnames in content to XML Core, but rather feed that into best practices.
13:36:45 [jcc]
RESOLUTION: we are not going to bring the qname issue to the core group but be part of the best practices
13:36:59 [tlr]
s/RESOLUTION: We are not going to bring qnames in content to XML Core, but rather feed that into best practices.//
13:37:11 [jcc]
this is something that we must notice this issue...
13:37:24 [klanz2]
phil
13:37:33 [deastlak]
deastlak has joined #xmlsec
13:37:34 [EdS]
Ed: I strongly agree with Phill.
13:37:46 [jcc]
phil: sligthly more than best practices: something that has to be noted as property of the algorithm
13:37:53 [EdS]
q+
13:37:57 [klanz2]
q+
13:37:58 [jcc]
phil: it is a consequence of the XML...
13:38:09 [jcc]
phil: we should provide more information .....
13:38:18 [tlr]
q+
13:38:26 [jcc]
fjh: is it possible to provide more text for CN14.1
13:38:28 [EdS]
q-
13:39:39 [jcc]
tlr: we need to coordinate with core as they have been waiting for us
13:39:59 [jcc]
greg: i would think that the note would be rather simple....
13:40:30 [jcc]
greg: using qnames values in data then you must use the implicit namespace or the prefixes may not be captured
13:40:49 [jcc]
greg: just pointing what is not obviuos for all the people
13:41:11 [EdS]
+1 to greg
13:41:13 [jcc]
phil: best practices suggest that you have options
13:41:28 [klanz2]
q+
13:41:51 [jcc]
hal: there are aspects to basic XML semantics, security considerations...
13:42:09 [jcc]
hal: do we want to discuss this now? is a lengthy discussion
13:42:37 [jcc]
fjh: this is an important topic and we have to discuss....maybe in the next call
13:42:50 [EdS]
q+
13:42:52 [fjh]
ack klanz
13:42:54 [klanz2]
zakim, unmute me
13:42:54 [Zakim]
klanz2 was not muted, klanz2
13:42:57 [tlr]
ack tlr
13:43:01 [fjh]
ack tlr
13:43:35 [klanz2]
q-
13:43:39 [jcc]
konrad: no syntactical means for distinguighing
13:43:57 [jcc]
...prefixed from unprefixed qnames...
13:44:05 [klanz2]
+1 to fjh
13:44:45 [jcc]
ed: maybe we ....
13:44:45 [klanz2]
to distinguish from other data that may also look like prefixed names
13:44:58 [klanz2]
eg: urn:somename
13:45:06 [jcc]
ed: should get broader attention to this as this may not be an issue only on one type of canonicalization algorithm
13:45:11 [tlr]
q+ to note that c14N 1.1 is actually explicit
13:45:51 [hal]
q+
13:46:01 [jcc]
phil: when applying transforms, and you use prefixed qnames, then you have to take into account how to deal with them..
13:46:12 [EdS]
Ed: qname discussion not likely to be resolved in short order; will likely lead to significant discussion. I suggest capping c14n 1.1, and getting to work on c14n 2.0 ASAP.
13:46:33 [hal]
+1
13:46:46 [tlr]
ack tlr
13:46:46 [Zakim]
tlr, you wanted to note that c14N 1.1 is actually explicit
13:47:17 [jcc]
tlr: supports moving on, and support not asking for an explicit remark to be added...
13:47:22 [EdS]
q-
13:47:25 [fjh]
ack EdSimon
13:47:40 [klanz2]
+1 to tlr
13:48:02 [fjh]
tlr: table qname issues for now, leave C14N11 as now, future work item
13:48:06 [jcc]
tlr: if this is relevant, then we should include it for future work... leave C14n1 as it is
13:48:21 [tlr]
q?
13:48:36 [jcc]
hal: agrees moving on.
13:48:37 [tlr]
ack hal_lockhart
13:48:47 [tlr]
fjh: phill, can you live with this?
13:48:50 [tlr]
phill: yeah *sigh*
13:49:16 [jcc]
fjh: consider whether to do anything with sig or...
13:49:31 [jcc]
RESOLUTION: not to feed C14n1
13:49:40 [jcc]
RESOLUTION: not to feed C14n1 on the qnames issue
13:49:51 [klanz2]
shall we distill some thing for the future work now from this discussion
13:49:52 [grw]
grw has joined #xmlsec
13:50:19 [jcc]
ACTION-23: closed
13:50:24 [hal]
q+
13:50:56 [hal]
13:50:59 [jcc]
hal: mentions some text on qnames...
13:51:07 [jcc]
ACTION-24: closed
13:51:17 [jcc]
fjh: asks to complete the questionnaire on interop.
13:51:34 [jcc]
TOPIC: 5) Editorial Status
13:51:51 [jcc]
fjh: asks to review the editorial material circulated. Not possible to discuss it now
13:52:01 [jcc]
TOPIC: 5a) Review status of XML Signature draft
13:52:01 [jcc]
TOPIC: 5b) Review status Decryption Transform draft
13:52:01 [EdS]
I share Phill's sigh. From my review of c14n 1.1, uddi c14n (
), and the qname issue, my strong initial impression is that it will be best to move from c14n 1.1 to c14n 2.0 ASAP.
13:52:26 [jcc]
TOPIC: 7. Workshop Planning
13:52:38 [jcc]
fjh: two or three proposals for workshops?
13:52:48 [jcc]
fjh: Austria, Spain, California...
13:52:54 [klanz2]
zakim, unmute me
13:52:54 [Zakim]
klanz2 was not muted, klanz2
13:53:07 [tlr]
peterlipp: would be willing to host in Graz
13:53:09 [jcc]
PeterL: offers Austria (Graz)
13:53:22 [klanz2]
zakim, mute me
13:53:22 [Zakim]
klanz2 should now be muted
13:53:24 [jcc]
fjh: how many days? assumed 2 or 3
13:53:41 [jcc]
tlr mentioned typically 2
13:53:47 [jcc]
what about preparation day?
13:53:56 [tlr]
fjh: do we need face-to-face processing time?
13:54:03 [tlr]
... any difference to the folks who would host?
13:54:07 [tlr]
hal: no difference to us
13:54:10 [tlr]
peter: no problem
13:54:14 [jcc]
PeterLipp: does not care on the days.
13:54:17 [tlr]
juanCC: can do 3
13:54:42 [jcc]
tlr: three months in advance it announces the workshop
13:55:04 [jcc]
tlr: not earlier than September.
13:55:24 [jcc]
fjh: people think on time.
13:55:34 [jcc]
fjh: avoid first week of September
13:55:40 [klanz2]
zakim, unmute me
13:55:40 [Zakim]
klanz2 should no longer be muted
13:55:53 [klanz2]
zakim, mute me
13:55:53 [Zakim]
klanz2 should now be muted
13:55:56 [jcc]
fjh: Konrad constraints existent
13:56:08 [jcc]
PeterLipp: only the first week of september is difficult
13:56:11 [jcc]
q+
13:56:33 [jcc]
fjh: might be an advantage having in Europe for attracting European people...
13:56:38 [ghogben3]
q+
13:56:49 [tlr]
q?
13:56:50 [fjh]
q?
13:56:55 [jcc]
fjh: would producing a questionnaire for getting information be a good idea?
13:57:13 [fjh]
ack Hal_Lockhart
13:58:33 [jcc]
tlr: if we konw that a big part of XML security community is on West Coast, that would be a good reason...
13:59:06 [jcc]
... for having it there, on the other side if having it in Europe would attract European people in a relevant enough number....
13:59:14 [jcc]
that would be a reason for Europe.
13:59:27 [fjh]
ack JuanCarlosCruellas
13:59:28 [jcc]
fjh: agree not to do in the first week of September?
13:59:36 [hal]
q+
13:59:57 [fjh]
generally agreed not to have 1st week of september
14:00:22 [fjh]
Juan Carlos: Has to make bookings in advance, has made bookings. Needs to know in advance, October possible
14:00:56 [tlr]
q+ to ask for clarification
14:01:24 [fjh]
ack GilesHogben
14:01:38 [jcc]
tlr: make a poll on the email
14:01:45 [jcc]
... for the location
14:02:20 [ghogben3]
add October?
14:02:35 [fjh]
ack Hal_Lockhart
14:02:37 [jcc]
tlr: first week of October also possible.
14:02:39 [fjh]
ack tlr
14:02:39 [Zakim]
tlr, you wanted to ask for clarification
14:02:57 [jcc]
Hal: relevant input coming from people that have implementation?
14:03:08 [jcc]
tlr: good qu3estion, discuss it through email
14:03:15 [tlr]
ACTION: thomas to put up WBS for known constraints in SeptembeR/October
14:03:15 [trackbot-ng]
Created ACTION-27 - Put up WBS for known constraints in SeptembeR/October [on Thomas Roessler - due 2007-05-22].
14:03:36 [jcc]
fjh: review the links in the agenda and take a look to the material linked.
14:03:44 [jcc]
fjh: ajourns the meeting.
14:03:50 [Zakim]
-Hal_Lockhart
14:03:52 [Zakim]
-SeanMullen
14:03:52 [klanz2]
thanks bye
14:03:53 [Zakim]
-GregWhitehead
14:03:55 [Zakim]
-RobMiller
14:03:57 [Zakim]
-peter_Lipp
14:03:58 [Zakim]
-GilesHogben
14:03:59 [Zakim]
-klanz2
14:04:02 [Zakim]
-PHB
14:04:08 [Zakim]
-EdSimon
14:04:19 [tlr]
zakim, list participants?
14:04:19 [Zakim]
I don't understand your question, tlr.
14:04:24 [tlr]
zakim, list participants
14:04:24 [Zakim]
As of this point the attendees have been Frederick_Hirsch, Thomas, JuanCarlosCruellas, +1.781.442.aaaa, SeanMullen, +1.650.380.aabb, EdSimon, +1.443.695.aacc, GregWhitehead,
14:04:27 [Zakim]
... RobMiller, Hal_Lockhart, +30281039aadd, GilesHogben, +1.781.306.aaee, PHB, klanz2, peter_Lipp, +1.514.861.aaff, DonEastlake
14:04:32 [tlr]
rrsagent, please draft minutes
14:04:32 [RRSAgent]
I have made the request to generate
tlr
14:05:19 [tlr]
zakim, who is on the phone?
14:05:19 [Zakim]
On the phone I see Frederick_Hirsch, Thomas, JuanCarlosCruellas, DonEastlake
14:06:23 [tlr]
zakim, excuse us
14:06:23 [Zakim]
leaving. As of this point the attendees were Frederick_Hirsch, Thomas, JuanCarlosCruellas, +1.781.442.aaaa, SeanMullen, +1.650.380.aabb, EdSimon, +1.443.695.aacc, GregWhitehead,
14:06:23 [Zakim]
Zakim has left #xmlsec
14:06:26 [Zakim]
... RobMiller, Hal_Lockhart, +30281039aadd, GilesHogben, +1.781.306.aaee, PHB, klanz2, peter_Lipp, +1.514.861.aaff, DonEastlake
14:06:28 [tlr]
rrsagent, bye
14:06:28 [RRSAgent]
I see 4 open action items saved in
:
14:06:28 [RRSAgent]
ACTION: fjh to post the changes to canonicalization process [1]
14:06:28 [RRSAgent]
recorded in
14:06:28 [RRSAgent]
ACTION: Frederick to post red-line link for C14N11 [2]
14:06:28 [RRSAgent]
recorded in
14:06:28 [RRSAgent]
ACTION: thomas to draft CG note draft for submission to XML CG - due 2007-06-20 [3]
14:06:28 [RRSAgent]
recorded in
14:06:28 [RRSAgent]
ACTION: thomas to put up WBS for known constraints in SeptembeR/October [4]
14:06:28 [RRSAgent]
recorded in
14:06:30 [tlr]
781.426.3109 | http://www.w3.org/2007/05/15-xmlsec-irc | CC-MAIN-2014-35 | refinedweb | 5,030 | 64.64 |
Troubleshooting HTTP APIs
After we announced support for HTTP APIs in the Serverless Framework we saw a lot of enthusiasm around the benefits of the new HTTP APIs. People were excited about the possibility for significant cost reduction and performance improvement. But, there was still the question of effectively troubleshooting your Lambda infrastructure in combination with the new HTTP API.
Because of this, we’re excited to announce newly released monitoring and debugging support for HTTP APIs. Now you can get automatically instrumented monitoring and debugging tools on top of your HTTP APIs right out of the box. Let’s see how with a simple service.
Setting up Troubleshooting
First, make sure you’ve already done a few things:
- Installed the Serverless Framework
npm install -g serverless
- Created a free Framework Pro account
After this, you should be able to create an HTTP API pretty easily.
First, let’s create a new project directory and create a
serverless.yml file in it:
mkdir http-api-project
cd http-api-project
touch serverless.yml
Then, add this to your
serverless.yml file:
org: yourorg app: http-api-example service: http-api-example-python provider: name: aws runtime: python3.8 functions: getProfileInfo: handler: handler.hello events: - httpApi: method: GET path: /hello
Make sure to replace the
org and
app values with the ones for your Framework Pro account. From there, you can create a new
handler.py file:
touch handler.py
And then add this Python code inside the file:
import json def hello(event, context): body = event['body'] print(body) response = { "statusCode": 200, "body": json.dumps({ "message": "Hello friends!" }) } return response
After you make sure to save both
serverless.yml and
handler.py you can run
serverless deploy to deploy your HTTP API.
From here, just open up the URL in a browser and refresh the page a few times. The URL should look something like this:
When you load it up in the browser you should see this:
After you refresh the page a few times, open up your Framework Pro Dashboard and navigate to your app and service. You should now see the recent logs:
And that’s it! You’ve just setup your HTTP API with monitoring and alerting capabilities.
What Next?
Now that you know how to setup a basic HTTP API with monitoring you’re ready to continue developing your HTTP APIs. As you dive into it, you might be interested in some of our other guides on HTTP APIs:
- Our Official Guide to AWS HTTP APIs covers important essentials and context around the newer HTTP APIs
- Serverless Auth with HTTP APIs is an introductory tutorial to getting started with HTTP API authorizers
- Also check out this example of a more complex multi-entity “Surveys service” using DynamoDB and Python
- Or, if you prefer Node, this example of the same multi-entity “Surveys service” using DynamoDB and Node.js | https://awsfeed.com/whats-new/serverless/announcing-http-api-troubleshooting | CC-MAIN-2021-31 | refinedweb | 483 | 62.38 |
OK so most can guess I am a student just now learning programming and starting out with C++.
I have tried many compilers and IDE's and have settled on Eclipse for now. I do not want to start a What compiler/IDE do you use, but I am trying to figure out each one I try as much as I can.
So I noticed that an assignment I was working on would compile and run on Microsoft Visual C++ 6.0 but not Eclipse.
The program was to teach us how to throw an exception. When Eclipse would hit that line in the code it would quit the program, but Visual C++ would run the program the whole way through.
here is the code (Sorry about the length)
//Written by Jason Stabins //March 19, 2008 //Chapter 16 //Programming Challange throw #include<iostream> #include<iomanip> using namespace std; //declare global size for name const int NAME_SIZE = 30; //Hold student info struct Student { char Name[30]; float GPA; int Major; }; //Prototypes Student StudentData(Student &S1); Student ChangeData(Student &S2); Student GetStudents(Student students[], int SIZE); Student printStudents(Student students[], int SIZE); int main() { Student S1, S2; //Two student structs //exceptions cout << "Enter the first students information and I will copy it into"; cout << " the second.\n DO NOT ENTER ZERO FOR THE MAJOR!\n\n"; try { S2 = StudentData(S1); cout << "Student 1 name:\t\t" << S1.Name << endl; cout << "Student 1 GPA:\t\t" << S1.GPA << endl; cout << "Student 1 Major:\t" << S1.Major << endl; cout << "--------------------\n"; cout << "Student 2 name:\t\t" << S2.Name << endl; cout << "Student 2 GPA:\t\t" << S2.GPA << endl; cout << "Student 2 Major:\t" << S2.Major << endl; } catch (char *exceptionString) { cout << exceptionString; } cout << "\nNow enter the data for student 2.\n AGAIN ZERO FOR MAJOR IS NOT ALLOWED!\n"; cout << "-------------------------------------\n"; try { S2 = ChangeData(S2); //Change s2 data cout << "Student 2 name:\t\t" << S2.Name << endl; cout << "Student 2 GPA:\t\t" << S2.GPA << endl; cout << "Student 2 Major:\t" << S2.Major << endl; } catch (char *exceptionString) { cout << exceptionString; } //declare array const int SIZE = 2; Student students[SIZE]; cout << "\nWe will now work with two arrays.\n"; try { students[SIZE] = GetStudents(students, SIZE); cout << "\nHere is the students information from the arrays\n"; cout << "-------------------------------------------------\n"; printStudents(students, SIZE); } catch (char *exceptionString) { cout << exceptionString; } return 0; } Student StudentData(Student &S1) { cout << "Please enter the students name:"; cin.getline(S1.Name, NAME_SIZE); cout << "Enter the students GPA:"; cin >> S1.GPA; cout << "Enter the students Major:"; cin >> S1.Major; cin.ignore(); if (S1.Major == 0) { throw "Bad Major!\n "; } else return S1; } Student ChangeData(Student &S2) { cout << "Please enter the students name:"; cin.getline(S2.Name, 30); cout << "Enter the students GPA:"; cin >> S2.GPA; cout << "Enter the students Major:"; cin >> S2.Major; cin.ignore(); if (S2.Major == 0) { throw "Bad Major!\n "; } else return S2; } Student GetStudents(Student students[], int SIZE) { for(int i = 0; i < SIZE; i++) { cout << "Enter the name for the #" << i + 1 << " array input:"; cin.getline(students[i].Name, NAME_SIZE); cout << "Enter the GPA:"; cin >> students[i].GPA; cout << "Enter the major:"; cin >> students[i].Major; cin.ignore(); } if (students[2].Major == 0 || students[1].Major == 0) { throw "Bad Major!\n "; } else return students[SIZE]; } Student printStudents(Student students[], int SIZE) { for(int i = 0; i < SIZE; i++) { cout << "Name:" << students[i].Name << "\t\tGPA:" << students[i].GPA; cout << "\t\tMajor:" << students[i].Major << endl; } return students[SIZE]; }
So first, am I using the technique correctly? is there a problem with the code itself?
Or is it a compiler/IDE problem?
The good news is my professor uses Microsoft Visual to grade our assignments so at least the program will run all the way through when she grades it.
Thanks for your help!
Jay | https://www.daniweb.com/programming/software-development/threads/120127/compiler-problem | CC-MAIN-2018-30 | refinedweb | 631 | 67.86 |
Wiki Language
A new programming language to be created by all the participants on this wiki. In the spirit of
WhyWikiWorks
, if you don't like any feature of this language you can always delete it, edit it, or
ReFactor
it. The intent is to achieve a language in which the
ShortestWikiContest
is won by a program consisting of a single line of code.
There are no functions. There are only binary operators. This is to enforce good factoring. If you need your operator to take more than two arguments, you'll need to do some encapsulation. For syntactic sugar, A(B) is equivalent to [0 A B].
Any operator can be made overloadable by using square brackets or
CamelCase
. Hence 1 + 1 evaluates to 2, but 1 [+] 1 is undefined until + is overloaded.
Operator definitions are persistently stored in a versioned database.
Operators can be namespaced by applying the A(B) syntax. Hence 1 My
Namespace(+) 1 can evaluate differently to 1 Your
Namespace(+) 1
others? (making
WikiAji
...)
EditText
of this page (last edited
October 16, 2003
) or
FindPage
with title or text search | http://c2.com/cgi/wiki?WikiLanguage | CC-MAIN-2014-42 | refinedweb | 185 | 58.38 |
19 July 2011 17:08 [Source: ICIS news]
LONDON (ICIS)--The acquisition of Evonik’s carbon black business by private equity firms Rhone Capital and Triton can go ahead the European Commission (EC), said on Tuesday.
The €900m ($1.27bn) deal will not significantly change the structure of the carbon black market or impede effective competition, it added in a statement.
“The transaction is good for Evonik, the future of the carbon black business and its employees,” CEO Klaus Engel said in April this year when the sale was announced.
“At the same time, this represents another major step toward a more clear-cut profile for Evonik as a leading specialty chemicals company when it goes public,” he added.
The German specialty chemicals major, which is based in ?xml:namespace>
Rhone Capital is US-based while Triton is headquartered in
Carbon black is mainly used as a reinforcing filler in the rubber industry, for tyres and industrial rubber goods, and as a pigment in plastics, inks and specialty coatings.
($1 = €0.71)
For more on Evon | http://www.icis.com/Articles/2011/07/19/9478640/european-commission-clears-sale-of-evoniks-carbon-black-business.html | CC-MAIN-2014-35 | refinedweb | 176 | 58.62 |
Book Review: Learning ExtJS 3.2 46
dulepov writes "An extensive set of features makes ExtJS a very popular framework. But a rich set of features comes with a cost: the framework is complex. While many frameworks can be learned from source, with ExtJS this is not the case. Syntax of object-oriented programming in JavaScript can be very difficult to understand and ExtJS sources demonstrate that. As a practical programmer, I think that the best way to learn ExtJS is to read a good book and follow examples inside.The ExtJS book I got was published by Packt Publishing. It is called Learning ExtJS 3.2. I consider myself an experienced ExtJS developer but there are always more experienced developers and this book was written by several of them." Read below for the rest of dulepov's review.When I looked through the table of contents, I realized that it is one of those rare books that suits all kind of readers: from beginners to advanced. The book starts from "Getting ExtJS" chapter. It discusses why ExtJS is different, how to get it, where to put it, etc. While this may seem like a chapter for beginners, I read it with interest and found several tips I will use in my next project. The opening chapter also tells what to do if the developer sees error messages. This is another advantage of the book: it is highly practical.
Further chapters describe how to use ExtJS. Here is what is covered: getting elements, creating and using forms, working with menus and toolbars, displaying and editing data with grids, using layouts for components (you can quickly rearrange objects by just applying another layout), creating tree controls, using windows and dialogs. There are also chapters about charts, effects and drag-and-drop. In addition there is a chapter about extending ExtJS. This area is probably one of the most difficult for programmers because this is not what the developer can find in the ExtJS package. The topic about extending ExtJS takes 38 pages, so it is really well covered.
Another interesting topic discussed in the book is data transfer between the browser and the server. There are traditional ways (such as AJAX) but ExtJS and the book go further discussing remote method invocation from the client on the server using ExtDirect. ExtDirect is a hot topic in the ExtJS community because it greatly simplifies communication between the client and the server. Thus the developer can save development time.
The final chapter in the book talks about useful additions to ExtJS such as HTML editor, state management on the browser side, using AIR, etc. It also describes several community extensions to ExtJS (such as TinyMCE and SwfUploadPanel) and how to use them..
Despite being experienced in ExtJS and using it since version 1.x, I found a lot of good tips in this book. It is really useful and now lives on the shelf among good programming books. So if you need a good learning resource about ExtJS, I can definitely recommend Learning ExtJS 3.2 .
P.S. Current version of ExtJS at the time of writing of this review is 3.3.1. That does not make the book obsolete at all.
You can purchase Learning Ext JS 3.2 from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
Re: (Score:2)
The problem with Packt publishing is that often, they're the ONLY ONES who produce a book on your particular subject.
Hell, Packt's published more information about Moodle's PHP API than THE MOODLE TEAM has. *grumble*
Re: (Score:2)
The problem with Packt publishing is that often, they're the ONLY ONES who produce a book on your particular subject.
Why review them then? A simple "thumbs up" or "thumbs down" should suffice, since you don't have any other choices.
From Beginners to Advanced (Score:3, Insightful).
Re: (Score:3).
Or laser focused on one tiny little thing...
Next up on
/., "Learning printf on GCC 4.1.1" in 432 pages
Current version of GCC at the time of writing of this review is 4.4.5 assuming you use Debian Stable. That does not make the book obsolete at all.
Re: (Score:1)
Need a 'publisher' preference (Score:2)
Who votes these up, anyway? Or does the firehose only get used to make us feel like we have a say in things..
ExtJS 4 Preview is already out (Score:2)
Re: (Score:2)
vs. jQuery? (Score:2)
In what category of animal does ExtJs fit vs. jQuery combined with jQuery UI?
jQuery has basically broken away from the pack [google.com] from other Javascript toolkits/frameworks/libraries. (Which is not to say they all have the same purpose.)
When you've got a lot of players in the field, and have to decide what to use, and are also thinking about new devs already being familiar with a package, going with the market leader seems to be what most people will do.
The $ and css-based selector syntax of jQuery makes it highly
Re: (Score:2)
I use 'em both all the time. If I'm coding a web app (or even a new page in an existing web app) from scratch, I'll use Ext. Once you wrap your head around OOJS and Ext's API, it's widget set is far easier to use and more consistent than jQuery. OTOH, if you are enhancing and existing page, then jQuery is tops. There are even times when *gasp* I use 'em both on the same page. No, you don't want to be loading lots & lots of library code, but for apps that don't go over the internet, and are used within a
Re: (Score:2)
Mostly because the philosophy of jQuery seems to be embracing CSS and the DOM, rather than abstracting them away. It often feels like the API that was missing from the W3C spec.
The other, bigger reason I personally have been avoiding EXT is the attitude of the core developer(s) about the GPL -- in particular, they not only switched to the GPL lately, but they have a fairly perverse understanding of the GPL which suggests that using EXT would to build a frontend would require me to open source my entire back
Re: (Score:1)
Re: (Score:2)
Right, which I mentioned explicitly:
This may have changed recently, but I distinctly remember having to strip EXT from a commercial product simply because we could not afford to either be stuck with an old version of a framework or pay licensing fees for a javascript toolkit.
What's worse, they went from not requiring that to requiring it, and they demanded more than the GPL asks for. In particular, they decided that the entire application included both the frontend and the backend. It probably wouldn't have been OK for us to GPL our frontend either, but both frontend and backend was out of the question.
Note that this disallows quite a lot of things which would otherwise make sense. For instance, if I develop some sort of backend-agnostic fron
Re: (Score:2)
I know you're an AC, and maybe I'm foolish to expect more, but your main point:
If you want to keep your product closed, buy a damn license.
I answered in the post you're replying to:
we could not afford to either be stuck with an old version of a framework or pay licensing fees...
This point is even more asinine:
I've never understood why people complain that something is not free and open when they want to build closed and non free products.
The complaint isn't that it's "not free and open", it's that it's problematic for non-GPL'd stuff, proprietary or otherwise, and even for GPL'd software which wants to connect to non-GPL'd servers. Presumably you have no problem with the fact that your web browser, whatever its license is, can connect to a server running non-free code? Would you prefer it d
Re: (Score:2)
We use ExtJS at work to do web forms.
It comes in two parts, a 'base' and the rest of it. The default base can be swapped out for jQuery (or a couple of other JS libraries) via an ExtJS 'adapter' which deals with various things including namespace issues, so use of both jQuery and ExtJS is officially blessed.
We evaluated a few others, but ExtJS's widget set seemed more comprehensive. (The killer at the time for us was a robust tree control supporting drag and drop.) Having used it for a while, it is fairly c
Re: (Score:2)
Good that ExtJS is working for you.
Back a while ago (few years ago), I would go to dojotoolkit.org, check out the demo, and wonder if Dojo was slow for everybody, or was it just me?
Re: (Score:2)
The $ and css-based selector syntax of jQuery makes it highly welcoming for devs that have to learn Yet Another Library.
God, really? The worst decision the designers of jQuery, Mootools, etc made was to all decide to use $ as their base object. There's no reason why the couldn't just call it jQuery instead, but they had to go and use one character that everyone else also decided was so cool that they would use that for everything they did too, and now everything either overwrites each other or you need to use alternative methods to access it. They should have just named their objects in a meaningful way in the first place
Re: (Score:1)
_ = function(_){ return { _: "brainfuck 2.0"}; }
_(_._)._
Re: (Score:2)
That's definitely worse, but not a whole lot worse than $($.$).$
Which itself is only marginally worse than jQuery(jQuery.jQuery).jQuery
Not exactly the most welcoming type of thing for a new user. Is it a function? Is it an object? Is it a property? Yes!
It's like they only did it for the novelty of the thing, not because it's useful in any way. My CSE 100 classes taught the benefits of meaningful variable names. These guys must have skipped the intro classes.
Lost? The title sencha to the wrong place... (Score:2)
It ain't Ext any more, and Slocum is long gone.
Ext is now Sencha:
Rob
Re: (Score:1)
ExtJS lives on as one of Sencha's products.
Re: (Score:1)
Spend much time at urban dictionary?
38 pages == good :) (Score:2)
The topic about extending ExtJS takes 38 pages, so it is really well covered.
Well, if more pages == more good, then I guess I ought to go looking for an even bigger book!
:)
I would have loved to know what it is in those 38 pages that cover the topic of extending ExtJS well. Even basic info about the 38 pages (it walks you through a single example in detail over 38 pages; it starts with a small example & builds on it over 38 pages; it covers sub-topics X, Y, and Z in detail (and X,Y, and Z are particularly important/difficult to do/etc), or whatever) would help me know if this b
Need better summaries (Score:2)
I know this is "News for Nerds", but you know what would have made this post better? A 1-sentence description of what ExtJS means. Sure, I figured it out from context that "JS" meant "JavaScript", but what's the "Ext" indicate? "Extended"? "Extensions"? Is ExtJS part of the JavaScript standard that every browser includes? Why should I care about ExtJS?
At the very least, include a link to the ExtJS [wikipedia.org] entry at Wikipedia. (At least, I assume that's the right link?)
Re: (Score:2)
Obviously ExtJS is a Linux file system type implemented entirely in JavaScript. It's built as a browser-based extension of FUSE.
At the risk of writing flamebait... (Score:2)
ExtJS sucks.
Yes, it has a lot of features. But no, it doesn't scale well when what you need is granular control of how javascript loads and executes, and it doesn't help multiple developers working on different modules. Lots of hardcoded references to global objects, long namespaces, HUGE file downloads. It just doesn't add up. Sencha needs to really step up if it wants to stay competitive with a paid product.
Way better alternatives are YUI3 [yahoo.com] and GWT [google.com]. Even ideas such as Wijmo [wijmo.com] perform better.
Re: (Score:1)
To be fair... (Score:2)
Thank You For The Review (Score:1)
Thanks for the great review. I'm really glad you enjoyed the book, and especially that you were able to get something tangible to use. It's great to see that kind of feedback. When I first started learning Ext JS there weren't any books out there. I spent hours reading through the demo code, and combing through the forums. When Packt contacted me to help complete the first book I jumped on it, knowing that there were other developers out there like me that would learn more (and faster) from a b | https://books.slashdot.org/story/11/03/16/1324238/Book-Review-Learning-ExtJS-32 | CC-MAIN-2016-36 | refinedweb | 2,226 | 72.05 |
Intro: Smart Skull
Well, This is a fun project to make, and the spin offs and personalization make this fun for all.
Whats that? Well what exactly is it! I'm glad you asked! This is no ordinary skull, this is a singing, speech recognizing, message-able, portable, talking skull with charm named Calvin Cium, or Cal for short, get it?
Now don't let that deter you. This is cheap (under $150 US), fun and medium level difficulty build project using a raspberry pi and arduino and other stuff I had laying around (you'll probably have to buy some). The code varies in difficulty but i recommend being intermediate with python and linux systems.
And as said have fun with your own spin offs and feel free to tell me about them ( I'm really hoping someone does Handles from Dr. Who, I couldn't find a good CAD).
Right lets start!
Step 1: Get the Parts!
Alright, to play Dr. Frankenstein here we need to dig up some body parts, or rather just the head parts.
- Brain: Raspberry pi 3 (need wifi) + Arduino Nano (optional but I used it)
- Eyes: 2 LED's (i used multi-color RGB LED)
- Jaw Muscles: Standard Servo
- Bones: 3D printer and filament
- Vocal Chords: 12 watt Speaker + Drok 5 volt, 8 Watt Audio Amplifier
- Ears: USB microphone
- Nerves: Regular Wires +female 3 pin (tail end of servo) Wires + male to male aux cable (short) + MicroUSB +Mini USB
- Blood: 10000mah portable battery + Power supply (3 amp, 5v) (I used a spare switch powersupply and a drok buck converter)
- Misc: Main Line Switch, male end power cord, thick wire (2-4 mm, 8 ''), PVC pipe, box, dc barrel plug and jack, paint, screw bottle top and cap (feel free to customize, or find alternatives for these parts)
This is a lot, but don't let it get you down. There are lots of alternatives, I just recommend you look for small stuff. I will explain the purpose of each part when I install it so you can choose alternatives, most of this I collected over many years so I put it to use rather then buy.
Step 2: The Bare Bones of It
I printed this in 6 different parts, There are 5 CAD files (I sliced one into two to fit my printer and I kept knocking the teeth out so the split was fortunate). Right of the bat I want to thank Dantana for the great CAD files. The link to thingiverse and the files is . Feel free to find others, but the key is an empty cranium. Then print and wait. I also recommend printing the bottom half of a pi case. Any will do and keep it to the side for now.
I recommend hot gluing the pieces together now (NOT THE SKULL TOP) and molding the hotglue to smooth over the seams.
Next paint. Regardless of the filament color I recommend painting it (spray paint works, but by hand is a lot better) I went for white because I like a white skull over a more realistic color.
With hindsight I also advise you know to put pins or strong small magnets in the skull to hold the top part of the skull in place. The part that you didn't glue, right, RIGHT? I put pins in the front and back, as shown in the images with the holes being in the top part. Magnets take more space so if you use those put them on the sides.
To put the jaw in i used a paperclip and made a small hole in the jaw. The 4th pic shows this on the jaw's left. then hotglue the paperclip in the inside of the skull. This is the hardest part and requires a lot of patience because its hard to see and in a small crevice. Refer to the 5th picture to get an idea of the positioning. The jaw should now move freely.
Finally the bottle cap. Put a hole for 2 wires in it and glue it excessively in place, (trust me there's a reason teeth kept getting knocked out). Place it in the bottom of the skull where the spine would connect. This is the connection to the charging station and base. Put 2 wires for power through the hole so there is about a 1.5 ft inside and 3 in in the cap outside and glue it in place. (No pics of this b/c hindsight)
That's it now you got the bare bones of it.
Step 3: Jaws
Time to give this guy some bite. Get a servo and that thick wire. The servo moves the jaw by pulling the wire back and forth which is connected to the back of the jaw. In the first image you can see the servo connected to the wire at the bottom of the skull. The next two images show the wire painted white and where it enters the skull and connects to the jaw.
Drill a hole in the jaw near the back and bend the wire in a z like shape, but perpendicular. Then thread it through the skull and attach it to the jaw through the hole. A blob of solder on the end will keep it from coming out. Next attach it to the servo and position it at the bottom of the skull. try to keep it as flat to the bottom as possible and make adjustments as necessary. Try turning the servo before you glue it to the skull and bottle cap to make sure you get the desired range of motion. The more glue on the cap the better.
Step 4: LED Eyes
For this get your LEDs and wires. If you desire RGB LEDs I recommend the 3 pin female wires from an old servo. Put the 3 pin wires and a single regular through the oracular nerve cavity in the skull with the jack ends inside the skull. Then solder the color led pins to the three wires and ground to the single. Do the same for the other eye and connect the grounds, that's all.
At this point you can pull the wires to get the LEDS further back into the oracular cavity and glue it there. Feel free to paint the wires outside now to make it look better.
Step 5: If I Only Had a Brain!
Now one can put in the rpi. Before that I recommend you solder wires to the 5v and ground on the bottom of the pi. and leave about a foot loose. Now make a small hole in the case and put the pi in with wires coming out of the case. Place the case on the servo or as low down as you can get it and glue it in place. Now you can connect the wires of the eyes to the pi and the servo too.
I had problems getting the servo to work on the pi and startup problems too, so i got an arduino nano and attached to the pi with a usb cable, and connected the servo to that. In the first image you can see the two servo cables are actually the LED eye cables and the black wire in the middle is ground the red wire is power to the arduino nano (redundant). the green wires connect to the power of the audio amp (to be mentioned later).
In the second picture you can see the power cable from the cap and the pi power wires (both red and black coiled) the stripped down spark-fun micro cable (red usb) (saved room) connecting to arduino nano. The servo is soldered to the nano
In the third picture you can see a mess of wires connected to power (the green screw jack) including the power of the servo. The servo is connected to the nano pins (black connector).
NOTE: Wires become quite a mess with so many components, twisting and labeling help but it still is a mess.
Step 6: Ears for 'earing.
Well, he doesn't actually have ears, rather, the microphone is ah, stuck up his nose (see pic 1, the black spot at the top). I got a usb mic and stripped it so all it was was a usb wire and a microphone on a circuit board (pic 2) I then desoldered the microphone and soldered longer wires to it. I ran the wires up his nose into the brain area and re-soldered it to the circuit board. Then i glued the mic inside the nose were no one could see it, the further from the electronics in the cranium the better. and plug it into the rpi (clear plastic usb under dongle in pic).
Thats that.
Step 7: Robots Deserve a Voice
Right. In order to hear Calvin speaking across the room I chose an 12 watt speaker, a bit big, but i made it fit. I also put tape for insulation. I bought a 8 watt drok audio amp from amazon (pic 2), set it to max and wired it up. This amp runs of 5v so, make sure you get one that runs on 5v since that's your only voltage here! Connect to pi audio out with aux, you may need to solder a audio jack or just cut, strip, and solder the aux cable.
Dont worry if its too loud, or a bit too quiet. there are programs that help boost volume on pi and the skull cap will muffle it a bit.
Step 8: Energize
Time to give it it's life blood. There is no fancy wiring to the battery pack, charging itself is ok. I took mine apart to make it fit, it is a bit old so bigger than today's 10000mah. Then all positives and all negatives connect respectively, including the cap. Then in the cap you can solder a dc jack or plug, your choice.
Then just plop it on top. Organization of this stuff is really done on a case by case basis depending on how things fit. I didn't glue this because I couldn't and tried multiple orientations. In the end i glued the speaker to the battery pack and it fit well (pic 3/4).
Step 9: Charging Station
Alright, this is it's stand, its home so make this look good. I just got a box put a 24v 8 amp switch power supply and connected it to a drok buck converter set to 5.2v. This crazy power supply is so it can charge, speak, and run at the same time. expect a peak draw of 3.5 amps and you'll be good. You can put 2 pi power supplies together for 5 amps. you might be able to get by with just a single rpi power supply, but I like to play it safe. I then glued excessively a pvc pipe and the top of a bottle and painted it white. I ran the wire up and attached the complimentary jack or plug depending what you do and called it a day.
Like i said make it pretty, pink duck tape and a nice plaque with its name and it's done.
Now you can screw your skull on and have it charge.
Step 10: Setup Pi
Right now. Your robot companion is almost done now to set up the os and programs.
So the basic is to get it auto connect to wifi, ssh capable, default aux out, not hdmi.
Test speakers. make sure that all works.
The Coding here can become quite complicated, but i'll walk you through some of it.
start by installing the folowing
sudo apt install espeak festival sox mpg123 ffmpeg (speech synthesis and audio manipulation)
sudo -H pip install fbchat (fb client so you can msg it)
Step 11: Startup Scipt
So we want it so that whenever it restarts it auto runs the main script.
- make a main script
- cd ~
- sudo touch runSkull.sh
- sudo nano /etc/rc.local
- add the following
export AUDIODEV=hw:1,0 # set default audio out to aux exec 2> /tmp/rc.local.log # send stderr from rc.local to a log file
exec 1>&2 # send stdout to the same log file set -x # tell sh to display commands before execution _IP=$(hostname -I) || true if [ "$_IP" ]; then printf "My IP address is %s\n" "$_IP" python /home/pi/IPStartup.py # Script sends IP addr to me every startup via messenger hostname -I | festival --tts # speak IP addr (optional) bash /home/pi/runshull.sh & # auto start script fi exit 0
Step 12: IPStartup.py Script
This Script automatically sends me the IP addr during startup.
Type :
cd ~ sudo nano IPStartup.py
#IP Startup Script
# Created by Wolfgang Huber 1/13/2017
import fbchat
import socket
import netifaces as ni ip = ni.ifaddresses('wlan0')[2][0]['addr'] # get ip addr as string
client = fbchat.Client("username", "password")
friends = client.getUsers("My Name Here") # return a list of names friend = friends[0] sent = client.send(friend.uid,'IP: '+str(ip)) #send msg if sent: print("Message sent successfully!")
Step 13: Main Script
The main script is above with its dependent library.
They do a lot of things which i wont explain to too much detail.
FBScanner runs in the background and constantly looks for msgs, storing them in a file.
Action Processor processes those msgs for commands like to sing(play mp3) download youtube audio, say a phrase, change eye color and volume and voices.
Skull.py actually updates the skulls states dealing with the IO.
There are many files that dealing with manipulating info, audio, and more.
This project can be scaled incredibly so it can do so much. Speech recognition is done with julius, or snowboy. A tutorial on that can be found on... who does a great job on setup for julius on pi.
Step 14: All Code
The coding on this project is extensive, confusing, and definitely advanced for all the bells and whistles. But for simple stuff like speaking words and eye color the previous scripts will do with a few bugs naturally with your own mods and getting the necessary packages installed.
For those of you who want all the dice I give you all the code and work which definitely can be improved on. Its up to you to explore the code and all the details and comments. And get pulseaudio, also and mics to work (nightmares will ensue, but so worth it).
Have Fun!
Ill do my best to answer comments below, and share!
Runner Up in the
Internet of Things Contest 2017
3 Discussions
Reply 1 year ago
You should hear it sing Spooky Scary Skeletons!
1 year ago
That is awesome! Something like this would make a great project for people learning some of the more advanced things that you can do with microcontrollers.
Reply 1 year ago
Thanks. | https://www.instructables.com/id/Smart-Skull/ | CC-MAIN-2018-47 | refinedweb | 2,508 | 80.31 |
hi guys
Am new to spark ans scala,i have csv files that i want tomerge in the same csv file or dataframe i want just to handle them as if they are only one file
Any help thanks
Created 02-23-2017 03:46 PM
For Spark 1.6+
What you need to do is load all the csv files with a for loop in a batch processing manner. As u inject the same schema into each of them, convert them to a dataframe; union each of the dataframe in another var. In that way, all of them will be just one dataframe. Following code does the work. You can follow my code and test them in spark-shell
Contents of file1.csv
x,y,z
Contents of file2.csv
a,b,c c,d,e
Store them in a hdfs directory and change the path according to yours in the following code where it says 'hadoopPath'
NOTE: While working on spark-shell, don't paste all code at once. It yields error sometimes. Paste once bunch at a time.
import org.apache.spark.{ SparkConf, SparkContext } import org.apache.spark.sql.functions.broadcast import org.apache.spark.sql.types._ import org.apache.spark.sql._ import org.apache.spark.sql.functions._ val sqlContext = new org.apache.spark.sql.SQLContext(sc) import sqlContext.implicits._ // -- EDIT YOUR SCHEMA HERE case class Test( attr1:String, attr2:String, attr3:String ) import org.apache.hadoop.fs.{FileSystem,Path} /* initialize empty dataframe (This dataframe will be the final one where we union all others) */ var all_df = Seq.empty[Test].toDF // -- EDIT YOUR HDFS PATH HERE 'hadoopPath' val files = FileSystem.get( sc.hadoopConfiguration ).listStatus(new Path("/hadoopPath/")) // -- Function for all operations to be executed in each file iteration def convertToDFandUnion(file: String) = { val x = sc.textFile(file); val x_df = x.map(_.split("\\,")) .map(a => refLineID( a(0).toString, a(1).toString, a(2).toString )).toDF x_df.show() // This is where we make each dataframe into one all_df = all_df.unionAll(x_df) all_df.show() } // -- Loop through each file and call the function 'ConvertToDFAnd Union' files.foreach(filename => { val a = filename.getPath.toString() convertToDFandUnion(a) })
thanks , but the problem i do not know the schema of csv so i can't insitialize x_df. any help please thank you
Created 02-24-2017 05:50 PM
you don't need the schema, as long as u know the number of columns. on my code I put attr1,attr2,attr3 as I had 3 columns in the data. If you have 15 columns for example, you can go from attr1,attr2....upto attr15 etc.
the data are in local file system , they all have the same header i want to get one csv file with this header , is there a solution using spark-csv or any thing else nwant to loop and merge them any solution please thanks | https://community.cloudera.com/t5/Support-Questions/erge-csv-files-in-one-file/m-p/122008 | CC-MAIN-2020-50 | refinedweb | 481 | 67.15 |
Using unsafe tricks to examine Rust data structure layout
Introduction
[Edit (20/9/2016)] Please check this Reddit discussion where some readers have pointed out errors
Whether you are learning Rust or C, it is important that you have an understanding of how various data types are represented in memory. For example, when learning C, if you are not able to sketch on a piece of paper how the array ‘a’ is represented in memory, you will have a hard time understanding the meaning of expressions like *a, **a etc (array decaying rules make this tricky in C):
main() { char *a[] = {"abc", "def", "ijk"}; }
In C, it is possible to learn more about the representation of the array simply by printing out the value of various expressions involving ‘a’. One can also think of using ‘gdb’ to peek into memory. Finally, it is not too difficult to read and understand the assembly code generated by the C compiler.
Things are not that easy in Rust. References in Rust are not the free spirits you encounter in the land of C - they have a lot of restrictions imposed on them (for an excellent reason, the safe execution of your code). But these restrictions will have to be lifted if you wish to write very low level Rust code (say operating system kernels, embedded systems) - Rust does indeed provide an ‘unsafe’ mechanism using which you are free to do all the wild and crazy things you can do with pointers in C. Needless to say:
this should be used with extreme caution if you do not wish your Rust code to blow up the way your C code sometimes does! Unless you are writing very low level code, you will seldom have the need to use ‘unsafe’.
In this article, we will use the ‘unsafe’ mechanism in Rust purely for educational purpose - to understand how Rust lays out objects of different types in memory.
Getting the address of a variable
If you expect the following program to print the address of ‘a’, you are in for surprise:
fn main() { let a = 10; let b = &a; println!("{:?}, {:?}", b, b+1); }
You will see the value of ‘a’ and ‘a+1’ (that is, 10 and 11) in the output.
Even though ‘b’ contains address of ‘a’, when you try to access ‘b’, Rust will automatically derefer the pointer and give you the pointed-to value (ie, Rust will automatically do *b when you make use of ‘b’ in an expression like ‘b+1’ - we need not write *b explicitly).
This behaviour is very convenient because that is what you really want to do with a pointer: access the pointed-to object.
It is also safe; if you are able to perform arithmetic on the memory address and then derefer the modified address (say do something like *(b+1)), there is no guarantee that the modified address (b+1) is pointing to a valid memory location.
But what if you really want to see the address of ‘a’?
fn main() { let a = 10; let b = &a; println!("{:?}", b as *const i32); }
This is the output I got on my system (x86_64 Linux):
0x7ffeebf83a24
[Edit(14/09/2016)] You can use
println!("{:p}", &a)
to see the address of ‘a’. Thanks Manish for pointing this out!
Rust has a “raw pointer” type, written as “* const T” (where T is some type like say i32). None of the safety checks associated with ordinary Rust references are applicable to raw pointers. In the above example, we are asking Rust to interpret ‘b’ as a raw pointer, which will let us see the actual address stored in ‘b’.
Here is another way to write the same program:
fn main() { let a = 10; let b: *const i32 = &a as *const i32; println!("{:?}", b); }
The explicit declaration of the type of ‘b’:
let b: *const i32 = &a as *const i32;
is not really required as Rust can easily infer the type; we need to only write:
let b = &a as *const i32;
Dereferencing raw pointers
Here is a fun program in C:
int main() { int *p = 0; int k = *p; }
This will of course give you a segfault as you are trying to derefer the null pointer.
Let’s try to write an equivalent one in Rust:
fn main() { let p = 0 as *const i32; }
This will compile and run without any problem; you can store 0 in a raw pointer.
Now let’s try this one:
fn main() { let p = 0 as *const i32; let k = *p; }
We are now trying to access the memory location whose address is 0; you will find that the code will not compile. We get an error which says:
a5.rs:3:13: 3:15 error: dereference of raw pointer requires unsafe function or block [E0133] a5.rs:3 let k = *p;
Rust does not allow us to derefer raw pointers because it is a dangerous operation with the potential for undefined behaviours; a raw pointer is not guaranteed to contain valid memory addresses.
If you are really stubborn, Rust will let you do this:
fn main() { let p = 0 as *const i32; unsafe { let k = *p; } }
The above program will compile properly; and you get a wonderful segfault when you run it!
Any code which derefers a raw pointer should be explicitly marked as “unsafe”; the big promise of Rust is memory safety without using a garbage collector - you don’t get that safety in unsafe blocks.
Why does Rust need unsafe?
Say you are writing code which runs on a microcontroller. Your code needs to write some data to an I/O port. Most often, I/O ports are memory mapped, that is, you can access the I/O port by reading from or writing to specific memory locations. If you have an I/O port which is mapped to address 0x7134af23 and you want to write 1 byte of data:
int main() { unsigned char *c = (unsigned char*)0x7134af23; *c = 'A'; }
Unless you have raw pointers and “unsafe” blocks, Rust code will not be able to do low level manipulations like this which are essential for writing operating systems and embedded systems code. Rust is targeted to be a replacement for C, so it should be capable of doing everything that is possible in C. As the dangerous parts of the code are explicitly marked as “unsafe”, it becomes easy to identify and more thoroughly audit such blocks of code. Redox OS, an operating system written in Rust, has demonstrated that it is possible to write a large percentage of the kernel code in safe rust.
Looking at the memory layout of 32 bit integers
Here is a C program which declares a 32 bit integer and tries to read it back as four independent bytes:
#include <stdint.h> #include <stdio.h> int main() { uint32_t a = 0x12345678; uint8_t *b = (uint8_t*)&a; printf("%x, %x\n", *b, *(b+1)); printf("%x, %x\n", *(b+2), *(b+3)); }
If you run the code on a little endian system, you will get 0x78, 0x56, 0x34 and 0x12.
Here is how you can do the same in Rust:
fn print_bytes(p: &u32) { let q = p as *const u32; let r = q as u64; for i in 0..4 { unsafe { print!("{:x}, ", *((r + i) as *const u8)); } } println!(""); } fn main() { let a:u32 = 0x12345678; print_bytes(&a); }
Here are some points to be noted:
- It is not possible to cast a &u32 as *const u8
- You can cast a *const u32 as *const u8
- You can’t do addition on a raw pointer
- We have 64 bit addresses, so we need a u64 to store an address
- It is possible to cast a u64 as a raw pointer
Using the “transmute” function
It is possible to get the same effect using “transmute”:
use std::mem; fn main() { let a:u32 = 0x12345678; let b:[u8; 4]; b = unsafe{mem::transmute(a)}; println!("{:x},{:x},{:x},{:x}", b[0],b[1],b[2],b[3]); }
The “transmute” function takes in an object of type u32 and returns a view of that object as an array of four 8 bit values, which gets copied to b. You can think of it as a kind of copy operation which copies all the bits of an object of type T1 to an object of type T2. Both T1 and T2 must have the same size and alignment.
Warning: Use of the “transmute” function is discouraged. As Rust beginners, most of us will have absolutely no need for this operation in the code that we write. I am using it here purely for educational purpose.
Understanding Vector Layout
The figure below shows how a vector is mapped to memory; we have a triplet of values: (pointer,capacity,length) on the stack. The space for the contents of the vector is allocated on the heap. The “length” attribute specifies the actual number of elements in the vector, “capacity” represents the actual amount of space allocated (which may be larger than the number of elements in the vector) on the heap and “pointer” points to the heap location where the contents of the vector are stored.
Here is a program which reads the stack locations representing a vector and converts the data into three u64 values:
use std::mem; fn main() { let p:[u64;3]; let mut v:Vec<i32> = vec![]; v.push(10); println!("{:?}", &v[0] as *const i32); p = unsafe {mem::transmute(v)}; println!("{:x}, {:x}, {:x}",p[0], p[1], p[2]); }
Here is the output I got on my machine:
0x7f32bea1d000 7f32bea1d000, 4, 1
The heap allocated block starts at 0x7f32bea1d000, the vector has enough space to store 4 elements even though only one element is actually stored.
And now a small trick; let’s change the stack representation of a vector within an unsafe block:
use std::mem; fn main() { let p:[u64;3] = [0, 1, 1]; let mut v:Vec<i32> = vec![]; v = unsafe {mem::transmute(p)}; println!("{:?}", v[0]); }
The program changes the stack representation of the vector; the pointer value is now 0 and capacity and length are both 1. The program will segfault when you try to print v[0] because it will try to derefer 0.
Once again: these are horrible tricks which shouldn’t be employed in production code!
Exercise: You can write a program to understand how Strings are represented in memory.
The transmute operation requires that you know the exact size of the object to be transmuted. Here is how you can find out the size of a Rust data type:
use std::mem; fn main() { let v:Vec<i32> = vec![1,2,3,4,5]; println!("{:?}", mem::size_of::<Vec<i32>>()); }
I got 24 (bytes) on my machine (Linux, x86_64). You can think of this as three 8 byte values.
Representing slices in memory
A slice is just a pair of values stored on the stack; a pointer to some element of an array and a length.
Here is a program which prints out the stack representation of a slice:
use std::mem; fn main() { let v:[i32; 8] = [1,2,3,4,5,6,7,8]; let t = &v[3..7]; let r:[u64;2]; println!("{:?}", &v[3] as *const i32); r = unsafe {mem::transmute(t)}; println!("{:x}, {:x}", r[0], r[1]); }
Here is what I got on my machine:
0x7ffcb9b46284 7ffcb9b46284, 4
It is evident that the slice has length 4 and that it is pointing to the element at index 3 in the array.
Representing sum types
Let’s first find out the size of a simple enum:
use std::mem; enum Color { Red, Green, Blue, } fn main() { println!("{}", mem::size_of::<Color>()); }
The program gives 1 as the output on my machine. So we shall assume that Red, Green and Blue are encoded simply as numbers 0, 1 and 2. Let’s check:
use std::mem; enum Color { Red, Green, Blue, } fn main() { let c = Color::Blue; let d:u8; d = unsafe {mem::transmute(c)}; println!("{}", d); }
I am getting the output 2 when I run this on my system.
What if your sum type is more complex, like this:
enum Color { Red(u32), Green(u32), Blue(u32), }
Let’s find out the size of this enum.
use std::mem; enum Color { Red(u32), Green(u32), Blue(u32), } fn main() { println!("{}", mem::size_of::<Color>()); }
I am getting 8 as the output.
A variable of type Color can assume only one of 3 possible values at any point in time. Each value needs 4 bytes of storage, but because you are storing only one value at any point in time, the compiler allocates only 4 bytes instead of 12.
But now there is a problem. How does Rust represent the fact that what is stored in this common 4 byte storage area is a “Red”. Or, a “Green”, or a “Blue”? It will be impossible to execute “match” operations without this knowledge.
The solution is simple. Allocate another 4 byte location as a “tag”. If the tag value is 0, the next 4 byte location represents a Red, if the tag is 1, it is a Green and so on … (only 1 byte of the tag space will be used, the other 3 bytes are most probably used as a padding to meet the alignment restriction for the 32 bit integer Red/Green/Blue data field which comes next).
Let us write another program to find out.
use std::mem; enum Color { Red(u32), Green(u32), Blue(u32), } fn main() { let c = Color::Green(0x12); let d:[u32;2]; d = unsafe {mem::transmute(c)}; println!("{:x}, {:x}", d[0], d[1]); }
The output I am getting is:
1, 12
The tag field value is 1 (ie, Green) and the data field value is hex 12.
Exercise: Write similar programs to find out the memory layout of product types: tuples and structures.
Conclusion
All the basic Rust data types have a simple representation in memory; the common theme is to allocate everything on the stack without any layers of indirection. The only exception is when you need data structures which grow dynamically; in this case, the dynamic part is allocated on the heap and the stack allocated part will have a pointer to this heap allocated area plus some other information (like length, capacity etc).
When a data structure is passed to a function, you have two options: either copy the stack allocated part to the function or to pass a simple pointer to the beginning of the stack allocated part.
Experiment more with Rust and have fun! | https://pramode.net/2016/09/13/using-unsafe-tricks-in-rust/ | CC-MAIN-2022-33 | refinedweb | 2,437 | 64.95 |
Odoo Help
This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.
Autofill gives ID number not the field value?
I have many2one fields in one class. I have to auto fill this fields. I have written an on change function to auto fill. But the auto filled value are ID since it is a many2one field. How can I get the value in the many2one field.
First class:
class vansdent(osv.osv):
_name = "vans.dent"
_description = "Vans Dent"
_rec_name = 'service'
_description = "Vals Dent"
_columns = {
'year': fields.many2one('dent.year', 'Year', required=True, select=True),
'make': fields.many2one('vals.make','Make', required=True),
'model': fields.many2one('car.model','Model', required=True, select=True),
'service': fields.char('Service ID', required=True),
'customer': fields.char('Customer', required=True),
}
Second class:
class vansdent_bill(osv.osv):
_name = "vansdent.bill"
_description = "Vans Dent"
_columns = {
'name': fields.char('Year', required=True),
'make': fields.char('Make', required=True),
'model': fields.char('Model', required=True),
'customer': fields.char('Customer', required=True),
'serviceid': fields.many2one('vans.dent', 'Service ID', select=True),
}
Onchange function:
def vansdent_service(self, cr, uid, ids, serviceid=False, context=None):
res = {}
if serviceid:
service_obj = self.pool.get('vans.dent')
rec = service_obj.browse(cr, uid, serviceid)
res = {'value': {'name': rec.year, 'model': rec.model, 'make': rec.make, 'customer':rec.customer}}
else:
res = {'value': {'name': False, 'model': False, 'make': False, 'customer': False}}
return res
XML :
<field name="serviceid" on_change="vansdent_service(serviceid)"/>
The auto fill value is filled with
dent.year(1,)
vals.make(123,)
car.model(144,)
How can I solve this?
I see the "year", "make", "model" in "vans.dent" are many2one fields, so I think you could change this
from: res = {'value': {'name': rec.year, 'model': rec.model, 'make': rec.make, 'customer':rec.customer}}
to: res = {'value': {'name': rec.year.name, 'model': rec.model.name, 'make': rec.make.name, 'customer':rec.customer}}
(rec.year.name, ....)
and in 'dent.year', 'vals.make', 'car.model' we have to have a filed named `name`
hope see in table that in many2one field the value is saved as ID number not as values. How can I save the many2one value as the value shown in UI and not as ID number in table | https://www.odoo.com/forum/help-1/question/autofill-gives-id-number-not-the-field-value-105128 | CC-MAIN-2016-50 | refinedweb | 383 | 63.76 |
In this tutorial, we will discuss the concept of dynamic memory allocation in C/C++. Memory is one of the major resource on our modern computing system, especially RAM. Because programs under execution store in RAM and it is a limited resource. By using dynamic memory allocation, we can efficiently allocate memory for a program on run-time. By the end of this tutorial, you will be able to know how to dynamically allocate memory using malloc(), realloc(), and calloc() functions available in C programming.
Firstly, we will see the memory layout of a C memory, the way the operating system stores a program in memory, and what are different memory segments of a program.
C Program Memory Layout
The memory layout of a program can be divided into four segments. One segment of the memory is assigned to store the instructions that need to be executed. Another section stores all the static or global variables that are not declared inside a function. The next section of the memory stores all the information of function calls and all the local variables. This is known as the ‘Stack.’ The local variables are declared inside a function. Their lifetime is only till the function is executing.
We have previously looked upon the concept of Stack in the programs memory in the following article as well: Pointers as Function Arguments or call by reference in C
The fourth segment is known as the ‘Heap.’
The memory set aside for these three segments: code segment, global variable segment and the stack is defined during program compilation but we can allocate memory on the heap segment during program execution.
STACK
We will first look at how these three segments are used when a program executes. Let us have a look at a simple C program.
#include <stdio.h> int output; int square(int x) { return x*x; } int SQUAREofSUM(int x, int y) { int z = square(x+y); return z; } int main() { int num1 = 3, num2 = 2; output = SQUAREofSUM(num1,num2); printf("\nFirst Number: %d",num1); printf("\nSecond Number: %d",num2); printf("\nSquare of Sum of Numbers: %d", output); }
We have a function square() that gives us the square of a number. We have another function SQUAREofSUM() that takes in two integer arguments ‘x’ and ‘y’. It returns us the square of (x+y). In the main() method we are calling the SQUAREofSUM() method and passing it two arguments ‘num1’ and ‘num2’.
Code Output
Now let’s see the code output. After the compilation of the above code, you will get this output.
Stack Memory
Let us now see what happens in the memory when this program is executed. Below you can view the section of the application’s memory showing the Stack segment and the global variable section.
When the program starts executing, firstly the main() method is invoked .When the main() method is invoked some amount of memory from the stack is allocated for the execution of the main(). The amount of memory allocated on the stack for execution of main() can also be called the stack frame for the main(). All the local variables, arguments and the information where this function should return back to, is stored within the stack frame. The size of the stack frame for a method is calculated when the program is compiling.
The ‘output’ variable is a global variable so it is found in the global variable segment.
When main() calls SQUAREofSUM() method then a stack frame is allocated for the call to SQUAREofSUM(). All these local variables (x, y and z) will be found in this particular stack frame.
The SUMofSQUARE() then calls the square() function so another stack frame for square() will be created with its own local variables. At any time during the execution of the program, the function at the top of the stack executes and the rest are at a pause, waiting for the above function to return something, and then it will resume execution.
The ‘output’ variable is a global variable as it is not declared inside a function so it is found in the global variable segment. We can access this variable anywhere. In this particular statement, we call the SQUAREofSUM() function.
output = SQUAREofSUM(num1,num2);
The SQUAREofSUM() function in return calls the square() function in the following statement.
int z = square(x+y);
Our call stack consists of the three methods as shown above. As soon as the square() function will return, it will be cleared from the stack memory. So now the SQUAREofSUM() function will resume.
Once again when SQUAREofSUM() finishes, the control will come to this particular line:
output = SQUAREofSUM(num1,num2);
The main() method will resume again. The printf() function will be called and therefore it goes to the top of the stack.
After printf() will finish, the control will go back to the main() method. This will cause the main() to finish executing. As the main() will finish, the program will also complete. In the end, the global variables will also get removed.
Note: We should assign a variable as global only if it is needed multiple places in multiple functions and is also required for the whole lifetime of the program. Otherwise, it is a waste of memory to keep a variable for the whole lifetime of the program execution.
Limitations of Stack
When our program starts, the operating system allocates some amount of reserved space for the stack e.g. 1MB. The actual allocation for the stack frame and for the local variables happens from the stack during runtime. If our call stack grows beyond the reserved memory for the stack for e.g. if method A() calls method B(). B() in return calls C() and we go on calling different functions this will exhaust the whole space reserved for the stack. This is known as stack overflow. In the case of stack over flow, our program crashes. This can usually happen if there is an issue in the code for recursion where it runs indefinitely.
The memory set aside for stack does not grow during runtime. The application can not request for more memory for the stack. So for e.g. if the memory allocated for the stack frame is 1MB then if the allocation of the variables and functions in the stack exceeds 1MB then the program will crash. Furthermore, the allocation/deallocation of memory onto the stack happens by a set rule. When a function is called, it is pushed onto the stack, when it finishes, it removed from the stack. It is not possible to manipulate the scope of a variable if it is on the stack.
Another limitation is that if we need to declare a larger data type for e.g. an array as a local variable then we need to know the size of the array at the time of compilation. If we have to decide how large the array will be based on some parameter during runtime then it is a problem with the stack. For all these problems like allocating large amount of memory or keeping variables in the memory till the time we want, we have a HEAP.
Note: Stack is an implementation of stack data structure but heap is not an implementation of the heap data structure.
HEAP Memory Segment
Unlike the stack, the application’s heap is not fixed. Its size can vary. During the lifetime of the application, there is no set rule for the allocation or deallocation of memory. The user can control how much memory to use from the heap and till what time to keep the data in the memory during the application’s lifetime. Heap can grow as long as you do not run out of the memory on the system itself. Heap is also known as ‘free pool of memory’ or ‘free store of memory.’ The implementation of heap by the operating system, language runtime or the compiler depends upon the individual system and can vary from system to system.
An abstracted way of looking at the heap as a user is that it is a large free pool of memory available to us that we can use according to our needs. Heap is also referred to as ‘dynamic memory.’
Dynamic Memory Allocation
Common Functions for Allocating/Deallocating Memory
To use dynamic memory in C programming, we require four major functions. These are known by the following names:
- malloc
- calloc
- realloc
- free
The first three functions are used to allocate memory and the last one is for deallocating memory on the heap.
- malloc is the most frequently used library function for dynamic memory allocation. The definition of this function is as follows:
void* malloc(size_t size)
This function takes in the size of the memory block in bytes as an argument. The data type size_t stores only zero or positive integer values. It is an unsigned integer data type. This function returns a void pointer that gives us the address of the first byte of the block of memory it allocates.
- calloc function is slightly different from malloc. The definition for calloc is as follows:
void* calloc(size_t num, size_t size)
calloc also returns a void pointer but takes in two arguments instead of one. The first argument is the number of elements of a particular data type. The second argument is the size of the data type. So, with malloc if we have to declare an array for e.g. an integer array of size 5, we would say malloc(5*sizeof(int)) but with calloc we would say calloc(5, sizeof(int)). Here, the first argument is the number of units of the data type you want and the second argument is the size of the data type in bytes. There is one more difference between malloc and alloc. When malloc allocates some amount of memory, it does not initialize the bytes with any value. Hence, garbage values will be found instead. Whereas if you allocate memory through calloc, then it sets all the byte positions with the value zero.
- realloc is used if we have a dynamically allocated block of memory and we want to change the size of that block of memory. The definition of realloc is as follows:
void* realloc(void* ptr, size_t size)
The realloc function takes in two arguments. The first argument is the pointer to the starting address of the existing block and the second argument is the size of the new block. If we want the size to be larger than the previous one, then the system may create a new block and copy the previous data that was written there into the new block. If contiguous memory is already available with the existing block then the existing block may be extended.
You can also use these four functions in C++ programming as well but mostly the following two operators are used instead. To use dynamic memory in C++ programming, we require two operators. These are:
- new
- delete
Example C Code: Allocate Dynamic Memory for integers
Let us look at an example C program to understand the concept of heap in a better way.
#include <stdio.h> #include <stdlib.h> int main(){ int x; int *ptr; ptr= (int*)malloc(sizeof(int)); *ptr = 50; ptr= (int*)malloc(sizeof(int)); *ptr = 150; }
The integer variable ‘x’ is declared in the main() method hence this is a local variable. It is placed in the stack. Memory for this particular variable ‘x’ will be allocated from the stack frame of the main() method.
Malloc() Integer
If we want to store an integer on the heap we have to first reserve some space on the heap. To reserve some space allocated on the heap, we need to call the malloc function, as shown in the following line:
int *ptr; ptr = (int*)malloc(sizeof(int));
The malloc function asks for how much memory to allocate on the heap in bytes. This is determined by the sizeof() operator according to its operands data type.
For this example, our operand is an integer that takes up 4 bytes of memory. Hence, one block of 4 bytes will be reserved on the heap. The malloc will return a starting address of this particular block. Moreover, malloc returns a void pointer. If for example the starting address of the 4 bytes block is 300, then the malloc will return us 300. This way we have a pointer to an integer ‘ptr’ that is a local variable to the main(). So, ‘ptr’ will be allocated in the stack frame of the main() method.
In this particular statement we have performed typecasting. This is because malloc returns a void pointer and ‘ptr’ is an integer pointer. ‘ptr’ stores the address of this block of memory which was 300 in our case. In this way we have got a block of memory on the heap that we want to use to store an integer.
To store a value in this memory block we will use the following statement:
*ptr = 50;
Here we are storing the value ’50’ in this memory block. We have derefrenced the location using the pointer ptr and then set it equal to ’50.’
The only way to use memory on the heap is through reference.
The function of the malloc is to look for free space on the heap, reserve it and return us the pointer. The only way this memory block can be accessed is through a pointer variable that will be local to your function.
Now, in the next lines of code we are calling the malloc function again.
ptr = (int*)malloc(sizeof(int)); *ptr = 150;
This way another block of 4 bytes will get reserved in the heap. For example purposes let us assume its starting address is 100. Now, the address that is returned by the second call to malloc will be stored in the variable ‘ptr.’ Thus, ‘ptr’ is now pointing to address 100 instead. We are also storing the value ‘150’ in this memory block.
What we did was we allocated one more block of memory in the heap and we modified the address in ‘ptr’ to point to this particular block instead. Notice that the previous block will remain in the heap. The memory consumed will thus not be cleared off automatically.
Freeing Dynamically Allocated Memory
At any point in the program if we have used some block of memory that was dynamically allocated using malloc, we also need to clear it as it is not in use anymore. Let us modify the C program code a bit by adding free() operator in between the two memory allocations in order to remove the first memory block.
#include <stdio.h> #include <stdlib.h> int main(){ int x; int *ptr; ptr= (int*)malloc(sizeof(int)); *ptr = 50; free(ptr); ptr= (int*)malloc(sizeof(int)); *ptr = 150; }
After we are done using the memory block at starting address 300, we have made a call to the function free(). Any memory which is allocated using malloc is automatically cleared off by calling free(). We will pass the pointer to the starting address of the memory as the parameter of free().
free(ptr);
Now, the first block of memory will be cleared.
In terms of the scope of the variable, unlike stack, anything allocated on the heap is not automatically deallocated when a function finishes. We can control when to free anything on the heap.
Example C Code: Dynamically Allocate Memory for Arrays
#include <stdio.h> #include <stdlib.h> int main(){ int x; int *ptr; ptr= (int*)malloc(sizeof(int)); *ptr = 50; ptr= (int*)malloc(10*sizeof(int)); }
If we want to store an array on the heap for example an integer array then make a call to malloc asking for one block of memory equal to the total size of the array in bytes. Supposing it is an integer array of 10 elements then we will make a call to malloc asking (10 x size of integer: 4)= # of bytes. Now one large contiguous bock of memory for 10 integers will be allocated on the heap. We will get the starting address of this block which will be the base address of the array. ‘ptr’ will now point to the base address of this block as shown in the section of the system’s memory.
In our code, we can use these 10 integers as ptr[0], ptr[1], ptr[2] and so on. As we know, ptr[0] is the value at address ptr and ptr[1] is the same as value at address (ptr+1).
If malloc is not able to find any free block of memory then it returns null. For error handling we need to know this.
Modifying Example Code for C++
Now, if we want to write the same example code as mentioned above but in C++, instead of using malloc and free we will use free and delete operators instead. Just modify the code as follows:
#include <stdio.h> #include <stdlib.h> int main(){ int x; int *ptr; ptr= new int; *ptr = 50; delete ptr; ptr = new int[10]; delete[] ptr; }
As you may notice, instead of using malloc we are using new operator. Likewise, instead of using free we are using delete operator.
If we want to allocate an integer array of size 10 in C++, we will use the following statement:
ptr = new int[10];
To free an array, we will use the delete operator with a square bracket:
delete[] ptr;
In C++ programming, we do not need to do any type of typecasting like we used to in the case of C programs. Malloc returns void so we have to typecast it back to an integer pointer. New and delete operators are type safe. This means that they are used with a type and return pointers to a particular type only. Thus, here ‘ptr’ is getting a pointer to integer only.
Example Code 2
Not let us look at another example code in C programming where the user will be asked for the size of the array.
#include <stdio.h> #include <stdlib.h> int main(){ int n; printf("Enter the size of the array\n"); scanf("%d",&n); int *array = (int*)malloc(n*sizeof(int)); for(int i=0;i<n;i++) { array[i] = i*2; } for(int i=0;i<n;i++) { printf("%d ",array[i]); } }
We will declare an array of the particular size entered by the user. We can not know the size of the array at runtime thus we will allocate the memory dynamically. This is done in the following line:
int *array = (int*)malloc(n*sizeof(int));
We will make a call to the malloc function to allocate a memory block equal to the size of ‘n’ integers. Remember to typecast the return of malloc in this case to integer pointer as well otherwise a compilation error will occur. We have now created a dynamically allocated array of size ‘n.’
Using the for() loop we will store the values in our array dynamically. Here we are storing even values in the array like 0,2,4,6…
for(int i=0;i<n;i++) { array[i] = i*2; }
After that we are using another for() loop to print the array..
Similarly, we we set the size to 10 then the output will be as follows:
Dynamically Allocate Memory using Calloc()
Instead of using the malloc function, let us use the calloc function in the example code above instead.
#include <stdio.h> #include <stdlib.h> int main(){ int n; printf("Enter the size of the array\n"); scanf("%d",&n); int *array = (int*)calloc(n,sizeof(int)); for(int i=0;i<n;i++) { array[i] = i*2; } for(int i=0;i<n;i++) { printf("%d ",array[i]); } }
For the calloc function we will have two arguments instead of one. ‘n’ will be the first argument and size of integer will be the second argument.
Thus, we will be only changing one line in the code which is shown below:
int *array = (int*)calloc(n,sizeof(int));
Rest of the program remain the same.
Dynamically Allocate Memory using Calloc()
Let us now include realloc as well in our program code. For example, we want to modify the size of our memory block associated with the array we created dynamically, we will call the realloc() function.
#include <stdio.h> #include <stdlib.h> int main(){ int n; printf("Enter the size of the array\n"); scanf("%d",&n); int *array = (int*)malloc(n*sizeof(int)); for(int i=0;i<n;i++) { array[i] = i*2; } int *ptr = (int*)realloc(array,3*n*sizeof(int)); printf("Previous memory block address: %d, New memory block address: %d\n",array,ptr); for(int i=0;i<n;i++) { printf("%d ",ptr[i]); } }
We will create another pointer variable called ‘ptr’ and set it equal to the call to realloc(). The realloc() function will take in the previous pointer which was ‘array’ in our case as the first argument. The second argument will be the size of the new block. We want the size of the new block to be 3 times that of the previous one. Do not forget to perform the typecasting as well.
int *ptr = (int*)realloc(array,3*n*sizeof(int));
This call will create a new memory block of size 3n and copy the values from the previous memory block ‘array’ to this new memory block ‘ptr.’
If the size of the new block is greater than the size of the previous memory block then it is possible to extend the previous block. Otherwise, a new block of memory is allocated and the previous is deallocated after the values from that block have been copied.. You may notice that the address of the previous memory block is same as that of the new memory block. This means that the previous memory block was extended.
Now if we want to change the size of the new block to half that of the previous one we can modify the statement as follows:
int *ptr = (int*)realloc(array,(n/2)*sizeof(int));
This way the previous memory block will be reduced in size. | https://csgeekshub.com/c-programming/pointers-dynamic-memory-allocation/ | CC-MAIN-2021-49 | refinedweb | 3,702 | 61.67 |
The C Standard, 6.7.2.1 [ISO/IEC 9899:2011], states
There may be unnamed padding within a structure object, but not at its beginning. . . . There may be unnamed padding at the end of a structure or union.
Subclause 6.7.9, paragraph 9, states that
unnamed members of objects of structure and union type do not participate in initialization. Unnamed members of structure objects have indeterminate value even after initialization.
The only exception is that padding bits are set to zero when a static or thread-local object is implicitly initialized (paragraph10):
If an object that has automatic storage duration is not initialized explicitly, its value is indeterminate. If an object that has static or thread storage duration is not initialized explicitly, then:
—;
Because these padding values are unspecified, attempting a byte-by-byte comparison between structures can lead to incorrect results [Summit 1995].
Noncompliant Code Example
In this noncompliant code example,
memcmp() is used to compare the contents of two structures, including any padding bytes:
#include <string.h> struct s { char c; int i; char buffer[13]; }; void compare(const struct s *left, const struct s *right) { if ((left && right) && (0 == memcmp(left, right, sizeof(struct s)))) { /* ... */ } }
Compliant Solution
In this compliant solution, all of the fields are compared manually to avoid comparing any padding bytes:
#include <string.h> struct s { char c; int i; char buffer[13]; }; void compare(const struct s *left, const struct s *right) { if ((left && right) && (left->c == right->c) && (left->i == right->i) && (0 == memcmp(left->buffer, right->buffer, 13))) { /* ... */ } }
Exceptions
EXP42-C-EX1: A structure can be defined such that the members are aligned properly or the structure is packed using implementation-specific packing instructions. This is true only when the members' data types have no padding bits of their own and when their object representations are the same as their value representations. This frequently is not true for the
_Bool type or floating-point types and need not be true for pointers. In such cases, the compiler does not insert padding, and use of functions such as
memcmp() is acceptable.
This compliant example uses the
#pragma pack compiler extension from Microsoft Visual Studio to ensure the structure members are packed as tightly as possible:
#include <string.h> #pragma pack(push, 1) struct s { char c; int i; char buffer[13]; }; #pragma pack(pop) void compare(const struct s *left, const struct s *right) { if ((left && right) && (0 == memcmp(left, right, sizeof(struct s)))) { /* ... */ } }
Risk Assessment
Comparing padding bytes, when present, can lead to unexpected program behavior.
Automated Detection
Related Vulnerabilities
Search for vulnerabilities resulting from the violation of this rule on the CERT website.
Related Guidelines
Key here (explains table format and definitions)
8 Comments
David Svoboda
Question: Would we allow bitwise serlization of a struct, given that the padding data might contain sensitive info (eg: password from its previous use as a char string)?
Aaron Ballman
I think that's what DCL39-C. Avoid information leak in structure padding covers, unless I misunderstand.
Daniel Marjamäki
To me it seems that the "compliant" solution is dangerous and uncompliant.
It assumes there is no padding inside the struct, for instance between c and i.
Then it is dangerous, because if there is padding then that code will not compare the members completely.
Aaron Ballman
The compliant solution is comparing the struct members individually. The exception compliant solution is doing a
memcmp()only because the structure is packed with an implementation-defined
#pragma. Can you expound on what you find dangerous?
Daniel Marjamäki
sorry I misread the code. it is safe.
Daniel Marjamäki
as far as I see... if you don't want to compare padding data at all then memcmp should not be used. the struct members should be compared individually then.
Alex Bock
The Compliant Solution checks
leftand
rightfor
NULLwhile the Noncompliant Code Example and EXP42-C-EX1 do not. I don't think this difference is intended to illustrate anything about this rule so I would suggest making them all consistent for clarity. Passing
NULLto
memcmpis undefined behavior.
David Svoboda
Agreed, I changed the code as you suggest. | https://wiki.sei.cmu.edu/confluence/pages/viewpage.action?pageId=87151934 | CC-MAIN-2019-22 | refinedweb | 690 | 53.71 |
Dormand-Prince explicit solver for non-stiff ODEs.
tfp.math.ode.DormandPrince( rtol=0.001, atol=1e-06, first_step_size=0.001, safety_factor=0.9, min_step_size_factor=0.1, max_step_size_factor=10.0, max_num_steps=None, make_adjoint_solver_fn=None, validate_args=False, name='dormand_prince' )
Used in the notebooks
Implements 5th order Runge-Kutta with adaptive step size control
and dense output, using the Dormand-Prince method. Similar to the 'dopri5'
method of
scipy.integrate.ode and MATLAB's
ode45. For details see [1].
For solver API see
tfp.math.ode.Solver.
References
[1]: Shampine, L. F. (1986). Some practical runge-kutta formulas. Mathematics of Computation, 46(173), 135-150, doi:10.2307/2008219
Methods
solve
solve( ode_fn, initial_time, initial_state, solution_times, jacobian_fn=None, jacobian_sparsity=None, batch_ndims=None, previous_solver_internal_state=None, constants=None )
Solves an initial value problem.
An initial value problem consists of a system of ODEs and an initial condition:
dy/dt(t) = ode_fn(t, y(t), **constants) y(initial_time) = initial_state
Here,
t (also called time) is a scalar float
Tensor and
y(t) (also
called the state at time
t) is an N-D float or complex
Tensor.
constants is are values that are constant with respect to time. Passing
the constants here rather than just closing over them in
ode_fn is only
necessary if you want gradients with respect to these values.
Example
The ODE
dy/dt(t) = dot(A, y(t)) is solved below.
t_init, t0, t1 = 0., 0.5, 1. y_init = tf.constant([1., 1.], dtype=tf.float64) A = tf.constant([[-1., -2.], [-3., -4.]], dtype=tf.float64) def ode_fn(t, y): return tf.linalg.matvec(A, y) results = tfp.math.ode.BDF().solve(ode_fn, t_init, y_init, solution_times=[t0, t1]) y0 = results.states[0] # == dot(matrix_exp(A * t0), y_init) y1 = results.states[1] # == dot(matrix_exp(A * t1), y_init)
If the exact solution times are not important, it can be much
more efficient to let the solver choose them using
solution_times=tfp.math.ode.ChosenBySolver(final_time=1.).
This yields the state at various times between
t_init and
final_time,
in which case
results.states[i] is the state at time
results.times[i].
Gradients
The gradients are computed using the adjoint sensitivity method described in [Chen et al. (2018)][1].
grad = tf.gradients(y1, y0) # == dot(e, J) # J is the Jacobian of y1 with respect to y0. In this case, J = exp(A * t1). # e = [1, ..., 1] is the row vector of ones.
This is not capable of computing gradients with respect to values closed
over by
ode_fn, e.g., in the example above:
def ode_fn(t, y): return tf.linalg.matvec(A, y) with tf.GradientTape() as tape: tape.watch(A) results = tfp.math.ode.BDF().solve(ode_fn, t_init, y_init, solution_times=[t0, t1]) tape.gradient(results.states, A) # Undefined!
There are two options to get the gradients flowing through these values:
- Use
tf.Variablefor these values.
- Pass the values in explicitly using the
constantsargument:
def ode_fn(t, y, A): return tf.linalg.matvec(A, y) with tf.GradientTape() as tape: tape.watch(A) results = tfp.math.ode.BDF().solve(ode_fn, t_init, y_init, solution_times=[t0, t1], constants={'A': A}) tape.gradient(results.states, A) # Fine.
By default, this uses the same solver for the augmented ODE. This can be
controlled via
make_adjoint_solver_fn.
References
[1]: Chen, Tian Qi, et al. "Neural ordinary differential equations." Advances in Neural Information Processing Systems. 2018. | https://tensorflow.google.cn/probability/api_docs/python/tfp/math/ode/DormandPrince | CC-MAIN-2022-21 | refinedweb | 556 | 53.98 |
On Sat, Feb 12, 2005 at 04:45:43PM +0100, Philippe Elie wrote: > On Mon, 07 Feb 2005 at 17:04 +0000, Scott T Jones wrote: > > >: > > > > > > There is some minor things to change before applying it. > > - replace all msdos CR/LF in newly created file (e.g. op_jdl_bfd.c) > > - many s/return (xx);/return xx;/ Both of these items have been fixed. > > - patch contains some chunk diffing only by space, I prefer we avoid that. > > - some global var in new .c can be made static afaics Could you give me one example of each of these? > > - Will Cohen patch must be updated and submitted to lkml: > I get a look to the our current code and I don't think running a kernel > with Will patch and an old oprofile version can hurt, cookie lookup for > NO_COOKIE will fail but the cookie will be hashed and no other lookup > will occur, sample file open for cookie == 0 will fail silently but samples > lost for these mapping will be accounted. > > > The following can be addressed after applying the patch unless John disagree > > - configure.in : > +if ! ( test -f libopjan/jvmpi.h ) ; then > + if test -f $JDKDIR/include/jvmpi.h ; then > + cp -p $JDKDIR/include/jvmpi.h libopjan > + else > + echo You must copy jvmpi.h from your Java include directory > + echo to the libopjan directory to use Java profiling. > + fi > +fi > > this must be done only if --enable-opjan is given on command line This check for jvmpi.h will be moved to libopjan/Makefile.am, which already checks the --enable-opjan flag. > > - use of dso like: > oprofile-0.8.1/daemon/Makefile.am > + ../libopjdl/.libs/libopjdl.so \ > + /usr/lib/libbfd.so \ > > must be done conditionnaly or by building an empty dso and link with it > unconditionnaly. I do not understand this. These DSOs are always used, they are not conditional. They provide support for identifying anonymous modules even if --enable-opjan is not specified. > > - I don't understand why libopjdl.so must be linked with pp tools I can understand that you do not want to possibly taint your pp tools with regressions. We only called libopjdl.so from opreport to minimize the window in which new addresses could be identified in anonymous code. We will create a stand-alone routine that can call libopjdl.so immediately before invoking opreport. We had only hoped to eliminate the possibility that the user would omit this call. > > - I dunno if we want to create file under /tmp/oprofile or > /var/lib/oprofile/sample/vm/java/, I prefer the second option. At John Levon's request, the file will be created under /var/lib/oprofile/samples/{anon}/. > > - later we will need to support multiple jitted source by getting get the > interface from the tgid This will be done. > > A question about unload method vs unload class. Can a class be unloaded > w/o all of these methods unloaded first ? I do not know. It may be JVM specific. However, our code will automatically unload all of the methods associated with a class when the class is unloaded. > > What's the rationale behind the valid flag vs the removal of class and > method ? Is it to allow reuse a class or method after an unload then a > load event ? It was implemented that way for ease of debugging. We can easily change the code to delete these control blocks when classes and methods are unloaded. > > John, I think we need to release oprofile 0.8.2 before appyling this patch. > > > >. > > Looking at the code I see we need to compile oprofile for a specific jvm, > either sun or ibm. People will prefer probably we support both of them and > to figure out the right at runtime. Something to think about, not a stopper > for the present patch It was never our intention to require any specific JVM. Our current code will successfully build with the jvmpi.h from either IBM or Sun and the version built with the Sun jvmpi.h will work with either JVM. However, it is true that a version built with the IBM jvmpi.h will only work with an IBM JVM. Therefore, we will modify our code to ensure that all versions work on any JVM. > > regards, > Philippe Elie Thank you for your helpful comments. Scott T Jones WBI Performance II IBM Corp, Austin, TX Reply to: stjones@us.ibm.com Phone: (512) 838-4758, T/L: 678-4758 | http://sourceforge.net/p/oprofile/mailman/attachment/OF02A61AE7.55E6A1FA-ON87256FAA.006015D3-86256FAA.00773E38%40us.ibm.com/1/ | CC-MAIN-2015-06 | refinedweb | 738 | 75.81 |
Hello,
I am new to FEnics, I want to solve the steady state dynamic linear elastic model in solid. my equation is function of frequency and the strong form is:
Divergence(
BC: Stress(vec(x),w) n(x)=T(x,w)
u(x,w)=U0
the physical problem is a plate with dimension of 1*1*0.1 with a harmonic load on the all the top and also the bottom of the plate is fixed.
I solved this as a test with the fixed frequency (w=200)
The weak form of the this equation is :
inner(sigma(u), sym(grad(v)))*dx -po*w*w *inner(
I compared the results with the model with converged mesh from Abaqus, I almost checked everything in the code the order of the displacement is mostly correct but the signs of the displacements are wrong and also the error is large between the solution of the Abaqus and this code.
I checked the variation form so I cant find any error in it. So the only thing that I suspected to be wrong is the neumann boundary condition but I cant find any bug in it.
I would be very thankful if someone can help me to figure out what is wrong in my code that I get different answers from the code.
Here is the code::
from dolfin import *
import pickle
import numpy
import csv
po=2700
w=200
Magnitude=-100
mesh = BoxMesh(0.0,0.0,0.0 , 1.0,1.0,0.1 , 10,10,10)
V = VectorFunctionS
# Defining Domain
class Bottom(SubDomain):
def inside(self, x, on_boundary):
return near(x[1], 0.0)
class Top(SubDomain):
def inside(self, x, on_boundary):
return near(x[1], 1.0)
# Initialize sub-domain instances
top = Top()
bottom = Bottom()
# Initialize mesh function for boundary domains
boundaries = FacetFunction(
boundaries.
top.mark(
bottom.
bc = DirichletBC(V, (0.0, 0.0, 0.0), boundaries,4)
# Define new measures associated with the interior domains and exterior boundaries
ds = Measure(
# Define trial and test functions
u = TrialFunction(V)
v = TestFunction(V)
f = Expression(("0.0", "scale","0.0"), w0 = w, scale = Magnitude )
# Elasticity parameters
E, nu = 69000000000, 0.3
mu = E / (2.0*(1.0 + nu))
lmbda = E*nu / ((1.0 + nu)*(1.0 - 2.0*nu))
def sigma(u):
return 2.0*mu*sym(grad(u)) + lmbda*tr(
a = inner(sigma(u), sym(grad(v)))*dx -po*w*w *inner(v,u)*dx
L = inner(f,v)*ds(2)
# Compute solution
u = Function(V)
solve(a == L, u, bc)
# Save solution in VTK format
file = File("ElasticSo
file << u
plot(u, interactive=True)
I already wrote the weak form but since I couldn't find any example or demo similar to what I want to do . does any one can help me how should i deal with this problem and solve the problem in a range of frequency or give me some tips about this problem.
Thanks
Question information
- Language:
- English Edit question
- Status:
- Answered
- For:
- DOLFIN Edit question
- Assignee:
- No assignee Edit question
- Last query:
- 2013-05-08
- Last reply:
- 2013-05-09
FEniCS no longer uses Launchapd for Questions & Answers. Please
consult the documentation on the FEniCS web page for where and
how to (re)post your question: http://
FEniCS no longer uses Launchapd for Questions & Answers. Please
fenicsproject. org/support/
consult the documentation on the FEniCS web page for where and
how to (re)post your question: http:// | https://answers.launchpad.net/dolfin/+question/228467 | CC-MAIN-2021-21 | refinedweb | 574 | 58.21 |
churro morales created HBASE-9865:
-------------------------------------
Summary: WALEdit.heapSize() is incorrect in certain replication scenarios which
may cause RegionServers to go OOM
Key: HBASE-9865
URL:
Project: HBase
Issue Type: Bug
Affects Versions: 0.95.0, 0.94.5
Reporter: churro morales
WALEdit.heapSize() is incorrect in certain replication scenarios which may cause RegionServers
to go OOM.
A little background on this issue. We noticed that our source replication regionservers would
get into gc storms and sometimes even OOM.
We noticed a case where it showed that there were around 25k WALEdits to replicate, each one
with an ArrayList of KeyValues. The array list had a capacity of around 90k (using 350KB
of heap memory) but had around 6 non null entries.
When the ReplicationSource.readAllEntriesToReplicateOrNextFile() gets a WALEdit it removes
all kv's that are scoped other than local.
But in doing so we don't account for the capacity of the ArrayList when determining heapSize
for a WALEdit. The logic for shipping a batch is whether you have hit a size capacity or
number of entries capacity.
Therefore if have a WALEdit with 25k entries and suppose all are removed:
The size of the arrayList is 0 (we don't even count the collection's heap size currently)
but the capacity is ignored.
This will yield a heapSize() of 0 bytes while in the best case it would be at least 100000
bytes (provided you pass initialCapacity and you have 32 bit JVM)
I have some ideas on how to address this problem and want to know everyone's thoughts:
1. We use a probabalistic counter such as HyperLogLog and create something like:
* class CapacityEstimateArrayList implements ArrayList
** this class overrides all additive methods to update the probabalistic counts
** it includes one additional method called estimateCapacity (we would take estimateCapacity
- size() and fill in sizes for all references)
* Then we can do something like this in WALEdit.heapSize:
{code}
public long heapSize() {
long ret = ClassSize.ARRAYLIST;
for (KeyValue kv : kvs) {
ret += kv.heapSize();
}
long nullEntriesEstimate = kvs.getCapacityEstimate() - kvs.size();
ret += ClassSize.align(nullEntriesEstimate * ClassSize.REFERENCE);
if (scopes != null) {
ret += ClassSize.TREEMAP;
ret += ClassSize.align(scopes.size() * ClassSize.MAP_ENTRY);
// TODO this isn't quite right, need help here
}
return ret;
}
{code}
2. In ReplicationSource.removeNonReplicableEdits() we know the size of the array originally,
and we provide some percentage threshold. When that threshold is met (50% of the entries
have been removed) we can call kvs.trimToSize()
3. in the heapSize() method for WALEdit we could use reflection (Please don't shoot me for
this) to grab the actual capacity of the list. Doing something like this:
{code}
public int getArrayListCapacity() {
try {
Field f = ArrayList.class.getDeclaredField("elementData");
f.setAccessible(true);
return ((Object[]) f.get(kvs)).length;
} catch (Exception e) {
log.warn("Exception in trying to get capacity on ArrayList", e);
return kvs.size();
}
{code}
I am partial to (1) using HyperLogLog and creating a CapacityEstimateArrayList, this is reusable
throughout the code for other classes that implement HeapSize which contains ArrayLists.
The memory footprint is very small and it is very fast. The issue is that this is an estimate,
although we can configure the precision we most likely always be conservative. The estimateCapacity
will always be less than the actualCapacity, but it will be close. I think that putting the
logic in removeNonReplicableEdits will work, but this only solves the heapSize problem in
this particular scenario. Solution 3 is slow and horrible but that gives us the exact answer.
I would love to hear if anyone else has any other ideas on how to remedy this problem? I
have code for trunk and 0.94 for all 3 ideas and can provide a patch if the community thinks
any of these approaches is a viable one.
--
This message was sent by Atlassian JIRA
(v6.1#6144) | http://mail-archives.apache.org/mod_mbox/hbase-dev/201310.mbox/%3CJIRA.12676743.1383177359612.8625.1383177385706@arcas%3E | CC-MAIN-2018-30 | refinedweb | 635 | 56.45 |
yt.add_field()and
ds.add_field()?
If you run into problems with yt and you’re writing to the mailing list or contacting developers on IRC, they will likely want to know what version of yt you’re using. Oftentimes, you’ll want to know both the yt version, as well as the last changeset that was committed to the branch you’re using. To reveal this, go to a command line and type:
$ yt version yt module located at: /Users/username/src/yt-conda/src/yt-git The current version of yt is: --- Version = 3.4-dev Changeset = 94033fca00e5 --- This installation CAN be automatically updated.
For more information on this topic, see Updating yt and Its Dependencies.
Because there are a lot of backwards-incompatible changes in yt 3.0 (see What’s New and Different in yt 3.0?, it can be a daunting effort in transitioning old scripts from yt 2.x to 3.0. We have tried to describe the basic process of making that transition in Converting Old Scripts to Work with yt 3.0. If you just want to change back to yt 2.x for a while until you’re ready to make the transition, you can follow the instructions in Switching versions of yt: yt-2.x, stable, and master branches.
This is commonly exhibited with this error:
ImportError: cannot import name obtain_rvec. This is likely because
you need to rebuild the source. You can do this automatically by running:
cd $YT_GIT pip install -e .
where
$YT_GIT is the path to the yt git repository.
This error tends to occur when there are changes in the underlying cython files that need to be rebuilt, like after a major code update or in switching from 2.x to 3.x. For more information on this, see Switching versions of yt: yt-2.x, stable, and master branches.
For yt to be able to incorporate parallelism on any of its analysis (see
Parallel Computation With yt), it needs to be able to use MPI libraries.
This requires the
mpi4py module to be installed in your version of python.
Unfortunately, installation of
mpi4py is just tricky enough to elude the
yt batch installer. So if you get an error in yt complaining about mpi4py
like:
ImportError: No module named mpi4py
then you should install
mpi4py. The easiest way to install it is through
the pip interface. At the command line, type:
pip install mpi4py
What this does is it finds your default installation of python (presumably in the yt source directory), and it installs the mpi4py module. If this action is successful, you should never have to worry about your aforementioned problems again. If, on the other hand, this installation fails (as it does on such machines as NICS Kraken, NASA Pleaides and more), then you will have to take matters into your own hands. Usually when it fails, it is due to pip being unable to find your MPI C/C++ compilers (look at the error message). If this is the case, you can specify them explicitly as per:
env MPICC=/path/to/MPICC pip install mpi4py
So for example, on Kraken, I switch to the gnu C compilers (because yt doesn’t work with the portland group C compilers), then I discover that cc is the mpi-enabled C compiler (and it is in my path), so I run:
module swap PrgEnv-pgi PrgEnv-gnu env MPICC=cc pip install mpi4py
And voila! It installs! If this still fails for you, then you can build and install from source and specify the mpi-enabled c and c++ compilers in the mpi.cfg file. See the mpi4py installation page for details.
Converting between physical units and code units is a common task. In yt-2.x,
the syntax for getting conversion factors was in the units dictionary
(
pf.units['kpc']). So in order to convert a variable
x in code units to
kpc, you might run:
x = x*pf.units['kpc']
In yt-3.0, this no longer works. Conversion factors are tied up in the
length_unit,
times_unit,
mass_unit, and
velocity_unit
attributes, which can be converted to any arbitrary desired physical unit:
print("Length unit: ", ds.length_unit) print("Time unit: ", ds.time_unit) print("Mass unit: ", ds.mass_unit) print("Velocity unit: ", ds.velocity_unit) print("Length unit: ", ds.length_unit.in_units('code_length')) print("Time unit: ", ds.time_unit.in_units('code_time')) print("Mass unit: ", ds.mass_unit.in_units('kg')) print("Velocity unit: ", ds.velocity_unit.in_units('Mpc/year'))
So to accomplish the example task of converting a scalar variable
x in
code units to kpc in yt-3.0, you can do one of two things. If
x is
already a YTQuantity with units in
code_length, you can run:
x.in_units('kpc')
However, if
x is just a numpy array or native python variable without
units, you can convert it to a YTQuantity with units of
kpc by running:
x = x*ds.length_unit.in_units('kpc')
For more information about unit conversion, see Fields and Unit Conversion.
If you want to create a variable or array that is tied to a particular dataset
(and its specific conversion factor to code units), use the
ds.quan (for
individual variables) and
ds.arr (for arrays):
import yt ds = yt.load(filename) one_Mpc = ds.quan(1, 'Mpc') x_vector = ds.arr([1,0,0], 'code_length')
You can then naturally exploit the units system:
print("One Mpc in code_units:", one_Mpc.in_units('code_length')) print("One Mpc in AU:", one_Mpc.in_units('AU')) print("One Mpc in comoving kpc:", one_Mpc.in_units('kpccm'))
For more information about unit conversion, see Fields and Unit Conversion.
While there are numerous benefits to having units tied to individual quantities in yt, they can also produce issues when simply trying to combine YTQuantities with numpy arrays or native python floats that lack units. A simple example of this is:
# Create a YTQuantity that is 1 kpc in length and tied to the units of # dataset ds >>> x = ds.quan(1, 'kpc') # Try to add this to some non-dimensional quantity >>> print(x + 1) YTUnitOperationError: The addition operator for YTArrays with units (kpc) and (1) is not well defined.
The solution to this means using the YTQuantity and YTArray objects for all
of one’s computations, but this isn’t always feasible. A quick fix for this
is to just grab the unitless data out of a YTQuantity or YTArray object with
the
value and
v attributes, which return a copy, or with the
d
attribute, which returns the data itself:
x = ds.quan(1, 'kpc') x_val = x.v print(x_val) array(1.0) # Try to add this to some non-dimensional quantity print(x + 1) 2.0
For more information about this functionality with units, see Fields and Unit Conversion.
yt sets up defaults for many fields for whether or not a field is presented
in log or linear space. To override this behavior, you can modify the
field_info dictionary. For example, if you prefer that
density not be
logged, you could type:
ds = load("my_data") ds.index ds.field_info['density'].take_log = False
From that point forward, data products such as slices, projections, etc., would be presented in linear space. Note that you have to instantiate ds.index before you can access ds.field info. For more information see the documentation on Fields in yt and Creating Derived Fields.
Yes! yt identifies all the fields in the simulation’s output file
and will add them to its
field_list even if they aren’t listed in
Field List. These can then be accessed in the usual manner. For
example, if you have created a field for the potential called
PotentialField, you could type:
ds = load("my_data") ad = ds.all_data() potential_field = ad["PotentialField"]
The same applies to fields you might derive inside your yt script
via Creating Derived Fields. To check what fields are
available, look at the properties
field_list and
derived_field_list:
print(ds.field_list) print(ds.derived_field_list)
or for a more legible version, try:
for field in ds.derived_field_list: print(field)
yt.add_field()and
ds.add_field()?¶
The global
yt.add_field()
(
add_field())
function is for adding a field for every subsequent dataset that is loaded
in a particular python session, whereas
ds.add_field()
(
add_field()) will only add it
to dataset
ds.
Using the Ray objects
(
YTOrthoRay and
YTRay) with AMR data
gives non-contiguous cell information in the Ray’s data array. The
higher-resolution cells are appended to the end of the array. Unfortunately,
due to how data is loaded by chunks for data containers, there is really no
easy way to fix this internally. However, there is an easy workaround.
One can sort the
Ray array data by the
t field, which is the value of
the parametric variable that goes from 0 at the start of the ray to 1 at the
end. That way the data will always be ordered correctly. As an example you can:
my_ray = ds.ray(...) ray_sort = np.argsort(my_ray["t"]) density = my_ray["density"][ray_sort]
There is also a full example in the Line Plots section of the docs.
A pull request is the action by which you contribute code to yt. You make modifications in your local copy of the source code, then request that other yt developers review and accept your changes to the main code base. For a full description of the steps necessary to successfully contribute code and issue a pull request (or manage multiple versions of the source code) please see Making and Sharing Changes.
See Submit a bug report and Making and Sharing Changes.
Many different sample datasets can be found at . These can be downloaded, unarchived, and they will each create their own directory. It is generally straight forward to load these datasets, but if you have any questions about loading data from a code with which you are unfamiliar, please visit Loading Data.
To make things easier to load these sample datasets, you can add the parent
directory to your downloaded sample data to your yt path.
If you set the option
test_data_dir, in the section
[yt],
in
~/.config/yt/ytrc, yt will search this path for them.
This means you can download these datasets to
/big_drive/data_for_yt , add
the appropriate item to
~/.config/yt/ytrc, and no matter which directory you are
in when running yt, it will also check in that directory.
If the up-arrow key does not recall the most recent commands, there is probably an issue with the readline library. To ensure the yt python environment can use readline, run the following command:
$ ~/yt/bin/pip install gnureadline
yt does check the time stamp of the simulation so that if you
overwrite your data outputs, the new set will be read in fresh by
yt. However, if you have problems or the yt output seems to be
in someway corrupted, try deleting the
.yt and
.harray files from inside your data directory. If this proves to
be a persistent problem add the line:
from yt.config import ytcfg; ytcfg["yt","serialize"] = "False"
to the very top of your yt script. Turning off serialization is the default behavior in yt-3.0.
yt’s default log level is
INFO. However, you may want less voluminous logging, especially
if you are in an IPython notebook or running a long or parallel script. On the other
hand, you may want it to output a lot more, since you can’t figure out exactly what’s going
wrong, and you want to output some debugging information. The yt log level can be
changed using the The Configuration File, either by setting it in the
$HOME/.config/yt/ytrc file:
$ yt config set yt loglevel 10 # This sets the log level to "DEBUG"
which would produce debug (as well as info, warning, and error) messages, or at runtime:
from yt.funcs import mylog mylog.setLevel(40) # This sets the log level to "ERROR"
which in this case would suppress everything below error messages. For reference, the numerical values corresponding to different log levels are:
The The Plugin File provides a means for always running custom code whenever yt is loaded up. This custom code can be new data objects, or fields, or colormaps, which will then be accessible in any future session without having modified the source code directly. See the description in The Plugin File for more details.
If you use yt in a publication, we’d very much appreciate a citation! You should feel free to cite the ApJS paper with the following BibTeX entry:
} } | http://yt-project.org/doc/faq/ | CC-MAIN-2018-05 | refinedweb | 2,084 | 64.51 |
While developing our
agop package I encountered some problems with calling S4 generic functions defined in the
Matrix package, that were created from “base” S3 generics. I don’t know whether it’s an R bug (tested in R 2.15 and R Under development 2013-05-19 3.1-r62765), or whether such behavior was induced intentionally by the R team.
Note that I discuss here package development-related issues, and not the end-user ones.
The scenario was as follows:
- I have a package that Depends on the
Matrixpackage.
- IN the package I created a function that takes an object of class
Matrix(package
Matrix) as argument, and calls the S4 generic function (that was my intention)
t(); something like:
test1 <- function(A) { stopifnot(is(A, 'Matrix')) t(A) }
- If the function had been created in the global environment, then everything would be OK. In my case, however, I get:
> x1 <- matrix(1:10, nrow=2) > x2 <- Matrix(x1) > test1(x2) Error in t.default(x) : argument is not a matrix
Strange, isn't it?
- The error message indicates that an S3 method was called here (
t.default()). However, in the GlobalEnv, we have:
> body(t) standardGeneric("t") # It's an S4, not S3 generic [defined in the Matrix namespace]
Quite surprisingly (for me),
test1()calls:
> body(get('t', envir=baseenv())) UseMethod("t") # the S3 generic from the BaseEnv
Why? My package's namespace is ABOVE the
Matrix's namespace...
The solution is very simple - call the S4 generic by pointing the
Matrix's namespace directly with the
:: operator:
test2 <- function(A) { stopifnot(is(A, 'Matrix')) Matrix::t(A) }
Still, however, I'd like to know WHY we have such behavior - any ideas?
Here is an minimal example of a package exploring this issue: NamespaceTest_0.1.tar.gz (run
R CMD INSTALL NamespaceTest_0.1.tar.gz).
@UPDATE: The problem is known (see e.g. this post). Some suggest using
importFrom() in the
NAMESPACE file. However, the above-given solution, IMHO, is much more elegant and... | https://www.r-bloggers.com/package-defined-s4-generic-covered-by-a-base-s3-generic-in-r-packages/ | CC-MAIN-2018-13 | refinedweb | 337 | 55.13 |
The solution to this is, add the packages that got lost with the AUR4 transition again.
There is some way to access the old pkgs, but I can't remember how. I'll take a look a week or so from now, when I've got a bit more spare time.
Search Criteria
Package Details: mgltools 1.5.6-1
Dependencies (10)
- glut (freeglut-wayland-svn, freeglut-x11-svn, freeglut)
- libxmu
- python2-imaging (python2-pillow)
- python2-numpy (python2-numpy-mkl, python2-numpy-openblas)
- python2-pmw
- python2-simpy
- swig (swig-git)
- tk (tk85)
- zsi (python2-zsi)
- autodocksuite (optional)
Required by (1)
Sources (2)
Latest Comments
mschu commented on 2015-10-01 22:06
The solution to this is, add the packages that got lost with the AUR4 transition again.
kathka commented on 2015-09-30 08:46
As of now I'm unable to install this, since the packages "zsi" and "autodocksuite" are removed from aur. Is there a possible workaround?
cspal commented on 2014-08-06 08:20
I can't run pmv because I get the following error:
$ pmv
Run PMV from /usr/lib/python2.7/site-packages/MGLToolsPckgs/Pmv
Resource file used to customize PMV: /home/cspal/.mgltools/1.5.6/Pmv/_pmvrc
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/MGLToolsPckgs/Pmv/__init__.py", line 381, in runPmv
title=title, withShell= not interactive, verbose=False, gui=gui)
File "/usr/lib/python2.7/site-packages/MGLToolsPckgs/Pmv/moleculeViewer.py", line 843, in __init__
trapExceptions=trapExceptions)
File "/usr/lib/python2.7/site-packages/MGLToolsPckgs/ViewerFramework/VF.py", line 371, in __init__
verbose=verbose)
File "/usr/lib/python2.7/site-packages/MGLToolsPckgs/ViewerFramework/VFGUI.py", line 360, in __init__
verbose=verbose,guiMaster=VIEWER_root,)
File "/usr/lib/python2.7/site-packages/MGLToolsPckgs/DejaVu/Viewer.py", line 523, in __init__
loadTogl(master)
File "/usr/lib/python2.7/site-packages/MGLToolsPckgs/DejaVu/__init__.py", line 103, in loadTogl
toglVersion = master.tk.call('package', 'require', 'Togl','2.0')
TclError: couldn't load file "/usr/lib/python2.7/site-packages/MGLToolsPckgs/opengltk/OpenGL/Tk/Togl/togl.so": /usr/lib/python2.7/site-packages/MGLToolsPckgs/opengltk/OpenGL/Tk/Togl/togl.so: undefined symbol: tkStubsPtr
Please include this Traceback in your bug report.
hit enter to continue
How can I solve this problem?
cspal commented on 2014-08-06 07:54
I can't run after the installation of mgltools:
pvm, because I get error message:
$ pmv
Run PMV from /usr/lib/python2.7/site-packages/MGLToolsPckgs/Pmv
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/MGLToolsPckgs/Pmv/__init__.py", line 378, in runPmv
from Pmv.moleculeViewer import MoleculeViewer
File "/usr/lib/python2.7/site-packages/MGLToolsPckgs/Pmv/moleculeViewer.py", line 21, in <module>
from DejaVu.Geom import Geom
File "/usr/lib/python2.7/site-packages/MGLToolsPckgs/DejaVu/__init__.py", line 200, in <module>
from Viewer import Viewer
File "/usr/lib/python2.7/site-packages/MGLToolsPckgs/DejaVu/Viewer.py", line 53, in <module>
from DejaVu.Camera import Camera
File "/usr/lib/python2.7/site-packages/MGLToolsPckgs/DejaVu/Camera.py", line 41, in <module>
import Image
ImportError: No module named Image
hit enter to continue
I tried to edit ~/.bashrc file:
export LD_LIBRARY_PATH=/opt/molekel/lib:$LD_LIBRARY_PATH
but that doesn't help.
What is the solution for this problem?
arcanis commented on 2014-02-01 22:27
BTW "python-imaging" has name "python2-imaging" too
Jesin commented on 2014-02-01 21:02
The package python2-pmw is now available in [community], and the package python-pmw is now deprecated, so you should probably update your dependencies the next time you update this package.
mschu commented on 2013-09-10 20:34
The package is currently broken. It requires the python package opengltk, which I can't get to build.
Anonymous comment on 2013-09-04 01:46
The last stable release is 1.5.6 (since 2013/03/08), there is also a unstable release of 1.5.7rc1 (but no source). The news of the website is out of date.
mschu commented on 2013-09-03 09:27
Why was this marked out of date? The website still mentions this version.
If there is a problem with the package (that's not that it is outdated) please at least leave a comment on what doesn't work.
mschu commented on 2011-05-24 22:19
fixed. sorry :-)
Anonymous comment on 2011-05-24 10:17
I think python2-numpy should be in the dependencies for this package to be properly built. | https://aur.archlinux.org/packages/mgltools/?comments=all | CC-MAIN-2017-09 | refinedweb | 753 | 51.14 |
Building an XML document in-memory from an XSD file.
Discussion in 'ASP .Net Web Services' started by Ray Stevens, Jan 24, 2006.
Want to reply to this thread or ask your own question?It takes just 2 minutes to sign up (and it's free!). Just click the sign up button to choose a username and then you can ask your own questions on the forum.
- Similar Threads
XSD document for XSD defintionRick Razzano, Sep 26, 2003, in forum: XML
- Replies:
- 1
- Views:
- 456
- C. M. Sperberg-McQueen
- Sep 26, 2003
Help on including one XML document within another XML document using XML SchemasTony Prichard, Dec 12, 2003, in forum: XML
- Replies:
- 0
- Views:
- 706
- Tony Prichard
- Dec 12, 2003
referencing another XSD file within an XSD file, Jan 14, 2004, in forum: XML
- Replies:
- 1
- Views:
- 837
- Martin Honnen
- Jan 14, 2004
Change to the public schema document for the XML namespace(xml.xsd)Henry S. Thompson, Sep 7, 2005, in forum: XML
- Replies:
- 0
- Views:
- 374
- Henry S. Thompson
- Sep 7, 2005
- Replies:
- 0
- Views:
- 547 | http://www.thecodingforums.com/threads/building-an-xml-document-in-memory-from-an-xsd-file.785774/ | CC-MAIN-2014-23 | refinedweb | 179 | 70.94 |
You can subscribe to this list here.
Showing
1
results of 1
--- On Sun, 10/19/08, Roger Haase <crosseyedpenguin@...> wrote:
> From: Roger Haase <crosseyedpenguin@...>
> Subject: MiddleKit Threading Error?
> To: webware-discuss@...
> Date: Sunday, October 19, 2008, 3:46 PM
> I last).
>
Well, the previous solution worked for the failing transaction, but then I hit another problem on a flurry of 5 transactions that were generating PNG images. The new problem occurred in the same area - MiddleKit/Run/MiddleObject.py method readStoreData on the line:
assert 0, "attempted to refresh changed object ....
My revised solution is to put the lock at the beginning and end of the method.
--- C:\...\MiddleObject.py-revBASE.svn005.tmp.py Tue Oct 21 08:41:35 2008
+++ C:...\MiddleObject.py Tue Oct 21 08:36:16 2008
@@ -5,6 +5,9 @@
from MiddleKit.Core.ObjRefAttr import ObjRefAttr
from MiddleKit.Core.ListAttr import ListAttr
+import thread # 2002-08-21 rdh
+_cacheLock = thread.allocate_lock() # 2002-08-21 rdh
+
try: # for Python < 2.2
object
except NameError:
@@ -67,6 +70,7 @@
for the same object in order to "refresh the attributes"
from the persistent store.
"""
+ _cacheLock.acquire() # 2008-10-20 rdh
if self._mk_store:
assert self._mk_store is store, 'Cannot refresh data from a different store.'
if self._mk_changed and not self._mk_initing:
@@ -114,6 +118,7 @@
self._mk_initing = 0
self._mk_inStore = 1
self._mk_changed = 0 # setting the values above will have caused this to be set; clear it now.
+ _cacheLock.release() # 2008-10-20 rdh
return self
Roger Haase | http://sourceforge.net/p/webware/mailman/webware-discuss/?viewmonth=200810&viewday=22 | CC-MAIN-2015-32 | refinedweb | 252 | 62.85 |
Firstly, I can't stress enough how much we need artwork. Concept sketches; pixel art; basically anything to get us moving.
Method Names and Instance Variables
Use the function naming rules: lowercase with words separated by underscores as necessary to improve readability.
Use one leading underscore only for non-public methods and instance variables.
mixedCase is allowed only in contexts where that's already the prevailing style
now the abbreviations and capitalizing is driving me nuts.
su for setup
pg for pygame
Surf for surface
also in pep8 it said method and instance variables should follow function rules.
class SomeClass(object):
def __init__(self):
self.instance_variable = 5 #I'm an instance variable
The example you have used here is referred to in the pep8 document as mixedCase, not CamelCase, and indeed mixedCase is highly discouraged in python.The example you have used here is referred to in the pep8 document as mixedCase, not CamelCase, and indeed mixedCase is highly discouraged in python.also python is moving away from camelCase.
pep8 wrote:CapitalizedWords (or CapWords, or CamelCase -- so named because of the bumpy look of its letters [3]). This is also sometimes known as StudlyCaps.
[...]
Class Names
Almost without exception, class names use the CapWords convention. Classes for internal use have a leading underscore in addition.!
As long you guys agree on a consistant style for the project it will be ok
but i did find in pep8.
DrakeMagi wrote:how are you guys handle xrange, range difference ?
import sys
if sys.version_info[0] == 2:
range = xrange
jkbbwr wrote:Have two separate master branches. py2 and py3
tag all subsequent branches as either py2-<branchname> or py3-<branchname>
keep releases separate. Mixing version logic gets complicated quickly. This also means you can test, deploy and release far easier than having mixed logic that works on all versions, and its more optimised, less instructions and less that can go wrong.
As for range or xrange, unless you need a list, always use xrange. Simple as.
Hey, do you guys still need art? Let me know what file-types and what kind of graphics you need, and I can whip something up.
Return to Game Development
Users browsing this forum: Bing [Bot] and 2 guests | http://www.python-forum.org/viewtopic.php?p=6223 | CC-MAIN-2015-35 | refinedweb | 372 | 65.52 |
Importing a workflow that has subworkflows from dockstore to firecloud
If i’m importing a workflow from dockstor to FC is there a proper way to import the subworkflows within the workflow?
1. Should i import using github raw file url. This may be the easist way since i don’t need to import the subworkflows to FC. The question here is whether its ok to leave the subworkflows out FC? Users would still be able to view to the subworkflows, but it wont be viewable in the FC method repo.
import "" as ToBam
2. Should I import the subworkflows into FC first, make sure they are public and use FC’s import url for in the main workflow? This would be elaborate because it would mean importing all workflows to FC, doing them in a particular order to make sure the higher level workflows has the correct snapshot from the FC import url, needing to always update the snapshot url in the main workflow for every update.
import "" as Alignment
3. Should i just not try to import from dockstore, instead upload files directly to FC.
I’m leaning towards the first option, I was wondering if anyone sees a major flaw
Best Answer
Dev response:
- "Importing directly from dockstore (by URL) and therefore bringing in subworkflows with relative paths within dockstore is the eventual future. "
- "The only problem with 1 is that there will be probably be a period where the main workflow in FC will fail with w/e subworkflows you have on the dev branch of the repo. If you could point the FC workflow to a release version of gatk-workflows or something that would be better I think"
I'll go ahead with option 1 and make sure the url points to a release tag
Answers
Dev response:
I'll go ahead with option 1 and make sure the url points to a release tag | https://gatkforums.broadinstitute.org/firecloud/discussion/23359/importing-a-workflow-that-has-subworkflows-from-dockstore-to-firecloud | CC-MAIN-2020-24 | refinedweb | 321 | 63.93 |
atNetworkPath, atLocalPath - network path handling
#include <atfs.h> #include <atfstk.h> char* atNetworkPath (Af_key *aso); char* atLocalPath (char *networkPath);
atNetworkPath returns a network wide unique pathname for aso. The pathname has the following structure <hostname>:<canonical_pathname>@<version>. Hostname is the name of the host controlling the device, where aso is stored. the canonical pathname is the real pathname (without symbolic links), where the object is located on that host. The version number, including the introducing at-sign (@) is optional. For busy versions, it may be missing or the string busy <pathname>[version] with the version number added in brackets. It may be converted into an ASO descriptor by calling atBindVersion (manual page atbind(3)).
Both functions return the resulting string in static memory. The result will be overwritten on a subsequent call. On failure, a null pointer is returned.
atbind(3)
/etc/mtab | http://huge-man-linux.net/man3/atLocalPath.html | CC-MAIN-2017-13 | refinedweb | 143 | 51.65 |
Serge Hallyn <serge@hallyn.com> wrote:>.The last comma there is unnecessary, I think. You might also want to say'will fail' rather than 'will return false', but I'm not sure that sums it upcorrectly.> When a task belonging to (for example) userid 500 in the initial user namespaceWhy switch to talking about 'userid'? This should probably be 'UID'.> Userid mapping for the VFS is not yet implemented, though prototypes exist.Ditto.> ... Therefore, attempts to exercise privilege to resources in, for instance,> a particular network namespace, can be properly validated by checking whether> the caller has the needed privilege (i.e. CAP_NET_ADMIN) targeted to the> user namespace which owns the network namespace.That sentence looks rather clumsy. I think you need to split the statementfrom the example. Other namespaces, such as UTS and network, are owned by a user namespace. When such a namespace is created, it is assigned to [owned by? associated with?] the user namespace of the task by which it was created. Attempts to exercise privilege in the new namespace are properly validated by checking whether the caller has the needed privilege targeted to [granted by?] the user namespace that owns the new namespace. For instance, to use the resources in a network namespace, a check must be made that the caller has [has been granted?] the CAP_NET_ADMIN privilege. This is done using the ns_capable() function.You may want to list here what CAPs correspond to what namespaces.> As an example, if a new task is cloned with a private user namespace but> no'not a' instead of 'no'?> private network namespace, then the task's network namespace is owned> by the parent user namespace. The new task has noInsert 'special' here?> privilege tos/to/over/ perhaps?> the> parent user namespace, so it will not be able to create or configure'the'> network devicesInsert 'therein'?> . If,I don't think you need the comma here. The 'instead' is the if condition.> instead, the task were cloned with both private> user and network namespaces, then the private network namespace is owned> by the private user namespace, and so root in the new user namespace> will have privilege targeted toInterestingly, in these two paragraphs, you've used 'targeted to' in bothdirections. whether the caller has the needed privilege (...) targeted to the user namespacevs the new user namespace will have privilege targeted to the network namespaceYou might want to consider changing one of them. I would suggest 'granted by'for the first and 'targeted at [users of]' for the second.> the network namespace. It will be able> to create and configure network devices.David | http://lkml.org/lkml/2011/10/19/136 | CC-MAIN-2015-06 | refinedweb | 435 | 66.74 |
[
]
Chris Nauroth updated HADOOP-9489:
----------------------------------
Assignee: (was: Chris Nauroth)
> Eclipse instructions in BUILDING.txt don't work
> -----------------------------------------------
>
> Key: HADOOP-9489
> URL:
> Project: Hadoop Common
> Issue Type: Bug
> Components: build
> Affects Versions: 2.7.0
> Reporter: Carl Steinbach
> Priority: Minor
> Attachments: HADOOP-9489.003.patch, HADOOP-9489.1.patch, HADOOP-9489.2.patch,
eclipse_hadoop_errors.txt, error.log
>
>
> I have tried several times to import Hadoop trunk into Eclipse following the instructions
in the BUILDING.txt file, but so far have not been able to get it to work.
> If I use a fresh install of Eclipse 4.2.2, Eclipse will complain about an undefined M2_REPO
environment variable. I discovered that this is defined automatically by the M2Eclipse plugin,
and think that the BUILDING.txt doc should be updated to explain this.
> After installing M2Eclipse I tried importing the code again, and now get over 2500 errors
related to missing class dependencies. Many of these errors correspond to missing classes
in the oah*.proto namespace, which makes me think that 'mvn eclipse:eclipse' is not triggering
protoc.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332) | http://mail-archives.apache.org/mod_mbox/hadoop-common-issues/201503.mbox/%3CJIRA.12643763.1366444320000.47030.1425236345908@Atlassian.JIRA%3E | CC-MAIN-2018-05 | refinedweb | 187 | 67.35 |
NEW: Learning electronics? Ask your questions on the new Electronics Questions & Answers site hosted by CircuitLab.
Support Forum » how to update keymap.h for ps/2 keyboard project?
How do I go about to add more scancode to the keymap.h. I have found all
the scancode for all the keys but just don't know how to mod the python
code so that it can generate a new keymap.h
by the way, what do these two lines means, is the sys.argv[1] supposed
to be the font.txt
import sys
f = open(sys.argv[1])
Hi kle8309,
The keymapper.py script uses font.txt to generate the keymap.h file. You can update the keymap.h file by adding new scancode/character pairs to font.txt, then running
python keymapper.py font.txt
keymapper.py takes the font file in as an argument, which is the reason for the use of the sys module. sys.argv in a python script is the list of arguments that were passed into the script. sys.argv[0] is the name of the script and sys.argv[1] etc. are arguments used in order. Check out the python sys module documentation for more.
In the case of keymapper.py it expects the argument to be a file in the same directory that it just opens and reads line by line. You can make a new font.txt file with a different name and then just pass in that name if you desire.
Humberto
What if the scan code is E075 (up arrow)
would that change everything?
The arrow keys throw out extended scan codes, so basically two bytes get sent by the keyboard 0xE0 first then 0x75. For the break it first sends 0xE0, then 0xFO (the break code), then 0x75. The current code is not written to handle this, but adding the functionality would be a great exercise.
Please log in to post a reply. | http://www.nerdkits.com/forum/thread/1080/ | CC-MAIN-2020-29 | refinedweb | 325 | 86.4 |
MPS is very interesting product, with greate future, but the documentation is poor. I tried to create simple language for describing entities. The syntax is simple:
entity SomeEntity
property id : Long
property name : String
property anotherEntity : AnotherEntity
I created two concepts: 'Entity' and 'Property' with property 'type'. The 'type' can be some Java type or reference to another entity. How to check the value of the property 'type'? I read the documentation but I didn't understand how to create the type system is this case. Perhaps, I should use a separate concept for a type, not a property of concept?
You might use BaseLanguage's types for your declarations (not good idea, but possible) or you could create your own domain-specific types (gives you absolute control).
It might be 'Enum data type' (MPS built-in concept, like string or integer), or you can create your own concepts hierarhy.
If you use Enum, create properties of your concepts of that type.
If you use Concept (either your own or BaseLanguage's), declare it as 0..1 (for optional properties, for example) child.
In case you wish to use BL types, just add reference to jetbrains.mps.baseLanguage and look at it's jetbrains.mps.baseLanguage.structure.Type and it's hierarchy
Hi Alex,
You wrote: "I created two concepts: 'Entity' and 'Property' with property 'type'. The 'type' can be some Java type or reference to another entity."
It sounds strange because properties of an MPS node are atomic values: they are simply strings (or ints or booleans). So you can not store Java types or references in properties, or may be you meant something different by word "property".
To solve your problem I'd rather create such an hierarchy:
PropertyType (abstract concept)
JavaType extends PropertyType, contains a child of a concept jetbrains.mps.baseLanguage.Type
EntityType extends PropertyType, contains a reference to Entity
Your concept named Property would contain a child of a concept PropertyType.
By the way, to understand typesystem better you should remember that a type is always a node (not string not property etc.). It is even written in documentation on typesystem.
Regards,
Cyril.
Hi Alexander, you can not use Enum property value as MPS typesystem type, because a type is always a node, meanwhile Enum property is a string value property with only several allowed string values, exactly those which are declared in Enum property type delcaration.
Regards,
Cyril.
It seems to me that your answer may be helpful. I'll try to use your advise.
I tried your advise. I created the hierarchy of concepts and generator. There is an abstract concept 'PropertyType' and its inheritors: 'EntityType' with reference to 'Entity' and 'JavaType' with child node 'Type'.
The entity declaration was:
entity Device
property test : Device
property test1 : string
The generated text was:
public class Device {
private <!TextGen not found for 'entity.structure.EntityType'!> test;
private <!TextGen not found for 'entity.structure.JavaType'!> test1;
}
What does it mean? As I understand, I must to add something to my concepts... What?
Hi Alex,
you should create a generator for your concepts which generates Java types from them.
Regards,
Cyril.
How do I have to set up the editor for EntityType? If I create concepts as
you do I'm able to create entities with properties of java types, but can't reference ones of entity types.
My EntityType editor is a simple (%entityType%->{name}) so it should be able to enter the name of the referenced entity.
If I create a solution and try to use a property of EntityType, the EntityType appears in the popup list (as the JavaType does).
I choose the EntityType but can't enter the "Device" name. A JavaType property with e.g. string works fine.
Best regards,
jens
You made mistake in editor. The editor must be (%entity%->{name}), not entityType. EntityType concept must have reference to Entity concept with cardinality 1. Add it to section ''references".
Best regards,
Alex S
That works fine, thanks! The key was to define a 1-reference, not a 0..1 one.
Hi Cyril,
speaking of creating a generator - I've played a bit with this typesystem and craeted a generator with gtext to generate a simple text output.
Now I want to write out the choosen type of the solution - e.g. Device for the entity or string for the JavaType.
In the generator I have already a PropertyMacro for the name (node.name) - works fine.
For the type I have to distinguish in the PropertyMacro between EntityType and JavaType to get the type. Can be done with node.type.instanceOf and ?: operator,
but how can I cast the PropertyType to the EntityType to follow the reference and get the name of the entity?
And is there a better way than ?: in case of having more Type concepts?
Best regards
jens
One more thing to mention: I thought the "Behavior" is what I'm looking for: there I can create a method which returns the right value by some sort of polymorphism mechanism. But in baseLanguage there are "virtual" and "override" keywords I think I have to use too but my behavior editor does not accept them. Do I have to change a configuration somewhere or add another used language?
Best regards,
jens
Hi Jens,
To make a method virtual use intentions: Alt-Enter to get intentions menu.
To make a method overriding other method select name of method you want to override in completion menu of your method's name.
Igor.
I created textGens for JavaType and EntityType.
In JavaType_TextGen I wrote append ${node.type}. It works. In EntityType_TextGen I wote append ${node.entity.name}. It does not work. When I trying to generate text I receive following errors:
couldn't resolve reference 'entity' in output node [type] EntityType <no name>[8613613947967781656] in entitytest.sandbox.sandbox@1_1
-- input node was [propertyType] EntityType <no name>[1516979138000269500] in entitytest.sandbox.sandbox@1_0
bad reference 'entity' in input node [type] EntityType <no name>[8613613947967781656] in entitytest.sandbox.sandbox@1_1
Maybe it incorrect to write append ${node.entity.name} ? There is no any word about textGen in MPS documentation.
Works fine, thanks Igor
Best regards
jens
Hi alex,
as you noticed I've had also played with your typesystem sample. Finally it works, I've published the project with
Have fun.
Best regards
jens
Hi Jens,
Thank you for the sample. I think it will be useful for me.
Hi, Jens
Your sample works fine, but has one difference. You used gtext for generating output. But I tried to use baseLanguage. I created the following construction for property declaration:
private $COPY_SRC$[?] $[propertyName]
In your sample you created behavior method getTypeName() and used it in propertyMacro. It works with gtext perfectly. But it is impossible to use propertyMacro for type with baseLanguage. COPY_SRC macro works fine for JavaType, but can't resolve reference to entity for EntityType because the reference is null.
Hi alex,
I did use gtext because of a discussion in this forum some weeks ago. I asked which method would be best to generate text files.
What do you try? Generate Java code?
Anyway, never thought that it makes a difference which generator you use when using a behavior method because that's part of
the language and should be equal in any cases.
Best regards,
jens
Well, yaeh, instantly I see the difference with generators: in gtext the property macro has to evaluate to a string (which does the behaviour), in baseLanguage it has to evaluate to a node!
Since baseLanguage represents Java, I guess EntityType has to create a node<> of Java class type or so. The trick is that the entity may reference itself, so infinite loops may occur.
Any hints from JetBrains folks?
Best regards
jens
Well, I thought something like that should work: genContext.outputModel.roots(Entity).findFirst({~it => it.name == node.propertyType.getTypeName(); })
take the output model, find all roots of Entity and take the first with the name I got from behaviour. Since I used a property macro to name the generated class
one item could be found. But does not work - maybe the outputModel is not what I think it is or it is not ready for use or it.name is not what I think it is or...
Well, a debugger would be cool to set a breakpoint on use of the node macro to explore the data.
Best regards,
jens
I agree with you about debugger. It would be a useful feature.
I experimented with the sample last three days and got an acceptable result. I'll prepare a short description and post it in the nearest time.
I promised to publish my version of the typesystem for entities. You can see it in the attached file. I created three concepts: Entity, EntityType (extends Type) and Property. EntityType contains property 'name'.
The editor for EntityType is:
[> entityref {name} <]
Also I added a menu for the cell {name}:
property values
values : (scope, operationContext, node)->list<string> {
nlist<Entity> entities = node.model.roots(Entity);
list<string> names = new arraylist<string>;
foreach entity in entities {
names.add(entity.name);
}
names;
}
Now when I press Ctrl+Enter I see EntityType with other types (string, long, etc.) in the popup. I choose it. The 'entityref <no name>' appears. I press Ctrl+Enter in the 'name' cell and choose entity name from popup. There is only one problem. The word entityref is unnecessary. But I could't create this construction without it because the popup with types overlaps the popup with entity names.
Attachment(s):
entitytest.zip
I tried your advise but it did't work. The problem described here
I have chosen another way but I'm not sure is it correct.
I posted my solution here
Hi alex,
I'm interested how your solution looks like, in the next slack time I'll have a look at it!
Best regards
jens
Hi Alex,
I found some time to play again with the typesystem (my version from github). I've added a Java generator. It works with the same principle as yours.
But I don't extend Type in EntityType, I extend PropertyType as we did in the first version. The generator uses a DummyType which extends Type
to present a fitting node.
Because I cant post the diffs with my current computer to github, I've attachted the project here.
Best regards
jens
Attachment(s):
mps.samples.typesystem.rar.zip
Hi Alex,
> There is only one problem. The word entityref is unnecessary.
So you could do a little trick: put the cursor on the entityref cell, open the inspector, delete the "entityref" text in text field in section Constant cell, and add two styles:
<no base style> {
punctuation-left : true
punctuation-right : true
}
This makes it nearly unvisible.
Best regards
jens
P.S. Your menu is really cool stuff!
Hi, Jens
Sorry for delayed replay. Thanks for the answer. I found that your solution is simple and it works perfectly.
However, I thought that the baseLanguage can contains a solution. It has the same construction: the ClassifierType is referenced to the Classifier. But the code is very complicated. Unfortunatelly I don't understand it yet. | https://mps-support.jetbrains.com/hc/en-us/community/posts/205827249-How-to-create-type-system?sort_by=votes | CC-MAIN-2021-49 | refinedweb | 1,868 | 58.48 |
Overview:
Due to the recent COVID outbreak and as it continues to spread throughout the world, employees are being to asked to work from home. While most of the companies are already getting adapted to this new way of working, there are mixed opinions among employees from different parts of the world. IMO , Working from home is a good option for new parents, people with disabilities and others who aren’t well served by a traditional office setup. As this was appreciated by most of my colleagues and industry friends, i wanted to see how everyone is reacting to this new way of working across the world. In this post, i will explain how i built an application in 10 minutes to solve this particular question in mind using server less computing offered by Azure.
PreRequisities:
You will need to have an Azure Subscription. If you do not have an Azure subscription you can simply create one with free trial.
Services used:
- Azure Logic Apps
- Azure Functions
- Azure CosmosDB
- Cognitive Service
- PowerBI
Architecture:
Architecture of the solution is very simple and it uses most of the Azure managed services that handle the infrastructure for you.Whenever a new tweet is posted Logic Apps receives and processes the tweet. Sentiment score of the tweet can be analyzed using the Cognitive service then Azure function is used here to detect the sentiment of the tweet and finally inserted as a row in the powerBI to visualize in the dashboard. You can also use SQL server/Cosmosdb to store the tweet data if you want to process it later.
How to build the application:
Step 1: Create the Resource Group
As the first step, we need to create the resource group that contains all the resources needed. Navigate to Azure Portal and create the resource group named “wfh-sentiment”
Step 2 : Create the Function App
As the next step lets create the Function App which we need to detect the sentiment of the tweet. You can create and deploy the function app using Visual Studio Code. Open Visual Studio Code(Make sure you have already installed the VSCode with the function core tools and extension). Select Ctrl + Shif + P to create a new Function Project and select the language as C# ( But you could consider using any of the language that you are familiar with)
and the logic of the Function app is as follows,
using System;; namespace WorkFromHome { public static class DecideSentinment { [FunctionName("DecideSentinment")] public static async Task<HttpResponseMessage> Run( [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequestMessage req, ILogger log) { log.LogInformation("C# HTTP trigger function processed a request."); string Sentiment = "POSITIVE"; //Getting the score from the Cognitive Service and determining the sentiment double score = await req.Content.ReadAsAsync<double>(); if(score < 0.3){ Sentiment = "NEGATIVE"; } else if(score < 0.6){ Sentiment = "NEUTRAL"; } return req.CreateResponse(System.Net.HttpStatusCode.OK,Sentiment); } } }
And the source code can be found here. Then , you can deploy the function App to Azure with simple command using Ctrl+Shift+P and deploy to Function App.
Step 3: Create the Azure Cognitive Service to determine the sentiment of the tweet text
As we discussed above, lets create the cognitive service to determine the sentiment score of the tweet. Go to the same resource group and search for cognitive service and create a new service as follows,
Step 4: Create Cosmosdb to store the data
In my application, i have made this step optional as i don’t need to save the tweet data for historical analysis. But you can definitely use cosmosdb to store the tweets to process later. As how you created the Cognitive service create a new cosmosdb account and a database to store the data as follows,
Step 5: Create PowerBI dataset to visualize the data
Navigate to PowerBI portal and create a new dataset to visualize the data we collected as follows,
Step 6: Create the Logic App and configure the Flow
This is the core part of the application as we are going to link together the above component as one flow. You can connect these flows using designer as well as using YAML code. I will be using Designer to create the flow.
As denoted above the first step we need to add the twitter connector which you can pick from the available list of connector named “when a new tweet is posted”
You need to configure the search text which you want to get the tweets , in this case i am going to use the Hashtag “#WFH” and set the interval as 30 seconds.
The second step is to pass the tweet to Azure cognitive service to analyse the sentiment of the tweet and get the score as output
You need to provide the key and the URL which could be obtained from the cognitive service you created above.
The third step is to pass the score obtained above to Azure function which we already deployed to determine the sentiment of the tweet, select the azure function from the connector list as follows,
Next step is to stream the data set to powerBI so that it will be readily available for the visualization. Select the below connector as next step
We are almost done with the configuration, as the last step you need to map the data fields from the above steps to insert into the dataset and the final configuration looks as below.
Step 7: Visualize it in PowerBI
Now we have configured all the steps required in the logic app, navigate to PowerBI and select the data set from which you want to create the report/dashboard. In this case we will select the data set which we have already created as follows,
Rest is yours, you can create lot of usual charts/visualizations according to the way you need. I have created four basic metrics to see how world reacts to “work from home”
- Indicate the total number of unique tweets
- Distribution of sentiments using a pie chart
- Table which displays all the data (user,location,sentiment,score and the tweet)
- Worldmap which shows how distribution of sentiments look like
and this is how my application/dashboard look like.
As you can see the tweets and the sentiments are being inserted to the data set and most of the sentiments are being Positive(Looks green !!!). You can replicate the same architecture for your scenarios ( Brands/ Public opinion etc).
As you see some complex scenarios/problems can be easily sorted out with the help of serverless computing and that is the power of Azure. Cheers!
For those who are interested you can view the Live dashboard.
2 thoughts on “How world reacts to Work from home(#WFH) using Serverless with Azure(CosmosDB + Functions + LogicApps)”
[…] to practice social distancing and work from home. As we have seen in the previous blog about the sentiment analysis of employees working form home, in this blog i will explain about how to build a chat bot to provides answers to most of the […]
[…] You can navigate to Azure Portal and search for Resource Group in the search bar and create a new one as defined here! […]
You must log in to post a comment. | https://sajeetharan.com/2020/03/20/how-world-reacts-to-work-from-home-using-serverless-with-azure/ | CC-MAIN-2021-04 | refinedweb | 1,202 | 53.04 |
User talk:DeRaza360
From Uncyclopedia, the content-free encyclopedia
edit Welcome!
Hello, DeRaza360,:DeRaza360!)
02:03, July 11, 2011 (UTC)
edit Nintendo 3DS
A minute ago I coulda deleted your article like it ain't no thing, but I ain't a stone cold motherfucker like that. You clearly have the goods to take a topic and explore it pretty thoroughly, all you have to do now is get it extra-funny. Put the article up on Pee Review and see if you can fix it up before the new tag expires. Good luck and have fun! Writing is fun as shit, never forget it. -- 01:57, July 30, 2011 (UTC)
- Just saw you already put it in, but from your namespace. So yeah. Bug someone about doing it. -- 21:16, July 30, 2011 (UTC)
edit Pee review
Hello! I just peed all over your article, you can find the review here. If you need more help, don't hesitate to ask for it! Mattsnow 02:31, September 16, 2011 (UTC) | http://uncyclopedia.wikia.com/wiki/User_talk:DeRaza360 | CC-MAIN-2013-48 | refinedweb | 171 | 74.9 |
leau2001 wrote:
>> I made some figure in a loop and i want to close after the
>> figure show.
>>
> Not absolutely sure what you mean, but to produce some
> plots and save them in a loop I do
> f = figure() for i in range(..): plot(...) savefig(...)
> f.clf() # clear figure for re-use close(f)
Often times what people are looking for is they want to the figure to
pop up on the screen, look at it, have it close, and move on. One way
to achieve this is to run mpl in interactive mode
and then insert a time.sleep or call
input("Press any key for next figure: ")
If this is what you are doing, threading becomes important. This is
discussed on the web page linked above, and your best bet is to either
use the tkagg backend or better yet, use ipython in -pylab mode.
Something like
import sys
from pylab import figure, close, show, nx, ion
ion()
while 1:
fig = figure()
ax = fig.add_subplot(111)
x, y = nx.mlab.rand(2,30)
ax.plot(x,y,'o')
fig.canvas.draw()
k = raw_input("press any key to continue, q to quit: ")
if k.lower().startswith('q'):
sys.exit()
show() | https://discourse.matplotlib.org/t/how-to-close-a-figure/5307 | CC-MAIN-2019-51 | refinedweb | 201 | 76.66 |
#!/usr/bin/env python
# authored by shane lindberg
# This script makes configuration files for mplayer. In particular it makes a configuration that crops widescreen
# avi files so they will better fit your 4:3 aspect tv or computer moniter
# to run this program you need to to be in the directory that contains your avi files. Then just simply run the command
# it will check for the dimensions of the avi file using avitype, I think this is a part of transcode. If avitype is not
# installed the script will not work properly. This does not affect your media it only makes a config file that mplayer
# will use. At any time you can simply do 'rm *conf' to remove all of the config files this program created
# then you will be back to your old widescreen self
import os
import sys
current_dir = os.getcwd()
# this python function gets the dimensions of a video file and returns them as a tuple (width,height)
# it uses the linux video program avitype (I think this is part of the transcode package)
# getdimensions.py
def getdimensions(video_file):
import commands
avitype_command= '/usr/bin/avitype "%s" | grep WxH' % video_file
dimensions = commands.getoutput(avitype_command)
width = int(dimensions[-7:-4])
height = int(dimensions[-3:])
WxH = (width,height)
return WxH
# this function finds all media in a given directory by file extention. It then places this media in a list
def movie_find(directory):
ls_dir = os.listdir(directory)
dir_list =[]
for i in ls_dir:
if i.endswith('.avi'):
dir_list.append(i)
return dir_list
# this part checks to make sure the user has root privleges, if not it exits the script
current_user = os.geteuid()
#you may want to remove this if statment. It is needed for me because my movie files are in a write protected space
if current_user != 0:
print "you need to be root to run this script"
sys.exit()
# this part checks to make sure you are in the directory of the files you want to make .conf files for
print "is this the directory which contains the files you want to make .confs for"
print current_dir
answer = raw_input("enter 1 to continue")
if answer != '1':
print "change to the correct directory then restart the script"
sys.exit()
movie_list = movie_find(current_dir)
for i in movie_list:
conf_name = "%s.conf" %i
wxh = getdimensions(i)
width = wxh[0]
# you can change the amount of crop by adjusting the number multiplied by width. The lower the number
# the more of a crop you will get. If the number is at the max 1, it will not be cropped at all
cropped_width = int(.80 * width)
print_tuple = (cropped_width,wxh[1])
conf_file = open(conf_name, "w")
conf_file.write("vf=crop=%s:%s\n"%print_tuple)
conf_file.close() | http://www.linuxquestions.org/questions/linux-software-2/watch-your-widescreen-movies-more-full-screen-349052/ | CC-MAIN-2014-35 | refinedweb | 450 | 62.78 |
My program’s purpose is to receive a binary number (1’s and 0’s) as input, verify that it is a binary number, deny the input if it is not a binary number and continue prompting the user until they enter a binary number, and then output how many ones and zeros are in that binary number.
Here’s the problem I am running into: While my program does output how many ones and zeros are in the number, even when I do enter a proper binary number, my output still says “ERROR: Not a binary number.” For example, if my input was 10001, the output would be this-
Please enter a binary number.
10001
There are 2 ones in the binary number.
There are 3 zeros in the binary number.
ERROR: Not a binary number.
Please enter a binary number.
What did I do wrong in my code?
import java.util.Scanner;
public class NewClass
{
public static void main( String [] args )
{
Scanner scan = new Scanner( System.in);
int i = 0, count1 = 0, count0 = 0;
String number;
System.out.println("Please enter a binary number.");
number = scan.next();
String number1 = "1";
while ((i = number.indexOf(number1, i++)) != -1) {
count1++;
i += number1.length();
}
System.out.println("There are "+ count1 + " ones in the binary number.");
String number2 = "0";
while ((i = number.indexOf(number2, i++)) != -1) {
count0++;
i += number2.length();
}
System.out.println("There are "+ count0 + " zeros in the binary number.");
int total = (count1 + count0);
int length = number.length();
if (length != total);
{
System.out.println("ERROR: Not a binary number.");
System.out.println("Please enter a binary number.");
number = scan.next();
}
}
Try this code. First, you can use scan.nextLine() method to get user's input. and Second, you don't need use two while loops separately for zeros and ones. finally, if there is wrong input such as 2, 3 or else. It should be loop infinitely.
import java.util.Scanner; public class NewClass { public static void main(String [] args) { Scanner scan = new Scanner(System.in); int i = 0, count1 = 0, count0 = 0; String number; char number1 = '0'; char number2 = '1'; int total; while(true){ System.out.println("Please enter a binary number."); number = scan.nextLine(); char [] charArray = number.toCharArray(); while(i < charArray.length){ if(charArray[i] == number1){ count0++; } else if(charArray[i] == number2) { count1++; } i++; } total = count0 + count1; if(charArray.length == total){ System.out.println("There are " + count0 + " zeros in the binary number."); System.out.println("There are " + count1 + " ones in the binary number."); break; } else { System.out.println("ERROR: Not a binary number."); } } } } | https://codedump.io/share/Zj5IF2GYAIDQ/1/how-to-show-an-error-message-for-invalid-user-input | CC-MAIN-2017-26 | refinedweb | 424 | 53.37 |
Discord.py 1.2.2 Bot doesn't respond to commands)
- ellie_ff1493
Can you do some basic debugging and find out if the functions get called, so we know if it’s a problem with the return or the message isn’t making it to the function
- ellie_ff1493
Just put a few prints in it
This is probably unrelated but on-the-same-line comments like on line 6 should be done with the pound/hash (#) sign instead of with triple quotes.
a = 'a' # letter a b = 'b' """ letter b""" assert len(a) == len(b), (a, b)
@Noxive I really don't know discord but if you see the doc, you have to use your $test with a parameter,
like "$test world". Did you do so? Else nothing happens.
@ellie_ff1493 I did. Nothing happens. Like the command event never even occured.
@ccc @cvp I deleted all the comments and PyDoc, there is no change.
could you try:
@bot.command() async def test(ctx): await ctx.send('test')
then type $test
oh, i see.
client.run('token')
shuld be
bot.run('token')
also, client.on_message, etc all become bot.on_message
Bot is a subclass of Client -- your bot instance IS your client.
)
Ok finally solved it. I missed this fragment in the docs:
Overriding the default provided on_message forbids any extra commands from running. To fix this, add a bot.process_commands(message) line at the end of your on_message
Thanks everyone for help. | https://forum.omz-software.com/topic/5684/discord-py-1-2-2-bot-doesn-t-respond-to-commands/10 | CC-MAIN-2019-39 | refinedweb | 242 | 76.11 |
I am currently working on a small project with Fibonacci numbers and it requires me to report an error when given a negative numbers. This is the assignment my professor gave me,
"The program is not assured of receiving good data, thus if -2 is received as the number of numbers, an error should be reported."
I am having a bit of trouble setting up the error report.
# include <iostream.h> void main () { int x=0, y=1, b, n=0,ter; cout<<"Enter The number of terms"; cin>>ter; cout<<x<<" "<<y<<" "; while (n<ter-1) { b=x+y; cout<<b<<" "; x=y; y=b; n++; } } | https://www.daniweb.com/programming/software-development/threads/390237/fibonacci-numbers-question | CC-MAIN-2017-09 | refinedweb | 107 | 65.05 |
SpriteKit Animations and Texture Atlases in Swift
In this SpriteKit tutorial, you’ll create an interactive animation of a walking bear and learn how to:
- Create efficient animations with texture atlases.
- Change the direction the bear faces based on where it’s moving.
- Make your animated bear move in response to touch events.
This tutorial assumes you know the basics of SpriteKit. If not, you might want to start with the SpriteKit Swift 3 Tutorial for Beginners.
It’s time to get started.
Create the Swift Project
Start up Xcode, select File\New\Project…, choose the iOS\Game template and click Next.
Enter AnimatedBearSwift for the Product Name, Swift for Language, SpriteKit for Game Technology. Make sure the options for Integrate GameplayKit, Include Unit Tests, and Include UI Tests are unchecked and click Next:
Choose where to save your project, and click Create.
Now that your project is open, select one of the iPad simulators and build and run to check out the starter project. After a brief splash screen, you should see the following:
If you tap on the screen, you’ll see spinning geometric shapes which flare to life then fade from view. Pretty cool, but those won’t do for your bear.
You can download ready-to-animate art, courtesy of GameArtGuppy.com, by clicking the Download Materials button at the top or bottom of this tutorial. When you unzip the materials, you’ll find a folder named BearImages.atlas which contains eight numbered bear images.
With the help of SpriteKit, you’ll cycle through these eight images to create the illusion of movement — kind of like an old-fashioned flip-book.
You could create an animation by loading in each of these images individually. But there’s a better way: Use a texture atlas to make your animation more efficient.
Texture Atlases
If you’re new to texture atlases, you can think of them as one big mashup of all your smaller images. Rather than eight separate bear images, your texture atlas will be one big image along with a file that specifies the boundaries between each individual bear image.
SpriteKit is optimized to work with texture atlases. So using this approach can improve memory usage and rendering performance.
It’s also nearly effortless.
Just place your image files in a folder with a name that ends with .atlas — like the BearImages.atlas folder you downloaded. Xcode will notice the .atlas extension and automatically combine the images into a texture atlas for you at compile time.
Drag BearImages.atlas over your project and drop it under the AnimatedBearSwift folder in Xcode:
In the dialog box that appears, be sure that the Copy items if needed, Create groups and AnimatedBearSwift options are all checked, and click Finish:
If you expand the folder in Xcode it should look like this:
Before you start animating, get your Xcode template ready by completing a few small tasks.
First, click on AnimatedBearSwift in the Project navigator. Make sure that the AnimatedBearSwift target is selected. In the Deployment Info section, choose iPad for Devices and uncheck the Portrait and Upside Down options so only Landscape Left and Landscape Right are left checked, as shown below:
Next, find GameScene.sks in the project navigator and press Delete. Choose Move to Trash when prompted.
Be sure you’re deleting GameScene.sks, and not GameScene.swift. GameScene.sks is a scene editor which allows developers to visually lay out sprites and other components of a scene. For this tutorial, you’ll build your scene programmatically.
Similarly, delete Actions.sks as you also don’t need that for this tutorial.
With that out of the way, it’s time get that bear moving :] !
A Simple Animation
Start by plopping the bear in the middle of the screen and looping the animation, just to make sure things are working.
Open GameViewController.swift and replace the contents with the following:
import UIKit import SpriteKit class GameViewController: UIViewController { override func viewDidLoad() { super.viewDidLoad() if let view = view as? SKView { // Create the scene programmatically let scene = GameScene(size: view.bounds.size) scene.scaleMode = .resizeFill view.ignoresSiblingOrder = true view.showsFPS = true view.showsNodeCount = true view.presentScene(scene) } } override var prefersStatusBarHidden: Bool { return true } }
This implementation has just what you need to start, so you won’t need the starter code generated by Xcode.
GameViewController is a subclass of
UIViewController that has its root view set to an
SKView, which is a view that contains a SpriteKit scene.
Here, you’ve overridden
viewDidLoad() to create a new instance of
GameScene on startup. You define the scene to have the same size as the view, set the scaleMode along with other basic properties and present the scene. For this tutorial, the rest of your code will be in GameScene.swift.
if let view = view as? SKViewbit — to make sure the view is the correct type before proceeding.
You’re also overriding the
prefersStatusBarHidden getter to hide the status bar so that all the attention will be focused on the bear.
Switch over to GameScene.swift and replace the contents with the following:
import SpriteKit class GameScene: SKScene { private var bear = SKSpriteNode() private var bearWalkingFrames: [SKTexture] = [] override func didMove(to view: SKView) { backgroundColor = .blue } }
At this point you’ve just removed all the project template code to create a nice blank slate (and defined a couple
private variables you’ll need later). Build and run to make sure everything builds OK — you should see a blue screen.
Note: If you are running in the simulator, you may need to manually rotate the screen by selecting Hardware\Rotate Right.
Setting up the Texture Atlas
Add a new method, just after
didMove(to:):
func buildBear() { let bearAnimatedAtlas = SKTextureAtlas(named: "BearImages") var walkFrames: [SKTexture] = [] let numImages = bearAnimatedAtlas.textureNames.count for i in 1...numImages { let bearTextureName = "bear\(i)" walkFrames.append(bearAnimatedAtlas.textureNamed(bearTextureName)) } bearWalkingFrames = walkFrames }
First, you create an
SKTextureAtlas from the bear images.
walkFrames is an array of
SKTexture objects and will store a texture for each frame of the bear animation.
You populate that array by looping through your images’ names (they are named with a convention of bear1.png -> bear8.png) and grabbing the corresponding texture.
Still in
buildBear(), add the following right after
bearWalkingFrames = walkFrames:
let firstFrameTexture = bearWalkingFrames[0] bear = SKSpriteNode(texture: firstFrameTexture) bear.position = CGPoint(x: frame.midX, y: frame.midY) addChild(bear)
Here, you’re creating an
SKSpriteNode using the first frame texture and positioning it in the center of the screen to set up the start of the animation.
Finally, you need to call the method. Add the following code to the end of
didMove(to:)
buildBear()
If you were to build and run now, the bear still wouldn’t be moving. In order to do so, we need an
SKAction. In the same file, add the following new method right after the
buildBear() method:
func animateBear() { bear.run(SKAction.repeatForever( SKAction.animate(with: bearWalkingFrames, timePerFrame: 0.1, resize: false, restore: true)), withKey:"walkingInPlaceBear") }
This action will cause the animation to run with a 0.1 second wait-time for each frame. The
"walkingInPlaceBear" key identifies this particular action with a name. That way, if you call this method again to restart the animation, it will simply replace the existing animation rather than create a new one.
The
repeatForever action repeats whatever action it is provided forever, which results in the given
animate action repeatedly animating through the textures in the texture atlas.
Now all you need to do is call this method to kick off the animation! Add the following line to the end of
didMove(to:):
animateBear()
And that’s it! Build and run the project. You’ll see your bear happily strolling on the screen.
Changing Animation Facing Direction
Things are looking good, except you don’t want this bear meandering about on its own — that would be dangerous! It would be much better if you could control its direction by tapping the screen to tell it which way to go.
Still in
GameScene.swift, add the following method to detect user touches:
override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) { let touch = touches.first! let location = touch.location(in: self) var multiplierForDirection: CGFloat if location.x < frame.midX { // walk left multiplierForDirection = 1.0 } else { // walk right multiplierForDirection = -1.0 } bear.xScale = abs(bear.xScale) * multiplierForDirection animateBear() }
SpriteKit calls
touchesEnded(_:with:) when the user's finger lifts off the screen at the end of a tap.
In the method, you determine which side of the screen was tapped — left or right of center. You want the bear to face in the direction of the tap. You do this by making the node's
xScale positive when the bear should face left (the bear walks to the left by default), and negative to flip the image when the bear should face right.
Finally, you call
animateBear() to restart the animation each time you tap the screen.
Build and run the project, and you'll see your bear happily strolling on the screen as usual. Tap on the left and right sides of the screen to get the bear to change directions.
Moving the Bear Around the Screen
Right now, it looks like the bear is walking in-place on a treadmill. The next step is to get him to meander to different places on the screen.
First, remove the call to
animateBear() at the end of
didMove(to:). You want the bear to start moving when the user taps the screen, not automatically.
Next, add this helper method to the class:
func bearMoveEnded() { bear.removeAllActions() }
This will remove all actions and stop the animation. You'll call this later when the bear reaches the edge of the screen.
Before taking your bear out for a walk, clean-up
touchesEnded(_:with:) by moving all bear-related code to its own method. Replace the entire method with:
override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) { let touch = touches.first! let location = touch.location(in: self) moveBear(location: location) }
The code above records the touch location and sends it over to a new method,
moveBear(location:). It also spawns a warning from Xcode because you haven't yet created this method. Add the following code right after
animateBear():
func moveBear(location: CGPoint) { // 1 var multiplierForDirection: CGFloat // 2 let bearSpeed = frame.size.width / 3.0 // 3 let moveDifference = CGPoint(x: location.x - bear.position.x, y: location.y - bear.position.y) let distanceToMove = sqrt(moveDifference.x * moveDifference.x + moveDifference.y * moveDifference.y) // 4 let moveDuration = distanceToMove / bearSpeed // 5 if moveDifference.x < 0 { multiplierForDirection = 1.0 } else { multiplierForDirection = -1.0 } bear.xScale = abs(bear.xScale) * multiplierForDirection }
Here's what's going on step-by-step:
- You declare
multiplierDirection, a
CGFloatvariable you'll use to set the bear's direction, just as you did earlier in
touchesEnded().
- You define the
bearSpeedto be equal to the screen width divided by 3, so the bear will be able to travel the width of the scene in 3 seconds.
- You need to figure out how far the bear needs to move along both the x and y axes, by subtracting the bear's position from the touch location. You then calculate the distance the bear moves along a straight line (the hypotenuse of a triangle formed from the bear's current position and the tap point). For a full tutorial on the math of game programming, check out Trigonometry for Games.
- You calculate how long it should take the bear to move along this length, by dividing the length by the desired speed.
- Finally, you look to see if the bear is moving to the right or to the left by looking at the x axis of the move difference. If it's less than 0, he's moving to the left; otherwise, to the right. You use the same technique of creating a multiplier for the
xScaleto flip the sprite.
Now all that's left is to run the appropriate actions. That's another substantial chunk of code — add the following to the end of
moveBear(location:):
// 1 if bear.action(forKey: "walkingInPlaceBear") == nil { // if legs are not moving, start them animateBear() } // 2 let moveAction = SKAction.move(to: location, duration:(TimeInterval(moveDuration))) // 3 let doneAction = SKAction.run({ [weak self] in self?.bearMoveEnded() }) // 4 let moveActionWithDone = SKAction.sequence([moveAction, doneAction]) bear.run(moveActionWithDone, withKey:"bearMoving")
Here's what's happening in this second half of
moveBear(location:):
- Start the legs moving on your bear if he is not already moving his legs.
- Create a move action specifying where to move and how long it should take.
- Create a done action that will run a block to stop the animation.
- Combine these two actions into a sequence of actions, which means they will run in order sequentially — the first runs to completion, then the second runs. You have the bear sprite run this sequence with the key "bearMoving". As with the animation action, using a unique key here will ensure there is only one movement action running at a time.
Note: SpriteKit supports sequential and grouped actions. A sequential action means each specified action runs one after the other (sequentially). Sometimes you may want multiple actions to run at the same time. This can be accomplished by specifying a
group action where all the actions specified run in parallel.
You also have the flexibility to set up a series of sequential actions that contain grouped actions and vice versa! For more details see the SKAction class in Apple's SpriteKit documentation.
That was a lot of code — but was it worth it? Build and run to see! Now you can tap the screen to move your bear around.
Where To Go From Here?
You can download the completed version of the project using the Download Materials button at the top or bottom of this tutorial.
For more animation fun, here are a few ideas to try out:
- What if you wanted the bear to moonwalk? Hint: Try building the array of images backwards!
- Try accelerating or slowing down the frame times in
animateBear().
- Try animating multiple bears on the screen at the same time. Hint: Create multiple sprite nodes with actions.
At this point, you should know all you need to start adding animations to your projects using texture atlases. Have some fun and experiment by creating your own animations and seeing what you can do!
If you want to learn more about SpriteKit and Swift, you should check out our book 2D Apple Games by Tutorials. We'll teach you everything you need to know — physics, tile maps, particle systems and even making your own 2D game art.
In the meantime, if you have any questions or comments,
Morten Faarkrog
- Team Lead
Richard Critz | https://www.raywenderlich.com/161314/spritekit-animations-texture-atlases-swift | CC-MAIN-2018-13 | refinedweb | 2,463 | 65.93 |
Compiling under VS2010
Visual Studio 2010 is the best choice for Source. You can use the free C++ Express edition.
game\server\swarm_sdk_server.vcprojwith a text editor and delete "
' $File" from line 2232.
Contents
Debugging
- Follow the instructions for fixing debug compiles in VS2008, which apply here as well.
- Right click on the client and server projects in VS and go to Properties > Configuration Properties > General. Change Target Name to client and server respectively.
File Copying
There is a bug in VS 2010 which will sometimes prevent your newly-compiled DLLs from being copied to your mod directory. To fix it, right-click on the client and server projects in VS and select Properties. Change the Configuration drop-down menu at the top to All Configurations. Then, go to Configuration Properties > Custom Build Step and in the Additional Dependencies field, type in $(TargetPath).
IsBIK error
First, save ibik.h to
src/public/avi/ibik.h.
Then in
src/game/client/enginesprite.h, add to the include list:
#include "avi/ibik.h"
Next, add the highlighted lines towards the bottom:
bool IsAVI(); bool IsBIK(); void GetTexCoordRange( float *pMinU, float *pMinV, float *pMaxU, float *pMaxV ); private: AVIMaterial_t m_hAVIMaterial; BIKMaterial_t m_hBIKMaterial;
Now open
spritemodel.cpp (located in the same directory) and add the following:
bool CEngineSprite::IsBIK() { return ( m_hBIKMaterial != BIKMATERIAL_INVALID ); }
Fixing Other Compiling Errors
- Error LNK2005: ... already defined in memoverride.obj
- See Compiling under VS2008#Fix debug compilation.
- Linker Errors relating to LIBC, LIBCMT, LIBCD, LIBCMTD
- Go to the client project's Properties->Linker->Input->Ignore Specific Default Libraries and enter
libc;libcmtd;libcd;. Navigate to the same place in the server project properties and enter
libcd;libcmtd;.
- Cannot locate "gl/glaux.h" (glview only)
- Remove the line
#include <gl/glaux.h>from
glview.cpp(can be found in src/utils/glview). Then right-click on the project and go to Properties->Configuration Properties->Linker->Input and under Additional Dependencies remove
glaux.lib.
- Warning MSB8012: TargetPath does not match the Linker's OutputFile property ...
- Go to Properties->Configuration Properties->General. Change the values in Output Directory, Intermediate Directory, Target Name, and Target Extension to match the Linker's OutputFile property value given in the error message.
- fatal error C1083: Cannot open include file: 'weapon_sdkbase.h': No such file or directory
- See Source SDK missing files#weapon_sdkbase
Precompiled Headers
If you want to remove the "precompiled header" warnings seen during a full compile, open these files in your client project and move
#include "cbase.h" up to the first line:
- hud_locator.cpp
- hud_credits.cpp
- hud_flashlight.cpp
First-chance exception at 0x00000000
If Client compiles but Server does not, and you have set your mod up for debugging, hl2.exe will crash with extremely ambiguous information. To prevent the game from running if Server does not successfully compile, right click the client project file and select Project Dependencies. Check the checkbox for Server and then click OK.
Compiling Under Linux
VS 2010 introduces a new project file format (
.vcxproj) which is not compatible with the SDK's
VprojToMake tool. A third-party update adding support is available. | https://developer.valvesoftware.com/w/index.php?title=Compiling_under_VS2010&printable=yes | CC-MAIN-2020-10 | refinedweb | 512 | 50.84 |
Recursion is a technique that allows us to break down a problem into one or more subproblems that are similar in form to the original problem. For example, suppose we need to add up all of the numbers in an array. We'll write a function called add_array that takes as arguments an array of numbers and a count of how many of the numbers in the array we would like to add; it will return the sum of that many numbers.
If we had a function that would add up all but the very last number in the array, then we would simply have to add the last number to that sum and we would be done. Add_array is an ideal function for adding up all but the last number (as long as the array contains at least one number). After all, add_array is responsible for taking an array and a count, and adding up that many array elements. If there are no numbers in the array, then zero is the desired answer. These observations suggest the following function:
int add_array(int arr[], int count) { if (count == 0) return 0; return arr[count - 1] + add_array(arr, count - 1); }This function is perfectly legal C, and it operates correctly. Notice that the function has two components:
One of the classic examples of recursion is the factorial function. Although factorial is not the world's most interesting function, it will provide us with many useful observations.
Recall that factorial, which is written n!, has the following definition:
n! = 1 * 2 * 3 * .... * (n-2) * (n-1) * nWe can use this definition to write a C function that implements factorial:
int fact(int n) { int i; int result; result = 1; for (i = 1; i <= n; i++) { result = result * i; } return result; }This is a simple iterative function that mirrors the definition of factorial. We can derive a different definition for factorial by noticing that n! = n * (n-1)! and 1! = 1. For example, 4! = 4 * 3!. Notice that we need to specify a value for 1! because our definition does not apply when n=1. This kind of definition is known as an inductive definition, because it defines a function in terms of itself.
We can write a C function that mirrors this new definition of factorial as follows:
int fact(int n) { if (n == 1) return 1; return n * fact(n - 1); }Notice that this function precisely follows our new definition of factorial. It is recursive, because it contains a call to itself.
Let's compare the two versions:
To successfully apply recursion to a problem, you must be able to break the problem down into subparts, at least one of which is similar in form to the original problem. For example, suppose we want to count the number of occurrences of the number 42 in an array of n integers. The first thing we should do is write the header for our function; this will ensure that we know what the function is supposed to do and how it is called:
int count_42s(int array[], int n);To use recursion on this problem, we must find a way to break the problem down into subproblems, at least one of which is similar in form to the original problem. If we know that the array contains n numbers, we might break our task into the subproblems of:
count_42s(array, n-1);If successful, this recursive call will count all of the occurrences of the number 42 in the first n-1 positions of the array and return the sum (we will discuss the conditions that must hold for a recursive call to be successful in the Section 5; until then, we will assume that all recursive calls work properly). We must now determine how to use this result. If the last element in the array is not 42, then the number of 42s in the entire array is the same as the number of 42s in all but the last element of the array. If the last number in the array is 42, then the number of 42s in the entire array is one more than the number found in the subarray. This suggests the following code:
if (array[n-1] != 42) { return count_42s(array, n-1); } return 1 + count_42s(array, n-1);Here we have two recursive calls (only one of which will actually be used in any given situation). We must now determine whether there are any circumstances under which this code will not work. In fact, this code will not work when n==0; in such a case it tries to subscript the array with -1, which is not a legal array subscript in C. (Oh alright, it's legal; it's just that it's almost never what you want, and will often lead to a segmentation fault or worse.) That means that unless we treat specially the case where n is zero, our function will not work when asked to count the number of 42s in an array of zero items. We will therefore add a base case that will test for n==0 and return zero as its result in that case. This gives the function:
int count_42s(int array[], int n) { if (n==0) return 0; if (array[n-1] != 42) { return count_42s(array, n-1); } return 1 + count_42s(array, n-1); }This is a perfectly good recursive solution to the count_42s problem. It is not the only recursive solution though; there are other ways to break the problem into subpieces. For example, we could break the array into two pieces of equal size, count the number of 42s in each half, then add the two sums. To do this, we will need to hand as arguments to count_42s not just the array and the subscript of the highest value in the array, but also the subscript of the lowest value in the array:
int count_42s(int array[], int low, int high);A call such as count_42s(my_array, A, B) says "count all the occurrences of the number 42 in my_array that lie between position A and position B inclusive."
We can calculate the midpoint between subscript low and subscript high with (high+low)/2. Thus we can count the number of 42s in each half of the array and add them together with:
count_42s(array, low, (low + high) / 2) + count_42s(array, (low + high) / 2 + 1, high);We now have a recursive case but no base case. When will the recursive case fail? It fails when the array does not contain at least two numbers. If the array contains no items, or it contains one item that is not 42, then we should return zero. If the array contains exactly one number, and that number is 42, then we should return one. Putting it all together, we get:
int count_42s(int array[], int low, int high) { if ((low > high) || (low == high && array[low] != 42)) { return 0 ; } if (low == high && array[low] == 42) { return 1; } return count_42s(array, low, (low + high)/2) + count_42s(array, 1 + (low + high)/2, high)); }Note that the line
if (low == high && array[low] == 42)could properly be written simply as if (low==high). The comparison with 42 is included here simply to make each of the relevant conditions explicit in the same expression.
These examples demonstrate that there may be many ways to break a problem down into subproblems such that recursion is useful. It is up to you the programmer to determine which decomposition is best. The general approach to writing a recursive program is to:
How should you think about a recursive subprogram? Do not immediately try to trace through the execution of the recursive calls; doing so is likely to simply confuse you. Rather, think of recursion as working via the power of wishful thinking. Consider the operation of fact(4) using the recursive formulation of fact. 4 is not 1, so the recursive case holds. The recursive case says to multiply fact(3) by 4. Here is where the wishful thinking comes in: wish for fact(3) to be calculated. Because this is a recursive call, your wish will be granted. You now know that fact(3)=6. So fact(4) is equal to 6 times 4, or 24 (which is just what it's supposed to be).
An analogy you can use to help you think this way is corporate management. When the CEO of a corporation tells a vice-president to perform some task, the CEO doesn't worry about how the task is accomplished; he or she relies on the vice-president to get it done. You should think the same way when you are programming recursively. Delegate the subtask to the recursive call; don't worry about how the task actually gets done. Worry instead whether the top-level task will get done properly, given that all the recursive calls work properly.
Another way to think about recursion is to pretend that a recursive call is actually a call to a different function, written by somebody else, that performs the same task that your function performs. For example, suppose we had a library routine called libfact that returned the factorial of its argument. We could then write our own version of fact as:
int fact (int n) { if (n == 1) return 1; return n * libfact(n - 1); }This version of fact correctly returns one if its argument is one. If its argument is greater than one, it calls libfact to calculate (n-1)!, and multiplies the result by n. Because libfact is a library routine, we may assume that it works properly, in this case calculating the factorial of n-1. For example, if n is 4, then libfact is called with 3 as its argument; it returns 6. This is multiplied by 4 to get the desired result of 24.
This example points out that a recursive call is just like any other function call. In particular, a recursive call gets its own parameter list and local variables, just as libfact would. Furthermore, while the recursive call is executing, the top--level call sits there waiting for the recursive call to terminate. This means that execution doesn't halt when a recursive call finds itself at the base case; once the recursive call returns, the top--level call then continues to execute.
One of the most difficult aspects of programming recursively is the mental process of accepting on faith that the recursive call will do the right thing. The following checklist itemizes the five conditions that must hold for recursion to work. If each of these conditions holds for your recursive subprogram, you may feel confident that the recursion will operate correctly:
Let's see whether the recursive fact function meets these criteria:
This section describes some of the ways in which recursive functions are characterized. The characterizations are based on:
int foo(int x) { if (x <= 0) return x; return foo(x - 1); }includes a call to itself, so it's directly recursive. The recursive call will occur for positive values of x.
The following pair of functions is indirectly recursive. Since they call each other, they are also known as mutually recursive functions.
int foo(int x) { if (x <= 0) return x; return bar(x); } int bar(int y) { return foo(y - 1); }
Tail recursive functions are often said to "return the value of the last recursive call as the value of the function." Tail recursion is very desirable because the amount of information which must be stored during the computation is independent of the number of recursive calls. Some modern computing systems will actually compute tail-recursive functions using an iterative process.
The "infamous" factorial function fact is usually written in a non-tail-recursive manner:
int fact (int n) { /* n >= 0 */ if (n == 0) return 1; return n * fact(n - 1); }Notice that there is a "pending operation," namely multiplication, to be performed on return from each recursive call. Whenever there is a pending operation, the function is non-tail-recursive. Information about each pending operation must be stored, so the amount of information is not independent of the number of calls.
The factorial function can be written in a tail-recursive way:
int fact_aux(int n, int result) { if (n == 1) return result; return fact_aux(n - 1, n * result) } int fact(n) { return fact_aux(n, 1); }The "auxiliary" function fact_aux is used to keep the syntax of fact(n) the same as before. The recursive function is really fact_aux, not fact. Note that fact_aux has no pending operations on return from recursive calls. The value computed by the recursive call is simply returned with no modification. The amount of information which must be stored is constant (the value of n and the value of result), independent of the number of recursive calls.. The Fibonacci numbers can be defined by the rule:
fib(n) = 0 if n is 0, = 1 if n is 1, = fib(n-1) + fib(n-2) otherwiseFor example, the first seven Fibonacci numbers are
Fib(0) = 0 Fib(1) = 1 Fib(2) = Fib(1) + Fib(0) = 1 Fib(3) = Fib(2) + Fib(1) = 2 Fib(4) = Fib(3) + Fib(2) = 3 Fib(5) = Fib(4) + Fib(3) = 5 Fib(6) = Fib(5) + Fib(4) = 8This leads to the following implementation in C:
int fib(int n) { /* n >= 0 */ if (n == 0) return 0; if (n == 1) return 1; return fib(n - 1) + fib(n - 2); }Notice that the pending operation for the recursive call is another call to fib. Therefore fib is tree-recursive.
A non-tail recursive function can often be converted to a tail-recursive function by means of an "auxiliary" parameter. This parameter is used to form the result. The idea is to attempt to incorporate the pending operation into the auxiliary parameter in such a way that the recursive call no longer has a pending operation. The technique is usually used in conjunction with an "auxiliary" function. This is simply to keep the syntax clean and to hide the fact that auxiliary parameters are needed.
For example, a tail-recursive Fibonacci function can be implemented by using two auxiliary parameters for accumulating results. It should not be surprising that the tree-recursive fib function requires two auxiliary parameters to collect results; there are two recursive calls. To compute fib(n), call fib_aux(n 1 0)
int fib_aux(int n, int next, int result) { if (n == 0) return result; return fib_aux(n - 1, next + result, next); }
Let's assume that tail recursive functions can be expressed in the general form
F(x) { if (P(x)) return G(x); return F(H(x)); }That is, we establish a base case based on the truth value of the function P(x) of the parameter. Given that P(x) is true, the value of F(x) is the value of some other function G(x). Otherwise, the value of F(x) is the value of the function F on some other value, H(x). Given this formulation, we can immediately write an iterative version as
F(x) { int temp_x = x; while (P(x) is not true) { temp_x = x; x = H(temp_x); } return G(x); }The reason for using the local variable temp_x will become clear soon. Actually, we will use one temporary variable for each parameter in the recursive function.
In the tail-recursive factorial function (fact_aux) given in Section 6,
Therefore the iterative version is:
int fact_iter(int n, int result) { int temp_n; int temp_result; while (n != 1) { temp_n = n; temp_result = result; n = temp_n - 1; result = temp_n * temp_result; } return result; }The variable temp_n is needed so result will be computed on the basis of the unchanged n. The variable temp_result is not really needed, but is used to be consistent.
In the tail-recursive fibonacci function (fib_aux) given in Section 7:
int fib_iter(int n, int next, int result) { int temp_n; int temp_next; int temp_result; while (n != 0) { temp_n = n; temp_next = next; temp_result = result; n = temp_n - 1; next = temp_next + temp_result; result = temp_next; } return result; }
Just as it is possible to convert any recursive function to iteration, it is possible to convert any iterative loop into the combination of a recursive function and a call to that function. The ability to convert recursion to iteration is often quite useful, allowing the power of recursive definition with the (often) greater efficiency of iteration.
Converting iteration to recursion is unlikely to be useful in practice, but it is a fine learning tool. This Section gives several examples of the relationship between programs that loop using C's built-in iteration constructs, and programs that loop using tail recursion.
Suppose that we want to write a program that will read in a sequence of numbers, and print out both the maximum value and the position of the maximum value in the sequence. For example, if the input to our program is the sequence 2, 5, 6, 4, 1, then the program should tell us that the maximum number is 6, and that it occurs in position 3 of the input. Here is a program that performs this task using C's built-in iterative constructs:
#include <stdio.h> #include <assert.h> #include <stdlib.h> #define MAX_NUMS 32 #define max(num1,num2) ((num1) > (num2) ? (num1) : (num2)) typedef int array_of_nums[MAX_NUMS]; int main(void) { array_of_nums nums; int num_count; int max_num; int pos_of_max; int i; /* LOOP 1: Read in the numbers */ num_count = 0; while(scanf("%d", &nums[num_count]) != EOF) { num_count++; } assert(num_count > 0); /* LOOP 2: Find the maximum number */ max_num = nums[0]; for (i = 1; i < num_count; i++) { max_num = max(max_num, nums[i]) ; } /* LOOP 3: Find the position of the maximum number */ pos_of_max = 0; while (nums[pos_of_max] != max_num) { pos_of_max++; } /* Print the results */ printf("The largest number, which was %d, occurred in position number %d\n", max_num, pos_of_max + 1); return EXIT_SUCCESS; }
You will notice that the above program uses three loops and an array where one loop and an integer would have sufficed. However, the purpose of this program is not to show the best way to find the maximum of a sequence of numbers, but rather to exhibit a program that contains a few loops.
We will convert each of the program's three loops into a tail recursive subprogram. The key to implementing a loop using tail recursion is to determine which variables capture the state of the computation; these variables will then serve as the parameters to the tail recursive subprogram. The first loop is a while loop that is responsible for reading numbers into an array. It terminates when scanf returns EOF. Aside from the EOF condition, two variables capture the state of the computation each time through this loop:
int read_nums(int num_count, array_of_nums nums) { if (scanf("%d", &nums[num_count]) != EOF) { return read_nums(num_count + 1, nums); } return num_count; }
Notice that the recursive call to read_nums is closer to the base case (EOF) than the top--level call by virtue of the fact that the call to scanf consumes input. The base case test happens before the recursive call, and the base case will never be skipped. Finally, if there is no more input, we return the current value of num_count, which has accumulated exactly the return value we desire. If there is more input, we add one to num_count and recurse. In either case, if we assume that the recursive call works properly, we get exactly the return value we want. Thus this function meets each of our criteria for valid recursive functions.
We must invoke this procedure with the correct initial values for its arguments; here is the appropriate code from the main program:
/* Read in the numbers */ num_count = read_nums(0, nums); assert(num_count > 0);
Notice that this call to read_nums causes the initial value of num_count in read_nums to be zero (which is exactly the initial value it had in the original loop). Assert is a statement defined in <assert.h>; it flags an error if its argument is false, and does nothing if its argument is true. The assert statement is used to ensure that at least one number is read in. Without this statement, subsequent code will bomb if no numbers are entered.
Let's generalize from this example. In general, an iterative loop may be converted to a tail--recursive function by:
The second loop is a for loop that is responsible for finding the value of the maximum input number. Before entering the for loop, we make the assumption that the zero-th number read in is the largest. We then loop through all but the zero-th element of nums, looking for a higher value. Four variables capture the state of the computation during this loop:
int find_max(int max_so_far, int pos_to_test, int num_count, array_of_nums nums) { if (pos_to_test < num_count) { return find_max(max(max_so_far, nums[pos_to_test]), pos_to_test + 1, num_count, nums); } return max_so_far; }
To invoke this function, we will need to pass the value of the zero-th number read in as its first argument, and 1 (which was the initial value of the loop control variable in the iterative version of the program) as its second argument. Here is the appropriate portion of the main program:
/* find the maximum number */ max_num = find_max(nums[0], i, num_count, nums);
The third loop looks through the input numbers until it finds the first occurrence of the maximum number, then records the position of the maximum number. Three variables control the state of this computation:
int find_pos_of_max(int max_num, int pos_to_test, array_of_nums nums) { if (nums[pos_to_test] != max_num) { return find_pos_of_max(max_num, pos_to_test + 1, nums); } return pos_to_test; }Note that although the recursive call is not textually the last code of the function, it is the last instruction executed before the return if we have not yet found the position of the maximum number. Therefore, it is a tail recursive call. The invocation of this function in the main program looks as follows:
/* Find the position of the maximum number */ pos_of_max = find_pos_of_max(max_num, 0, nums);
Thomas A. Anastasio, Thu Sep 4 21:40:50 EDT 1997Thomas A. Anastasio, Thu Sep 4 21:40:50 EDT 1997
Modified by Richard Chang Fri Jan 16 22:51:58 EST 1998. | http://www.csee.umbc.edu/courses/undergraduate/CMSC202/spring00/lectures/recursion.html | crawl-003 | refinedweb | 3,701 | 57.2 |
Solves the inverse kinematics problem as a mixed integer convex optimization problem. More...
#include <drake/attic/multibody/global_inverse_kinematics.h>
Solves the inverse kinematics problem as a mixed integer convex optimization problem.
We use a mixed-integer convex relaxation of the rotation matrix. So if this global inverse kinematics problem says the solution is infeasible, then it is guaranteed that the kinematics constraints are not satisfiable. If the global inverse kinematics returns a solution, the posture should approximately satisfy the kinematics constraints, with some error. The approach is described in Global Inverse Kinematics via Mixed-integer Convex Optimization by Hongkai Dai, Gregory Izatt and Russ Tedrake, ISRR, 2017.
Parses the robot kinematics tree.
The decision variables include the pose for each body (position/orientation). This constructor loops through each body inside the robot kinematics tree, adds the constraint on each body pose, so that the adjacent bodies are connected correctly by the joint in between the bodies.
Adds joint limits on a specified joint.
Penalizes the deviation to the desired posture.
For each body (except the world) in the kinematic tree, we add the cost
∑ᵢ body_position_cost(i) * body_position_error(i) + body_orientation_cost(i) * body_orientation_error(i) where
body_position_error(i) is computed as the Euclidean distance error |p_WBo(i) - p_WBo_desired(i)| where
Boin the world frame
W.
Boin the world frame
W, computed from the desired posture
q_desired.
body_orientation_error(i) is computed as (1 - cos(θ)), where θ is the angle between the orientation of body i'th frame and body i'th frame using the desired posture. Notice that 1 - cos(θ) = θ²/2 + O(θ⁴), so this cost is on the square of θ, when θ is small. Notice that since body 0 is the world, the cost on that body is always 0, no matter what value
body_position_cost(0) and
body_orientation_cost(0) take.
robotis the input argument in the constructor of the class.
robotis the input argument in the constructor of the class.
Add a constraint that the angle between the body orientation and the desired orientation should not be larger than
angle_tol.
If we denote the angle between two rotation matrices
R1 and
R2 as
θ, namely θ is the angle of the angle-axis representation of the rotation matrix
R1ᵀ * R2, we then know
trace(R1ᵀ * R2) = 2 * cos(θ) + 1
as in To constraint
θ < angle_tol, we can impose the following constraint
2 * cos(angle_tol) + 1 <= trace(R1ᵀ * R2) <= 3
Adds the constraint that the position of a point
Q on a body
B (whose index is
body_idx), is within a box in a specified frame
F.
The constraint is that the point position, computed as
p_WQ = p_WBo + R_WB * p_BQ
where
W.
W.
W.
B. p_WQ should lie within a bounding box in the frame
F. Namely
box_lb_F <= p_FQ <= box_ub_Fwhere p_FQ is the position of the point Q measured and expressed in the
F. The inequality is imposed elementwisely.
Notice that since the rotation matrix
R_WB does not lie exactly on the SO(3), due to the McCormick envelope relaxation, this constraint is subject to the accumulated error from the root of the kinematics tree.
Getter for the decision variables on the position p_WBo of the body B's origin measured and expressed in the world frame.
Getter for the decision variables on the rotation matrix
R_WB for a body with the specified index.
This is the orientation of body i's frame measured and expressed in the world frame.
Constrain the point
Q lying within one of the convex polytopes.
Each convex polytope Pᵢ is represented by its vertices as Pᵢ = ConvexHull(v_i1, v_i2, ... v_in). Mathematically we want to impose the constraint that the p_WQ, i.e., the position of point
Q in world frame
W, satisfies
p_WQ ∈ Pᵢ for one i.
To impose this constraint, we consider to introduce binary variable zᵢ, and continuous variables w_i1, w_i2, ..., w_in for each vertex of Pᵢ, with the following constraints
p_WQ = sum_i (w_i1 * v_i1 + w_i2 * v_i2 + ... + w_in * v_in) w_ij >= 0, ∀i,j w_i1 + w_i2 + ... + w_in = zᵢ sum_i zᵢ = 1 zᵢ ∈ {0, 1}
Notice that if zᵢ = 0, then w_i1 * v_i1 + w_i2 * v_i2 + ... + w_in * v_in is just 0. This function can be used for collision avoidance, where each region Pᵢ is a free space region. It can also be used for grasping, where each region Pᵢ is a surface patch on the grasped object. Note this approach also works if the region Pᵢ overlaps with each other.
Adds the constraint that a sphere rigidly attached to a body has to be within at least one of the given bounded polytopes.
If the polytopes don't intersect, then the sphere is in one and only one polytope. Otherwise the sphere is in at least one of the polytopes (could be in the intersection of multiple polytopes.) If the i'th polytope is described as
Aᵢ * x ≤ bᵢ
where Aᵢ ∈ ℝⁿ ˣ ³, bᵢ ∈ ℝⁿ. Then a sphere with center position p_WQ and radius r is within the i'th polytope, if Aᵢ * p_WQ ≤ bᵢ - aᵢr where aᵢ(j) = Aᵢ.row(j).norm() To constrain that the sphere is in one of the n polytopes, we introduce the binary variable z ∈{0, 1}ⁿ, together with continuous variables yᵢ ∈ ℝ³, i = 1, ..., n, such that p_WQ = y₁ + ... + yₙ Aᵢ * yᵢ ≤ (bᵢ - aᵢr)zᵢ z₁ + ... +zₙ = 1 Notice that when zᵢ = 0, Aᵢ * yᵢ ≤ 0 implies that yᵢ = 0. This is due to the boundedness of the polytope. If Aᵢ * yᵢ ≤ 0 has a non-zero solution y̅, that y̅ ≠ 0 and Aᵢ * y̅ ≤ 0. Then for any point x̂ in the polytope satisfying Aᵢ * x̂ ≤ bᵢ, we know the ray x̂ + ty̅, ∀ t ≥ 0 also satisfies Aᵢ * (x̂ + ty̅) ≤ bᵢ, thus the ray is within the polytope, violating the boundedness assumption.
After solving the inverse kinematics problem and finding out the pose of each body, reconstruct the robot generalized position (joint angles, etc) that matches with the body poses.
Notice that since the rotation matrix is approximated, that the solution of body_rotmat() might not be on SO(3) exactly, the reconstructed body posture might not match with the body poses exactly, and the kinematics constraint might not be satisfied exactly with this reconstructed posture. | https://drake.mit.edu/doxygen_cxx/classdrake_1_1multibody_1_1_global_inverse_kinematics.html | CC-MAIN-2018-43 | refinedweb | 1,032 | 61.36 |
Available with Spatial Analyst license.
Summary
Defines a Large transformation function which is determined from the midpoint and spread shape–controlling parameters as well as the lower and upper threshold that identify the range within which to apply the function.
Learn more about how the parameters affect this transformation function
Discussion
The tool that uses the TfLarge object is Rescale by Function.
The equation for the Large transformation function is:
The inputs to the equation are f1, the spread, and f2, the midpoint.
The function values range from 0 to 1, which are then transformed to the evaluation scale.
The spread determines how rapidly the transformation function values increase and decrease around the midpoint. If the midpoint is within the range of the lower and upper thresholds, then the spread determines how rapidly the function values increase to the toScale and decrease to the fromScale. The larger the spread, the steeper the function values will be around the midpoint. Said another way, as the spread gets smaller, the transformation function values approach the midpoint more slowly.
The selection of an appropriate spread value is a subjective process that is dependent on the range of the input values. The default value of 5 is a good starting point.
Input values that are negative (below zero) will be assigned to the evaluation value that is assigned to zero.
The Large function is most useful when you want the larger input values to receive the higher output evaluation values (the larger values are preferred).
Syntax
TfLarge ({midpoint}, {spread}, {lowerThreshold}, {valueBelowThreshold}, {upperThreshold}, {valueAboveThreshold})
Properties
Code sample
Demonstrates how to create a TfLarge class and use it in the RescaleByFunction tool within the Python window.
import arcpy from arcpy.sa import * from arcpy import env env.workspace = "c:/sapyexamples/data" outRescale = RescaleByFunction("distroads", TfLarge(4075, 4.5, "#", "#", "#", "#"), 1, 10) outRescale.save("c:/sapyexamples/rescaletfla1")
Demonstrates how to transform the input data with the RescaleByFunction tool using the TfLarge class.
# Name: TfLarge_Ex_02.py # Description: Rescales input raster data using a Large TfLarge object midpoint = 4075 spread = 4.5 lowerthresh = "#" valbelowthresh = "#" upperthresh = "#" valabovethresh = "#" myTfFunction = TfLarge(midpoint, spread,a2") | https://pro.arcgis.com/en/pro-app/latest/arcpy/spatial-analyst/tflarge-class.htm | CC-MAIN-2022-27 | refinedweb | 351 | 55.24 |
Built-in magic commands¶
Note. For
example, the IPython kernel uses the
% syntax element for magics as
%
is not a valid unary operator in Python. While, the syntax element has
meaning in other languages.
Here is the help auto generated from the docstrings of all the available magics function that IPython ships with.
You can create an register your own magics with IPython. You can find many user
defined magics on PyPI. Feel free to publish your own and
use the
Framework :: IPython trove classifier.
Line magics¶
%alias¶: 'echox function, which automatically creates aliases for the contents of your $PATH.
If called with no parameters, %alias prints the current alias table.
%alias_magic¶
:
-
%autocall¶)
%automagic¶.
%bookmark¶
Manage IPython’s bookmark system.
%bookmark <name> - set bookmark to current dir %bookmark <name> <dir> - set bookmark to <dir> %bookmark -l - list all bookmarks %bookmark -d <name> - remove bookmark %bookmark -r - remove all bookmarks
You can later on access a bookmarked folder with:
%cd -b <name>
or simply ‘%cd <name>’ if there is no directory called <name> AND there is such a bookmark defined.
Your bookmarks persist through IPython sessions, but they are associated with each profile.
%cd¶
%colors¶
Switch color scheme for prompts, info system and exception handlers.
Currently implemented schemes: NoColor, Linux, LightBG.
Color scheme names are not case-sensitive.
Examples
To get a plain black and white terminal:
%colors nocolor
%config¶
configure IPython%config Class[.trait=value]
This magic exposes most of the IPython config system. Any Configurable class should be able to be configured with the simple line:
%config Class.trait=value
Where
valuew
%debug¶
:
-
%dhist¶
Print your history of visited directories.
%dhist -> print full history%dhist n -> print last n entries only%dhist n1 n2 -> print entries between n1 and n2 (n2 not included)
This history is automatically maintained by the %cd command, and always available as the global list variable _dh. You can use %cd -<n> to go to directory number <n>.
Note that most of time, you should view directory history by entering cd -<TAB>.
%doctest_mode¶:
- Changing the prompts to the classic
>>>ones.
- Changing the exception reporting mode to ‘Plain’.
- Disabling pretty-printing of output..
%edit¶.
%env¶
Get, set, or list environment variables.
Usage:%env: lists all environment variables/values %env var: get value for var %env var val: set value for var %env var=val: set value for var %env var=$val: set value for var, using python expansion if possible
%gui¶ qt5 # enable PyQt.
%history¶
%history [-n] [-o] [-p] [-t] [-f FILENAME] [-g [PATTERN [PATTERN ...]]] [-l [LIMIT]] [-u] [range [range ...]]
Print input history (_i<n> variables), with most recent last.
By default, input history is printed without line numbers so it can be directly pasted into an editor. Use -n to show them.
By default, all input history from the current session is displayed. Ranges of history can be indicated using the syntax:
4
- Line 4, current session
4-6
- Lines 4-6, current session
243/1-5
- Lines 1-5, session 243
~2/7
- Line 7, session 2 before current
~8/1-~6/5
- From the first line of 8 sessions ago, to the fifth line of 6 sessions ago.
Multiple ranges can be entered, separated by spaces
The same syntax is used by %macro, %save, %edit, %rerun
Examples
In [6]: %history -n 4-6 4:a = 12 5:print a**2 6:%history -n 4-6
- positional arguments:
- range
- optional arguments:
-
%load¶
Load code into the current frontend.
- Usage:
%load [options] source
where source can be a filename, URL, input history range, macro, or element in the user namespace.
-n : Include the user’s namespace when searching for source code. -n MyClass %load -n my_module.wonder_function
%loadpy¶
Alias of
%load
%loadpyhas gained some flexibility and dropped the requirement of a
.pyextension. So it has been renamed simply into %load. You can look at
%load’s docstring for more info.
%logon¶
Restart logging.
This function is for restarting logging which you’ve temporarily stopped with %logoff. For starting logging for the first time, you must use the %logstart function, which allows you to specify an optional log filename.
%logstart¶
-
%logstop¶
Fully stop logging and close log file.
In order to start logging again, a new %logstart call needs to be made, possibly (though not necessarily) with a new filename, mode and other options.
%macro¶
Define a macro for future re-execution. It accepts ranges of history, filenames or string objects.
- Usage:
- %macro [options] name
namewhich
%matplotlib¶
%matplotlib [-l] [gui]
Set up matplotlib to work interactively.
This function lets you activate matplotlib interactive support at any point during an IPython session. It does not import anything into the interactive namespace.
If you are using the inline matplotlib backend in the IPython Notebook you can set which figure formats are enabled using the following:
In [1]: from IPython.display import set_matplotlib_formats In [2]: set_matplotlib_formats('pdf', 'svg')
The default for inline figures sets
bbox_inchesto ‘tight’. This can cause discrepancies between the displayed image and the identical image created using
savefig. This behavior can be disabled using the
%configmagic:
In [3]: %config InlineBackend.print_figure_kwargs = {'bbox_inches':None}
In addition, see the docstring of
IPython.display.set_matplotlib_formatsand
IPython.display.set_matplotlib_closefor more information on changing additional behaviors of the inline backend.
Examples
To enable the inline backend for usage with the IPython Notebook:
In [1]: %matplotlib inline
In this case, where the matplotlib default is TkAgg:
In [2]: %matplotlib Using matplotlib backend: TkAgg
But you can explicitly request a different GUI backend:
In [3]: %matplotlib qt
You can list the available backends using the -l/–list option:
In [4]: %matplotlib --list Available matplotlib backends: ['osx', 'qt4', 'qt5', 'gtk3', 'notebook', 'wx', 'qt', 'nbagg', 'gtk', 'tk', 'inline']
-:
-
%notebook¶
%notebook filename
Export and convert IPython notebooks.
This function can export the current IPython history to a notebook file. For example, to export the history to “foo.ipynb” do “%notebook foo.ipynb”.
The -e or –export flag is deprecated in IPython 5.2, and will be removed in the future.
- positional arguments:
- filename Notebook name or filename
%page¶
Pretty print the object and display it through a pager.
%page [options] OBJECT
If no object is given, use _ (last output).
Options:-r: page str(object), don’t pretty-print it.
%pastebin¶”.
%pdb¶.
%pdef¶
Print the call signature for any callable object.
If the object is a class, print the constructor information.
Examples
In [3]: %pdef urllib.urlopen urllib.urlopen(url, data=None, proxies=None)
%pdoc¶
Print the docstring for an object.
If the given object is a class, it will print both the class and the constructor docstrings.
%pfile¶¶
Provide detailed information about an object.
‘%pinfo object’ is just a synonym for object? or ?object.
%pinfo2¶
Provide extra detailed information about an object.
‘%pinfo2 object’ is just a synonym for object?? or ??object.
%precision¶
%profile¶
DEPRECATED since IPython 2.0.
Raise
UsageError. To profile code use the
prunmagic.
See Also
prun : run code using the Python profiler (
prun)
%prun¶¶
%pwd¶
Return the current working directory path.
Examples
In [9]: pwd Out[9]: '/home/tsuser/sprint/ipython'
%pycat¶
%pylab¶
%pylab [--no-import-all] [gui]
Load numpy and matplotlib to work interactively.
This function lets you activate pylab (matplotlib, numpy and interactive support) at any point during an IPython session.
%pylab makes the following imports: *
If you pass
--no-import-all, the last two
*imports will be excluded.
See the %matplotlib magic for more details about activating matplotlib without affecting the interactive namespace.
-:
-
%recall¶
Repeat a command, or get command to input line for editing.
%recall and %rep are equivalent.
- %recall (no arguments):
Place a string version of last computation result (stored in the special ‘_’ variable) to the next input prompt. Allows you to create elaborate command lines without using copy-paste:
In[1]: l = ["hei", "vaan"] In[2]: "".join(l) Out[2]: heivaan In[3]: %recall In[4]: heivaan_ <== cursor blinking
%recall 45
Place history line 45 on the next input prompt. Use %hist to find out the number.
%recall 1-4
Combine the specified lines into one cell, and place it on the next input prompt. See %history for the slice syntax.
%recall foo+bar
If foo+bar can be evaluated in the user namespace, the result is placed at the next input prompt. Otherwise, the history is searched for lines which contain that substring, and the most recent one is placed at the next input prompt.
%rehashx¶
Update the alias table with all executable files in $PATH.
rehashx explicitly checks that every entry in $PATH is a file with execute access (os.X_OK)..
%rerun¶
Re-run previous input
By default, you can specify ranges of input history to be repeated (as with %history). With no arguments, it will repeat the last line.
Options:
-l <n> : Repeat the last n lines of input, not including the current command.
-g foo : Repeat the most recent line which contains foo
%reset¶
Resets the namespace by removing all names defined by the user, if called without arguments, or by removing some types of objects, such as everything currently in IPython’s In[] and Out[] containers (see the parameters for details).
Parameters
-f : force reset without asking for confirmation.
- -s : ‘Soft’ reset: Only clears your namespace, leaving history intact.
- References to objects may be kept. By default (without this option), we do a ‘hard’ reset, giving you a new session and removing all references to objects from the current session.
in : reset input history
out : reset output history
dhist : reset directory history
array : reset only variables that are NumPy arrays
See Also
reset_selective : invoked as
%reset_selective
Notes
Calling this magic from clients that do not implement standard input, such as the ipython notebook interface, will reset the namespace without confirmation.
%reset_selective¶
Resets the namespace by removing names defined by the user.
Input/Output history are left around in case you need them.
full reset:
In [1]: %reset -f
Now, with a clean namespace we can make a few variables and use
%reset_selectiveto']
Notes
Calling this magic from clients that do not implement standard input, such as the ipython notebook interface, will reset the namespace without confirmation.
%run¶
Run the named file inside IPython as a program.
Usage:
%run [-n -i -e -G] [( -t [-N<N>] | -d [-b<N>] | -p [profile options] )] ( -m mod |.
\\*) to suppress expansions. To completely disable these expansions, you can use -G flag.
Options:.
There is one special usage for which the text above doesn’t apply: if the filename ends with .ipy[nb], the file is run as ipython script, just as if the commands were written on IPython prompt.
%save¶
-roption is used, the default extension is
.ipy.
%sc¶
Shell capture - run shell command and capture output (DEPRECATED use !). [1]: sc a=ls *py # a is a string with embedded newlines In [2]: a Out[2]: 'setup.py\nwin32_manual_post_install.py' # which can be seen as a list: In [3]: a.l Out[3]: ['setup.py', 'win32_manual_post_install.py'] # or as a whitespace-separated string: In [4]: a.s Out[4]: 'setup.py win32_manual_post_install.py' # a.s is useful to pass as a single command line: In [5]: !wc -l $a.s 146 [7]: sc -l b=ls *py In [8]: b Out[8]: ['setup.py', 'win32_manual_post_install.py'] In [9]: b.s Out[9]: 'setup.py win32_manual_post_install.py'
In summary, both the lists and strings used for output capture have the following special attributes:
.l (or .list) : value as list. .n (or .nlstr): value as newline-separated string. .s (or .spstr): value as space-separated string.
%set_env¶
Set environment variables. Assumptions are that either “val” is a name in the user namespace, or val is something that evaluates to a string.
- Usage:
- %set_env var val: set value for var %set_env var=val: set value for var %set_env var=$val: set value for var, using python expansion if possible
%sx.
%system.
%tb¶
Print the last traceback with the currently active exception mode.
See %xmode for changing exception reporting modes.
%time¶:
- controlit¶. loop,load_ext¶
Unload an IPython extension by its module name.
Not all extensions can be unloaded, only those which define an
unload_ipython_extensionfunction.
%who¶always
%who_ls¶']
%whos¶
%xdel¶.
Cell magics¶
%%bash¶
%%bash script magic
Run cells with bash in a subprocess.
This is a shortcut for
%%script bash
%%capture¶
:
-
%%latex¶
Render the cell as a block of latex
The subset of latex which is support depends on the implementation in the client. In the Jupyter Notebook, this magic only renders the subset of latex defined by MathJax [here]().
%%perl¶
%%perl script magic
Run cells with perl in a subprocess.
This is a shortcut for
%%script perl
%%pypy¶
%%pypy script magic
Run cells with pypy in a subprocess.
This is a shortcut for
%%script pypy
%%python¶
%%python script magic
Run cells with python in a subprocess.
This is a shortcut for
%%script python
%%python2¶
%%python2 script magic
Run cells with python2 in a subprocess.
This is a shortcut for
%%script python2
%%python3¶
%%python3 script magic
Run cells with python3 in a subprocess.
This is a shortcut for
%%script python3
%%ruby¶
%%ruby script magic
Run cells with ruby in a subprocess.
This is a shortcut for
%%script ruby
%%script¶
%shebang [--proc PROC] [--bg] [--err ERR] [--out OUT]
Run a cell via a shell command
The
%%scriptline is like the #! line of script, specifying a program (bash, perl, ruby, etc.) with which to run.
The rest of the cell is run by that program.
Examples
In [1]: %%script bash ...: for i in 1 2 3; do ...: echo $i ...: done 1 2 3
- optional arguments:
- | http://ipython.readthedocs.io/en/5.x/interactive/magics.html | CC-MAIN-2018-34 | refinedweb | 2,244 | 65.83 |
Quick Summary
This post is about our contributions to the .NET Open Source community to help create a new and more flexible .NET Intermediate Language (IL) Verifier. It will explain what IL is, why you would actually want to modify it and finally introduce to you different ways of verifying such IL. This whole story was initiated by our efforts to improve the quality and stability of our product.
Introduction: Dynatrace and IL
About a year ago, we investigated ways to harden our .NET Agent against bugs that originated from invalid IL. Our Agent is based on manipulating IL at runtime. We have a neat framework for parsing and manipulating IL, which makes changing IL quite easy. However, it was still possible to emit invalid IL and we could only find these bugs through extensive testing. Sometimes the effects were so subtle that we couldn’t even discover those in tests.
For obvious reasons, we want to avoid such bugs at all costs. So we started to search for ways on how to verify the IL we generate. We found ILVerify, a cross-platform, open-source tool by Microsoft, which seemed to satisfy all our requirements. However, we quickly realized that ILVerify was still an early stage prototype. At that moment, we decided to take things into our own hands and to start contributing to the open-source project. But before we dive into ILVerify, let me explain what this whole “IL” deal is about.
What is IL?
Languages such as C# or Java are not directly compiled into machine instructions, but another intermediate code. For C# this intermediate code is called Common Intermediate Language (CIL, or just IL). Most Portable Executable (PE) files compiled and assembled from C#, whether .dll or .exe, are simply composed of IL and its corresponding metadata. At runtime this IL is then translated into native code by a just-in-time (JIT) compiler.
You can easily inspect any PE-file by disassembling it using ildasm, manipulate its IL and then reassemble it using ilasm. Both these tools are installed with Visual Studio.
Why Manipulate IL?
As a normal software developer you might have written C# applications for decades, but never actually had to deal with or even think about the IL that was generated from it. Which is a good thing, since IL is a stack-based assembly language (a fancy term for “hard to read”).
For example, this very simple C# console app:
is compiled to the following IL:
I think we can generally agree that the former is more readable than the latter. This is also the reason why most people consider manually reading or even manipulating IL “crazy”.
However, there are a lot of valid reasons why you would still want to do just that. The classes of the Reflection.Emit namespace of C# even allow you to dynamically create new types at runtime, by manually emitting IL instructions. In other words, you can write code that writes code. Another common reason for directly manipulating IL is to instrument existing applications for debugging, profiling or aspect oriented programming purposes. At Dynatrace, we use it for profiling .NET applications.
The Hassle of Manipulating IL
Other than it being hard to read, another reason why you would generally try to avoid writing IL yourself is its fragility. Unlike with C# there is no compiler, which checks whether your code conforms to the rules of the language used. IL is directly translated by the JIT compiler and thus it is your own responsibility to emit valid IL.
But what happens if you emit invalid IL?
In the best case, an InvalidProgramException is thrown when you run your application. At this point you at least know that something is wrong with your code, even though you will still have to analyze what exactly is wrong with your code. This involves analyzing the emitted IL and taking note of the stack state of the entire method by hand. On top of that you will have to get familiar with the ECMA-335 standard if you want to emit your own IL in a serious way, since documentation on this matter is rather sparse.
As if all of this wasn’t discouraging enough, this exception might only be thrown “just-in-time”, as the name of the JIT compiler suggests, meaning once the invalid code is actually executed. This way invalid code parts which rely on conditional branches, or maybe even some race condition, may never be executed during your tests, but could then lead to a fatal crash in production. Perfect.
Verifying IL
Considering the fragility of IL and its rather bad readability, it obviously makes sense to seek a way of being able to verify your own IL in a robust and reliable way. So let’s examine and compare the currently available possibilities of verifying IL.
#1 – The Old-School Way
Gather your PE-file, fire up ildasm, get a pen and paper and start taking note of the stack state, while trying to learn the ECMA spec by heart.
This is how everyone starts out. Even though fancy open-source tools like dnSpy (which I highly recommend) make investigating IL a lot easier, this is a time consuming and quite frankly not very pleasurable task. On top of that it is also prone to errors by itself. Once snippets of IL finally start to haunt your dreams, it might be time to switch to one of the next solutions.
#2 – PEVerify
PEVerify is a tool for verifying the metadata and IL of .NET PE-files, which was developed by Microsoft. Just like ilasm and ildasm, it is shipped with Visual Studio and you can run it on any assembly using the Developer’s Command Prompt. PEVerify is a great and reliable tool, which automates the processes described earlier. However, it has some major limitations, such as not being compatible to .NET Core and not being able to verify mscorlib.dll, which was one of our requirements.
#3 – ILVerify
ILVerify is a cross-platform, open-source tool currently being developed as part of Microsoft’s CoreRT repository. The goal of ILVerify is to alleviate PEVerify’s limitations, thus being able to verify any assembly, including mscorlib and .NET Core assemblies, while being developed entirely in C#. Due to its open-source nature, anyone can browse its source code and contribute on Github.
Currently ILVerify can be run as a console application, just like PEVerify, even though there is a public API surface planned. In order to verify an assembly you also have to specify the location of all referenced assemblies. For example, in order to verify the assembly asm.exe, which references mscorlib.dll and System.dll, you would run:
ilverify.exe <path-to-asm.exe> -reference <path-to-mscorlib.dll> -reference <path-to-system.dll>
or simply:
ilverify.exe <path-to-asm.exe> -r <path-to-libfolder-*.dll>
Additionally you can specify a regular expression defining specific methods to be included or excluded with -include and -exclude, or just -i and -e. You can also define the base library to be used with -system-module, or just -s, for assemblies using a base library other than mscorlib.
Contributing to ILVerify
When we came across ILVerify about six months ago, it was still in an early development stage. The basic structure was there, but most IL-instructions simply yielded a NotImplementedException. Since investing into this tool had the potential of not only preventing bugs but also improving the overall quality of our product, we decided to start contributing to it. During the last two months, I had the honor of doing so and not only learned a lot about IL, but also the open-source contribution process. Contributing to the CoreRT repository was hugely satisfying, also due to the awesome work of the Microsoft employees in charge of it.
The Current State of ILVerify
At present, ILVerify is very close to a verification capability comparable to PEVerify. While there are still some minor verification rules missing and some false negatives popping up, the project is now in a state, that allowed us to fix errors in our code, which we might have never found otherwise. In the end, it did exactly what we expected from it: improve the overall quality and stability of our product.
Future plans for ILVerify include being used as a verifier for the Roslyn Compiler and implementing newly proposed IL verification rules.
Summary
Whenever you are in a situation where you need to generate your own IL, it is a good idea to verify that your code is actually valid. Not only does it prevent your application from crashing, but can also improve the quality of your code. PEVerify has historically been the go-to tool for IL generators, but has some major limitations. ILVerify is a cross-platform, open-source tool that is currently being developed and serves as an alternative to PEVerify, avoiding its limitations and being updated with the newest verification rules introduced in new standards.
I must thank Jan Kotas for his amazingly fast and professional reviews on Github, which made the process of contributing to ILVerify a very pleasant experience. I must also thank Christoph Neumüller for continuously pushing this project and therefore making this all possible in the first place and Michael Mayr for his awesome support during the last months.
If you have any questions about ILVerify or just want to say hello, feel free to contact me via email arzt.samuel@live.de or Twitter @SamuelArzt. | https://www.dynatrace.com/news/blog/verifying-your-own-dotnet-il-code/ | CC-MAIN-2019-30 | refinedweb | 1,594 | 54.63 |
Modern, elegant, minimalistic but powerful plugin system for Python 3.5+.
Project description
This one day in the past, you took your first step on your programming journey. Some days were tough, some days were great. You made progress. You made mistakes. You learned some best practices and design patterns. You’ve come to idolize low coupling and modularity. Eventually, you started working on more ambitious projects, always increasing in complexity; the possibilities endless. After implementing your 7th export format for your latest project, the words of the great Raymond Hettinger come to mind: There’s got to be a better way! After a short stint going all out on inheritance and mixins, you turn your attention to plugins. You read up on them, get the general idea and start looking at what’s available for Python. You are happy to find out there are a quite a few on the market. You start trying them out and for the most part, they work great, but it always feels like something is missing. Perhaps they make you go through crazy code gymnastics, lack features or are plain just horrible to look at. This is the moment you discover offshoot.
*offshoot*:
- Is a modern, elegant and minimalistic plugin system for Python 3.5+
- Is unintrusive; Stays out of our way. No file copying, no symlinks, nada!
- Provides a clear and simple plugin definition format.
- Understands your flow: Provides installation callbacks, can maintain a configuration and/or a requirements file for your plugins and has an optional plugin validation system on install.
- Can discover and import any plugin of any type anywhere in your code with a one-liner. No more complex plugin management.
- Batteries included. Comes with an executable to install/uninstall plugins.
- Is fully-tested and is under active development.
- Does not aim to please the PEP 8 gods and the purists. Some dark magic is used unapologetically.
Quick Tour
Your Class you’d like to make pluggable
class ExportFormat: def __init__(self): self.name = "Export Format" def export(self, data): raise NotImplementedError() @classmethod def is_an_export_format(cls): return True
Your Class made pluggable with *offshoot*
import offshoot class ExportFormat(offshoot.Pluggable): def __init__(self): self.name = "Export Format" @offshoot.expected def export(self, data): raise NotImplementedError() @classmethod @offshoot.forbidden def is_an_export_format(cls): return True
Yes, that’s it! More about those optional decorators later.
A sample *offshoot* plugin definition
import offshoot class YAMLExportFormatPlugin(offshoot.Plugin): name = "YAMLExportFormatPlugin" version = "0.1.0" libraries = ["PyYAML"] files = [ {"path": "export_formats/yaml.py", "pluggable": "ExportFormat"} ] config = { "export_options": { "width": 80 } } @classmethod def on_install(cls): print("\n\n%s was installed successfully!" % cls.__name__) @classmethod def on_uninstall(cls): print("\n\n%s was uninstalled successfully!" % cls.__name__) if __name__ == "__main__": offshoot.executable_hook(YAMLExportFormatPlugin)
A sample *offshoot* plugin file
import offshoot from export_format import ExportFormat import yaml class YAMLExportFormat(ExportFormat): def export(self, data): return yaml.dump(data)
Installing an *offshoot* plugin from the command line
offshoot install YAMLExportFormatPlugin
Automatic *offshoot* plugin discovery and importing
import offshoot offshoot.discover("ExportFormat", globals()) YAMLExportFormat # Now in scope!
Verifying if class name string maps to a discovered plugin class
import offshoot class_mapping = offshoot.discover("ExportFormat") # We omit scope param to get the class mapping "YAMLExportFormat" in class_mapping # True
Requirements
- PyYAML (On the roadmap to make it optional so the project is 100% dependency-free!)
Installation
pip install offshoot
Configuration
Default Configuration Values
{ "modules": [], "file_paths": { "plugins": "plugins", "config": "config/config.plugins.yml", "libraries": "requirements.plugins.txt" }, "allow": { "plugins": True, "files": True, "config": True, "libraries": True, "callbacks": True }, "sandbox_configuration_keys": True }
Initializing offshoot
Initializing offshoot will save a YAML copy of the default configuration to offshoot.yml which you can then modify to suit your needs. Just run the following in the command line: offshoot init
Configuration Keys
- modules: Perhaps the most important key to modify since nothing will happen without some valid module paths in there. offshoot needs to discover pluggable classes in the project at import time. It will explore the modules listed here to find classes that extend offshoot.Pluggable
- file_paths: Directories and file paths to use when offshoot needs to hit the file system. plugins is where offshoot will look for plugin files. The defaults should suffice, but do make sure they exist.
- allow: offshoot allows you to enable/disable certain part of the plugin installation. It is recommended to leave all values to True.
- sandbox_configuration_keys: If you chose to let offshoot merge configuration keys during plugin installation, it can either merge them all at the root level (False) or sandbox them under the plugin name (True)
Usage
Initializing Offshoot
The first thing you will want to do after installing offshoot is run offshoot init in the command line at the root of your project. This will create a configuration file named offshoot.yml. You can leave it be for now but we will go back to it later.
Making Your Classes Pluggable
To make a class pluggable with offshoot all that technically needs to be done is extend it with offshoot.Pluggable
So you go from this:
class Shape: pass
To this:
import offshoot class Shape(offshoot.Pluggable): pass
Then for every class you make pluggable, you append its module path to offshoot.yml under the modules key. This means that if you make shape.py and shapes/rectangle.py pluggable, your modules value will look like this modules: ["shape", "shapes/rectangle"]
Magic Validation
offshoot comes with an optional validation system for your pluggable classes. You can control which class, instance and static methods are either expected, accepted or forbidden in a plugin file. The way you do this couldn’t be any simpler: you wrap them with a decorator. It ends up looking like the following:
import offshoot class PluggableClass(offshoot.Pluggable): @offshoot.expected def expected_function(self): raise NotImplementedError() @classmethod @offshoot.accepted def accepted_function(cls): raise NotImplementedError() @staticmethod @offshoot.forbidden def forbidden_function(): raise NotImplementedError()
If a plugin file is missing an expected method, or defining a forbidden method, it will be rejected and the installation will be stopped and reverted.
They are called magic decorator because under the hood, they do absolutely nothing. They are however found using Python’s abstract syntax trees (ast in the stdlib) during plugin installation and validation can be performed.
Installation Callbacks
In addition to magic validators, you have the option to add callbacks that will be executed for each file installed/uninstalled by a plugin.
To leverage these callbacks, simply add these functions to your pluggable class:
@classmethod def on_file_install(cls, **kwargs): pass @classmethod def on_file_uninstall(cls, **kwargs): pass
Contained in kwargs are the file path and the name of the pluggable class.
One common application for these callbacks would be to seed some values in a database. If we stick the ExportFormat example, once you install a YAMLExportFormat plugin, you may want to add it to a export_formats table along with the name of the class. That could then allow list the available export format options in a more logical fashion. Similarly, you’d want that option to be cleaned up when you uninstall the plugin.
Anatomy of an offshoot Plugin
Expected File Structure
PLUGINS_DIRECTORY (defined in offshoot.yml) ├── ShapesPlugin # Name of the plugin. Matches the plugin class name in plugin.py │ ├── __init__.py │ ├── files # Any file other than the plugin definition goes here │ │ ├── __init__.py │ │ ├── helpers.py # Supporting file. Not in plugin definition but can be accessed by plugin files. │ │ └── shapes │ │ ├── __init__.py │ │ ├── rectangle.py # Variant of the Shape pluggable class. Included in plugin definition file │ │ ├── star.py # Variant of the Shape pluggable class. Included in plugin definition file │ │ └── triangle.py # Variant of the Shape pluggable class. Included in plugin definition file │ └── plugin.py # Plugin definition file ├── __init__.py
You are free to structure your file hierarchy exactly the way you want inside of the files directory. You can also add as many supporting files as needed.
__init__.py files DO need to be peppered everywhere as we want our plugin structure to be accessible as a package.
Plugin Definition File (plugin.py)
The plugin definition file turns out to be a Python file with a class that extends offshoot.Plugin. The name of that class needs to be an exact match of the name of the directory containing the plugin.
Here’s what plugin definition file would look like for a plugin using the file structure above. It is annotated to explain what the various sections do.
import offshoot class ShapesPlugin(offshoot.Plugin): # We extend offshoot.Plugin name = "ShapesPlugin" # We define a name for the plugin. Matches the class name. version = "0.1.0" # We define a version number for the plugin. # A list of plugin dependencies to check for (by name) before installing the plugin. # Optional. plugins = [ "RequiredShapesPlugin" ] # A list of required PyPI packages for the plugin. # Optional. These libraries will be merge to your offshoot requirements.txt during the installation. Set to None if you don't intend to use it. libraries = [ "requests", "requests-respectful==0.2.0" ] # A list of file objects that target pluggable classes in the project. # Required. "path" is the file path relative to the plugin root. "pluggable" is the pluggable class' name. files = [ {"path": "shapes/rectangle.py", "pluggable": "Shape"}, {"path": "shapes/triangle.py", "pluggable": "Shape"}, {"path": "shapes/star.py", "pluggable": "Shape"} ] # A Python dict containing configuration keys that can be referenced by your plugin files at runtime. # Optional. Any valid Python dict is accepted. Set to None if you don't intend to use it. config = { "i_am_a": { "plugin": True, "human": False }, "urls": ["", ""], "count": 42, } # Callbacks to be performed once per install / uninstall # Optional. @classmethod def on_install(cls): print("\n\n%s was installed successfully!" % cls.__name__) @classmethod def on_uninstall(cls): print("\n\n%s was uninstalled successfully!" % cls.__name__) # This hook always needs to be present in a plugin definition file. # It is used by the installation process. Pass it the class you just defined above. if __name__ == "__main__": offshoot.executable_hook(ShapesPlugin)
Plugin Files Extending the Pluggable Classes
Each plugin file needs to define a class that extends one of the classes that were previously made pluggable. If magic validation decorators were used when making the class pluggable, the plugin file needs to validate against that protocol successfully to be installed.
Here is a sample plugin file, following our Shapes plugin theme:
from shape import Shape class Rectangle(Shape): def __init__(self, **kwargs): super().__init__(**kwargs) self.name = "Rectangle" self.sides = 4 self.is_polygon = True @property def shape_is_a_polygon(self): return "A Rectangle is a polygon!" def area(self): raise NotImplementedError() def draw(self): raise NotImplementedError()
You are free to go way beyond the pluggable class’ protocol. You can require functions from supporting files bundled with the plugin and make use of your required PyPI packages and/or configuration keys.
The offshoot Manifest
The offshoot manifest is a critical file that gets created when you attempt to install a plugin for the first time. It contains the metadata of installed plugins and helps maintain the overall offshoot state. Look for offshoot.manifest.json if you want to take a peek under the hood. Be aware that editing or deleting this file will cause issues!
The offshoot Executable
The executable is rather minimalistic at the moment but it used to perform two crucial operations: Installing and uninstalling plugins.
Installing Plugins
The first step is making sure the plugin has been copied/cloned into the plugin directory defined in offshoot.yml
After that, simply run the following in the command line:
offshoot install PLUGIN_NAME
What happens when a plugin is installed?
- The offshoot configuration file is consulted to fetch the allow flags
- If plugins are allowed: Every plugin listed as a dependency in the plugin definition is verified to be installed before continuing.
- If files are allowed: Every plugin file in the plugin definition is validated against its pluggable class’ protocol. If even one validation test fails, the installation fails and is reverted. File installation callbacks are executed.
- If config is allowed: The configuration keys contained in the plugin definition file are merged in the configuration file defined in offshoot.yml.
- If libraries are allowed: Libraries contained in the plugin definition file are merged in the libraries file defined in offshoot.yml.
- If callbacks are allowed: The on_install callback is executed.
- The plugin metadata is appended to the manifest.
The installation process will not automatically install libraries with pip. It is assumed the user will permorm the pip installation.
Uninstalling Plugins
offshoot uninstall PLUGIN_NAME
Discovering & Importing Plugins
Last but not least, a simple way of getting installed plugins’ classes into scope has been provided.
Here’s how it’s done:
import offshoot offshoot.discover("Shape", globals()) # All installed plugin classes that extend the Shape pluggable class are now into scope!
This can be done literally anywhere in your application.
Tips & Tricks
Listing installed plugins
A utility method is exposed allowing you to fetch a list of the currently installed plugins as per the manifest.
Simply run:
offshoot.installed_plugins()
Example output:
["ShapesPlugin - 0.1.0"]
Merging the offshoot configuration keys with your application configuration at runtime.
Chances are you already have a YAML configuration file for your application. In some situations, it may become desirable to merge that configuration dict with offshoot’s configuration dict.
Here’s a code snippet to achieve this:
import yaml # Application Configuration with open("config/config.yml", "r") as f: config = yaml.safe_load(f) # Offshoot Configuration import offshoot with open(offshoot.config["file_paths"]["config"], "r") as f: plugin_config = yaml.safe_load(f) # Merge Configuration. Application Configuration takes priority in the key space. config = {**plugin_config, **config}
You can then import config from this file to have the merged configurations.
Tests
Unit tests for the project can be run with the following command:
python -m pytest tests --spec
You can install the test requirements by refering to requirements.test.txt in the repository.
Examples
You can find full examples in the examples directory of the repository.
Roadmap / Contribution Ideas
- Make PyYaml optional. Use it if it’s there, otherwise default on JSON or INI
- Explore supporting the extension of 3rd-party modules.
- Windows support? Python 2 branch?
- Clean up tests. A lot of repetition.
- … Anything else that makes sense really!
If you like offshoot, feel free to check out `requests-respectful <>`__, also by `SerpentAI <>`__
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/offshoot/ | CC-MAIN-2022-27 | refinedweb | 2,395 | 50.63 |
Re: Need help in understanding x86 syscall
From: Zachary Amsden (zach_at_vmware.com)
Date: 08/11/05
- ]
Date: Thu, 11 Aug 2005 12:58:23 -0700 To: Steven Rostedt <rostedt@goodmis.org>
Steven Rostedt wrote:
>sysenter_entry code, which is not triggered, as well as an objdump of
>
>
>libc.so shows a bunch of int 0x80 calls.
>
>
The NPTL version of glibc (the TLS library) uses this.
zach-dev2:~ $ ldd /bin/ls
linux-gate.so.1 => (0xffffe000)
librt.so.1 => /lib/tls/librt.so.1 (0x4002e000)
libacl.so.1 => /lib/libacl.so.1 (0x40038000)
libselinux.so.1 => /lib/libselinux.so.1 (0x4003e000)
--> libc.so.6 => /lib/tls/libc.so.6 (0x4004c000)
libpthread.so.0 => /lib/tls/libpthread.so.0 (0x40162000)
/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)
libattr.so.1 => /lib/libattr.so.1 (0x40174000)
You'll find getpid much faster with TLS libraries (it's cached, no
longer a system call):
With TLS:
zach-dev2:Micro-bench $ time ./getpid
real 0m0.080s
user 0m0.080s
sys 0m0.000s
Without TLS:
zach-dev:Micro-bench $ time ./getpid
real 0m5.041s
user 0m2.520s
sys 0m2.520s
If you're feeling really masochistic, I've added a demonstration of how
you can call sysenter from userspace without glibc. The code verifies
that there is no way to exploit the kernel to achieve reading arbitrary
memory through a non-flat data segment. It deliberately segfaults at
the end. Let me point out this is a very wrong way to do things - you
should always use the vsyscall page, and in fact, this code actually
depends on the vsyscall page even if it is not apparent. I fake the
same frame structure that the vsyscall page would have pushed to
simulate a vsyscall entry, but the kernel will always return to the
vsyscall page, which then returns back to us. Fun stuff. If you leave
the kernel hack for ud2 in your kernel, I would expect it to blow up in
amazing fashion when running the code below.
zach-dev2:~ $ gcc sysenter.S sysenter.c -o sys
sysenter.c: In function `main':
sysenter.c:34: warning: passing arg 2 of `signal' from incompatible
pointer type
sysenter.c:49: warning: passing arg 3 of `sysenter_call_2' makes pointer
from in
teger without a cast
sysenter.c:22: warning: return type of `main' is not `int'
zach-dev2:~ $ ./sys
interrupted %ebp = 0xbaadf00d
phew
Segmentation fault (core dumped)
zach-dev2:~ $ gdb sys core
GNU gdb 6/tls/libthread_db.so.1".
Core was generated by `./sys'.
Program terminated with signal 11, Segmentation fault.
warning: current_sos: Can't read pathname for load map: Input/output error
Reading symbols from /lib/tls/libc.so.6...done.
Loaded symbols for /lib/tls/libc.so.6
Reading symbols from /lib/ld-linux.so.2...done.
Loaded symbols for /lib/ld-linux.so.2
#0 0xffffe410 in ?? ()
(gdb) print $eax
$1 = -14
(gdb)
#define EFAULT 14 /* Bad address */
int main(int argc, char *argv[]) {
int j;
for (j = 0; j < 1000000; j++) {
getpid(); getpid(); getpid(); getpid(); getpid();
getpid(); getpid(); getpid(); getpid(); getpid();
}
}
#include <sys/syscall.h>
.text
.global sysenter_call
.global sysenter_call_2
/* void sysenter_call(pid_t pid, int signo, short ds, void *addr) */
sysenter_call:
push %ebx
push %edi
push %ebp
push %ds
movl %esp, %edi
movl 20(%esp), %ebx /* pid */
movl 24(%esp), %ecx /* signo */
movl 28(%esp), %ds /* exploit DS */
movl 32(%esp), %ebp
movl %ebp, %esp
push $sysenter_return
push %ecx
push %edx
subl $16, %ebp
push $0xbaadf00d
movl $SYS_kill, %eax
sysenter
/* vsyscall page will ret to us here */
sysenter_return:
mov %edi, %esp
pop %ds
pop %ebp
pop %edi
pop %ebx
ret
sysenter_call_2:
push %ebx
push %ebp
movl 12(%esp), %ebx /* pid */
movl 16(%esp), %ecx /* signo */
movl 20(%esp), %ebp
movl $SYS_kill, %eax
sysenter
.data
test: .long 0
#include <stdio.h>
#include <signal.h>
#include <asm/ldt.h>
#include <asm/segment.h>
#include <sys/types.h>
#include <unistd.h>
#include <sys/mman.h>
#define __KERNEL__
#include <asm/page.h>
extern void sysenter_call(pid_t pid, int signo, short ds, void *addr);
extern void sysenter_call_2(pid_t pid, int signo, void *addr);
void catch_sig(int signo, struct sigcontext ctx)
{
__asm__ __volatile__("mov %0, %%ds" : : "r" (__USER_DS));
printf("interrupted %%ebp = 0x%x\n", ctx.ebp);
if (ctx.ebp == 0xbaadf00d)
printf("phew\n");
}
void main(void)
{
struct user_desc desc;
short ds;
unsigned long addr;
unsigned *stack;
unsigned long offset;
stack = (unsigned *)mmap(0, 4096, PROT_EXEC|PROT_READ|PROT_WRITE,
MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
stack = &stack[1024];
addr = 0xf0000;
offset = __PAGE_OFFSET-(unsigned)stack+addr+16;
signal(SIGUSR1, catch_sig);
desc.entry_number = 0;
desc.base_addr = offset;
desc.limit = 0xffffff;
desc.seg_32bit = 1;
desc.contents = MODIFY_LDT_CONTENTS_DATA;
desc.read_exec_only = 0;
desc.limit_in_pages = 1;
desc.seg_not_present = 0;
desc.useable = 1;
if (modify_ldt(1, &desc, sizeof(desc)) != 0) {
perror("modify_ldt");
}
ds = 0x7; /* TI | RPL 3 */
sysenter_call(getpid(), SIGUSR1, ds, stack);
sysenter_call_2(getpid(), SIGSTOP, __PAGE_OFFSET+4096);
printf("not reached - core should show %%eax == -EFAULT\n");
}
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at
Please read the FAQ at
- ] | http://linux.derkeiler.com/Mailing-Lists/Kernel/2005-08/3275.html | crawl-002 | refinedweb | 840 | 61.02 |
Type: Posts; User: _wall_
I tried contacting the owner..but he was out of reach..
is there a way that we can build this applet without the anonymous class..i haven't used anonymous class ever, so i am a bit confused how it...
actually i got this code by decompiling several class file.
i plan to extend this project so i thought of decompiling the class file and then work on it.
...
i have the same error at 3 different places in the project.
mayb if you compile the project i attached, it'll help you.
please help me..:)
hey keang,
thanks for showing interest.
package encryptionproject;
import java.applet.Applet;
import java.awt.*;
hey everyone...i was compiling a project code...
but it is having a very small error..please take a look and help me...
its a sincere request... | http://forums.codeguru.com/search.php?s=22d6174b0c6c6e57097759db5261dfb2&searchid=5374889 | CC-MAIN-2014-42 | refinedweb | 142 | 79.97 |
Update: Solution presented in this post is integrated in XStream 1.2.2 and you are advised to use the official (and maintained) release. See this post for more information.
It seems that JSON vs. XML debate was one of the hot topics for this winter. Again, I think that “vs.” part is sufficient and that both XML and JSON should have their place in overall technology landscape. One of the obvious JSON advantages is that it can be directly evaluated in JavaScript. And for the Ajax-world we live in, it is not a small thing.
For one project, I started looking for a library that would enable me easy transformations between Java and JSON data. JSON in Java looks nice, but what I really want is a more automated and configurable library like those we have for XML processing. My first thought was XStream since I like its simple API, extensible architecture, powerful converters and support for annotations. After first check I found that from version 1.2 on there is a partial support for JSON (which was logical thing to expect). So, currently you can serialize your Java objects to JSON format.
public class JSONWrite { public static void main(String[] args) { Product product = new Product("Banana", "123", 23.00); XStream xstream = new XStream(new JsonHierarchicalStreamDriver()); String result = xstream.toXML(product); System.out.println(result); } }
This example initiates a
Product object (with
name,
id and
price properties) and serialize it through
JsonHierarchicalDriver. As a result we will get the following output:
{"org.sensatic.jqr.json.Product": { "name": "Banana", "id": "123", "price": {"23.0"} }}
Unfortunately, there is no read support at this moment for JSON format. So if you try to convert this JSON data back to Java object
public class JSONRead { public static void main(String[] args) { String json = "{\"org.sensatic.jqr.json.Product\": {" + "\"name\": \"Banana\"," + "\"id\": \"123\"," + "\"price\": {\"23.0\"}" + "}}"; XStream xstream = new XStream(new JsonHierarchicalStreamDriver()); Product product = (Product)xstream.fromXML(json); System.out.println(product); } }
you’ll get the following exception
java.lang.UnsupportedOperationException: The JsonHierarchicalStreamDriver can only write JSON
After a little more search, I stumbled upon Jettison project which implements “a collection of Stax parsers and writers which read and write JSON”. So instantly I wanted to create a XStream driver that uses Jettison as an underlying library to parse to and from JSON.
It was a little bit trickier then I first thought, since I had to patch Jettison (Mapped convention classes) to support nested arrays, element names with “.” (so it can use full class names as element names) and some other minor things. After all this, it worked at least for the usage that I needed at the moment (I didn’t test its behavior when used with namespaces and attributes).
So, if we now use
JettisonDriver with the previous write example:
public class JSONWrite { public static void main(String[] args) { Product product = new Product("Banana", "123", 23.00); XStream xstream = new XStream(new JettisonDriver()); String result = xstream.toXML(product); System.out.println(result); } }
We’ll get the similar result:
{"org.sensatic.jqr.json.Product":{"name":"Banana","id":"123","price":"23.0"}}
But read operation now works as well. So the following code:
public class JSONRead { public static void main(String[] args) { String json = "{\"org.sensatic.jqr.json.Product\": {" + "\"name\": \"Banana\"," + "\"id\": \"123\"," + "\"price\": \"23.0\"" + "}}"; XStream xstream = new XStream(new JettisonDriver()); Product product = (Product)xstream.fromXML(json); System.out.println(product); } }
prints:
[Banana, 123, 23.0]
The real point of processing JSON with XStream is that now we can use all features already built-in for processing XML documents. For example we can add alias for the
Product class. In order to do that we can put the following annotation in front of the
Product class:
@XStreamAlias("product")
and modify our write example a bit:
public class JSONWrite { public static void main(String[] args) { Product product = new Product("Banana", "123", 23.00); XStream xstream = new XStream(new JettisonDriver()); Annotations.configureAliases(xstream, Product.class); String result = xstream.toXML(product); System.out.println(result); } }
and we’ll get a more compact JSON as a result:
{"product":{"name":"Banana","id":"123","price":"23.0"}}
You can grab the source of this driver from this:
svn co json
Subversion repository. If you are interested only in binary version, you can download it from here:
For more complex examples you can look at unit tests. It also contains a driver based on BadgerFish Stax implementation (part of Jettison). It has the same problem with nested arrays and since I didn’t need it at the moment I left it as it is.
It currently includes patched Jettison source, but I’ll see to forward those changes where they belong and include Jettison as a dependency in the future. Frankly, this driver shouldn’t be a standalone code but integral part of some of those two projects. I’ll write here if anything changes about that, but if you need something like this today, you can take it from above locations.
Try using JsOrb. It has support for both XML and JS streaming, and allows direct access to Java interfaces from JavaScript, using normal Java semantics. It generates JS classes for your POJOs and proxies for your remote interfaces. A few tags in your JSP is all you need to get started.
One thing I've been wondering is why everyone talks so much about JSON as opposed to simply JavaScript? JSON is a subset of JavaScript...but you can just as easily send JavaScript across the wire and have that evaluated at the other end.?
A web service is often useful to more than just an end user with a browser. It's also usable by other web services, both inside the organization itself and by outside organizations. JSON is a compact data format for them as well.
Hiya, I'm the Jettison author - sorry to hear that you had problems with it! Would love to integrate your patches though!! Please contact the mailing list or file a JIRA issue so we can get them in for our next release. Thanks!
Hi Dan,
I wouldn't call them problems, rather minor issues and enhancements. I plan to sort out changes that I've made, make some test cases and submit it all to Jira. I hope I'll have it all finished during next week. Thanks for the great library ...
When i tried using this approach i got java.lang.UnsupportedClassVersionError: org/sensatic/jqr/jason/JettisonDriver . I just got the snapshot jar
Hi Mike,
I downloaded jqr-json-1.0-SNAPSHOT.jar and found a problem. I was trying to serialized the Query class from Lucene and serialization broke with json while it is good with xml:.
Hi John,
Hi Dejan:
Any updates with the problem I posted last month?
-John
John, unfortunately I didn't find time to deal with it. I think it's best to post the problem in Jettison's Jira. I've provided patches that I had to them, so I'm looking to modify this driver to work with official Jettison release (when I find time to do that).
When can I use Java and JSON. In which kind of situations?
I ran into an issue where Jettison represents all numeric values as strings, I wrote up a quick patch to the JettisonMappedXmlDriver. I tried posting on the codehaus Xstream and Jettison mailing lists but for some reason no matter what I do I can't subscribe any email addresses to those lists.
Hi Doug,
I submitted a bug report and a patch with unit tests to both XStream and Jettison.
Jettison - JSON JettisonMappedXmlDriver quoting numeric values breaking JSON 2 Java deserialization Issue
XStream - JSON JettisonMappedXmlDriver quoting numeric values breaking JSON 2 Java deserialization Issue | http://www.oreillynet.com/onjava/blog/2007/01/java_and_json.html | crawl-002 | refinedweb | 1,295 | 56.96 |
#include <deal.II/base/patterns.h>
Test for the string being a
double. double precision number is allowed.
Giving bounds may be useful if for example a value can only be positive and less than a reasonable upper bound (for example damping parameters are frequently only reasonable if between zero and one), or in many other cases.
Definition at line 292 double precision numbers is implied. The default values are chosen such that no bounds are enforced on parameters.
Definition at line 352 of file patterns.cc.
Return
true if the string is a number and its value is within the specified range.
Implements Patterns::PatternBase.
Definition at line 360 of file patterns.cc.
Return a description of the pattern that valid strings are expected to match. If bounds were specified to the constructor, then include them into this description.
Implements Patterns::PatternBase.
Definition at line 383 of file patterns.cc.
Return a copy of the present object, which is newly allocated on the heap. Ownership of that object is transferred to the caller of this function.
Implements Patterns::PatternBase.
Definition at line 483 of file patterns.cc.
Creates a new object on the heap using
new if the given
description is a valid format (for example created by calling description() on an existing object), or
nullptr otherwise. Ownership of the returned object is transferred to the caller of this function, which should be freed using
delete.
Definition at line 491 double value used as default value, taken from
std::numeric_limits.
Definition at line 299 of file patterns.h.
Maximal double value used as default value, taken from
std::numeric_limits.
Definition at line 305 of file patterns.h.
Value of the lower bound. A number that satisfies the match operation of this class must be equal to this value or larger, if the bounds of the interval form a valid range.
Definition at line 357 of file patterns.h.
Value of the upper bound. A number that satisfies the match operation of this class must be equal to this value or less, if the bounds of the interval form a valid range.
Definition at line 365 of file patterns.h.
Initial part of description
Definition at line 370 of file patterns.h. | https://dealii.org/developer/doxygen/deal.II/classPatterns_1_1Double.html | CC-MAIN-2021-25 | refinedweb | 372 | 58.18 |
I am working on a pilot for a system that will eventually loop over a number of raster files to produce a single output one. If I have two input rasters, the routine works reliably. If I have more than three, it always fails. If I try three after a failure, it fails, but it can be made to work by first resorting to just two inputs and then adding the third. This is my (anonymised) code:
import arcpy
from arcpy.sa import *
arcpy.CheckOutExtension("Spatial")
arcpy.env.extent = "MAXOF"
arcpy.env.overwriteOutput = True
arcpy.env.qualifiedFieldNames = False
arcpy.env.workspace = "C:\aaaa"
inRasDir = bbb\\"
outRasDir = ccc\\"
terms = {}
terms = {
'first' : ['0.025', '*', '+', ],
'second' : ['0.025', '*', '+', ],
'third' : ['0.025', '*', '+', ],
'fourth' : ['0.15', '*', '-']
}
def coeff(lyr):
if terms[lyr][2] == '+':
c = float(terms[lyr][0])
else:
c = -float(terms[lyr][0])
return c;
outRas = ((
(coeff('first') * Raster(inRasDir + "first"))
+ (coeff('second') * Raster(inRasDir + "second"))
+ (coeff('third') * Raster(inRasDir + "third"))
# + (coeff('fourth') * Raster(inRasDir + "fourth"))
)
# * 'fifth'
)
outRas.save(outRasDir + "finalRas")
The error on the last line is:
outRas.save(outRasDir + "finalRas")
RuntimeError: ERROR 999998: Unexpected Error.
I also tried using CellStatistics but that also failed on the save statement.
Any comments would be gratefully received.
Regards,
Ian
Yes, I was beginning to think memory management might be a problem. I have taken your advice to simplify my test system. It is now just a simple loop summing all the rasters. The erratic behaviour appears to have been resolved by using a 64-bit version of Python. | https://community.esri.com/thread/216575-arcpysa-save-random-error-999998-unexpected-error | CC-MAIN-2019-13 | refinedweb | 253 | 61.53 |
So we're stress testing an ASP.NET application developed by an external company. We're doing roughly 50 requests per second, and after about half an hour each of the 48 worker process (w3wp.exe) is up to about 400 MB and counting. Running on IIS7.
Now after meddling with dotTrace, I'm fairly certain there is a memory leak here, but it's hard to be 100% certain without knowing the application. When we confronted the developers with this theory they dismissed it after 30 seconds saying something like "memory is handled automatically in .NET". Surely .NET would have garbage collected 48x~400MB if it was finalized data?
Anyway, I'm used to working with WinForms. How exactly do one create memory leaks in ASP.NET? Is the ONLY way by (mis)using the Application and Session objects in .NET?
edit
I have since posting this question learned that holding static references to objects in the request handlers (be it a web service class, web form or whatever) will cause "leaks". Probably not the correct term here, but anyway... This I was unsure of, because I thought the handler classes were killed and recreated after each request by IIS.
So code like this:
public class SomeService : IService
{
public static List<RequestData> _requestDataHistory = new List<RequestData>();
public void SomeRequest(RequestData data)
{
_requestDataHistory.Add(data);
}
}
WILL crash your server sooner or later. Maybe this was obvious to most people, but not to me :-)
Still unsure whether or not this would be a problem if you removed the static keyword though. Are instances disposed after each request?
The developers might be right. In the .Net world, garbage collection and freeing memory happens when needed. I.e. if, there's plenty of memory available to the application, it may consume more and more, until the operating system does not allow more allocation. You shouldn't worry about that, unless it really causes problems.
Of course, there may be memory leaks, if the application does not properly dispose unmanaged resources, like sockets, file handlers, etc. Look at the operating system objects (in task manager, you can enable handles and user objects columns) to see how they grow.
As you stated, Application or Session object misusing can also be a cause.
I'm wondering why you would have 48 application pools (worker processes). This is overkill, and you do not need that at all.
The GC manages memory per process, and 400MB per process is not that much at all. Reduce the number of app pools to the nr. of cores - 1, and then stress test the application. If then it grows too much, you may be concerned about memory leaks.
Based on your additional information, yes, in that case the history list will grow indefinitely. Static objects are created once per application domain, and live until the appdomain lives.
I do not advise you to arbitrary remove the static keyword, unless you know that you do not break some application logic. Investigate why that data is collected anyway, and what it is used for. The code has another problem as well - it's not thread safe, it's behavior is undefined when 2 requests came in at the same time, and decide to add data to that list.
Better move your question to stackoverflow.com, as it's a programming question, and not administrative one.
If you are not in control off that code, and really want to just solve your memory problem, you can set your apppools to recycle after X number of requests, or after they get more than Y amount of memory - but again, that's not a real solution.
Memory Leaks, in the general sense, is the negative/unanticipated results of coding logic which is not releasing its unused (or no longer used) resources correctly (or at all) while still reserving additional resources for new/continuing use/processing requirements. The overall resource "footprint" then continues to grow, even through the actual requirements may not necessarily be increasing if the proper "clean-up" of correctly utilizing and releasing resources is in place.
In short, it is not limited to logical programming "abuse" where it can be from incorrect assumptions of resource management or just the lack of it.
By posting your answer, you agree to the privacy policy and terms of service.
asked
3 years ago
viewed
3065 times
active
1 year ago | http://serverfault.com/questions/302510/asp-net-app-eating-memory-application-session-objects-the-reason | CC-MAIN-2015-14 | refinedweb | 733 | 55.24 |
I keep getting the wrong answer...I have a homework problem I have been trying to understand but have resulted in a headached.
Using the code shown below, select the correct output for an input of -1:
Input Number If Number < 0 Then Write “1” Else If Number ==0 Then Write “2” Else Write “3” End If End If
Here is my code
#include <iostream> using namespace std; int main () { Input Number = -1 if Number < 0 Then Write “1” Else if Number ==0 Then Write “2” Else Write “3” End if End if
This C++ is all new to me but I keep getting 1 2 3 4 5 6 as my answer. The answer should be one of the following: 1 2 3 or -1
Any help would be appreciated! | https://www.daniweb.com/programming/software-development/threads/325793/help-finding-ouput | CC-MAIN-2017-34 | refinedweb | 131 | 63.87 |
Config::XrmDatabase This is a Pure Perl implementation of the X Window Resource Manager Database (XrmDB). It allows creation and manipulation of Xrm compliant databases. Warning! The XrmDB refers to names and resources. These days they are more typically called keys and values. The terminology used below (and sometimes in the names of subroutines and methods) mixes these two approaches, sometimes a bit too liberally. Subroutine and method names will probably change to make things more consistent. Why another configuration database? The XrmDB differs from typical key-value stores by allowing stored keys to be either fully or partially qualified. For example, the partially qualified key *.c.d will match a query for the keys "a.c.d", "a.b.c.d". Keys are composed of multiple components separated by the "." character. If the component is "?" it will match any single component; a component of "*" matches any number (including none). Matching Matching a search key against the database is a component by component operation, starting with the leftmost component. The component in the search key is checked against the same level component in the database keys. First the keys with non-wildcard components are compared; if there is an exact match, the search moves on to the next component in the matching database key. At this point, XrmDB adds another dimension to the search. Keys belong to a *class*, which has the same number of components as the key. When an exact match against the search key component is not found, the database is searched for an exact match for the same level component in the class. Only after that fails does the algorithm switch to database keys with wildcard components. The same order of comparison is performed; first against the component in the search key, and if that fails, to the component in the class. For example, given a search key of xmh.toc.messagefunctions.incorporate.activeForeground' with a class of Xmh.Paned.Box.Command.Foreground the database is first searched for keys which begin with "xmh". If that fails, the database is searched for keys which begin with "Xmh". If that fails, keys which start with a "?" wildcard are searched, and then those which start with "*". The "*" components can match an arbitrary number of components in the search key and class. If a match is found, the search moves on to the next unmatched component and the algorithm is repeated. Classes Why the extra "class"? Assigning keys to a class provides an ability to distinguish between two similarly structured keys. It essentially creates namespaces for keys so that values can be created based on which namespace a key belongs to, rather than the content of the key. Let's say that you have a bunch of keys which end in "Foreground": a.b.c.Foreground d.e.f.Foreground x.y.z.Foreground and you want to set a value for any keys which end in "Foreground": *.Foreground : 'yellow' To specify a separate value for each one could set a.b.c.Foreground : 'red' d.e.f.Foreground : 'blue' x.y.z.Foreground : 'green' Let's say that "a.b.c.Foreground" and "d.e.f.Foreground" are in the same class, "U.V.W.Foreground", and all keys in that class should have the same value: U.V.W.Foreground : 'red' x.y.z.Foreground : 'green' At some point, a new hierarchy of keys that begin with "g" is added to that class, but they should has a different value: g.V.W.Foreground : 'magenta' You could try this: g.?.?.Foreground : 'magenta' But that would affect *all* keys that begin with "g" but aren't in that class. Classes help bring some order, but this system can become very confusing if some discipline isn't maintained. Build.PL ./Build ./Build test ./Build install COPYRIGHT AND LICENSE This software is Copyright (c) 2021 by Smithsonian Astrophysical Observatory. This is free software, licensed under: The GNU General Public License, Version 3, June 2007 | https://web-stage.metacpan.org/release/DJERIUS/Config-XrmDatabase-0.07/source/README | CC-MAIN-2021-49 | refinedweb | 664 | 67.15 |
So far we’ve looked at collections that provide very basic data storage, essentially abstractions over an array. In this section, we’re going to look at what happens when we add a few very basic behaviors that entirely change the utility of the collections.
Stack
A stack is a collection that returns objects to the caller in a Last-In-First-Out (LIFO) pattern. What this means is that the last object added to the collection will be the first object returned.
Stacks differ from list and array-like collections. They cannot be indexed directly, objects are added and removed using different methods, and their contents are more opaque than lists and arrays. What I mean by this is that while a list-based collection provides a Contains method, a stack does not. Additionally, a stack is not enumerable. To understand why this is, let’s look at what a stack is and how the usage of a stack drives these differences.
One of the most common analogies for a stack is the restaurant plate stack. This is a simple spring-loaded device onto which clean plates are stacked. The spring ensures that regardless of how many plates are in the stack, the top plate can be easily accessed. Clean plates are added to the top of the stack, and when a customer removes a plate, he or she is removing the top-most plate (the most recently added plate).
We start with an empty plate rack.
And then we add a red, a blue, and a green plate to the rack in that order.
The key point to understand here is that as new plates are added, they are added to the top of the stack. If a customer retrieves a plate, he or she will get the most recently added plate (the green plate). The next customer would get the blue plate, and finally the red plate would be removed.
Now that we understand how a stack works, let’s define a few new terms. When an item is added to the stack, it is “pushed” on using the
Push method. When an item is removed from the stack, it is “popped” off using the
Pop method. The top item in the stack, the most recently added, can be “peeked” at using the
Peek method. Peeking allows you to view the item without removing it from the stack (just like the customer at the plate rack would be able to see the color of the top plate). With these terms in mind, let’s look at the implementation of a
Stack class.
Class Definition
The
Stack class defines
Push,
Pop, and
Peek methods, a
Count property, and uses the
LinkedList<T> class to store the values contained in the stack.
public class Stack { LinkedList _items = new LinkedList(); public void Push(T value) { throw new NotImplementedException(); } public T Pop() { throw new NotImplementedException(); } public T Peek() { throw new NotImplementedException(); } public int Count { get; } }
Push
Since we’re using a linked list as our backing store, all we need to do is add the new item to the end of the list.
public void Push(T value) { _items.AddLast(value); }
Pop
Push adds items to the back of the list, so we will “pop” them from the back. If the list is empty, an exception is thrown.
public T Pop() { if (_items.Count == 0) { throw new InvalidOperationException("The stack is empty"); } T result = _items.Tail.Value; _items.RemoveLast(); return result; }
Peek
public T Peek() { if (_items.Count == 0) { throw new InvalidOperationException("The stack is empty"); } return _items.Tail.Value; }
Count
Since the stack is supposed to be an opaque data structure, why do we have a
Count property? Knowing whether a stack is empty (
Count == 0) is very useful, especially since
Pop throws an exception when the stack is empty.
public int Count { get { return _items.Count; } }
Example: RPN Calculator
The classic stack example is the Reverse Polish Notation (RPN) calculator.
RPN syntax is quite simple. It uses:
<operand> <operand> <operator>
rather than the traditional:
<operand> <operator> <operand>.
In other words, instead of saying “4 + 2,” we would say “4 2 +.” If you want to understand the historical significance of RPN syntax, I encourage you to head to Wikipedia or your favorite search engine.
The way RPN is evaluated, and the reason that a stack is so useful when implementing an RPN calculator, can be seen in the following algorithm:
for each input value if the value is an integer push the value on to the operand stack else if the value is an operator pop the left and right values from the stack evaluate the operator push the result on to the stack pop answer from stack.
So given the input string “4 2 +,” the operations would be:
push (4) push (2) push (pop() + pop())
Now the stack contains a single value: six (the answer).
The following is a complete implementation of a simple calculator that reads an equation (for example, “4 2 +”) from console input, splits the input at every space ([“4”, “2”, and “+”]), and performs the RPN algorithm on the input. The loop continues until the input is the word “quit”.
void RpnLoop() { while (true) { Console.Write("> "); string input = Console.ReadLine(); if (input.Trim().ToLower() == "quit") { break; } // The stack of integers not yet operated on. Stack values = new Stack(); foreach (string token in input.Split(new char[] { ' ' })) { // If the value is an integer... int value; if (int.TryParse(token, out value)) { // ... push it to the stack. values.Push(value); } else { // Otherwise evaluate the expression... int rhs = values.Pop(); int lhs = values.Pop(); // ... and pop the result back to the stack. switch (token) { case "+": values.Push(lhs + rhs); break; case "-": values.Push(lhs - rhs); break; case "*": values.Push(lhs * rhs); break; case "/": values.Push(lhs / rhs); break; case "%": values.Push(lhs % rhs); break; default: throw new ArgumentException( string.Format("Unrecognized token: {0}", token)); } } } // The last item on the stack is the result. Console.WriteLine(values.Pop()); } }
Queue
Queues are very similar to stacks—they provide an opaque collection from which objects can be added (enqueued) or removed (dequeued) in a manner that adds value over a list-based collection.
Queues are a First-In-First-Out (FIFO) collection. This means that items are removed from the queue in the same order that they were added. You can think of a queue like a line at a store checkout counter—people enter the line and are serviced in the order they arrive.
Queues are commonly used in applications to provide a buffer to add items for future processing or to provide orderly access to a shared resource. For example, if a database is capable of handling only one connection, a queue might be used to allow threads to wait their turn (in order) to access the database.
Class Definition
The
Queue, like the
Stack, is backed by a
LinkedList. Additionally, it provides the methods
Enqueue (to add items),
Dequeue (to remove items),
Peek, and
Count. Like
Stack, it will not be treated as a general purpose collection, meaning it will not implement
ICollection<T>.
public class Queue { LinkedList _items = new LinkedList(); public void Enqueue(T value) { throw new NotImplementedException(); } public T Dequeue() { throw new NotImplementedException(); } public T Peek() { throw new NotImplementedException(); } public int Count { get; } }
Enqueue
This implementation adds the item to the start of the linked list. The item could just as easily be added to the end of the list. All that really matters is that items are enqueued to one end of the list and dequeued from the other (FIFO). Notice that this is the opposite of the Stack class where items are added and removed from the same end (LIFO).
Public void Enqueue(T value) { _items.AddFirst(value); }
Dequeue
Since
Enqueue added the item to the start of the list,
Dequeue must remove the item at the end of the list. If the queue contains no items, an exception is thrown.
public T Dequeue() { if (_items.Count == 0) { throw new InvalidOperationException("The queue is empty"); } T last = _items.Tail.Value; _items.RemoveLast(); return last; }
Peek
public T Peek() { if (_items.Count == 0) { throw new InvalidOperationException("The queue is empty"); } return _items.Tail.Value; }
Count
public int Count { get { return _items.Count; } }
Deque (Double-Ended Queue)
A double-ended queue, or deque, extends the queue behavior by allowing items to be added or removed from both sides of the queue. This new behavior is useful in several problem domains, specifically task and thread scheduling. It is also generally useful for implementing other data structures. We’ll see an example of using a deque to implement another data structure later.
Class Definition
The
Deque class is backed by a doubly linked list. This allows us to add and remove items from the front or back of the list and access the
First and
Last properties. The main changes between the Queue class and the Deque class are that the
Enqueue,
Dequeue, and
Peek methods have been doubled into
First and
Last variants.
public class Deque { LinkedList _items = new LinkedList(); public void EnqueueFirst(T value) { throw new NotImplementedException(); } public void EnqueueLast(T value) { throw new NotImplementedException(); } public T DequeueFirst() { throw new NotImplementedException(); } public T DequeueLast() { throw new NotImplementedException(); } public T PeekFirst() { throw new NotImplementedException(); } public T PeekLast() { throw new NotImplementedException(); } public int Count { get; } }
Enqueue
EnqueueFirst
public void EnqueueFirst(T value) { _items.AddFirst(value); }
EnqueueLast
public void EnqueueLast(T value) { _items.AddLast(value); }
Dequeue
DequeueFirst
public T DequeueFirst() { if (_items.Count == 0) { throw new InvalidOperationException("DequeueFirst called when deque is empty"); } T temp = _items.Head.Value; _items.RemoveFirst(); return temp; }
DequeueLast
public T DequeueLast() { if (_items.Count == 0) { throw new InvalidOperationException("DequeueLast called when deque is empty"); } T temp = _items.Tail.Value; _items.RemoveLast(); return temp; }
PeekFirst
public T PeekFirst() { if (_items.Count == 0) { throw new InvalidOperationException("PeekFirst called when deque is empty"); } return _items.Head.Value; }
PeekLast
public T PeekLast() { if (_items.Count == 0) { throw new InvalidOperationException("PeekLast called when deque is empty"); } return _items.Tail.Value; }
Count
public int Count { get { return _items.Count; } }
Example: Implementing a Stack
Deques are often used to implement other data structures.
We’ve seen a stack implemented using a
LinkedList, so now let’s look at one implemented using a
Deque.
You might wonder why I would choose to implement a
Stack using a
Deque rather than a
LinkedList. The reason is one of performance and code reusability. A linked list has the cost of per-node overhead and reduced data locality—the items are allocated in the heap and the memory locations may not be near each other, causing a larger number of cache misses and page faults at the CPU and memory hardware levels. A better performing implementation of a queue might use an array as the backing store rather than a list. This would allow for less per-node overhead and could improve performance by addressing some locality issues.
Implementing a
Stack or
Queue as an array is a more complex implementation, however. By implementing the
Deque in this more complex manner and using it as the basis for other data structures, we can realize the performance benefits for all structures while only having to write the code once. This accelerates development time and reduces maintenance costs.
We will look at an example of a
Deque as an array later in this section, but first let’s look at an example of a
Stack implemented using a
Deque.
public class Stack { Deque _items = new Deque(); public void Push(T value) { _items.EnqueueFirst(value); } public T Pop() { return _items.DequeueFirst(); } public T Peek() { return _items.PeekFirst(); } public int Count { get { return _items.Count; } } }
Notice that all of the error checking is now deferred to the
Deque and any optimization or bug fix made to the
Deque will automatically apply to the
Stack class. Implementing a
Queue is just as easy and as such is left as an exercise to the reader.
Array Backing Store
As mentioned previously, there are benefits to using an array rather than a linked list as the backing store for the
Deque<int> (a deque of integers). Conceptually this seems simple, but there are actually several issues that need to be addressed for this to work.
Let’s look at some of these issues graphically and then see how we might deal with them. Along the way, keep in mind the growth policy issues discussed in the ArrayList section and that those same issues apply here.
When the collection is created, it is a 0-length array. Let’s look at how some actions affect the internal array. As we go through this, notice that the green “h” and red “t” in the figures refer to “head” and “tail,” respectively. The head and tail are the array indexes that indicate the first and last items in the queue. As we add and remove items, the interaction between head and tail will become clearer.
Deque deq = new Deque(); deq.EnqueueFirst(1);
deq.EnqueueLast(2);
deq.EnqueueFirst(0);
Notice what has happened at this point. The head index has wrapped around to the end of the array. Now the first item in the deque, what would be returned by
DequeueFirst, is the value at array index three (zero).
deq.EnqueueLast(3);
At this point, the array is filled. When another item is added, the following will occur:
- The growth policy will define the size of the new array.
- The items will be copied from head to tail into the new array.
- The new item will be added.
EnqueueFirst- The item is added at index zero (the copy operation leaves this open).
EnqueueLast- The item is added to the end of the array.
deq.EnqueueLast(4);
Now let’s see what happens as items are removed from the Deque.
deq.DequeueFirst();
deq.DequeueLast();
The critical point to note is that regardless of the capacity of the internal array, the logical contents of the Deque are the items from the head index to the tail index, taking into account the need to wrap around at the end of the array. An array that provides the behavior of wrapping around from the head to the tail is often known as a circular buffer.
With this understanding of how the array logic works, let’s dive right into the code.
Class Definition
The array-based Deque methods and properties are the same as the list-based, so they will not be repeated here. However, the list has been replaced with an array and there are now three properties to contain the size, head, and tail information.
public class Deque { T[] _items = new T[0]; // The number of items in the queue. int _size = 0; // The index of the first (oldest) item in the queue. int _head = 0; // The index of the last (newest) item in the queue. int _tail = -1; ... }
Enqueue
Growth Policy
When the internal array needs to grow, the algorithm to increase the size of the array, copy the array contents, and update the internal index values needs to run. The
Enqueue method performs that operation and is called by both
EnqueueFirst and
EnqueueLast. The
startingIndex parameter is used to determine whether to leave the array slot at index zero open (in the case of
EnqueueFirst).
Pay specific attention to how the data is unwrapped in cases where the walk from head to tail requires going around the end of the array back to zero.
private void allocateNewArray(int startingIndex) { int newLength = (_size == 0) ? 4 : _size * 2; T[] newArray = new T[newLength]; if (_size > 0) { int targetIndex = startingIndex; // Copy the contents... // If the array has no wrapping, just copy the valid range. // Else, copy from head to end of the array and then from 0 to the tail. // If tail is less than head, we've wrapped. if (_tail < _head) { // Copy the _items[head].._items[end] -> newArray[0]..newArray[N]. for (int index = _head; index < _items.Length; index++) { newArray[targetIndex] = _items[index]; targetIndex++; } // Copy _items[0].._items[tail] -> newArray[N+1].. for (int index = 0; index <= _tail; index++) { newArray[targetIndex] = _items[index]; targetIndex++; } } else { // Copy the _items[head].._items[tail] -> newArray[0]..newArray[N] for (int index = _head; index <= _tail; index++) { newArray[targetIndex] = _items[index]; targetIndex++; } } _head = startingIndex; _tail = targetIndex - 1; // Compensate for the extra bump. } else { // Nothing in the array. _head = 0; _tail = -1; } _items = newArray; }
EnqueueFirst
public void EnqueueFirst(T item) { // If the array needs to grow. if (_items.Length == _size) { allocateNewArray(1); } // Since we know the array isn't full and _head is greater than 0, // we know the slot in front of head is open. if (_head > 0) { _head--; } else { // Otherwise we need to wrap around to the end of the array. _head = _items.Length - 1; } _items[_head] = item; _size++; }
EnqueueLast
public void EnqueueLast(T item) { // If the array needs to grow. if (_items.Length == _size) { allocateNewArray(0); } // Now we have a properly sized array and can focus on wrapping issues. // If _tail is at the end of the array we need to wrap around. if (_tail == _items.Length - 1) { _tail = 0; } else { _tail++; } _items[_tail] = item; _size++; }
Dequeue
DequeueFirst
public T DequeueFirst() { if (_size == 0) { throw new InvalidOperationException("The deque is empty"); } T value = _items[_head]; if (_head == _items.Length - 1) { // If the head is at the last index in the array, wrap it around. _head = 0; } else { // Move to the next slot. _head++; } _size--; return value; }
DequeueLast
public T DequeueLast() { if (_size == 0) { throw new InvalidOperationException("The deque is empty"); } T value = _items[_tail]; if (_tail == 0) { // If the tail is at the first index in the array, wrap it around. _tail = _items.Length - 1; } else { // Move to the previous slot. _tail--; } _size--; return value; }
PeekFirst
public T PeekFirst() { if (_size == 0) { throw new InvalidOperationException("The deque is empty"); } return _items[_head]; }
PeekLast
public T PeekLast() { if (_size == 0) { throw new InvalidOperationException("The deque is empty"); } return _items[_tail]; }
Count
public int Count { get { return _size; } }
Next UpThis completes the fourth part about stacks and queues. Next up, we'll move on to the binary search tree.
Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
Envato Market has a range of items for sale to help get you started. | http://code.tutsplus.com/tutorials/stacks-and-queues--cms-20664 | CC-MAIN-2016-22 | refinedweb | 3,049 | 63.9 |
Wicket is around for a while, but lately it is getting more and more attention. A few years ago I attended a presentation about Wicket. It looked like a nice framework, but at that time I didn’t see much differences with Tapestry and put it on my list of nice frameworks. A few weeks ago a colleague told me some site was comparing Wicket with GWT. We both found that a bit strange, so I took a look at Wicket to see what became of it. It seems like Wicket can create Ajax-things for you without writing any Javascript (just like GWT, but that’s probably one of the few similarities). That’s what this blog is about. It gives a short introduction to Wicket and shows you how to create your first Ajax-call. This isn’t a getting started guide, there are many guides around (which can be found in the conclusion), I just focus on the Ajax-functionality here.
Wicket is a web framework that is a little bit different than most web frameworks we know. It doesn’t use jsp, but instead you can annotate html tags with a wicket attribute. The wicket attributes are referenced from within java code and the annotated tags become java objects.
The easiest way to get started with Wicket is creating a Wicket archetype with Maven 2. On the Wicket site you can generate a command to create a Maven 2 project.
You can set the groupId and artifactId and after that you have to pick a version. All you have to do now is copy the command and run it in a command line window (Maven 2 needs to be installed). This is the first time I’ve seen this and I hope other frameworks/libraries will copy this, it’s always a bit of a hustle to get started with a new framework and now it only takes a few seconds!
The final step is creating the files for your IDE ( scroll down to the bottom) and open your IDE to browse the generated project.
The first thing that catches the eye is that there is no servlet for Wicket inside the web.xml, most web frameworks use a servlet, Wicket only needs a filter. The filter points to a class that is the heart of your application, in this case WicketApplication.
WicketApplication is a class that extends the WebApplication class. Inside the WicketApplication you have to override the getHomePage method. This is the starting point of your application and determines the first page the user will see.
With the archetype you get a page called HomePage. The page consists of a .java and .html file. The java file is the class that is returned inside the getHomePage method in WicketApplication. The .html is just an ordinary .html file with some the earlier mentioned wicket attributes. The HomePage page has a wicket:id=â€message†on a span. This message is referenced inside HomePage.java:
add(new Label("message", "If you see this message wicket is properly configured and running"));
The first argument of the Label object is the id of the element (in the .html), the second argument is the value of Label as we will see it in the browser. Of course this text is displayed when we compile and deploy the application. Open the root of the web application (in my case) to see the result.
Now add comment slashes to skip the call to the add method in HomePage.java. After recompiling we get a very useful error message (and I’m not being ironic, I really like it when developers pay attention to their error messages):
WicketMessage: Unable to find component with id ‘message’ in [Page class = nl.amis.HomePage, id = 0, version = 0]. This means that you declared wicket:id=message in your markup, but that you either did not add the component to your page at all, or that the hierarchy does not match.
[markup = file:/D:/Documents/wicket-test/target/wicket-test/WEB-INF/classes/nl/amis/HomePage.html
Ok let’s put the add back and add Ajax to the page.
Adding ajax behaviour to your pages
And now the reason why I’m writing this blog, ajax behaviour without one line of Javascript!
On my page I want to display the status of the system (this is a String that lives in a static class). The initial status says “the application is started†and after a click on a link the status must say: “the application is still runningâ€, without refreshing the page of course!
Add a link to your HomePage.html: <a href="#" wicket:click me</a>
The java file needs some changes, in fact so many that I just paste my new class and explain what I’ve done:
public class HomePage extends WebPage {
private static final long serialVersionUID = 1L;
public HomePage(final PageParameters parameters) {
final Model model = new Model() {
public Object getObject() {
return StatusService.getStatus();
}
};
final Label label = new Label("message", model);
label.setOutputMarkupId(true);
add(label);
add(new AjaxFallbackLink("link") {
public void onClick(AjaxRequestTarget target) {
System.out.println("onclick");
StatusService.setStatus("Application is still running");
target.addComponent(label);
}
});
}
}
The link we just added to the .html needs to become an AjaxFallbackLink. A method must be invoked when the onclick event in Javascript is fired. With the AjaxFallbackLink there is an onclick method we can override. In this method the status is updated. When the status is updated the component must be added to the page again, with the addComponent method.
The Model object seems a bit strange, but in Wicket almost everything is an object, in this case the message of the Label is an Object and not a String.
When we load the page for the first time we see the initial state:
After clicking the link the status is updated:
Note that the page is not reloaded. Let’s analyze the ajax request:
<ajax-response>
<component id="message2">
<span wicket:Application is still running</span>
</component>
</ajax-response>
This response is very clear, the component with id message2 needs to be refreshed with a new text. The AJAX-request can be analyzed in Firebug (when you see no response, click with the middle mouse button on the url, there are some problems rendering xml in Firebug I guess). Another option is the “WICKET AJAX DEBUG†on the bottom right corner of your page. Because we’re running in developer mode we can analyze the ajax traffic, very neat!
Conclusion
It seems Wicket really has evolved since the first time I saw it. I really like the way they ‘do’ Ajax. I think I have some trouble getting used to using objects for everything (the Model object on the Label for example), but maybe it’s because I just started with Wicket. When you’re used to frameworks like Struts, Spring and Stripes you really have to take your time to get used to the component based structure of Wicket. It might be a bit difficult at first, but after a while you’ll see that it is very useful and the Wicket guys really thought about it.
The documentation and error messages are really good, they really helped me understand Wicket better.
If you want to start with Wicket I suggest reading the article on The Server Side and also don’t forget to check out the excellent samples on the Wicket page.
Sources
I need to enter substring of Name text box in another text box(shortname). I am unable to find component(shortname) using find component. Does any one know how this can be done?
Cool article! You can pass a full-on IModel to label or you can just pass a String for convenience. The difference is that the IModel implementation could update dynamically, while the fixed string is static. | https://technology.amis.nl/2008/03/29/wicket-it-can-do-ajax-without-writing-any-line-of-javascript/ | CC-MAIN-2015-22 | refinedweb | 1,316 | 62.48 |
This. New developers, those unfamiliar with the inner-workings of Rails, likely need a basic set of guidelines to secure fundamental aspects of their application. The intended purpose of this doc is to be that guide.. The Ruby Security Reviewer's Guide has a section on injection and there are a number of OWASP references for it, starting at the top: Command. OWASP has extensive information about SQL Injection..
If you must accept HTML content from users, consider a markup language for rich text in an application (Examples include: markdown and textile) and disallow HTML tags. This helps ensures that the input accepted doesn’t include HTML content that could be malicious. If you cannot restrict your users from entering HTML, consider implementing content security policy to disallow the execution of any javascript. And finally, consider using the #sanitize method that let's you whitelist allowed tags. Be careful, this method has been shown to be flawed numerous times and will never be a complete solution.>
OWASP provides more general information about XSS in a top level page: OWASP Cross Site Scripting.
There is an OWASP Session Management Cheat Sheet.
There is an OWASP Authentication Cheat Sheet..
More general information about this class of vulnerability is in the OWASP Top 10 Page..
There is a top level OWASP page for CSRF. most basic, but restrictive protection is to use the :only_path option. Setting this to true will essentially strip out any host information.
redirect_to params[:url], :only_path => true # this can be vulnerable to javascript://trusted.com/%0Aalert(0) so check .scheme and .port too
def validation_routine(host)
# Validation routine where we use \A and \z as anchors *not* ^ and $
# you could also check the host value against a whitelist
end
Also blind redirecting to user input parameter can lead to XSS. Example:
redirect_to params[:to][status]=200&to[protocol]=javascript:alert(0)//
There is a more general OWASP resource about Unvalidated Redirects and Forwards..
Any application in any technology can contain business logic errors that result in security bugs. Business logic bugs are difficult to impossible to detect using automated tools. The best ways to prevent business logic security bugs are to do code review, pair program and write unit tests. and.
Another area of tooling is the security testing tool Gauntlt which is built on cucumber and uses gherkin syntax to define attack files.
Launched in May 2013 and very similiar to brakeman scanner, the codesake-dawn rubygem is a static analyzer for security issues that work with Rails, Sinatra and Padrino web applications. Version 0.60 has more than 30 ruby specific cve security checks and future releases custom checks against Cross Site Scripting and SQL Injections will be added
Egor Homakov homakov [at] gmail.com | https://www.owasp.org/index.php?title=Ruby_on_Rails_Cheatsheet&direction=next&oldid=154539 | CC-MAIN-2017-34 | refinedweb | 458 | 54.42 |
I’ve just had a Sony XAV-5000 Head Unit installed into my teenagers 2006 Toyota Camry and it’s totally amazing!
The integration works great with steering controls, and whole Android Auto experience is fantastic.
However, it wasn’t always so.
If you’ve had the issue with:
… well you’re not alone. I had the same problems.
There’s tons of threads about how bad the experience has been for people. The primary reason for the bad reviews is not the unit itself - it’s the constant dropouts. It makes the unit unusable.
But there is good news in this thread which captures the solution towards the end of the thread:
TL;DR: Replacing the USB socket with one that has a *much* shorter cable appears to have resolved the issue for me.
But I have even better news. This worked for me by just using a new shorter, high-quality usb cable and attaching to the USB extension provided with the unit. I didn’t need to open the dash again - just buy a new shorter cable!
I went down to my local Officeworks and bought one of these 25cm cables:
Completely eliminated all dropouts and the unit is working perfectly. I’m connected to a Moto G7 Plus via USB-C, but the cables are also available in MicroUSB and Lightening. I’ve tried the MicroUSB on my son’s OPPO AX7 and, after we fixed the “Unrecognised USB device” error - which was related to how his Phone’s USB port was set to Midi rather than File Transfer) it works great too!
Phew! Hope it works out great for you! Once the drop-outs go away, this really is a fantastic unit. You’re going to love it.]]>
Well it’s been a long year since I’ve posted anything here.. but it’s not because I haven’t been writing. Between studying at Uni, and keeping a chatty (and super helpful) personal journal, I’ve written more words in the last year than any other time in my life.
But it’s just that I haven’t been writing publicly. And I feel like I want to do more of that in this season.
So here we are.
And let me start by talking about some of my private writing…
For the course of this year, each workday I’ve kept a “Developer Journal”. It’s really just a scrapbook of the things that I have worked on each day.
Sometimes it’s documenting weird Angular stuff, or something I’ve learned about PrimeNG. It can be something I learned about refactoring or something I’ve read or realised in leveling up my professional skillset.
But often it’s simply documenting how I solved some problem that I ran into (with links to interesting stuff I discovered when I was researching it). Kinda like a personal blog but without the overhead. And super easy to search in later when I know the error message, but not how I solved it :-)
I use Microsoft OneNote since I can update it from client sites using OneNote online, as well as on my phone and laptop and everything just syncs nicely.
It’s been tremendously helpful to me - like a personal knowledgebase. I remember having friends who’ve kept personal Wikis back in the day - this is my modern version of that. But it’s also been useful to remind me of all the cool stuff I get to work on - and how much progress I’ve made on the days where I simply feel like “I’ve just got nothing done today.”
Highly recommend you take the practice for a spin for a week or two to see if it’s valuable to you.
Here’s an extract of the kind of stuff I write down. Have fun with your experiments!
(BTW.. feels great to be blogging again. Forgotten how much joy it brings me!)
]]>]]>
As my Pluralsight PWA Sensor Course leaps into record mode, I’ve been looking for ways to add visual interest to my camera shots - but without the overhead of mega-battery-drain client-side image processing.
Sounds like a great reason to dive into CSS filters as a lightweight way to transform captured images on mobile. After a little googling, I ran into CSSgram - a CSS filters library for Instagram style filters! How cool.
Import a CSS file, and you’re ready to filter:
Implementing a given filter is super quick. Pick one of the many style names, and add it the parent element of your image, and you’re off to the races. They recommend doing that via the
figure tag, which I’ve done in the page above with:
<figure class="hudson"> <img src="[[imageData]]" on-</figure><figure class="inkwell"> <img src="[[imageData]]" on-</figure><figure class="kelvin"> <img src="[[imageData]]" on-</figure>
Definitely a cool way to add some pop to your PWA image capture stuff!
Have fun!]]>
Whoa! I’m recording video and audio in my PWA app now! This is so crazy..
Turns out there is a MediaStream Recording API, so I’m putting it to good use on my PWA Sensors course (it will appear here one day).
Dude! I’m actually recording a video note - with audio - in-browser on my SurfaceBook at my local Maccas McCafe!
There’s an awesome Mozilla post that gives you a good nuts and bolts of the API - which I found super helpful.
The MediaRecorder stuff really builds on the MediaStream stuff I was talking about last week.
You take an incoming stream, then feed it into a MediaRecorder. Call
mediaRecorder.start() to start recording - in my case I supply a “ms chunk size” with
.start(1000) so chunks of video will arrive in my callback every second. Once started, your
ondataavailable handler will start getting chunks of video. How cool!
When you’re done recording, invoke
mediaRecorder.stop() (which I do from a button click - not shown here).
Once you call
stop() on the MediaRecorder to stop the recording, your
onstop handler will get called back. You then take that nice selection of video chunks that we have accumulated in the array, and
Blob() it into an object that we can
URL-ize.
Finally, assign your Blob URL to a stock standard vanilla
<video> component and click play to playback your video!
// find my <video> element in the DOMconst videoPreview = this.$.videoNote;navigator.mediaDevices.getUserMedia(constraints) .then((stream) => { let mediaRecorder = new MediaRecorder(stream); // save for later so we can mediaRecorder.stop() from a button click this.set('mediaRecorder', mediaRecorder); // Start with number of ms per "chunk" mediaRecorder.start(1000); let chunks = []; mediaRecorder.ondataavailable = function(e) { chunks.push(e.data); } mediaRecorder.onstop = function(e) { var blob = new Blob(chunks, { 'type' : 'video/webm' }); videoPreview.src = URL.createObjectURL(blob); } }, (err) => { console.log('User rejected camera capture permissions', err); });}
Constantly amazed at how much you can do in a browser these days.. Such an exciting time for PWAs.
Time to finish this script and get this module out the door later in the week.
Have fun!]]>
I’m having a ball at the moment hacking around with the MediaDevices API in preparation for an upcoming Pluralsight Course on PWA Sensors (it will appear here one day). It’s just crazy what you can do in a browser these days!
One cool trick I’ve been playing with is populating a selectbox with a list of available cameras, then letting the user select the camera that they’d like to work with.
The API has an enumerateDevices() call which is handy for tracking down everything that’s available on your system. I iterate over the returned
MediaDeviceInfo objects since having their
label and
deviceId is handy for my next trick…
navigator.mediaDevices.enumerateDevices() .then(function (devices) { let cameras = []; let microphones = []; devices.forEach(function (device) { console.log(device.kind + ": " + device.label + " id = " + device.deviceId); if (device.kind === 'videoinput') { cameras.push(device); } else if (device.kind == 'audioinput') { microphones.push(device); } }); self.set('cameraDevices', cameras); self.set('microphoneDevices', microphones); }) .catch(function (err) { console.log(err.name + ": " + err.message); });}
Now I have that those list of MediaDeviceInfo, I can put them into a dropdown and we’re off to the races:
But what happens if you have more than one device? My Surfacebook has a front and rear camara, and so does my mobile handset. How do you tell the API which one to use?
The magic you need is the super-handy Constraints block. Pass this into your media acquisition, and can specify exactly which DeviceId you’d like:
let constraints = { video: { deviceId: selectedCamera.deviceId }, audio: false,};
That constraints object is all the secret sauce you need. Pass those constraints in your getUserMedia call and you’re in business:
navigator.mediaDevices.getUserMedia(constraints).then((stream) => { photoPreview.srcObject = stream; this.set('cameraActive', true);}, (err) => { console.log('User rejected camera capture permissions', err); this.set('captureRejected', true);});
Can’t stop winning! But right now I need to get back to scripting my course - still have some media recording stuff to churn through (super, super fun!). I have a DevEditor progress call tomorrow and I’m way behind! Eeek!
Will post some more media recording coolness here soon.]]>
If you’ve not heard Steven Pressfield talk about The Resistance, you need to spend three minutes and set that right.
Good. Now we’re on the same page..
I’m an avid daily journal person - I’ve been collecting daily posts for the last four years which I keep in OneNote. But this recent journal entry seems like something others may benefit from.
This post is about a bunch of my own internal resistance reasons that I use to procrastinate doing actual work on my next Pluralsight Course (which is on PWA sensors, and which I’m actually super pumped about)… and the rebutt to those lies so you can just get on with Turning Pro and actually doing the work.
Enjoy the journal!
There is massive resistance on working on my Pluralsight course today - even though I have set aside the day to do just that.
So what’s under the waterline of this iceberg of resistance? What’s stopping the daily progress?
Deadlines are creating fear and disappointment. I thought my accountability stuff - asking a friend and my editor to hold me to a deadline - would help me kick my own butt and make some progress, but the reverse has happened. It’s made me freeze. Scared to progress. Terrified of clicking record. Fearful I won’t ship anything in time and miss a great opportunity. BUT: that mindset takes hold when you forget: Habits make goals come to you!! Just fall in love with the daily grind and you’ll be done in no time. Just move the needle forward in some small way every day.
Resentment that I’m not making hay while the sun shines. I have this awesome consulting opportunity at the moment that I could be monetizing every single day . But I’m busy tinkering on this slow-as-molasses course progress which I resent as anti-business-wisdom. BUT: will I resent this down the track? Absolutely not - I know the work an hour/get paid an hour thing is dead. I only do it at all because it’s a good way of keeping my dev skills applied in a real business team context - and grow my programming chops by working with other developers. Pluralsight scales out!
Scared I don’t have the stuff. Ah, that old chestnut. That I “don’t know this area deep enough”. The truth is that I, like everyone, have many knowledge gaps - but I’m curious, and so will explore things with a beginners mind as I go along. My mind still plays the old, “there are better people out there in this area who will rubbish and ridicule this course”. BUT: of course, that’s all non-sense. Most real experts I know are gracious - and love to see others level up. And I’ll also be offering a ton of enthusiasm, a beginner’s mind, and a passion for levelling people up.
Tension about having the (super large) tracts of time I (pretend I) need. I feel like there are constant interruptions, so how will I get in the flow for this course to progress? How will I move this forward in little blocks of an hour here or there? BUT: the truth of the matter is that I make great progress in a single Pomodoro - so there’s no risk of this lie really being true. Thinking in smaller blocks of time is very helpful.
Zero self-compassion. No matter how much I track through, I’ll never feel like I’ve done enough, so why even start? Reality means other things do sometimes trump my courseware development. BUT: the other things that have trumped Pluralsight progress have been wonderful, and important, and it’s worth remembering my value is not measured by completed Pluralsight courses - I was created to enjoy life. Imagine if I treated myself the way that I would treat a friend who accomplished a similar amount of work in a day… I would give them a raise! So feel good about yourself, and crack on.
Losing my why. I forget that I want to become a world class trainer, mentor and coach. BUT: the way that you do that is through mindful practice. I sometime resent “putting in the practice” on my Pluralsight work - since it’s not making immediate money. But then I remember that it’s not, and was never, about the money anyway!! It was about making the world a cooler place - levelling up developers in entertaining ways - and growing in my own potential - and getting better at doing online training!
No next step. This one is actaully kinda true. I am a little light on the planning stage - and when you don’t have an immediate “next step” to move onto, you can spin the wheels working that out. BUT: I can grow in this area and I have great tools available to me in Kanbanflow or even Trello - so I’m going to develop my next steps as I go. Certainly having no plan doesn’t serve me - so I’m going to create a coarse grain plan and refine as you go. Live the agile dream!
Not making it fun. First, learn to practice! BUT: the first step there is to make it fun. Give yourself little rewards. Set micro-deadlines that are actually doable - and then hit them and celebrate. No more Herculean epic tasks - those “I’m not allowed to celebrate without doing 40 hours of progress in 6 hours of work” mindsets. That’s ridiculous. And it doesn’t serve you. So leave it behind.
No room for overflow. When I dive into Pluralsight, all my blogging/vlogging/whatever goes to zero. There’s no overflow to share. BUT: that’s crazy. I’m learning new things every time I sit down to work on the course, so I just need to work out a lightweight way to share those insights. No “massive tome” blog posts. Just punch out a few little sentences to encourage yourself. Or a quick vid to just demonstrate something cool. Or something from your journal. I’m going to commit to a Wednesday blog day.
Well, that’s all I can think of immediately this morning. So I figured I’d just write it down so I can see the madness in some of my mindsets.
I’m going to edge forward today. I’m going to break that resistance.
Let’s get started :-)]]>
These last couple of years I’ve been reflecting every day on a Vision Wall that I keep in OneNote. It’s just a page with a bunch of images that remind me of things I’m working to build into my life.
One of the these pictures this year is around innovation.
Innovate. That word used to conjure up “Elon Musk-ish” level innovation. But no more. I now tell a different story about innovation.
My new innovation story is more about friction reduction. And automation.
Imagine if you could remove a small percentage of the friction you experience every day delivering software. How great would that feel? Annoying tasks gone forever.
The crazy thing is that automation - using robots to do our boring work - lets us actually do that, but often we put it off since we figure we’re just doing a “one off”. Or it would be too fiddly or time consuming to automate.
Interestingly I used to work with a guy who was completely the other way around. He wouldn’t even copy a file to a floppy without creating a batch file to do it (since he figured he’d be doing it again real soon now).
I think he was onto something..
Now I’m diving into a messy refactor so I can finally get my test harnesses working smoothly.
Reducing friction AND deliving a higher quality product to the client.
You don’t need to choose.
Except automating the hard stuff. That is definitely the choice :-)]]>
Wow! We’re into 2018 already and I haven’t blogged for months and months. That’s such a shame because I’ve been doing so much cool stuff. So it’s time to get back on the horse!:
I am so pumped about it. It’s pretty scratchy at the moment, but it’ll get there:!]]>!
]]>]]>
That’s what I wrote in my daily journal.
But really it was more about device checking. Desktop, Handset, whatever.
Constant. Device. Checking.
And I was exhausted. I needed an intervention.…
The desktop filtering approach was working great. During work hours, it was a solid strategy.
But I realised that most of my drain was actually coming from after-hours.
So I wanted to go all in on blocking every dopamine thief on my phone handset..
Enter AppBlock. :-).]]>
The.
Enter ngrx.
If you haven’t bumped into ngrx before, it’s about applying the new hotness of Redux-style applications to Angular, but built on RxJS.
How does this ngrx stuff help my async plight?
And what’s been the result in my refactor to ngrx?..]]>
P..
I’m hoping to explore a few themes here over the next month:
That sounds like a bunch of cool stuff. Time to get cracking!]]>
The!
]]>]]>
I!]]>.]]>
Just!
]]>]]>
I).
npm installthe libraries you need
In my case, I’m working with Semantic UI. This framework both has CSS and JS component - and depends on jQuery for heavy lifting.
So I need npm install those libs first.
npm install jquery --savenpm install semantic-ui-css --save
Once those are installed, you should be able to spy them in the
./node_modules/ directory in the root of your project.
All good so far.!]]>
After a day of Observables on Day Two, the final day of the Masterclass was focused around forms and routing.
Here’s my typo-laden gitlog for the day to give you an idea of content pace and the type of exercises we worked on:
A good portion of the day was spent working with both Template-Driven and Reactive Forms in Angular - and getting a really good feel for where each of them shine.
I’ve never been much of a fan of Reactive Forms - they seemed like a separation of concerns that I didn’t want to separate - but I have a new respect for them after yesterday (I particularly love the idea of being able to unit test that validation magic in a vanilla unit test).
I now like both approaches - and look forward to road testing both in larger forms apps to give me a real feel for what has the lowest friction.
Here’s the vanity screencast of yesterday’s hacking efforts:
We also wrote our first custom validator - and used it in both Template and Reactive approaches. Validation code is another area where Reactive’s FormBuilder really shines - super straightforward and clear to read afterwards.
With our forms experiments behind us, it was on to master/detail child routing - which was really great to see in action (I’d read about it, but never had a reason to experiment with it). We had a top level of routing for our menu, then a master/detail child route looking after our Contacts List -> Detail Detail/Editor connection. Super great.
We finished the day exploring lazy loading of routes, then a little about AOT Compilation before I had to leave early to catch the last flight to Canberra.
So was it worth it? Travel costs, Accommodation, Course fees, Time off, etc?
Absolutely. And on so many fronts:
This was a five star training experience. Six stars if you can do it in Sydney ;-)
Many thanks to thoughtram for coming over!
Recommended.
]]>]]>
Wow. Things just got seriously master-level at the Masterclass! Day Two really did build on Day One, kicking off with the wonders of the Async pipe.
It was actually great to start out with Async Pipe, since it let us convert our little Contacts application to an Observable architecture and ditch all the dodgy
*ngIf code and null checking we’d crammed into to play nice with the async http calls.
Async was really the theme for the day, and the exercises came steadily throughout the day to make sure we were grabbing the concepts as they were introduced. There’s a lot to love about Observables, but it took the day to really appreciate what they bring to the table.
Here’s my typo-laden gitlog for the day to give you an idea of content pace and the type of exercises we worked on:
Christoph and Thomas took us on a tour of the hardcore range of Rx operators that are helpful when working with Observables. Even with marble diagrams, some of these concepts pretty much blew our minds!
Major. Paradigm. Shift.
As a last mic drop before the
switchMap() exercise Christoph told us “you’ll work it out” - then pointed it at the lab to learn-by-doing.
And work it out we did. We implemented the classic
debounceTime() search scenario (but optimised it with a little
distinctUntilChanged(), followed up with some
merge()ing with the initial ContactList
Http fetch for the non-search case, and even sprinkled in some serious
switchMap() rocket surgery to make sure that async fetches arriving out of sequence were handled (to cater for load balancing scenarios).
It was impressive.
And it will take a lot more practice.
But here’s a vanity showoff of what we covered today.
As one of the many bonuses, I finally understand why you use this form of import for RxJS (compared with the standard Angular imports):
import 'rxjs/add/operator/map';
We did all kinds of intercomponent comms yesterday - property binding,
EventEmitter, and all their friends were covered - culminating in implementing a basic EventBus shell. Powered by a
Subject(), this EventBusService let us update the Header component of our application with a breadcrumb from wherever we were in the app. Very nifty!
Today we dive into all the nuances of Forms and Validation and will be getting deep into the Router for routing child views. Great way to finish!
Pumped!]]>
Well, yesterday was Day One of the thoughtram Angular Masterclass in Sydney and I’m having a ball and learning heaps.
I thought I’d write up a quick retro to give people an idea of the experience.
On our first day we build a thing! And that thing did CRUDing operations on RESTful service. And did Databinding and all that Angular-ish goodness.
In fact, the first day is called their “Jumpstart” day where you cover all the basics to get everyone on the same page.
This is what I spent the day building:
I’ve always been a fan of the “Training from the Back of the Room” approach (actually all of Sharon Bowman’s stuff is awesome), and the thoughtram guys definitely train in that school - 20 minute intense “downloads” of information with Q&A, followed by labs to put it all into practice.
All their training presentation notes are browser-based and accessed by a participant login on their website. The notes are updated for the life of the course (which is awesome!)
Ditto for the lab exercises - which are all hosted on Github repos and constantly updated with tweaks.
Being the “Jumpstart” day, you get a taste of everything, starting from a blank “scorched-earth” Angular 2 project containing some supplied CSS and sample data - and building piece by piece on that.
Based on my Git logs, the day was a solid basics wrap-up:
*ngForand
*ngIfand friends)
ngOnInitand what runs when)
OpaqueToken, Instance handling and how the DI resolves things)
routerLinkand friends)
That’s quite a first day. If you’re already using Angular 2 that might sound a little basic, but the great thing about having experienced instructors is that we deep dive all the way along.
Covering all those “so I wonder how that actually works?” demystifies a lot of the Angular magic you’d be templated to glaze over and get stuck on down the track. Super valuable.
The first day was long (9:30 until after 6pm) but it went really fast and the pace intensified as the day went on.
After hours, Christoph and Thomas took us out to the Opera House Bar for Drinks and Hangout time. This is where you want to be at the end of the first day:
They team have told us that the intensity ramps up today as we dive into Observables and functional programming.
Super excited!]]>
I’ve had a cracker this year. It might well have been one of the best years of my life!
Didn’t write a book. Didn’t have a baby. Didn’t ship a world-beating app. Didn’t speak at any big gigs. Didn’t finish a marathon. Didn’t make a ton of money. Or even have something go viral.
Nope. None of those things.
This year I took bold actions toward things that matter to me.
That is all.
And that is more than enough.
Wrote some great software for clients that makes their lives easier and more productive. Even delivered my own hobby app to make my own life easier.
I had so much fun on my YouTube Channel shooting Learn Angular 2.0 in 21 days and Ship Your Hobby Project in 30 days. That’s a lot of video content, and by the end of it the projection workflow has improved considerably!
I’m going to be doing a lot more video next year, so make sure you subscribe if you haven’t yet!
I love writing. And it was great to get back on the blogging horse this year.
This is post 46 for this year. That’s almost my busiest blog year on record (only exceeded by 2006 with 47). That was ten years ago when life was a lot simpler!
I’m really enjoyed the switch to Hexo, Git and Markdown, and this style of blogging is definitely the minimalist future I’ve been looking for.
I’ve really enjoyed making new friends inside and outside the tech community this year. You really are the sum of the five closest people you hang around - so make sure you’re intentional about those people!
I’m putting the whole “personal networking” thing front and centre for next year. I’ve learned so many great techniques from Fizzle this year that I can’t wait to apply in 2017.
Angular 2 and Ionic 2 have really pulled me back into the heart of the WebDev community - and it’s an awesome place to be.
TypeScript is all kinds of wonderful, and I’d happily spend all of 2017 hacking on Angular and Ionic apps in TypeScript.
In fact, that’s what my diary looks like at the moment.
So. Very. Pumped.
Kent Beck said, “I’m not a great programmer; I’m just a good programmer with great habits.”
This year, I’ve had a really hacked on my own personal habits and have definitely gone next level.
I’m more disciplined than ever (inside and out), and have worked out some great productivity hacks to be the most effective I’ve been in my life (check out my free Ship Your Hobby Project in 30 days for an insight).
So many great projects scheduled for 2017. Can’t wait to get started!
Thanks for hanging out!]]>
I’ve been a huge fan of Nexus for inhouse proxying of Maven artefacts for years. I’ve used it at heaps of government sites where it was a hassle to get internet access through content proxies, and it’s been a life saver. But you might not know that Nexus is also awesome for proxying NPM, Bower and even Docker images.
If you need to configure your local npm to point to Nexus, there’s awesome docs on that.
However, I’ve now started experimenting with yarn, Facebook’s drop in replacement for npm (but quite a bit faster!), and so was naturally wondering how to configure it to use my local Nexus proxy.
Turns out it’s simply a matter of:
yarn config set registry
And you’re off and running. I thought I would write it down so I can google it later!
You can confirm that life is great, but tailing your
/nexus/log/request.log and watch the traffic flowing (or try a
yarn install --verbose and sit back!).
Have fun!]]>
I have to say that Push Notifications are just super cool.
Think how cool would it be to make your Mobile ding with a custom sound every time a really important event happened in one of your systems?
Way cooler than SMS :-).
And it turns out adding such features into your app is totally trivial thanks to the good folk at Pushover.net. And today I’m going to show you how!
Pushover is a web service (and suite of platform specific mobile apps for Android, IoS and the Browser) that lets you easily push Notifications to all (or one) of your registered devices whenever you would like.
It’s a one-off $5 subscription cost per platform, then you’re free to send up to 7500 notifications month forever. That’s a lot of events!
Integrating the Pushover API into your app is a snack. There are node modules, and other third party API libs (Java, C#, Python and all the other good gear) to use out of the gate, but the API is simple form POST to an endpoint, so rolling your own is no big deal either. I’ll show you how.
In Angular 2, I’d create a
pushover.service.ts file to host my service, and drag in my keys and endpoints:
@Injectable()export class PushoverService { private PUSHOVER_ENDPOINT=' private PUSHOVER_TOKEN='your-magic-token-here'; private PUSHOVER_USER_KEY='your-user-key-here'; constructor(private http : Http) { }}
And with my shell class in place, there’s just the small matter of performing that actual form POST and getting those notifications rolling in:
sendNotification(title : string, body : string) : Observable<Object>{ let pushoverMessage = { token : this.PUSHOVER_TOKEN, user: this.PUSHOVER_USER_KEY, title: title, message: body }; let options = {} as RequestOptionsArgs; options.headers = new Headers({ 'Content-Type': 'application/x-www-form-urlencoded' }); let pushoverStr = this.mapToFormData(pushoverMessage); console.log(`Pushing message ${pushoverStr}`); return this. pushoverStr, options).map( resp => { if (resp) if (resp.status == 200) { return resp.json() } else { throw resp; } }).catch((err, other) => { console.log("Trouble in the land of POST", err); return Observable.throw(err); });}
The only other magic I needed was a way to format my data into an old school form POST. There’s probably a better way in npm to do this stuff, but here’s my heavy hammer approach:
public mapToFormData(map: Object): string { let formData = ''; for (var key in map) { if (formData) formData += '&'; let eKey = encodeURIComponent(key); let eValue = encodeURIComponent(map[key]); formData += `${eKey}=${eValue}`; } return formData;}
And will all the code happening, it’s a matter of calling it from my login form:
this.notificationsService.sendNotification(`Successful login for ${as.auth.email}`, "With all kinds of winning"). subscribe( value => { console.log("Winning"; }, err => { console.log("Observable threw", err); })
And then sit back and enjoy the greatness:
Having so much fun with this stuff. Thanks Pushover.net!]]>
And I’m so there!
It’s no surprise to any of my friends that I’ve doubled-down hard on Angular 2 this year. I built PuppyMail to learn all about serverless Angular apps with Firebase, taught a 21 days of Angular YouTube course, spoke at a Meetup on Angular 2 for Java Devs, and have lately been diving in hard on Ionic 2 (think of Ionic 2 as Angular 2 for Hybrid Mobile apps).
But there’s so much more to learn! I’m only scratching the surface of Observables, Forms, Unit Testing, DI scoping and lots more!
As you may know, I’m a massive fan of Hobby Projects for levelling up. I even did an eight-part YouTube Series on Shipping Your Hobby Project in 30 days. But I’m also a massive fan of osmosis in community. Of hanging out with guys and girls that are further along the road than you, and leveraging off that talent and experience.
So when I heard the legends from Thoughtram are coming to Sydney to teach their Angular 2 Masterclass I was super pumped! These guys are committers on Angular 2, Angular Material, and prolific bloggers - you’re probably read some of their (hardcore) blog posts while researching Angular 2 stuff.
This is going to be an awesome opportunity from best-in-class Angular 2 folk in our own backyard - from Jan 9-11. Can’t wait to hang out with them! (and not sure why this is the first I heard of it - but now you know too!).
If you live in AU/NZ and are keen for a Sydney trip, why not come and hang out! There are still tickets!
Sorting out accommodation and transport is always a big hassle when travelling for training.
Here’s my $0.02 on what I’ve found out so far.
After a fair bit of research, I’m going to be staying at The Tank Stream Hotel which is just around the corner from the training venue Tank Stream Labs. Pretty cool rooms from $160/night - which is pretty cheap for Sydney City.
It’s also accessible from Wynyard station, so it’s easy to get there from the Airport (and get home when we’re done)
I already have a few Angular 2 and Ionic 2 projects banking up for 2017 - both commercial consulting and startup gigs, so this is the absolute perfect kickstart.
Hope to see you there!
And will report back on how it goes…]]>
I:
I developed a small Angular 2 application, with a few very simple goals:
So I birthed “PuppyMail” to scratch that itch.
Well PuppyMail delivered in spades on those three objectives. With the TL/DR being:
Ladies and Gentleman, I give you PuppyMail:
And I thought I would go all out with a landing page to (with links to source, etc).
Yes. Well. There were quite a few side tracks in those 31 days. But all wonderful learning experiences none the less.
A few things really did prove trickier than I anticipated.
First “crash and burn” moment for me was lack of CORS support on the GetPocket API.
That meant the standard http angular component would die in the browser when doing the cross-origin calls. This lead to a tiny bit of glue in a node proxy which I called puppymail-server. Under the covers, it just uses express- to do all the heavy lifting.
I needed to generate Custom tokens for Firebase logins (since I wanted the user to be able to use their GetPocket creds as auth to the Database)
This was actually a blessing, since I already had the proxy server doing all the Pocket oAuth proxying, so it was easy to inject a custom Firebase token on the way back from a successful oAuth.
The Database Auth rules on Firebase are just fantastic. I can lock down sections of the Db to just the user that creates them. So very painless!
Paginated lists turned out to be trivial. Just throw in a PrimeNG DataList component, point it at your array of objects, and you’re good to go. Rince and repeat for OrderList if you need to sort that list. Throw in a few Growl notifications, a couple of SplitButtons, and a pinch of Dialog action and you’re cooking.
I will definitely be recommending PrimeNG to enterprise clients as a great way to standardise the app experience.
I’ve used Bootstrap a lot in the past, and SemanticUI really takes ideas to a whole new level of usability. The whole “semantic-ness” is just so readable! I love creating a
<div class='stackable two column grid'>!
I need to get deeper in SemanticUI, and fortunately the docs are just awesome.
This one really caught me off guard in deployment. Nginx kept throwing 404s when deep linking into the app using html5 style URLs.
You’d end up with requests like: /puppymail/login/backFromPocket which you’d hope would route to the LoginController, but end up 404’ing as Nginx tried to resolve that to a real file. Apparently there are serverside fixes which map all 404’s back to
index.html to support this. That’s on my list, but a quick workaround is using a
HashMappingStrategy in the router on the client side.
This means you end up “hashy” style links like
/puppymail/#/login/backFromPocket, but that doesn’t worry me for now - especially since it was a one-line change to my Angular module to:
imports: [RouterModule.forRoot(routes, { useHash: true })],
And I was done!
There were tons of other stuff I learned along the way about Angular 2 - particularly in creating re-usable components that shared data across the app. Still plenty to learn about Observables, RxJS and error handling. But the pieces are coming together for me.
I just can’t believe how fast the thing runs too! Just awesome!
Can’t wait for my next hobby project. I can feel some Ionic 2 mobile app action coming on to channel my Angular passion into the mobile space.
And in the meantime, if you are a GetPocket user, have a play with PuppyMail, or checkout the source code, or just sign up to the Motivated Programmer Newsletter.]]>
I loved hacking on Primefaces when I was working fulltime in JSF2. In fact, the whole component-based development thing is really what attracted me to Angular 2.
But since switching, I’ve really been looking for a solid component library to get work done. I’ve been having great success with Semantic UI for styling, but nothing beats being able to bind a paginated list to an Array
Enter PrimeNG aka Primefaces for Angular 2! This library brings all the stunning engineering and pragmatism from Primefaces to the Angular 2 domain. And it totally rocks!
Here’s that list of my
PocketEntries rendering in a PrimeNG paginated OrderList. In the words of Kanye, “Enjoy the greatness!”
And there are only a few lines of code backing all that magic.
But the $1M question is, “How do you integrate it with the Angular CLI?”
Turns out that it’s a snack…
I’m sure the existing setup page will cover all this at some stage. But until then, here’s the steps you’ll need to use PrimeNG with Angular CLI.
First, install the dependency, and save it to your project…
npm install primeng --savenpm install font-awesome --save
Once that is installed, you’ll have a
./node_modules/primeng/ directory (and a
./node_modules/font-awesome/ directory). Edit the
angular-cli.json file in your root directory to include the PrimeNG style tags you’ll need:
"styles": [ "styles.css", "../node_modules/font-awesome/css/font-awesome.css", "../node_modules/primeng/resources/themes/omega/theme.css" , // or whatever theme you prefer "../node_modules/primeng/resources/primeng.css"],
You’ll need font-awesome for some of the icon glyphs, so I added it to the styles above.
With the styles in place, it’s time to drag in the modules you’ll need to update your
app.module.ts to load the modules of the components you need.
You’ll find the name of the module you need to load on the Documentation tab of the component you want to use on the PrimeNG website. For example, on the OrderList page, the Documentation tab says:
import {OrderListModule} from 'primeng/primeng';
So you’ll need to add that line to the top of your
app.module.ts. Then finally, include the component in your
imports: section of
app.module.ts towards the bottom of the file (which makes sure the
p-dataList tag becomes active in your markup. So, your
app.module.ts will have something like:
imports: [ BrowserModule, FormsModule, HttpModule, OrderListModule, // this line brings the magic of p-dataList to your markup! ]
With those bits in play, you can start using the
p-dataList tag in your actual markup:
<p-dataList [value]="entries" [paginator]="true" [rows]="5" [paginatorPosition]="both">
In my case
entries is an
Array<PocketEntry> property on my controller, which I can iterate over to output each Pocket link entry.
To make the iteration happen, inside the
p-dataList markup you specify a
template directive which gets output for each iteration of my
entries (in a very similar style to good ‘ol Primefaces).
I want to call each iteration
entry, so the line I need is:
<template let-entry>
Then I’m free to start referencing that
entry variable as I choose..
<div class="content"> <a class="header">{{entry.resolved_title}}</a> <div class="meta"> <span class="cinema"><a [href]='entry.resolved_url'>{{entry.resolved_url}}</a></span> </div> <div class="description"> <p>{{entry.excerpt}}</p> </div></div> <!-- etc... -->
Super productive!
It’s been amazing to see the PrimeNG library take shape - there are already dozens of components in place - all with the super high engineering standards that the Prime team deliver (check out their source code and enjoy the greatness!)
Awesome work Prime!]]>
Now that I’ve established some routines around my exercise habits, I’m finding that I’m listening to a lot more podcasts. And there is some epic free content out there these days (more on that later).
What I really wanted to talk about today was how much I’m enjoying Pocket Casts as a player for Android (though I used it on Windows Phone originally, and it ships for iPhone too).
Of course it has:
But the thing I really love about it is the syncing! If you pay them just a few $ more for Web access, you can manage all your subscriptions via a browser on your PC.
Better still, because everything is synced, you can easily pick up the episode your were playing on your phone, and listen to a bit more on your PC, and everything remembers where you’re up to!
I’m always a bit interested in what people are listening to in the Podcast department, so here are a few of my faves, in case you need some more commute/exercise background listening action…
There’s a bunch of things to get you started. And do check out Pocket Casts if you’re in the market for a good player!]]>
A few days ago I started a new series on YouTube called Ship Your Hobby Project in 30 Days and I’m really enjoying it.
One of the best ways to level up as a programmer is by having a side project. It gives you a great way to learn something new by actually doing (the best way to learn) - and also gives you an internal motivation bump through the exhilaration of shipping.
And I’m all about the shipping.
A whole bunch of people have been gradually coming on board and signing up for my Motivated Programmer newsletter to keep themselves honest and moving forward. And they’ve been keeping themselves honest by sending my their one-sentence app descriptions. It’s been awesome!
I’ve found the people signing up super positive and have been joining in the fun myself. We’ve just completed the lesson 3 on Wireframing and Just Enough Planning where we used e.ggtimer.com to smash through our wireframing and module planning.
Here’s my whiteboard version for accountability:
If your GitHub repo is full of half-finished stuff, and you’re keen to jump into a community that is non-judgement, it’s totally worth hanging out with us. First step is to spend just 10 minutes a day and start working your way through Ship Your Hobby Project in 30 Days.
Can’t wait to show off what the students build next month. Super exited!
]]>]]>
Today on the show we explore securing Angular 2 routes. And we’ll use OAuth, Firebase and Google (or your preferred OAuth provider) to get us there.
You can grab the source on the GitHub Repo.
Don’t forget to Subscribe to my Fresh Bytecode channel for regular Java/Web-related screencasts
Here’s the script if you’d like to follow along at home!
[ ] Discuss the plan for today - use Google authentication to login to your app. Pluck Image URLs along the way.
[ ] Check out last weeks show to get up to speed on the QOTD app
[ ] Turn on authentication in your application (auth/signin method)
[ ] Configure your module to turn on authentication methods you prefer
export const firebaseAuthConfig = { provider: AuthProviders.Google, method: AuthMethods.Popup} // and in imports...AngularFireModule.initializeApp(firebaseConfig, firebaseAuthConfig),
[ ] Now to secure the routes in
app.routing.ts
const appRoutes: Routes = [ { path: '', redirectTo: '/quotes', pathMatch: 'full'}, { path: 'quotes', component: QuotelistComponent, canActivate: [LoggedInGuard] }, { path: 'login', component: LoginComponent }];export const appRoutingProviders: any[] = [ LoggedInGuard];
[ ] Implement
login.guard.ts
import { LoginService } from './login.service';import { Injectable } from '@angular/core';import { Router, CanActivate } from '@angular/router';@Injectable()export class LoggedInGuard implements CanActivate { constructor(private loginService: LoginService, private router: Router) { } canActivate(): boolean { console.log("Guard function has been invoked"); let authenticated = false; if (this.loginService.isAuthenticated) { authenticated = true; } else { this.router.navigate(['/login']); } console.log("Returning from Guard function with: " + authenticated); return authenticated; }}
[ ] Explore the actual login service
[ ] Add some markup to the menu
<div class="right item" * <a class="item" > <img class="ui avatar image" [src]='loginService.photoUrl'> <span>{{ loginService.displayName }} </span> </a> </div> <div class="right item" * <a class="ui inverted button" routerLink='/quotes'>Log in</a></div>
[ ] GitHub repo has localStorage to cache oAuth tokens
[ ] Next time we’ll look at editing values.]]>
Serverless architectures are so hot right now! Today on the show we dive into Firebase with Angular2 using the AngularFire2 library.
We’ll be a dynamic Quote of the Day application which updates from the server in real time. Get out of here! Real time!
You can grab the code for today’s episode on GitHub.
Don’t forget to Subscribe to my Fresh Bytecode channel for regular Java/Web-related screencasts
Here’s the script so you can follow along at home:
[ ] Discuss the plan for today - use remote db to populate a list of quotes - with live update when server changes
[ ] Create a new app
ng new qotd
[ ] I’ve added Semantic UI to the
index.html in the root
<link rel="stylesheet" href=" src=" src="
[ ] I’ve also customised the
index.html to use the Home Page layout from Semantic UI (scroll down that page to see it in action)
[ ] Install the firebase deps
npm install firebase angularfire2 --save
[ ] Create some data
[ ] Edit the read and write action on the your table (the rules tab) - so you can read without authenticating
{ "rules": { ".read": "true", ".write": "auth != null" }}
[ ] Configure your
app.module.ts to use Firebase
Update your module definitions import * as firebase from 'firebase'; import { AngularFireModule } from 'angularfire2'; // Initialize Firebase export const firebaseConfig = { apiKey: "AIzaSyAtkDOebowUCsSF8efOL_0yup1DKCCan00", authDomain: "qotd-caac8.firebaseapp.com", databaseURL: " storageBucket: "qotd-caac8.appspot.com", messagingSenderId: "922026910241" }; @NgModule({ declarations: [ AppComponent, ], imports: [ BrowserModule, FormsModule, HttpModule, AngularFireModule.initializeApp(firebaseConfig) ], providers: [], bootstrap: [AppComponent] }) export class AppModule { }
[ ] Add your quote list component
import { Component, OnInit } from '@angular/core';import { AngularFire, FirebaseListObservable } from 'angularfire2';@Component({ selector: 'app-quotelist', templateUrl: './quotelist.component.html', styleUrls: ['./quotelist.component.css']})export class QuotelistComponent implements OnInit { quotes: FirebaseListObservable<any>; constructor(private af: AngularFire) { } ngOnInit() { this.quotes = this.af.database.list('quotes', { query: { orderByChild: 'author' } }); }}
[ ] Add some markup
]]>]]>
<table class="ui basic table" id="quoteTable"> <thead> <tr><th>Id</th> <th>Author</th> <th>Quote</th> </tr></thead> <tbody> <tr * <td>{{quote.$key}}</td> <td>{{quote.author}}</td> <td>{{quote.quote}}</td> </tr> </tbody></table>[ ] Celebrate the awesome serverless realtime update action
Congratulations if you’ve made it this far. Today on the show we explore doing production builds, and deploying to github pages. Then we’ll share a bunch of cool resources (down below) for the next stage in your journey!
It’s our last show in the series! But not the last show for the channel, so subscribe today!
Thanks for being so do a production build
ng build --prodng build --target=prod --env=prod
[ ] Have a look at your tiny dist directory
[ ] Then we’ll deploy our app to github pages
ng github-pages:deploy --message "First official release of Twit-ng!"
[ ] Sit back and wait to be acquired
[ ] While we’re waiting to bathe in the cash, let’s explore some cool resources
[ ] First some free stuff…
[ ] You should subscribe to ng-newsletter - super interesting articles. Here’s a direct link to the current issue at time of writing.
[ ] If you’re into podcasts, the Adventures in Angular podcast is jam packed full of great stuff
[ ] If you’re into blogs, the Thoughtram guys are the gold-standard for black belt Angular. Super cool dudes.
[ ] If you’re after a book, I can recommend the ng-book-2 offering. Don’t really need the screencast bundle - but the base book offering is awesome (albeit expensive)
[ ] I’m seriously thinking about putting together a $5 book package that covers just the essential stuff that I cover in these 21 days. If you’d be interested, just thumbs up this vid and I’ll take it as a call to arms!
[ ] And we’re done. Thanks for 21 days of effort and positivity. You guys have been so supportive, I have learned so much!
[ ] Subscribe to the channel, I’m going to keep posting vids regularly (and I’m going to be doing a Git course here shortly!).]]>
Angular 2 has gone final! Yay! Today we’re going to look at linting, code coverage and mocking in the latest Angular CLI (Beta]]>
Today we’re going to look at integration testing using the bundled ng end to end testing tool called Protractor. run our integration test suite
ng e2e
[ ] Explore the directory e2e-spec.ts & .po.ts files
[ ] We’ll explore the role of Page objects to abstract elements and localise changes
[ ] Implement our
TwitNgPage object (I would typically rename to
FeedPage)
export class TwitNgPage { navigateTo() { return browser.get('/feed'); } postTweet(tweetText) { element(by.name("body")).sendKeys(tweetText); element(by.css("button")).click(); }}
[ ] Discuss selectors
by.name
by.id and
by.css (not
by.model)
[ ] Use the debugger
browser.pause();
[ ] And add an assertion to
app.e2e-spec.ts
it("Should post a tweet", () => { page.navigateTo(); page.postTweet("This is awesome"); expect(page.getFeedCount()).toEqual(6); });
[ ] Implement our selector in
app.po.ts
getFeedCount() { return element.all(by.css(".comment")).count();}
[ ] Let’s confirm it’s working by getting the text of the tweet in
app.po.ts
getLatestTweet() { return element.all(by.css(".comment .content .text")).get(0).getText();}
[ ] And asserting it in
app.e2e-spec.ts
expect(page.getLatestTweet()).toEqual("This is awesome");
[ ] Let’s also test our retweet action in
app.e2e-spec.ts
it("Should increment retweet count", () => { page.navigateTo(); page.retweetLatestTweet(); var rtCount = page.getLatestTweetRetweetCount(); expect(rtCount).toEqual("2 Retweets"); });
[ ] And implement the logic to retweet/count to the Page in
app.po.ts
retweetLatestTweet() { element.all(by.css(".comment .content .actions .retweet")).get(0).click();}getLatestTweetRetweetCount() { return element.all(by.css(".comment .content .actions .retweet")).get(0).getText();}
[ ] We have smashed end to end testing!
[ ] Tomorrow we look at quality tools]]>
With Reactive Forms, our validation and binding are configured in the backing code rather than the markup. Today, we’ll take yesterday’s template-driven form and convert it to a reactive form, and in the process you can decide which style you like update our module list to import reactive forms
import { FormsModule, ReactiveFormsModule } from '@angular/forms';imports: [ BrowserModule, FormsModule, ReactiveFormsModule, routing, HttpModule,
[ ] We’ll update our template code to spark the form model
import { FormControl, FormGroup, FormBuilder, Validators } from '@angular/forms';form = new FormGroup({ username : new FormControl(), password : new FormControl(), rememberme : new FormControl() })
[ ] Remove all our
ngModel directive
[ ] Wire our form to our model
<form class="ui form error" [formGroup]='form' name='username' becomes formControlName='username' etc
[ ] Test our form model
[ ] Add some validation logic
form = new FormGroup({ username : new FormControl('',[ Validators.required] ), password : new FormControl('', [ Validators.required] ), rememberme : new FormControl(true) })
[ ] Refresh our validation logic
<div *<p>Username is a required field</p></div>
[ ] Refactor to use the FormBuilder DSL
public form : FormGroup;constructor(private formBuilder : FormBuilder) { }ngOnInit() { this.form = this.formBuilder.group({ username: ['', Validators.required], password: ['', Validators.required], rememberme: [true], })}
[ ] Winning with reactive forms complete
[ ] Tomorrow we’ll tackle End to End testing]]>
Today we’re going to look at template-driven approach to forms with the wonders of ngModel. Tomorrow we’ll look at the reactive approach to forms using backing code for validation config. generate a login form
ng g c login
[ ] And add it to our module in
app.module.ts
import { LoginComponent } from './login';declarations: [ AppComponent, FeedComponent, MenuComponent, FriendsComponent, FriendComponent, MessagesComponent, LoginComponent ],
[ ] And add it to our router in
app.routes.ts
import { LoginComponent } from './login';{ path: 'login', component: LoginComponent },
[ ] And add that router link to our menu
<a class="ui item" routerLink="/login" routerLinkActive="active">
[ ] Demonstrate the login page
[ ] Now create a rough login form
<form class="ui form error"> <div class="field"> <label>User Name</label> <input name="username" placeholder="User Name" type="text" required > </div> <div class="field"> <label>Password</label> <input name="password" placeholder="Password" type="password"> </div> <div class="field"> <div class="ui checkbox"> <input name="rememberme" tabindex="0" type="checkbox"> <label>Remember Me</label> </div> </div> <button class="ui button primary" type="submit">Submit</button></form>
[ ] Now we’re going to use the ngForm module to do magic in creating a form backing model (not our domain model!)
<form class="ui form error" #form="ngForm"
[ ] Add
ngModel to each input field
[ ] Add a pipe to see the data in action
{{ form.value | json }}
[ ] Handle the form values in OnSubmit()
(ngSubmit)='OnSubmit(form.value)' OnSubmit(formJson) { console.log(formJson); }
[ ] Let’s look at the form model for validation
<input #<p>Username is a required field</p></div>
[ ] You can also structure JSON, and do validity checking on whole groups (eg an address)
ngModelGroup='blah'
[ ] Celebrate the win of template-driven forms
[ ] Tomorrow we’ll tackle reactive forms]]>
Bad things can happen when working with remote services. Today we explore error handling in an Observables context (in both the service tier and user facing components).
And we revisit our “Loading…” state to tidy things up on initial page load., fix our import
import { Observable } from 'rxjs/Rx';
[ ] Implement our
getCurrentFeed() method to catch errors
return this. Response) => { console.log(resp.json()); var fetchedTweets = []; for (let tweet of resp.json().data) { fetchedTweets.push(this.getTweetFromJson(tweet)); } return fetchedTweets as Array<Tweet>;}).catch(this.errorHandler);
[ ] Implement our error handler (service side) for diagnostic/forensics
errorHandler(err) { console.log(err); return Observable.throw(err); }
[ ] Update our feed component to catch the error (second arg to subscribe) - this should be our user facing error!
ngOnInit() {this.feedService.getCurrentFeed().subscribe( (newTweets) => { this.tweets = newTweets;}, ( error ) => { this.errorText = error;});
[ ] Display the error text in the component (conditionally) after we render the form, perhaps, but before the timeline…
<div * <i class="close icon"></i> <div class="header"> {{ errorText }} </div></div>
[ ] Throw an error to test it out!!!
throw "Internal Error";
[ ] There are scenarios when you do want to trip these errors yourself - for example, bad status codes come back. Let’s updated
updateTweet() in our feed service
return this. body).map( (resp: Response) => { console.log(resp); if (resp.status == 204) { console.log("Success. Yay!"); } else { throw `Error fetching tweet ${tweet.id}. Received status code: ${resp.status}`; } }).catch(this.errorHandler);
[ ] Finally, let’s handle the “Loading..” screen scenario. In our feed component backing class, we’ll add a
loaded property and use it as a toggle. The third argument to a subscribe is an “on complete” or finally block.
loaded = false;ngOnInit() { this.feedService.getCurrentFeed().subscribe( (newTweets) => { this.tweets = newTweets; }, ( error ) => { this.errorText = error; }, () => { this.loaded = true; }); }
[ ] Update the markup and we’re done
<div * // existing markup..</div><div * <h2>Loading...</h2></div>
[ ] Let’s slow down our fake request timing in our app.module.ts
InMemoryWebApiModule.forRoot(MockDatabaseService, { delay: 3000, rootPath: 'api/'})
[ ] Robustness award unlocked! (level 1 :-)
[ ] Next up we’ll start looking at Form validation (continuing our robust theme).]]>
It’s finally time to introduce some Http CRUD implementation! Today on the show we use yesterday mock in-memory RESTful Http service to
get(),
put() our way to remote:
[ ] Implement our
getCurrentFeed() method
private getTweetFromJson(obj: Tweet): Tweet { return new Tweet( obj.id, obj.body, obj.author, obj.date, obj.retweets, obj.favorites) } getCurrentFeed(): Observable<Tweet[]> { return this. Response) => { console.log(resp.json()); var fetchedTweets = []; for (let tweet of resp.json().data) { fetchedTweets.push(this.getTweetFromJson(tweet)); } return fetchedTweets as Array<Tweet>; }); }
[ ] Update
feed.component.ts to handle the observable
this.feedService.getCurrentFeed().subscribe( (newTweets) => { console.log(newTweets); this.tweets = newTweets;});
[ ] Demo of changing our database file.
[ ] Next, we’ll update our service to handle posting a new tweet as a JSON object
postNewTweet(tweetText: string) { let body = JSON.stringify({ body: tweetText, author: this.userService.getCurrentUser(), date: new Date(), retweets: [], favorites: [] }); return this. body).map( (resp: Response) => { console.log(resp.json()); return this.getTweetFromJson(resp.json().data); }); }
[ ] Update our feed.componoent.ts to subscribe to the create..
OnNewTweet() { console.log(this.tweetText); this.feedService.postNewTweet(this.tweetText).subscribe( (newTweet : Tweet) => { console.log(newTweet); this.tweets.unshift(newTweet); }); this.tweetText = ''; }
[ ] Finally, update our service to handle updates..
updateTweet(tweet: Tweet) { let body = JSON.stringify(tweet); let url = `/api/tweets/${tweet.id}`; return this. body).map( (resp: Response) => { console.log(resp); if (resp.status == 204) { console.log("Success. Yay!"); } });}
[ ] Update retweet and favorite routines in service with an update call
this.updateTweet(tweet).subscribe(resp => console.log(resp));
[ ] Let the winning flow!!!
[ ] Next up we’ll start looking into HTTP Error Handling]]>
You’ll need to test your API - and your backend might not be built yet. But no matter! The Angular 2 team have your back with an InMemoryDbService Http RESTful database.
Learn how to configure it in today’s episode (or just learn about using 3rd party JS with the Angular CL:
[ ] Install a 3rd party npm module to simulate REST (I’ve used a specific version since I know this one is compatible with Angular 2 RC5)
npm install --save angular2-in-memory-web-api@0.0.17
[ ] Update
angular-cli-build.js to copy the npm over to dist
'angular2-in-memory-web-api/*.+(js|js.map)'
[ ] Run an
ng build to copy it over to
dist and have a look
[ ] Update
system-config.ts - but only in the top section of the file
/** Map relative paths to URLs. */const map: any = { 'angular2-in-memory-web-api' : 'vendor/angular2-in-memory-web-api'};/** User packages configuration. */const packages: any = { 'angular2-in-memory-web-api': { main: './index.js', defaultExtension: 'js' }};
[ ] Now we import the actual classes into our
app\app.module.ts
// Imports for loading & configuring the in-memory web apiimport { InMemoryWebApiModule } from 'angular2-in-memory-web-api';import { MockDatabaseService } from './mock.database.service'; imports: [ BrowserModule, FormsModule, routing, HttpModule, InMemoryWebApiModule.forRoot(MockDatabaseService, { delay: 100, rootPath: 'api/' }) ],
[ ] Implement our
mock.database.service.ts method
import { InMemoryDbService } from 'angular2-in-memory-web-api';import { Tweet } from './tweet';export class MockDatabaseService implements InMemoryDbService { createDb() { let friends = [ "Mary", "Joe", "Karen", "Phil", "Toni" ]; let tweets = [ new Tweet(1, 'Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.', 'Glen', new Date(), ['Joe'], []), new Tweet(2, 'Measuring programming progress by lines of code is like measuring aircraft building progress by weight', 'Joe', new Date(), [], ['Mary']), new Tweet(3, 'Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.', 'Mary', new Date(), ['Glen'], ['Mary']), new Tweet(4, 'People think that computer science is the art of geniuses but the actual reality is the opposite, just many people doing things that build on each other, like a wall of mini stones', 'Glen', new Date(), ['Joe', 'Mary'], []), new Tweet(5, 'You can’t have great software without a great team, and most software teams behave like dysfunctional families.', 'Joe', new Date(), [], ['Mary', 'Glen']), ]; return { 'tweets' : tweets, 'friends' : friends }; }}
[ ] Update the Tweet constructor to handle that
id field
constructor(public id : number, public body: string, public author: string, public date: Date, public retweets: Array<string>, public favorites: Array<string>)
[ ] Fix any code that the new
id field on Tweets breaks
[ ] Demo that direct URL routing is not messed with
[ ] Demo that you can debug into your new class/api
[ ] Run it and celebrate the win!
[ ] Next up we’ll start looking into CRUD
It’s finally time to introduce some Http action into our app by fetching our Friends in async! Along the way we’ll learn about Observables.:
[ ] Fix the friend component deprecation
import { FriendComponent } from './friend';declarations: [ AppComponent, FeedComponent, MenuComponent, FriendsComponent, FriendComponent, MessagesComponent ],
[ ] First, let’s import the Angular Http Module into
app.module.ts
import { HttpModule } from '@angular/ [ BrowserModule, FormsModule, HttpModule, routing ],
[ ] Create a static json file in
public/friends.json to simulate our server and hold our users (note the “double” quotes)
[ "Glen", "Joe", "Mary" ]
[ ] Inject our http service into our
friend.service.ts
import { Http, Response } from '@angular/ { Observable } from 'rxjs';constructor(private userService: UserService, private http : Http) { }
[ ] Implement our
getFriends() method using
getFriends() : Observable<string[]> {return this. map((resp : Response) => resp.json() as string[]) ;}
[ ] Update the our
friends.component.ts to subscribe to the Observable
ngOnInit() { this.feedService.getFriends().subscribe((newFriends) => { this.friends = newFriends; console.log(this.friends); });}
[ ] Run it and celebrate the win!
[ ] Next up we’ll start looking into CRUD
On Day 12 we turn the Angular 2 Router up to 11 by deep linking to specific friend in our friends implement our Friends Component
import { FeedService } from '../feed.service';friends = [ ]; constructor(private feedService : FeedService) { } ngOnInit() { this.friends = this.feedService.getFriends(); console.log(this.friends); }
[ ] Implement our
getFriends() method
getFriends() : Array<string> { return [ 'Mary', 'Joe', 'Karen', 'Phil', 'Toni' ];}
[ ] Update the view
<div class="ui comments" * <div class="comment" * <a class="avatar"> <img src="./avatars/{{friend.toLowerCase()}}.jpg"> </a> <div class="content"> <a class="author">{{friend}}</a> <div class="actions"> <a >Details</a> </div> </div> </div></div><div * <h2>THere are not any friends here</h2></div>
[ ] Show the friends list
[ ] Let’s create a component for deep linking into our friends details
ng g c friend
[ ] Update our
app.routing.ts to point to the new component
import { FriendComponent } from './friend';{ path: 'friends/:friendId', component: FriendComponent },
[ ] Implement the new component
friend.component.ts logic to get params
friendId = '';constructor(private route : ActivatedRoute) { }ngOnInit() { this.route.params.map(params => params['friendId']).subscribe((friendId) => { this.friendId = friendId; console.log(friendId); })}
[ ] Update the
friend.component.html view file to output the friend:
<h2> This friend is {{ friendId }}</h2>
[ ] Update our router link in
feeds.component.html to pass the parameters:
routerLink="/friends/{{friend}}"
[ ] Run it and cheer!
[ ] Next up we’ll start looking into http for remote data fetching]]>
It’s time to introduce you to the Angular Router and get our menu routing to the various components in our application.:
[ ] Let’s create a component for friends and messages
ng g c messagesng g c friends
[ ] Now we’ll create a routing file
app.routing.ts to setup all our routes in the app
import { ModuleWithProviders } from '@angular/core';import { Routes, RouterModule } from '@angular/router';import { FeedComponent } from './feed';import { FriendsComponent } from './friends';import { MessagesComponent } from './messages';const appRoutes: Routes = [ { path: '', redirectTo: '/feed', pathMatch: 'full'}, { path: 'feed', component: FeedComponent }, { path: 'friends', component: FriendsComponent }, { path: 'messages', component: MessagesComponent },];export const appRoutingProviders: any[] = [];export const routing : ModuleWithProviders = RouterModule.forRoot(appRoutes);
[ ] Tell our module about this new routes in
app.module.ts
import { routing, appRoutingProviders } from './app.routing';providers: [ UserService, FeedService, appRoutingProviders ],imports: [ BrowserModule, FormsModule, routing ],
[ ] Then update our app component html in
app.component.html
<app-menu></app-menu><router-outlet></router-outlet>
[ ] Then we’ll need to update our menu to link to these new routes in
menu.componont.html
<a class="item" routerLink="/feed" routerLinkActive="active"> Home</a><a class="item" routerLink="/messages" routerLinkActive="active"> Messages</a><a class="item" routerLink="/friends" routerLinkActive="active"> Friends</a>
[ ] Run application and celebrate
[ ] Tomorrow we’ll deep-route our User names]]>
Today we inject services into services, refactor business logic out of components, and introduce you to Plain Old TypeScript Objects.
It’s a major tech debt paydown! And we consolidate everything you’ve learned so:
[ ] Introduce a new tweet class
ng g class tweet
[ ] First we’ll create our tweet properties
export class Tweet { public avatar; constructor(public body : string, public author: string, public date: Date, public retweets: Array<string>, public favorites: Array<string>) { this.avatar = `${author}.jpg`; }}
[ ] Then refactor our tweet array'] ), ]
[ ] Create a new feed service
ng g s feed
[ ] Then we’ll move all our logic out of our feed component and into feed service (injecting our
UserService into our
FeedService
private'] ), ] constructor(private userService : UserService) { } getCurrentFeed() : Array<Tweet> { return this.tweets; } private isUserInCollection(collection : string[], userId : string) : boolean { return collection.indexOf(userId) != -1; } postNewTweet(tweetText : string) { this.tweets.unshift( new Tweet(tweetText, this.userService.getCurrentUser(), new Date(), [], []) ); } reTweet(tweet : Tweet) { if (!this.isUserInCollection(tweet.retweets, this.userService.getCurrentUser())) { tweet.retweets.push(this.userService.getCurrentUser()); } } favoriteTweet(tweet : Tweet) { if (!this.isUserInCollection(tweet.favorites, this.userService.getCurrentUser())) { tweet.favorites.push(this.userService.getCurrentUser()); } }
[ ] Let’s inject our feed service into our Feed component
import { FeedService } from '../feed.service';import { Tweet } from '../tweet';tweets = [];constructor(private userService : UserService, private feedService : FeedService) { }ngOnInit() {this.tweets = this.feedService.getCurrentFeed();}
[ ] Implement our logic
OnFavorite(tweet) { this.feedService.favoriteTweet(tweet);}OnRetweet(tweet) { this.feedService.reTweet(tweet);}OnNewTweet() { console.log(this.tweetText); this.feedService.postNewTweet(this.tweetText); this.tweetText = '';}
[ ] Update our view
liked: tweet.hasFavorited(userService.getCurrentUser())retweeted: tweet.hasRetweeted(userService.getCurrentUser())
[ ] Enhance our Tweet class
hasFavorited(userId : string) : boolean { return this.favorites.indexOf(userId) != -1;}hasRetweeted(userId : string) : boolean { return this.retweets.indexOf(userId) != -1;}
[ ] Run application to fan hysteria.]]>
The technical debt has been piling up fast in our FeedComponent. It’s time to refactor that business logic out into services - and today on the show we’ll teach you how to get started.
(Spoiler: If you’ve done any Spring or Guice or Java EE6+ - this will be a piece of cake!):
[ ] If you’ve used Spring or @Inject, this is all super familiar!
[ ] First we’ll create a user service
ng g s User
[ ] Note the
@Injectable
[ ] Then we’ll implement a current user method
getCurrentUser() : string { return 'Glen'; }
[ ] Let’s write a test for that service (using Jasmin)
expect(service.getCurrentUser()).toBe('Glenz');
[ ] Run the test
ng test --build=false --watch=false
[ ] Fix the test
[ ] Inject the UserService into the FeedComponent
import { UserService } from '../user.service';providers: [UserService],constructor(private userService : UserService) { }
[ ] Replace all our backing component refs to ‘Glen’
[ ] Replace all our view component refs to ‘Glen’
[ ] Move the provides to the ng Module level
[ ] Talk about implications of exporting ngModule services
[ ] Cue Applause. Tomorrow we’ll tackle the FeedService]]>
It’s finally here - the day we add a tweet input element and can actually post things to our timeline.
Let’s take a moment to brag: use the element markup
<form class="ui form"> <div class="field"> <label>What's on your mind?</label> <textarea name='body' placeholder="Penny for your thoughts" type="text"></textarea> </div> <button class="ui button primary" type="button">Tweet</button></form>
[ ] Then we’ll make a local variable that gives you access to the element in the template
#mytext(click)='OnNewTweet(mytext)'
[ ] We’ll need a method to handle the new tweet
OnNewTweet(myTweet) { console.log(myTweet.value);}
[ ] Need to add the forms module to the
app.module.ts
import { FormsModule } from '@angular/forms';imports: [ BrowserModule, FormsModule ],
[ ] Binding to a backing property MVVM
tweetText = '';
[ ] Add the ngModel directive - put the banana in the box
[(ngModel)]='tweetText'
[ ] Change the OnNewTweet to take no args
OnNewTweet() { this.tweets.unshift( { body: this.tweetText, author: 'Glen', avatar: 'glen.jpg', date: new Date(), retweets: [], favorites: [] } ); this.tweetText = '';}
[ ] Live the dream!]]>
Today on the show we investigate Angular 2 [property] syntax - and also explore the wonders of per-component styling.:
[ ] You can hack DOM properties directly with [prop] syntax (not the quotes inside the quotes)
[style.color]='"red"'
[ ] You can also do per-component styling that won’t clash with the rest of your layout:
.like { color: red !important;} .retweet { color: red !important;}
[ ] But we want that to be conditional, so we’ll change the classes
.liked { color: red !important;} .retweeted { color: red !important;}
[ ] Be good to do this conditionally using
ngClass directive
[ngClass]='{ retweeted: isUserInCollection(tweet.retweets, "Glen") }'[ngClass]='{ retweeted: isUserInCollection(tweet.retweets, "Glen") }'
[ ] Cheer our awesomeness. We’ve learned about (events) and [properties]. Tomorrow, the textbox cometh!]]>
Today we investigate the wonders of (click) events and catching them in our backing component. We’ll be tweeted and favoriting up a storm.:
[ ] First we’re going to implement our Favorite and Retweet buttons using (click) Events
(click)='OnFavorite(tweet)'(click)='OnRetweet(tweet)'
[ ] Implement the click handlers too
OnFavorite(tweet) { tweet.favorites.push('Glen');}OnRetweet(tweet) { tweet.retweets.push('Glen');}
[ ] Stop double-adds? Be great to do it “properly” with a Tweet object method
isUserInCollection(collection : string[], userId : string) : boolean { return collection.indexOf(userId) != -1;}OnFavorite(tweet) { if (!this.isUserInCollection(tweet.favorites, 'Glen')) { tweet.favorites.push('Glen'); }}OnRetweet(tweet) { if (!this.isUserInCollection(tweet.retweets, 'Glen')) { tweet.retweets.push('Glen'); }}
[ ] Demonstrate only adding yourself once]]>
In today’s episode, we finally implement a nice looking timeline! Along the way we learn about ngFor, ngIf, and the Date Pipe.
Here’s some cool resources to checkout from today’s show:…
[ ] Copy our avatars into
public/avatars
[ ] First of all add an Array of Objects/Maps to
timeline.component.ts to hold our tweets:
tweets = [ { body: 'Some tweet text', author: 'Glen' }, { body: 'Some other text', author: 'Karen' },]
[ ] Put in our layout code from Semantic UI into
timeline.component.html
<div class="ui comments"> <div class="comment" * <a class="avatar"> <img src="/avatars/{{tweet.avatar}}"> </a> <div class="content"> <a class="author">{{tweet.author}}</a> <div class="metadata"> <span class="date"> {{tweet.date }}</span> </div> <div class="text"> {{tweet.body}} </div> <div class="actions"> <a class="reply">Reply</a> <a class="like"> <i class="like icon"></i> {{tweet.favorites.length}} Favourites </a> <a class="retweet"> <i class="retweet icon"></i> {{tweet.retweets.length}} Retweets </a> </div> </div> </div></div>
[ ] Then fill it out with some real data
tweets = [ { body: 'Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.', author: 'Glen', avatar: 'glen.jpg', date: new Date(), retweets: [ 'Joe'], favorites: [] }, { body: 'Measuring programming progress by lines of code is like measuring aircraft building progress by weight', author: 'Joe', avatar: 'joe.jpg', date: new Date(), retweets: [], favorites: ['Mary'] }, { body: 'Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.', author: 'Mary', avatar: 'mary.jpg', date: new Date(), retweets: ['Glen'], favorites: ['Mary'] }, { body: 'People think that computer science is the art of geniuses but the actual reality is the opposite, just many people doing things that build on each other, like a wall of mini stones', author: 'Glen', avatar: 'glen.jpg', date: new Date(), retweets: [ 'Joe', 'Mary'], favorites: [] }, { body: 'You can’t have great software without a great team, and most software teams behave like dysfunctional families.', author: 'Joe', avatar: 'joe.jpg', date: new Date(), retweets: [], favorites: ['Mary', 'Glen'] }, ]
[ ] Put in our conditional show of tweets or “No tweets today” using
*ngIf
<div class="ui comments" *</div><div * <h2>No tweets today.</h2></div>
[ ] Put in our pipe for the date to get a nicer layout
{{tweet.date | date:'h:mm:ss dd/mm/yy' }}
[ ] Deploy to crowd applause]]>
In today’s episode, we’re moving from RC4 to RC5. RC5 is reported to be the last of the breaking changes before Go-Live, so it’s worth upgrading.
Here’s some cool resources to checkout for further deep diving on the RC5 migration:…
[ ] Update
package.json with rc5 links (Ctrl-D is your friend)
[ ] Do an
npm install to fetch the new packages
[ ] Run
ng serve to prove the backward compatible action is working
[ ] Add
app.module.ts in root.
import { NgModule } from '@angular/core';import { BrowserModule } from '@angular/platform-browser';import { AppComponent } from './app.component';@NgModule({ declarations: [ AppComponent ], providers: [ ], imports: [ BrowserModule ], bootstrap: [ AppComponent ]})export class AppModule { }
[ ] Updated your bootstrap in
main.ts
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic'; import { AppModule } from './app/app.module';platformBrowserDynamic().bootstrapModule(AppModule);
[ ] Move your directives into declarations
import { MenuComponent } from './menu';import { FeedComponent } from './feed';@NgModule({ declarations: [ AppComponent, FeedComponent, MenuComponent ],
[ ] Remove all your providers and declarations in
app.componont.ts
In today’s episode, we’re moving from “page thinking” to “component thinking” as we explore nesting Angular components within one another.
It’s also time to start getting our Twitter vibe happening using some magic stuff from Semantic U…
[ ] Twitter - in component terms rather than page terms
[ ]
ng generate component menu
[ ] Add the
<app-menu></app-menu> markup
[ ]
import { MenuComponent } from './menu';
[ ]
directives: [ MenuComponent ]
[ ] Rinse and repeat for Feed
[ ] Add layout magic from Semantic Menus
Here are the CDN links for Semantic UI that I used:
]]>]]>
<link rel="stylesheet" href=" ><script src=" src="
Day 2 is now live and we hack some properties on our first component (as well as learn how Angular’s component model hangs together).
Three big ideas on today’s show:
Lots of groundwork laid today for the next episode where we hack our first component tree.
I’m also updating a YouTube Playlist of all the episodes as they come out.
Cheat sheets below if you’re interested in following along at home.
[ ] We now have a Git Repo which I will tag day1, day2, etc
[ ] Review we introduced angular-cli
ng new &
ng serve
[ ] Talk through the directories
[ ] SystemJS - Bootstrapping our application
[ ] Main.ts bootstraps our App for the browser | http://feeds.feedburner.com/glensmith | CC-MAIN-2022-21 | refinedweb | 12,596 | 56.45 |
sentry-jira 0
Add sentry-jira to your INSTALLED_APPS in your sentry.conf.py:
from sentry.conf.server import * INSTALLED_APPS += ( 'sentry_jira', )
Configuration
Go to your project’s configuration page (Projects -> [Project]) and select the JIRA tab. Enter the JIRA credentials and Project configuration and save changes. Filling out the form is a two step process (one to fill in data, one to select project).
- Downloads (All Versions):
- 78 downloads in the last day
- 348 downloads in the last week
- 1817 downloads in the last month
- Author: Adam Thurlow
- License: BSD
- Categories
- Package Index Owner: dcramer, thurloat
- Package Index Maintainer: dcramer
- DOAP record: sentry-jira-0.5.xml | https://pypi.python.org/pypi/sentry-jira/0.5 | CC-MAIN-2016-07 | refinedweb | 107 | 55.24 |
[UNIX] OpenBSD File Descriptor Vulnerability (Additional Details)From: support@securiteam.com
Date: 05/19/02
- Previous message: support@securiteam.com: "[NEWS] SonicWALL SOHO Content Blocking Script Injection and Logfile DoS"
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] [ attachment ]
From: support@securiteam.com To: list@securiteam.com Date: Sun, 19 May 2002 07:05 File Descriptor Vulnerability (Additional Details)
------------------------------------------------------------------------
SUMMARY
On current OpenBSD systems, any local user (whether or not they are in the
wheel group) can fill the kernel file descriptors table, leading to a
denial of service condition. Furthermore, by abusing a flaw in the way the
kernel checks closed file descriptors 0-2 (when running a setuid program),
it is possible to combine this bug and gain root access (by exploiting a
race condition).
DETAILS
Vulnerable systems:
OpenBSD 3.1, 3.0 and 2.9 (Unpatched)
Immune systems:
OpenBSD 3.1, 3.0 and 2.9 (Patched).
Local Root Exploit
Three weeks ago, Joost Pol is comments of the code, but not fixed).
exploit a race condition with respect to the system file descriptors
table:
1) Fill the kernel file descriptors table (see the "local DoS"
explanation).
2) Execute a setuid program program execution will fail. However, we found that, by
tuning a simple "for" loop, the good timing is quite easy to meet.
Solution:
The "root exploit" problem was fixed on the CVS a week ago, a few hours
after it was reported..
Exploit:
We have been able to exploit this vulnerability successfully on OpenBSD
3.0, and become succeeded.
/* fd_openbsd.c
(c) 2002 FozZy <fozzy@dmpfrance.com>
Local root exploit for OpenBSD up to 3.1. Do not distribute.
Research material from Hackademy and Hackerz Voice Newspaper
()
For educational and security audit purposes only. Try this on your *own*
system.
No warranty of any kind, this program may damage your system and your
brain.
Script-kiddies, you will have to modify one or two things to make it
work.
Usage:
gcc -o fd fd_openbsd.c
./fd
su -a skey
*/
#include <unistd.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <errno.h>
#include <fcntl.h>
#define SUID_NAME "/usr/bin/skeyaudit"
#define SKEY_DATA "\nr00t md5 0099 qwerty 545a54dde8d3ebd3 Apr 30,2002
22:47:00\n";
extern int errno;
int main(int argc, char **argv) {
char *argvsuid[3];
int i, n;
int fildes[2];
struct rlimit *rlp;
rlp = (struct rlimit *) malloc((size_t) sizeof(rlp));
if (getrlimit(RLIMIT_NOFILE, rlp))
perror("getrlimit");
rlp->rlim_cur = rlp->rlim_max; /* we want to allocate a maximum number
of fd in each process */
if (setrlimit(RLIMIT_NOFILE, rlp))
perror("setrlimit");
n=0;
open(SUID_NAME, O_RDONLY, 0);/* is it useful ? allocate this file in the
kernel fd table, for execve to succeed later*/
while (n==0) {
for (i=4; i<=rlp->rlim_cur; i++) /* we start from 4 to avoid freeing
the SUID_NAME buffer, assuming its fd is 3 */
close(i);
i=0;
while(pipe(fildes)==0) /* pipes are the best way to allocate unique
file descriptors quickly */
i++;
printf("Error number %d : %s\n", errno, (errno==ENFILE) ? "System file
table full":"Too many descriptors active for this process");
if (errno==ENFILE) { /* System file table full */
n = open("/bin/pax", O_RDONLY, 0); /* To be sure we don't miss one
fd, since a pipe allocates 2 fds or 0 if failure */
fprintf(stderr, "Let's exec the suid binary...\n");
fflush(stderr);
if ((n=fork())==-1) {
perror("last fork failed");
exit(1);
}
if (n==0) {
for (i=3; i<=rlp->rlim_cur; i++)
close(i); /* close all fd, we don't need to fill the fd table of the
process */
argvsuid[0]=SKEY_DATA; /* we put the data to be printed on stderr as
the name of the program */
argvsuid[1]="-i"; /* to make skeyaudit fail with an error */
argvsuid[2]=NULL;
close(2); /* let the process exec'ed have stderr as the *first*
fd free */
execve(SUID_NAME, argvsuid, NULL);
perror("execve");
exit(1);
}
else {
for (i=0; i<2000000; i++) /* Timing is crucial : tune this to your own
system */
;
for (i=4; i<=100; i++) /* free some fd for the suid file to execute
normally (ld.so, etc.) */
close(i);
sleep(5);
for (i=3; i<=rlp->rlim_cur; i++)
close(i);
exit(0);
}
}
else { /* process table full, let's fork to allocate more fds */
if ((n=fork()) == -1) {
perror("fork failed");
exit(1);
}
}
}
printf("Number of pipes opened by parent: %d\n",i);
sleep(5);
for (i=3; i<=rlp->rlim_cur; i++)
close(i);
fprintf(stderr,"Exiting...\n");
exit(0);
}
ADDITIONAL INFORMATION
The information has been provided by <mailto:fozzy@dmpfrance.com> FozZy.
========================================] SonicWALL SOHO Content Blocking Script Injection and Logfile DoS"
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] [ attachment ]
Relevant Pages
- 2.4.22-pre7: are security issues solved?
... Red Hat has released a new kernel today, that fixes several security issues.
... > executing a setuid program. ... send the line "unsubscribe linux-kernel"
in ... (Linux-Kernel)
- OpenBSD local DoS and root exploit
... group) can fill the kernel file descriptors table, ... Because of a flaw in
the way the kernel checks closed file ... The "root exploit" problem was fixed on
the CVS a week ago, ... exec'ing a setuid program can make this program open files under
these fds, ... (Bugtraq)
- Re: VFS: file-max limit reached when running on a virtual machine
... running on a qemu virtual machine configured with 512 MB of memory. ... >
I understand that changes have been made recently to the way the kernel manages ... >
file descriptors in order to improve real-time performance. ... (Linux-Kernel)
- Re: [PATCH/RFC 2.6.21 3/5] ehca: completion queue: remove use of do_mmap()
... Owner tracking by pid is really dangerous. ... File descriptors can be
... This has a historic reason as we ... have needed to support fork, systemetc
for kernel 2.6.9, ... (Linux-Kernel)
- Re: Accessing Terminal Information in the Kernel
... > I'm doing some kernel development and I'd like to be able to instrument ...
all file descriptors were closed, and it remains now only as the ... to access your controling
tty. ... > Does anyone have any pointers as to how to get that information while
... (comp.os.linux) | http://www.derkeiler.com/Mailing-Lists/Securiteam/2002-05/0080.html | crawl-001 | refinedweb | 1,018 | 57.47 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.