ACK

On Mon, Jan 27, 2014 at 10:53:09AM +0000, Daniel P. Berrange wrote:
> Libvirt uses gnulib for making winsock look like POSIX
> sockets. This means that in the libvirt event handle
> callbacks the application will be given a file descriptor
> rather than a winsock HANDLE object. The g_io_channel_unix_new
> method will detect that it is an FD and delegate to the
> g_io_channel_win32_new_fd method. Unfortunately the glib Win32
> event loop impl is not very good at dealing with FD objects,
> simulating poll() by doing a read() on the FD :-(
>
> The API docs for g_io_channel_win32_new_fd say
>
>   "All reads from the file descriptor should be done by
>    this internal GLib thread. Your code should call only
>    g_io_channel_read()."
>
> This isn't going to fly for libvirt, since it has zero
> knowledge of glib at all, so is just doing normal read().
>
> Fortunately we can work around this problem by turning
> the FD we get from libvirt back into a HANDLE using the
> _get_osfhandle() method.
>
> Signed-off-by: Daniel P. Berrange <berrange redhat com>
> ---
>  libvirt-glib/libvirt-glib-event.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/libvirt-glib/libvirt-glib-event.c b/libvirt-glib/libvirt-glib-event.c
> index 87019b5..1e1ffec 100644
> --- a/libvirt-glib/libvirt-glib-event.c
> +++ b/libvirt-glib/libvirt-glib-event.c
> @@ -31,6 +31,10 @@
>
>  #include "libvirt-glib/libvirt-glib.h"
>
> +#ifdef G_OS_WIN32
> +#include <io.h>
> +#endif
> +
>  /**
>   * SECTION:libvirt-glib-event
>   * @short_description: Integrate libvirt with the GMain event framework
> @@ -164,7 +168,11 @@ gvir_event_handle_add(int fd,
>      data->events = events;
>      data->cb = cb;
>      data->opaque = opaque;
> +#ifdef G_OS_WIN32
> +    data->channel = g_io_channel_win32_new_socket(_get_osfhandle(fd));
> +#else
>      data->channel = g_io_channel_unix_new(fd);
> +#endif
>      data->ff = ff;
>
>      g_debug("Add handle %p %d %d %d %p\n", data, data->watch, data->fd, events, data->opaque);
> --
> 1.8.4.2
>
> --
> libvir-list mailing list
> libvir-list redhat com

Attachment: pgpqwxg7W4lbd.pgp
Description: PGP signature
https://www.redhat.com/archives/libvir-list/2014-January/msg01367.html
Originally posted on my website on February 13th 2020

How to create a custom WordPress REST API route.

The WordPress REST API has many built-in endpoints for you to access. But sometimes you may need something a little more specific, or need access to a resource that is not supported out of the box. In these cases you can create your own custom endpoint. In this example we create a (pretty useless) endpoint for fetching post meta values.

With the snippet above we add an action to the rest_api_init hook, and register a new callback function called mytheme_get_meta_value. In the mytheme_get_meta_value function we make a call to the register_rest_route function and pass three parameters:

- $namespace: A unique namespace that groups your custom endpoints. This will be the first URL segment after the core prefix.
- $route: The actual route. Usually a resource name with optional parameters.
- $args: An array of arguments.

For the $route parameter we pass a string including two parameter expressions like (?P<post>[\d]+). This tells WordPress that we are expecting a parameter by the name of "post" with a value that must match the regular expression [\d]+, meaning an integer/id. Second we expect a parameter by the name of "key" that must match the regular expression [a-zA-Z0-9_-]+, meaning a string of letters, numbers, underscores and dashes.

Note: You don't have to use URL parameters in this way. You could also pass GET parameters (?post=9&key=some_value) or POST parameters.

For the $args parameter we pass an array with two key/value pairs:

- methods: The request method, e.g. GET, POST, or a WP_REST_Server constant like WP_REST_Server::READABLE = 'GET', WP_REST_Server::EDITABLE = 'POST, PUT, PATCH', WP_REST_Server::DELETABLE = 'DELETE', WP_REST_Server::ALLMETHODS = 'GET, POST, PUT, PATCH, DELETE'.
- callback: The callback function that will be called when the route is requested.
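A sketch of the registration and callback described in this section might look like the following (treat it as an approximation of the article's snippets, not the author's exact code):

```php
<?php
// Hook into rest_api_init and register the custom route described above.
add_action( 'rest_api_init', 'mytheme_get_meta_value' );

function mytheme_get_meta_value() {
    register_rest_route(
        'mytheme/v1', // $namespace: first URL segment after the core /wp-json/ prefix
        '/meta/(?P<post>[\d]+)/(?P<key>[a-zA-Z0-9_-]+)', // $route with two parameter expressions
        array(
            'methods'  => WP_REST_Server::READABLE, // GET
            'callback' => 'mytheme_handle_get_meta_value',
        )
    );
}

// Route callback: look up the requested meta value and return it, or a WP_Error.
function mytheme_handle_get_meta_value( $request ) {
    $value = get_post_meta( $request['post'], $request['key'], true );

    if ( '' === $value ) {
        return new WP_Error( 'no_meta_value', 'No meta value found.', array( 'status' => 404 ) );
    }

    return rest_ensure_response( array( 'meta' => $value ) );
}
```

With something like this in a theme's functions.php, a GET request to /wp-json/mytheme/v1/meta/9/_thumbnail_id would respond with a JSON object carrying a meta key.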
Note: You could also pass an array of arrays where each inner array has a different method and callback function. This is useful when you need a different callback function for GET and POST methods.

Endpoint callback function

In the code above we passed mytheme_handle_get_meta_value as our route callback function. Let's create that function now. In this callback function we first extract the post and key parameters from the passed-in $request array. These values come from the URL parameters we set for the $route parameter in the register_rest_route function earlier. We then use these values to retrieve the meta value using the get_post_meta function. If the value cannot be found we return a WP_Error object explaining what went wrong. If we did find a value we use the rest_ensure_response function to return it. The rest_ensure_response function expects an array, so we create one with a key of meta and the value we need to return. We use the meta key in our JavaScript later to retrieve the value.

Calling the endpoint with JavaScript

Now that our endpoint is set up, we can try and call it using JavaScript. Here we use the $.ajax function to make a GET request to '/wp-json/mytheme/v1/meta/9/_thumbnail_id', where 9 is a post id and _thumbnail_id is the meta key we want to retrieve. On success we read the meta key from the returned data object to get our meta value and log it to the console. This meta key comes from the array we passed to rest_ensure_response earlier. If we get an error we use error.responseJSON.message to get the error message and log that to the console.

Further reading

This is just a simple/basic example and we didn't handle things like securing the endpoint, which I highly recommend you do when dealing with delicate data. You can read up on security and more in the REST API handbook under Adding custom endpoints.

Found this post helpful?
Follow me on twitter @Vanaf1979 or here on Dev.to @Vanaf1979 to be notified about new articles, and other WordPress development related resources. Thanks for reading
https://dev.to/vanaf1979/wp-snippet-005-simple-custom-rest-api-route-330k
Getting started with an Angular/NX workspace backed by an AWS Amplify GraphQL API - Part 1

Michael Gustmann

We are going to create a Todo app using Angular with the help of

- Nx to help with a multi-app, multi-team project style
- AWS Amplify CLI to create a fully featured backend, completely managed in code (infrastructure as code)
- AWS AppSync JavaScript SDK An AppSync client using Apollo and featuring offline and persistence capabilities
- AWS Amplify Angular Angular components and an Angular provider which helps integrate with the AWS-Amplify library

We are creating an offline-ready Todo app backed by a GraphQL API and authentication. We can expand on this app by adding further categories and features. But let's start simple. You can take a look at the code in this repo.

Prerequisites

I'm assuming that you have NodeJS version 8 or higher installed and that you have access to an AWS account. The free tier is very generous during the first year and will probably cost you nothing.

Create a workspace

NX is a schematic on top of the Angular CLI that gives us tools to set up a workspace where it is easy to share code between apps and to create boundaries that help when working within a team, or even multiple teams. By separating logic out into modules, these can be reused easily by just importing them like other npm packages. We don't need to publish any of those packages to a public or private repository. This saves us some headaches about running tests across different versions of each library and provides a snapshot in our repo of all the parts working together. With the AWS Amplify CLI we can even connect one or more cloud stacks to our workspace, so that we get 'infrastructure as code' for our backend going along with our frontend apps. It's not in the sense of having completely different products all put into the same repo.
Rather thinking in terms of having a customer web- and separate mobile app, an admin portal, an electron desktop app for the support team, a manager reports app and so on... They all work with the same data in a way and would immensely profit from shared SDKs and public interfaces across libraries. Even if you end up using just one app, where you include different domains or Lines of Businesses (LOB), it helps to structure the code base so that teams can work at the same time without worrying too much about the painful integration into production later. We start by creating a workspace folder by executing the following command using npx, so that we don't have to globally install it npx @nrwl/schematics amplify-graphql-angular-nx-todo When asked, we answer some questions ? Which stylesheet format would you like to use? CSS ? What is the npm scope you would like to use for your Nx Workspace? my ? What to create in the new workspace (You can create other applications and libraries at any point using 'ng g') empty We named our workspace 'my' and it should be usually named after your organization or project, ie. 'myorg'. We create an empty workspace, so that we can name our first application however we want. Otherwise it would get the name 'amplify-graphql-angular-nx-todo'. Create an app We decide to call our first app 'todo-app' (and it will be the only one for this post). ng g app todo-app ? What framework would you like to use for the application? Angular ? In which directory should the application be generated? ? Which stylesheet format would you like to use? CSS ? Which Unit Test Runner would you like to use for the application? Jest ? Which E2E Test Runner would you like to use for the application? Cypress ? Which tags would you like to add to the application? (used for linting) app This created two apps - todo-app - todo-app-e2e The second app is an end-to-end test. 
The app is our actual app and we will try to use it only for orchestrating other modules as much as possible. Most of our code we will put in libs, to reuse it in possible further apps in the future.

Create libs

Where do we put our code then? NX makes it easy to create new libs! We compose our app out of possibly many libs.

- We need a lib for our backend stuff, let's call this lib appsync
- We put our UI components into the lib todo-ui

So let's run the following commands to generate our desired libs:

ng g lib appsync --tags=shared --framework=angular --unitTestRunner=jest --style=css
ng g lib todo-ui --tags=ui --framework=angular --unitTestRunner=jest --style=css

Create the AWS backend

The AWS Amplify CLI makes it very easy to create a serverless backend in the AWS cloud. Everything is defined in code and can be checked into our source control. It eliminates the need to write lengthy CloudFormation templates that describe our cloud resources. A few simple commands will spin up entire cloud stacks using team resources or create one-off developer sandboxes to try out new features. Install the Amplify CLI globally (if you haven't done so previously) by running these commands:

npm install -g @aws-amplify/cli
amplify configure

If you are completely new to AWS Amplify, you should check out the docs on how to configure the Amplify CLI. Here we actually install the AWS Amplify CLI globally instead of using npx! We will use the amplify command quite often.

Install the Amplify JavaScript dependencies:

npm i aws-amplify aws-amplify-angular

Adjusting the Angular app to Amplify's SDK

There are a few things we need to change in order for us to use the AWS Amplify SDK in an Angular environment:

- Tell TypeScript about the window.global variable
- Add 'node' to tsconfig
- Write a script to rename aws-exports.js to aws-exports.ts.
Open apps/todo-app/src/polyfills.ts and add the following line at the top of the file:

(window as any).global = window;

The node package needs to be included in apps/todo-app/tsconfig.app.json:

{ "compilerOptions": { "types": ["node"] } }

AWS Amplify comes with a UI library. We might just as well reuse the styles from those components and build upon them. Add the following lines to apps/todo-app/src/styles.css:

/* You can add global styles to this file, and also import other style files */
@import '~aws-amplify-angular/theme.css';

We connect our project or branch to a cloud resource stack by running

amplify init

Using NX we need to change quite a few defaults:

? Enter a name for the project amplifynx
? Enter a name for the environment master
? Choose your default editor: Visual Studio Code
? Choose the type of app that you're building javascript
Please tell us about your project
? What javascript framework are you using angular
? Source Directory Path: libs/appsync/src/l
? Distribution Directory Path: dist/apps/todo-app
? Build Command: npm run-script build
? Start Command: npm run-script start
Using default provider awscloudformation
? Do you want to use an AWS profile? Yes
? Please choose the profile you want to use my.aws.profile
⠇ Initializing project in the cloud...

We named our cloud environment master, corresponding to our current git branch master. We can easily add further branches, like develop, and create equally named environments (stacks) in the cloud. We set the Source Directory Path to our previously created lib named appsync so that the automatically generated file aws-exports.js will land in this folder by default. This way we can share our cloud config with different apps by just importing this lib. The Distribution Directory Path points to the dist output of a single app, which is only important if we use the amplify CLI commands to build or serve the app or if we connect our app to Amplify Console.
The latter makes it very easy to create a CI/CD Pipeline by just connecting a GIT branch to a deployment. You will notice a new top-level folder named amplify. Most of the files created in this folder were added to .gitignore. The rest of the files can and should be added to version control and can be shared with your team. If you plan to make your repo public you should also add amplify/team-provider-info.json to .gitignore. Let's add our first category auth! amplify add auth I like to change the default configuration to soften the password policy and provide a custom email subject text, but you can choose the defaults if you like and later change any of those settings by running amplify auth update To create these auth resources in the cloud run amplify push You will see Current Environment: master | Category | Resource name | Operation | Provider plugin | | -------- | ------------- | --------- | ----------------- | | Auth | cognitonxtodo | Create | awscloudformation | ? Are you sure you want to continue? (Y/n) And after a while you will find the newly created Cognito user pool when you log in to your AWS console. This command will take you there: amplify console auth Rename aws-exports.js The amplify push command should have generated a file: libs/appsync/src/lib/aws-exports.js, which we should rename to aws-exports.ts to easily import the config in other typescript files. To automate this we use the scripts section in package.json. Now each time we either serve or build our app the file is renamed. { "start": "npm run rename:aws:config || ng serve; ng serve", "build": "npm run rename:aws:config || ng build --prod; ng build --prod", "rename:aws:config": "cd libs/appsync/src/lib && [ -f aws-exports.js ] && mv aws-exports.js aws-exports.ts || true" } We should also add this renamed file to .gitignore, so change aws-export.js to aws-exports.*s. 
Adding a GraphQL API

If you have ever written a GraphQL API before you will appreciate the simplicity of creating one by just typing a few lines:

amplify add api

? Please select from one of the below mentioned services GraphQL
? Provide API name: amplifynx
? Choose an authorization type for the API Amazon Cognito User Pool
Use a Cognito user pool configured as a part of this project
? Do you have an annotated GraphQL schema? No
? Do you want a guided schema creation? Yes
? What best describes your project: Single object with fields (e.g., "Todo" with ID, name, description)
? Do you want to edit the schema now? Yes

This will open your previously configured editor with a sample Todo schema:

type Todo @model {
  id: ID!
  name: String!
  description: String
}

We don't want the description, but would like the completed status of our todos to be tracked. At first, to not worry too much about offline support and sorting in the backend, we can add a createdOnClientAt timestamp to sort the todos on the client. AppSync adds createdAt and updatedAt timestamps by default, which we could use in the schema, but they will mess with our todo list when we queue up several todos while we are offline and reconnect. To explain this a little: Say we rely only on the createdAt property to sort, and we add todo-1 and todo-2 while offline. These get added to our outbox. They are sorted correctly, because we add a timestamp when we create each of those todos. Once we are online, todo-1 will be sent to the server and will receive a new createdAt timestamp there, since from the server's point of view that is when it was created. This timestamp is at a later point in time than todo-2's client timestamp. So todo-1 will jump to position 2 until todo-2 is transmitted. After todo-2 is sent to the server, getting an even newer timestamp, it moves back to position 2 and todo-1 returns to position 1. We see a short reordering happen in our client. The createdOnClientAt property, in contrast, is not touched on the server.
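The fix described above is simply to sort by the client-generated timestamp instead of the server-generated one. A small self-contained sketch (illustrative data, not part of the tutorial's code):

```typescript
// Why we sort by the client timestamp: server-assigned createdAt values
// reflect upload order, not creation order, after an offline period.
interface Todo {
  name: string;
  createdOnClientAt: string; // set on the device, never changed by the server
  createdAt: string;         // assigned by AppSync when the mutation lands
}

// todo-1 was created first while offline, but both were uploaded later,
// so the server's createdAt order does not match the creation order here.
const todos: Todo[] = [
  { name: 'todo-2', createdOnClientAt: '2019-05-01T10:01:00Z', createdAt: '2019-05-01T10:05:00Z' },
  { name: 'todo-1', createdOnClientAt: '2019-05-01T10:00:00Z', createdAt: '2019-05-01T10:06:00Z' },
];

// Sorting by the client timestamp keeps the list stable across reconnects.
const sorted = [...todos].sort((a, b) =>
  a.createdOnClientAt.localeCompare(b.createdOnClientAt)
);

console.log(sorted.map(t => t.name)); // [ 'todo-1', 'todo-2' ]
```

ISO 8601 timestamps in the same timezone sort correctly as plain strings, which is why localeCompare is enough here.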
Change the schema to this and save:

type Todo @model @auth(rules: [{ allow: owner }]) {
  id: ID!
  name: String!
  completed: Boolean!
  createdOnClientAt: AWSDateTime!
}

We added @auth(rules: [{allow: owner}]) to the Todo type to only show the todos of each logged-in user. This will automatically add an owner field to each todo in the underlying todo table in DynamoDB.

What happened behind the scenes? When you look into the amplify/backend/api/{angularnx}/build directory, you will find an opinionated and enhanced GraphQL schema by the AWS Amplify team. It generated all types of CRUD operations and input objects with filtering and pagination automatically! By just writing six lines of annotated graphql we get 116 lines of graphql best practices out of the box! That's just dandy!

Upload the API

When we decide to push our API for the first time we get asked a few questions about generating code and where to place it. Choose angular as the language target and leave the rest as default!

amplify push

Current Environment: master

| Category | Resource name | Operation | Provider plugin |
| -------- | ------------- | --------- | ----------------- |
| Api | amplifynx | Create | awscloudformation |
| Auth | cognitonxtodo | No Change | awscloudformation |

? Are you sure you want to continue? Yes
? Do you want to generate code for your newly created GraphQL API Yes
? Choose the code generation language target angular
? Enter the file name pattern of graphql queries, mutations and subscriptions src/graphql/**/*.graphql
? Enter the file name for the generated code src/graphql/API.service.ts
? Do you want to generate/update all possible GraphQL operations - queries, mutations and subscriptions Yes
⠋ Updating resources in the cloud. This may take a few minutes...

There is a bug when trying to use another directory, like our lib folder, so just leave the default paths src/graphql/*!
Otherwise the service or types will end up in our lib and be pretty much empty, and all *.graphql files will be in the default location.

Amplify Code Generator

We get two choices when we decide to generate code:

- angular
- typescript

1. angular

If we choose angular as the target (which we do in our example), we get a single file with all the TypeScript types and an Angular service. This service uses the Amplify SDK to connect to the API. It's not really useful to us, since we decided to use the AppSync SDK as our client instead to connect to our cloud resources and get offline support and persistence out of the box. We won't use the types or the service directly, but the operations defined in the service can provide a good starting point to write our own queries and mutations. Choosing angular also generates queries.graphql, mutations.graphql, subscriptions.graphql and an introspection schema schema.json. This is very useful, as we will see later on.

2. typescript

This generates *.ts files with all queries, mutations and subscriptions exported as strings. It also generates a file API.ts that defines all the types. I don't like this option as much as the other generators we will explore later. Here is an example generated by the typescript option:

export type CreateTodoMutation = {
  createTodo: {
    __typename: 'Todo';
    id: string;
    name: string;
    completed: boolean;
    createdOnClientAt: string;
  } | null;
};

It's correct to have the nested createTodo property, but to type our Todo objects in code we need to do this:

const newTodo: CreateTodoMutation['createTodo'] = { /*...*/ };

My code editor will not help with the string part and it might introduce subtle errors in complex schemas.

Apollo Code Generator

Another possibility is to use the Apollo client's code generator. I like its generated types.
They are clean, short and the above code would look like this const newTodo: CreateTodo_createTodo = { /*...*/ }; You can use it simply by executing npx apollo-codegen generate src/graphql/*.graphql --schema src/graphql/schema.json --addTypename --target typescript --output libs/appsync/src/lib/api.types.ts Installing it locally and running it gives me an error, because of a version mismatch of the dependency graphql used in aws-appsync. Using npx works great, though and I didn't worry about finding another solution for this. Using yarn might work better... GraphQL Code Generator One very important feature I am looking for in a code generator is the possibility to customize how the code is generated. { GraphQL } code generator is such a tool. It uses plugins to generate code for lots of different languages. You could install it by running npm i graphql-code-generator graphql-codegen-typescript-common graphql-codegen-typescript-client graphql-codegen-fragment-matcher Create a codegen.yml file: overwrite: true schema: src/graphql/schema.json documents: src/graphql/*.graphql generates: ./libs/appsync/src/lib/graphql.ts: plugins: - 'typescript-common' - 'typescript-client' - 'fragment-matcher' and simply run npm run gql-gen This will generate a graphql.ts file in our lib providing all the types organized in namespaces. If we would add another plugin targeting angular we would also get a service for each operation, that we could simply inject in our components. Note, we will NOT be using this tool in our app Choosing the code generator I think using the apollo code generator is probably the middle choice. 
It's easy enough to use when you add it to the package.json's scripts section and hook it up to the amplify push command:

{ "generate": "npx apollo-codegen generate src/graphql/*.graphql --schema src/graphql/schema.json --addTypename --target typescript --output libs/appsync/src/lib/api.types.ts", "push": "amplify push && npm run generate" }

Make sure to use angular (not typescript) as the target in the previous step when configuring amplify's codegen, so that the src/graphql/*.graphql files are generated. The src/graphql folder is only used for code generation and should not be imported anywhere in our app! Since everything in src/graphql is generated, we can also put it in .gitignore together with .graphqlconfig.yml:

src/graphql
.graphqlconfig.yml

Writing our GraphQL operations

Our app needs to

- Fetch our list of todos
- Create a new todo
- Update the completed status of a todo
- Delete a todo

To make our operations reusable in different components or services, we put them in a file libs/appsync/src/lib/gql-operations.ts:

import gql from 'graphql-tag';

export const CREATE_TODO = gql` mutation CreateTodo($input: CreateTodoInput!) { createTodo(input: $input) { __typename id name completed createdOnClientAt } } `;

export const LIST_TODOS = gql` query ListTodos($filter: ModelTodoFilterInput, $nextToken: String) { listTodos(filter: $filter, limit: 999, nextToken: $nextToken) { __typename items { __typename id name completed createdOnClientAt } } } `;

export const UPDATE_TODO = gql` mutation UpdateTodo($input: UpdateTodoInput!) { updateTodo(input: $input) { __typename id name completed createdOnClientAt } } `;

export const DELETE_TODO = gql` mutation DeleteTodo($input: DeleteTodoInput!)
{ deleteTodo(input: $input) { __typename id } } `; To not worry about fetching more todos and simplifying apollo's caching, we hard code the limit in listTodos to 999 Don't forget, that we have auto-generated code in src/graphql/API.service.ts so we can simply copy some of those statement and use them as our starting point. In a more advanced app with a deeply nested schema the generated operations will need to be adjusted on a per component need. From a software architecture's point of view it would be better to create a new library called todo-data or todo-model and place the file there naming it todo-operations.ts. Then we could also add a facade which will be the only thing we would expose to the other libs. This data layer lib could also hold our local state using redux for example. Going this route would make our components simpler. We might expand on this in the future. Setup the AppSync Client Install the client, graphql-tag and optionally localforage npm i aws-appsync graphql-tag localforage It's time to modify libs/appsync/src/lib/appsync.module.ts to instantiate and configure the AppSync client. 
Change the content of the file to this:

import { NgModule } from '@angular/core';
import Amplify, { Auth } from 'aws-amplify';
import { AmplifyAngularModule, AmplifyService } from 'aws-amplify-angular';
import AWSAppSyncClient, { AUTH_TYPE } from 'aws-appsync';
import * as localForage from 'localforage';
import config from './aws-exports';

Amplify.configure(config);

export const AppSyncClient = new AWSAppSyncClient({
  url: config.aws_appsync_graphqlEndpoint,
  region: config.aws_appsync_region,
  auth: {
    type: AUTH_TYPE.AMAZON_COGNITO_USER_POOLS,
    jwtToken: async () => (await Auth.currentSession()).getIdToken().getJwtToken()
  },
  complexObjectsCredentials: () => Auth.currentCredentials(),
  cacheOptions: { addTypename: true },
  offlineConfig: { storage: localForage }
});

@NgModule({
  exports: [AmplifyAngularModule],
  providers: [AmplifyService]
})
export class AppsyncModule {}

We are creating the AWSAppSyncClient and exporting the instance, so we can use it elsewhere. AmplifyService needs to be added to the providers array, since it does not use @Injectable({providedIn: 'root'}). To persist the AppSync store in IndexedDB rather than LocalStorage, we use the library localforage. To hook everything up, don't forget to add AppsyncModule to the imports array of apps/todo-app/src/app/app.module.ts.

Note: We are not using the Apollo-Angular client here. It would give us an injectable service we could use throughout our app and which wraps the underlying zen-observables with rxjs. This is not necessary though, as we will see while building this app in Angular. AWSAppSyncClient uses quite a few libraries under the hood, which makes it difficult to exchange or add some of them. I haven't found a way to use apollo-link-state and apollo-angular-link-http together with AWSAppSyncClient while retaining all TypeScript typings. This means NO Angular HttpClient and features like interceptors when working directly with the AWSAppSyncClient!
Offline helpers The AWS AppSync SDK offers helpers to build mutation objects to use with apollo-client. This makes it quite easy to work with an optimistic UI in bad network conditions and update the local InMemoryCache when changes happen in our app. There are higher level helpers available for react, but when using other libraries, we need to use the lower level function buildMutation. This function needs an apollo-client instance for example, which we can abstract away in our own typescript functions. Since we have generated types for our API we can also use these to give us additional type safety when providing __typename strings. I also find it easier to use a config object instead of using a long parameter list. Here are my TypeScript Offline Mutation Builders which you can put in a file libs/appsync/src/lib/offline.helpers.ts: import { OperationVariables } from 'apollo-client'; import { buildMutation, CacheOperationTypes, CacheUpdatesOptions, VariablesInfo } from 'aws-appsync'; import { DocumentNode } from 'graphql'; import { AppSyncClient } from './appsync.module'; export interface TypeNameMutationType { __typename: string; } export interface GraphqlMutationInput<T, R extends TypeNameMutationType> { /** DocumentNode for the mutation */ mutation: DocumentNode; /** An object with the mutation variables */ variablesInfo: T | VariablesInfo<T>; /** The queries to update in the cache */ cacheUpdateQuery: CacheUpdatesOptions; /** __typename from your schema */ typename: R['__typename']; /** The name of the field with the ID (optional) */ idField?: string; /** Override for the operation type (optional) */ operationType?: CacheOperationTypes; } /** * Builds a MutationOptions object ready to be used by the ApolloClient to automatically update the cache according to the cacheUpdateQuery * parameter */ export function graphqlMutation< T = OperationVariables, R extends TypeNameMutationType = TypeNameMutationType >({ mutation, variablesInfo, cacheUpdateQuery, typename, 
idField, operationType }: GraphqlMutationInput<T, R>) { return buildMutation<T>( AppSyncClient, mutation, variablesInfo, cacheUpdateQuery, typename, idField, operationType ); } export function executeMutation< T = OperationVariables, R extends TypeNameMutationType = TypeNameMutationType >(mutationInput: GraphqlMutationInput<T, R>) { return AppSyncClient.mutate(graphqlMutation<T, R>(mutationInput)); } Note, this might break with future versions of the appsync sdk Export everything from our lib To make it possible to import our types, graphql operations, helpers, appsync-client and amplify from our barrel we need to export them in the libs/appsync/src/index.ts file. export * from './lib/appsync.module'; export * from './lib/api.types'; export * from './lib/gql-operations'; export * from './lib/offline.helpers'; To import them in our components we can simply write import { /* ... */ } from '@my/appsync'; Recap So far, we have setup our Angular multi-app workspace with NX, initialized AWS Amplify to give us a managed authentication service using AWS Cognito and generated a serverless GraphQL API using AWS AppSync. Just with a few CLI commands! To make it possible to reuse our backend in more than one app, we imported and configured everything in a separate custom library. Since GraphQL works with a schema we could even generate all typings automatically and copy query and mutation operations as a starting point for our own operational needs. We also wrote some utility functions to help with the Apollo client-side cache in offline scenarios. What to expect in part 2 In part 2 we will create a UI and use the stuff we created here to connect to our backend.
https://dev.to/beavearony/getting-started-with-a-angularnx-workspace-backed-by-an-aws-amplify-graphql-api---part-1-24m0
String and Character Array

A sequence of characters is generally known as a string. Strings are very commonly used in programming languages for storing and processing words, names, addresses, sentences, etc.

String variables: Just like numeric variables, C uses string variables to store and manipulate strings. But C has no built-in string data type; strings are stored in arrays of characters.

Declaring and initializing string variables: In C, string variables can be initialized in different ways. Consider the following line of code in which we have initialized a string variable:

char name[12] = "programming";

This array can also be initialized like the following:

char name[12] = {'p', 'r', 'o', 'g', 'r', 'a', 'm', 'm', 'i', 'n', 'g', '\0'};

It should be noted here that when we initialize the character array by listing its characters, the null character must be supplied separately in the list as the last character.

String input and output: We use the scanf() function to store a string in a string variable, but it has some limitations; therefore we also use the gets() and puts() functions for strings. To demonstrate this let us consider the following program:

#include <stdio.h>
#include <conio.h>

void main() {
    char yourname[20];
    printf("enter your name:");
    scanf("%s", yourname);
    printf("your name is: %s", yourname);
}

Here we read the string with the scanf() function. Notice that we don't have to enter the null character (\0) while entering the string; instead it is automatically included at the end of the string. This means if your array is 20 characters long, you can only store 19 characters in it. One character is reserved for the null character (\0). Also notice that in this program there is no address operator (&) preceding the string variable name (yourname). This is because yourname is an address.
We have to precede numerical and character variables with the & operator to change values into the addresses but yourname is the name of an array and therefore, it is already an address and does not need the & operator. The gets () and puts () functions: To solve the problem of multiword strings, C language); String Handling Functions: In C programming language there are many functions for string handling such as strcmp (), strcat (), strcpy (), etc. The string functions are include in the C standard libraray that is <string. h>. This header file must be included in the program to use the string functions. The most commonly used string functions in C programming language are as follows: Strcmp () function: The strcmp (). Strcpy () function: The strcpy () string function is used to copy the contents of string1 to string2. For example: char name[20]; strcpy (name, “John Abraham”); Here string “John Abraham” would be placed in the array name[]. Initializing an array of string: To demonstrate the initialization of an array of names, consider the following declaration: Char names [5][20]= { “Stuart”, “Bill”, “Faddy”, “Wilson”, “Abraham”}; The names in the quotes are already a one dimensional array, therefore, we don’t need braces around each names as we did for two dimensional numeric arrays. We do need braces around all the names because this is an array of strings. Notice that the individual names are separated by commas.
http://www.tutorialology.com/c-language/string-and-character-array/
I'm new to Java; I've mostly done networking in the past. I'm trying to create a program that will run a basic simulation of a number of heats for a 100m sprint. It's mostly about the use of collections, so nothing too complex, but I'm completely lost. I've created a number of Runner objects and stored them in two ArrayLists called race and race2, and I've also set the times to zero for each runner. What I'm trying to do right now is run through the objects in the ArrayList and randomise the time for each runner within 9 to 10 secs, in order to give a sort of realistic time for the race. Then I want to sort them by time, first to last, and pull the top 3 out of the ArrayLists and store them in another ArrayList called winners. I've been thinking I have to use a loop and then set the time with a setTime method on each object before sorting them by time, but I'm completely lost and have basically started the whole thing again, just with the objects stored in the ArrayList. Any help or suggestions would be very gratefully received. Thanks.

This is my main class and Runner class:

import java.util.ArrayList;
import java.io.*;

public class RaceSim {
    public static void main(String[] args) {
        Runner p1 = new Runner("Mark", "Brady", 00.00);
        Runner p2 = new Runner("Karl", "Collins", 00.00);
        Runner p3 = new Runner("Brian", "Morgan", 00.00);
        Runner p4 = new Runner("Stephen", "Fryer", 00.00);
        Runner p5 = new Runner("Stephen", "Taylor", 00.00);
        Runner p6 = new Runner("Gavin", "Burke", 00.00);
        Runner p7 = new Runner("John", "McGlynn", 00.00);
        Runner p8 = new Runner("Robert", "Banks", 00.00);

        ArrayList<Runner> race = new ArrayList<Runner>();
        race.add(p1);
        race.add(p2);
        race.add(p3);
        race.add(p4);
        race.add(p5);
        race.add(p6);
        race.add(p7);
        race.add(p8);

        Runner p9 = new Runner("Aaron", "McNutt", 00.00);
        Runner p10 = new Runner("Kevin", "McCafferty", 00.00);
        Runner p11 = new Runner("Harold", "McCarron", 00.00);
        Runner p12 = new Runner("Declan", "McCormack", 00.00);
        Runner p13 = new Runner("Micheal", "McGoldrick", 00.00);
        Runner p14 = new Runner("David", "Hegerty", 00.00);
        Runner p15 = new Runner("Ryan", "Lynce", 00.00);
        Runner p16 = new Runner("Conal", "Murphy", 00.00);

        ArrayList<Runner> race2 = new ArrayList<Runner>();
        race2.add(p9);
        race2.add(p10);
        race2.add(p11);
        race2.add(p12);
        race2.add(p13);
        race2.add(p14);
        race2.add(p15);
        race2.add(p16);

        System.out.println(race + "\n");
        System.out.println(race2 + "\n");
    }
}

public class Runner {
    private String fName;
    private String sName;
    private double time;

    public Runner() {
    }

    public Runner(String fName, String sName, double time) {
        this.fName = fName;
        this.sName = sName;
        this.time = time;
    }

    // note: as originally posted this took a double parameter, which shadowed
    // the field, so the stored time was never actually returned
    public double getTime() {
        return time;
    }

    public void setTime(double time) {
        this.time = time;
    }
}
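One way to do what the post asks (a sketch only; the class and method names below are invented for this example, and a trimmed-down Runner is included so it compiles on its own): loop over the list assigning a random time in [9.0, 10.0), sort with a Comparator on the time, and copy the first three into a winners list.

```java
import java.util.*;

public class HeatHelper {

    static class Runner {
        final String name;
        double time;
        Runner(String name) { this.name = name; }
        double getTime() { return time; }
    }

    // give every runner a random time between 9.00 and 10.00 seconds
    static void randomiseTimes(List<Runner> race, Random rng) {
        for (Runner r : race) {
            r.time = 9.0 + rng.nextDouble(); // 9.00 <= time < 10.00
        }
    }

    // sort fastest-first and copy the top three into a "winners" list
    static List<Runner> topThree(List<Runner> race) {
        List<Runner> sorted = new ArrayList<>(race);
        sorted.sort(Comparator.comparingDouble(Runner::getTime));
        return new ArrayList<>(sorted.subList(0, Math.min(3, sorted.size())));
    }

    public static void main(String[] args) {
        List<Runner> race = new ArrayList<>();
        for (String n : new String[]{"Brady", "Collins", "Morgan", "Fryer"}) {
            race.add(new Runner(n));
        }
        randomiseTimes(race, new Random());
        for (Runner w : topThree(race)) {
            System.out.printf("%s %.2f%n", w.name, w.time);
        }
    }
}
```

Sorting a copy keeps the original heat list in entry order, which matters if you want to run the same runners through several heats.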
http://www.dreamincode.net/forums/topic/195731-sprint-race-simulation/
SqlHBase 0.1 MySQLDump to HBase, ETL scripts

========
SqlHBase
========

SqlHBase is an HBase ingestion tool for MySQL-generated dumps. The aim of this tool is to provide a 1:1 mapping of a MySQL table into an HBase table, mapped on Hive (the schema is handled too).

Running this requires a working HBase with Thrift enabled, and a Hive instance with the metastore properly configured and Thrift enabled as well.

If you need I/O performance, I recommend looking into Pig or Jython, or directly into a native Map Reduce job. SQOOP was discarded as an option, as it doesn't cope with dump files and it does not compute the difference between dumps before ingestion.

SqlHBase does a 2-level ingestion process, described below.

"INSERT INTO `table_name` VALUE (), ()" statements are hashed and stored (dropping anything at the left side of the first open round bracket) as a single row into a staging table on HBase (the md5 hash of the row is the row_key on HBase). When multiple dumps of the same table/database are inserted, this prevents (or at least reduces) the duplication of data on the HBase side.

MySQL by default chunks rows as tuples, up to 16Mb, in a single INSERT statement. Given that, we basically have a list of tuples:

[(1, "c1", "c2", "c3"), (2, "c1", "c2", "c3"), ... ]

An initial attempt at parsing/splitting such a string with a regexp failed, of course, since a column value could contain ANYTHING, even round brackets and quotes. This kind of language is not recognizable by a finite state automaton, so something else had to be implemented, to keep track of the nested brackets for example. A PDA (push-down automaton) would have helped but... as you can see above, the syntax is exactly the one of a list of tuples in Python... an eval() is all we needed in such a case (and it is also, I guess, optimized at the C level by the interpreter).

It should be taken into consideration that the IDs of the rows are integers while HBase wants a string... plus, we need to do some zero padding due to the fact that HBase does lexicographic sorting of its keys. There are tons of threads on forums about how bad it is to use a monotonically incrementing key on HBase, but... this is what we needed. [...]

A 2-level Ingestion Process
===========================

A staging -> (bin/sqlhbase-mysqlimport)
--------------------------------------------

No interpretation of the content of the MySQL dump file is done, apart from the splitting between schema data and raw data (INSERTs). 2 tables are created: _"namespace"_creates and _"namespace"_values.

The first table contains an entry/row for each dump file ingested, having as a rowkey the timestamp of the day at the bottom of the dump file (or a command-line-provided one, in case that information is missing). Such a row contains the list of hashes that form a table (see below), a create statement for each table, and a create statement for each view, plus some statistics related to the time of parsing of the file, the amount of rows it was containing, and the overall md5 hash.

A publishing -> (bin/sqlhbase-populate)
-----------------------------------------

Given a namespace (as of the initial import) and a timestamp (from a list):

- the content of the table CREATE statement gets interpreted, the data types mapped from MySQL to Hive, and the table created on Hive.
- if not existing, the table gets created fully, reading each 16Mb chunk
- the table gets created with this convention: "namespace"_"table_name"
- if the table exists, and it contains data, we compute the difference between the 2 lists of hashes that were created at ingestion time
-- then we check what has already been ingested in the range of row ids which is contained in the MySQL chunk (we took the assumption that MySQL dumps a table sequentially, hopefully)
-- if a row id which is in the sequence in the database is not in the sequence from the chunk we are ingesting, then we might have a DELETE (a DELETE that we do not execute on HBase due to HBASE-5154, HBASE-5241)
-- if a row id is also in our chunk, we check each column for changes
-- duplicated columns are removed from the list that is going to be sent to the server, to avoid wasting bandwidth
- at this stage, we get a copy of the data at the next known ingestion date (dates are known from the list of dumps in the meta table)
-- if data are found, each row gets diffed with the data to be ingested that are left from the previous cleaning... if there are real changes, those are kept and will be sent to the HBase server for writing (timestamps are verified at this stage, to avoid resending data that have already been written previously)

FIXME: ingesting data skipping a day will need proper recalculation of the difference of the hashes list... ingesting data from a backup that was not previously ingested (while we kept ingesting data in the tables) will cause some redundant data to be duplicated in HBase, simply because we do not dare to delete the duplicates that are "in the future"... anyway, it is pretty easy to delete a table and reconstruct it, having all the history in the staging level of HBase.

Last but not least, we do parse VIEWs and apply them on Hive... be careful about !!!
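The tuple-parsing and key-padding ideas described above can be sketched in a few lines of Python. This is an illustration only, not SqlHBase's actual API; ast.literal_eval is used here as a safer stand-in for the bare eval() the README mentions:

```python
import ast

def parse_insert(statement):
    """Drop everything left of the first '(' and evaluate the rest
    as a Python list of tuples, as described above."""
    values = statement[statement.index("("):]
    # "(1, 'a'), (2, 'b')" is valid Python tuple syntax once wrapped in [ ]
    return ast.literal_eval("[" + values + "]")

def row_key(row_id, width=10):
    """Zero-pad the integer id so HBase's lexicographic key ordering
    matches numeric ordering."""
    return str(row_id).zfill(width)

rows = parse_insert("INSERT INTO `t` VALUES (1, 'c1'), (2, 'c2')")
print(rows)         # [(1, 'c1'), (2, 'c2')]
print(row_key(42))  # 0000000042
```

Without the padding, HBase would sort key "10" before key "9", which is exactly the lexicographic-vs-numeric mismatch the README warns about.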
- Author: Guido Serra aka Zeph
- License: GPLv3
- Package Index Owner: zeph
- DOAP record: SqlHBase-0.1.xml
https://pypi.python.org/pypi/SqlHBase/0.1
- Author: axiak
- Posted: February 28, 2008
- Language: Python
- Version: .96
- Tags: middleware, cache, namespace
- Score: 2 (after 2 ratings)

Have you ever felt the need to run multiple Django projects on the same memcached server? How about other cache backends? To scope the cache keys, you simply need to prefix them. However, since a lot of Django's internals rely on django.core.cache.cache, you cannot easily replace it everywhere. This snippet will automatically upgrade the django.core.cache.cache object if settings.CACHE_PREFIX is set to a string and the middleware contains ScopeCacheMiddleware.

A thread discussing the merging of this functionality into Django is available on the dev mailing list. However, (as of now) nowhere in the thread does anyone mention the reason why this sort of treatment is needed: many of Django's internal caching helpers use django.core.cache.cache, and will therefore conflict if multiple sites run on the same cache stores.

Example usage:

>>> from django.conf import settings
>>> from django.core.cache import cache
>>> from scoped_caching import prefix_cache_object
>>> settings.CACHE_PREFIX
'FOO_'

# Do this once a process (e.g. on import or in middleware)
>>> prefix_cache_object(settings.CACHE_PREFIX, cache)

>>> cache.set("pi", 3.14159)
>>> cache.get("pi")
3.14159
>>> cache.get("pi", use_global_namespace=True)
>>> cache.get("FOO_pi", use_global_namespace=True)
3.14159
>>> cache.set("FOO_e", 2.71828, use_global_namespace=True)
>>> cache.get("e")
2.71828

To install: simply add ScopeCacheMiddleware as a middleware, define settings.CACHE_PREFIX, and enjoy!
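The snippet's source itself is not reproduced on the page, but the idea behind prefix_cache_object — wrap set/get so every key gets the prefix, with use_global_namespace as an escape hatch — can be sketched roughly like this (DictCache is a generic stand-in written for this example, not Django's cache backend):

```python
def prefix_cache_object(prefix, cache):
    """Monkey-patch cache.set/cache.get so keys are transparently
    prefixed, unless use_global_namespace=True is passed."""
    original_set, original_get = cache.set, cache.get

    def set(key, value, use_global_namespace=False, **kwargs):
        if not use_global_namespace:
            key = prefix + key
        return original_set(key, value, **kwargs)

    def get(key, use_global_namespace=False, **kwargs):
        if not use_global_namespace:
            key = prefix + key
        return original_get(key, **kwargs)

    cache.set, cache.get = set, get

class DictCache:
    """Minimal stand-in for a cache backend."""
    def __init__(self):
        self._data = {}
    def set(self, key, value):
        self._data[key] = value
    def get(self, key, default=None):
        return self._data.get(key, default)

cache = DictCache()
prefix_cache_object("FOO_", cache)
cache.set("pi", 3.14159)
print(cache.get("pi"))                                 # 3.14159
print(cache.get("FOO_pi", use_global_namespace=True))  # 3.14159
```

Because the patch replaces the bound methods on the shared cache object itself, every piece of code that imported that object — including internal helpers — picks up the prefix automatically, which is the whole point of the snippet.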
Hi, I got bit by the issue that this snippet fixes when I built a bunch of Django applications that all used the same memcached instance. This snippet, or similar functionality, gets my vote for inclusion in mainline, FWIW!
https://djangosnippets.org/snippets/624/
A QR code is a type of matrix barcode: a machine-readable optical label that contains information about the item to which it is attached. In practice, QR codes often contain data for a locator, identifier, or tracker that points to a website or application, etc. In this tutorial, you will learn how to generate and read QR codes in Python using the qrcode and OpenCV libraries.

Installing the required dependencies:

pip3 install opencv-python qrcode

Let's start by generating a QR code; the qrcode library makes this a one-liner:

import qrcode
# encode this website's URL and save the result as an image
img = qrcode.make("https://www.thepythoncode.com")
img.save("site.png")

This will generate a new image file in the current directory with the name "site.png", which contains a QR code image of the data specified (in this case, this website).

There are many tools that read QR codes. However, we will be using OpenCV for that, as it is popular and easy to integrate with a webcam or any video. Alright, open up a new Python file and follow along with me; let's read the image that we just generated:

import cv2
# read the QR code image
img = cv2.imread("site.png")

Luckily for us, OpenCV already has a QR code detector built in:

# initialize the cv2 QRCode detector
detector = cv2.QRCodeDetector()

We have the image and the detector, so let's detect and decode the data:

# detect and decode
data, bbox, straight_qrcode = detector.detectAndDecode(img)

The detectAndDecode() function takes an image as input and returns a tuple of 3 values: the data decoded from the QR code, the output array of vertices of the found QR code quadrangle, and the output image containing the rectified and binarized QR code. We just need data and bbox here; bbox will help us draw the quadrangle on the image, and data will be printed to the console!

Let's do it:

# if there is a QR code
if bbox is not None:
    print(f"QRCode data:\n{data}")
    # display the image with lines
    # length of bounding box
    n_lines = len(bbox)
    for i in range(n_lines):
        # draw all lines
        point1 = tuple(bbox[i][0])
        point2 = tuple(bbox[(i+1) % n_lines][0])
        cv2.line(img, point1, point2, color=(255, 0, 0), thickness=2)

The cv2.line() function draws a line segment connecting two points; we retrieve these points from the bbox array that was returned by detectAndDecode() previously. We specified a blue color ((255, 0, 0) is blue, as OpenCV uses BGR colors) and a thickness of 2.

Finally, let's show the image and quit when a key is pressed:

# display the result
cv2.imshow("img", img)
cv2.waitKey(0)
cv2.destroyAllWindows()

Once you run this, the decoded data is printed:

QRCode data:
https://www.thepythoncode.com

And the following image is shown:

As you can see, the blue lines are drawn exactly on the QR code's borders. Awesome — we are done with this script; try running it with different data and see your own results!

If you want to detect and decode QR codes live using your webcam (and I'm sure you do), here is the code for that:

import cv2

# initialize the cam
cap = cv2.VideoCapture(0)
# initialize the cv2 QRCode detector
detector = cv2.QRCodeDetector()

while True:
    _, img = cap.read()
    # detect and decode
    data, bbox, _ = detector.detectAndDecode(img)
    # check if there is a QRCode in the image
    if bbox is not None:
        # display the image with lines
        for i in range(len(bbox)):
            # draw all lines
            cv2.line(img, tuple(bbox[i][0]), tuple(bbox[(i+1) % len(bbox)][0]),
                     color=(255, 0, 0), thickness=2)
        if data:
            print("[+] QR Code detected, data:", data)
    # display the result
    cv2.imshow("img", img)
    if cv2.waitKey(1) == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()

Awesome, we are done with this tutorial; you can now integrate this into your own applications! Check qrcode's official documentation. Learn also: How to Use Steganography to Hide Secret Data in Images in Python.

Happy Coding ♥
https://www.thepythoncode.com/article/generate-read-qr-code-python
Cannot load classes that are not in my "(default package)" in Eclipse

Hi, I've spent a few years with mxj, having my unique "/2-Erdna/eclipse/MyWorkspace/MyMxjClasses/bin" folder in the file "max.java.config.txt", with Max loading my classes without problems. The problem is that I wanted to have other folders to put my classes in, because I want to organize them more clearly. So, in Eclipse, I tried to make a new project (called "testsClasses"), and I also tried to make a new package (called "test2") in my original project... then I put these new Eclipse folders in "max.java.config.txt" too, like this:

max.dynamic.class.dir /2-Erdna/eclipse/MyWorkspace/MyMxjClasses/bin (this one was already here)
max.dynamic.class.dir /2-Erdna/eclipse/MyWorkspace/MyMxjClasses/bin/test2
max.dynamic.class.dir /2-Erdna/eclipse/MyWorkspace/testsClasses/bin

...But it doesn't work; I cannot load the classes that are in my new folders in Max... I get some error messages in the Max window (see picture). I really don't understand what's happening. I've redone the Max/Eclipse tutorial from Adam, and I think I didn't miss anything. Maybe it's a problem in "max.java.config.txt". Isn't it possible to have more than one folder for mxj classes? Please help! Thanks!

Yes, it should work. I think the problem may lie in your files. Could you post the code please (and the complete file path)?
Ok, after many tries of changing "max.java.config.txt" and restarting Max without success, it finally worked for the classes in my NEW PROJECT after I restarted the MacBook Pro, hmm... But for my NEW PACKAGE, it still doesn't work... the Max window always says the same thing: "java.lang.NoClassDefFoundError: nothing2 (wrong name: test2/nothing2) ... ..." (the class "nothing2" is in my new package called "test2" in my "MyMxjClasses" project). It's just a simple class doing nothing; here is the code:

package test2;

import com.cycling74.max.*;

public class nothing2 extends MaxObject {

    nothing2(Atom[] a) {
        declareInlets(new int[]{ DataTypes.ALL });
        declareOutlets(new int[]{ DataTypes.ALL, DataTypes.ALL });
        createInfoOutlet(false);
        post("INIT");
    }

    public void bang() {
        outlet(0, "BANG RECEIVED");
    }
}

...where the line "package test2;" was automatically added by Eclipse (and if I remove it, it doesn't compile). Of course, the class loads perfectly when I put it in one of the other folders. It looks like I can't load any class that is inside a package inside a project. Thanks again for any help, Alexandre

Java class files must live in a directory structure identical to their package name. The classpath should point to the root of that directory structure, not to a package subdirectory. For example, if you create a class declared like this:

package my.tests;

public class Test1 extends com.cycling74.max.MaxObject {
}

it must be put in the file .../src/my/tests/Test1.java. In this case, Eclipse will compile it and write the class file to .../bin/my/tests/Test1.class. You should then set your classpath to .../bin, because that directory contains the root of the package hierarchy. Is this clear?

It doesn't work. Eclipse puts all the classes in their right places as you say, I think.

>> You should then set your classpath to ...bin, because that directory contains the root of the package hierarchy.
If I understand well what you explain: as I have some classes in my default package, which are in /2-Erdna/eclipse/MyWorkspace/MyMxjClasses/bin, and also some classes in the test2 package, which are in /2-Erdna/eclipse/MyWorkspace/MyMxjClasses/bin/test2, I should then only put:

max.dynamic.class.dir /2-Erdna/eclipse/MyWorkspace/MyMxjClasses/bin

in max.java.config.txt? Then Max could find what is in /2-Erdna/eclipse/MyWorkspace/MyMxjClasses/bin plus what is in /2-Erdna/eclipse/MyWorkspace/MyMxjClasses/bin/test2. But this doesn't work. Max doesn't find the classes in /test2; it just says: "Could not load class 'nothing2'". Max can only find the classes from the default package of a project. And, as I said above, when I add:

max.dynamic.class.dir /2-Erdna/eclipse/MyWorkspace/MyMxjClasses/bin/test2

to max.java.config.txt, I then get even more error messages:

java.lang.NoClassDefFoundError: nothing2 (wrong name: test2/nothing2)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:675)
at com.cycling74.max.MXJClassLoaderImpl.doLoadClass(MXJClassLoaderImpl.java:119)
at com.cycling74.max.MXJClassLoader.loadClazz(MXJClassLoader.java:88)
Could not load class 'nothing2'

Should I send a message to Cycling '74 to ask them? Thanks,
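To restate the classpath rule from the thread as a concrete layout (paths taken from the posts above):

```text
# Directory layout produced by Eclipse:
/2-Erdna/eclipse/MyWorkspace/MyMxjClasses/bin/nothing.class          <- default package
/2-Erdna/eclipse/MyWorkspace/MyMxjClasses/bin/test2/nothing2.class   <- package test2

# max.java.config.txt needs only the root of the package hierarchy:
max.dynamic.class.dir /2-Erdna/eclipse/MyWorkspace/MyMxjClasses/bin
```

Note also that, as a general Java rule, a class inside a package must be referred to by its fully qualified name (test2.nothing2, not plain nothing2); the "wrong name: test2/nothing2" error in the post is the standard JVM complaint when a packaged class is requested under its bare name.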
https://cycling74.com/forums/topic/cannot-load-classes-that-are-not-in-my-default-package-in-eclipse/
Copyright © 2009 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.

This specification defines the security model controlling network access from within a widget, as well as a method for widget authors to request that the user agent grant access to certain network resources (or sets thereof).

This is the 18 June 2009 First Public Working Draft version of the "Widgets 1.0: Access Requests" specification. This version was forked out of "Widgets 1.0: Packaging and Configuration" in order to allow the group to focus on this specific issue without delaying further work on the rest of the widget stack. The purpose of this draft is to give external interested parties an opportunity to publicly comment on how access requests should work within widgets before the Working Group moves this specification to Last Call. The Working Group's goal is to make sure that vendors' requirements for access requests have been effectively addressed and clearly specified.

Note: User agents that wish to extend this specification in any way are encouraged to discuss their extensions on a public forum, such as public-webapps, so their extensions can be considered for standardisation.

An access request is a request made by an author in the configuration file for the ability to retrieve one or more network resources, identified via the access element's uri and subdomains attributes. To grant access means that the user agent authorises widget execution scopes to retrieve one or more network resources via the user agent. Note that some schemes (e.g. mailto:) may be handled by third-party applications and are therefore not controlled by the access mechanism defined in this specification. To deny access is to refuse to grant access. A network resource is a retrievable resource of any type that is identified by a URI that has a DNS or IP as its authority.
A feature-enabled API is an API that is for one reason or another considered to be sensitive. The widget execution scope is the scope (or set of scopes, seen as a single one for simplicity's sake) being the execution context for code running from documents that are part of the widget package. The web execution scope is the scope (or set thereof) being the execution context for code running from documents that have been loaded off the web.

This section is non-normative. This specification is part of the Widgets 1.0 family of specifications, which together standardise widgets as a whole. The Widgets 1.0: APIs and Events [Widgets-APIs] specification defines APIs to store preferences and capture events.

This section is non-normative. The design goals and requirements for this specification are addressed in the Widgets 1.0 Requirements [Widgets-Reqs] document. This document addresses a number of those requirements, and additional requirements are taken into account. For example, when a user attempts to install a widget in a user agent, and the widget configuration document declares that it requires access to currently blocked services in order to function, the user agent may prompt the user to choose how to proceed.

Additional considerations guiding this specification are maximal compatibility with existing web technology (including not breaking linking to JS libraries, embedded media, ads, etc.), and not restricting the platform in such a way that would make it less powerful than the web platform.

A widget runs in its own widget execution scope. Communication between that execution scope and the network is prohibited by default, but may be turned on selectively using the access element. This prohibition must apply equally to access through APIs (e.g. XMLHttpRequest) or through inlined content (e.g. iframe, script, img). Scripts executing in that widget execution scope have access to feature-enabled APIs.
Note that other mechanisms may provide access to the same APIs in other contexts, but that is outside the scope of this specification. When permission is selectively turned on to access a given set of network resources, it must be granted equally to APIs and inlined content.

The access element allows authors to request permission from the user agent to retrieve a set of network resources. A user agent must prevent the widget execution scope from retrieving network resources, using any method (API, linking, etc.) and for any operation, unless the user agent has granted access to an explicitly declared access request. The access element occurs as a child of the widget element and has the following attributes:

uri: identifies the network resource (or resources) to which access is being requested. As a special case, the value * may be used. This special value provides a means for an author to request from the user agent unrestricted access to resources through any and all schemes and protocols supported by the user agent.

subdomains: a boolean attribute indicating whether access is also requested to subdomains of the domain given in the uri attribute. The default value when this attribute is absent is false, meaning that access to subdomains is not requested.

This example shows the expected usage of the access element.

<widget xmlns="" width="400" height="500">
  <name>Flight Tracker</name>
  <access uri=""/>
  <access uri=""/>
  <access uri="" subdomains="true"/>
  <access uri=""/>
  <access uri="*"/>
</widget>

Hosts are converted using the ToASCII algorithm as per [RFC3490]; then, as per [URI], all percent-encoded parts of the path that are unreserved characters are decoded. When multiple access elements are used, the set of network connections that is allowed is the union of all the access requests that were granted by the user agent. The following rules are applied to determine what each access element is requesting access to.
Each access element is for network resources that have:
- if subdomains is false, a host exactly equal to host; and
- if subdomains is true, a host either exactly equal to host, or that is a subdomain of host.

At runtime, when a network request is made from within the widget execution scope, the user agent matches it against the rules defined above, accepting it if it matches and blocking it if it doesn't. Note that if scheme is "http" or "https", host comparisons must be performed in a case-insensitive manner.

As a special case, the uri attribute may hold the value *. In that case, the access element is considered to request access to all network resources without limitation (e.g. retrieve RSS feeds from anywhere). If access is granted to such a request, then all other network access requests must be granted.
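The matching rules above can be condensed into a small sketch. This is an illustration of the algorithm only, not a conformant implementation; among other things, the IDNA/ToASCII handling from [RFC3490] is omitted:

```python
from urllib.parse import urlsplit

def access_granted(rules, url):
    """rules: list of dicts mirroring <access> elements, e.g.
    {"uri": "https://example.org", "subdomains": True}."""
    target = urlsplit(url)
    for rule in rules:
        if rule["uri"] == "*":          # wildcard grants everything
            return True
        allowed = urlsplit(rule["uri"])
        if allowed.scheme != target.scheme:
            continue
        # urlsplit().hostname is already lowercased, which gives the
        # case-insensitive http/https host comparison the spec requires
        host, allowed_host = target.hostname, allowed.hostname
        if host == allowed_host:
            return True
        if rule.get("subdomains") and host.endswith("." + allowed_host):
            return True
    return False

print(access_granted([{"uri": "https://example.org", "subdomains": True}],
                     "https://api.example.org/feed"))   # True
```

Note how the subdomain check requires a leading dot, so "notexample.org" is not treated as a subdomain of "example.org".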
http://www.w3.org/TR/2009/WD-widgets-access-20090618/
It was a very exciting moment when .Net Core first came out and it had support for Linux, Mac and Windows. Since then it has matured to a point where ASP.NET Core is now one of the fastest frameworks you can use to serve requests on the web. Although all is good on the web side of things, the desktop hasn't been given the same amount of attention. Even though .Net Core 3.0 will support WPF and WinForms, that still leaves out Linux and Mac.

This blog post is about a cross-platform way of developing desktop applications using .Net Core and Electron called ElectronCGI. It does not rely on running a web server or having the .Net Core code pre-compiled.

To show you how simple using ElectronCGI is, here's how you can configure a NodeJs/Electron UI to run code in a .Net Core console application. In this example a "request" with type "greeting" and a string argument is sent to a .Net application, which responds with a string that contains the greeting.

NodeJs/Electron, after adding the electron-cgi npm package:

const { ConnectionBuilder } = require('electron-cgi');

const connection = new ConnectionBuilder()
    .connectTo('dotnet', 'run', '--project', 'NetCoreProject')
    .build();

connection.send('greeting', 'John', greeting => {
    console.log(greeting); // will print "Hello John!"
});

In a .Net console application, after adding the ElectronCgi.DotNet nuget package:

using ElectronCgi.DotNet;
//...
static void Main(string[] args)
{
    var connection = new ConnectionBuilder()
        .WithLogging()
        .Build();

    // expects a request named "greeting" with a string argument and returns a string
    connection.On<string, string>("greeting", name =>
    {
        return "Hello " + name;
    });

    // wait for incoming requests
    connection.Listen();
}

That's all you need to start. Here's a video that shows how easy it is to set up and how simple the development workflow is. Also, here are the GitHub links for the electron-cgi node npm package and the ElectronCgi.DotNet nuget package.
Why

Even though there already are a few ways to create Electron/NodeJs applications that run .Net code, I felt these were too involved and that they could provide a better development experience.

Probably the most popular way of having a UI running on Electron and running .Net code is Electron.NET. The way it works is by having the .Net code run in a full ASP.NET application. The Electron app displays web pages rendered server-side by ASP.NET. Also, some of Electron's functionality is "exposed" to the ASP.NET application by using web sockets to send requests/commands that are initiated from the ASP.NET code. For example, it is possible to open a new Electron window from an ASP.NET controller this way.

The biggest drawback I see with this approach is that even though the goal is to create a desktop application, we still have to handle a lot of tasks that should be foreign in this scenario. For example, when you create a new ASP.NET Core MVC application it comes with cookie policies, HTTPS configuration, HSTS, routing, etc. In a desktop application scenario that makes no sense and just ends up getting in the way. Ideally, we just want to write the UI using HTML, CSS and a bit of Javascript and be able to invoke .Net code in response to the user's actions.

The other alternative in Electron/NodeJs is Edge.js. It works by enabling .Net code to run in-process in Node. Although technologically this is impressive, it has a few requirements that make it a little bit hard to use. This is how you can do a hello world using Edge.js:

var edge = require('edge');

var helloWorld = edge.func(function () {/*
    async (_) => {
        return "Hello World";
    }
*/});

helloWorld('', function (error, result) {
    if (error) throw error;
    console.log(result); // will log "Hello world"
});

Yes, the C# code is inside a /* */ comment. If you are using ES6 you can use template strings, but still, you won't get IntelliSense.
There are also ways to "bring in" a method from a precompiled DLL; however, there's a requirement that the method has a specific signature (Func<object, Task<object>>).

I should also mention a fully native solution. The one that comes to mind is Qt. Qt has a lot going for it. It is used by some very well-known names (Tesla Model S dashboards run Qt, for example). Qt has "bindings" for several languages, including C#/.Net. The only problem with Qt is that if you are thinking about doing something commercial with it, it becomes very costly. To the tune of more than 5K per developer, per year.

Given all that, I think there's space for one more way of doing cross-platform desktop applications.

How does it work

ElectronCGI draws inspiration from how the first dynamic web requests were made a reality in the early days of the web. Back then, the only thing a web server was able to serve was static web pages. To serve dynamic pages, the idea of having an external executable take in a representation of the web request and produce a response was put forward. The way that executable got the web request's headers was through environment variables, and the request's body was sent through the standard input stream (stdin). After processing the request, the executable would send the resulting HTML back to the web server through the standard output stream (stdout). This way of doing things was called CGI – Common Gateway Interface.

This mechanism is available in all operating systems, works perfectly well, is super fast and is very easy to use. For example, in bash when you write something like ls | more you are redirecting the stdout of ls to more's stdin. That's exactly what ElectronCGI takes advantage of.
Using NodeJs and .Net as an example (these are the only implementations right now, but there's no reason why this wouldn't work on other runtimes/languages), when a connection is created in NodeJs to a .Net console application, ElectronCGI will launch the .Net application and grab hold of its stdin and stdout streams. It will also keep the .Net application running until the connection is closed.

Every time a "request" is sent (i.e. connection.send('requestType', args, callbackFn)), the request is serialised (as JSON) and written to the .Net application's stdin stream. After handling the request (handlers are registered with connection.On<ArgType, ReturnType>("requestType", handlerFunction)), the .Net Core application sends a response back through stdout. ElectronCGI takes care of all of this, so that in the end the only thing you need to do is send requests from NodeJs and provide request handlers for those requests in .Net.

Using stdin/stdout as the communication channel introduces very little overhead (in a quick test on an i7-7700K I was able to sequentially send 18K requests and receive their responses in one second).

Benefits

Have I mentioned that with ElectronCGI and .Net Core you can create applications that run on Linux, Windows and Mac, and use C#? Also, since the nuget package for .Net (ElectronCgi.DotNet) targets .Net Standard 2.0, you can use it with the full .Net Framework (from version 4.6.1) if you want. If you do this you'll only be able to run on Windows, but it might still be interesting if you are a .Net web developer who wants to use HTML and CSS to build a UI while reusing existing .Net code.

Speaking of reusing existing .Net code: even though the .Net application you connect to must be a console app, there are no restrictions on what that console application adds as dependencies. That means you can bring in any nuget package or reference any other .Net projects you might want to use.
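To make the stdin/stdout framing described above concrete, here is a minimal, language-agnostic sketch in Python of the idea: one JSON request per stdin line, one JSON response per stdout line. The message shape used here ({"type": ..., "args": ...}) is a hypothetical simplification for illustration; ElectronCGI's actual wire format may differ.

```python
import json
import sys

# Hypothetical handler registry, standing in for connection.On(...) on the
# handler side. The request/response shape here is a simplification.
HANDLERS = {
    "greeting": lambda name: "Hello " + name,
}

def handle_line(line):
    """Deserialise one request, dispatch it, and serialise the response."""
    request = json.loads(line)
    result = HANDLERS[request["type"]](request["args"])
    return json.dumps({"result": result})

def serve(stdin=sys.stdin, stdout=sys.stdout):
    # The handler process stays alive and keeps serving requests until
    # stdin closes, mirroring how ElectronCGI keeps the child process
    # running for the lifetime of the connection.
    for line in stdin:
        stdout.write(handle_line(line) + "\n")
        stdout.flush()
```

On the Node side, connection.send then amounts to writing one such JSON line to the child's stdin and invoking the callback when the matching response line arrives on stdout.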
The development experience can also be quite good. For example, you can establish a connection from an Electron app using dotnet run:

```js
const { ConnectionBuilder } = require('electron-cgi');

const connection = new ConnectionBuilder()
    .connectTo('dotnet', 'run', '--project', 'PathToDotNetProject')
    .build();
```

With this approach, when you make changes in the .Net project, the only thing you need to do to see them take effect is to refresh the page (you can leave this enabled and even access the Chrome dev tools in Electron). When you refresh, a new connection is created, which causes dotnet run to be executed, which in turn compiles and runs the project if there are any pending changes. So you can imagine a development experience where you make changes to your UI and/or your .Net code and see them applied just by hitting "Ctrl+R". And thanks to the good work the .Net team has done on speeding up compilation times, it really feels seamless. When you are done, you can always "connect" to the published, self-contained executable instead, for an extra performance boost and so that your application does not depend on the .Net SDK.

Also, if you need to debug the .Net code, you can just use the attach functionality in Visual Studio Code (or full Visual Studio) to attach to the running process and add breakpoints.

Early days

There are still a few rough edges in ElectronCGI, particularly in how errors are handled. Right now you can enable logging in .Net (new ConnectionBuilder().WithLogging("pathToLogFile")) so that you can see if something went wrong on the .Net side of things. The exception messages are quite descriptive, for example:

ElectronCgi.DotNet.HandlerFailedException: Request handler for request of type 'division' failed. ---> System.DivideByZeroException: Attempted to divide by zero.

Whenever there's an unhandled exception in .Net, the connection is "lost". In NodeJs/Electron, the connection's onDisconnect method is then invoked.
You can use it, for example, to restart the connection:

```js
const { ConnectionBuilder } = require('electron-cgi');

let _connection = new ConnectionBuilder()
    .connectTo('dotnet', 'run', '--project', 'DotNetCalculator')
    .build();

_connection.onDisconnect = () => {
    alert('Connection lost, restarting...');
    _connection = new ConnectionBuilder()
        .connectTo('dotnet', 'run', '--project', 'DotNetCalculator')
        .build();
};
```

Keep in mind that you can maintain state in .Net, since the executable starts when a connection is made and keeps running (listening for requests) until the connection is closed (connection.close() in NodeJs/Electron) or there's an exception. When that happens, the state is lost. This behavior might not be ideal for some people.

What I feel inclined to do is to have the error surface in the first argument of the NodeJs callback for a request, much like is customary in asynchronous APIs in Node. For example:

```js
connection.send('requestType', args, (err, data) => {
    if (err) {
        //handle error
        return;
    }
    //otherwise handle the data
});
```

Another thing that might be useful is the ability to initiate a request from either end of the connection. Right now we can create a connection from NodeJs to a .Net application and send requests from Node to .Net. There's no reason why we couldn't add the ability to send requests from .Net to Node after a connection is established.

Another aspect that needs improvement, this one particular to the .Net implementation, is the way requests are currently handled. Right now, while a request handler is running, stdin is not monitored. That means that if you have a request handler that takes a long time to run, subsequent requests originating from Node will be queued in the .Net application until the long-running request finishes.

Also, more options for registering request handlers would be welcome.
Currently there are five different ways to register a handler for a request type:

```csharp
//handler for request of type requestType with no arguments or return value
void On(string requestType, Action handler)

//handler for request of type requestType with argument of type T and no return value
void On<T>(string requestType, Action<T> handler)

//handler for request of type requestType with argument of type TIn and return type TOut
void On<TIn, TOut>(string requestType, Func<TIn, TOut> handler)

//async handler for request of type requestType with argument of type T and no return value
void OnAsync<T>(string requestType, Func<T, Task> handler)

//async handler for request of type requestType with argument of type TIn and return type TOut
void OnAsync<TIn, TOut>(string requestType, Func<TIn, Task<TOut>> handler)
```

It is possible to use dynamic for the argument type.

In terms of other things that are missing, documentation is certainly one of them. Specifically, documentation on how to use ElectronCGI in Electron with React, Angular, Vue, (why not?) Blazor, and any other web UI framework I might be missing.
https://www.blinkingcaret.com/2019/02/27/electron-cgi/
IO_GETEVENTS(2)              Linux Programmer's Manual             IO_GETEVENTS(2)

NAME
       io_getevents - read asynchronous I/O events from the completion queue

SYNOPSIS
       #include <linux/aio_abi.h>    /* Definition of *io_* types */
       #include <sys/syscall.h>      /* Definition of SYS_* constants */
       #include <unistd.h>

       int syscall(SYS_io_getevents, aio_context_t ctx_id,
                   long min_nr, long nr, struct io_event *events,
                   struct timespec *timeout);

       Note: glibc provides no wrapper for io_getevents(), necessitating the
       use of syscall(2).

VERSIONS
       The asynchronous I/O system calls first appeared in Linux 2.5.

CONFORMING TO
       io_getevents() is Linux-specific and should not be used in programs
       that are intended to be portable.

NOTES
       You probably want to use the io_getevents() wrapper function provided
       by libaio.

BUGS
       An invalid ctx_id may cause a segmentation fault instead of generating
       the error EINVAL.

SEE ALSO
       io_cancel(2), io_destroy(2), io_setup(2), io_submit(2), aio(7), time(7)

COLOPHON
       This page is part of release 5.13 of the Linux man-pages project. A
       description of the project, information about reporting bugs, and the
       latest version of this page can be found on the Linux man-pages
       project website.

Linux                            2021-03-22                        IO_GETEVENTS(2)

Pages that refer to this page: io_cancel(2), io_destroy(2), io_setup(2), io_submit(2), syscalls(2), aio(7), signal(7)
https://man7.org/linux/man-pages/man2/io_getevents.2.html
equivalent to pyb.USB_VCP() or lobo's machine.stdin_get() - philwilkinson last edited by

If you have a Pycom board running a looping script on a LiPo battery out in the field, it would be really useful for it to detect when a USB cable has been connected, and exit out of the script. Does anyone know if there is an equivalent to the Pyboard's pyb.USB_VCP() class; specifically the vcp.any() method? This returns True when a USB connection is detected. Lobo's MicroPython port has the machine.stdin_get() function, which takes a timeout parameter, making it non-blocking. This does not detect the USB connection, but it does allow checking whether a character is entered within a time period before continuing on with the script.

@rcolistete You can always connect Vin via a voltage divider to a GPIO pin.

- rcolistete last edited by

What about detecting a micro-USB cable that is only charging a LoPy + Expansion Board? It would be useful, because measuring only the voltage of the battery doesn't tell you whether the battery is discharging or charging (at least at the beginning of the discharge process).

@philwilkinson A Pycom device uses a UART, not a USB peripheral, for the REPL, unlike the Pyboard. However, what you can do is use uart.any() to tell whether there is a character from the UART REPL interface waiting:

```python
from machine import UART

uart = UART(0, 115200)
if uart.any():
    ...
```

That will not detect the cable, but it will detect the first character sent. It will, however, not detect characters from a telnet session. You can also connect one of the modem control signals of the USB/UART bridge to a GPIO pin.
https://forum.pycom.io/topic/3980/equivalent-to-pyb-usb_vcp-or-lobo-s-machine-stdin_get
By: Heather Miller, Martin Odersky, and Philipp Haller

Updated September 15th, 2013

Functional programming languages are regularly touted as an enabling force as an increasing number of applications become concurrent and distributed. However, managing closures in a concurrent or distributed environment, or writing APIs to be used by clients in such an environment, remains considerably precarious: complicated environments can be captured by these closures, which regularly leads to a whole host of potential hazards across libraries/frameworks in Scala's standard library and its ecosystem when closures are used incorrectly.

This SIP outlines an abstraction, called spores, which enables safer use of closures in concurrent and distributed environments. This is achieved by controlling the environment which a spore can capture. Using an assignment-on-capture semantics, certain concurrency bugs due to capturing mutable references can be avoided.

In the following example, an Akka actor spawns a future to concurrently process incoming requests.

Example 1:

```scala
def receive = {
  case Request(data) =>
    future {
      val result = transform(data)
      sender ! Response(result)
    }
}
```

Capturing sender in the above example is problematic, since it does not return a stable value. It is possible that the future's body is executed at a time when the actor has started processing the next Request message, which could originate from a different actor. As a result, the Response message of the future might be sent to the wrong receiver.

The following example uses Java serialization to serialize a closure. However, serialization fails with a NotSerializableException due to the unintended capture of a reference to an enclosing object.
Example 2:

```scala
case class Helper(name: String)

class Main {
  val helper = Helper("the helper")

  val fun: Int => Unit = (x: Int) => {
    val result = x + " " + helper.toString
    println("The result is: " + result)
  }
}
```

Given the above class definitions, serializing the fun member of an instance of Main throws a NotSerializableException. This is unexpected, since fun refers only to serializable objects: x (an Int) and helper (an instance of a case class).

Here is why the serialization of fun fails: since helper is a field, it is not actually copied when it is captured by the closure. Instead, when accessing helper, its getter is invoked. This can be made explicit by replacing helper.toString with the invocation of its getter, this.helper.toString. Consequently, the fun closure captures this, not just a copy of helper. However, this is a reference to class Main, which is not serializable.

The above example is not the only situation in which a closure can capture a reference to this or to an enclosing object in an unintended way. Thus, runtime errors when serializing closures are common.

Spores have a few modes of usage. The simplest form is:

```scala
val s = spore {
  val h = helper
  (x: Int) => {
    val result = x + " " + h.toString
    println("The result is: " + result)
  }
}
```

In this example, no transformation is actually performed. Instead, the compiler simply ensures that the spore is well-formed, i.e., anything that's captured is explicitly listed as a value definition before the spore's closure. This ensures that the enclosing this instance is not accidentally captured, in this example.

Spores can also be used in for-comprehensions:

```scala
for {
  i <- collection
  j <- doSomething(i)
} yield s"${capture(i)}: result: $j"
```

Here, the fact that a spore is created is implicit; that is, the spore marker is not used explicitly. Spores come into play because the underlying map method of the type of doSomething(i) takes a spore as a parameter.
The capture(i) syntax is an alternative way of declaring captured variables, in particular for use in for-comprehensions.

Finally, a regular function literal can be used as a spore. That is, a method that expects a spore can be passed a function literal, so long as the function literal is well-formed.

```scala
def sendOverWire(s: Spore[Int, Int]): Unit = ...
sendOverWire((x: Int) => x * x - 2)
```

The main idea behind spores is to provide an alternative way to create closure-like objects, in a way where the environment is controlled. A spore is created as follows.

Example 3:

```scala
val s = spore {
  val h = helper
  (x: Int) => {
    val result = x + " " + h.toString
    println("The result is: " + result)
  }
}
```

The body of a spore consists of two parts: a sequence of value definitions (the spore header), followed by a closure. In general, a spore { ... } expression has the following shape. Note that the value declarations in the header can be implicit but not lazy.

Figure 1:

```scala
spore {
  val x_1: T_1 = init_1
  ...
  val x_n: T_n = init_n
  (p_1: S_1, ..., p_m: S_m) => {
    <body>
  }
}
```

The types T_1, ..., T_n can also be inferred.

The closure of a spore has to satisfy the following rule: all free variables of the closure body have to be either parameters of the closure, declared in the preceding sequence of value definitions, or marked using capture (see the corresponding section below).

Example 4:

```scala
case class Person(name: String, age: Int)

val outer1 = 0
val outer2 = Person("Jim", 35)

val s = spore {
  val inner = outer2
  (x: Int) => {
    s"The result is: ${x + inner.age + outer1}"
  }
}
```

In the above example, the spore's closure is invalid and would be rejected during compilation. The reason is that the variable outer1 is neither a parameter of the closure nor one of the spore's value declarations (the only value declaration is val inner = outer2).

In order to make the runtime behavior of a spore as intuitive as possible, the design leaves the evaluation semantics unchanged compared to regular closures. Basically, leaving out the spore marker results in a closure with the same runtime behavior.
For example,

```scala
spore {
  val l = this.logger
  () => new LoggingActor(l)
}
```

and

```scala
{
  val l = this.logger
  () => new LoggingActor(l)
}
```

have the same behavior at runtime. The rationale for this design decision is that the runtime behavior of closure-heavy code can already be hard to reason about. It would become even more difficult if we introduced additional rules for spores.

The type of a spore is determined by the type and arity of its closure. If the closure has type A => B, then the spore has type Spore[A, B]. For convenience we also define spore types for two or more parameters. In Example 3, the type of s is Spore[Int, Unit].

Implementation

The spore construct is a macro which checks the rules described above and expands the spore's body (see below). The Spore trait for spores of arity 1 is declared as follows:

```scala
trait Spore[-T, +R] extends Function1[T, R]
```

For each function arity there exists a corresponding Spore trait of the same arity (called Spore2, Spore3, etc.).

Regular function literals can be implicitly converted to spores. This implicit conversion has two benefits: it keeps call sites light-weight, and it applies only to function literals that satisfy the spore rules. This conversion is defined as a member of the Spore companion object, so it is always in the implicit scope when a function literal is passed as a method argument where a Spore is expected. For example, one can do the following:

```scala
def sendOverWire(s: Spore[Int, Int]): Unit = ...
sendOverWire((x: Int) => x * x - 2)
```

This is arguably much lighter-weight than having to declare a spore before passing it to sendOverWire. In general, the implicit conversion will be successful if and only if the function literal is well-formed according to the spore rules (defined above in the Design section). Note that only function literals can be converted to spores. This is due to the fact that the body of the function literal has to be checked by the spore macro to make sure that the conversion is safe. For named function values (i.e., not literals), on the other hand, it is not guaranteed that the function value's body is available for the spore macro to check.
To enable the use of spores with for-comprehensions, a capture syntax has been introduced to assist in the spore checking. To see why this is necessary, let's start with an example. Suppose we have a type for distributed collections:

```scala
trait DCollection[A] {
  def map[B](sp: Spore[A, B]): DCollection[B]
  def flatMap[B](sp: Spore[A, DCollection[B]]): DCollection[B]
}
```

This type, DCollection, might be implemented in a way where the data is distributed across machines in a cluster. Thus, the functions passed to map, flatMap, etc. have to be serializable. A simple way to ensure this is to require these arguments to be spores. However, we would also like for-comprehensions like the following to work:

```scala
def lookup(i: Int): DCollection[Int] = ...

val indices: DCollection[Int] = ...

for {
  i <- indices
  j <- lookup(i)
} yield j + i
```

A problem here is that the desugaring done by the compiler for for-comprehensions doesn't know anything about spores. This is what the compiler produces from the above expression:

```scala
indices.flatMap(i => lookup(i).map(j => j + i))
```

The problem is that (j => j + i) is not a spore. Furthermore, making it a spore is not straightforward, as we can't change the way for-comprehensions are translated. We can overcome this by using the implicit conversion introduced in the previous section to convert the function literal implicitly to a spore. However, continuing with this example, it's evident that the lambda still has the wrong shape. The captured variable i is not declared in the spore header (the list of value definitions preceding the closure within the spore), as a spore demands. We can overcome this using the capture syntax, an alternative way of capturing paths.
That is, instead of having to write:

```scala
{
  val captured = i
  j => j + captured
}
```

one can also write:

```scala
(j => j + capture(i))
```

Thus, the above for-comprehension can be rewritten using spores and capture as follows:

```scala
for {
  i <- indices
  j <- lookup(i)
} yield j + capture(i)
```

Here, i is "captured" because it occurs syntactically after the arrow of another generator (it occurs after j <- lookup(i), the second generator in the for-comprehension).

Note: anything that is "captured" using capture may only be a path. A path (as defined by the Scala Language Specification, section 3.1) is:

- C.this, where C references a class.
- p.x, where p is a path and x is a stable member of p.
- C.super.x or C.super[M].x, where C references a class and x references a stable member of the super class or designated parent class M of C.

The reason why captured expressions are restricted to paths is that otherwise the two closures (x => <expr1> + capture(<expr2>)) and (x => <expr1> + <expr2>) (where <expr1> and <expr2> are not just paths) would not have the same runtime behavior, because in the first case the closure would have to be transformed in a way that would evaluate <expr2> "outside of the closure". Not only would this complicate the reasoning about spore-based code (see the section Evaluation Semantics above), but it is not clear what "outside of the closure" even means in a context such as for-comprehensions.

An invocation of the spore macro expands the spore's body as follows. Given the general shape of a spore as shown above, the spore macro produces the following code:

```scala
new <spore implementation class>[S_1, ..., S_m, R]({
  val x_1: T_1 = init_1
  ...
  val x_n: T_n = init_n
  (p_1: S_1, ..., p_m: S_m) => {
    <body>
  }
})
```

Note that, after checking, the spore macro need not do any further transformation, since implementation details such as unneeded remaining outer references are removed by the new backend intended for inclusion in Scala 2.11.
It's also useful to note that in some cases these unwanted outer references are already removed by the existing backend. The spore implementation classes follow a simple pattern. For example, for arity 1, the implementation class is declared as follows:

```scala
class SporeImpl[-T, +R](f: T => R) extends Spore[T, R] {
  def apply(x: T): R = f(x)
}
```

Similar to regular functions and closures, the type of a spore should be inferred. Inferring the type of a spore amounts to inferring the type arguments when instantiating a spore implementation class:

```scala
new <spore implementation class>[S_1, ..., S_m, R]({
  // ...
})
```

In the above expression, the type arguments S_1, ..., S_m, and R should be inferred from the expected type. Our current proposal is to solve this type inference problem in the context of the integration of Java SAM closures into Scala. Given that it is planned to eventually support such closures, and to support type inference for these closures as well, we plan to piggyback on the work done on type inference for SAMs in general to achieve type inference for spores.

We now revisit the motivating examples described above, this time in the context of spores. The safety of futures can be improved by requiring the body of a new future to be a nullary spore (a spore with an empty parameter list). Using spores, Example 1 can be rewritten as follows:

```scala
def receive = {
  case Request(data) =>
    future(spore {
      val from = sender
      val d = data
      () => {
        val result = transform(d)
        from ! Response(result)
      }
    })
}
```

In this case, the problematic capturing of this is avoided, since the result of this.sender is assigned to the spore's local value from when the spore is created. The spore conformity checking ensures that within the spore's closure, only from and d are used.
Using spores, Example 2 can be rewritten as follows:

```scala
case class Helper(name: String)

class Main {
  val helper = Helper("the helper")

  val fun: Spore[Int, Unit] = spore {
    val h = helper
    (x: Int) => {
      val result = x + " " + h.toString
      println("The result is: " + result)
    }
  }
}
```

Similar to Example 1, the problematic capturing of this is avoided, since helper has to be assigned to a local value (here, h) before it can be used inside the spore's closure. As a result, fun can now be serialized without runtime errors, since h refers to a serializable object (a case class instance).
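The assignment-on-capture semantics that spores enforce is not specific to Scala. As an illustrative aside (not part of the SIP), the same hazard and the same fix can be sketched in Python, whose closures also capture variables by reference rather than by value:

```python
# Late binding: the closure sees the *current* value of `sender` when it
# runs, analogous to the unstable this.sender capture in Example 1.
sender = "actorA"
reply = lambda: sender
sender = "actorB"
assert reply() == "actorB"  # the mutation leaked into the closure

# Assignment-on-capture: snapshot the value when the closure is created,
# mirroring the spore header `val from = sender`.
sender = "actorA"

def make_reply(snapshot):
    # `snapshot` plays the role of a spore's value declaration: it is
    # bound once, at creation time, and the closure refers only to it.
    return lambda: snapshot

safe_reply = make_reply(sender)
sender = "actorB"
assert safe_reply() == "actorA"  # unaffected by the later mutation
```

The difference between the two closures is exactly what the spore macro checks for statically: the second one closes only over values bound in its "header".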
http://docs.scala-lang.org/sips/pending/spores.html
Top React Libraries — Measurements, Charts, and Videos

To make developing React apps easier, we can add libraries that simplify common tasks. In this article, we'll look at some popular libraries for React apps.

React Measure

React Measure is an easy-to-use library that lets us get various dimensions of elements on the screen.

To install it, we can run:

```
npm i react-measure
```

Then we can use it by writing:

```js
import React from "react";
import { withContentRect } from "react-measure";

function App({ measureRef, measure, contentRect }) {
  return (
    <div>
      <div ref={measureRef}>
        hello world
        <pre>{JSON.stringify(contentRect, null, 2)}</pre>
      </div>
    </div>
  );
}

export default withContentRect("bounds")(App);
```

We get the measurements with the contentRect prop. measureRef is passed into the element that we want to get the size of. Then we can use the withContentRect higher-order component with it. 'bounds' means we get the dimensions of the bounds. We can also use it to get the dimensions of the offsets, margins, and more.

ReactPlayer

ReactPlayer is a library that we can use to embed videos from various sources. It supports embedding YouTube, Facebook, Twitch, SoundCloud, Streamable, Vimeo, Wistia, Mixcloud, and DailyMotion videos.

To install it, we run:

```
npm i react-player
```

Then we can use it by writing:

```js
import React from "react";
import ReactPlayer from "react-player";

export default function App() {
  return (
    <div>
      <ReactPlayer url="" />
    </div>
  );
}
```

We just pass in the ReactPlayer component with the URL of the video set in the url prop. In addition, we can change various options like looping, controls, width, height, other styles, a player icon, volume, and more. They're all available via props passed to ReactPlayer, so we can write:

```js
import React from "react";
import ReactPlayer from "react-player";

export default function App() {
  return (
    <div>
      <ReactPlayer url="" controls />
    </div>
  );
}
```

to show the controls with the controls prop, for example.
react-chartjs-2

react-chartjs-2 is a port of the Chart.js library for React. It comes with various React components that let us add many kinds of graphs.

To install it, we run:

```
npm i react-chartjs-2 chart.js
```

Then we can use it by writing:

```js
import React from "react";
import { Line } from "react-chartjs-2";

const data = {
  labels: ["Jan", "Feb", "Mar", "Apr", "May", "Jun"],
  datasets: [
    {
      label: "apple",
      data: [33, 53, 85, 41, 44, 65],
      fill: true,
      backgroundColor: "red",
      borderColor: "darkred"
    },
    {
      label: "orange",
      data: [33, 25, 35, 51, 54, 76],
      fill: false,
      borderColor: "orange"
    }
  ]
};

export default function App() {
  return (
    <div>
      <Line data={data} height={300} options={{ maintainAspectRatio: false }} />
    </div>
  );
}
```

The data object has the data that we pass into the data prop. It includes the labels property with the x-axis labels. datasets has the datasets for the lines. label has the content for the legend. data has the y-coordinates for the points. fill set to true means the area between the line and the x-axis is filled. backgroundColor is the color for the fill. borderColor has the color for the line.

We pass the whole object as the value of the data prop of the Line component. Also, we set the height with the height prop. options has extra options, like whether to maintain the aspect ratio. Many other kinds of charts, like bars, doughnuts, and scatter charts, are supported.

Conclusion

React Measure lets us get measurements of a given element. ReactPlayer is a component that lets us embed videos from various sources. react-chartjs-2 is a Chart.js port for React that lets us add many kinds of charts easily.
https://hohanga.medium.com/top-react-libraries-measurements-charts-and-videos-971b21a3bea6
November 2014

I started to write this text after having seen the exhibition Between Realities at Dunkers. For various reasons I never completed it. Then, come spring 2017, I felt that I had to polish it a little and publish it. It was my reading of Charlotte Cotton's The Photograph as Contemporary Art, a text I have difficulties really appreciating, which prompted me to continue. It never happened, though.

19 August 2014 marked the 175th anniversary of the art of photography. At least if we take the release of Daguerre's patent as the starting point of photography. Of those 175 years, about 50 were good years for photojournalism. Its Golden Age started in the 1930s.

Figure 1. The number of sales of paintings recorded at major auction houses from 1970 to 2013. The four graphs represent four segments. From top to bottom (by their year-2000 values) these are: (1) Impressionism and modern (solid curve), (2) post-World War II and contemporary (short dashed), (3) American (long dashed), and (4) Latin American (long and short dashed). For more precise definitions of the categories, please refer to Kräussl et al. (2014).

The glossy magazines continued for another 20 years, though. But then: the Golden Age of photojournalism ended in the 1970s, when many photo magazines ceased publication. They found that they could not compete with other media for the advertising revenue needed to sustain their large circulations and high costs. Those other media were, in large part, commercial television. Interestingly, now thirty years after the demise of the photo magazines, we see the daily newspapers losing advertising revenue as advertising moves to the Internet. It is also interesting to note that the genre started because of one technical innovation, and that its decline began because two others changed the viability of the business models of the press: television and the Internet.

From A WAY OF LIFE.
This is
This is where
This is where I
This is where I'm from

Hommage à J.H. Engström

Be that as it may, curators have recently been even more inclined to summarize their stories of Swedish photography in a number of retrospective exhibitions here in Sweden. There have also been group exhibitions following some threads from the past into the present. Less than half a year ago one opened at Moderna Malmö, and it has now moved to the Stockholm head office: A WAY OF LIFE: Swedish Photography from Christer Strömholm until Today.

The whole idea of this uncompleted entry is to correlate the demise of the glossy magazine with the rise of photography as an art form, and to see how the prices of photography follow those of art into the hypothetical bubbles of the western economies. As usual, reality isn't as easy to grasp as you believe when you start formulating hypotheses. A lot of things distracted me: someone didn't want to share their data, and I bought a new computer; going from SUSE to Ubuntu forced me to migrate my environment, and when that was done there was no energy left for what is important: writing and making photographs. My attention was drawn in other directions, most importantly to social media, notably Twitter, which didn't require a whole computer and permitted me to sit and participate in the global exchange of fast-food-like content.

Kristoffer Arvidsson, Louise Wolthers & Niclas Östlind (editors), 2014. Between Realities: Photography in Sweden 1970–2000. Bokförlaget Arena, Lund.

Charlotte Cotton, 2014. The Photograph as Contemporary Art. Thames & Hudson.

A. E. Scorcu & R. Zanola, 2011. "Survival in the Cultural Market: The Case of Temporary Exhibitions," Working Paper Series 36_11, The Rimini Centre for Economic Analysis.

Kenneth Wieand, Jeff Donaldson & Socorro Quintero, 1998. "Are Real Assets Priced Internationally? Evidence from the Art Market," Multinational Finance Journal, vol. 2(3), pages 167-187, September.
Nandini Srivastava & Stephen Satchell, 2012. "Are There Bubbles in the Art Market? The Detection of Bubbles when Fair Value is Unobservable," Birkbeck Working Papers in Economics and Finance 1209, Birkbeck, Department of Economics, Mathematics & Statistics.

Roman Kräussl, Thorsten Lehnert & Nicolas Martelin, 2014. "Is there a Bubble in the Art Market?," LSF Research Working Paper Series 14-07, Luxembourg School of Finance, University of Luxembourg.

Jeffrey Pompe, 1996. "An Investment Flash: The Rate of Return for Photographs," Southern Economic Journal, vol. 63(2), pages 488-495.

In the background are two of Francesca Woodman's Caryatids. On the table in the foreground is a large sample of photobooks about Francesca.

Francesca Woodman took her own life in 1981, at 22 years of age. By then she had become a serious, hardworking and very good artistic photographer. She had started her exploration of art photography when she was 13. When considering these early works, we are talking about the serious endeavour of a precocious, talented teenager, not child's play. Much of her work is self-portraiture, and she often portrays herself undraped. There are several female photographers who have earned well-deserved fame for (among other things) photographing naked humans, such as Imogen Cunningham and Ruth Bernhard. They both depicted naked women, but as far as I know, neither Cunningham nor Bernhard turned their cameras towards themselves, with or without clothes.

Francesca Woodman's work is on exhibition, entitled On Being an Angel, at Moderna Museet, Stockholm. According to Anna Tellgren, the curator, the reason why Woodman used herself as a model was just practical1. She was there herself when she needed one, at a lower cost than the alternatives, and there would never be any problems with model release contracts. However, I am sure there is more to it than that. There were brilliant contemporaries, like Cindy Sherman, who started similar projects in the late 1970s.
Photographers who have since become important players on the photography scene. The interest in self-portraiture has increased through the decades. It is now a fairly common genre, and I think it is more common among women than men. This text is about my attempt to understand that difference between the sexes.

Emmett Gowin, from Landscape Stories on Vimeo.

The only things I know about Edith Gowin are that she is a beautiful woman and that her husband is the photographer Emmett Gowin. He is a well-known photographer who earned some of his fame for portraits of his often scantily clad wife. The two celebrated their golden wedding anniversary several years ago. In the interviews you find of them (for example on YouTube) they still seem to be a loving couple. It may be that Emmett doesn't take as many photographs of Edith now as he did 40-50 years ago.

Other husbands and photographers take photos of their wives. Some go far artistically, such as Alfred Stieglitz (1864-1946) in his portraits of Georgia O'Keeffe (1887-1986). They arose from an intense love story, marriage and a long relationship.2 Edward Weston shot countless nudes, of which, according to Robert Adams, most are fairly uninteresting, with the exception of two full-length nudes of Tina Modotti and five of Charis Wilson.3 These photos are to be found on many web sites, with suitable searches in Google Images. Many photos by much lesser artists than Weston deserve even less praise. It may be that most such pictures shouldn't really have been taken.

One of the best, earliest and still extremely influential theoretical analyses of the nude in western art is in Ways of Seeing by John Berger4. His arguments can be summarized in contrasting statements. For example: Berger argues that to be naked is to be without disguise, and to be nude is to be on display, to carry one's nakedness as a disguise. To be disguised without clothes is like being condemned to never be naked.
In the western nude cliché, the model is there, on display, disguised in her nakedness for the pleasure of the male viewer. I could go on giving examples of contrasts from Berger's treatment. What interests me, though, is how we can discuss Woodman's self-portraiture in the light of Berger's contrasting statements. Then we have Edith and Emmett Gowin, and those seven out of hundreds of nudes by Weston that Robert Adams felt were OK.

Interestingly, Berger talks of hundreds of thousands of nude oil paintings (and perhaps millions of photographs) that fit the cliché he describes, and a few hundred exceptions that do not. Berger describes them in terms of the strength of the painter's vision:

In each case the painter's personal vision of the particular woman he is painting is so strong that it makes no allowance for the spectator. The painter's vision binds the woman to him so that they become as inseparable as couples in stone. The spectator can witness their relationship - but he can do no more: he is forced to recognize himself as the outsider he is. He cannot deceive himself into believing that she is naked for him. [my emphasis]

I think John Berger's description of these exceptional paintings fits Emmett's photos of Edith as well. The two did this together; they love each other and both of them engage in this game for two. You are allowed to see some of what they did, but it wasn't for you. If she looks into the lens, then she is looking into Emmett's eyes, not yours.

What about Francesca Woodman? She had, being a young woman, watched herself being looked at. Indeed, she did that in the darkroom as well. She could evaluate to what extent she succeeded. She had complete control and could be nude or naked depending on what she liked on a given day, or what her artistic goals were for a given work. What we see is fiction; she can decide that she is there for the spectator.
Emmett is a documentary photographer; in an interview Edith describes how he would ask her to stay where she was until he came back with his camera. Emmett is looking at her, but she doesn't mind. Woodman the photographer does not have to wait for Francesca the model to do something worth shooting. It is easier for her to make strong statements about watching herself being looked at. Emmett could never do that.

The photo festival in Landskrona has risen like a fixed star in the Nordic photography sky. I have been there three years in a row now. Does it hold up? Does it have growing pains? I spent two whole days on this year's edition. I am not in any way disappointed, but it feels as if there was more life and movement in 2013 and 2014 than in 2015. On the other hand, I spent three whole days there last year. Hard to say; perhaps the concept was implemented better in earlier years than now. I devoted Friday to the international symposium, Sunday half to the photobook day and the rest to the exhibitions. So I have devoted more time to talk and opinion than to das Ding an sich. Perhaps a mistake. If so, it cannot be undone.

The 2015 festival managed to gather exhibitors from near and far. Too many to list, and I cannot comment on more than a fraction of them. Lars Tunbjörk's exhibition Vinter in the town hall is an obvious highlight. Duane Michals, Boris Mikhailov and Maja Forslund in the Konsthall. Duane Michals gave me the festival's best laugh with the series How photography lost its verginity on the way to the bank (yes, it really is verginity: the state or condition of being a vergin, i.e. one who has never had sex with a complete retard). It included the fantastic picture A Gursky Gherkin is Just a Very Large Pickle and the pictures collectively titled Who is Sidney Sherman. A definite highlight was the exhibition of Czech photography, in particular Jindřich Štreit.
The theme was Metamorphosis of Photography; it was moderated by Marc Prüst (curator and picture editor) and had four speakers. All four had interesting things to say. The programme describes Holzherr's presentation like this: Photo journalism has undergone fundamental changes in recent years. How does Magnum relate to that? The notion that pictures show reality can no longer be taken for granted.

It is obviously correct that the conditions of both photojournalism and documentary photography have changed drastically over the last ten years. But beneath the theme and the question lies something else as well: I am actually rather tired of the postmodernists' description of documentary photography. I do not know of any documentary photographer who claims that their form of photography produces images whose relation to reality is direct and uncomplicated. It is a rather cheap rhetorical technique to construct your opponent's position and then criticize it. (Compare, for example: Kristoffer Arvidsson, Louise Wolthers & Niclas Östlind (eds.), 2014. Between Realities: Photography in Sweden 1970–2000. Bokförlaget Arena, Lund.)

Andrea Holzherr gave a brilliant account of how both contemporary and historical photographers have handled authenticity and truth in relation to subjects such as class, inequality, and war and conflict. Finally she observed that she knew of no other technique for describing reality that obviously gives a truer result.

Corinne Vionnet's presentation was probably the most interesting. She started from touristic archetypes, or trophies, what Susan Sontag calls the tourist's photographic trophies. Most people recognize places and buildings like the Leaning Tower, the Taj Mahal, the Golden Gate Bridge, the Eiffel Tower and so on. From tourist pictures found on Google Images she synthesized dreamlike images, each one composed of 100 individual exposures. All of the type My girlfriend in front of the Eiffel Tower.
Lieko Shiga, Mishka Henner, and Tony Cairns and Kalev Erickson from the Archive of Modern Conflict, London, presented with Lars Mogensen as moderator. Once again, all the presentations were good and very interesting. The most interesting, though, was Lieko Shiga's.

Shiga is a distinctly documentary and investigative photographer. Around 2008 she settled in a small village on the Pacific shore in the Sendai region of northern Japan. She became the village's official photographic chronicler and worked with everything visual there, running a number of projects among the village's three or four hundred inhabitants and their lives as fishermen and farmers. Everything came to an abrupt end with the Tōhoku earthquake, and the tsunami that followed wiped out the entire community. The Japanese state has judged the risk of a repeat of the catastrophe to be too great and has banned permanent residence there. Shiga collected pictures and photo albums that had been buried in the rubble. These fragments of memories, together with Shiga's own documentation, form the basis of her book Rasen kaigan (Spiral Shore).

Does the festival hold up? I think so! Does it have growing pains? Perhaps. My feeling is that there were considerably fewer people this year than the two previous years, although this year you have more time. If you haven't been there yet, go!

Saturday 23 May, 13:00-16:00, Katarina Bangata 25, Södermalm, Stockholm. See Micke's blog.

We gathered around ten o'clock and had coffee, then we moved on and hung our works on the fence Micke had picked out for us based on his local knowledge! It started out with beautiful weather, but then the event very nearly rained away. When we opened the exhibition at one o'clock the sun came out, and it stayed out until it was time to close up shop at 16:00. The pictures in my part of the exhibition are a small selection from my commuting project. 07:30:43 15:41:47 16:40:36 16:56:22 17:10:55 18:19:08 19:03:39

Karl Ove Knausgård launches a furious attack on the Land of the Cyclopes, and Ebba Witt-Brattström has no time for hypocrisy any more. Neither of them is among the authors I usually read.
I have not read anything book-length by either of them. Probably never will, either. A reflection: Ebba is almost fifteen years older than Karl. Karl could have been the youngest child in Ebba's family, perhaps, theoretically, one of her sons. I am only a few years younger than Ebba, and I have experienced some of what she talks about, I believe. Because everything she talks about is actually not about gender, or male versus female, or male homosociality, but about age.

"Female geniuses are hyped, light up and go out, become stars without constellations. That is why I fight for the culture woman. Her whole cultural baggage. You must have the whole constellation," says Ebba Witt-Brattström.

If you have been a star, or in fact still are one, then at around sixty years of age you should know whether or not your genius will be marked by a constellation. Even a supernova burns out and becomes a white dwarf or a black hole. I do not doubt for a moment that many more women than men never get a constellation of their own in the patriarchy's homosocial zodiac, whose core often consists of the old boys' network. If you are a loser, then you are one regardless of sex, and many winners get no place in the zodiac either; that applies to men too. When you reach the age of about sixty, you realize that the professorships you have not been given, you never will be. The same probably goes for the chairs you have not been given in academies and learned societies. At the same time, you can now tell the truths you did not dare to tell before. That seems to be exactly what Ebba is doing, and she deserves every respect for it, even if all the truths are perhaps not experienced as such by everyone else.

An English term for indexing the Internet is spidering: the robot was likened to a spider in the web. The year 1995 is a landmark year for me. That is when I decided to change careers, to the Internet and programming. I went from academic research to Lund University Library's research and development department, which called itself NetLab.
My speciality quickly became metadata, search engines and Internet harvesting. At that time you could not search for "ål AND öl" in Lycos or WebCrawler, which were the only search engines out there. European letters simply did not work in the parochially North American search engines. l: too short for searching.

In 1996 we released a public service search robot: NWI, a Nordic search service for the World Wide Web (read our press release from that year):

We estimate that the Swedish WWW comprises almost 600,000 documents. At the time of writing (9 May 1996) our Swedish database contains 427,901 "links", of which 268,170 are indexed; that is, we know what almost half of all WWW pages in Sweden are about, and we know of almost three quarters of them. The number of known servers is 4,387. NWI's robot is currently working continuously on mapping the Swedish WWW, in order to finish before the final service is presented later this summer. The robot copies documents over the network, reads through them, and finds and saves key information in the text. The program can currently handle about 15,000 documents per day. The figure is normally lower, however, due to waiting times on the net, machine load and the like.

I suspect that I grossly underestimated the size of the Swedish web, perhaps by a factor of two, but surely not by a factor of ten. It was not NWI, however, that was first to handle European characters, but Digital Equipment's search engine AltaVista, which arrived just a few weeks before ours. We were, on the other hand, almost two years ahead of Google, and during those two years we had, at least initially, a certain edge in that we indexed local servers more deeply than AltaVista did. An interesting coincidence: Anders Ardö and I published our experiences in a paper at the 7th International World Wide Web Conference, A regional distributed WWW search and indexing service. At the same conference, Sergey Brin and Lawrence Page published The Anatomy of a Large-Scale Hypertextual Web Search Engine.
It is the only time I appeared in the same volume as Google. In hindsight it feels like the irony of fate. We stopped doing web indexing because we neither had the budget for more hard disks nor could spare staff from tasks more central to the library. We kept a few projects going for a while longer, Safari and Studera.nu for the National Agency for Higher Education. What we perhaps did not understand until afterwards was that we had tested the idea of running ad-free Internet search as a public service, under a consortium of libraries. It seemed logical: libraries are supposed to provide citizens with information. In practice there was no sustainable business model.

My father was given Dag Hammarskjöld's Vägmärken1 as a fiftieth birthday present almost exactly fifty years ago. It has stood on the shelf at home, and I started reading it back then, long ago. Now I have returned to it again; it has in fact followed me on and off for a couple of years now. I am deeply captivated by Hammarskjöld's conversations with himself and his God; they demand thought and must be digested slowly.

I am an atheist. Or so I have usually described myself, but I will readily admit that the God Hammarskjöld talks to is very unlike the one I do not believe in. On the tenth of April 1958 he writes:

Only in man has the creative evolution reached the point where reality meets itself in judgment and choice. Outside man it is neither good nor evil. Therefore, only when you descend into yourself do you experience, in the meeting with the Other, goodness as the ultimate reality - united and living, in him and through you.2

The innermost nature of this creative evolution is irrelevant to Hammarskjöld. My impression is that it may just as well be evolution by natural selection in a universe that has expanded at the speed of light since the big bang. Or not. Hammarskjöld could not care less. The spirituality lies in making the right decisions among all the free choices a person faces along the way. God is in the Other and in the ultimate reality.
For my part, I have chosen non-belief in order not to have to believe anything at all, in order not to believe what the Svenssons believe, and, finally, perhaps to set myself apart from all those I took to be the crowd. To paraphrase Ulf Peder Olrog's Filosofisk dixieland. My non-belief was formed as part of my teenage revolt, together with the youth revolt. If Svensson has a church wedding, then I cannot believe in God.

Father got the book from Aunt Valborg. They were foster siblings, Valborg and Waldemar, my father. My grandfather, Sigurd, died in 1918 of the Spanish flu, when Father was three years old. I will never learn why, but after his father's death Waldemar moved from Stockholm to his father's brother Torsten, the vicar of Glimåkra, who had four foster children in an otherwise childless marriage: my father, my aunt, Aunt Valborg and a boy I have no memory of at all. Uncle Torsten died as early as 1939, so I never met him. Pictures of him hung on the wall at home: a man with small steel-rimmed spectacles, a bushy white shock of hair and large side whiskers. He was not entirely unlike Henrik Ibsen. I imagine him as a stern, serious, educated gentleman, not unlike the bishop in Ingmar Bergman's Fanny and Alexander. I believe Father was beaten as a child, but that never happened in my childhood home. Strictly speaking, the vicar surely contributed to my atheism, through the attitudes he instilled in my father.

We are all alone. However much family and however many friends we surround ourselves with, we walk alone. Locked inside our bodies, left to our free will and to our own and others' judgments of us. On 29 July, the day after the quotation above, he asks: Did you give me this insoluble loneliness so that it would be easier for me to give you everything?3 Did the conversation with God ease the loneliness?

Give me something to die for -! Die Mauern stehen sprachlos und kalt, die Fahnen klirren im Winde.

It is not this that makes loneliness an anguish: that there is no one to share my burden, but this: that I have only my own burden to bear.4 Did God give him a burden to bear?
Was the United Nations not heavy enough? One last one: Thus shall the world be created anew each morning, forgiven - in you, by you.5

I am making an effort to complete my commuting photo project. (That does not mean that I intend to stop commuting, just that I will no longer regard it as a photo project.) The product will be a short documentary film; the video clip you see here is a prototype. I am also considering an attempt at publishing the material as a book. The video consists of about 50 still images and three video clips.

Thirty-five years ago we travelled to Greece, Gertrud and I. We were younger then than our youngest child is today. Thirty-five years is a long time, more than half my life. Last week we were there again.

Gonna take a sentimental journey / Gonna set my heart at ease / Gonna make a sentimental journey / To renew old memories. (Sentimental Journey)

If I live to be as old as my father, thirty-five years is the remainder of my life. Perhaps I will travel back to Greece again, but in that case probably not to Athens. There are many other roads we want to walk together, Gertrud and I.

I had a camera with me back then too. Tri-X Pan, 400 ASA. This time I set my new digital camera to Black & White. It felt right that way. Some negatives from the trip remain in a binder. Not that many, though. What makes that trip special is that my old Nikon F ended up in the backpack after I lost my fine Gossen exposure meter up on Mount Lycabettos. It took me months to get hold of an identical one, a better used one. At the same time I was starting my doctoral studies, and there was too little time left over for an SLR and manual light metering. That became the end of my ambitious interest in photography as I had cultivated it for ten years, from my early teens until Gertrud and I climbed Lycabettos. Even if I had kept the light meter, my camera was of a kind that required a self-timer and a tripod to take self-portraits.
But that was not why we never took a selfie with the Parthenon in the background. One simply did not do such things back then. We did, however, keep a travel diary, written with a blue BIC ballpoint pen. On white paper, in a notebook with light blue lines and covers of equally blue cardboard. I wrote two or three of the days, Gertrud wrote ten or twelve. We had brought a glue stick, and pasted in tickets and restaurant bills. Greek salad and half a bottle of Demestica for the two of us: 100 drachmas. Just a few kronor. We drank ouzo on the balcony and made love in the subtropical night until sleep parted us. In the morning we were united again; young sweaty bodies slide easily against each other in the August heat. We did not climb Lycabettos last week.

Memory is all we have left of time gone by. It forms itself into both traumas and stories. What you do not recall, or retell again and again to others, you risk losing. The future is about dreams, fears, dread and hopes. During my mother's last two or three years we spoke on the telephone daily. Every day she asked me about my brother, the daughters-in-law and the grandchildren. A dialogue that repeated itself and became an incantation to hold on to the story of the family, and of herself. The first thing you lose is the story's structure; after that it becomes incoherent and loses its thread.

Our travel diary contains things we remember or can recall. We have probably kept them alive by returning to them in stories and conversations. Like the production of Aeschylus' tragedy the Oresteia that we saw at the theatre of Epidaurus, with Melina Mercouri in the role of Clytemnestra. The things we do not remember seem to be events both of us have forgotten. The trip through the Corinth Canal at dusk, with the canal lanterns glimmering along the sides, does not exist in our consciousness, although it ought to have had every chance of becoming as romantic as anything. Perhaps that was precisely the problem.
How large a share of all selfies will survive the expected lifetime of a smartphone?

There are buses from Athens to Cape Sounion. We could not establish how often they run. Some said once an hour, others every other hour. We waited and waited, and soon believed more in Santa Claus than in the bus to the southern tip of Attica. Thirty-five years ago we took that bus from Egypt Square. Last week we took it from Filellinon Street. Others have written better about the beauty of the place and of the Temple of Poseidon. Lord Byron:

Place me on Sunium's marbled steep, Where nothing, save the waves and I, May hear our mutual murmurs sweep; There, swan-like, let me sing and die ...

and Hjalmar Gullberg, in Vid Kap Sunion (At Cape Sounion):

This is the sea, the fountain of youth, Venus' cradle and Sappho's grave. Seldom have you seen the Mediterranean more mirror-bright, the sea of seas. ... No longer is there rejoicing and lamenting here before the sea god's altar table. Nine pillars only are the still preserved memorial words of his saga. May the work you create in the throng of men, from morning until evening glow, stand like a lyre against the sky of time, when you yourself and your god are dead!

Four books that describe the spirit of the times. Naomi Klein wrote two of them: The Shock Doctrine and This Changes Everything: Capitalism vs. the Climate. Richard G. Wilkinson wrote The Spirit Level, and Thomas Piketty Capital in the Twenty-First Century.

I heard on the news today that the USA is now almost self-sufficient in energy, and that two new murders have been committed in Biskopsgården. What do these two news items have to do with each other? Superficially nothing, but there is an undercurrent: we have problems. It is obvious that developments in the world are leading towards growing class differences and a long-term change in the world's climate. It has been obvious for quite some time that the reserves of oil and gas are not endless. In fact, the forecasts have given us only decades of continued exploitation, but now the time perspectives are changing.
The new oil rush, based on extraction from oil sands, has been presented as if we can keep the fossil society going for centuries to come. That is true insofar as we can produce greenhouse gases for several hundred years ahead. The news item implies: business as usual. Almost all marketing aims to make us individuals, not solidary members of any collective. Over the last month the municipality of Örebro has stirred up a great debate by introducing a buy-nothing month, led by the local Christian Democrats and Social Democrats. Read, for example, Jon Weman in Aftonbladet, who also quotes Brave New World (1932) by Aldous Huxley, chapter 3: Ending is better than mending. The more stitches, the less riches.

And what does this have to do with the murders in Biskopsgården? Well, the gang wars are about competition between different (admittedly criminal) businesses. In a time when everyone is encouraged to become an entrepreneur, and wealth is the measure of success, why should we not expect increased competition among the gangs? Almost all marketing aims to make us individuals, not solidary members of any collective. Remember Because You're Worth It.

On my way home today I realized that something essential was missing. The stout. The real Irish one. Not an ordinary Swedish porter, or even an Imperial Stout from Amager Bryghus. The temporal circumstances wouldn't permit a pub; at most an off-licence. I won't ask you the obvious question today: how many bottles of Guinness do you think were available at the-off-licence-on-my-way-home? I acquired a bottle of Imperial Stout. It was, by the way, from Amager Bryghus. Not bad at all. Happy St Patrick's Day! Cheers!
In public transport, bus bunching, clumping, convoying, or platooning refers to a group of two or more transit vehicles (such as buses or trains), which were scheduled to be evenly spaced running along the same route, instead running in the same location at the same time. This occurs when at least one of the vehicles is unable to keep to its schedule and therefore ends up in the same location as one or more other vehicles of the same route at the same time. (Wikipedia article: Bus bunching)

Two route 9A buses standing in each other's way outside Copenhagen Central. The A buses are meant to run so often that no timetable is needed. The most frequent one is the 5A. One morning last week there were five vehicles at the bus station outside Copenhagen C (OK, I counted both northbound and southbound buses). I suppose many 5A passengers elsewhere had a long wait for their transport. Two route 3 buses are about to arrive at the Lund C bus stop within one minute. In Lund, routes 3 and 6 follow very tight schedules. This evening the former showed exactly the same symptom as the Copenhagen A routes. The opposite is true for route 1 in Lund. This is how crowded it can be. No fun to wait for fifteen minutes and then not get a seat.

In all Lucky Luke stories, the cowboy jumps onto Jolly Jumper and rides towards the sunset. He could, for instance, ride along Christians Brygge, pass Nykredit and the Marriott hotel, Copenhagen, and continue straight ahead on Kalvebod Brygge towards the sunset. I doubt that he would be able to convince Jolly Jumper to climb into a hot air balloon gondola and disappear over the sedimentation dam at the Lund sewage treatment plant. However, both photos have something round in the upper right corner: the sun over Copenhagen, or a balloon over Lund. The similarity doesn't end there, though. There is a tree-lined dirt road going from right to left in Lund, and a bridge in Copenhagen. There is a small vessel, the harbour bus, in Copenhagen, and a water fowl in Lund.
Almost all elements are present in both photos, including air pollution in Copenhagen and water pollution in Lund.

Waldemar Lundberg, my father, was born in 1915. He was a printer and typographer, and established a small workshop around 1947. The images here show some of the equipment he kept after his retirement in 1975. He continued to work with printing and bookbinding until his death in 2007.

All type was stored in rack cabinets like this. My father had a fairly complete set of Berling Antiqua, designed by Karl-Erik Forsberg (1914-1995) in the 1950s and produced by the Berling type foundry here in Lund until 1980, when the company ended its more than 170 years in the graphic industry. The Berling type foundry was established in Copenhagen in 1750 and continued its business there until 1783. It was then re-established in Lund in 1837.

Hand composing is significantly more complicated than touch typing on a computer or typewriter. It is more like it than you would expect, though. You have to know where the types are, and they are not in alphabetical order. Think of a QWERTY keyboard, but with a large number of different space keys. Widths equal to the widths of l, n and m are just a start; there were spaces as thin as paper. Think of the work of composing whole books with justified margins! I worked several summers in the mid-1960s decomposing matrices in the composing room, and I even did some smaller composing jobs. I think I could do it again.

Type pieces for large print were stored in cases like this, standing in rows between wooden ribs. This case could well be the home of Ariella 36 pt.

Prior to his retirement, my father had ten to fifteen employees and an at times flourishing business. During a period in the mid-1960s he was the second largest Swedish printer of time cards, and he also had a not insignificant share of the Norwegian market. After his retirement he worked in this basement workshop for about thirty years.
He had various printing presses over the years, including a fairly large modern offset press during a brief period. He kept these two old vintage tabletop printing presses until his death. One he inherited from a colleague; the other had been in his possession since the 1940s. My brother, Torsten, and I donated one of them and a rack cabinet to a local museum. The rest of the equipment went to a young, talented graphic designer, Markus Sjöborg, who has a small business in Malmö. He makes good use of this and other equipment he finds out there. This matrix is for printing a text on a book spine in one of the printing presses. They are the last words typeset by my father.

This thing with social media becomes a kind of habitual behaviour. Perhaps not like smoking or drinking, but more like a mannerism or an eating disorder. A gambling addiction. Social media are my way of publishing photography. I use Flickr, Twitter and Facebook for the purpose. I have asked myself daily: Should I post a picture? Which picture? Street, or landscape, or portrait? Daily you check: how many pink stars, hearts or thumbs up do you get? Has some picture gone viral? How many are looking?

I have now played that game for five years. I have deliberately chosen not to try to get into exhibitions, or to publish books. A few pictures in print, and plenty that have been borrowed by web sites here and there, is all I can put on the profit side. No money. At the same time it is an expensive hobby; many of us amateur photographers get stuck in Gear Acquisition Syndrome (GAS).

How much do I photograph? How much do I weed out? How much do I show to others? My most important publishing channel is Flickr. There I had 3133 pictures at the beginning of December last year; now I have 2728 left. I have removed roughly 10% of them to get rid of the worst rubbish. Further weeding is needed, but I don't have the energy.
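The weeding and publishing ratios discussed here amount to a simple funnel calculation, and the arithmetic can be made explicit in a few lines of Python. The counts are the ones quoted in the post; the month count of 36 (October 2012 to November 2014) is my own reading, not a figure the post states directly.

```python
# Back-of-the-envelope publication funnel: shots taken -> shots kept -> shots published.
# All counts come from the post itself; only the arithmetic is made explicit here.

frames_total = 11238      # last raw-file sequence number (camera bought Oct 2012)
months = 36               # assumed span: Oct 2012 .. Nov 2014
frames_kept = 3390        # raw files still on disk in Nov 2014

raw_oct_2014 = 117        # raw files from October 2014
flickr_oct_2014 = 37      # pictures posted to Flickr that month

per_month = frames_total / months               # exposures per month
keep_rate = frames_kept / frames_total          # share of shots kept
publish_rate = flickr_oct_2014 / raw_oct_2014   # share of kept shots published
overall = keep_rate * publish_rate              # end-to-end publication rate

print(f"{per_month:.0f} exposures/month, keep {keep_rate:.0%}, "
      f"publish {publish_rate:.0%}, overall {overall:.0%}")
```

With these numbers the funnel comes out at roughly 312 exposures a month, a keep rate near 30%, a publish rate just over 30%, and an overall rate a little under one picture in ten, matching the post's rule of thumb of 0.3 * 0.3 = 0.09.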
Estimates of how much I photograph and how much I weed out can look like this. I did the sums in November last year. The most recently saved raw file (from the camera I bought in October 2012) then had sequence number 11238, which works out to 312 exposures per month. At that point I had 3390 files of that type left; in other words, I keep about 30% of what I shoot. It is a bit of work to calculate what share of what I keep I show to others, for example via Flickr, and I can't be bothered to do it for all 63 months I have been a Flickr member. For October 2014 there are 117 raw files on my computer, and I posted 37 pictures to Flickr during the same month. Again about 30%. The end product is 0.3*0.3 = 0.09. As a rule of thumb: I publish 1 picture in 10, at a rate of roughly one picture per day. At the time of writing, Flickr reports about 4000 views per day.

The ubiquitous photographers tried to capture whatever decisive moments they could experience outside Tivoli, Copenhagen, May 2012.

Camera Lucida is Roland Barthes' thoughts about photographs and our memories, in particular the photographs he had of his mother after her death in 1977. The book was published in 1980. It is just 120 pages, a short book. In chapter 47 (out of 48 brief texts) he concludes:

The noeme [i.e., the essence] of photography is simple, banal; no depth: 'That has been.' I know our critics: What, a whole book (even a short one) to discover something I knew at first glance?

Barthes is trying to pinpoint the essence of the photograph, not necessarily the essence of photography. He arrives at the conclusion that a photograph is basically an indication that its subject has existed and that the photographer was there to see it. All those people who take photographs everywhere are basically telling everyone that they have been there and they have seen it, whatever that was. I think that he is basically correct.
Most photographs taken are snaps, and the popularity of Instagram and Snapchat is further proof of that. In the very last chapter he says something really surprising: Photography can in fact be an art: when there is no longer any madness in it, when its noeme is forgotten and when consequently its essence no longer acts on me.

Lewis Powell, portrait by Alexander Gardner from 1865. Roland Barthes' caption under this photograph reads: He is dead, and he is going to die. We look into the eyes of a young handsome man awaiting his execution. It is a defiant gaze, though. Filled with arrogance, he is looking over your right shoulder.

Basically, Roland Barthes seems to say that art photography is boring. That is a very sweeping conclusion, and an extremely subjective one. The artsy photo is pure studium, not punctum. The former means that the cognitive effect of an image is that one looks at it and perhaps learns something from it, whereas the punctum implies an area which pricks, or triggers a minor shock or surprise. Arthur Koestler (1964) worked on a unified theory of human creativity. One of his points is somehow related to Barthes' dichotomy. Koestler proposes that human creativity has three domains, named after their cognitive effects: haha, aha and ah. They correspond to creativity within humour, discovery and the arts, respectively. Put another way: these are the three ways anything can be awesome. There are hardly any other ways, if you ask me. Following Barthes' way of reasoning, I'd say it is obvious that a punctum could contain elements of haha or aha or both. Just by the sound of it, I feel that an ah is more related to the sublime; the great or the beautiful. Something Barthes would regard, at its best, as a good studium. I doubt that Barthes would voluntarily place a punctum into one of Alfred Stieglitz's nudes of Georgia O'Keeffe or any of Ansel Adams' landscapes.

I believe my mother taught me to ride my bike in 1962. I was six years old then.
At the time I had grey trousers and suspenders, and some hair that stood on end and defied any attempt to tame it with a comb. The bicycle was green, and I loved it in spite of the fact that it was a little bit too high. To me the punctum is the perfect shadow.

Barthes concludes, in chapter 23: Last thing about the punctum: whether or not it is triggered, it is an addition: it is what I add to the photograph and what is nonetheless already there. Hence the photographer must do something interesting, or there won't be any punctum, but it is the viewer who puts it into place in the image. The punctum is in the eye of the beholder.

There are quite a few interesting image examples in Barthes' book, like the portraits by Richard Avedon of William Casby (Born a slave, from 1963) and by Alexander Gardner of Lewis Powell (also known as Lewis Payne) from 1865. Apart from the fact that both are very high quality portraits, there is an interesting connection between them. Powell attempted to murder US Secretary of State William H. Seward. He was one of four conspirators hanged for participation in the assassination of Abraham Lincoln. This was after the end of the civil war that gave Casby his freedom. You know that Powell is dead, and you know that he knew he was going to die, perhaps the same day the photo was taken. This fact alone makes me place a punctum on his handcuffed hands and another on his eyes. It is I who place it, but it was Gardner who made it possible. Is the portrait of Powell interesting in its own right? Are his attitude, posture and handcuffs enough? He is waiting for the gallows. We are all curious about the end of the story. Somehow he knew something the rest of us don't. Not yet.

A mother's hands. The very same hands that held me when I learned to ride my bike. My mother died two years ago, at 99 years and six months of age. I came an hour and a half too late, and could only say farewell after she had gone.
Perhaps one third of Camera Lucida is about Roland Barthes' mother, and his quest for a photo that could carry his memories of her. He should recognize her essential identity, the genius of the beloved face. He goes into some detail on how he searched for a nice photo of that face. At last he found one, a photo of her as a five-year-old child, standing together with her brother on a wooden bridge in a conservatory, a winter garden. He then writes: I had understood that henceforth I must interrogate the evidence of Photography, not from the viewpoint of pleasure, but in relation to what we romantically call life and death. Then he adds that he just couldn't reproduce the winter garden photo. It was too personal, and no one else would be able to place a punctum in it. Camera Lucida was Barthes' last book. He himself died a few weeks after its publication, from injuries sustained when he was run over by a laundry van (Dillon, 2011).

For me it was impossible to read Camera Lucida without thinking about mother, my own mother. We came to her a few hours after her death, and the nurses had arranged her nicely. Everything was calm. Her hands clasped over a book of psalms and a bouquet of Pinocchio roses from her garden.

The photograph should be more interesting or more beautiful than what was photographed, said Garry Winogrand (Diamonstein, 1982) in an interview a year or so after Barthes' death. The interview is entirely unrelated to Barthes, but isn't Winogrand's thinking still similar to Barthes' punctum? If so, isn't it the same as saying that the photo is in one of Koestler's creative domains?

Barthes, Roland, 1980. Camera Lucida. Vintage UK, Random House UK. London 2000.
Diamonstein, Barbara, 1981–1982. Visions and Images: American Photographers on Photography. Interviews with photographers. Rizzoli: New York.
Dillon, Brian, 2011. Rereading: Camera Lucida by Roland Barthes. The Guardian.
Koestler, Arthur, 1980. The Art of Discovery and the Discoveries of Art.
In: Bricks to Babel. Hutchinson & Company (Publishers) Ltd. London 1980. Reprinted from Arthur Koestler, 1964. The Act of Creation.

When I wanted jeans, long hair and otherwise to look like a member of The Beatles, my father looked almost exactly like this. When I wanted jeans, long hair and otherwise to look like a member of The Beatles, my father wore a knee-length coat and a small sporty tweed felt hat.

When I was seventeen I was in many ways full of confidence. We were aware of what was wrong. There were plenty of environmental problems: acid rain, DDT, PCB and mercury-dressed seed grain. There was war in Vietnam and famine in Biafra, but we were aware and knew what needed to be done. After a socialist transformation of society, everything would become much better. Not many still believed that in 1980, and when the Soviet Union fell ten years later the number of mourners was negligible. Then came a generation of smart young people who embraced a new radicalism based on individual liberty. They were exactly as convinced that it would lead to a better world as we had been, twenty years earlier, that international solidarity would do the job. Every actor in the market would pull their weight. If each of us makes wise decisions and makes good use of our freedom of choice, society as a whole will become better too. But deregulation and freedom of choice didn't solve the problems we experienced in 1970 either. The belief that a free market, guided by the Invisible Hand, will set everything right is about as naive as believing that everything gets better after the revolution. Ultimately the problem is that there has never been a free market outside Adam Smith's world of ideas and the classical economists' mathematical models. Market liberalism, just like historical materialism, is a historicism.

I ran into Sven today while waiting for the bus. He was limping and didn't look well. A blood clot in one leg, and a slipped disc. Sven will surely recover completely, but still.
He is not much older than I am, and has several years left until retirement. Every time you hear a story like that, you are reminded of where you are heading. A cliché from Swedish schlager history: When we, bent with years, walk towards the unknown, may coming generations follow in our footsteps. We pick up speed, and aim our development at increasing growth and global climate change.

Elisabeth Ohlson Wallin. Ack Sverige, Du Sköna. Karneval Förlag. Stockholm 2014.

I have now read Elisabeth Ohlson Wallin's book Ack Sverige, Du Sköna. It was launched with an exhibition at Galleri Kontrast in Stockholm. I was there last weekend and found the book. The exhibition had closed quite a while earlier, so I missed it. Now I have the book. Hannah Goldstein and Göran Segeholm saw the exhibition, but missed the book. Listen to their podcast: Elisabeth Ohlson Wallin – Vad är det vi missar? Not everyone likes everything. You don't have to like Wallin's books or exhibitions, and I won't comment on her earlier work. The only part of it I know is her controversial picture series Ecce Homo, shown in Uppsala Cathedral in 1998. Goldstein and Segeholm note Wallin's celebrity, and suggest that because of it she gets away far too easily with pictures whose timing and composition are almost amateurish, and which are full of emotional clichés. Like people passing Roma beggars without showing an ounce of empathy. The book is about precisely the dilemmas that mark our time. The poor get poorer, and marginalization grows. People uproot themselves and try to find a living somewhere else. People from Iraq, Syria, Ukraine, Somalia, Romania and Bulgaria seek opportunities for a more dignified life. Away from climate change, war, poverty and racism. Solidarity and class consciousness break down as we are subjected to relentless propaganda telling us that we are all middle class and should treat ourselves to a summer trip to Thailand and a new iPhone.
The homeless Roma beggar will probably be around for a long time yet, since there are probably not many other ways for them to put bread on the family table. Everywhere, people slowly sliding down the slope try to defend what they perceive as their rightful privileges by kicking downwards. And surely we want to keep our amortization-free mortgages? I think Wallin's book is quite brilliant, and it is simply about these changes in society. Her photography is entirely fit for purpose, and she does what documentary and street photographers ought to do. She has collected an extensive body of material and concentrated on the project for several years. Many of the pictures are very good. I never saw one with poor timing, but many that captured a decisive moment. Sweden has plenty of good photographers, but how many of them dare to take on these burning contemporary problems? So, if there is something Goldstein and Segeholm have missed, it is that times have changed.

Figure 1. My largest readership comes from Sweden. However, the total number of readers from the rest of the world exceeds my Swedish readership.

So, I suppose it might still be worthwhile for me to write in English. I have had this site since 1995, but the oldest parts are in fact from 1994, which makes twenty years of personal presence on the web. I have thought about this for a while. For example, should I keep the site or just throw it away and save a couple of hundred SEK per month? I've discussed why before. After that, I decide to keep it several times a year, whenever I pay the bill to my service provider. At times I just don't care about the site; I'm up to something else. If people have a problem with my material not being up to date, that's their problem. I don't care. Although I only periodically care for the stuff I keep here, I have been thinking about the language I write in. Now I feel that there are things that are easier to discuss in my native language, and many of my readers should find such material more accessible.
After all, Swedes are the largest single nationality visiting my site, although the total number of readers from the rest of the world exceeds my Swedish readership (Figure 1). In short, expect more material in Swedish, but I will continue to write in English whenever I expect it to be worthwhile. This means that you shouldn't really expect any significant changes here... and don't ask me why I use several hundred words to tell you why I won't make any substantial change.

He sat on his knees and started to penetrate the tiles with hammer and chisel. I pulled out my camera, and just before I pressed the shutter he stretched his back. He noticed me taking the photograph. It is hard! he said. What kind of stone is it? I asked. Granite, said he. Then he bent down and continued to penetrate the stone. Today I decided to return and have a look. New nice tiles. Slightly more gray. Don't know what he was up to. None of my business, anyway.

I've written some portraits of the lenses I've been using most. One of them is the Super Wide-Heliar 15mm f/4.5. As a matter of fact I don't use it as much as before, mainly because it has become much more of a wide angle than it used to be. On a full frame sensor it becomes super wide. Never before have I been able to capture the whole cathedral, and more. Some of its deficiencies become more obvious as well. Not that they are that important when stopping down a few steps, and some people apply special post-processing tools to deal with the vignetting in the corners. Anyway, that lens, which three years ago was mounted continuously, now lives a calm life on a shelf.

This is one of the very sharpest lenses made by mankind. Its resolution at aperture f/4, 400 lp/mm, corresponds to the maximum resolution theoretically possible at f/4; in other words it represents the calculated 'diffraction limited' performance at this aperture, says La Vida Leica in its review of the lens.
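As a sanity check on that figure, the usual rule of thumb puts the diffraction limit of an ideal lens at roughly 1/(wavelength × f-number). A quick sketch; the 550 nm wavelength is my assumption of a typical mid-spectrum value:

```python
# Diffraction-limited resolution (line pairs per mm) for an ideal lens,
# using the common approximation 1 / (wavelength * f-number).
wavelength_mm = 0.00055  # 550 nm green light, an assumed average
f_number = 4.0
limit_lp_mm = 1.0 / (wavelength_mm * f_number)
print(round(limit_lp_mm))  # in the region of the quoted 400 lp/mm
```

The approximation lands a bit above 400 lp/mm, which is consistent with a real lens being quoted at or just under the theoretical ceiling.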
To put this another way: physics prohibits the construction of a lens with higher resolving power! Before I walked away to buy it I read a handful of other reviews, including ones by Steve Huff and diglloyd. You find my shots taken with the Biogon T* 2,8/25 ZM here. Below you find a random sample from my personal favourites.

The 50mm Sonnar f/1.5 is a transplant from the Zeiss Contax rangefinder world. A real classic there. It is also a personality with a focus shift, which isn't a bug but a feature. A number of people have reviewed this lens, and some of them have really strong opinions.

Figure 1. A study in depth of field of the 50mm Sonnar lens. It is shot with my Leica M-E, so it is possible to focus this lens with an M-mount rangefinder.

The engineering behind this lens is well described in the LaVidaLeica review. Nick Devlin says. I like my Sonnar. Indeed, I love it. I've used it more than any other lens since I acquired it, so my conclusion is close to LaVidaLeica's. I've got two very good lenses with very different qualities. And I claim that I can predictably get very sharp images with the lens wide open (Figure 1). Below you find images taken with the aperture open or even wide open. You'll see the Sonnar bokeh, but there are also shots taken with the lens stopped down to aperture 5.6 or 8 or even 11. That is street or landscape photography, where the background needs to be an integrated part of the composition.

The Super Wide-Heliar was the first M mount lens I bought. I have used it extensively, in particular on my Olympus E-P2. Ken Rockwell says: Although wonderful on film, this lens is awful on the LEICA M9 because its rear nodal point is too close to the sensor. I cannot check this, because I don't own an M. There are quite a few people who use the lens on digital Leicas, and Ken is the only one who complains. The distance to the sensor must be the same on my cameras, but the problem might be smaller with a smaller sensor.
When I acquired my current body, the Ricoh GXR M, the lens all of a sudden became more super wide than it used to be. The reason is that the GXR has an APS-C sensor and the PEN a μ4/3 sensor. This triggered my decision to buy the Zeiss Biogon T* 2,8/25 ZM, which has made me less inclined to mount my Heliar. Still, it is a much wider lens and the two have very different personalities. Here are my most interesting shots taken with the Heliar. Find below a more personal sample.

There is a story connected to this photo, shown here in both colour and black and white. I visited the opening of a new exhibition at Martin Bryder Gallery. While having a good time, I saw the dog and that this child started to climb the stairs. I pulled out the camera and shot this image. The whole process is more or less automatic. I didn't even realize that she was all clad in red until I did the processing. The story is related both to how I shoot and to the workings of my camera. I have some standard settings, actually one per lens I use regularly. The Ricoh GXR allows me to store them as named configurations. So when I switch on my camera, it is set to ISO 400. For every shot it stores a raw file (in colour) and one JPG in B&W. The camera is set to aperture priority (A). The focus ring should be turned to infinity and the aperture set to 5.6. When I switch on the camera, everything is predictable. The things I see and think of are related to timing, such as actions and juxtapositions, and composition. That is geometry, leading lines, point of view and perspective. Then there are surfaces and texture, reflections and shadows and depth of field. These are the basics of photography, still difficult to master. Note that colours just don't enter my photography. I don't see them in my viewfinder, nor in live view. I saw the dog and the girl before pulling out the camera, yet I don't remember that she was a young lady in red. Colours are just too complicated. I don't understand them.
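The change in field of view between the two bodies can be sketched with the common 35mm-equivalent arithmetic. An illustrative sketch; the crop factors of 2.0 for μ4/3 and 1.5 for APS-C are the usual approximate values, not figures from the camera manuals:

```python
# 35mm-equivalent focal length = actual focal length * sensor crop factor.
def equivalent_focal_length(focal_mm, crop_factor):
    return focal_mm * crop_factor

heliar = 15  # mm, the Super Wide-Heliar
print(equivalent_focal_length(heliar, 2.0))  # on a mu4/3 PEN: 30.0mm-equivalent
print(equivalent_focal_length(heliar, 1.5))  # on the APS-C GXR: 22.5mm-equivalent
```

The larger APS-C sensor gives the shorter equivalent focal length, which is why the same lens suddenly felt "more super wide" on the GXR.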
And I have no time to think about them while shooting. By the way: of the few comments I got on this one on Flickr, all of those who mentioned the colour problem found the B&W version the one they liked most. In landscape photography you have plenty of time. I do often shoot landscape in colour, in particular when there is fog, rain or snow. I.e., good photo weather. But snow often increases the contrast, and that can be accentuated in black & white. As in this example. The comments on Flickr about these two versions of the same flower shot said that they liked the black and white better than the colour one. When a subject contains vivid colours, I invariably transform the photo to black and white. If there are few, I usually preserve the colour, unless there are special conditions that make me feel that the colours are important.

My good Flickr contact Kristina Alexandersson has published an interview with me: Creative Commons är mitt sätt att göra mitt tidsfördriv lite nyttigt (Creative Commons is my way of making my pastime a little useful). Kristina expresses herself so effusively that I blush to my ears. You should know that she is an exceptionally skilled photographer herself. Have a look at her photostream. You can express a great many thoughts with still lifes.

Figure 1. Some of my current photographic equipment. All of it is Leica M-mount. None of it is from Leitz/Leica.

The Leica brand is so strong that it influences people's secret wishes. Many buy cameras that look or work like one. This is the Leica envy. I too suffer from it. Last year I did something about it. Since November last year I've been shooting M mount (see Figure 1). I have no real Leica stuff. I could possibly afford it, but then I would have to sacrifice a lot from other parts of my life to get a marginal improvement of my kit. I've lost all my interest in the rangefinder technology, but I admire the lenses. There are high quality alternatives to Leica glass.
I own lenses from Voigtländer, Zeiss and Minolta. I acquired the Ricoh GXR Mount A12, also known as the GXR-M. That body has some advantages. It is M mount, i.e., I can use these lenses without an adapter. It is optimized for use with manual lenses. For instance, Ashwin Rao describes how he is able to do manual focus fast enough for sports photography. That impressed me, and it was the factor that settled the issue for me. Focus peaking has become even more important for me, since my eyesight has deteriorated since I ordered it. It also has really good image quality. Sean Reid was the one who convinced me that anti-aliasing filters are detrimental to image quality. Finally, the user interface is very well thought out. See Luminous Landscape's field report. The GXR-M is a true interchangeable lens camera. You mount whatever you have, possibly using an adapter. I have a Nikon F to Leica M adapter (Figure 2). It works nicely, even at infinity, for the lenses I've tested.

For about fifteen years I've been active in both the creation and the implementation of standards. I've been an evangelist for good metadata, good text encoding and sound internet standards. Good standards do help. In the library world we have the MARC bibliographic record as a prime example. Without such a standard it would have been much more difficult to establish the computerized library catalog as we know it. That concept has been the basis for a small but not insignificant branch of the software industry. The uniform acceptance of the MARC format makes it possible to exchange records on a global scale across systems from different vendors. To the success we have to add the Z39.50 protocol for bibliographic search and information retrieval, which has enabled us to cross-search virtually any library OPAC. All this is indeed useful. But proving that we've saved money is hard.
The bibliographic records are created and maintained in workflows supported by sets of complicated cataloging rules. The bibliographic processes require trained staff, often with an academic education. Now the Library of Congress has finally reached the following conclusion: A Bibliographic Framework for the Digital Age. I sincerely think that this will lead to lower costs for OPAC software. No one knows whether this will benefit the libraries or just increase the revenues of the software houses that will implement the next generation of library catalogs. I doubt that RDA will be cheaper to use than AACR2.

I met Gustav Holmberg, a history of science scholar who also has a deep interest in photography, at the butcher's shop. He was gazing intently at my camera; well, perhaps not the camera, but at my CV Super-Wide Heliar 15mm/4.5. This lens is obviously one of those he would be interested to fit on his Leica M3. Now he has started his own series of camera portraits like the ones in Tokyo Camera Style. See his Lund Camera Style (in Swedish).

Photography is visual storytelling, among other things. The stories I want to tell aren't about photographic equipment but about us human beings, our environments, our emotions and our time, inasmuch as one can learn anything about that from what I see in my own life.

Figure 1. This is what I carry with me in terms of photographic equipment. Gadgets from left to right (links take you to my corresponding Flickriver sets): (1) Nikkor-PC Auto 105mm f/2.5, (2) Voigtlander F Mount to micro four thirds adaptor, (3) Olympus PEN E-P2 with M-Rokkor 40mm f/2.0 mounted using a Leica M to micro four thirds adaptor, (4) Nikkor-O Auto 35mm f/2.0 and finally (5) Voigtländer Heliar 15mm f/4.5.

Equipment should really be unimportant. Ernst Haas is reported to have made the following remark on the question of which camera brand is best: All of them can record what you are seeing. But, you have to SEE.
(Quoted from Ken Rockwell's Your Camera Doesn't Matter. He claims that the anecdote comes from Murad Saÿen. Anyway, Rockwell's essay on photographic quality is brilliant.) However, it is the equipment that gives me the means for telling those stories. A camera consists of a lens and a body. Today a body is obsolete in two or three years' time; its longevity is decreasing and will approach that of a mobile phone. The lens is a serious matter for me, and I cannot stand the idea of acquiring new lenses just because I've bought a new body. About a month after I had bought my new digital system camera I decided that I wanted to use my old Nikon lenses, and that I wanted to build a small but useful collection of manual prime lenses (Figure 1). There are issues with using manual lenses. One of them is that you have to focus. Manually. This entry is about focusing. I illustrate it with examples from my own photography, which may not be perfectly sharp. Thom Hogan has written about using manual lenses on micro four thirds bodies. His conclusions are:

Figure 2. The first outdoor beer in the spring is most likely the best beer under the sun. Street photography using the Voigtländer Super-Wide Heliar 15mm f/4.5.

My personal take is that manual focus really only works well in two situations: where you have a stationary subject and time; and when you use depth of field and a preset focus distance (à la Henri Cartier-Bresson's shooting style). Thom Hogan

I have no problems with Henri Cartier-Bresson's shooting style. I use it all the time. I mount my 15mm Heliar using a Leica M adaptor, and at aperture 8 I have a depth of field from less than a meter to infinity (Figure 2). The Flickr group Decisive Moment is devoted to street photography in Cartier-Bresson's style. The guidelines state, among other things, that There should be a Background that contributes to the overall composition. In most cases this Background is in focus. Decisive Moment. Figure 3.
My Nikkor-O Auto 35mm f/2.0 and the 40mm M-Rokkor coexist because they occupy slightly different ecological niches. The Nikkor-O has a closest focusing distance of about 30cm, so it is almost a macro lens. At apertures 2 and 2.8 it has a very nice bokeh.

So, what was enforced by necessity, the state of lens technology between 1930 and 1960, has become an artistic characteristic of a genre. However, the Henri Cartier-Bresson shooting style is more or less implemented in hardware in all point-and-shoot compact cameras. Mobile cameras have a very large depth of field, since the very small sensor gives you a standard lens with a small focal length. I have a need to take photos of small things. Obviously a macro lens would have been ideal, but I have not yet been able to acquire one. However, my two Nikon F mount lenses are both almost macro. The 35mm lens allows me to focus on objects as close as about 25cm (Figure 3). To focus swiftly on a moving insect, or on something as close as 25cm with a paper-thin DOF, is difficult. I have tried it, but not very much. A preset focus distance won't help very much there. On the street you can choose a stage where you expect a subject may appear. You can focus on something, adjust the DOF and wait. Press the shutter at the decisive moment. This is preset focus, mentioned by Hogan. With my 35mm (Figure 3), 40mm (Figures 4 & 5) and 105mm lenses (Figure 6) I have to focus, and do so accurately. The four people in Figure 4 are a very good example of a shot with preset focus. I focused on the door behind the people when I saw them approaching, and got two shots while they passed each other. Then we have the knob on my roller blinds (Figure 5). The knob is sharp. You can find it in the lower right corner of the image. The dirt on the window is in focus as well. I took the photo just because I wanted to demonstrate how you can control the focusing of a manual lens. I've tried in vain to repeat that with an auto-focus lens.
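The preset-focus technique can be quantified with the standard hyperfocal distance formula. A quick sketch for a 15mm lens at f/8; the circle of confusion value of 0.015mm for a μ4/3 sensor is my assumption:

```python
# Hyperfocal distance H = f^2 / (N * c) + f.
# Focusing at H gives acceptable sharpness from H/2 out to infinity.
def hyperfocal_mm(focal_mm, f_number, coc_mm):
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# 15mm lens at f/8; 0.015mm circle of confusion assumed for a mu4/3 sensor.
h = hyperfocal_mm(15, 8, 0.015)
print(h / 1000)   # hyperfocal distance in metres: just under 2 m
print(h / 2000)   # near limit in metres: a bit less than 1 m, out to infinity
```

That near limit of just under a metre matches the "from less than a meter to infinity" depth of field described for the Heliar at f/8.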
There are three issues for me here: (i) the feeling of the lens when you use it, (ii) your ability to control the depth of field, and (iii) the attitude that photographic equipment is some kind of electronic consumer stuff, rather than a means for artistic creation. Manual focus lenses permit me to focus on the essentials. What I can see. Not on what the camera designers think I would like to focus on. When I fail, I cannot blame it on anyone else.

Figure 6. Two pairs of shoes. Nikkor-PC Auto 105mm f/2.5. Of the lenses discussed here, this is the hardest one to focus. But this is a point-focus-shoot capture, one of those where manual focus lenses are useless according to Hogan. You have to practice for months to do this fast. I'm not there yet, and estimate that I succeed with less than half of the images I capture with this lens.

The rangefinder camera Leica CL (also known as the Leica Minolta CL) appeared in 1973 as a joint venture between Leitz and Minolta. The collaboration did not last long, and the next version of this product was called Minolta CLE, without carrying a "Leica" or a Leitz logo. Stephen Gandy, CameraQuest.

Dandelions shot with the M-Rokkor, 30 April 2011. The aperture was 2.8, so the background became blurred.

The M-Rokkor lenses were produced by Minolta for this excellent camera. I've recently acquired one of them, the 40mm f/2.0, for use with my Olympus E-P2. On a 35mm film Leica M mount camera that lens is a standard (or normal) lens, albeit a bit shorter than normal. On my micro four thirds camera it becomes a short telephoto lens, with a focal length suitable for portrait and landscape photography. I'm not going to review the lens. Instead I'll refer to the following rather personal statement, again from Stephen Gandy: The 40/2 lenses for the CL or CLE are among the sharpest lenses I have ever used.
Stephen Gandy, CameraQuest. There is a lot of useful and interesting information to gather about rangefinder technologies; Stephen Gandy has a lot less info left to gather than most of us. I love this lens. Here you can see some photos taken with it in Flickriver.

Around 2003 I began the political footwork to establish Ediffah, a project aiming at establishing a union catalog for archival collections at the larger Swedish research libraries. As usual when you're up to something like that, you have to talk to and visit the right people. One of the right people, in this case, was the Head of Manuscript and Rare Book Collections at the National Library of Sweden (KB). Let's call him John, since that is what he is called in the TV series that came up recently on Swedish National Television. I met John in his office at KB. He wore a light grey tailored suit, a white shirt and a thick tie with broad stripes in white and dark and light blue. His suit jacket hung on his chair. What I remember most was the very whiteness of his shirt, and how he mixed intellectual brilliance with arrogance. Intellectuals are common in the library communities, but John was an odd bird with his shirt and his arrogance. We met two more times; both were project meetings within Ediffah. A handful of archivists and manuscript curators from the National Archives and the largest research libraries. John and one other member of the group had both earned their doctorates at the same department of Uppsala University. The two former colleagues apparently hated each other, but John was the only one of the two who showed it. One morning in December 2004, the unravelling of a large-scale book theft was the leading news item on the radio and in our daily paper. The value of the stolen material amounted to tens of millions of SEK. By lunchtime the identity of the thief had reached all research libraries in the nation.
A friend of mine, Birgitta Lindholm, curator of manuscripts at Lund University Library, told me her story of that morning. All libraries with anything like a manuscript collection participate in a collaboration, and the responsible curators meet regularly. They were to have a meeting that day. She picked up a paper at the airport and started to read about the theft, and about how one of the leaders at the library had been arrested. All the curators appeared at the meeting. Except John. They took for granted that he was needed in his office that morning. After his confession, and after the police had done all they have to do in a case like this, John was released while awaiting his trial. Three days later he ended his life in a most dramatic way. I once smoked a cigar with John.

Here I discuss two lenses. Both are very new to me: I've owned them for less than three months, and they represent recent developments from the respective companies building them. But apart from that they differ in almost all aspects, even in the very idea behind their existence. The Heliar is produced by Cosina Voigtländer. There is no company with that name, though. Cosina is a Japanese company building cameras and camera lenses. Voigtländer is one of the very old brands of the German optical industry. Johann Christoph Voigtländer founded the company in 1756, long before Louis Daguerre invented the photographic process.

Copenhagen harbour. Photo taken from the office of the development team at the Royal Library using the Cosina Voigtländer Superwide Heliar 15mm F4.5.

Cosina, on the other hand, was founded in 1959, during the Japanese industrial expansion after the Second World War. The company has sold very little under its own brand, but much more producing lenses and bodies for sale under brands like Nikon, Contax, Zeiss and, most recently, Voigtländer. While Cosina builds, for instance, Zeiss Ikon cameras designed by the German company, Cosina has leased and controls the Voigtländer brand.
The products sold as Voigtländer are interesting, such as lenses and bodies for the Nikon S, Contax and Leica M systems. You can actually get a modern lens produced for a camera system that Nikon discontinued in 1965. I can very much sympathize with those ideas. Cosina is designing and marketing new stuff in a more than 50-year-old tradition. And a lot of people want their products, but, yes, the products address various niche markets. The Superwide Heliar is a Leica M mount lens. I bought it because I wanted a classical wideangle lens and because it has got wonderful reviews. One that allows me to set focus at 3 meters and the aperture at 8, and then be able to expect that just about everything from one meter to infinity is within focus. It works. The Heliar is exactly that kind of lens. Excellent for candid street photography where you are not able to focus at all. Such a lens is wonderful for capturing a lot at the same time, such as a lot of water or a lot of just about anything, such as trees. The wideangle lens captures it all. For some reason, there always seems to be even more of the stuff in your photo. More water or more trees. I cannot mount the Superwide Heliar on my Micro Four Thirds (MFT) camera without an adaptor. The M.Zuiko lens came with the camera. MFT is its native language, as it was designed to be the kit lens, and I suppose that such a thing has to be as good as possible at a minimum cost. This lens is one of the basic sales arguments for the body as well as for further investments. It has got good reviews, and I think that it deserves them. Copenhagen harbour. Photo taken from the office of the development team at the Royal Library using the M.Zuiko Digital 14-42mm F3.5-5.6 Other sales arguments involve ease of use, rapid autofocus, a good zoom and whatever. The zoom starts at 14mm. It is actually wider than the Heliar. The snag is that when I mount a zoom I mentally adjust myself to zooming. I use the zoom as an aid for composing, basically for cropping.
What I'm not doing is mentally having both a wideangle and a short telephoto mounted, so that I am able to choose between a shallow and a deep depth of field. Here we come to the point that is crucial to me. Fixed focal length manual lenses make me aware of the lens. I don't like this lens because I feel that I'm not aware of what's going on when I use it. It is not the zoom, it is not the automatic focusing, but rather the fact that the zoom goes from wideangle to telephoto. I think I sort of lose the awareness of what kind of lens I'm using. The new M.Zuiko 14-150mm would be a nightmare for me, but I think I would love the new 75-300mm one, and similarly I think I would like the 9-18mm. I think a lens needs an identity. The Heliar has got the wideangle niche in my walk. Earlier this year Laurent Romary and myself authored an Internet Draft, which is the first step towards a Request For Comments (RFC). Now the text has been through the review process and we have made all the revisions required. So, finally, it has been approved by the IESG's review process. I do now understand the RFC concept. The Internet was created by people connected to the DARPA net family of projects, with the first serious experimental work starting around 1968. The people who did it were affiliated with various US universities and a few companies. I mean, you send your idea on how the Internet can be made better to your fellow researchers on the ARPA net. It seems that these people communicated their findings by storing and circulating them as oddly formatted text files on the very infrastructure that was the subject of their research. And now, more than forty years later, there is still this mix of individuals from academic and corporate research environments from all over the world. Friendly geeks who want the Internet to be a good place for its users. So, when you submit a draft you do get the comments you requested. You get a lot of them, and in our case they were all constructive and friendly.
The process is open. That's why it's called a Request For Comments, and that's why the Internet's standardization procedures work so well. According to Wikipedia, a newsagent is the owner of a newsagent's shop (or news stand in US English). The stock in his or her shop contains items that are expected to interest the market. Earlier this year I realized that some of my experience was indeed regarded as useful in more glossy contexts. All of a sudden Knoppix administration and Linux Format PHP coding are glossy. Users & Developers want to Master Linux Now. Hidden away, there is a magazine for Office 2010. I have just recently bought a new digital camera, one of the new Electronic Viewfinder Interchangeable Lens (EVIL) ones. Actually one that embodies all the 5 Reasons to Ditch Your Digital SLR. It is an Olympus PEN E-P2, a second generation Micro Four Thirds camera. Figure 1. My two cameras with accessories. The equipment in the image is: I also have an early Nikon Photomic viewfinder, but it lives somewhere in a cupboard together with my Gossen Variosix light meter. The Nikon F, configured as in the image, is capable of flash synchronization. That is the only electronics it has. One could say that I've ditched my SLR, but that I did very long ago. I've been interested in photography since I was a teenager. My father was a printer and he had a workshop where I worked during summer holidays. When I was thirteen or fourteen I used my savings, together with some extra support from my parents, to buy a Nikon F single-lens reflex camera. This must have been 1969-70. I still keep this treasure, but the configuration has improved. The original 50mm F1.4 lens was replaced with a 35mm wide angle and a 135mm tele lens. My Nikon stuff is items number 4-10 in Figure 1. During the mid eighties I lost my interest in photography, but it was (kind of) revitalized when our two children were born 1988-89.
That led to the acquisition of an easy to use 35mm compact camera, which could be carried in a pocket and swiftly pulled out to document the various milestones in the kids' development. More recently, I'd say less than five years ago, my wife and I decided that the time was ripe to buy a digital camera. A bit later still we acquired smarter phones. The digital camera was actually a really good one. I used it to take the snapshot in Figure 1. The phone caused a new development I hadn't expected. Figure 2. A summer view. This is one of the really beautiful places along the Strait of Øresund. I think myself that this is a nice picture taken with my mobile phone. It is nice from almost all perspectives except the technical ones. I blame the digital zoom, but excuses cannot change the fact that the poor image quality masks all the other qualities of the image. Once I had a serious interest in photography. I lost that. I still have all my darkroom equipment in a few cardboard boxes in the basement. The phone changed everything. I continuously carried around (what I used to regard as) a decent camera. All the images I've recently put on this site are taken with my phone. I started to think of image composition rules again. Now, web images are one thing. High resolution things you can print large copies of to put on your wall is something else. So, I copied some images to a USB stick and had the local photo store enlarge them to 20x30cm. Figure 2 was one of the images. When I did this, I hadn't looked at the image in any other way than in the phone's viewer. Figure 3. My Olympus PEN, equipped with the NIKKOR-O Auto 1:2 f=35mm using the Voigtländer adapter. See Figure 1 for the details on the various items. I just cannot tell how disappointed I was with the result. This, however, coincided with something else. I brought all my photographic chemicals to the deposit for environmentally hazardous chemicals. I felt that I somehow had lost a way to express myself.
I became obsessed with the thought of a really good camera which I could carry around without any effort. I found the Olympus PEN in the local photo store. It is a real beauty, with the right feeling. I suppose it is all plastic, but it doesn't feel like it. The decisive point was all the communities where people mentioned all the wonderful lenses they are able to put on these cameras. That is a virtue of standardization. (a) (b) Figure 4. Now I am in control again. These two details are of my Nikon F body, photographed with the configuration you see in Figure 3. In (a) I have at least tried to focus on Nikon, whereas in (b) the focus is on 3.5 on the 135mm lens. However, there are not that many different kinds of lenses available, but there are a large number of adapters. Leica, Nikon and all other kinds of lenses are now used successfully, and there are people whose interest in photography seems to boil down to finding obscure lenses to fit on their Micro 4/3 shooters. For instance, there are a lot of C-mount video and cinematic lenses available. One example is the P. Angenieux 25mm/f0.95 from the early sixties, originally a professional cine lens. Believe me, eBay has got an entirely new segment of lens collectors. I suppose that you've already realized that when you put a vintage SLR lens on a modern digital camera there are things you lose. You use them with fixed aperture and the camera chooses the exposure. This is better than what I'm used to. I don't need to bring my Gossen Variosix light meter. Also, you have to focus manually. The interesting effect of the m4/3 architecture is that my NIKKOR-O 35mm, which used to be my wide angle lens back in the seventies, is now a short telephoto lens. It is the equivalent of a 70mm lens and ideal for landscapes and portraits. In addition, its maximum aperture of 2.0 is much better than what you get today in the kit zoom lens of an ordinary DSLR.
With the wide aperture, that lens gives me the possibility to focus on just a part of an image and leave an unsharp back- or foreground (Figure 4). Figure 5. A fountain in Central Park, Lund, Sweden. An early morning, taken with the Olympus PEN kit zoom Zuiko Digital 14-42mm F3.5-5.6, which is a really nice lens. The camera as it is delivered is really good, even without vintage equipment. My NIKKOR-Q 135mm 1:3.5 is also capable of collecting more light than most modern zoom lenses. When I bought it, this was regarded as a poor lens in this respect. On the other hand, I wouldn't even have dreamt of a zoom at the time. The 135mm lens is more difficult to use than the 35mm one. It is heavier, naturally, than the other lenses and it is not that easy to focus. To increase the success rate, I'm considering bringing my tripod, but then the equipment isn't that portable anymore. I don't have more than the original Zuiko Digital kit lens (see Figure 5). It is a very nice and light lens, and with it the E-P2 is almost as light as any compact camera. I've spent much of the last fifteen years working on problems related to interoperability, portability and longevity of data, and in particular metadata in a library context. During these years I've been working a lot with XML. Early on I evangelized about XML with the fervour of a Taliban. I suppose that I contributed to the four or five years of XML hype that started at the turn of the century and culminated in 2004-2005 (cf. Figure 1, and Edd Dumbill's How Do I Hate Thee?). Since then I've become more moderate, but still embrace XML as the preferred tool for the modeling of data. Figure 1. The number of web pages appearing per year containing the phrase hate XML. There is a steep increase in occurrence a few years after the Applied XML Developer's Conference 2004. After that it seems to level off. The data points represent hits pooled for two-year periods, graphed against the first of January of the second year in each period.
Because of the way Google works, the date searches mostly hit blog entries and other syndicated material, where publication date and other metadata are known. The earliest occurrence is from 2001. Once you have grasped a technology you cannot keep up strong emotions, at least not positive ones. You see both the shortcomings and the advantages. Possibly you can become more negative as time goes by. Or that is my personal experience. I think that our relations to our technologies are a bit like human relations. There is an early period when you fall in love. Then, if you're lucky, there is a long period of friendship and love of another, deeper kind that may last much longer. Already in 2000 a friend of mine described his relation to XML as angle bracket fatigue, which I suppose is to be understood as I hate typing XML, since its syntax is just terrible. Figure 2. Search interest in Google for the term hate XML. There is hardly any connection between the change in occurrence of the hate XML emotions and search interest. People seem to search for the term a few years after they wrote about it. Developers hate XML In 2004, sellsbrothers.com convened an Applied XML Developer's Conference. I cannot recollect that I heard of that conference at the time, but there was one contribution which got quite some attention in the blogosphere. It was by Chris Anderson (AKA SimpleGeek), who gave the presentation Developers Hate XML. I haven't found the slides on the net. However, Jeff Barr summarizes it as follows: Chris's big beef with XML is that XML must be processed in isolation, using special purpose tools and languages such as XSLT. In order to use these special things, developers must become domain experts [my emphasis] in a rich and complex space that is essentially unrelated to the application itself. I think Chris (and Jeff) point out something important here. Many developers are interested in programming and computing, but not necessarily in application areas.
To parse and make something meaningful out of, say, bibliographical records or encoded text requires that you are actually interested in those areas. The same is true for fluid mechanics and natural language processing. Interest in partial differential equations and linguistics, respectively, will help. I cannot see any difference between XML and SQL in this respect. Any tool for object-relational modeling is subject to the same problems. The developers that do the modeling have to become domain experts. This is fairly obvious for XML and less so for SQL. While people form large scale standardization efforts for XML schemas, they hardly ever do so for RDBMS schemas. What XML technologies provide that RDBMSs don't is an interoperable transfer syntax. Which I suppose is the reason why most XML in the world is stored in, well, RDBMSs. I cannot recall that I've ever written somewhere that I hate someone or even something. I feel that hatred is emotional and irrational. It may be a difference between languages and cultures, though; it can also be a question of semantic inflation. When we Scandinavians say that we hate something, we mean it. I'm not sure that that's the case in the Anglo-American sphere, and I'm not even certain that's the case for young Scandinavians. However, when you're a middle-aged academic and intellectual, you may not think it's appropriate to hate. Or, at least, you may not think that it's appropriate to describe your dislikes and annoyances as hatred. I have tried, a lot, to find out what people think about technologies such as XML, SQL, NoSQL etc. Here both love and hatred are used as code words. People hate (or love) XML the same way they love (or hate) bebop, pop or hip hop music. Not the way they love their children, wife or mother. Tablets, eReaders, Smartbooks: You Name the Device and It's News at Computex, writes Katie Morgan of Advanced RISC Machines (ARM).
You may not know about it, but just as Intel rules the desktop, the British chip design company's ARM architecture rules the pockets. According to Wikipedia, 90%+ of all embedded devices and an even higher proportion of mobile devices are designed around the ARM architecture. There's a fair chance that your phone, your wireless router and your fridge are all ARM systems. Most new gadgets appear first at Computex, and most of them are based on the ARM architecture. The mobiles and the tablets are currently given prominent positions in the news feeds, and as I write this entry one of them is more often than not the top news item in Google News' Sci/Tech section. It is usually about Apple's iPhone or iPad, both of which, by the way, like the Android systems, are ARM. Intel has tried to establish itself in this area, but has not been able to increase its market share by winning customers from ARM. The tablet is a new market segment, and the race has started to gain control over this as yet uncharted land. The chip Intel tries to market is the Atom processor, which, of course, is x86. See, for instance: ARM and Intel's new battleground: the living room. At basic academic levels there are differences. Frontiers. Different academic traditions expressing themselves in people's pockets. For those who don't know the history of the Unixes, it is well worth knowing that Linux is a reverse engineered variant of the AT&T System V Unix, originally by Linus Torvalds, who at the time lived in Helsinki. There is also a free Unix coming from the Berkeley Software Distribution, BSD Unix. BSD and Linux come with very different open source licenses. Google's Android and Chromium are both Linux platforms (hence Helsinki and Stanford Universities), whereas iPhone OS is a redecorated Berkeley Unix with a Mach kernel. Yes, it is a Mach kernel, Mach as in the speed of sound, not Mac as in Macintosh.
These gadgets are somehow affiliated with the University of California and Carnegie Mellon University. It shouldn't surprise you that Steve Wozniak got his training in computer science and electrical engineering at Berkeley. Whenever there are conflicts, people form alliances, sometimes new and entirely unexpected: The good news is that we finally have an operating system to unite behind. Android is an operating system that has gained a tremendous amount of momentum all over the world [...] Android has become the fastest growing mobile operating system in the world and, in fact, it has surpassed the iPhone in terms of growth and in terms of users, said Jen-Hsun Huang, president and CEO at Nvidia. MeeGo will be hosted by the Linux Foundation and governed using the best practices of the open source development model (intel.com). Just recently MeeGo 1.0 was released. A lot of creative work focuses on MeeGo, and it seems to gain impetus; there are already quite some discussions around it and even a positive review in Ars Technica, where you can find a discussion of the prospects of the system. This is a very hardware oriented alliance, formalized as a not-for-profit company. Its mission is so interesting that I just had to quote the whole list: From linaro.org. Obviously, Linaro's mission is to ensure that there will be Linux kernels and device drivers for a large number of mobile platforms. See the point here? All of a sudden it will be as easy to install Ubuntu Linux on a smart phone as it is on a standard PC today. Anyone perceiving a pattern? There are many companies out there right now who see opportunities for innovation and business. Quite a few of them would very much like to see one or more of the actors Google, Apple and Microsoft dwindle. Furthermore, there are some that wouldn't mind a bit less influence from either Intel or AMD. I read and write mail, manage my calendars, surf and do some of my wordprocessing with my tablet.
That is, I want a hand-held gadget which enables me to be creative anywhere, anytime. This pinpoints one of the most important questions: Why do we buy gadgets for computing in the first place? The vendors have obviously radically different views on the future of computing. Recently Steve Jobs claimed that the PC as we know it is like an old truck and that the future belongs to the tablets. Steve Ballmer participated in the same conference and disputed these claims. He seemed to be convinced that the desktop PC will continue to be the workstation for the foreseeable future. Read more about what the blogosphere is saying about D. The PC as we know it will continue to morph, he said. The real question is what you are going to push. In Microsoft's case, the answer is more installations of Windows. To a man with a hammer everything looks like a nail; we have a hammer, Mr. Ballmer joked. The problem is, it isn't funny. I've posted entries here under the heading Readings on Digital Objects. This text will, I hope, not be the last posting in that series, but you should not expect many more in the near future. Scholars as well as the general public consume digital objects en masse. The range of digital genres on the menu is broad, covering everything between World of Warcraft and Shakespeare's Sonnets. An object delivered on the net via a library (in the broadest sense of that word) is a Digital Library Object. Those who have followed this series have realized that the research on what kind of data such an object should contain has been going on for more than a decade and possibly almost two. From a very early stage, the vision within the library communities has been to move the library to the Internet and particularly to the Web. The trend to establish a presence where the users are has been hysterical at times, with libraries spending staff manpower on engagement in Second Life and Facebook. Otherwise, the tactic has been the creation of repositories.
We now see a shift from the repository as the gravitational center towards the objects, the resources, themselves. There should be no need to go to the digital library and search in what is basically the hidden web. Rather, the content should be on the World Wide Web and available in the mainstream search engines. Lagoze et al. (2008) use this shift of emphasis as justification for the proposal of the new standard OAI-ORE. They formulate the new trend in a very clear way: It [the OAI-ORE standard] also reflects a recognition of the position of digital libraries vis-à-vis the Web that sometimes seem to co-exist in a curiously parallel conceptual and architectural space. We exist in a world where information is synonymous not with 'library' but with the Web and the applications that are rooted in it. In this world, the Web Architecture is the lingua franca for information interoperability, and applications such as most digital libraries must exist within the capabilities and constraints of that Web Architecture. Implementors of digital library objects have often successfully put the library collections on the web. They are also successfully disseminating simple bibliographical metadata (for example using OAI). Libraries have, however, been less successful as regards putting the objects on the Web (Lagoze, op. cit.). Another point of failure is that we have as yet been incapable of producing interoperable objects. Or, as McDonough (2008) puts it: Hence XML's similarity to a rope. Like a rope, it is extraordinarily flexible; unfortunately, just as with rope, that flexibility makes it all too easy to hang yourself. Maly and Nelson (1999) make distinctions between Dumb and Smart Objects and Dumb and Smart Archives, leading to four architectures: DODA, SODA, DOSA and SOSA. Ever since, object oriented developers have concentrated on the SODA model (Smart Objects in a Dumb Archive). Fedora is intended as an implementation of that.
I think it is time to give attention to the DODA architecture. Then all the inferencing and intelligence is deferred to an independent indexing agent which behaves more or less like an Internet search engine, through clever indexing and automatic text analysis. Lagoze, Carl, Herbert Van de Sompel, Michael L. Nelson, Simeon Warner, Robert Sanderson and Pete Johnston, 2008. Object Re-Use & Exchange: A Resource-Centric Approach. Arxiv preprint. arXiv:0804.2273v1. Maly, Kurt and Michael L. Nelson, 1999. Smart Objects, Dumb Archives — A User-Centric, Layered Digital Library Framework. D-Lib Magazine. Vol. 5(3). McDonough, Jerome, 2008. I've spent many hours searching for articles, standards and texts on digital objects, and how to model, store and deliver them. This entry is for measuring my progress. The articles are grouped with respect to the theme I feel they belong to. Thursday morning. An alarm went off in the bedroom. Very early. It was the calendar prompting me to attend a whole day event. The tablet synchronizes calendar entries from my Outlook calendar. Very nifty, I'd say. What was that? my wife asked. Oh, it was just my calendar, I answered. The standard 18 hrs Yesterday she heard a pling from my left pocket. An SMS? she enquired. Nope, an e-mail! said I. The alerts differ. I stand in the kitchen, by the stove, cooking. I write while waiting for something to get ready, typing with frenzy on my tablet. What are you up to? she asks. I'm just writing a text on my life as being connected. I bought three books (Fig. 1) at Bell's Books, 536 Emerson Street, Palo Alto. This is a large, well kept (mostly antiquarian) bookshop with knowledgeable staff. I can recommend it wholeheartedly. Fig. 1. I bought (from left to right) Wallace Stegner's Little live things, The man who loved books too much by Allison Hoover Bartlett and A hole in the water by Mae Briskin.
I bought Stegner based on the first sentence in the book, Briskin because I like the title and Hoover Bartlett's book because I've actually met a book thief. How do people choose the books they read? In spite of the fact that I'm a very digital person, I choose a lot of the books I read based on the cover, the typography, the feeling when you hold it and even their smell. Mostly, however, I choose certain authors, or read a bit of the beginning of the text, or the back cover. And yes, I read a lot of pocket books as well. Of the three books, I've as yet only read Hoover Bartlett's on the Californian book thief. I've actually met a large scale book thief. The one operating at the Swedish National Library, and who dramatically ended his life in a gas explosion. Now he'll become a film. We were in the same project group when we were building Ediffah. I have previously discussed the impact of the tablet computer and described how I've acquired one myself. Here I give some further pointers on the subject. Wired Magazine dedicated the cover and quite a few pages of its April issue to the appearance of the iPad. Here are two quotes: [The iPad] will remake both book publishing and Hollywood, because it creates a transmedia that conflates books and video. You get TV you read, books you watch, movies you touch. (Kevin Kelly) and (Steven Johnson, 2010). I found this snippet on the effect of tablets on higher education: (Evan Schnittman). I can really recommend Evan Schnittman's blog Black Plastic Glasses. Spring advances. The April weather has been staying, but nevertheless the Thinker's Tulips drop their petals. The previous object is something that came or occurred before the one at hand. The previous installment of my digital object series addressed the vocabulary used in the Atom link tag describing relations between URIs. Next means the nearest in place, degree, quality, rank, right or relation; having no similar object intervening.
For page n the previous is page n-1 and the next is n+1. I haven't made up my mind about the next installment in this series. In the previous installment we discussed up, first, previous, next and last. Now we discuss the meaning of the TYPE attribute (The National Library of Australia, Australian METS Profile TYPE attribute vocabulary, Version 1.0). This list gives terms that can be used to describe content structure. The vocabulary is organized for use in conjunction with the Metadata Encoding and Transmission Standard (METS). Using this approach permits Australians to write stuff like:

<structMap TYPE="logical">
  <div TYPE="newspaper">
    <div TYPE="issue">
      <div TYPE="edition">
        <div TYPE="supplement">
          <div TYPE="section">
            <div TYPE="article">
              <div TYPE="article part"/>
            </div>
          </div>
        </div>
      </div>
    </div>
  </div>
</structMap>

Having your content organized like this makes sense. All of a sudden you know the meaning of next. This entry is part of my series Readings on digital objects. I've been looking with envy on all the people around with flashy smart phones. All these iPhones and Androids. My problem has been: Should I acquire a phone capable of computing or a computer capable of phoning? I'm an open source - open access kind of person. I just cannot buy any i* gadget. Apple's business model is clever, but I need a device I can understand. I've been a UNIX man for more than twenty years, and to buy something without a shell would be unthinkable. On the other hand, I'm using my computer for the same reasons as everyone else, i.e., mostly surfing and communication. Which would be the same as a mobile internet device (a MID). In spite of the fact that I had settled that my next phone should be an Android one, fate would have it that I ran into the Nokia N900. This gadget weighs more than twice as much as an iPhone, but I think you should compare it with an iPad rather than with a cell phone. It is actually easier to send an email than to make a phone call. Nokia doesn't market the thing as a phone but as a very mobile computer.
Yes, I do know that the iPad has a screen which is (perhaps) six times as large as the one on the N900. But I have checked out my entire web site from my CVS repository and I'm able to run the entire rebuild machinery. This is a computer that allows creativity. I did try to install Emacs and nxml-mode, but that wasn't trivial. The OS is called Maemo, which is a Debian based Linux maintained by a community sponsored by Nokia. The development is promising, since Nokia and Intel are joining forces: their respective mobile OSes will merge into MeeGo, maintained by the Linux Foundation. The N900 is about as large as a state of the art smart phone, a bit larger when the keyboard is slid open. It is twice as thick as an iPhone, and when carried in your pocket it will bulge. Well. This is a very good little tablet computer. ... just works. Sometimes things aren't really as intuitive as you wanted. Or, perhaps, the intuition of the designers was not the one you expected, but that is common, isn't it? There's a lot of other software. OpenOffice.org is here, and then there is the whole lot of UNIX utilities. I can plot my data using Gnuplot and edit Perl scripts with vi. So what about the screen and keyboard? The buttons on this thing are smaller than on my phone. You operate with your thumbs, and it does work. The suite supplied by Nokia uses some clever T9-like algorithm for guessing what you're typing, which minimises the typing. The thing will keep you connected 24/7. It has 3D graphics, and should you like a shoot'em up: DOOM has been ported! M. Nottingham & R. Sayre, 2005. The Atom Syndication Format. RFC 4287 specifies the Atom syndication format and states the following about [t]he "atom:link" element [which] defines a reference from an entry or feed to a Web resource. This specification assigns no meaning to the content (if any) of this element. (Section 4.2.7) The reference mentioned is given in the href attribute.
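To make this concrete, here is a minimal sketch of an atom:link and how it can be read with Python's standard ElementTree. The entry id, URL and title are invented for illustration, not taken from any real feed:

```python
import xml.etree.ElementTree as ET

# A minimal Atom entry carrying one link element. All identifiers and
# URLs below are invented examples.
ATOM_ENTRY = """<entry xmlns="http://www.w3.org/2005/Atom">
  <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
  <link rel="next"
        href="http://example.org/chapter/2"
        title="Chapter 2"
        hreflang="en"/>
</entry>"""

NS = {"atom": "http://www.w3.org/2005/Atom"}

entry = ET.fromstring(ATOM_ENTRY)
link = entry.find("atom:link", NS)

# Per RFC 4287, section 4.2.7.2, a missing rel attribute means "alternate".
rel = link.get("rel", "alternate")
print(rel)                   # next
print(link.get("href"))      # http://example.org/chapter/2
print(link.get("hreflang"))  # en
```

Note how the defaulting rule from the RFC is handled in one line: the consumer simply supplies "alternate" as the fallback when reading the rel attribute.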
The RFC does, clearly, not assign any semantics to the relation embodied by the tag. However, it continues discussing the rel attribute (those of you who know the elements of <html:link/> realize that the two tags are very similar): atom:link elements MAY have a "rel" attribute that indicates the link relation type. If the "rel" attribute is not present, the link element MUST be interpreted as if the link relation type is "alternate". [...] If a name is given, implementations MUST consider the link relation type equivalent to the same name registered within the IANA (Section 4.2.7.2) The semantics of the relations are thus from a controlled list which has an international maintenance agency, IANA. In a moment I'll discuss that list, but before that I'll mention a few more useful attributes of the atom:link element: title and hreflang. The former conveys human-readable information about the link and the latter describes the language of the resource pointed to by the href attribute. Now, the vocabulary to be used for the rel attribute is maintained by the Internet Assigned Numbers Authority (IANA) and published in the document Atom Link Relations. Each entry on this list has to be documented (which is usually done in an RFC). Here are obvious ones, such as: Hence, here you have everything you need for navigating just about any content that may be modelled as a tree. And much more. J. Snell, 2006. Atom Threading Extensions. The Atom infrastructure for comments, discussions and annotations is described in RFC 4685. The mechanism is simple: The "in-reply-to" element is used to indicate that an entry is a response to another resource. The element MUST contain a "ref" attribute identifying the resource that is being responded to. (RFC 4685, Section 3) The ref attribute specifies the persistent, universally unique identifier of the resource being responded to. In practice it will refer to the content of the id element of the annotated resource, which is described in RFC 4287.
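The mechanism is small enough to sketch in a few lines. In this hedged example the two urn:uuid identifiers are invented for illustration; the thr namespace URI is the one registered by RFC 4685:

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
THR = "http://purl.org/syndication/thread/1.0"  # namespace from RFC 4685

# Build a reply entry. Both identifiers below are invented examples.
reply = ET.Element("{%s}entry" % ATOM)
ET.SubElement(reply, "{%s}id" % ATOM).text = \
    "urn:uuid:aaaabbbb-cccc-dddd-eeee-ffff00001111"

# The thr:in-reply-to element MUST carry a ref attribute identifying
# the resource being responded to (RFC 4685, section 3). In practice
# ref holds the atom:id of the entry being annotated.
in_reply_to = ET.SubElement(reply, "{%s}in-reply-to" % THR)
in_reply_to.set("ref", "urn:uuid:11112222-3333-4444-5555-666677778888")

print(ET.tostring(reply, encoding="unicode"))
```

That single child element is the whole annotation machinery: an ordinary Atom entry plus one thr:in-reply-to pointing at the id of the original.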
So, you just add a reply-to link to your entry, and the entry has become an annotation. The rest is programming. ;-) This entry is part of my series Readings on digital objects. Carl Lagoze, Sandy Payette, Edwin Shin and Chris Wilper, 2005. Fedora: an architecture for complex objects and their relationships. International Journal on Digital Libraries. This entry should have been about the paper mentioned above. It is only remotely related to it, I'm afraid. We are about to start implementing Fedora here at the Royal Library; our journey to Stanford was about that. The decision has been discussed at length and I think that it is wise, because when you build a repository of digital objects you need a uniform environment for metadata processing. Here I'm going to discuss XML processing, not metadata processing. Some may think Fedora isn't a tool for XML processing. However, it is often used that way. One caveat: the thoughts here are the opinions of one who has not yet used the package. So if you are willing to consider my predilections, read on. As I see it, the use of Fedora is connected with some pitfalls. They are as dangerous as they are easy to circumvent. In my view Fedora is basically JAXP & Web technologies built on top of standards such as WSDL, SOAP & XSD. It is not a little anno dazumal; to me it is very much the obvious technology stack of the year 2001. Today it is REST, not SOAP; Relax NG, not XSD. Also, we now have XQuery and other technologies that were just not invented anno dazumal. Since Fedora Commons 3.0 the entire Fedora API should be available through REST. XSD still sucks (read this as well), but I can live with it. All objects in Fedora are ingested by formulating your data in Fedora Object XML (FOXML), which is the format used internally in Fedora.
Tim Bray (editor of the spec of XML 1.0) gives the following advice to anyone designing mark-up languages: Accept that there will be a clash between the model-centric and syntax-centric world-views, but bear in mind that successful XML-based languages support the use of multiple software implementations that cannot be expected to share a data model. (Tim Bray, 2005) Seen from this perspective FOXML is not a good idea, being a typical object-oriented serialization used in one single implementation only, with one single data model. FOXML is poorly documented and usually used as a container for home-brewed mixtures of XML or RDF. That is, it has all the characteristics of a loser language. Now, here we have to put this into perspective. If we look upon FOXML as just a serialized object, and not as the document format that will preserve our valuable data for posterity, then FOXML is just another internal data format used inside a piece of software. Then I can live with FOXML. Indeed, people invent such XML languages all the time to get their job done, and in comparison with those FOXML isn't that bad. In an object store with multiple document types this is almost the only way to do it, unless you've got an XML database. FOXML is usually indexed using an RDF triple store. However, the indexing process is computationally demanding and many implementors switch off this feature. A popular complement (or even replacement) is to index the data using Solr, which is a high-performance search and retrieval engine based on Apache Lucene. I can live with the indexing in Lucene or Solr. They are good. I haven't even mentioned all the object-oriented bindings. However, bindings can never be better than what they bind to. And that is FOXML, and it sucks. It is just a poor-quality language in comparison with TEI, DIDL, METS, MODS and Atom. It is good enough for a lot of purposes, though. Such as presenting the content of a METS file. You can do wonders with Fedora.
But remember: FOXML can never replace METS, even if the Fedora people are better at UML. The METS community is better at XML. This entry is part of my series Readings on digital objects. I took this image on 8 April and another earlier in February. The difference in weather is dramatic; there are 49 days between the two images. There is one odd thing about the tulips surrounding the April Thinker. They are extremely early. I haven't seen any tulips as early in Skåne or Zealand. Ludvig Holberg on the main square in down-town Bergen, a wonderful city by a very Norwegian fjord beneath some Norwegian mountains. I took this snapshot when I visited Uni Digital. I wanted Ludvig, the square and the mountain in the same image. Ludvig isn't as sharp as he could have been. Sorry for that. The camera in my phone focused on the background. Ludvig Holberg was born here, but his academic career is connected to the University of Copenhagen. The University of Bergen and Det Danske Sprog- og Litteraturselskab are now producing a new critical edition of Holberg's collected works. Uni Digital delivers the technology, and the plan is that we, The Royal Library, Copenhagen, and the University Library, Bergen, share the burden of hosting the edition. Nicholas Carr, 2008. Is Google Making Us Stupid? What the Internet is doing to our brains. The Atlantic Magazine. Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. writes Nicholas Carr, and continues a bit further down: I first read this article in a printed Swedish translation. It was about a year and a half ago. Months later I found the original and started to read that. Initially I did not realize that I was actually rereading it, but for some reason I then returned to the translation and could establish the connection between the two.
It was when I started to follow Carr's arguments in some detail that I recognized the article. Texts on the Internet are generally brief. I don't say that Fyodor Mikhaylovich Dostoyevsky's texts somehow become more condensed in digital editions, but most digital texts are entries shorter than this one, and they are shorter than articles in printed newspapers and magazines. We have accustomed ourselves to brevity and perceive difficulties when reading longer texts. These are just the symptoms; Nicholas Carr reviews the evidence and I'm convinced. Our media consumption habits are changing, and he even manages to convince me that this affects the brain itself. Does it matter? Nicholas Carr does not even try to answer that question. It might be that the Internet brain has acclimatized itself to the new information environment, and that is neither good nor bad. He mentions Plato's Phaedrus, in which Socrates bemoans the invention of writing. Then, through history, this bemoaning has been repeated at each step when media production and consumption passes through a new period of rapid development. Carr claims that Socrates was right: people's memory deteriorated when they no longer had to memorize Homer's Iliad and Odyssey. He also claims that for each of these historic changes the benefits have exceeded the drawbacks. We don't know if this will be the case this time, though. This entry is part of my series Readings on digital objects. Helena Francke, 2008. Towards an Architectural Document Analysis. Journal of Information Architecture, 1(1). Fig. 1. Information architecture level categories compared with document architecture ones, as Helena Francke presents them. The term labelling refers to how subjects or segments are referred to in the navigation system. In the case of navigation inside a document the labels are, I presume, equal to the chapter and section headers.
This is a paper with a very simple take-home message, which I reformulate here in my own words (Helena Francke might not agree): When a document, be it a book or an article or whatever, is transformed to web content, its content and expression (I've decided to adopt Buzzetti's terminology) become a part of a web site's information architecture, or perhaps an extension of it. The usability of a digital text does depend on its literary and scientific quality, but also very much on how well the information is represented. Now, this might be obvious for those who consume media on the net, but unfortunately it is not obvious for many of those who produce digital editions on the same net, which is the reason I have wasted so much HTML markup on this subject area. This entry is part of my series Readings on digital objects. The Perl programming language was created by Larry Wall, who declared that the name was an abbreviation of Pathologically Eclectic Rubbish Lister, or possibly Practical Extraction and Report Language. I've used it a lot and I'm not ashamed of it. And I still use it. In spite of the fact that some people say it's dying. However, now that I'm more or less completely converted to Java, we are actually discussing switching part of our development to Ruby on Rails. Dino Buzzetti, 2002. Digital Representation and the Text Model. New Literary History, 33(1), p. 61-88. I found this article by searching for Ordered Hierarchy of Content Objects (OHCO) in Google Scholar. The study and use of the theory and methodologies of text is either very practical, i.e., deployed by people attempting to use it for producing digital editions or text collections. Or it is extremely theoretical, i.e., it is used by people proving that it is impossible to represent text using such crude technologies as strongly embedded markup languages, for example Text Encoding Initiative (TEI) XML. It should not surprise you that the people discussing OHCO belong to the latter category.
It is those who perceive problems that need to name them. And regardless of how many elements and attributes are added to the TEI, it is limited by the fact that it is XML. I am writing code for my living. I spend most of my time on practical problems. I'm about as interested in the intricacies of the theory of text as I am in flight-mechanics theories proving that bumblebees are unable to fly. However, if you want a really good discussion about text, you have to turn to these contributions. They are written by people who are just as smart as the students of flight mechanics. Recently those managed to prove that bumblebees can fly, which required more sophisticated mathematics and data. That is, it required new knowledge. Buzzetti's contribution is a classic in the area. It was published in 2002, but that version is a translation. The original was published in Italian in 1999. That means that it was written somewhere in the interval 1997-1999. That is, he might have started his research before XML even was a recommendation. Most of the tools we use today were not even on the drawing board. When he offers criticism against markup languages, he talks exclusively about SGML, the precursor of XML -- the much extended subset of SGML. Anything which somehow depicts something else can be said to be a model, like a globe is a model of the earth. Buzzetti focuses on two aspects of any model of a text. The first aspect is called the content of the text, whereas the second is referred to as an expression of it. For example, assume that you want to preserve a MS Word document. One way would be to ... The conventional wisdom says that we have two aspects of a text, its form and its content. Buzzetti doesn't mention this dichotomy at all. Rather, he says that an expression as well as the content may have a form and a substance. In his parlance, an edition is the set of the various contents and expressions available that can be linked to a work.
The edition may then be represented by interpretations. Buzzetti is very much against the use of SGML. I will not go into his discussion of the lack of data models connected to the markup, since the explosive development of XML technologies makes that part of his criticism obsolete. The Document Object Model API, supported by all major programming languages, is just one of many answers to that critique. Buzzetti makes a distinction between strong and weak embedded markup. Languages that embed marks both at the beginning and at the end of character sequences inside a text are strongly embedded. Those that mark onsets only are weakly embedded. Buzzetti is against strong embedding, but claims that text encoding ideally should be done without any embedded markup at all. By using embedded markup you blur the distinction between form and substance. The ideal form of text encoding is to store the offsets, string lengths and corresponding semantics externally. Doing encoding this way you may support as many overlapping hierarchies of content objects as your heart may desire. No hierarchy need be given more importance; all of them are equal. TEI doesn't do things this way. It isn't because Buzzetti isn't right. I'm sure he is. Those who did text encoding thought that SGML was good enough. Now we think that XML is OK and practical to use. The designers of TEI chose to give priority to the logical structure of the texts we are encoding. That is, since we are allowed to have one single hierarchy, we use it for encoding the content, i.e., the chapters, sections, paragraphs and phrases. We don't use it for pages, lines and characters, which belong to the expression. We regard the page and line breaks as points in the character stream and add empty tags for them. These empty tags are called milestones. If we accept some computational difficulties we can encode a lot using this machinery, including insertions and deletions of text across (for example) paragraphs.
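Buzzetti's ideal of encoding without embedded markup can be sketched as standoff annotation: the text itself carries no tags, and each annotation is a triple of offset, length and label kept externally, which is what allows hierarchies to overlap freely. The example text and labels below are mine, not from Buzzetti:

```python
# A minimal standoff-markup sketch: no tags in the character stream;
# annotations live outside it as (start offset, length, label).
text = "Sentences may run across page breaks without any trouble."

annotations = [
    (0, 57, "s"),      # one sentence spanning the whole string
    (0, 30, "page"),   # the first page ends mid-sentence ...
    (30, 27, "page"),  # ... and the second page overlaps the sentence
]

def spans(text, annotations, label):
    """Return the character spans carrying a given label."""
    return [text[start:start + length]
            for start, length, lab in annotations if lab == label]

pages = spans(text, annotations, "page")
```

The sentence hierarchy and the page hierarchy overlap here without any conflict, which is exactly what strongly embedded markup cannot express with a single tree.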
But, yes, our encoded text is an expression of the content, and we have limited the range of possible interpretations of the text. This entry is part of my series Readings on digital objects. We are about to register a MIME type for TEI. This page tracks the progress of that effort. Robert Kahn and Robert Wilensky, 2006. A framework for distributed digital object services. International Journal on Digital Libraries 6(2), pp. 115-123. This is not really a scientific paper, or it does not look like one. Rather, it is a document written, maintained and used 1993-1995 internally within the CSTR project (Computer Science Technical Reports). The text has since influenced the development, and was finally published in a special issue on complex digital objects of International Journal on Digital Libraries. I assure you that you'll find more references to that issue here in the weeks to come. The importance of the CSTR project cannot be overestimated. In many ways it led forward to initiatives such as DCMI and OAI, and indeed the establishment of digital libraries research as a discipline in its own right. Having written this, I have to confess that I've already lost interest in discussing it further. This entry is part of my series Readings on digital objects. The London Library is an independent subscription library. In the UK there are quite a few libraries of this kind left, in spite of the fact that they are more or less an anachronism. There are enough of them to organize themselves in an Association of Independent Libraries. The London Library is the largest member of the organization, and it is also the one with the highest membership fee. O'Reilly, the book publishing company, has an online subscription library, perhaps mainly aimed at technical people like myself. I've had a personal subscription for quite some time, and recently I upgraded the subscription so that I have unlimited access to the entire stock. The fee is $53.74 (including VAT 25%).
That's $656.88 per annum, which is £398.60. This is an interesting figure, since the annual membership fee for The London Library is £395.00. I let Safari describe itself in a quotation from its top page: Online access to books, videos, and tutorials from Peachpit, New Riders, Adobe Press, O'Reilly, lynda.com and others.1 The London Library takes a more intellectual stance on its home page: The London Library has long played a central role in the intellectual life of the nation, serving generations of readers and writers throughout the country - and beyond - by lending material from its remarkable collections, and by providing a rare literary refuge in the heart of the capital. Membership is open to all.2 The London Library has a very different subject range, [which] is mainly within the humanities, and the collection is particularly strong in history, literature (including fiction), biography, art, philosophy, religion and related fields3. The library acquires both books and periodicals, and offers services similar to a university library's when it comes to electronic journals and access to databases. The Safari subscription library is aiming at another audience, but apart from that the two have basically the same business model. They gather media and provide access to paying customers. Safari does not help you if you're about to write your thesis in computer science, whereas The London Library would do that for a serious student of history. The obvious difference between the two on the one hand and the public library system on the other is the funding. The London Library is a charity, but otherwise it is as dependent on its customers as is Safari. Besides, both offer subscription schemes for public libraries and other businesses. Last but not least, both of them supply quality. So, yes, the comparison is meaningful.
In particular, it is interesting to see how a surviving specimen of the precursors of the modern library is so very much like what may come after. You need to be logged in to Amazon to look inside these books. If you're involved in what goes on on the Net at a technical level, then you will at some stage run into acronyms such as ISOC, IESG and IETF. They are, respectively, short for Internet Society, Internet Engineering Steering Group and Internet Engineering Task Force. Even if industry has contributed a lot to the development of the Internet, it has done so through these organizations. Furthermore, the Net as we know it, with its standards, traditions and concepts, comes from Academia. The Internet grew out of a research project financed by the US Navy, which supported research at US universities such as the University of California at Berkeley. These organizations keep track of their histories. Indeed, the best Internet history comes from the IETF in Request For Comments 2235. There is another important entity on the Internet: the Internet Assigned Numbers Authority (IANA). This organization has in its care the Internet Root Zone, that is, it owns the name server that rules them all. IANA also keeps track of the list of MIME media types. To explain the significance of this list, you have to think about all those times when you've failed to open some file sent to you with e-mail. That's what happens when the sender's message header contains an erroneous MIME media type. Laurent Romary and I have spent quite some time this week writing an Internet draft. The purpose of that draft is to add a single line to the list of media types, containing the string application/tei+xml. The most frequently retrieved document from this site is my syndication feed. I can tell that from the otherwise lousy statistics provided by my ISP. I also use Google Analytics, and the crude statistics from FeedBurner.
Analytics doesn't tell me anything about the use of the feed, since I cannot embed JavaScript in a feed. Only Googlebot (I use the feed as a Google Sitemap) and FeedBurner access the feed directly. The rest of you get the feed via FeedBurner. FeedBurner keeps a cache, and refreshes it every 30 minutes. The feed is retrieved around 250 times a week, whereas I would have expected 2*24*7=336 hits on the feed. Note that the 250 includes Googlebot. Since I've no direct access to the Apache log files I cannot tell how the file is retrieved. If I were Google, I would add an If-Modified-Since header to my GET request and look for a Status 304 (Not Modified) in the response. Then I could get away with a much lower access frequency and gain throughput. This would explain the slightly lower access rate than expected. My estimate is that about five to ten persons visit my site daily and that two or three of them read something. Furthermore, between ten and fifteen readers follow me using the feed. In particular, the usage stats for the feed are hard to evaluate. All the entries on this site are written directly in Atom using nxml-mode for Emacs. The site is generated from this source using a number of different scripts, the bulk of it written in XSLT. The source file for this entry is here. There are two things you have to think about when you write and design your content for syndication: The problem with JavaScript manifests itself differently. grazr.com has a nice category browser, which can be used on some of KB's data. It ceases to work at least when stuff is syndicated on the Google homepage. Fig. 1. My previous entry as it appears on my iGoogle homepage. Note that all the captions appear next to the first two images, not, as they should, adjacent to each corresponding one. This is Google Chrome, but it looks the same in Firefox. When I write these entries I preview them using the server I run on my laptop. Making the links absolute is the last thing I do before publishing to the web.
I forget this now and then, and I won't see that before it has been through FeedBurner and reached Google and Bloglines. The rebuild-run-debug cycle is long for syndicated content. There might be problems with the feed readers' HTML support. I currently use only the online ones, iGoogle and Bloglines, so the HTML shouldn't, theoretically, be a problem on them, but still there is. One problem is that (for instance) Google defines stuff in its CSS which changes the behaviour of tags in a way such that my content does not look as intended. The <kbd>...</kbd> (keyboard) element didn't work the last time I checked. The other problem is that (for instance) Google actually edits my markup. It seems that in my previous entry, the one about Stanford, Google stripped the clear="all" attribute from my <br clear="all"/> elements, and that its CSS redefines the break tag in some way or another (see Fig. 1). Please bear with me. Below you find the content from my previous entry. In this version I'm employing some other methods for achieving a break. I indicate in the text which method I use. A lovely garden somewhere between our hotel and the Meyer Library at Stanford University. Break is the original <br clear="all"/>. Predicted behaviour: No break will be generated in Google. Auguste Rodin at Stanford University. You may want to compare this with the Rodin in Copenhagen. Break is using CSS <br style="clear:both;"/>. Predicted behaviour: Will not generate a break if Google strips all attributes on <br/>, and generate one if it just strips clear. A nice graffiti, or is it al fresco? And what is the difference? Break is using an empty <div style="clear:both;">...</div>. Predicted behaviour: Will generate a break in Google. Science against an evilgelical religion. Break is again a <br style="clear:both;"/>. The Papua New Guinea Sculpture Park. <br clear="all"/> as before. The view from my hotel room.
Jacob Larsen, Eld Zierau and I are participating in the LibDevConX meeting at Stanford, 23-25 March 2010 (see also Roy Tennant's photostream). I will return to this here. However, I've as yet little to report. I have to think a bit before I write. While I do that, you may enjoy my first visual impressions from Palo Alto and Stanford. A lovely garden somewhere between our hotel and the Meyer Library at Stanford University. Auguste Rodin at Stanford University. You may want to compare this with the Rodin in Copenhagen. A nice graffiti, or is it al fresco? And what is the difference? Science against an evilgelical religion. The Papua New Guinea Sculpture Park. The view from my hotel room. Back in July last year, I promised that I would return to how XSLT extensions could be used to index arbitrary XML documents. xslt_indexer is a Java application that I wrote a few years ago. It is used in some applications, such as the Guaman Poma web site. The search engine is Apache Lucene, and the indexing engine is based on Xalan Java. The principle is that you use XSLT to create and save Lucene documents, and you can use XSLT constructs to insert text into arbitrary fields in the Lucene documents. Please find it here. Beware though that this is work in progress, although the rate of progress is very low. I've put it online mostly for the purpose of publishing the idea. The error handling is virtually absent. Nick Nicholas, Nigel Ward and Kerry Blinco discuss digital identifiers at length in the article Modelling of Digital Identifiers. About eighty percent of the paper presents a taxonomy of possible identifiers. I hope that the authors enjoyed the intellectual exercise of writing it. The remaining twenty percent disputes the current usage of URIs for identification, and in particular the use of HTTP URIs that can easily be dereferenced. I want to make one thing clear. What interests me is hypertext linking.
In my world I am working with resources that are overlapping hierarchies of content objects, where the access point or anchor is to be decided by the user and not the service provider. Nicholas et al. are interested in something else. I cannot understand why they find these other things useful. To me they are just boring. This is the final report of a failed project from 2003. The idea for the project took root when I (a few years earlier, and by accident) added the CD Dance Minuets 1731-1801 by Anders Rosén and Ulf Störling to my collections. In the small informative booklet Anders writes: A polska never sounds more beautiful than as the conclusion of a minuet, when the contrast brings out the character of both dances. This struck a chord in my mind. I would learn to play the minuet! Each minuet I would conclude with a suitable polska. Since I am perhaps more of an early-music nerd than a folk fiddler, this ought to suit me well. For, as Anders Rosén writes, during a period around the middle of the 18th century the dances and melodies even entered into a symbiosis, so that it was sometimes hard to tell where one ended and the other took over. The single largest contributor of the tunes in the booklet was Johan Eric Blomgren, born 1757. To place Blomgren in the chronology of music history, it may be interesting to note that Johan Eric was born only seven years after the death of J.S. Bach, which also made him one year younger than Wolfgang Amadeus Mozart. In the Scanian countryside people had probably barely had time to notice that the Baroque was about to give way to Classicism. Blomgren, who is well known to those interested in old Scanian music, worked as an organist and fiddler in Hässlunda. The collections left by Blomgren include a copy of Wolfgang's father's textbook on violin playing. So I started the summer with a CD, a booklet of sheet music likewise acquired from the record label Hurv, and the Skåne volume of Svenska låtar.
The latter because Rosén's and Störling's CD and booklet contained only minuets; the polskas I had to pick from elsewhere. But the month of July went the way of all the world without my learning a single one of Blomgren's minuets. It felt like a chore, and I wanted to pair up the pieces before setting about actually learning them. During the holidays my family and I made a trip to Stockholm, and I decided to spend a day at Svenskt Visarkiv to see whether Blomgren himself could give me a clue. The idea was that one might find comments, or some kind of structure in the music booklets, that shed light on the question of how Johan Eric Blomgren chose to play his tunes. My own collection of sheet music is in a terrible mess. What I play most is Swedish folk music, but also some dances from the Renaissance and the Baroque. On my shelves I have some binders and folders of photocopied sheet music, from different periods of my life. A folder from the seventies with progg, the Swedish progressive music. In a thick binder from the eighties there are lots of songs and popular tunes filed by title; in it, all folk music is gathered under F. Towards the end of the eighties our children arrived... and then there were no more copies. On the shelf there are instead some books of children's songs. The sheet music I have collected during the last fifteen years lies loose in piles and folders. It is ordered according to an intricate system that emerges from how often I play the tunes. I pick the tune I am looking for out of the piles -- if I find it -- and put it back on top. I have asked around among friends: I don't play much from sheet music, said a good fiddler I know. After some persuasion he admitted, reluctantly, that he actually had a collection of sheet music. He had some tunes from Dalarna, from a course he had taken in Malung. They were kept by themselves. The Scanian tunes by themselves. Within each category of origin they were sorted by dance. Back to Blomgren. How did Johan Eric organize his collection? The Visarkiv has a folder with booklets from him, his sons and other relatives.
The oldest ones are from him himself, but in some of them you can see changes of handwriting. The oldest is a relatively thin booklet labelled Ma13a. The cover makes its proclamation in a beautiful, even and ornate hand. He had no cardboard, but put together several somewhat thicker sheets of paper which he sewed together around the edges with a fairly sturdy string. The booklet contains nine tunes, seven of them minuets. At the beginning and the end he had written down polonaises. This booklet stands out through a larger format than the others. The tunes are written on one side only. The paper is thin, and had he written on both sides the ink would have bled through and the tunes would have become hard to read. Johan Eric probably ruled his music sheets himself. How he did it I can only guess. All the staff lines are perfectly parallel, and if his hand trembled it shows on all five lines. He must have had a pen holder with five nibs and drawn all the lines at once. A pen not unlike a fork, that is. The sheets are folded in the middle and sewn into the spine of the handmade cover. He must thus have ruled half the sheet on the front, then turned it over and ruled the other half on the back. In the finished booklet he wrote only on the right-hand pages. Everything suggests that this was a booklet he wanted to show off with pride, not a notebook he spontaneously wrote in when he had learnt something new. Johan Eric had such music booklets too. One of them resembled my binders and folders from the seventies and eighties. It contained this and that: God Save the King and several other pieces unknown to me. The names of the tunes are in English, German and French. Perhaps they are hits from the last decades of the 18th century. A tune with the title Tune requiring double tounge suggests that he copied from a textbook for woodwind players. I have lots of photocopies of tunes I never learnt. Copying costs so little, but writing takes considerably more time. Ink and paper cost money, and you have to rule the sheets yourself.
I don't think Johan Eric copied tunes that he didn't believe he wanted to play. And I think he had more patience than I have. I've been working with the REST paradigm for years. Or that is what I've claimed. But really, what I've been delivering is GETful web services. These are services delivering everything using GET requests, which is fine if you're actually just delivering content. But REST is the architecture behind the IT revolution. Central to the WWW is HTTP, and there in the midst of it you'll find the HTTP methods. You can do a lot using REST and the set of methods defined for HTTP 1.1. I've put the methods in Tab. 1. I'd like to dwell on the idempotency concept as applied to computing. I quote directly from Wikipedia: This is a very useful property in many situations: it means that an operation can be repeated or retried as often as necessary without causing unintended effects. If the operation were non-idempotent, it would be necessary to keep track of whether it was already performed or not, which complicates many algorithms. Let us make the gedanken experiment that the retrieval methods in HTTP, GET and HEAD, were not idempotent in exactly this sense. Lean back, close your eyes and meditate over the problem: which sequence of pages would it be necessary to retrieve today in order to get the original DNA structure? To put it another way: idempotency is the feature of HTTP ensuring that you receive predictable content when you dereference a URI, rather than my personalized version of, say, Watson & Crick. In a recent article, Communicating chemistry (Nature Chemistry 1, 673-678, 2009), Theresa Velden & Carl Lagoze discuss how traditions and attitudes concerning IPR affect data sharing, and how that in turn affects scientists' propensities to use Web 2.0 tools and web-of-data technologies. They address the problem of why chemists, as opposed to (say) physicists and molecular biologists, are late adopters of Web 2.0 and OpenAccess.
These two are entirely different entities. The former is a set of technologies and the latter is a business model for the publishing industry. Nevertheless, they are related. The chemical and pharmaceutical industries are large employers of chemists. Velden & Lagoze mention that the ACS has 155,000 members, 62% of whom work in industry. As a comparison, the American Physical Society had 47,189 members (in 2009), but only about 20% of them work in industry. On the other hand, the same US source claims that 56% of all physics PhDs worked in industry. I don't think that the proportion of chemists working in industry is high compared with people trained in engineering or computer science. However, chemistry should be compared with the other sciences, not with engineering. Here chemistry appears to be a bit different from physics or the earth and life sciences. Chemistry is the science where the largest proportion of researchers merely consume scientific publications without contributing to them. It should, Velden & Lagoze conclude, not surprise anyone that IPR thinking and secrecy are more common within chemistry than elsewhere. Chemistry has fewer Open Access journals than most other parts of science, and chemists are less interested in the social-media kind of e-science than most other branches are. In spite of this, there are some really interesting developments coming from chemistry. Velden & Lagoze mention a standardized chemical mark-up language, a computable identifier for organic molecules (the IUPAC international chemical identifier, or InChI), open-source tools for the manipulation and management of chemical information, and the use of free, hosted Web 2.0 services to support 'open-notebook science'. This open lab notebook is interesting. It is fascinating to see how an e-science infrastructure makes use of off-the-shelf wiki and blog software, combines these with storage in Google apps, and communicates via syndication feeds.
A form is a document, or a part of one, that can be used for the entry of data. A join is a construct in query languages such as SQL. A join allows you to look up data in one part of a database based on a query in another part. Joins are general, and may appear in contexts other than SQL, such as XQuery, XSLT and the XML forms language, XForms. I've recently written my first extensive application in that language. It required a larger effort than I had expected. Having worked with XML processing for more than ten years, I had thought that I would easily be able to relate to a new XML technology by extrapolating from my earlier experiences. This has hitherto been the case. For instance, learning Xerces in Java when I had used dom4j in Java and XML::LibXML in Perl was a piece of cake. If you have used the venerable Expat callback-based parser, the ideas behind SAX and the Streaming API for XML (StAX) are quite obvious. One could expect that if you know XML technologies and HTML forms you would easily grasp XForms. Having realised that this wasn't the case, I thought that having learned XPath and XSLT I would easily grasp XForms. That was true, but only partly. A small part. An XForms script, just like an XSLT one, can read XML documents and act upon them. But the result is very different. XSLT generates another document, usually an XML one. XForms generates a form, a graphical user interface, and usually one that can be used for editing XML. A user interface is event-driven, and there are a whole lot of events to keep track of.

<data>
  <lookup>
    <values xml:id="id1">
      <value>one a</value>
      <value>one b</value>
      <value>one c</value>
    </values>
    <values xml:id="id2">
      <value>two a</value>
      <value>two b</value>
      <value>two c</value>
    </values>
    <values xml:id="id3">
      <value>three</value>
    </values>
  </lookup>
  <keys>
    <key lookup="id1">first</key>
    <key lookup="id2">second</key>
    <key lookup="id3">third</key>
  </keys>
</data>

Fig. 1.
An XML snippet where there is a list of keys that via a reference (a so-called IDREF) in an attribute called lookup refer to nodes in another part of the document. The references are anchored using xml:id attributes. The relation between the keys and the values is one-to-many. What is a brilliant feature in XSLT might not work at all in a GUI, so if you're a lover of the functional-programming style of recursive processing in XSLT you'll be disappointed. XForms isn't XML transformed into forms; it is a language for writing GUIs for XML. Typically one can write really nifty GUIs in XForms. You'll find a lot of examples online, for instance by following links from the Wikipedia article. There are various implementations, server-side ones as well as those running client-side. I opted for the one implemented as a Firefox plug-in. My project is about editing quite complicated documents, namely really heavy beasts in Music Encoding Initiative XML. We're building a MEI application while the initiative is revising the specification and, among other things, moving from a DTD to RelaxNG. A wise move. I might return to the project itself at a later stage, but here I want to tell you about XForms itself. And about joins. Consider the fragment in Fig. 1. If you want to be able to edit the values in the vicinity of the keys, you may need a form like this (requires XForms in your browser). The essential code performing the join can be studied in Fig. 2.

<xf:group ref="instance('data')">
  <xf:repeat nodeset="keys/key">
    <!-- do things with each key -->
    <!-- the nodeset below is illustrative; the join is the current() lookup -->
    <xf:repeat nodeset="../../lookup/values[@xml:id = current()/@lookup]/value">
      <!-- do things with value group -->
    </xf:repeat>
  </xf:repeat>
</xf:group>

Fig. 2. XForms snippet that loops around all key elements and for each of them makes the lookup. That is, the inner repeat's nodeset expression is the join. xf is short for the XForms namespace, which is http://www.w3.org/2002/xforms. Note that this code works inside a single document that looks like the one in Fig. 1. You cannot edit a database this way.
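Outside XForms, the same ID/IDREF join can be performed with any XML toolkit. Here is a hedged sketch using only Python's standard library; the element names are those of Fig. 1, while the function itself is invented for illustration.

```python
# Resolve the lookup IDREFs of the Fig. 1 document: for each key, collect
# the value texts of the values group whose xml:id matches the key's lookup.
import xml.etree.ElementTree as ET

DOC = """<data>
  <lookup>
    <values xml:id="id1"><value>one a</value><value>one b</value><value>one c</value></values>
    <values xml:id="id2"><value>two a</value><value>two b</value><value>two c</value></values>
    <values xml:id="id3"><value>three</value></values>
  </lookup>
  <keys>
    <key lookup="id1">first</key>
    <key lookup="id2">second</key>
    <key lookup="id3">third</key>
  </keys>
</data>"""

# The xml: prefix is predeclared, so xml:id surfaces under this namespace.
XML_ID = "{http://www.w3.org/XML/1998/namespace}id"

def join(doc):
    root = ET.fromstring(doc)
    # Index the "one" side by its anchor (xml:id) ...
    groups = {v.get(XML_ID): [x.text for x in v.findall("value")]
              for v in root.find("lookup")}
    # ... then walk the "many" side and dereference each IDREF.
    return {k.text: groups[k.get("lookup")] for k in root.find("keys")}
```

Calling join(DOC) maps "first" to the three "one ..." values, and so on, which is exactly what the inner repeat in Fig. 2 iterates over.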
select last_name, department_name
from employees e
left outer join departments d
on e.department_id = d.department_id;

Fig. 3. The SQL equivalent of the XForms code in Fig. 2. The SQL assumes that there is a table called employees and another called departments, and that the employees table contains department_id as a foreign key. This setup is similar to the key-values (IDREF & ID) arrangement in Figs. 1-2. The core is the XPath function current(), which returns the current context node. This construct can then be combined with input fields, text areas etc. The corresponding SQL code could look as in Fig. 3. It's the same old story, but it's new to me. Whenever you use an RDBMS, you'll sooner or later ask the question: is there a way to get the last inserted ID? This, if any, is indeed a SQL FAQ. That is, you have inserted a row in a table, and now you want to use its ID as a foreign key in another table; you want to create data that can be retrieved using a join. All SQL dialects and APIs provide facilities for this. Unfortunately, you have no such functions in either XForms or XSLT. In XSLT that is no big deal; you can program anything in that language. It is worse in XForms. Fig. 4 shows an XForms trigger that executes a number of actions. It inserts two nodes, one containing an IDREF and the other the corresponding ID (like inserting both a department and an employee in the SQL setup of Fig. 3). The IDREF/ID values are created on the fly using XPath functions. The problem turned out to be to get hold of the value again for the second insert.

<xf:trigger>
  <xf:label>Add key and value</xf:label>
  <xf:action ev:event="DOMActivate">
    <!-- attribute values below are illustrative; the originals were lost -->
    <xf:insert nodeset="keys/key" at="last()" position="after"
               origin="instance('template')/key"/>
    <xf:setvalue ref="keys/key[last()]/@lookup"
                 value="concat('id', count(//values) + 1)"/>
    <xf:insert nodeset="lookup/values" at="last()" position="after"
               origin="instance('template')/values"/>
    <xf:setvalue ref="lookup/values[last()]/@xml:id"
                 value="keys/key[last()]/@lookup"/>
  </xf:action>
</xf:trigger>

Fig. 4. XForms code that inserts two elements in a document of the kind shown in Fig. 1. The key element contains an IDREF (the attribute lookup) which points to the corresponding xml:id among the values.
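For comparison, the SQL-side answer to that FAQ is a one-liner in most APIs. A sketch using Python's sqlite3 module and the employees/departments tables of Fig. 3; the data values are made up for the example.

```python
# The "last inserted ID" FAQ, answered by sqlite3's Cursor.lastrowid:
# insert the parent row, grab its generated ID, and use it as a foreign key.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE departments ("
            "department_id INTEGER PRIMARY KEY, department_name TEXT)")
con.execute("CREATE TABLE employees ("
            "last_name TEXT, department_id INTEGER REFERENCES departments)")

cur = con.execute("INSERT INTO departments (department_name) VALUES ('Metadata')")
dept_id = cur.lastrowid                       # the last inserted ID
con.execute("INSERT INTO employees VALUES (?, ?)", ("Lindberg", dept_id))

# The row can now be retrieved with the join of Fig. 3.
row = con.execute("""SELECT last_name, department_name
                     FROM employees e LEFT OUTER JOIN departments d
                     ON e.department_id = d.department_id""").fetchone()
```

Other dialects spell it differently (MySQL's LAST_INSERT_ID(), PostgreSQL's INSERT ... RETURNING), but the pattern is the same: capture the ID immediately after the first insert.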
The form operates by copying nodes from an empty instance and inserting them into the data instance, i.e., the instance being edited by the user. The last setvalue contains the solution to the SQL FAQ: how do I get the last inserted ID? I've learned a lot of XForms and some new XPath functions. The code I needed in the Music Encoding Initiative example was much more complicated than the trigger in Fig. 4. The very reason for writing this was that I spent more than a week on this join and the corresponding ID/IDREF thing. I hadn't expected that complexity. But having been through this, I don't think there is any format that cannot be edited using XForms 1.1. It is a pity that virtually all textbooks and tutorials out there cover XForms 1.0 and are six or seven years old. This is a very good technology that deserves a breakthrough. Alma Mater (the university that saw me mature from a teenager to a PhD) has a web site. It used to implement a graphical profile which stated that all links should be listed at the end of a document. The documentation actually also claimed that hypertext links inside text bodies were poor usability. This is just plain bullshit. The hypertext link is the single thing that makes the World Wide Web a web. If some usability expert dislikes Wikipedia, I bet it isn't because of all the hypertext links. Libraries have over the centuries provided experiences and information. Scholars have read and quoted the material we offer. This has changed. Today the most important challenge for libraries is to provide anchors for hypertext links. Libraries that think the mission is accomplished when they've solved the problem of efficient document delivery belong to the past. You have to provide means for patrons to link to arbitrary anchors in hypertexts, to polygons in images or time segments in video recordings. Document delivery is important. You cannot use documents unless they are delivered.
Neither can you create arbitrary hypertext links into it. However, it is the fact that you are not done when your patron has the document in his or her browser which is the single most important reason why URN:NBN, DOI or Handles are obsolete. It isn't you who should link to your documents. It is the library patrons who should do that. If they want to link to page five, they should be able to do so, and the link created should be as persistent as any German URN:NBN. And then, if you're not convinced:

1. The problem is that people don't use PIDs in their texts, since they never see them. They follow links, are redirected by the resolver, and link to what they get. When you remove your original hypertext anchor, many of your digital patrons' links will rot. The truth is that you don't need to use PIDs for this. If you use HTTP as intended, you redirect your users when moving documents to the archive. Users will never suffer. If you use PIDs, they'll experience the 404.

2. Yes, it works, but it is not good. It is as bad as any other redirect-based PID system. It works, though, because the consortium forces its members to care about links. That is very good, but the DOI system isn't. The truth is that you don't need DOIs for that. You need a link management policy.

3. Not really. Since people don't use PIDs in their documents, this doesn't help here either; see point 1 above.

4. The truth is that you should not change your domain. It is not regarded as good practice among people who care about content. However, if you have to, use redirects (Moved permanently, status 301), and keep the old domain for a very long time. Someone may otherwise buy it and use your brand for selling porn or something else you don't approve of.

I'm one of those who try to base all decisions on facts. Politics belongs to the strategic field, and in that field the only thing I can offer is advice.
As a scientist and a software developer I've repeatedly considered the problems related to addressing on the Internet. I've written at length about URNs and digital libraries and Cool URIs. I'm going to continue to do that, because there are people in the library communities who just fail to understand that the URN stuff is a complete waste of time and money on something of too little benefit, and that it may, in my view, be potentially harmful. Most recently it is the PersID project. Please don't pull resources from scarce library budgets in support of an idea which died six or seven years ago! The success of the Internet is due to the fact that there are very few single points of failure. The same is true for the World Wide Web and the protocols that support it. The URN:NBN systems require HTTP-based resolution services. That introduces such single points of failure. The HyperText Transfer Protocol provides means for servers to inform software clients that a resource has been moved. This is called redirection and comes in two shapes: Moved permanently and Moved temporarily. Redirects should be used for exactly what the protocol says; anything else is abuse of the intention of the protocol. Unfortunately, redirects are perhaps the most abused part of the HTTP protocol, and URN:NBN resolvers and similar systems are the worst culprits. They are permanently sending temporary redirects. That is, they are lying. "What makes a cool URI? A cool URI is one which does not change. What sorts of URI change? URIs don't change: people change them," wrote Tim Berners-Lee in 1998. He knows what he's talking about, because he actually invented the World Wide Web. The take-home message of his essay is that if you have things you care about, you should assign URIs to them that you will be able to maintain for years to come. If you change the URI, then you don't care enough. And you have definitely not put enough intellectual effort into the dissemination of your resources.
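The difference between the two redirect kinds can be made concrete in a few lines. This is a minimal sketch of raw response construction, not the code of any real resolver.

```python
# HTTP redirect sketch. 301 tells clients the move is permanent, so links
# and caches may be updated to the new URI; 302 says the move is temporary,
# so clients must keep using the old URI.
PERMANENT, TEMPORARY = 301, 302

def redirect(location, permanent):
    """Build a minimal HTTP/1.1 redirect response (status line + headers)."""
    status = PERMANENT if permanent else TEMPORARY
    reason = "Moved Permanently" if permanent else "Found"
    return (f"HTTP/1.1 {status} {reason}\r\n"
            f"Location: {location}\r\n\r\n")

# A resolver that answers 302 even for documents moved for good is,
# in the words above, permanently sending temporary redirects.
assert redirect("http://example.org/archive/doc5", True).startswith("HTTP/1.1 301")
```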
To put it another way: you have not considered the fact that your URI structure is a part of your application and essential to the reusability of your data. To invent a special infrastructure, such as URN:NBN, for this is to hide away these basic facts. Things have changed a lot since 1998. It was only Tim Berners-Lee and others in the working groups of w3.org and the IETF that could foresee Web 2.0 and Web 3.0. You can recognize many of the pioneers from the fact that they usually claim that nothing has changed. And they're right: nothing technical has changed since 1998, other than that people's understanding of what can be achieved using the web has gone from version 1.0 to 2.0 and is now on its way to version 3.0. Those who argue that we should go for URN:NBN have not understood. As a matter of fact, I'd say that they never understood release 0.9. I'm sorry. In the web of data, the aspects of a resource that are worth persistent identification are in the hands of its users, since it is the users who do the linking. Why on earth should we assign a URN:NBN to Romeo & Juliet, if the users want to quote the balcony scene? Why on earth should you assign a URN:NBN to an article, when I want to address Figure 3? User annotations need a persistent annotation anchor, and that could be a part of an image. Yes, we need persistence. But please, not technology advised by people who never understood web 0.9. Syster Lundberg was born on 27 February 1913 and she is my mother. I would say she is an absolutely fantastic woman, with a razor-sharp intellect. As a child she never got more than seven years of schooling, but in her sixties she essentially acquired upper secondary school qualifications. In this series she reads poems and recites from memory, since she is almost blind. It is not altogether easy to get her to recite, but old hymns come more easily.
Probably from Svensk söndagsskolsångbok för hem, skolor och barngudstjänster (938 KB). The poem Ny Dag is really a song from a cantata by Anita Nathorst (words) and Oskar Lindberg (music), written for the Fredrika Bremer Association's fiftieth anniversary in 1934 and published in the association's journal Herta. Mother sang it with the Housewives' Association choir sometime in the sixties.

De fåglar, som bådar gryningen, flyger med lätta slag. Skall deras mjuka vingar nås av en klarare morgondag? Vi har gått vilse i hårda år. Vågar vi ana en mjukare vår, räcka varandra händerna tvärs över haven och länderna? Vågar vi, mitt i vårt ödes strid, stanna och lyssna och lysa den frid världen behöver och längtar till mer än den anar och vet och vill? Makt -- är den eld, som förtär oss, Kärlek -- är guden, som bär oss.

(938 KB) One of Johan Olof Wallin's most beautiful hymns. Svenska Psalmboken 481 (894 KB)

Fig. 1. The usage of the term semantic drift in web documents. Note the logarithmic scale on the Y-axis: there is a tenfold increase between 2007 and 2008. By the first of January 2010, 19,600 documents had appeared. The use of the term in the context of the semantic web is not very common. For instance, searching for semantic drift together with semantic web yields 827 hits, whereas the latter on its own yields more than three million hits. The increase in popularity occurs in bursts. It seems to me that the term gets a new meaning in each burst, but I've only looked at a small number of texts. You hear a term, and you may think that it sounds cool, and therefore you import it into your active vocabulary. About ten or twelve years ago semantic drift was such a term for me. Semantic change, also known as semantic shift or semantic progression, describes the evolution of word usage (Wikipedia).
I'm an evolutionary biologist by training; to me, genetic drift and evolution through natural (Darwinian) selection are two different things. Drift is the evolution due to random change arising, mainly but not exclusively, through mutation between synonymous genes: they differ, but their effect is the same. I used the term semantic drift publicly in 2000. I used it for a couple of years, but I don't think I've used it since 2003. The term has since gained some popularity in many communities (see Fig. 1). From what I've found out, the term first appeared in a metadata or semantic-web context in an article by Stuart Weibel in 1995. Fig. 2. The so-called dumb-down principle: the only way to cope with semantic diversity. The discussion I referred to above (hitherto the only documented occasion where I mention semantic drift) was on w3.org's RDF mailing list, and was about the meaning of the metadata element title. More precisely, we were discussing what it meant that the DC title changed namespace URI between version 1.0 and version 1.1. There was a reader of DCMI documents who didn't understand why we made that change. In retrospect I don't think we understood it ourselves either. We had made a mistake and we couldn't really acknowledge it. This is the history behind the version number 1.1 in DC. At the time I presented a study of meta tags in around 5 million HTML documents from my life as a search engineer 1996-2000. This was the time when I felt the need for Fig. 2. I drew it to illustrate what I was doing when I was handling metadata. The reason for dumbing down is that even if the service you're building is equipped with an advanced search form, you won't be able to sell it to a customer who claims usability is important if it contains more than around ten fields. I have recently been thinking about these issues again, frightened by dbpedia.org's vocabularies. See for instance August Strindberg's entry.
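The funnel of Fig. 2 can be sketched as a plain mapping table that collapses a zoo of source fields into a small core vocabulary. All field names here are invented for the example; they stand in for whatever the source databases happen to use.

```python
# Illustrative "dumb down" funnel: many source fields, few target fields.
# The mapping table and record are hypothetical.

FUNNEL = {
    "headline": "title",   "object_name": "title",
    "photographer": "creator", "author": "creator",
    "shot_date": "date",   "published": "date",
}

def dumb_down(record):
    """Map arbitrary source fields onto the small target vocabulary,
    keeping the first value seen for each target field."""
    out = {}
    for field, value in record.items():
        target = FUNNEL.get(field)   # unknown fields fall out of the funnel
        if target is not None and target not in out:
            out[target] = value
    return out

assert dumb_down({"headline": "Inferno", "author": "Strindberg"}) == \
       {"title": "Inferno", "creator": "Strindberg"}
```

The price of the funnel is lost nuance (dbpedia-owl:author and dc:creator both become "creator"); the payoff is a search form with around ten fields instead of hundreds.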
Inferno is one of his works. From what I can tell, dbpedia.org is mainly using its own vocabularies, and foaf. I cannot find dc:creator as a property of Inferno; instead they use dbpedia-owl:author. I hope they had a nice time defining their ontologies. I, however, feel that I need my funnel (Fig. 2) more than ever. Three reflexions, inspired by the three companies below. Adobe has grown from being the owner of PostScript and the electronic font foundry of the 1990s. It is still going strong in the traditional markets but has a lot of impact on just about anything related to media consumption and production. Apple doesn't support Flash. Apple doesn't support the *.epub copy protection promoted by Adobe. Fig. 1. The search interest for "tablet pc" in Google. It has been declining until very recently :-) Amazon thinks that eBooks should be cheap, say $9.99, just as Apple thinks that 99 cents is OK for a track. Publishers in general and MacMillan in particular don't like that and suggest $14.99. Apple likes that. Torrent users don't. I don't think that the iPad will have any effect on eBook prices in the long run. However, the tablet PC will, finally, gain in popularity. And so will eBooks. The eBooks will make 40-year-old books a commercial asset, which will further decrease the libraries' market share. An earlier version of this entry had the title Social media, Google adwords and the curation of digital refuse. However, it would be misleading to call this document a version of the text I planned to write. I just couldn't get the various threads in place and write a single text with a neat take-home message. Instead, the result is a relatively shallow annotated URIography rather than a deep analysis of some current trends in the evolution of the Internet and the western societies. Sorry.
A site which is designed as the primary Web property for a person, place, or thing is a power site, says Tim Bray, if the person, place, or thing has a Wikipedia entry but, in popular search engines, the site ranks above that Wikipedia entry. Tim Bray's own blog is a power site; my stuff isn't. I have no Wikipedia entry ;-). Power sites are assets these days, if you put adverts on them, that is. Lorcan Dempsey, whose blog is also a power site, discusses a number of social technology sites for research information, complete with friends and ranking etc. I like facebook for scientific papers. Do you like it too? The Ouseful blog discusses Google economics and page ranks, while The Times Higher Education has ranked universities for years. That is not enough, obviously, since Thomson Reuters and the NSF follow suit. During the year 2009 we saw the advent of the content farms. It is fairly easy to rapidly aggregate content by archiving mailing lists, RSS feeds etc. Adding Google Ads, you get a business model as well. There's good reason to be worried, in particular for Google. A few other voices: Combat content farms, The rise of fast food content is upon us, and it's going to get ugly, and curating farmers. To my knowledge, Jakob Nielsen was the first to describe the parasitic nature of the search engine's business model. The dilemma is that without the search engine no one would find your pages, and with the search engine no one will click on the ads on your pages. More recently there have been discussions on how people will be able to survive as journalists and authors in a market where content farms and real news sites generate about as much revenue. See Charles Stross' The monetization paradox (or why Google is not my friend). Tim O'Reilly & Sarah Milstein have written The Twitter Book. I haven't read it, but then I have to confess I'm no Twitter power user. I'm not even a user. Bill Heil and Mikolaj Piskorski describe recent results from their research on Twitter.
They are really interesting and some of them are surprising, such as that the top 10% of prolific Twitter users accounted for over 90% of the tweets. These numbers are much more skewed than for social internet sites in general, where the top 10% accounts for about 30% of the activity. The same study shows that men follow men to a very large extent, and also that women follow men more than they follow other women, but that they are more equal in their preferences. (See also twitter hype or businessweek.com.) One study shows that many people still perceive Twitter as just mindless babble of people telling you what they are doing minute by minute, and that the tweets are written in self-promotion with very few folks actually paying attention (Pointless babble). I feel that there is a single theme in all this. There are problems here. There is a need for business models and new methods of monetization. We need that as well, we in the library business. Perhaps a public service model could make a difference? Today I gave a presentation on our metadata work. It was meant to be about METS and MODS, but it became more of a justification for our work than a talk about the formats. As usual I had too many slides. My main idea was to use Google image searches instead of slides, but then I had to add some slides anyway because the navigation overhead was too high and I had difficulties remembering which images to click on. The Google searches are interesting. The visual result sets give you an overview very different from that of a text search. The visual search engines (see, for example, Search Cube) base their navigation on some kind of canned screen dumps. That's nice, but those aren't images produced by the author; they are images of the text. Please find below an annotated list of links to searches I like: The first hit is Figure 1, from the appropriate W3C recommendation.
Then there are fifteen polar bears belonging to the cover of Information Architecture for the World Wide Web, a title that I think I should read when I get the opportunity. This one isn't as spectacular as the architecture one, but it gives you an idea of what people know, and believe they know, when it comes to what libraries and cataloging are about. There is a wonderful image (Figure 2) showing what we were all thinking about around 2006. When I think of it, we still are. The range of protocols to support and the list of metadata formats are the same. Ideas don't change that fast. All libraries I know have a circulation desk. Searching for it in Google images yields 978,000 hits. Adding quotation marks reduces the number to 34,400. None of them are particularly interesting, except perhaps the one at Doheny Library. My fascination is more due to the fact that they are all there, all these desks. I mean, how interesting is a circulation desk? This one is hard-core information science. Work is the ontological work, before it is expressed in a manuscript: the platonic blog entry, before it hits the web. There are other hard-core ones, such as collect preserve organize provide access. As a web search, it yields hits mainly in library & archive mission statements. People within any profession or trade should continuously be involved in discussions of quality. I describe my profession as web developer and my trade as digital libraries. These areas are no exception. Like many web developers I spend a better part of my day on coding and other activities related to software development. But so do people who build non-linear oscillation simulators. The web is an application area, but like non-linear systems it is also an area of research in its own right, and so is digital library research. There is more to digital libraries than just software development. There is all the research concerning resource description, and how it relates to usability.
There is also extensive standardization work going on. Now, how much digital library knowledge would a programmer need in order to build a good service? In theory, none. If the specification is good enough, then the product should be good enough, and this could, again in theory, be ensured by unit tests. The problems here are, first, the quality of the specification and, secondly, the Two Cultures (developers and editorial staff do not always understand each other). The two problems are related to each other. The specification is a part of the agreement between the programmer and his or her customer. The work is completed when everything in the spec is in place. Programmers need to know when they can send the bill. We who write software are a kind of mathematical people. We feel secure if we know that the axioms are in place, that the theorems are there as well, and that the deductions are correct. For instance, if a piece of software is intended to assist an aircraft both at take-off and landing, then the specification should say so, even if we thus kind of state the obvious, or even if one could imagine that there otherwise wouldn't be space in the sky for all the planes unable to land. The web changed this. Not entirely, but to a very large extent. There's a fundamental difference between being a vendor and being a service provider. The service provider tries to get more users all the time, which calls for continuous development. Through the web, a software vendor and a service provider could all of a sudden compete on the same market. The software vendor uses the traditional specification, but a service implies software which is in continuous development. Strictly speaking, there are no projects anymore, just different activities. No specifications anymore, but TODO lists containing incidents, bugs and requests for features, ranked by importance. And there is a continuous need for innovation. C. P. Snow quoted G. H. Hardy in his book The Two Cultures. The book appeared fifty years ago. I've studied science and technology for a better part of my life, and I've no idea how many times I've got remarks like "that's a technical detail." It has happened this year. Do you know what a markup language is? You may, or may not, know the answer, but I suppose it's still more important to read Shakespeare if you're to be regarded as an educated person. I do know the laws of thermodynamics, and I've read more than one work by Shakespeare. However, we software developers usually don't write like Shakespeare. We may be good hackers anyway. Anyone who has stayed in the same business for some years has seen the recurrent changes in fashion. You'll also experience how people try to sell you the little black dress as something new. People invent a lot of things. Some of them are not that innovative. Some ideas required further development before they could take off. Some ideas were right, but the time wasn't. The legacy is something that accumulates behind you: what someone (usually someone else) acquired or developed. You don't like the legacy, in particular not its documentation, but you have to maintain it. Once upon a time just about every script on the internet was written in the Perl programming language. Then just about every introductory programming course chose Java as the prime example of an OO language. Just about any project should have some object-oriented modelling, and J2EE was the name of the game. There was no one available for the legacy. Then things happened again. PHP had been there all along. Many of the largest sites had been using it for years for front-end work. The back-end could be written in Java, but C, C++, Perl and Python are also used. For newer sites there are languages like Haskell and Erlang, the H and E in HECS.
The remaining characters are C as in Clojure and S for Scala. Mankind is stratified into those who are early adopters and those who are more hesitant. Some of the early adopters buy the little black dress over and over again. In a sense I belong to the hesitant ones; novelty per se does not add to my interest. On the other hand, I can get very enthusiastic about learning ideas and technologies that I find elegant and new to me. One such idea, which is a true design classic, is functional programming. It isn't, strictly speaking, new to me. I've studied the Scheme programming language, and I'm using XSLT and XQuery. I'm very impressed by the elegance of the Haskell programming language, and by the community that has been built around it. Hackage is Haskell's answer to Perl's CPAN, PHP's PEAR and Python's PyPI. The Haskell developers seem to be the most advanced when it comes to building an open-source community. I do believe that Erlang is the platform with the best track record; it has been used in Ericsson's Open Telecom Platform, for building ATM switches, and in CouchDB, a modern cloud-computing classic. Still, I'm going for Haskell. I want a language which isn't using any virtual machine. Referential transparency (purity) is just a wonderful idea. I want something really new and elegant, such as a little black dress. The last few weeks I've not felt as good as I used to. As a matter of fact I've been really miserable. It started in August. You see, when I am about to complete any substantial piece of work or project, I look forward to the delivery with a certain level of separation anxiety. And then, after all deliverables are completed and a new service is in place, I get a postpartum depression. This autumn all this was aggravated by severe nicotine withdrawal symptoms. I'd say: don't even try giving up wet snuff unless you've prepared yourself for a passage through hell. Some time ago I discussed our architecture for presenting some of our digitized material.
We use a combination of OPML and OpenSearch for letting our users search and navigate our Digital Editions. OPML is used for dissemination of the subject structure and for hierarchical tables of contents. Result set navigation uses OpenSearch. The system can be described as having two major architectural components, the database layer (Fig. 1) and the presentation layer (Fig. 2).

Fig. 1. Database layer.

The export controller pulls data out of our Cumulus Digital Asset Management system and normalizes them to a single syntactic and semantic system, namely the Metadata Object Description Schema (MODS). From that format the data are further transformed into RSS 2.0 and Dublin Core. Finally these metadata objects are stored in our Oracle database as XML fragments. The normalization rules are implemented as a set of XSLT scripts with a supporting set of XPath functions implemented in Java.

A couple of weeks ago, I accidentally searched the Web for "metadata processing". "Google suggest" invites you to that kind of serendipity. I've previously not been able to give a name to the discipline of one of my main areas of expertise; here we go. It's metadata processing. This phrase yields about 10,600 hits in a Google search, which makes it a smaller discipline than sewage processing. The database layer is all about metadata processing. Our Cumulus installation contains a multitude of metadata fields invented by people for particular purposes in the past. The main objective has been to get a job done and images into a database, not to get them out on the web in an efficient way, and not to get the metadata across to other services.

Fig. 2. Presentation layer.

The presentation layer is really two layers: a Web service layer and, a step closer to the user, a graphical user interface which we refer to as an OpenSearch gateway. This is because OpenSearch is its main communication protocol with the web service layer.
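As a sketch of how such XSLT-driven normalization is wired up in Java, the JAXP fragment below applies a stylesheet to a record held in a string. The record format and the inline stylesheet are invented for the example; the real rules mapping Cumulus fields to MODS are of course far larger.

```java
import javax.xml.transform.*;
import javax.xml.transform.stream.*;
import java.io.*;

public class Normalize {
    // A toy normalization stylesheet; invented for illustration only.
    public static final String DEMO_XSLT =
        "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "<xsl:output omit-xml-declaration='yes'/>"
      + "<xsl:template match='/record'>"
      + "<title><xsl:value-of select='headline'/></title>"
      + "</xsl:template></xsl:stylesheet>";

    // Apply one XSLT stylesheet to one metadata record, both given as strings.
    public static String transform(String xml, String xslt) throws TransformerException {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }
}
```

In our setup a whole chain of such transforms runs inside the Digester; extension functions written in Java fill in the parts XSLT 1.0 cannot express on its own.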
In this project our goal was to be able to present this material in a single framework, using a homogeneous metadata profile. The metadata core is built around the Metadata Object Description Schema (MODS). The Export controller (Fig. 1) consists of two loosely connected components, the Exporter and the Digester. The former basically dumps data from Cumulus using its Java API; the latter transforms those data into a range of XML fragment formats described in Table 1. The basic idea is to avoid any stage where we have to generate XML from scratch later on in the process; processing of semi-manufactured XML objects is computationally much cheaper than the generation of new ones.

The web service layer (Fig. 2) does search and retrieval, and operates basically by retrieving the XML fragments generated by the Digester (see above). The design goal has been to reduce the cost of XML processing here to a minimum. For the processing that does occur we use the Streaming API for XML (StAX) (JSR-173). StAX is standard from Java SE 6; it wasn't trivial to get it running on Java 5. When you just want to do modest stream editing of XML, this is an excellent tool. I doubt that you can ever get a DOM-based XML tool that is as fast, so use it whenever possible. There are a few cases where we were forced to introduce DOM and XSLT for more extensive editing, but in general we use lightweight XML technology here. We access Oracle through Hibernate, which was a pleasure once the mappings were in place -- but making the Hibernate mappings is a nightmare if you're a beginner.

OpenSearch is my favourite XML protocol for searching, since it is much easier to use than SRU. You can test an OpenSearch implementation directly on a9.com. You paste the URI of an OpenSearch description into their form, and you'll get back an interpretation of your data and a search form. Please try it! Don't search for the suggested search term esbjerg; try something like "Adam".
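To give an idea of what modest stream editing with StAX looks like, here is a minimal sketch that copies an XML stream event by event while renaming one element. It illustrates the technique, not our actual web-service code; the element names are made up.

```java
import javax.xml.stream.*;
import javax.xml.stream.events.*;
import java.io.*;

public class StreamEdit {
    // Copy an XML stream, renaming one element on the fly.
    // Only the current event is ever held in memory.
    public static String renameElement(String xml, String from, String to)
            throws XMLStreamException {
        XMLEventFactory ef = XMLEventFactory.newInstance();
        XMLEventReader reader = XMLInputFactory.newInstance()
                .createXMLEventReader(new StringReader(xml));
        StringWriter out = new StringWriter();
        XMLEventWriter writer = XMLOutputFactory.newInstance().createXMLEventWriter(out);
        while (reader.hasNext()) {
            XMLEvent e = reader.nextEvent();
            if (e.isStartElement()
                    && e.asStartElement().getName().getLocalPart().equals(from)) {
                writer.add(ef.createStartElement("", "", to));
            } else if (e.isEndElement()
                    && e.asEndElement().getName().getLocalPart().equals(from)) {
                writer.add(ef.createEndElement("", "", to));
            } else {
                writer.add(e); // everything else passes through untouched
            }
        }
        writer.close();
        return out.toString();
    }
}
```

Because events are copied as they arrive, nothing is built in memory, which is why this beats constructing a DOM for every fragment.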
The OAI service is already running, and for the first time our library catalog can retrieve and import bibliographical records from our digitization services automatically.

The part of the system which users will actually see and use is best described as a mash-up engine. The basis is yet another template system. I think it is a fair guess that each major web development platform already has more of those than it needs. My problem was that I needed a recursive one. The template is an XML document with include statements in certain places. The engine retrieves data from the web services, transforms them and puts the product in place with the XML DOM API. There will now be new include statements, which were the product of the previously inserted data. There are usually a handful of recursions before the page is ready for delivery. We have two skins or templates running, Danmarksbilleder and Default, used for the services mentioned above. Apart from a different look and feel, Danmarksbilleder uses an older version of the database and another set of web services. See my previous posting. There are currently no plans to make new HTML skins. However, I'm about to write one that generates Metadata Encoding and Transmission Standard (METS) documents. But that's another story.

I'm trying to place my blog as being from Lund on the blog map.

There has been a long debate in Sydsvenskan about what the library is for, most recently by Per Svensson: Viva Läs Vegas. Here is my version of why I think we need libraries. The library is built on two principles. When I say the library, I don't mean any particular library, neither the Library of Congress, Umberto Eco's monastery library in The Name of the Rose, nor for that matter Malmö City Library, but the library as an idea. I refer to all libraries in a collective singular. To understand at all why libraries exist, one must take the two principles into account.
The first principle is that it is profitable for those who read and study to share books. Books have been rare, expensive and space-consuming. During a large part of the history of civilized mankind there have been libraries that the bookish gladly took a detour to visit. The second principle is that an exception has been made in copyright law which makes precisely this sharing possible in a market economy. Authors receive a certain compensation per loan. The renewers in the library debate surely regard this introduction as utterly conservative. À la bonne heure. The only way to get hits on essentially nothing but mission statements of major North American libraries is to search for collect preserve organize provide access in Google. A few archives sneak into the result set, but libraries dominate.

I have worked on programming digital libraries for about 15 years. I started my first web server in 1994 and began working professionally with research & development in the digital library field less than a year later. My personal goal has always been not to work on problems whose solutions can be bought, but to stay at some frontier. During the nineties the libraries actually were at the forefront of IT development. We worked on testing where the limits are. Our field was information. Books were just one source. At NetLab at Lund University we built web crawlers and saw it as a natural task for us to organize the whole Internet according to the public service principle. When we started the work, the search engine Lycos, which held a monopoly at the time, could not index documents with European characters. As fate would have it, we released the Nordic Web Index the month after AltaVista released its search engine.
They could, just like us, actually search for ål and öl. Fate also had it that we would present our work at the Seventh International World Wide Web Conference in 1998, the same conference where two young students from California, Lawrence Page and Sergey Brin, presented a new search engine they called Google. I spent another year or so selling the collection, preservation, organization and provision of access to the Internet; then others took over. There is no one willing to fund the organization of the information on the Internet according to the public service model. I still program digital libraries, but that's another story.

As a library person I belong to the minority that has never worked with anything but digital library R&D. As such I cannot understand why Chris Anderson's books The Long Tail and Free have not been mentioned a single time in Sydsvenskan. At least not in the articles I have read. The traditional library is squeezed between Akademibokhandeln on the one hand and Amazoogle on the other. Google has the stated goal of organizing all the world's information. The bookshops sell the bestseller-list material, and Amazoogle distributes the rest. Information no longer weighs anything; it is not expensive and does not take up much space. Not even physical books are expensive to store if you have a warehouse far out in the middle of nowhere. With the right logistics it works anyway. 25% of Amazon's sales are titles outside their 100,000 top sellers. They have a whole network of second-hand bookshops behind them in order to satisfy the most esoteric wishes. What Malmö City Library regards as shelf-warmers is big business for Amazoogle. In the decade of mass digitization, the weeded-out books are hard currency.

Should we have houses with books? It is not obvious that we need such houses. The problem is similar to when René Descartes doubted his own existence. We need to find a point from which we can untangle the web of arguments we have seen for the Library.
There is actually one thing that only libraries do, and that is to preserve the memory of the cultural nation. OK, Malmö City Library throws some things away. But believe me, they would not throw away the last copy of a title if it happened to be in their collections. Small and large libraries alike collect and preserve. There are local collections out at the public libraries. The large national libraries do things like preserving the World Wide Web. It is done at Kungl. Biblioteket in Humlegården, and we do it at Det Kongelige Bibliotek in København. All this collecting constitutes the memory of the democratic cultural nation. We cannot hand this task over to anyone else. Neither to Akademibokhandeln nor to Amazoogle. Not even to the Internet Archive. If we start at this point we can understand that we actually could not manage without the libraries. Every municipality, region and nation must collect, preserve, organize and provide access. Those who do not will lose an important part of themselves: the memory. We must provide unique material via the web in our digital libraries, and we must have a purely physical infrastructure in the form of reading rooms where students cram for exams and flirt. Every target group must have its facilities. Yes, and from there we can keep building. And soon we have mentally built a house with books, and moreover realized that we need it. Even if some buy their Stieg Larsson and Camilla Läckberg at ICA, and their De Consolatione Philosophiae via amazon.com. The problem arises when we in the library world begin to be ashamed that we actually serve the memory of the democratic cultural nation.

In order to understand what's going on here, you have to visit the resource. There is some scripting going on which cannot be syndicated. If you're reading this, you are on the wrong page. Sorry. There is a link at the top of this page (if you're reading the HTML version directly from my site, that is). It says "Show anchors".
It's not a very user-friendly description of what it does, but I cannot figure out a better term right now. If you click on it, a link will appear after most words in the text. These are the annotation anchors of this text. When visible, the anchors appear as a clickable asterisk '*', and a click on one of them activates the usual disqus.com forum. This is done by some client-side DOM programming. Submitting a footnote works, but the script never gets the acknowledgement from the disqus.com server. I don't know why, but it seems that the form doesn't like to live in an iframe. My server retrieves the annotations from the disqus.com web services and prints a list of notes at the end of the page. This is done server side, such that the discussions within the footnotes will be visible to search engines. The place of the annotation is marked as a note with a link to the notes. You may read the annotation when your mouse is over the link. Is this a good idea? Do you like it? Should all my pages have user-supplied footnotes?

In order to add footnotes to a text, we need to be able to address individual positions in it. These positions are what I'd like to call annotation anchors. I've already briefly discussed the problem of how to create identifiers (see A quotation is much more than an extract). For this entry I've extended the XSLT script such that it can tokenize HTML text embedded in Atom entry documents. It can also add anchors (id attributes) to each individual word (produced by the tokenize function) in the whole text. We also implement a JavaScript which exposes these anchors to users and allows them to comment on more or less arbitrary points in the text. There is a big drawback in this procedure: once you have processed your text and the anchors are there, you must not change them. Each word has become a resource on the Internet. Please view the source of this document, before you use the stylesheet atom_anchor_id.xsl.
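To make the tokenization concrete, here is a hedged Java sketch of the core idea: wrap every word in a span carrying a generated id. The id scheme below is invented for the example; the real atom_anchor_id.xsl is an XSLT stylesheet and derives its ids differently.

```java
public class WordAnchors {
    // Wrap every whitespace-separated token of a plain-text string in a
    // span with a generated id, mimicking what the stylesheet does to the
    // text of an Atom entry. The numbering scheme is made up for this demo.
    public static String addAnchors(String text, int firstId) {
        StringBuilder out = new StringBuilder();
        int id = firstId;
        for (String word : text.trim().split("\\s+")) {
            if (out.length() > 0) out.append(' ');
            out.append("<span id=\"anchor").append(id).append("\">")
               .append(word).append("</span>");
            id += 2; // leave gaps between ids, as in the sample output
        }
        return out.toString();
    }
}
```

Once such ids are published, they must never change; each word has become an addressable resource.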
<span id="anchor54764">Would</span> <span id="anchor54766">you</span> <span id="anchor54768">like</span> <span id="anchor54770">to</span> <span id="anchor54772">edit</span> <span id="anchor54774">this</span> <span id="anchor54778">text?</span>

See what I mean? I cannot author prose like that in my XML editor. You wouldn't suffer as much using WYSIWYG tools, but on the other hand they would presumably destroy the ID strings. When preparing this text, I felt that I had to prepare some annotations of my own, and in order to do that I compiled the text, published it, made my annotation and then pasted the ID back into the source XML in order to be able to continue writing. Form-based proof reading of an existing digital text wouldn't be any problem, though.

You can find the possible anchors by clicking on the link Show anchors, and hide them again by clicking on Hide anchors at the same place (there is a toggle). There are two JavaScript functions handling the toggling. The possibly most interesting, but also most vulnerable, feature is the generation of JavaScript from RSS for handling the readable notes.

<script type="text/javascript">
var usedAnchorIds = new Array();
var noteTexts = new Array();
usedAnchorIds[0] = "anchor55402b10.359095368701543";
noteTexts[0] = 'I had problems obtaining unique id strings from the '
             + 'generate-id() function in XSLT. (siggelundberg, Fri, 18 Sep 2009 '
             + '17:46:27 -0000)';
usedAnchorIds[1] = "anchor55018b510.584488500647474";
noteTexts[1] = 'This is an arbitrary point in my text. I find this '
             + 'really suitable for adding a footnote. (siggelundberg, Fri, 18 Sep '
             + '2009 03:27:16 -0000)';
printNotes(usedAnchorIds, noteTexts);
</script>

The rest of the features are readily visible on this page. You have to reload the page to see a new footnote -- if I implement this for real, I'll refresh those asynchronously.

The digital revolution is over, wrote Nicholas Negroponte as early as 1998.
According to one source, whose credibility I have not evaluated, Douglas Adams once described Nicholas Negroponte as someone who writes about the future with the authority of someone who has spent a great deal of time there. The declaration of victory for the digital appeared in Negroponte's last regular column in Wired magazine. He might have thought: we've won, and now is the time to give one laptop to every child on the globe. That would indeed consolidate the victory.

* After the digital revolution, it was natural for the library community to create a vision of moving The Library onto the Internet. The libraries were raising new buildings all over the globe called digital libraries. One particularly good one is the California Digital Library. CDL was founded in 1997, i.e., the year before the digital revolution ended. The CDL web site is very typical in many ways, in particular its home page. It starts with a corporate site, and on it there is a list of services: Calisphere, eScholarship Editions, eScholarship Repository, Mark Twain Project Online and the Online Archive of California (OAC). There is much more available; I just took those from the top page. Most of these services are brilliant; they are about the best you can find from any library on the globe. We see here the main method of moving the library to the Internet: the creation of repositories. Just as we have structured the library into collections, of which the major ones may have separate reading rooms, the digital library is split into separate services. The information resources from the California Digital Library are spread over many different domains. I'm not criticizing my colleagues over there. Rather I'm just stating the fact that more or less all digital libraries -- even the best ones -- suffer from the same problems, of which the most important one is that all of us build a new repository whenever we start a new project.
* We see now a shift from the repository as the gravitational center of the digital library towards the resources themselves. There should be no need to go to the digital library; its content should be on the Web. There is no need to move the library to Facebook to be where the users are. It is sufficient to have the library resources on the Web -- which isn't the same as having the library there. Carl Lagoze et al. use this shift as a justification for proposing the new standard OAI-ORE. When discussing the differences between the old OAI-PMH and the new OAI-ORE, they formulate the trend in a very clear way: It [OAI-ORE] also reflects a recognition of the position of digital libraries vis-à-vis the Web, that sometimes seem to co-exist in a curiously parallel conceptual and architectural space. We exist in a world where information is synonymous not with "library" but with the Web and the applications that are rooted in it. In this world, the Web Architecture is the lingua franca for information interoperability, and applications such as most digital libraries must exist within the capabilities and constraints of that Web Architecture. Because of the virtual hegemony of Web browsers as an information access tool and Google as a discovery tool, failure to heed Web Architecture principles, and therefore requiring somewhat special treatment by these "monopoly applications" (which is rarely if ever granted), effectively means falling into an information black hole.

I sat there looking up references for a text. Then I realized that there were these five years, 1997-2001. They were decisive for the following nine to ten years of digital library development. For the fun of it, I looked up some events that occurred during these five years. Almost all of these events were triggered by XML.
If I remember correctly, we released S:t Laurentius Digital Manuscript Library in 2001, which included my first serious XSLT scripting, but before that I had done some serious work using Perl and the expat XML parser.

'We quote,' writes Isaac D'Israeli, 'to save proving what has been demonstrated, referring to where the proofs may be found.' How much wisdom is embedded in this single line! If we did not quote, we would have to explore or invent everything anew ourselves. We do need methods of referring our readers to the source. The World Wide Web provides eminent facilities for this, such as the hypertext link. With a quote we can make our lives as writers more comfortable. We can, as D'Israeli continues, 'screen ourselves from the odium of doubtful opinions, which the world would not willingly accept from ourselves'. How wonderful! I can thus market ideas that no one would find credible coming from me, or refer to ones that I would not myself dare to publicize. The actual stylistic use of referencing belongs to the art of writing, which is distinct from the art of hypertext. Google Books provides writers with excellent facilities for hypertext linking. For instance, you can cut a piece of text and put it into your own page. Like this: There are dangers in quoting, as D'Israeli makes clear. When you cut and paste too much, you're no longer authoring. You're compiling.

If you've followed my links above you've seen how one can link to a page in a text, how we can highlight an area in an image, and also quote by cutting a snippet out of an image. The single points, areas or ranges in an object which can be used for referencing are in my world called annotation anchors. In text we anchor a reference by using a unique sequence of tokens. In this particular text, unique sequence of tokens could have served the purpose, if I hadn't repeated it here. The position of a footnote could have been persistently anchored just by that sequence.
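The idea of anchoring by a unique token sequence can be sketched in a few lines of Java. The method name is my own invention, not part of any published API:

```java
public class TokenAnchor {
    // Return the character offset of a phrase if it occurs exactly once
    // in the text, or -1 if it is absent or ambiguous. Only a unique
    // occurrence can serve as a persistent annotation anchor.
    public static int uniqueOffset(String text, String phrase) {
        int first = text.indexOf(phrase);
        if (first < 0) return -1;                          // not found
        if (text.indexOf(phrase, first + 1) >= 0) return -1; // repeated: ambiguous
        return first;
    }
}
```

The moment a phrase is quoted a second time, as I did above, it stops being a usable anchor; that is exactly the fragility the markup-based approach below is meant to avoid.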
In hypertext, we typically use markup for the purpose. There is a drawback in that: a user can only anchor his or her reference at the predefined positions. However, if the text is completely tokenized, each single word is available for the purpose (see below). Any user interface for searching and navigating parts of an object requires new thinking when it comes to persistence policies. The libraries' HTTP-hostile resolution procedures fail utterly. There is also much less research in the area of persistence and annotation anchors. Very competent search and navigation systems, such as the eXtensible Text Framework (XTF) from the California Digital Library, allow users to search, navigate and link to arbitrary parts of a document. XTF is used to deliver the following document: Now, here we have extremely good facilities for navigation and search, facilities that go far beyond what can be delivered by Google. But, alas, the ARK-based persistent identification layer isn't used for those facilities, and any linking or quotations or references pointing into the document might die with this particular implementation. The PI people live with the ideal of delivering documents the same way as they were delivered at the lending counter 25 years ago.

I don't think anyone involved in current web development has escaped the linked data bandwagon. Linked data and the semantic web are believed by many to become the core of Web 3.0. Tim Berners-Lee lists four characteristics of the web of data. Anyone who wants to jump on the bandwagon by means of anything other than good old cool URIs does so at his or her own risk, and I won't. For, as D'Israeli puts it, 'the art of quotation requires more delicacy in the practice than those conceive, who can see nothing more in a quotation than an extract.'
If you're interested in experimenting with markup-based anchors, I'll give you an XSLT script which basically copies an XML document and, while doing so, adds an id attribute to every element in the document. I find this extremely useful. You can extend it to tokenize text and add anchors on individual words. I use such a script when creating search and navigation systems involving XML text. Beware though that a document type may use the generic xml:id attribute. Also, a DTD or schema may have another name for the id attribute, and it may not be permissible on all elements.

This comes to you as an afterthought. I killed other people's links when I moved a document last week. I've felt guilty about this for a week now. Status 404 is a pain in the arse. You've found that link. Apparently it contained some vital information for you. And then it's gone... We've all seen this. And since the end of the last century there have been people working on it, many of them coming from the library world where I belong as well. The Germans seem to be the most ardent, followed by the Finns and us Swedes. I fully understand people who try to do something against URI rot. Mind you, I've spent six years of my life maintaining harvesting robots. However, I've hated URNs intensely since the end of the last century, when I realized that people who hardly knew anything about HTTP, hypertext and markup commissioned technical solutions, such as URNs and URN resolvers, as futile fixes to human and organizational problems. Those who know me also know that I have very strong emotions about persistence. These emotions started long before I had read Tim Berners-Lee's brilliant paper Cool URIs don't change. Tim did get one thing wrong in that paper: URIs never ever change; people move (or remove) the documents. Once you've formulated a URI, it will exist until the end of networked information as we know it (or until the end of the world, whichever happens first).
By the end of the nineties, the Nordic Metadata Project had commissioned a technical solution where we in the Nordic Web Index were to identify all pages with DCMI metadata, and in particular those that were using the encoding scheme URN for the identifier. As a matter of fact, I found a couple of thousand such <meta> tags searching my databases. The only snag was that they all appeared on two sites. Project Runeberg had a very neat collection of digitized books, each with a unique URN embedded in the cover page, and then isPartOf elements for each chapter. Lars Aronsson, the main architect behind Project Runeberg, had done his homework. However, most of the DCMI identifiers with encoding scheme URN were found on the web site of the Swedish National Library. The only snag here was that all of them contained the same value. I worked for a couple of weeks with this, but the result was useless because of the incompetence of the very people who marketed these ideas in Sweden. For some years to come, I sat in many meetings listening to them talking about the status of URN:NBN in Sweden, and each time I heard about it my anger increased.

One document colleagues have wanted to discuss with me over the last few years has the following URI: It leads to Hans-Werner Hilse and Jochen Kothe, 2006. Implementing Persistent Identifiers, published by the Consortium of European Research Libraries, London, and the European Commission on Preservation and Access. Sending an HTTP HEAD request to the URI above yields the following chain of responses:

HEAD ...-8 --> 302 Found
HEAD ... 508-3 --> 307 Temporary Redirect
HEAD ...3-8.pdf --> 200 OK
Connection: close
Date: Sat, 15 Aug 2009 11:16:15 GMT
Accept-Ranges: bytes
ETag: "7499f564-92b7f-f0dca280"
Server: Apache
Content-Length: 600959
Content-Type: application/pdf
Last-Modified: Tue, 12 Dec 2006 15:28:58 GMT
Client-Date: Sat, 15 Aug 2009 11:16:01 GMT
Client-Peer: 134.76.9.3:80
Client-Response-Num: 1

Obviously people in Germany love maintaining resolution services.
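A redirect chain like the one above can be traced with a few lines of Java. This is a hedged sketch, not the tool that produced the trace; the URI you pass in is whatever resolver link you want to inspect.

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class HeadChain {
    // A 3xx status with a Location header means the identifier survives
    // only as long as someone maintains the mapping behind it.
    public static boolean isRedirect(int status) {
        return status >= 300 && status < 400;
    }

    // Follow HEAD redirects manually and print each hop,
    // producing output in the spirit of the trace quoted above.
    public static void trace(String uri) throws Exception {
        while (uri != null) {
            HttpURLConnection c = (HttpURLConnection) new URL(uri).openConnection();
            c.setRequestMethod("HEAD");
            c.setInstanceFollowRedirects(false); // we want to see every hop
            int status = c.getResponseCode();
            System.out.println("HEAD " + uri + " --> " + status);
            uri = isRedirect(status) ? c.getHeaderField("Location") : null;
        }
    }
}
```

Two resolver hops before the PDF: every one of them is a place where the link can rot.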
First there is this global one at DNB, Deutsche Nationalbibliothek, and then the local one at Göttingen. It does not help them very much, though. I've been through all the links pointing to this booklet. Only four out of fifteen point to the canonical source. To put this another way: whenever the Gesellschaft für wissenschaftliche Datenverarbeitung moves their documents, three quarters of the links pointing to them are gone. I have tried for hours to find any document in Google Scholar with a reference pointing to the Finnish URN:NBN resolver. If there are any, I assure you that they never appear in papers with any impact. The money spent on maintaining the resolvers doesn't help. A complete waste of both time and euros, if you ask me. The PI is a dead concept, promoted by organizations that fail to learn from experience. It is completely incompatible with modern thinking on web technologies, in particular the semantic web and the annotation and navigation of complex digital resources. Sorry, I felt the anger boiling up inside me again. Forgive me, Father, for I have sinned... I have moved a document.

Dear Reader! I've got a new URI for my Atom feed. I apologize for the inconvenience. This entry will be the only one available at the original one. FeedBurner provides so many advantages that I cannot ignore them. Just click on the link above and subscribe, and you'll get my scribbles as usual. However, I've got another new feature as well. The really substantial responses I've had on this site came from my colleague Jacob and my wife Gertrud. Both agree that my note on commuting is brilliant. I'm grateful for that, but as far as I can tell from the download stats, they are the only ones who like it. Then Jacob told me I ought to have a forum for discussions in relation to my scribbles. Well, then... I've added one of those nifty JavaScript-based ones, disqus.com. Then Gertrud said: You'll never find the time to moderate that. We'll see.
I guess it's more likely that it will be yet another of those pathetic empty discussion things on the net.

Chris Anderson's book has finally appeared. The new one, Free, much discussed already before it was printed, and, of course, much hyped in Wired magazine. I've just ordered it from Amazon, but I've been walking around thinking about Internet business models, open access and the like. I've also written at length about this (in Swedish, I'm afraid). How is Free related to the situation for The Library? Well... I'd say: in just about any way you may imagine. However, let me start from an entirely different angle.

Blog and wiki software were at the time the first easy ways to really take advantage of the new medium based on hypertext, a simple protocol and a global network. Through additions such as syndication, pings (notifications that new content has appeared) and automatic cross-linking (based on pings) to enable discussions across blogs, it goes beyond hypertext. This gave us basically a new medium built for transaction and human interaction. When a blog really takes off, the traffic generated is enough for letting the ads pay your rent and buy milk for your kids. Obviously some quality is required, or it won't take off in the first place (quality which is in the eye of the beholder). Content which is hard to get elsewhere: that is what generates the attention needed for the transactions to take place. Blogging is cheap. You write about what's around you, about what you know. You benefit from the transaction and attention economy which feeds the Internet. Syndication, aggregation and harvesting will move your content around, and you'll mostly benefit from it. On the other hand, much of the user's navigation will take place elsewhere, and many users will not be exposed to your ads but to Google's. This led to Jakob Nielsen's outcry a few years ago, in which he labelled them leeches.
Michael Massing discusses this and other issues in a recent article in The New York Review of Books. He quotes David Simon's statement in testimony before the US Senate: [Internet] leeches... reporting from mainstream news publications, whereupon aggregating web-sites and bloggers contribute little more than repetition, commentary and froth. Meanwhile, readers acquire news from the aggregators and abandon its point of origin -- namely the newspapers themselves. In short, the parasite is slowly killing the host.

There is a problem deep in this statement. It assumes that there would be no news without the traditional media. Since the traditional media are in decline, we can be almost certain that this is not the case. Massing continues: ... Do we really need these big media institutions? What use do we have for Reuters, CNN, AFP etc.? They are good to have, but I'd say we could do without them. For example, the notion that we cannot get proper information from distant places without having expensive foreign correspondents travelling there seems extremely arrogant to me, on the verge of being racist. This is not to say that we don't need journalism. Indeed we do. But as Massing states, it is currently reinventing itself on the Internet. A new journalism which has adapted itself to the medium. One which is part of the transaction- and interaction-based technology and economy.

Now, let us return to The Library. The business model The Library is based upon is the idea that it saves money to share media. It started with scholars sharing papyrus scrolls two thousand years ago. It continued with libraries within monastic societies during the Middle Ages, when books were extremely valuable assets. Then, today, we're licensing electronic material, which is meant to save money as an alternative to pay-per-view. Those who invented The Library clearly had in mind collections of stuff that were not Free. Stuff that is scarce and only legal to copy for fair use.
Produced by organizations that think that Google is stealing their content. In Chris Anderson's world, and in Google's, the value of information is declining. Material which is Free or open access has a price corresponding to the zero marginal cost of producing one additional copy. Who will fund each university library's fumbling and overlapping attempts to collect, preserve, organize and provide access to virtually the same digital resources through very similar integrated library systems? And do these resources belong at The Library at all if they are Free? I leave these questions as an exercise for the reader (hint: think about the economics of blogging). And I end this by quoting Google CEO Eric Schmidt.

By and large we in the library world have, finally, accepted the reality. We no longer regard the online catalogue's poor usability as an information literacy problem. As such, there was a remedy: user education. The roots of education are bitter, but the fruit is sweet. The path towards mastering the online catalogue is like a very long session in the gym: you feel better the day after tomorrow. After (roughly) 2003, there was a general acceptance that our patrons deserved an easier-to-use interface, and the concept of the integrated library system was born. The new kind of system should be so easy to use that new generations used to Amazoogle and Googlezon would not be frustrated by the complications of the typical scan lists generated by the library OPAC upon the entering of a long and complicated query in CCL. The user preferences have been known for a long time. For instance, all users of the Open Text search engine used the simple form, except those that didn't. They used the advanced one, and they amounted to about 0.5% of the grand total. Such users belong to the tip of some long tail, and they do know what they want to know, but knowing that might not help if we cannot deliver. I heard a story from a librarian at The Royal Library.
A student appeared at the information desk at one of our reading rooms requesting a list of contemporary Japanese literature translated into Danish, for some paper in translation studies. Our nice ILS Primo provides an OK Amazoogle experience. But, well... A user-hostile boolean fielded search in the rusty old OPAC did the job. The good news is that the advanced search form is much cheaper to build than the simple one. That one is still the real challenge. Even for Google.

I started learning XSLT eight years ago. At the time, it was a fresh technology, and there were also intense discussions on the extensibility of the language. Since then XSLT 1.1 has disappeared down the drain, and XPath 2.0 & XQuery 1.0 have appeared on the market. The former of these has had a slow start. The latter, however, has gained acceptance, albeit a bit reluctantly. The stuff I want to discuss here is the writing of extension functions. This is regarded as an evil habit in some parts of the XML processing communities, but it is supported by most implementations. It is not to be recommended if you're producing a stand-alone application, but in many cases your XSLT code is an integrated part of an application. Most of my code is of this kind; in particular I'm writing XSLT for Xalan running in a Java environment. Now, why should I refrain from using regexes, just because the XSLT that Xalan implements lacks them? I cannot see why. And nobody would ask me to write ANSI SQL in spite of the fact that we use Oracle. Now, it has been proven mathematically that XSLT is a Turing-complete programming language. That means that it is possible to program anything which is possible to program with such a language, i.e., you have theoretically the same possibilities as in Java and C++. That doesn't mean that it is practical to do so.
It is, in my view, much easier to traverse an XML tree in XSLT than in (say) Java, whereas it is much easier to connect to a database, or to implement an XSLT processor, using a conventional programming language. It is here the XSLT extension functions become really useful. You can pass data from XSLT to your own functions written in your favourite programming language. I use this for tasks like indexing XML documents. When traversing a document, you select text and pass it to your function, which passes it to your indexing API. One big advantage is that the same software can index heterogeneous collections of XML documents just by using different XSLT scripts. They are easy to write, and you can add a new document type without recompiling your indexing software. This ability of a full-blown functional programming language (mind you, it's Turing complete) to interface with a host program, and from scripts inside it to call functions in other languages, is powerful. This, in combination with the fact that it is really good at what it is designed for, has made it an object of extreme affection for me. Later I'll return to examples of using XSLT for indexing complex XML objects.

Many things in life can best be represented as trees. Indeed, life itself is a tree of various life forms. We have the animal and plant kingdoms, and these main categories are further sub-divided into phyla, classes and genera. The animal kingdom is divided into invertebrates and vertebrates. The latter include amphibians, reptiles, birds and mammals. Human knowledge is also depicted as a tree. A lot of other things are trees. Hence, philosophers, mathematicians and computer scientists have put quite an effort into modelling stuff as trees. Within library & information science we have used the term classification systems for centuries, I suppose, and more recently we have started to talk about ontologies and knowledge organization. There is a lot of theory and a plethora of standards in this area.
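Returning for a moment to the indexing idea above: the traversal pattern itself is language-neutral. Here is a minimal Python sketch of walking an XML tree and emitting field/text pairs for an indexer. It is my own illustration of the technique, not the author's Xalan/XSLT code.

```python
import xml.etree.ElementTree as ET

def index_fields(elem, path=""):
    """Walk an XML tree and yield (path, text) pairs -- the moral
    equivalent of an XSLT script calling out to an indexing API
    while traversing a document."""
    here = f"{path}/{elem.tag}"
    if elem.text and elem.text.strip():
        yield here, elem.text.strip()
    for child in elem:
        yield from index_fields(child, here)

doc = ET.fromstring(
    "<record><title>Romeo and Juliet</title>"
    "<author>William Shakespeare</author></record>")
print(list(index_fields(doc)))
# [('/record/title', 'Romeo and Juliet'), ('/record/author', 'William Shakespeare')]
```

Supporting a new document type then means supplying a new path-to-field mapping, not recompiling the indexer, which is exactly the advantage claimed for the XSLT approach.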
Systems like Topic Maps (see also topicmaps.org) and, more recently, the Simple Knowledge Organization System (SKOS) are available for those who need a standard for knowledge organization. On a very down-to-earth level, I tend to think of controlled vocabularies as filters and navigators. The latter are trees and may contain many terms arranged into a tree of arbitrary depth. The filters are lists of controlled terms which may be applied wherever in the tree we are. Think of Yahoo Directory. You can browse into Tuscany and get a very long list of resources. Then there is a menu at the top where you can choose between "Business and Shopping", "Entertainment and Arts", "Recreation and Sports", "Travel and Transportation" and so forth. These are broader categorizations that apply to Firenze as well as Siena or New York City or any place you may want to visit. We in the Web business call them filters or facets. From a user perspective, you have to take into account that there are two cognitive processes involved when using services which, like Yahoo Directory, employ navigators and filters. The first process is more fundamental, and is by nurture and possibly nature a part of the human mind. This is to say: Ah, Tuscany, that is a part of Italy, isn't it? It's like distinguishing between a wasp and a bumble bee. On an evolutionary time-scale we've benefitted from being able to say: this is edible, this is not; or, this one has a sting and is likely to use it. This is to recognize something when you see it. The opposite is a much more difficult cognitive process: I want to know something about Tuscany; OK, I then start by clicking on Italy. You have to connect Italy with Tuscany, not only recognize the word Tuscany when you see it. This makes it more difficult to find Tuscany given Italy than Italy given Tuscany (in Yahoo it's in the breadcrumb path). Remembering might not be trivial.
One has to know in advance things like "frogs are amphibians" and "diabetes is an endocrinological disorder". On the other hand, performing successful searches in Google is even more demanding, in spite of Google Suggest and the spell-checking tools that have improved search comfort for many users.

There are structures inside a resource, too, and they require navigation as well. As a first approximation we can assume that all digital resources, such as electronic books, are trees as well. In the theory of text this model is called OHCO (ordered hierarchy of content objects). That view has been questioned, and it is for instance claimed that any ordinary text consists of many, overlapping hierarchies. I just love overlapping hierarchies. They are great fun. Anyway, if you search the scientific literature on the theory of text, you'll find everything you want to know about post-structuralist deconstruction of the OHCO model. Theoretically, I can understand the problems. But for me they are purely academic. If you instead search Google Books for some real content, nightingale lark balcony, you'll find Romeo & Juliet in her bedroom: Scene V of Act III of Romeo and Juliet, on page 189, on page 160 and on a lot of other page numbers, depending on which edition. In the world according to me, you may have a lot of hierarchies and they may overlap, and OHCO may be false. But if you want to be able to address content in a more clever way than the one chosen by Google Books, then you cannot do without a content model that looks like, well, very much like a tree.

I was asked to participate in an e-science pilot project. The pilot is directed towards social science, and its main goal is to find out what we could do in the area. Should libraries play a role in a future eScience infrastructure? If yes, what could we do for our patrons in this area?
That the answer to the first question should be yes is obvious for anyone responsible for the strategic development of any research library around the globe, but the second question is clearly more difficult to answer. Libraries have, traditionally, been occupied with four activities: they collect, preserve, organize and provide access to information. Indeed, a search for these four terms in Google seems to yield results very focussed on mission statements for North American research libraries. It isn't far-fetched to conclude that we could extend this role to include research data in digital form. In my vision, our role may not end here. The task should involve an infrastructure permitting our existing information (published in the form of books, journals and whatever) to live in a happy hypertextual symbiosis with the research data it is based on. In this ecosystem of data and models, scientists should be able to combine old stuff in new ways and continue building human knowledge by submitting new data. Every piece of data needs to be addressable, for the purpose of annotation as well as for retrieval. Every piece of data, any point in a diagram, needs an identifier. And it has to be a URI. No single organization will be able to afford to build the infrastructure needed by any research community. The good news is that no single organization needs to build it. The web of data, which has to be the eScience infrastructure, will almost certainly be the World Wide Web. Nothing less, nothing more. This is the vision of linked data and the semantic web, rephrased in the context of scientific information. By tradition libraries are aimed at some local community. In particular, they are funded by some local authority for that purpose. Hence they build collections suitable for that community. This local focus is a problem when everything else is worldwide.
The idea of the web of data as a foundation for eScience is a good idea, and most likely the only one that will work. Indeed, it is so obvious that any actor within the business of scientific information is about to adopt it; publishers, learned societies and libraries alike. However, there are others that are interested. Google eScience doesn't sound too exotic to me. Google eScience is not a threat, per se. The web of data needs data feeds. We can provide that. The problem is our local scope, and in particular the focus on a local community. But that boils down to the local funding.

The market has forced the major browser manufacturers to converge on standards. Microsoft cannot afford to produce a browser that won't work on Google Apps, and the Linux-based netbooks and netboxes make a difference. If software is a service, what is the client? Obviously the web client, the browser. But why are the search engines lagging behind? Browsers are capable of AJAX and advanced XML processing, but the search engines are still basically just removing tags and presenting raw text extracts. I have quite a few documents on this site that are written in raw XML, such as the Digital Humanities Infrastructures position paper from last year. Since it is about digital humanities and text encoding, I wrote it in TEI XML. The paper is presented using client-side XSLT. Just view source on that note, and you'll see my markup. The document is a blatant show-off; it is written in nerdcore vanity. At the top you'll see

<?xml version="1.0" encoding="UTF-8" ?>
<?xml-stylesheet href="render.xsl" type="text/xsl" ?>

The xml-stylesheet processing instruction is read by your browser, which then retrieves the stylesheet render.xsl. Without any further ado the document is then transformed into HTML and rendered using the CSS indicated by my XSLT script, for you to read. You can even inspect the resulting HTML using Firebug, if you've got a modern installation.
There is almost certainly a range of XML formats that are actually interpreted by Google. Which formats we don't know, but in practice we can hardly expect anything more exotic than common syndication formats (such as RSS and ATOM), microformats and RDFa. I have noticed that Google has been doing OAI harvesting for many years, most likely for the benefit of Scholar. One could envisage a number of viable indexing strategies for a general search engine faced with a document like my Digital Infrastructure Nerdcore Vanity paper. One would be to execute a global regex search and replace "<[^>]+>" with "" (which basically means: remove anything between angle brackets, or remove all tags). This would yield a "detagged incipit", which in this case is "Digital Humanities Infrastructures 2008-06-12 Digital Humanities Infrastructures Sigfrid Lundberg". Another strategy would be to use the style sheet provided by the nerd publishing this paper. That would yield the "transformed incipit", which for this document is (using the HTML body only) "Digital Humanities Infrastructures Sigfrid Lundberg". In using the transform, Google would then be able to use its usual HTML indexing techniques. For instance it would understand the title, and be able to parse all hypertext links. I made the search digital humanities infrastructure +lundberg and the result looks like "Digital Humanities Infrastructures 2008-06-12 Digital Humanities ... - Jun 5 ... Digital Humanities Infrastructures Sigfrid Lundberg slu@kb.dk Digital .... Also McCarty's humanities computing research infrastructure (or rather ..." The machine is actually using a detagged incipit as the title, which is what I think it does for PDF as well. This is very far from the ideas in Gleaning Resource Descriptions from Dialects of Languages (GRDDL), where providers of documents actually provide XSLT for the benefit of robots, which could use it to extract descriptions in RDF.
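The detagging strategy described above can be sketched in a few lines of Python. This is a sketch of the strategy, not Google's actual pipeline, and the sample TEI fragment is invented for the illustration.

```python
import re

def detagged_incipit(xml_text, length=80):
    """Strip markup the crude way described above: delete anything
    between angle brackets, collapse whitespace, keep the opening
    characters."""
    text = re.sub(r"<[^>]+>", "", xml_text)     # the post's regex
    text = re.sub(r"\s+", " ", text).strip()    # tidy whitespace
    return text[:length]

doc = """<TEI>
  <title>Digital Humanities Infrastructures</title>
  <date>2008-06-12</date>
  <p>Position paper.</p>
</TEI>"""

print(detagged_incipit(doc))
# Digital Humanities Infrastructures 2008-06-12 Position paper.
```

Note that element names, attributes and all document structure are lost, which is precisely why the detagged incipit ends up as a search-result "title": the engine has nothing better to show.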
At the Royal Library we have been working on building an infrastructure for publishing digitized material. It consists of collections of digital images, usually with very little textual content to go with them. The cataloguing of the images has been done by library staff using the image database system Cumulus from Canto. Cumulus is designed as a digital asset management system for people within the graphical industry. Canto produces an advanced Web user interface, but it is rather poor as regards dissemination of the collections on the web. For example, images published that way never ever appear in Google Images. Syndication of our content has not been possible, since the system has been lacking any concept of standardized metadata, which is essential for all collaboration between institutions in the library communities. To take advantage of the relative ease of use for the people handling the images, but still be able to provide access to our collections on the Internet, we have built an entirely new web interface. For performance reasons, we decided that it should not access the Cumulus database directly, but a mirror in our Oracle DB. Furthermore we wanted a REST XML web service layer. The latter goal was achieved by using two syndication services, built on top of Oracle. The two services were written as Java servlets, and supported Outline Processor Markup Language (OPML) and OpenSearch, together with RSS. The rationale for the latter choice of standard was that we wanted mainstream Internet standards rather than typical library standards such as SRU. OpenSearch is promoted by A9, a subsidiary of Amazon, and there are at least pledges of support from other big players such as Google and Yahoo. OPML was not as obvious a choice. However, OpenSearch is basically a syndication protocol, and as such OPML is a candidate, since its most common usage is to provide subject-structured access to feeds.
An early version of the system is available on our web site; the first version was released about a year ago. Our technology choices make it straightforward to syndicate the content. The gadget on this page is an example of this: it shows the OPML and RSS from the first collection of images released in this way, Danmarksbilleder, a collection of historical images from a few Danish cities. The widget itself is "The Original Feed Widget", a nifty thing coming from grazr.com. You can make one of your own by filling in the URI in the form on grazr.com. The use of truly simple de facto Internet standards is one advantage. But we needed a way to build traditional web contents on top of our two web services. Sites such as Danmarksbilleder mentioned above, and its "cousins" Kistebilleder, Daells varehus and Partiprogrammer. All these are actually delivered by one single servlet, which is most appropriately described as a mashup engine. The engine is written in-house, and supports multiple skins (as seen in the examples above -- Danmarksbilleder differs from Kistebilleder). In this application a skin is an XML document which contains the layout for the HTML page (it is little more than the HTML). Apart from HTML tags it also contains <kb:include/> tags identified by their id attribute. Each kb:include tag corresponds to a REST XML web service and is connected to an XSLT script via a configuration file. The mashup engine reads the skin at initialization, and upon a request it retrieves the required OPML or RSS, transforms the content, pastes these fragments into the skin DOM tree and finally delivers the content to the client. The mashup engine is not very well written. I can do better. Also, the current set of services requires an Oracle schema per collection disseminated, which is wasteful.
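The skin-plus-include mechanism described above can be illustrated with a small Python sketch. The namespace URI, skin layout and stand-in "transforms" are all invented for the illustration; the real engine is a Java servlet that fetches OPML/RSS and runs XSLT configured per include id.

```python
import xml.etree.ElementTree as ET

KB = "http://example.org/kb"   # hypothetical namespace for kb:include

# A toy skin: page layout with placeholder tags identified by id.
SKIN = """<html xmlns:kb="http://example.org/kb">
  <body>
    <div id="menu"><kb:include id="opml-menu"/></div>
    <div id="hits"><kb:include id="rss-hits"/></div>
  </body>
</html>"""

# Stand-ins for "fetch the web service, run its XSLT": each id maps
# to a callable returning an already-transformed fragment.
TRANSFORMS = {
    "opml-menu": lambda: ET.fromstring("<ul><li>Danmarksbilleder</li></ul>"),
    "rss-hits":  lambda: ET.fromstring("<ol><li>Image 1</li></ol>"),
}

def render(skin_xml):
    """Parse the skin, splice each placeholder's fragment into the
    DOM in place, and serialize the finished page."""
    root = ET.fromstring(skin_xml)
    tag = f"{{{KB}}}include"
    for parent in list(root.iter()):
        for i, child in enumerate(list(parent)):
            if child.tag == tag:
                parent.remove(child)
                parent.insert(i, TRANSFORMS[child.get("id")]())
    return ET.tostring(root, encoding="unicode")

print(render(SKIN))
```

The design keeps presentation (skin), data acquisition (services) and transformation (per-id XSLT) separate, which is why one servlet can serve several differently skinned sites.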
We haven't had the time yet to fix the first problem, but the second problem is about to be solved, since we have a second generation of web services in the pipeline for release real soon. In this application we have just one single Oracle database for all editions served. I will give you a report on that one when it is online.

I'm a terrible nerd. To use software like content management systems or blog software for building my web site is unthinkable for me. I have a substantial collection of old stuff written for various purposes in very different formats; it is a must to be able to integrate that material into the site. Some of this material is hard-core XML documents in Text Encoding Initiative or DocBook XML. There are also documents in legacy formats, such as RTF and (GNU) troff. I'm still using troff for production of new texts, usually via a transform from XML using XSLT. The more recent XML documents use client-side XSLT for viewing. This material is growing. I see it as an extension of my CV, but some of it could be interesting for users, since it is general documentation of how to solve certain kinds of problems. I also wanted to seamlessly integrate other, more lightweight, material. Like this article.

As I mentioned the other day, I wanted a new navigation system. There are two requirements on it: (i) it should be easy to follow for users, and (ii) the pages in it should be good landing pages for search engines. I felt that there was a need for more text that would generate hits in search engines without being irrelevant for users or misrepresenting the content. Finally, I wanted the site to be somewhat like a blog. In spite of the improved look and feel, I also wanted all the material already published on the site to retain its current URIs. The old site was just plain files; I did no scripting whatsoever on that site.
I couldn't, however, possibly manage a navigation system manually, so some scripting had to be involved in generating the site. To generate the navigation system, I had to catalog all existing 'static' material. In particular I had to index it manually using keywords or "tags". I decided what material to include and manually wrote a single monolithic metadata file in XML using the Atom syndication format. Having this file, it was easy to write an XSLT transform that generated the browse-by-year and browse-by-subject menus appearing in the left column on most pages (that took 61 lines). Then there is another XSLT transform aggregating the title, summary and link data into menus, such as the one in XML processing. This took about 200 lines of XSLT. Now, these two XSLT scripts take into account only the older kind of static material. I also wanted a new kind of bloggish 'dynamic' material. I needed to integrate the two kinds of material in a single structure. Just as I wrote the metadata for the older set of material in an Atom feed document, I write the bloggish kind of stuff in Atom entry documents. They live in a file tree /entries under the document root. To integrate the two I have a nifty little Perl script that does two things. It reads the Atom feed and parses it into a Document Object Model (DOM) object. Then it traverses the /entries file tree and parses each entry. First it drops an HTML transform of each entry. The entry itself is entered into the global DOM, which is finally printed as a complete Atom feed. This took 65 lines of Perl and 84 lines of XSLT. To get a blog-style home page I had to sort the entire set in reverse temporal order, print the most recent entry on the home page and finally print a few pointers to the more recent entries. These tasks took another 250 lines of XSLT.
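The aggregation step — collecting stand-alone Atom entry documents into one feed, newest first — can be sketched in Python. This is an illustration of the idea, not the author's 65-line Perl script; the feed title, entry titles and dates are invented.

```python
import xml.etree.ElementTree as ET

A = "http://www.w3.org/2005/Atom"

# Two stand-alone Atom entry documents, as they might live under
# /entries (titles and dates invented for the sketch).
ENTRIES = [
    f'<entry xmlns="{A}"><title>Old post</title>'
    f'<updated>2009-05-01T00:00:00Z</updated></entry>',
    f'<entry xmlns="{A}"><title>New post</title>'
    f'<updated>2009-07-12T00:00:00Z</updated></entry>',
]

def build_feed(entry_docs):
    """Aggregate entry documents into one feed element, newest
    first -- the same job the Perl script does with a DOM."""
    feed = ET.Element(f"{{{A}}}feed")
    ET.SubElement(feed, f"{{{A}}}title").text = "Sigfrid Lundberg's stuff"
    entries = [ET.fromstring(doc) for doc in entry_docs]
    # RFC 3339 timestamps sort correctly as plain strings, so a
    # reverse lexicographic sort gives reverse temporal order.
    entries.sort(key=lambda e: e.findtext(f"{{{A}}}updated"), reverse=True)
    feed.extend(entries)
    return feed

feed = build_feed(ENTRIES)
print(feed[1].findtext(f"{{{A}}}title"))   # New post
```

The string sort on `<updated>` is the one design shortcut worth noting: it is only safe because RFC 3339 dates in a uniform timezone are lexicographically ordered.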
In addition I have a utility which generates the /entries directories and prints a skeleton entry; that is another 100 lines of Perl, generating the skeleton using XML DOM. It takes less than a second to rebuild the entire site. Before the refurbishment, I used to copy all files to my server. In order to implement incremental updates, I've created a CVS repository on my server. Now I check in everything using CVS after building and testing, and then I publish my stuff by checking it out in the server's document root. This will scale up to some thousand blog entries. Then I'll ingest my entries in Oracle Berkeley DB XML and replace most XSLT with XML Query. Or I'll do that anyway, for the fun of it.

My mother was 43 years old when she gave birth to me in 1956. So she herself was born in 1913, the year before the First World War broke out. Now she is 96 and lives, as she says, on borrowed time. Time is something she takes most seriously. We talk on the phone every day. She calls at 20.10, almost to the minute. It is always a pleasant conversation. When it is time to say goodnight, she always comes back to how one must be grateful for every day that has passed, and that "tomorrow is a new day". Mother is right, of course! It is actually not a given that you will get to live through the whole of the next day. Those who have had a long life are more aware of that than we younger people are. I think about it sometimes, that she has lived through a large part of the twentieth century, and how she can talk about the unemployment of the 1930s and the Kreuger crash of 1931. She was a fair young woman of 18 then. Mother has been an old-age pensioner for practically my entire working life; she was born before the Russian revolution and is in fact still with us. She still works in the garden. "You needn't worry, I can't fall when I'm sitting on the ground," she told us recently. In March this year she heard on the radio about SEB's new share issue, and took her walker down to Handelsbanken.
As an old-age pensioner she earns 7,000 kronor a month, so to buy the five hundred shares her holding entitled her to, she had to raise her overdraft. We got 200 shares each, my brother and I. It was for her children's sake that she bought shares for borrowed money.

Today when we spoke she told me about the gravestone. My father died in September last year, 93 years and three months old, but it is only now that the gravestone is in place, and Mother is happy about that. It is in a somewhat older style, tall and narrow, not wide and low the way many gravestones are nowadays. I haven't seen it myself yet, but Mother says it is beautiful. Cut in grey granite from Östra Göinge, from the country around Glimåkra where Father grew up. The lettering is in Garamond, the typeface Father liked best, along with Berling Antikva. Father was a printer, and we probably have several hundred kilos of Berling Antikva in the cellar after him.

Mother's eyesight is failing. She has both cataracts and macular degeneration and can no longer read the newspaper. But listen to the radio she does. I believe she sits there in the kitchen listening most of the day. She doesn't miss a news broadcast if she can help it. Ekot, and especially the business news, is important. The radio serial and the radio theatre are her only intake of fiction these days. Mother started school in 1920. It was not so unusual at the time for the girls' schooling to be neglected in favour of the boys'. For her part it never became more than six years of elementary school. Instead she studied large parts of an upper-secondary curriculum at Komvux towards the end of the 1960s and the beginning of the 1970s. It did not include modern languages or mathematics, though; for those her prior schooling was too weak. Every day we discuss developments in the world, and she tells me other things she has heard on the radio. Mother has great confidence in Barack Obama and hopes for peace in the Middle East. Most recently she has talked a good deal about P.O. Enquist's book, which is running as the radio serial just now, read by the author himself.
We go home to Mother every weekend or every other weekend, my brother and I. We take turns, so that she gets visits more often. When I am with her we eat lunch together. Then we listen to the lunchtime news while we have a cup of coffee. Then we switch off the radio and go through the post, mostly window envelopes. Almost every time she asks me to read aloud from the newspaper. The stock market quotations for her shares and the obituaries I have to read every time we meet. That is how it is in life. When you are young you perhaps don't think much about it, but you mostly keep company with people who are like yourself. The young keep company with the young; families with small children see one another and discuss children. In middle age you discuss children, grandchildren and ailing parents. When you get really old you read the obituaries. When you are as old as Mother, there are few of her friends left to read about there. After coffee we talk about all manner of things. Often we take a walk, but she much prefers walking uphill to downhill. The walker rolls too fast, and going downhill it is easy for her to fail to keep up with it. Mother loves poetry. She knows a great many poems and hymn verses by heart, which she gladly quotes when the occasion fits. She has practised yoga since she turned 60, and she practises every day. I also believe that she daily recites all the poems and hymns she knows by heart, and in particular J.O. Wallin's: "Ah, when so much beauty in every vein of creation and life reveals itself, how fair must then the very source be, the eternally clear one." Though she knows at least five more verses.

The ATOM syndication format has been around for almost five years. In comparison with the diverse fauna kept in the menagerie abbreviated by the three-character acronym RSS, ATOM is a well-kept standard and much more useful as well. Now, I've used ATOM before refurbishing my personal web site, but I hadn't used it seriously.
That is, I hadn't manually encoded a sizeable amount of text in it, validated the result, and then written software using the data. ATOM is the foundation of this site, since the data used for generating the navigation system is encoded that way. I may return to that later on. Tim Bray lists, and provides a thorough discussion of, a number of characteristics shared by successful mark-up languages (see his article On Language Creation). The list of success criteria includes items such as extensibility, clever but not too extensive use of XML namespaces, and the possibility of implementing the language using many, widely different, data models. From Bray's perspective, ATOM seems designed for success. I don't know the statistics concerning syndication format popularity, but if ATOM isn't the leader yet, it will become so sooner or later. Possibly in RDF disguise. From a modelling point of view there are a few annoying inconsistencies. Let us, for example, compare the <author> element with the <title> element. A resource may have an author, and the author may have a number of properties, such as a name, an email address and a home page. The author of this note is encoded as: <author> <name>Sigfrid Lundberg</name> <uri></uri> </author> This is, in my view, a sign of good content modelling when it comes to metadata. A resource may have a title. A title may have different components, for example a main title and a sub-title. The main and sub parts are properties of the title, not of the resource. The authors of the ATOM spec did not see it that way: a feed may have both a title and a subtitle, as direct properties of the feed. This is annoying for a semantic-purist nerd like me. Entries, however, may not have sub-titles at all, which in my view is an inconsistency as well. The biggest flaw is the encoding of category data. A resource belongs to a category, i.e., this membership is a property of the resource. The category may have an identifier and a name, and the name is different depending on the language.
However, ATOM forces me to use a construct along the lines of

<category term="structuralwebdesign" label="Structural web design" xml:lang="en"/>
<category term="structuralwebdesign" label="Strukturell webbdesign" xml:lang="sv"/>

which in my view is less than excellent. I would have preferred something like

<category label="structuralwebdesign">
  <name xml:lang="en">Structural web design</name>
  <name xml:lang="sv">Strukturell webbdesign</name>
</category>

These are shortcomings, but I can live with them. But expect changes in this area. The semantic web is here already, and with SKOS becoming a de facto standard on the Internet, the syntactic communities will have to come up with something more clever for the categorization of syndicated web resources.

A week or two ago I registered this site with Google Analytics. My Internet service provider's statistics are virtually useless. Then I've spent quite some time cutting and pasting the JavaScript into my olde stuff. I haven't cared about this site for about four years, and as far as I can tell now, this place is hit by one or two people per day, each of whom stays less than a minute. Why they come I don't know, and Google Analytics won't tell me yet. Now, I didn't expect anything else. I use the site as a store for old documentation, of which just a few things could possibly be valuable for students of Internet history. Most likely, though, they hardly even deserve a footnote in posterity. I started this site in the autumn of 1995. The oldest documents still in the public collection are from 1996. Now, started isn't really the right word. I never planned to build a web site with old documents. Basically, I started a web server on a development machine and devoted an area to project documentation and reports. I could then drop a piece of text there and send the URI to my colleagues or customers. That is how it started. This worked well to begin with, but about five years later the rudimentary structure per project was lost. I then made a backup copy, created a structure by year, and copied stuff from the old site into it. This is the way it is now.
I started the current site in 2005, when I left Lund University for a new job in Copenhagen. I felt that I just couldn't let this material disappear. The site contains quite a few archived files. Counting them gives 241 HTML files, 21 PDF files, 84 XML files and so forth. Of these I regard 37 as publications of some kind, in the sense that there are links to them that may be reached from the home page. It is still just a sample; the raw collection contains 2452 HTML files. I have no idea if I have benefited from the site, or if the material has been useful to someone else. For a few years, notably 2002, 2003 and 2004, I did have an idea that I should make an active effort to maintain a web site. Some of the best texts are from that period. I earn my livelihood as a developer for the World Wide Web. But this site is not a part of what I do for my living. So, should I keep it available here or let it perish? I pay my ISP 250 SEK per month for this, which includes a Linux shell prompt available via secure shell and some storage. The storage side of it is no big deal; I already have a number of USB hard disks around. I also have *.zip and *.tar.gz files as attachments on my Gmail account, which provides huge storage and good value for money in comparison with what I get from my ISP. But then, 250 SEK is no big deal either. So I'll keep it, for the time being. I don't think my stuff contributes very much to the digital refuse spread over the net. I've added an ATOM syndication feed for the garbage I decided to keep, and subject browsing based on the ATOM categories. Finally, I've added some texts from the last few years. These are basically of the same kind as the rest of my stuff: reports and papers written for some specific purpose and republished here entirely out of context.
http://feeds.feedburner.com/SigfridLundbergsStuff?format=xml
References Added Later Updated 7-may: Minor clarifications to probabilities in "Proving That a Shorter Solution Exists" section. Pop quiz: what should 154047519861 % 2119 produce? Perl, Python and Ruby are unanimous on that score: the correct answer is 157. Surprisingly, however, running the following PHP test program: $L_num = 154047519861 % 2119; $L_str = "154047519861" % 2119; echo "L_num : $L_num\n"; echo "L_str : $L_str\n"; [download] produces: L_num : -1324 L_str : 49 [download] <?while(11^$n=md5(fgetc(STDIN).XXXXXX)%2119+1)$t+=$n-2*$t%$n?><?=$t; [download] Though I admit I never expected to break the 70 barrier, I now believe that a score in the low 60s is a certainty. Here's why. Suppose you could find a direct hit: M -> 1000, D -> 500, C -> 100, L -> 50, X -> 10, V -> 5, I -> 1, newline -> 0. You could then replace the: while(11^$n=md5(fgetc(STDIN).XXXXXX)%2119+1) [download] with: while($n=md5(fgetc(STDIN).XXXXXX)*1) [download] How long would the magic string need to be to find such a solution? The probability of matching three specific characters followed by [a-f] is (1/16)*(1/16)*(1/16)*(6/16) = 1/10923, which is the rough odds of matching M, D and C (via "1e3", "500", "5e2", "100" or "1e2"). Matching 50 and 10 is more likely: (1/16)*(1/16)*(6/16) = 1/689. Matching five and one is more likely still: (1/16) * (6/16) = 1/42. Finally, matching zero is only: 6/16 = 3/8. So the, admittedly very rough, odds of such a miraculously lucky hit are (1/10923)**3 * (1/689)**2 * (1/42)**2 * (3/8), which comes to the order of one in 10**21. Since there are 180 characters available, a ten-character magic string can produce 180**10 ≈ 10**22 distinct combinations, making such a lucky hit a virtual certainty. I therefore declare that a 64-stroke solution is certain and shorter ones possible. This problem, however, is now no longer one of golf, but of computation.
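The pop quiz and the odds estimate above can be sanity-checked with ordinary Python integers. A small sketch (the 32-bit wrap-around below assumes a 32-bit PHP build, as the article's -1324 figure implies; the odds follow the article's own rough factors):

```python
# Sanity checks for the claims above, using Python's arbitrary-precision ints.

# 1. The pop quiz: Perl, Python and Ruby all answer 157.
assert 154047519861 % 2119 == 157

# 2. A 32-bit PHP build first wraps the literal to a signed 32-bit integer,
#    then applies C-style (truncated) modulo, reproducing the -1324 result.
n = 154047519861 % 2**32              # wrap to 32 bits ...
if n >= 2**31:
    n -= 2**32                        # ... and reinterpret as signed
r = -(abs(n) % 2119) if n < 0 else n % 2119
print(n, r)                           # -571302795 -1324

# 3. The back-of-envelope odds of a "direct hit" magic string ...
odds = (1/10923)**3 * (1/689)**2 * (1/42)**2 * (3/8)
assert 1e-22 < odds < 1e-21           # of the order of one in 10**21

# ... versus the search space of a ten-character magic string.
assert 10**22 < 180**10 < 10**23      # roughly 10**22 combinations
```

The string case ("154047519861" % 2119 == 49) is not modelled here, since it depends on PHP's string-to-number coercion rather than plain integer arithmetic.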
Even with super computers, 10**21 combinations is a daunting challenge and I expect you'd need to set up a cooperative grid computing project to find it. Such a project, of course, is far less worthy than SETI@home or Einstein@Home and would be a horrible waste of CPU cycles. :) Update 2012: it seems you don't need a cooperative grid computing project anymore, you can just rent the power in the cloud for around $2 per hour. See also Cracking Passwords in The Cloud: Amazon's new EC2 GPU Instances. Update 2014: See also The 10**21 Problem (Part I), an example of evaluating 10**21 combinations in a C program to find a Python magic formula. Though I'd long stopped searching for fresh ideas to try in this game, a little while after this ToastyX-provoked epiphany, I could not help but wonder if the string bitwise & PHP golfing trick might somehow be similarly applied to the older Roman to Decimal game. I started with an inspection of my previous best PHP solution: <?while(+$n=md5(fgetc(STDIN).XXXXXXXXXX))$t+=$n-2*$t%$n?><?=$t; [download] Can good ol' string bitwise & help here? To find out, I started by running a little Perl program: for my $i (48..57,97..102) { printf " %08b (ord %3d %c)\n", $i, $i, $i; } [download] 00110000 (ord 48, "0") 00110001 (ord 49, "1") 00110010 (ord 50, "2") 00110011 (ord 51, "3") 00110100 (ord 52, "4") 00110101 (ord 53, "5") 00110110 (ord 54, "6") 00110111 (ord 55, "7") 00111000 (ord 56, "8") 00111001 (ord 57, "9") 01100001 (ord 97, "a") 01100010 (ord 98, "b") 01100011 (ord 99, "c") 01100100 (ord 100, "d") 01100101 (ord 101, "e") 01100110 (ord 102, "f") [download] 00111001 (ord 57, "9" character) & 01110000 (ord 112, "p" character) = 00110000 (ord 48, "0" character) [download] What about the first character? 
A simple brute force search program showed that the best we can do here is &u, which transforms [0-9] like so: 0 0 1 1 2 0 3 1 4 4 5 5 6 4 7 5 8 0 9 1 [download] Putting these two together, a &uppp bitwise string operation truncates an MD5 digest string to four characters as follows: [0145a`de] [0`] [0`] [0`] [download] The next step was to update my old C search program with an adjusted my_md5 function as follows: nib &= 5; // <-- added this line outi = outi * 10 + nib; nib = (unsigned char)(mdContext.A & 0xF); if (nib > 9) return outi; nib = 0; // <-- added this line outi = outi * 10 + nib; /* 2nd byte */ nib = (unsigned char)((mdContext.A >> 12) & 0xF); if (nib > 9) return outi; nib = 0; // <-- added this line outi = outi * 10 + nib; nib = (unsigned char)((mdContext.A >> 8) & 0xF); if (nib > 9) return outi; nib = 0; // <-- added this line outi = outi * 10 + nib; return outi; // <-- only care about first four chars now } [download] Because PHP and Perl have essentially the same string bitwise operator behavior, I was able to test these ideas in Perl. You can see more clearly how all this works by running the following Perl test program: use Digest::MD5 qw(md5_hex); for my $r (M, D, C, L, X, V, I, "\n") { my $hd = md5_hex( $r .
PQcUv ); print "$r $hd ", $hd & uppp, "\n"; } [download] M 993332d3c4fa8b3839761ca4dd480f7b 1000 D 503c3b61b971c24100e97c3297882b22 500` C 112da378c970c5a0ff7769acd85095c7 100` L 58dbff2e42141165a1e04bbc629df030 50`` X 90a86e4f0b96ada8fb44bcb43d16f25e 10`0 V 7c9f70b74e8f19325e8dc62d31a9dd87 5`0` I 9bb0bbb1977969abd73e5f8121e6cff7 1``0 c697df2fcf84f10272369ee4482e5c1c a000 [download] Here's an example Perl solution to this game: use Digest::MD5 md5_hex; $\+=$n-2*$n%($n=uppp&md5_hex$_.PQcUv)for<>=~/./g;print [download] As you might expect, having a built-in md5 function made this new approach a winner in PHP, as demonstrated by the following 63 stroker: <?while(+$n=md5(fgetc(STDIN).PQcUv)&uppp)$t+=$n-2*$t%$n?><?=$t; [download] The shortest known PHP solution to this game is thus reduced from 68 to 63 strokes, and without requiring a super computer. A general golfing lesson to take from this little episode is that if you get stuck on one golf game, try another. You may be able to apply tricks learnt in the new game to the one you were previously stuck on. Glad you enjoyed it! I found the whole thing surreal/hilarious while it was running, especially when I thought about how I'd go if this game had a one week time limit like the good ol' traditional Perl golf games. It didn't run continuously for six months though. The search program could be run in pieces, as indicated by the C source code below, and running four copies of it simultaneously on a quad core machine helped (as I said, the required search is fundamentally highly parallelizable). Note that this C code handles the md5("M".magic-string) case. Small changes allow it to also be used for the md5(magic-string."M") case. Any suggestions for performance improvements to this C code are very welcome. Oh, and after running this searcher and saving its stdout, I ran another small PHP program to confirm C/PHP md5 compatibility and to search for possible hits by applying the mod operator. 
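The string bitwise & behaviour underlying the &uppp trick can be cross-checked outside PHP and Perl. A small Python sketch (the mask helper is mine, written to emulate the operator; the digest literals are the ones printed by the Perl demo above):

```python
import hashlib

def mask(s, key):
    """Emulate the PHP/Perl string bitwise &: AND the character codes
    pairwise, truncating the result to the shorter operand (as zip does)."""
    return ''.join(chr(ord(a) & ord(b)) for a, b in zip(s, key))

# '9' & 'p': 00111001 & 01110000 -> 00110000, i.e. the character '0'.
assert mask('9', 'p') == '0'

# '& u' folds the decimal digits onto {0,1,4,5}, matching the table above.
assert mask('0123456789', 'u' * 10) == '0101454501'

# '& uppp' keeps four characters from [0145a`de][0`][0`][0`]; applied to
# the digests printed by the Perl demo it yields the roman-digit values.
for digest, expect in [('993332d3c4fa8b3839761ca4dd480f7b', '1000'),  # M
                       ('503c3b61b971c24100e97c3297882b22', '500`'),  # D
                       ('58dbff2e42141165a1e04bbc629df030', '50``'),  # L
                       ('9bb0bbb1977969abd73e5f8121e6cff7', '1``0')]: # I
    assert mask(digest, 'uppp') == expect

# The same masking computed from scratch for "M" . "PQcUv":
print(mask(hashlib.md5(b'MPQcUv').hexdigest(), 'uppp'))
```

Note how the backtick characters in values like 500` are harmless here: PHP's numeric cast (*1 or the $t arithmetic) stops at the first non-digit, so 500` evaluates as 500.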
/* findmin6k.c For asm version (copy m5_win32.obj from openssl crypto/md5/asm direc +tory) and build with: cl /W3 /O2 findmin6k.c m5_win32.obj (windows) gcc -Wall -O3 -o findmin6k findmin6k.c mx86-elf.o (32-bit Linux +) gcc -Wall -O3 -o findmin6k findmin6k.c md5-x86_64.o (64-bit Linux +) Will likely need a C version to work with NVIDIA graphics card CUDA. For a pure C version, just write a C version of md5_block_asm_data_o +rder() and link with that. For example: cl /W3 /O2 findmin6k.c m5_ssl_le.c cl /W3 /O2 findmin6k.c m5_nsa_le.c For now, assume 32-bit int and little endian (_le). Example run on Unix: nohup nice ./findmin6k 65 66 48 160 >6k-65-1.tmp 2>err1.tmp & nohup nice ./findmin6k 65 66 160 256 >6k-65-2.tmp 2>err2.tmp & */ #include <stdio.h> #include <stdlib.h> #include <string.h> #include <time.h> static void test_endian() { int i; char sStrTmp[64]; int iIntSize = (int)sizeof(int); int iLongSize = (int)sizeof(long); union { unsigned long l; char c[sizeof(long)]; } u; printf("sizeof(int)=%d\nsizeof(long)=%d\nsizeof(size_t)=%d\n", iIntSize, iLongSize, (int)sizeof(size_t)); if (iLongSize > 4) { u.l = (0x08070605L << 32) | 0x04030201L; } else { u.l = 0x04030201L; } memset(sStrTmp, 0, sizeof(sStrTmp)); for (i = 0; i < iLongSize; ++i) { sprintf(sStrTmp+i, "%c", u.c[i]+'0'); } printf("byteorder=%s (1234=little-endian, 4321=big-endian)\n", sStr +Tmp); } /* Uncomment next line to use openssl asm version of md5_block_asm_dat +a_order() */ #define MY_ASM 1 #define MD5_LONG unsigned int typedef struct MD5state_st { MD5_LONG A,B,C,D; } MD5_CTX; #ifdef __cplusplus extern "C" { #endif #ifdef MY_ASM void md5_block_asm_data_order(MD5_CTX*, const void*, size_t); #else void md5_block_asm_data_order(MD5_CTX*, const void*); #endif #ifdef __cplusplus } #endif /* -------------------------------------------------------------- */ #define HI_SENTINEL 1000000000; outi = outi * 10 + nib; nib = (unsigned char)(mdContext.A & 0xF); if (nib > 9) return outi; outi = outi * 10 + nib; /* 2nd 
byte */ nib = (unsigned char)((mdContext.A >> 12) & 0xF); if (nib > 9) return outi; outi = outi * 10 + nib; nib = (unsigned char)((mdContext.A >> 8) & 0xF); if (nib > 9) return outi; outi = outi * 10 + nib; /* 3rd byte */ nib = (unsigned char)((mdContext.A >> 20) & 0xF); if (nib > 9) return outi; outi = outi * 10 + nib; nib = (unsigned char)((mdContext.A >> 16) & 0xF); if (nib > 9) return outi; outi = outi * 10 + nib; /* 4th byte */ nib = (unsigned char)((mdContext.A >> 28) & 0xF); if (nib > 9) return outi; outi = outi * 10 + nib; nib = (unsigned char)((mdContext.A >> 24) & 0xF); if (nib > 9) return outi; if (outi > 9999999) return HI_SENTINEL; outi = outi * 10 + nib; /* 5th byte */ nib = (unsigned char)((mdContext.B >> 4) & 0xF); if (nib > 9) return outi; if (outi > 9999999) return HI_SENTINEL; outi = outi * 10 + nib; nib = (unsigned char)(mdContext.B & 0xF); if (nib > 9) return outi; if (outi > 9999999) return HI_SENTINEL; outi = outi * 10 + nib; /* 6th byte */ nib = (unsigned char)((mdContext.B >> 12) & 0xF); if (nib > 9) return outi; if (outi > 9999999) return HI_SENTINEL; outi = outi * 10 + nib; nib = (unsigned char)((mdContext.B >> 8) & 0xF); if (nib > 9) return outi; if (outi > 9999999) return HI_SENTINEL; outi = outi * 10 + nib; /* 7th byte */ nib = (unsigned char)((mdContext.B >> 20) & 0xF); if (nib > 9) return outi; if (outi > 9999999) return HI_SENTINEL; outi = outi * 10 + nib; nib = (unsigned char)((mdContext.B >> 16) & 0xF); if (nib > 9) return outi; if (outi > 9999999) return HI_SENTINEL; outi = outi * 10 + nib; /* 8th byte */ nib = (unsigned char)((mdContext.B >> 28) & 0xF); if (nib > 9) return outi; if (outi > 9999999) return HI_SENTINEL; outi = outi * 10 + nib; nib = (unsigned char)((mdContext.B >> 24) & 0xF); if (nib > 9) return outi; if (outi > 9999999) return HI_SENTINEL; outi = outi * 10 + nib; #if 0 /* XXX: very unlikely to matter (only diabolicals like 0000000000000 +00999a) */ /* 9th byte */ nib = (unsigned char)((mdContext.C >> 4) & 0xF); 
if (nib > 9) return outi; outi = outi * 10 + nib; nib = (unsigned char)(mdContext.C & 0xF); if (nib > 9) return outi; outi = outi * 10 + nib; /* 10th byte */ nib = (unsigned char)((mdContext.C >> 12) & 0xF); if (nib > 9) return outi; outi = outi * 10 + nib; nib = (unsigned char)((mdContext.C >> 8) & 0xF); if (nib > 9) return outi; outi = outi * 10 + nib; /* 11th byte */ nib = (unsigned char)((mdContext.C >> 20) & 0xF); if (nib > 9) return outi; outi = outi * 10 + nib; nib = (unsigned char)((mdContext.C >> 16) & 0xF); if (nib > 9) return outi; outi = outi * 10 + nib; /* 12th byte */ nib = (unsigned char)((mdContext.C >> 28) & 0xF); if (nib > 9) return outi; outi = outi * 10 + nib; nib = (unsigned char)((mdContext.C >> 24) & 0xF); if (nib > 9) return outi; outi = outi * 10 + nib; /* 13th byte */ nib = (unsigned char)((mdContext.D >> 4) & 0xF); if (nib > 9) return outi; outi = outi * 10 + nib; nib = (unsigned char)(mdContext.D & 0xF); if (nib > 9) return outi; outi = outi * 10 + nib; /* 14th byte */ nib = (unsigned char)((mdContext.D >> 12) & 0xF); if (nib > 9) return outi; outi = outi * 10 + nib; nib = (unsigned char)((mdContext.D >> 8) & 0xF); if (nib > 9) return outi; outi = outi * 10 + nib; /* 15th byte */ nib = (unsigned char)((mdContext.D >> 20) & 0xF); if (nib > 9) return outi; outi = outi * 10 + nib; nib = (unsigned char)((mdContext.D >> 16) & 0xF); if (nib > 9) return outi; outi = outi * 10 + nib; /* 16th byte */ nib = (unsigned char)((mdContext.D >> 28) & 0xF); if (nib > 9) return outi; outi = outi * 10 + nib; nib = (unsigned char)((mdContext.D >> 24) & 0xF); if (nib > 9) return outi; outi = outi * 10 + nib; #endif return outi; } #define M_TARG 999 #define D_TARG 499 #define C_TARG 99 #define L_TARG 49 #define X_TARG 9 #define V_TARG 4 #define I_TARG 0 #define SEARCH_WIDTH 6 #define H_LEN (SEARCH_WIDTH+1) #define MD5_CBLOCK 64 static void do_one(int start_val, int end_val, int start_val_2, int en +d_val_2) { unsigned char m_buf[MD5_CBLOCK]; unsigned char 
d_buf[MD5_CBLOCK]; unsigned char c_buf[MD5_CBLOCK]; unsigned char l_buf[MD5_CBLOCK]; unsigned char x_buf[MD5_CBLOCK]; unsigned char v_buf[MD5_CBLOCK]; unsigned char i_buf[MD5_CBLOCK]; unsigned char n_buf[MD5_CBLOCK]; int nmiss = 0; int nsent = 0; int q1 = 0; int q2 = 0; int q3 = 0; int q4 = 0; int q5 = 0; int q6 = 0; int m_char = 'M'; int d_char = 'D'; int c_char = 'C'; int l_char = 'L'; int x_char = 'X'; int v_char = 'V'; int i_char = 'I'; int n_char = 10; int m5 = 0; int d5 = 0; int c5 = 0; int l5 = 0; int x5 = 0; int v5 = 0; int i5 = 0; int n5 = 0; time_t tstart = time(NULL); clock_t cstart = clock(); time_t tend; clock_t cend; memset(m_buf, 0, MD5_CBLOCK); memset(d_buf, 0, MD5_CBLOCK); memset(c_buf, 0, MD5_CBLOCK); memset(l_buf, 0, MD5_CBLOCK); memset(x_buf, 0, MD5_CBLOCK); memset(v_buf, 0, MD5_CBLOCK); memset(i_buf, 0, MD5_CBLOCK); memset(n_buf, 0, MD5_CBLOCK); m_buf[H_LEN] = 0x80; d_buf[H_LEN] = 0x80; c_buf[H_LEN] = 0x80; l_buf[H_LEN] = 0x80; x_buf[H_LEN] = 0x80; v_buf[H_LEN] = 0x80; i_buf[H_LEN] = 0x80; n_buf[H_LEN] = 0x80; m_buf[MD5_CBLOCK-8] = H_LEN * 8; d_buf[MD5_CBLOCK-8] = H_LEN * 8; c_buf[MD5_CBLOCK-8] = H_LEN * 8; l_buf[MD5_CBLOCK-8] = H_LEN * 8; x_buf[MD5_CBLOCK-8] = H_LEN * 8; v_buf[MD5_CBLOCK-8] = H_LEN * 8; i_buf[MD5_CBLOCK-8] = H_LEN * 8; n_buf[MD5_CBLOCK-8] = H_LEN * 8; m_buf[0] = m_char; d_buf[0] = d_char; c_buf[0] = c_char; l_buf[0] = l_char; x_buf[0] = x_char; v_buf[0] = v_char; i_buf[0] = i_char; n_buf[0] = n_char; for (q1 = start_val; q1 < end_val; ++q1) { if (q1==91 || q1==92 || q1==93 || q1==94 || q1==96 || q1==123 || q1==124 || q1==125 || q1==126) continue; m_buf[1] = q1; d_buf[1] = q1; c_buf[1] = q1; l_buf[1] = q1; x_buf[1] = q1; v_buf[1] = q1; i_buf[1] = q1; n_buf[1] = q1; for (q2 = start_val_2; q2 < end_val_2; ++q2) { if (q2==91 || q2==92 || q2==93 || q2==94 || q2==96 || q2==123 || q2==124 || q2==125 || q2==126 || q2==58 || q2==59 || q2==60 || q2==61 || q2==62 || q2==63 + || q2==64) continue; m_buf[2] = q2; d_buf[2] = q2; c_buf[2] = 
q2; l_buf[2] = q2; x_buf[2] = q2; v_buf[2] = q2; i_buf[2] = q2; n_buf[2] = q2; fprintf(stderr, "%d %d\n", q1, q2); for (q3 = 48; q3 < 256; ++q3) { if (q3==91 || q3==92 || q3==93 || q3==94 || q3==96 || q3==123 || q3==124 || q3==125 || q3==126 || q3==58 || q3==59 || q3==60 || q3==61 || q3==62 || q3== +63 || q3==64) continue; m_buf[3] = q3; d_buf[3] = q3; c_buf[3] = q3; l_buf[3] = q3; x_buf[3] = q3; v_buf[3] = q3; i_buf[3] = q3; n_buf[3] = q3; for (q4 = 48; q4 < 256; ++q4) { if (q4==91 || q4==92 || q4==93 || q4==94 || q4==96 || q4==123 || q4==124 || q4==125 || q4==126 || q4==58 || q4==59 || q4==60 || q4==61 || q4==62 || q4 +==63 || q4==64) continue; m_buf[4] = q4; d_buf[4] = q4; c_buf[4] = q4; l_buf[4] = q4; x_buf[4] = q4; v_buf[4] = q4; i_buf[4] = q4; n_buf[4] = q4; for (q5 = 48; q5 < 256; ++q5) { if (q5==91 || q5==92 || q5==93 || q5==94 || q5==96 || q5==123 || q5==124 || q5==125 || q5==126 || q5==58 || q5==59 || q5==60 || q5==61 || q5==62 || +q5==63 || q5==64) continue; m_buf[5] = q5; d_buf[5] = q5; c_buf[5] = q5; l_buf[5] = q5; x_buf[5] = q5; v_buf[5] = q5; i_buf[5] = q5; n_buf[5] = q5; for (q6 = 48; q6 < 256; ++q6) { if (q6==91 || q6==92 || q6==93 || q6==94 || q6==96 || q6==123 || q6==124 || q6==125 || q6==126 || q6==58 || q6==59 || q6==60 || q6==61 || q6==62 | +| q6==63 || q6==64) continue; m_buf[6] = q6; d_buf[6] = q6; c_buf[6] = q6; l_buf[6] = q6; x_buf[6] = q6; v_buf[6] = q6; i_buf[6] = q6; n_buf[6] = q6; nmiss = 0; m5 = my_md5(m_buf); if (m5 == 0) continue; if (m5 != M_TARG) { if (m5 <= M_TARG+M_TARG) continue; ++nmiss; } d5 = my_md5(d_buf); if (d5 == 0) continue; if (d5 != D_TARG) { if (d5 <= M_TARG+D_TARG) continue; ++nmiss; } c5 = my_md5(c_buf); if (c5 == 0) continue; if (c5 != C_TARG) { if (c5 <= M_TARG+C_TARG) continue; ++nmiss; } if (nmiss > 2) continue; l5 = my_md5(l_buf); if (l5 == 0) continue; if (l5 != L_TARG) { if (l5 <= M_TARG+L_TARG) continue; ++nmiss; } if (nmiss > 2) continue; x5 = my_md5(x_buf); if (x5 != X_TARG) { if (x5 <= M_TARG+X_TARG) 
continue; ++nmiss; } if (nmiss > 2) continue; v5 = my_md5(v_buf); if (v5 != V_TARG) { if (v5 <= M_TARG+V_TARG) continue; ++nmiss; } if (nmiss > 2) continue; i5 = my_md5(i_buf); if (i5 != I_TARG) { if (i5 <= M_TARG+I_TARG) continue; ++nmiss; } if (nmiss > 2) continue; n5 = my_md5(n_buf); nsent = 0; if (m5 == HI_SENTINEL) { m5 += 1000; ++nsent; } if (d5 == HI_SENTINEL) { d5 += 500; ++nsent; } if (c5 == HI_SENTINEL) { c5 += 100; ++nsent; } if (l5 == HI_SENTINEL) { l5 += 50; ++nsent; } if (x5 == HI_SENTINEL) { x5 += 10; ++nsent; } if (v5 == HI_SENTINEL) { v5 += 5; ++nsent; } if (i5 == HI_SENTINEL) { i5 += 1; ++nsent; } if (n5 == HI_SENTINEL) { ++nsent; } printf("N %d nsent=%d: %d %d %d %d %d %d: %d %d %d %d %d + %d %d %d\n", nmiss, nsent, q1, q2, q3, q4, q5, q6, m5, d5, c5, l5, +x5, v5, i5, n5); fflush(stdout); if (m5==d5 || m5==c5 || m5==l5 || m5==x5 || m5==v5 || m5 +==i5) continue; if (d5==c5 || d5==l5 || d5==x5 || d5==v5 || d5==i5) cont +inue; if (c5==l5 || c5==x5 || c5==v5 || c5==i5) continue; if (l5==x5 || l5==v5 || l5==i5) continue; if (x5==v5 || x5==i5) continue; if (v5==i5) continue; printf("N %d: %d %d %d %d %d %d; %d %d %d %d %d %d %d %d +\n", nmiss, q1, q2, q3, q4, q5, q6, m5, d5, c5, l5, x5, v5, + i5, n5); fflush(stdout); } } } } } } tend = time(NULL); cend = clock(); printf("(wall clock time:%ld secs, cpu time:%.2f units)\n", (long) (difftime(tend, tstart)+0.5), (double) (cend-cstart) / (double)CLOCKS_PER_SEC); } int main(int argc, char* argv[]) { int start_val = 65; int end_val = 256; int start_val_2 = 48; int end_val_2 = 256; if (argc > 2) { start_val = atoi(argv[1]); end_val = atoi(argv[2]); if (argc == 5) { start_val_2 = atoi(argv[3]); end_val_2 = atoi(argv[4]); } } printf("start: %d %d, %d %d\n", start_val, end_val, start_val_2, end +_val_2); test_endian(); fflush(stdout); do_one(start_val, end_val, start_val_2, end_val_2); return 0; } [download]
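For readers without a C toolchain: the digit extraction that my_md5 performs on the raw little-endian state words is equivalent to taking the longest decimal-digit prefix of the hex digest. A Python sketch of that reduction (the helper name is mine; the HI_SENTINEL overflow guard is left out for brevity):

```python
import hashlib

def leading_decimal(hexdigest):
    """Integer value of the longest run of decimal digits at the start of
    an MD5 hex digest; 0 if it starts with a hex letter. Mirrors what the
    C helper my_md5 extracts, minus its HI_SENTINEL overflow guard."""
    out = 0
    for ch in hexdigest:
        if not ch.isdigit():
            break
        out = out * 10 + int(ch)
    return out

# The digest printed earlier for "M" . "PQcUv" starts "993332d3...":
assert leading_decimal('993332d3c4fa8b3839761ca4dd480f7b') == 993332

# A digest starting with a hex letter yields 0 (the newline row above):
assert leading_decimal('c697df2fcf84f10272369ee4482e5c1c') == 0

# Recomputed from scratch with hashlib:
print(leading_decimal(hashlib.md5(b'MPQcUv').hexdigest()))
```

The C version gets the same prefix straight from mdContext.A and mdContext.B by reading each 32-bit word's nibbles in hex-string order, which is why it never needs to format the digest as text.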
https://www.perlmonks.org/?node_id=762180
28 April 2010 17:38 [Source: ICIS news] By Nigel Davis LONDON (ICIS news)--A string of chemical company results on Wednesday pointed to strengthening markets in the first quarter, strong volume gains and price rises. Dow, the largest US-based chemicals maker, and others revealed figures that reflected stronger growth in developed world markets and clear year-on-year and quarter-to-quarter gains. Dow’s volumes were up 16% year on year in the quarter. Given the continued emphasis on costs at Dow, the top-line gains translated into improved operating earnings and a much healthier net result. Dow has been through a period of divestments as it has sought to pay down debt accrued when it bought speciality materials maker Rohm and Haas. Its underlying results, however, remain strong and equity earnings have powered ahead. The global operating rate was 83% in the quarter, up 7 percentage points from the fourth quarter of 2009. Dow CEO Andrew Liveris was clearly pleased with the results, but rightly cautioned about the impact on the Dow businesses of still depressed residential and commercial construction in the developed economies, inflation concerns in emerging countries and sovereign debt issues in southern Worries about the “Overall, the global economic environment is on a stronger footing and there are signs that this will continue for the foreseeable future,” he said, adding: “This is good news for Dow”. The point is also that it is good news for all chemical companies. Buoyed by growth in Asia - The American Chemistry Council’s weekly economic updates have charted growing confidence in chemicals demand growth in the The sales and profits gains made by Dow, and others, in the quarter are not surprising given the state of the sector a year ago: the most recent reporting period is compared with the most depressed in decades.
The sequential gains, however, are welcome and it is they that have helped push company earnings above consensus - well above in some cases. Dow, Shell, DSM, Rhodia, Praxair, Merck KGaA and Croda reported on Wednesday. Their early-in-the-day releases followed news from Asia players such as Japan’s Showa Denko and major producers in China such as CNOOC and PetroChina, which demonstrated clear first-quarter gains. The Dow results show that it was the company’s performance businesses that were leading the way in the quarter, not so much the basics or commodity-type operations as might be expected. Dow’s electronics and speciality materials businesses say volumes improved by 31% year on year in the quarter. Performance Systems and Products volumes were both 27% higher. Basic plastics volumes were up only 5% but prices were 44% higher. The downstream product gains particularly suggest that the manufacturing economy is improving. Gains are being made in A striking statistic from the Merck KGaA results was the 82% increase in sales of the company’s liquid crystals. The company said it was operating at full capacity for these products. The export pull was a significant factor for companies such as Dow in the first quarter. Liveris said in a conference call that 20% of the company’s “There is demand pull going on,” he added, “not just re-stocking.” That is encouraging and suggests a return to growth that is at long last driven, given Dow’s other comments, by increased consumer confidence.
http://www.icis.com/Articles/2010/04/28/9354761/insight-asia-europe-and-us-firms-report-strong-volume.html
DOS related FAQ DOS The FreeBASIC port to DOS is based on the DJGPP port of the GNU toolchain to 32-bit protected-mode DOS. The current maintainer of this port is DrV. To be written: platform-specific information, differences from Win32/Linux, differences from QB?, tutorials, etc. WANTED TESTERS The DOS version/target of FreeBASIC needs more testers. If you are interested in using FreeBASIC on DOS, please don't wait for future releases, give it a try now. Tests from running in DOS on both old and new PC's are welcome (graphics, file I/O, serial port, ...). If something doesn't work, please place a detailed bug report into the forum or bug Tracker. If all works well, you can write about your success as well. Make sure to test a recent version of FB (reports from FB older than 0.90 will be probably considered as obsolete and useless), and check this document before complaining about anything. Limitations The DOS target is fairly well working and supported by FreeBASIC, and up-to-date. A few differences compared to other platforms exist, however. The features missing are mostly those not supported by the operating system or DOS extender or C runtime: - Cross-compiling to an other target - Multithreading (see FAQ 23) - Graphics in windowed mode or using OpenGL - Setting Screenres to a size not matching any resolution supported by the graphics card - Unicode isn't supported in DOS, Wstring will be the same as Zstring, character sets other than latin aren't supported. (do it yourself) - Shared libraries (DLL's) can't be created/used (at least not "easily"), amount of available static external libraries usable with DOS is limited FreeBASIC DOS related questions: - 1. FB is a 32-bit compiler - do I need a 32-bit DOS? - 2. What about FreeDOS-32? Does/will FB work, is/will there be a version? - 3. When running FreeBASIC in DOS, I get a 'Error: No DPMI' message! - 4. Is there a possibility how to get rid of this CWSDPMI.EXE and CWSDPMI.SWP? - 5. 
Can I use other DOS extenders, like DOS/4GW, Causeway, DOS/32A? - 6. Where is the nice blue screen with all the ... / where is the IDE? - 7. How can I view the documentation in CHM or PDF format in DOS? - 8. How can I write/edit my source code? - 9. How can I play sound in DOS? - 10. How can I use USB in DOS? - 11. How can I use graphics in DOS? - 12. DEF SEG is missing in FB! How can I workaround this in my code? - 13. How can I rewrite QB's CALL INTERRUPT / access the DOS and BIOS interrupts? - 14. How can I rewrite QB's XMS/EMS handling? - 15. FBC gives me a 'cannot find lsupcxx' error! - 16. How can I use the serial or parallel port? - 17. How can I use a printer? - 18. How can I make a screenshot of a FreeBASIC program running in DOS? - 19. Graphics mode doesn't work (freeze / black screen / garbage output)! - 20. Mouse trouble! Mouse doesn't work at all in DOS / arrow 'jumps' / etc. ... - 21. What about the 64 KiB and 640 KiB problems / how much memory is supported by FB in DOS? - 22. My program crashes when I try to use more than cca 1 MiB RAM! Is this a bug in FreeBASIC? - 23. Threading functions are disallowed in DOS? Help! - 24. Executables made with FB DOS are bloated! - 25. Compilation is very slow with FB! - 26. SLEEP doesn't work! How can I cause a delay? - 27. The performance is very bad in DOS! - 28. Can I access disk sectors with FB? - 29. Can I use inline ASM with advanced instructions like SSE in DOS ? See also Back to Table of Contents FreeBASIC DOS related questions 1. FB is a 32-bit compiler - do I need a 32-bit DOS? No, the DOS version of FreeBASIC uses a DOS extender, allowing you to execute 32-bit code on top of a 16 bit DOS kernel. You can use FreeDOS (16-bit), Enhanced-Dr-DOS, old closed Dr-DOS, or even MS-DOS down to version cca 4. You need at least 80386 CPU, see also Requirements. 2. What about FreeDOS-32? Does/will FB work, is/will there be a version? 
FreeDOS-32 is experimental at time of writing, but it should execute FreeBASIC and applications generated by it with no change. While FB DOS support already works on FreeDOS (16), it should be ready for FreeDOS-32 as well. 3. When running FreeBASIC in DOS, I get a 'Error: No DPMI' message! You need a DPMI host (DPMI kernel, DPMI server), means the file "CWSDPMI.EXE" (cca 20 KiB) or HDPMI32.EXE (cca 34 KiB). See requirements, and FAQ 4 for more details. 4. Is there a possibility how to get rid of this CWSDPMI.EXE and CWSDPMI.SWP? Yes, 2 possibilities. To get rid of CWSDPMI.EXE and create a standalone DOS executable embedding CWSDPMI, you need the CWSDPMI package and the "EXE2COFF.EXE" file. Using EXE2COFF, you remove the CWSDPMI.EXE loader (file loses 2 KiB of size, resulting in a "COFF" file without extension), and then glue the file "CWSDSTUB.EXE" before this COFF. The new executable is cca 21 KiB bigger than the original one, but it is standalone, no additional files are needed. To get rid of CWSDPMI.SWP, you can then edit your executable with CWSPARAM.EXE, and disable the swapping (occasionally also - incorrectly - referred as paging). Note, however, that this will limit the memory that can be allocated to the amount of physical memory that is installed in a system. This work can be done both with the FBC.EXE file and all executables created by FBC. The method is also described in the CWSDPMI docs in the package. Alternatively, you can use the WDOSX or D3X extender. They don't swap and create standalone executables. Since they run your executable in Ring 0, the crash handling of them is not very good and can cause freezers or reboots on bugs where other hosts exit the "civil" way with a register dump. Also, spawning might not work well / at all with WDOSX or D3X. Finally, you can use HDPMI . 
Download the "HXRT.ZIP" file (here: japheth.de/HX.html alternative links), extract "HDPMI32.EXE" (cca 34 KiB) and "HDPMI.TXT" (not required by the code, just for your information), and include it to your DOS startup ("HDPMI32 -r"). This will make HDPMI resident and prevent all FreeBASIC (also FreePASCAL and DJGPP) programs from both crying about missing DPMI and swapping. HDPMI can not (easily / yet) be included into your executables. Running an executable containing D3X, CWSDPMI or some DPMI host inside under HDPMI or other external host is fine - the built-in host will be simply skipped. Using DPMI is definitely required for FreeBASIC, since it can't generate 16-bit real mode code, and there is no other good way to execute 32-bit code in DOS. 5. Can I use other DOS extenders, like DOS/4GW, Causeway, DOS/32A? Not any extender around. So-called WATCOM-like extenders can't be used because of important differences in memory management and executable structure. WDOSX and D3X do work, since they are a multi-standard extenders, not only WATCOM-like. You also can use PMODE/DJ (not "original" Tran's PMODE, nor PMODE/W (!), saves cca 5 KiB compared to CWSDPMI, can be included into the EXE, but might affect stability or performance) or, as aforementioned, HDPMI. 6. Where is the nice blue screen with all the ... / where is the IDE? The FreeBASIC project focuses on the compiler, generating the executables from your BAS sources. It looks unspectacular, but is most important for the quality of software developed by you. The project does not include an IDE. There are several external IDEs for FreeBASIC, but probably none does have a DOS version by now. If you really need one, you could try Rhide, but note that it is complicated and buggy, so use it at your own risk. See also FAQ 7 and 8. 7. How can I view the documentation in CHM or PDF format in DOS? There is no good way to view CHM or PDF files in DOS by now. But you can view the FreeBASIC documentation nevertheless. 
One of the FreeBASIC developers, coderJeff, provides a FreeBASIC documentation viewer with the docs included in a special format, which also has a DOS version. It looks similar to QB's built-in help viewer, but does not contain an editor or IDE. Download here:

8. How can I write/edit my source code?

There are many editors for DOS around, but only a few of them are good. Some possibilities are FreeDOS EDIT (use version 0.7d (!!) or 0.9; 64 KiB limit, suboptimal stability (save your work regularly)), SETEDIT, and INFOPAD (comes with the CC386 compiler, can also edit big texts, has syntax highlighting for C and ASM, but not for BASIC).

9. How can I play sound in DOS?

There are two ways to play sound in DOS: either the ("archaic") PC speaker, famous for beeping when something goes wrong, or a sound card.

The speaker is easy to control and allows more than one might think, even playing audio files (WAV; with decompression code also OGG Vorbis, MP3 etc.). You can easily re-use most existing QB code (example: o-bizz.de/qb...speaker.zip) or ASM code via inline ASM, but it provides only one channel and 6 bits, and of course significantly worse quality than a sound card; on some of the newest (P4) PCs the speaker quality is very bad or there is no speaker at all.

For old ISA sound cards, there is much example code around. A newer PCI sound card can be accessed (assuming bare DOS here) either using an (SB16-compatible "emulation") driver, if one is available for your card (unfortunately this is becoming more and more of a problem; the DOS drivers are poor or even nonexistent), or by accessing the card directly (this is low-level, hardware-related programming; assembler is also needed, and you need technical docs about the card). There are a few sources of inspiration, like the DOS audio player MPXPLAY (written in C with some ASM), which supports both methods (native + "emu" drivers); see an up-to-date list here: drdos.org/...wiki...SoundCardChip.
Sound support in DOS is not the business of the FB DOS port; actually FB doesn't "support" sound on Win32 and Linux either - games "connect to the API" rather than use FreeBASIC commands or libraries. To play compressed files (MP3, OGG Vorbis, FLAC, ...), you additionally need the decompression code; existing DJGPP ports of those libraries should be usable for this.

10. How can I use USB in DOS?

Again, not the business of FB; you need a driver. FB doesn't "support" USB on Win32 or Linux either. See another wiki: drdos.org/...wiki...USB about the possibilities of USB usage in DOS.

11. How can I use graphics in DOS?

GUI or graphics in DOS is definitely possible; there are several approaches.

Note that some graphics cards report limited features through VESA, most notably less memory (for example 8 MiB instead of 64 MiB) or fewer modes (for example only 24 bpp modes visible while 32 bpp modes are hidden, only lower resolutions visible (up to about 1280x1024) while higher ones are hidden, only "4:3" modes visible while "wide" modes are hidden). This is a problem of the card, not of DOS or FreeBASIC. You will see the additional features in systems other than DOS, or in DOS only with hardware detection tools that go to the lowest level, bypassing VESA.

- Use the FB graphics library. It uses VESA (preferably linear, but banked is also supported) to access the video card and supports any resolution reported by the card's VESA VBE driver, in addition to standard VGA modes. Note: preferably use FB version 0.20 or newer; the FB DOS graphics does not work as well on 0.17, and does not work at all in previous releases.
- VGA mode 320x200x8bpp: very simple, maximum reliability and compatibility, but low resolution and 256 colours only; see example.
- VGA "ModeX" 320x240x8bpp: similar to the above, less easy, good reliability and compatibility, but low resolution and 256 colours only; see example.
- VGA "planar" mode 640x480x4bpp: difficult to set pixels, maximum reliability and compatibility, but low resolution and 16 colours only; no public example yet (?).
- Some other "odd" VGA "ModeX" modes (like 360x240x8bpp): possible, but for freaks only ;-)
- Write your own VESA code: more difficult, good compatibility, high-res and true color possible; there might be reliability problems if not implemented carefully.
- Use an external library (DUGL, Allegro, MGL, wxWidgets): allows creating "expensive" graphics and GUIs, bloats the EXE size, you need to respect the library license, and there is a potential loss of reliability.

12. DEF SEG is missing in FB! How can I work around this in my code?

DEF SEG is related to 16-bit real-mode addressing and was removed because of this. "Direct" access to VGA or other low memory areas is not possible, because FreeBASIC's memory model (the same as DJGPP's) is not zero-based. For accessing low DOS memory, use DOSMEMGET and DOSMEMPUT (see the "vga13h.bas" example), or the "_dos_ds" selector for inline ASM, see example: '' DOS only example of inline ASM accessing low memory ''

13. How can I rewrite QB's CALL INTERRUPT / access the DOS and BIOS interrupts?

Those interrupts can be accessed only using the DOS version/target of FB. The access to interrupts is slower than in QB: with FB the DPMI host has to do two context switches, going to real mode and coming back. All of that eats hundreds of clocks in raw DOS and thousands of clocks if EMM386 is loaded or inside a Windows DOS box. The slowdown might be negligible or relevant, it depends. You should try to minimize the number of such calls, and process more data per call - at least several KiB, not just one byte or a few bytes.
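As a minimal illustration of the DOSMEMGET approach from FAQ 12 above, the following sketch (DOS target only; it assumes FB's DJGPP-style DosMemGet(linear_address, bytes, buffer) binding) reads the 32-bit BIOS tick counter that lives at real-mode address 0040:006Ch:

```basic
'' Sketch, DOS target only: read low DOS memory without DEF SEG.
'' The linear address of segment:offset is segment * 16 + offset.
Dim ticks As UInteger

'' 0040:006C -> linear &h46C; copy 4 bytes into our variable
DosMemGet(&h40 * 16 + &h6C, 4, @ticks)
Print "BIOS ticks since midnight: "; ticks
```

The same idea with DosMemPut writes low memory, for example directly into VGA text memory at B800:0000.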
Use DJGPP's DPMI wrapper:

    #include "dos/dpmi.bi"
    Type RegTypeX As __dpmi_regs
    #define INTERRUPTX(v,r) __dpmi_int( v, @r )

Alternatively you can call INTs via inline ASM. Two important things you have to take care of are the fact that FB's memory model is not zero-based (see also FAQ 12, "DEF SEG" issues), and additionally that "direct" passing of addresses (like DS:[E]DX) to an INT will not work unless you have a DPMI host with "DOS API translation".

14. How can I rewrite QB's XMS/EMS handling?

It depends on why the original code uses it. If it's just to bypass low memory limits, simply remove it and use "ordinary" FB data types / memory handling features instead. If it is used for (sound) DMA, you are out of luck and have to redesign the code completely; about sound see FAQ 9. For DMA, preferably use low memory (this should be no big problem, since the application code and most buffers are in DPMI memory anyway); DMA in DPMI memory is possible but more difficult.

15. FBC gives me a 'cannot find lsupcxx' error!

The source of this problem is the libsupcxx.a file in the LIB\DOS\ directory, which has 9 characters in its name. The cause is extracting the ZIP with long file names enabled, usually in Windows, and then using FB in DOS with no LFN support, so the file appears as LIBSUP~1.A and can't be found. Rename the file to LIBSUPCX.A (one X only) or extract the ZIP again in DOS. Note: this changed in FB 0.18, retest needed.

16. How can I use the serial or parallel port?

The DOS INT14 is not very useful/efficient, as it sends/reads a single char in each call. So it's better to use an external DOS32 comms library. FB up to 0.18.2 doesn't support OPEN COM on the DOS target; coderJeff has an experimental library/driver available, included with FB since 0.18.3.

17. How can I use a printer?
The DOS kernel won't help you here, so you have to prepare the text (trivial) or pixel data (acceptably easy for printers compatible with the "ESC/P" standard) yourself and send it to the printer via the parallel port, or via USB using an additional driver (see FAQ 10). So-called "GDI" or "Windows" printers can't be made to work in DOS with reasonable effort.

18. How can I make a screenshot of a FreeBASIC program running in DOS?

Ideally, include this feature in your own code. DOS TSR-based screenshotters like SNARF mostly work with text-based screens, but probably none of them works with FreeBASIC's GFX library. It's not really a bug on either side; it's a problem "by design".

19. Graphics mode doesn't work (freeze / black screen / garbage output)!

Place a bug report in the forum. To make it as useful and productive as possible, please note the following, follow the given steps, and provide all related information. RayeR's VESATEST and CPUID can be downloaded here: rayer.ic.cz/programm/programe.htm , VBEDIAG here: drv.nu/vbediag/.

- Check the limitations listed on the GfxLib page. The graphics might not work well, or at all, on very old PCs. If your CPU has less than about 500 MHz, provide exact info about it; if you don't know, use RayeR's CPUID or a similar program to test.
- Exact info about your graphics card is needed. Test in DOS using DrV's VBEDIAG (reports info only) and RayeR's VESATEST (also tries to set modes, allowing visual inspection of the result). Find out which "useful" modes (640x480, 800x600) are supported and with what bitdepths (8, 16, 24, 32 bpp), and whether they can be set and look correct.
- Find out and describe exactly what's wrong ("mode works with VESATEST but not with FB", "no graphics but no error either", "black screen and freeze", "graphics is messy/incomplete", ...).
- If some sophisticated program doesn't work, also try a minimal test, like placing a circle in the middle of the screen.
- Try without a mouse driver (this reduces the CPU "cost").
- Find out which modes are affected. If a mode doesn't work, reduce the resolution or bitdepth; make sure to test the "cheapest"/safest modes: 640x480 with 32/24/16/8 bpp, 640x480 with 4 bpp, and 320x200 with 8 bpp.
- For some old cards there are VESA drivers available (S3VBE/UNIVBE). Test both with and without, and include this info in your report.
- Remove potentially problematic content (memory managers, drivers) from your DOS startup files. None of that is required for FB, except a DPMI host (see also FAQ 4).
- Post info about your graphics card, CPU (if old), DOS type and version, the bug symptoms, and a simple example code.

20. Mouse trouble! Mouse doesn't work at all in DOS / arrow 'jumps' / etc. ...

To use a mouse in DOS, you need a compatible driver that recognizes your mouse and is recognized by the FreeBASIC library. For optimal results, you need a good driver and a suitable mouse.

Mouse: the optimal choice, and pretty well available nowadays, is a PS/2 mouse. The old type would be a serial mouse; this one should work too. The newest is the USB mouse - but it is not very suitable for use in DOS, since it needs a compatible (INT33) high-quality native USB mouse driver (none available by now, only some experimental ones), or has to rely on BIOS emulation (not always available, or "imprecise").

Driver: the preferred choice is CTMOUSE from the FreeDOS project. There are versions 1.9a1, 2.0a4, and 2.1b4 from 2008-July available. It is included with (but not limited to) FreeDOS, or download a version from here: ibiblio.org/pub/...mouse . None of them is perfect, but they are still well usable and better than most competitors. 1.9xx and 2.1xx cooperate with the BIOS, allowing USB emulation; 2.0xx bypasses the BIOS, and thus USB emulation will NOT work. Logitech mouse drivers also usually do a good job; download from here: uwe-sieber.de/util_e.html - version 6.50 is a good start. Known for problems are DRMOUSE and some (old?)
versions of MSMOUSE. If the mouse does not work at all, then most likely the driver is not loaded, doesn't recognize the mouse (see the driver messages), or is not compatible with the INT33 "standard". For a USB mouse, activating "USB mouse emulation" in the BIOS settings can help. If the mouse control is "imprecise" and the arrow "jumps", then you either have a bad driver - use a better one - or the BIOS emulation is bad - the solution is then to buy a PS/2 mouse.

21. What.

22. My program crashes when I try to use more than about 1 MiB RAM! Is this a bug in FreeBASIC?

No, it's not a bug in FreeBASIC, and it's not really DOS-specific; see also the Compiler FAQ. For a beginner, the easy solution is to use Shared for arrays. More advanced users could consider using memory management functions like Allocate. This is even more important in DOS, since it allows the application to run on (old) PCs with little memory (and still edit at least small texts, for example), as well as to use all the RAM if plenty is available (and edit huge texts, for example).

23. Threading functions are disallowed in DOS? Help!

The Threading Support Functions are not supported for the DOS target, and most likely won't be soon, or ever. The reason is simple: neither the DOS kernel, nor the DPMI host/standard, nor the "GO32" DOS extender supports threading, unlike the Win32 or Linux kernels. However, nothing is impossible in DOS: you can set up your own threading on top of DPMI. There are multiple possibilities, two of which are:

- Set up an ISR, see the "ISR_TIMER.BAS" example. This is not a "full" replacement, but sufficient in some cases.
- There is a pthreads library for DJGPP allowing Linux-like threading to be "emulated" to some degree. It works acceptably for the [P]7-ZIP DJGPP port (written in C++); no tests with FB yet.
- See forum t=21274

24. Executables made with FB DOS are bloated!

This is true, but there is no easy/fast way to fix it. FB is a 32-bit HLL compiler, and most of the size is imported from DJGPP. !writeme!
(see forum: t=11757)

25. Compilation is very slow with FB!

Problem: "FBC takes 10 seconds to compile a 'Hello world' program! TurboBASIC / QBASIC / VBDOS / PowerBASIC take < 1 second for the same job ..."

True, but this is "by design": FB compiles your sources in 3 steps, saving the intermediate files, as described in CompilerCmdLine, while many older compilers do just 1 pass in memory. This is mostly related to file I/O performance; see FAQ 27 below about possible improvements. Additionally, a small improvement can be achieved by making the DPMI host resident (HDPMI32 -r or CWSDPMI -p; see FAQ 4 above). Note that the delay is mostly "additive", so it won't hurt too much with bigger projects.

26. SLEEP doesn't work! How can I cause a delay?

Sleep does work ... but it has a resolution of only about 55 ms = 1/18 s; thus "SLEEP 500" is fine, while for example using "SLEEP 2" for 2 milliseconds won't work. !writeme! / !fixme!

- PIT / BIOS timer (runs at 18.2 Hz by default): peek the BIOS timer or set up your own, see the "ISR_TIMER.BAS" example; raise the PIT frequency (use with care)
- Poll the BIOS timer + PIT counter: the method from TIMERHLP.ASM from DKRNL32; allows enhancing the precision of the above without raising the PIT frequency
- RDTSC instruction (Pentium and newer)
- RTC clock
- Delay loops

27. The performance is very bad in DOS!

Problem: "The performance in DOS is poor compared to a Win32 / Linux binary compiled from the very same source!" or "Even worse, the very same DOS binary runs much faster in NTVDM than in DOS!"

Both can indeed happen; nevertheless, DOS is in no way predestined to be slow - the inefficiencies can be fixed. First you have to identify the area where your code loses performance.

File I/O: DOS by default uses very little memory for its buffers, while other systems use much more and are "aggressive" with file caching. When dealing with many small files, this results in serious performance degradation.
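As a sketch of the application-side buffering recommended under FAQ 27 (the file name and chunk size are placeholders, not from the original text):

```basic
'' Sketch: read a file in 16 KiB blocks instead of byte by byte.
'' On DOS every file I/O call carries a fixed "additive" overhead,
'' so fewer, larger transfers are much faster. MYDATA.BIN is a
'' placeholder name.
Const CHUNK = 16 * 1024
Dim buffer(0 To CHUNK - 1) As UByte
Dim As Integer f = FreeFile

If Open("MYDATA.BIN" For Binary Access Read As #f) = 0 Then
    Do Until Eof(f)
        '' one Get call transfers up to CHUNK bytes
        Get #f, , buffer()
        '' ... process the block here ...
    Loop
    Close #f
End If
```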
The solution is to install a file cache, for example LBACache, or you can install a RAM disk (a good one: SRDISK) and copy the "offending" files (for example the FreeBASIC installation) there and work there (make sure to back up your work to more durable media regularly). Both need an XMS host (use HIMEMX).

Also, DOS by default uses the BIOS to access the hard drives, while other systems try hard to find and use DMA. Test utility: IDECHECK by Japheth (download: japheth.de/Download/IDECheck.zip) - run it in "I13" and "DMA" modes and compare the results. If "DMA" is much faster (it can be 1...10 times, depending on the PC model), then installing a DOS DMA driver (for example XDMA 3.1 is worth trying) can bring a big speedup on large files.

Also make sure to read and write data in large pieces (16 KiB at least), not just single bytes. Other OSes are more forgiving here, but on DOS every single file I/O call causes a small "additive" delay, so an efficient code design with good buffering is crucial.

Graphics: Pentium 2 and newer CPUs have a cache-related feature called "MTRR" to speed up writes to video RAM. Drivers on other OSes usually enable it. DOS doesn't (since it doesn't deal with graphics at all), and neither does FB GFX. Use the "VESAMTRR" tool by Japheth (contained in the "HXGUI.ZIP" package); it enables the speedup, surviving mode switches and most "non-fatal" application crashes, up to a reboot. The possible speedup factor varies greatly depending on the PC model, up to about 20 times.

Also, the mouse handling eats some (too much) CPU performance on DOS. This is a known weak point (the design of DOS FB GFX is not "very bad", it's just the common "standard" - which is not very good); fixing it is theoretically possible but not easy. You can just try several mouse drivers (see FAQ 20).

28. Can I access disk sectors with FB?

You can ... but FreeBASIC won't help you much here: there is no "portable" solution; use the OS-specific low-level way.
For DOS, three methods are possible. Note that such experiments are a bit "dangerous" - you can easily lose data or make your PC unbootable if something goes wrong.

- Use the logical disk access features of DOS for sector access bypassing the filesystem, see the example in the forum: freebasic.net/forum/viewtopic.php?t=11830
- Use the physical disk BIOS INT 13, bypassing DOS
- Use CPU ports, the lowest level, bypassing both DOS and BIOS; see forum freebasic.net/forum/viewtopic.php?t=16196, the source of IDECHECK from FAQ 27 above, the FASM forum, or some OS development resources

29. Can I use inline ASM with advanced instructions like SSE in DOS?

You can ... but SSE2 and above need to be enabled first. This is usually considered the business of the DPMI host; HDPMI32 and CWSDPMI 7 will do it, most other hosts won't. Make sure to properly check CPUID for such instructions before using them. It's a good idea to provide a code branch compatible with older CPUs (early Pentium, 80386) besides supporting the latest instructions, and to avoid CMOV in that branch too.

See also: Back to Table of Contents
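The CPUID check recommended in FAQ 29 might be sketched like this (the bit positions are from Intel's CPUID documentation: EDX bit 15 = CMOV, bit 26 = SSE2; the sketch assumes a CPU that has the CPUID instruction at all, i.e. a late 486 or newer):

```basic
'' Sketch: test for CMOV and SSE2 before using them in inline ASM.
Dim As UInteger featEdx

Asm
    push ebx          '' cpuid clobbers ebx; preserve it
    mov eax, 1        '' leaf 1: feature information
    cpuid
    mov [featEdx], edx
    pop ebx
End Asm

Dim As Integer hasCmov = (featEdx Shr 15) And 1
Dim As Integer hasSse2 = (featEdx Shr 26) And 1
Print "CMOV: "; hasCmov, "SSE2: "; hasSse2
```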
https://www.freebasic.net/wiki/wikka.php?wakka=FaqDOS
SPECIAL: to create special "state-smart" words ( -- wordlist-map )
    ' new-locals-find A, ' new-locals-reveal A,

vocabulary new-locals
new-locals-map ' new-locals >body cell+ A! \ !! use special access words

variable old-dpp

\ and now, finally, the user interface words
: { ( -- addr wid 0 ) \ gforth open-brace
    dp old-dpp !
    locals-dp dpp !
    also new-locals
    also get-current locals definitions locals-types
    0 TO locals-wordlist
    0 postpone [ ; immediate

locals-types definitions

: } ( addr wid 0 a-addr1 xt1 ... -- ) \ gforth close-brace
    \ ends locals definitions
    ] old-dpp @ dpp !
    begin
        dup
    while
        execute
    repeat
    drop
    locals-size @ alignlp-f locals-size ! \ the strictest alignment
    set-current
    previous previous
    locals-list TO locals-wordlist ;

: -- ( addr wid 0 ... -- ) \ gforth dash-dash
    }
    [char] } parse 2drop ;

forth definitions

\ A few thoughts on automatic scopes for locals and how they can be
\ implemented:

\ We have to combine locals with the control structures. My basic idea
\ was to start the life of a local at the declaration point. The life
\ would end at any control flow join (THEN, BEGIN etc.) where the local
\ is not live on both input flows (note that the local can still live in
\ other, later parts of the control flow). This would make a local live
\ as long as you expected and sometimes longer (e.g. a local declared in
\ a BEGIN..UNTIL loop would still live after the UNTIL).

\ The following example illustrates the problems of this approach:

\ { z }
\ if
\   { x }
\ begin
\   { y }
\ [ 1 cs-roll ] then
\ ...
\ until

\ x lives only until the BEGIN, but the compiler does not know this
\ until it compiles the UNTIL (it can deduce it at the THEN, because at
\ that point x lives in no thread, but that does not help much). This is
\ solved by optimistically assuming at the BEGIN that x lives, but
\ warning at the UNTIL that it does not. The user is then responsible
\ for checking that x is only used where it lives.

\ The produced code might look like this (leaving out alignment code):

\ >l ( z )
\ ?branch <then>
\ >l ( x )
\ <begin>:
\ >l ( y )
\ lp+!# 8 ( RIP: x,y )
\ <then>:
\ ...
\ lp+!# -4 ( adjust lp to <begin> state )
\ ?branch <begin>
\ lp+!# 4 ( undo adjust )

\ The BEGIN problem also has another incarnation:

\ AHEAD
\ BEGIN
\ x
\ [ 1 CS-ROLL ] THEN
\ { x }
\ ...
\ UNTIL

\ should be legal: The BEGIN is not a control flow join in this case,
\ since it cannot be entered from the top; therefore the definition of x
\ dominates the use. But the compiler processes the use first, and since
\ it does not look ahead to notice the definition, it will complain
\ about it. Here's another variation of this problem:

\ IF
\   { x }
\ ELSE
\ ...
\ AHEAD
\ BEGIN
\ x
\ [ 2 CS-ROLL ] THEN
\ ...
\ UNTIL

\ In this case x is defined before the use, and the definition dominates
\ the use, but the compiler does not know this until it processes the
\ UNTIL. So what should the compiler assume does live at the BEGIN, if
\ the BEGIN is not a control flow join? The safest assumption would be
\ the intersection of all locals lists on the control flow
\ stack. However, our compiler assumes that the same variables are live
\ as on the top of the control flow stack.
This covers the following case:

\ { x }
\ AHEAD
\ BEGIN
\ x
\ [ 1 CS-ROLL ] THEN
\ ...
\ UNTIL

\ If this assumption is too optimistic, the compiler will warn the user.

\ Implementation: migrated to kernal.fs

\ THEN (another control flow from before joins the current one):
\ The new locals-list is the intersection of the current locals-list and
\ the orig-local-list. The new locals-size is the (alignment-adjusted)
\ size of the new locals-list. The following code is generated:
\ lp+!# (current-locals-size - orig-locals-size)
\ <then>:
\ lp+!# (orig-locals-size - new-locals-size)

\ Of course "lp+!# 0" is not generated. Still this is admittedly a bit
\ inefficient, e.g. if there is a locals declaration between IF and
\ ELSE. However, if ELSE generates an appropriate "lp+!#" before the
\ branch, there will be none after the target <then>.

\ explicit scoping

: scope ( compilation -- scope ; run-time -- ) \ gforth
    cs-push-part scopestart ; immediate

: endscope ( compilation scope -- ; run-time -- ) \ gforth
    scope?
    drop
    locals-list @ common-list
    dup list-size adjust-locals-size
    locals-list ! ; immediate

\ adapt the hooks

: locals-:-hook ( sys -- sys addr xt n )
    \ addr is the nfa of the defined word, xt its xt
    DEFERS :-hook
    last @ lastcfa @
    clear-leave-stack
    0 locals-size !
    locals-buffer locals-dp !
    0 locals-list !
    dead-code off
    defstart ;

: locals-;-hook ( sys addr xt sys -- sys )
    def?
    0 TO locals-wordlist
    0 adjust-locals-size ( not every def ends with an exit )
    lastcfa ! last !
    DEFERS ;-hook ;

' locals-:-hook IS :-hook
' locals-;-hook IS ;-hook

\ The words in the locals dictionary space are not deleted until the end
\ of the current word.
This is a bit too conservative, but very simple.

\ There are a few cases to consider: (see above)

\ after AGAIN, AHEAD, EXIT (the current control flow is dead):
\ We have to special-case the above cases against that. In this case the
\ things above are not control flow joins. Everything should be taken
\ over from the live flow. No lp+!# is generated.

\ !! The lp gymnastics for UNTIL are also a real problem: locals cannot be
\ used in signal handlers (or anything else that may be called while
\ locals live beyond the lp) without changing the locals stack.

\ About warning against uses of dead locals. There are several options:

\ 1) Do not complain (After all, this is Forth;-)

\ 2) Additional restrictions can be imposed so that the situation cannot
\ arise; the programmer would have to introduce explicit scoping
\ declarations in cases like the above one. I.e., complain if there are
\ locals that are live before the BEGIN but not before the corresponding
\ AGAIN (replace DO etc. for BEGIN and UNTIL etc. for AGAIN).

\ 3) The real thing: i.e. complain, iff a local lives at a BEGIN, is
\ used on a path starting at the BEGIN, and does not live at the
\ corresponding AGAIN. This is somewhat hard to implement. a) How does
\ the compiler know when it is working on a path starting at a BEGIN
\ (consider "{ x } if begin [ 1 cs-roll ] else x endif again")? b) How
\ is the usage info stored?

\ For now I'll resort to alternative 2. When it produces warnings they
\ will often be spurious, but warnings should be rare. And better
\ spurious warnings now and then than days of bug-searching.
\ Explicit scoping of locals is implemented by cs-pushing the current
\ locals-list and -size (and an unused cell, to make the size equal to
\ the other entries) at the start of the scope, and restoring them at
\ the end of the scope to the intersection, like THEN does.


\ And here's finally the ANS standard stuff

: (local) ( addr u -- ) \ local paren-local-paren
    \ a little space-inefficient, but well deserved ;-)
    \ In exchange, there are no restrictions whatsoever on using (local)
    \ as long as you use it in a definition
    dup
    if
        nextname POSTPONE { [ also locals-types ] W: } [ previous ]
    else
        2drop
    endif ;

: >definer ( xt -- definer )
    \ this gives a unique identifier for the way the xt was defined
    \ words defined with different does>-codes have different definers
    \ the definer can be used for comparison and in definer!
    dup >code-address [ ' spaces >code-address ] Literal =
    \ !! this definition will not work on some implementations for `bits'
    if \ if >code-address delivers the same value for all does>-def'd words
        >does-code 1 or \ bit 0 marks special treatment for does codes
    else
        >code-address
    then ;

: definer! ( definer xt -- )
    \ gives the word represented by xt the behaviour associated with definer
    over 1 and if
        swap [ 1 invert ] literal and does-code!
    else
        code-address!
    then ;

:noname
    ' dup >definer [ ' locals-wordlist >definer ] literal =
    if
        >body !
    else
        -&32 throw
    endif ;
:noname
    0 0 0. 0.0e0 { c: clocal w: wlocal d: dlocal f: flocal }
    ' dup >definer
    case
        [ ' locals-wordlist >definer ] literal \ value
        OF >body POSTPONE Aliteral POSTPONE ! ENDOF
        [ ' clocal >definer ] literal
        OF POSTPONE laddr# >body @ lp-offset, POSTPONE c!
ENDOF
        [ ' wlocal >definer ] literal
        OF POSTPONE laddr# >body @ lp-offset, POSTPONE ! ENDOF
        [ ' dlocal >definer ] literal
        OF POSTPONE laddr# >body @ lp-offset, POSTPONE 2! ENDOF
        [ ' flocal >definer ] literal
        OF POSTPONE laddr# >body @ lp-offset, POSTPONE f! ENDOF
        -&32 throw
    endcase ;
special: TO ( c|w|d|r "name" -- ) \ core-ext,local

: locals|
    \ don't use 'locals|'! use '{'! A portable and free '{'
    \ implementation is compat/anslocals.fs
    BEGIN
        name 2dup s" |" compare 0<>
    WHILE
        (local)
    REPEAT
    drop 0 (local) ; immediate restrict
https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/glocals.fs?f=h;only_with_tag=MAIN;ln=1;content-type=text%2Fx-cvsweb-markup;rev=1.23
Richard Stallman <address@hidden> writes: >. It would certainly be simpler, just a little slower as the object-to-pointer conversions are done multiple times, particularly when a function called frequently is changed from taking a pointer argument to taking a lisp object. I've made a few changes along those lines, and will almost certainly make more. > Why can't Guile's symbols be used as Lisp symbols? Actually, the issue is the symbol's value(s). Scheme symbols don't have separate function and value slots. Emacs Lisp symbols have a few other fields associated with them, like indirect_variable, that also don't fit the basic Scheme model. And if we separate the name from the values, there's the question of what exactly a symbol is. There are several ways to deal with this. Some involve changing Guile's symbol representation to have the extra fields. IMNSHO this is tantamount to declaring Scheme not to be up to the task of implementing Lisp, and I'm not prepared to believe that. (That Guile isn't currently up to the task, maybe; we'll find out, and Guile's Scheme implementation can be improved. That Scheme can't do it, no.) It also sets a poor example for people wanting to put other existing languages on top of Guile. Another approach would be to use Scheme symbols, but store as their values vectors containing the various fields of a Lisp symbol. Special accessor functions would be needed for Scheme code to manipulate Lisp variables, and while I don't have a problem with that, the people who want transparent interaction between the two languages probably won't like it. I guess it depends whether you view it as Lisp on top of Scheme, or the two languages working side by side. I'm inclined towards the former view, maybe only because it's easier for an implementor or designer, but I understand the temptation of the latter for such similar languages. But in that case, how do you implement Lisp's "symbolp" when looking at Scheme objects? 
Is a Lisp symbol a Scheme symbol that has a particular kind of object as its value? Should other Scheme symbols be treated as Lisp symbols? If a Scheme symbol has a value, what happens if you do an "fset" or "let" on it in Lisp? (Maybe that's an argument for separate namespaces for Scheme and Lisp symbols, outside of explicitly requested sharing.) Treating Lisp symbols as a new type is a third approach. (Either an opaque type, or a vector as above but also including the symbol's name.) It's what I did before, and it does sidestep a lot of these issues by simply not allowing any sort of confusion between the two symbol types, but it's not as appealing as using Scheme symbols. A related issue is the symbol name -- a Lisp symbol name is a Lisp string, with some of the associated fields ignored for matching purposes. The names themselves can have text properties: ELISP> (symbol-name 'booga-booga) #("booga-booga" 0 3 nil 3 6 (some-prop some-val) 6 11 nil) (I got this by previously interning a string with the same contents and some text properties.) An Emacs Lisp symbol's name can also be changed after interning, such that it won't be found when interning either the old name or (probably) the new name. I don't know what Scheme's specifications might say about such cases. (Much as I like Scheme, I'm hardly an expert; on subtle Scheme issues I'll readily defer to others.) I really hope no one is relying on such weird behavior, but if it needs to be maintained, that's another complication. Because I don't have all the answers, I think it's simplest to plan to treat them as separate types initially, and then merge them if and when we decide a particular approach will work well in all cases we care about, perhaps even before the actual coding starts, but perhaps not; worst case, we change a few Lisp macros to just invoke Guile macros. It's easier than starting coding on the assumption that they'll be the same, and then finding that we have to separate them. Ken
http://lists.gnu.org/archive/html/emacs-devel/2002-07/msg00586.html
Subject: Re: [boost] Is there interest in a library for string-convertible enums?
From: Felix Uhl (felix.uhl_at_[hidden])
Date: 2014-10-29 15:14:50

Rob Stewart wrote:
> You should not add features because someone on this list asked for them.
> If you're not convinced of the utility of something, push back
> to understand the request better.

This sounds like advice I should follow. I tend to forget that I'm not writing for a specific client. Also, I noticed that the library has too much compile-time overhead already, which should be fixed before introducing any additional ground-breaking features.

So I've gone through the features suggested and ordered them by priority. Perhaps the requesters of the following ones could answer my questions about them and specify why they think these features would be a good addition to the library:

By Damien Buhl:

- Adaptation of enums after declaration with something like BOOST_ADAPT_ENUM

Is this really worth the effort? If somebody knows they want to string-convert an enum (or enum class), why would they not define it as one in the first place? Or is this about enumerations that are already defined by other libraries? Would it be ok to instead define a new enum that has the "to adapted" one as its underlying type and can thus be converted to and from it? There are implementational problems with offering both namespace-independent definition of those enums and adaptability.

By Gavin Lambert:

- Specify custom and multiple strings

In which use cases would this actually be beneficial for anyone? Could those use cases be solved by using custom char_traits or even custom string classes?

- Specify different conversion tables for use in different contexts

What do you exactly mean by that? Conversion to different string values? Conversion by different algorithms?

Again, if you could answer those questions, it would really help evaluation.
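For reference, below is a hand-rolled sketch of the kind of conversion boilerplate such a library would generate. This is plain C++ with hypothetical names, not the proposed Boost interface -- just the repetitive mapping the library would automate for an already-declared enum.

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// An enum as it might come from some other library.
enum class color { red, green, blue };

// The boilerplate a string-convertible-enum library would generate:
std::string to_string(color c) {
    switch (c) {
        case color::red:   return "red";
        case color::green: return "green";
        case color::blue:  return "blue";
    }
    throw std::invalid_argument("unknown color value");
}

color color_from_string(const std::string& s) {
    if (s == "red")   return color::red;
    if (s == "green") return color::green;
    if (s == "blue")  return color::blue;
    throw std::invalid_argument("unknown color name: " + s);
}
```

Adapting an enum after its declaration means generating exactly this kind of mapping after the fact, which is where the maintenance and compile-time questions above come from.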
---
Felix

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2014/10/217383.php
Sample source:

public class t {
    public static final int[] x = { 5, 7, 9, 11 };
}

In gcc 3.3 (and 3.4 and earlier than 3.3), this code compiled to something like:

.LJv0.0:
	.long _Jv_intVTable
	.long 4
	.long 5
	.long 7
	.long 9
	.long 11

Now we actually emit code for this. The culprits are a combination of this patch to gcj:

2004-07-08  Richard Henderson  <rth@redhat.com>

	* expr.c (case_identity, get_primitive_array_vtable,
	java_expand_expr, emit_init_test_initialization): Remove.
	* java-tree.h (java_expand_expr): Remove.
	* lang.c (LANG_HOOKS_EXPAND_EXPR): Remove.

and this patch to libgcj (which removed the primitive vtables):

2004-07-23  Bryce McKinlay  <mckinlay@redhat.com>

	* prims.cc (_Jv_InitPrimClass): Don't create an array class.
	(_Jv_CreateJavaVM): Don't pass array vtable parameter to
	_Jv_InitPrimClass.
	(DECLARE_PRIM_TYPE): Don't declare array vtables.
	* include/jvm.h (struct _Jv_ArrayVTable): Removed.
	* java/lang/Class.h (_Jv_InitPrimClass): Update friend declaration.

Confirmed. Some of the code from java_expand_expr should be moved into java_gimplify_new_array_init. And maybe fix up the libjava code, or maybe just store the array's vtable with an explicit store. I might look into doing this tomorrow.

Here are my thoughts about this:

- This optimization only ever worked for source compilation. Bytecode compilers always emit array initializers as code, so for byte compilation it makes no difference.
- I don't see a strong reason not to reference the vtable symbols, but if we decide to remove my patch then we need to be careful that the original bug remains fixed - see:
- Is this optimization really worth worrying about? I'm pretty sure that performance-wise, the difference is insignificant - binary size is what we should be concerned about here. Is a binary that initializes arrays in code significantly larger?
Created attachment 7473 [details]
front-end part

Here is the front-end part, which is just a port of the expand stuff to the gimplifier (it works in the sense that I can see we get the same assembly code as before the merge of tree-ssa and now).

ChangeLog:

	* java-gimplify.c: Include parse.h.
	(get_primitive_array_vtable): New function.
	(java_gimplify_new_array_init): Add staticly allocated arrays.

Note I attached the front-end part, so the only thing left is to fix up libjava to emit the vtables again.

Created attachment 7474 [details]
A new patch which completes porting the expand part to the gimplifier

This new patch adds the other part of what the expand code did for NEW_ARRAY_INIT: arrays with more than 10 elements and constant contents (of primitive type) are copied from a static variable:

ChangeLog:

	* java-gimplify.c: Include parse.h.
	(get_primitive_array_vtable): New function.
	(java_gimplify_new_array_init): Add staticly allocated primitive
	arrays and copy arrays which have more than 10 elements and is
	constant primitives.

I finished up porting the other part of the expand part.

Ada and Java bugs are not release-critical; therefore, I've removed the target milestone.

Removing target milestone per last comment.

Will not be fixed in 4.1.1; adjust target milestone to 4.1.2.

Closing 4.1 branch.

Tom, this hasn't been reconfirmed in a while. Does this bug still exist in gcj?

It at least still happens with 4.3, and I have no belief that it is fixed in 4.4 if comment #3 applies.
Code generated is also abysmal (-O2):

_ZN1t18__U3c_clinit__U3e_EJvv:
.LFB2:
	pushl	%ebp
.LCFI0:
	movl	%esp, %ebp
.LCFI1:
	subl	$8, %esp
.LCFI2:
	movl	$4, 4(%esp)
	movl	$_Jv_intClass, (%esp)
	call	_Jv_NewPrimArray
	movl	4(%eax), %edx
	testl	%edx, %edx
	je	.L8
	cmpl	$1, %edx
	movl	$5, 8(%eax)
	jbe	.L9
	cmpl	$2, %edx
	movl	$7, 12(%eax)
	jbe	.L10
	cmpl	$3, %edx
	movl	$9, 16(%eax)
	jbe	.L11
	movl	$11, 20(%eax)
	movl	%eax, _ZN1t1xE
	leave
	ret
.L8:
	movl	$0, (%esp)
	call	_Jv_ThrowBadArrayIndex
.L9:
	movl	$1, (%esp)
	call	_Jv_ThrowBadArrayIndex
.L10:
	movl	$2, (%esp)
	call	_Jv_ThrowBadArrayIndex
.L11:
	movl	$3, (%esp)
	call	_Jv_ThrowBadArrayIndex

(4.3 again.) We do not seem to be able to CSE the load of the array length on the tree level, and we have no way (still) of commoning the calls to _Jv_ThrowBadArrayIndex either:

<bb 2>:
  D.241 = _Jv_NewPrimArray (&_Jv_intClass, 4);
  D.249 = D.241->length;
  if (D.249 != 0) goto <bb 4>; else goto <bb 3>;

<bb 3>:
  _Jv_ThrowBadArrayIndex (0);

<bb 4>:
  D.241->data[0] = 5;
  D.263 = D.241->length;
  if ((unsigned int) D.263 > 1) goto <bb 6>; else goto <bb 5>;

<bb 5>:
  _Jv_ThrowBadArrayIndex (1);

<bb 6>:
  D.241->data[1] = 7;
  D.277 = D.241->length;
  if ((unsigned int) D.277 > 2) goto <bb 8>; else goto <bb 7>;

<bb 7>:
  _Jv_ThrowBadArrayIndex (2);

<bb 8>:
  D.241->data[2] = 9;
  D.291 = D.241->length;
  if ((unsigned int) D.291 > 3) goto <bb 10>; else goto <bb 9>;

<bb 9>:
  _Jv_ThrowBadArrayIndex (3);

<bb 10>:
  D.241->data[3] = 11;
  x = D.241;
  return;

Closing 4.2 branch.

GCC 4.3.4 is being released, adjusting target milestone.

GCC 4.3.5 is being released, adjusting target milestone.

4.3 branch is being closed, moving to 4.4.7 target.

4.4 branch is being closed, moving to 4.5.4 target.

GCC 4.6.4 has been released and the branch has been closed.

The 4.7 branch is being closed, moving target milestone to 4.8.4.

GCC 4.8.4 has been released. The gcc-4_8-branch is being closed, re-targeting regressions to 4.9.3.

GCC 4.9.3 has been released.
The GCC 4.9 branch is being closed.

This won't be fixed for 5.x or 6.x, so closing as won't fix.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=18190
Iterate through all distinct dictionary values in a list of dictionaries

Assuming a list of dictionaries, the goal is to iterate through all the distinct values in all the dictionaries. Example:

d1 = {'a': 1, 'c': 3, 'e': 5}
d2 = {'b': 2, 'e': 5, 'f': 6}
l = [d1, d2]

The iteration should be over 1, 2, 3, 5, 6; it does not matter whether the result is a set or a list.

Answer: You can use set with a generator expression:

d1 = {'a': 1, 'c': 3, 'e': 5}
d2 = {'b': 2, 'e': 5, 'f': 6}
L = [d1, d2]

final = set(j for k in L for j in k.values())
# {1, 2, 3, 5, 6}

Answer: You can use set.union to collect the distinct values:

res = set().union(*(i.values() for i in (d1, d2)))
# {1, 2, 3, 5, 6}

Answer: A simple solution using itertools.chain and a generator expression:

from itertools import chain

set(chain.from_iterable(d.values() for d in l))

A note on iterating the list itself: surprisingly enough, it's not all that different from iterating over any other list. `for dict_entity in dict_list:` will handle each individual dictionary in the list of dictionaries just as if they were individual items in a list or any other iterable.
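The approaches above are interchangeable; here is a quick side-by-side check using the question's sample data:

```python
from itertools import chain

d1 = {'a': 1, 'c': 3, 'e': 5}
d2 = {'b': 2, 'e': 5, 'f': 6}
dicts = [d1, d2]

# 1. Set comprehension over all values.
via_comprehension = {v for d in dicts for v in d.values()}

# 2. set.union over the unpacked value views.
via_union = set().union(*(d.values() for d in dicts))

# 3. itertools.chain to flatten the value views, then set() to dedupe.
via_chain = set(chain.from_iterable(d.values() for d in dicts))

assert via_comprehension == via_union == via_chain == {1, 2, 3, 5, 6}
print(sorted(via_comprehension))  # [1, 2, 3, 5, 6]
```

All three deduplicate the shared value 5; sort the set afterwards if a stable order matters.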
http://thetopsites.net/article/51303187.shtml
rich:calendar and calendar objects
Fab Mars, Oct 11, 2007 4:54 AM

Hi

We're using (Gregorian)Calendars in our data model and we'd like to use rich:calendar without having to convert our Calendars to Dates manually and vice versa. We cannot really use a converter since it does object-to-string and string-to-object. Wouldn't it be nice to bind Calendars as well as Dates in the rich:calendar component?

This content has been marked as final. Show 5 replies

1. Re: rich:calendar and calendar objects
Ilya Shaikovsky, Oct 11, 2007 8:12 AM (in response to Fab Mars)

Update your RF version. This improvement was already made.

2. Re: rich:calendar and calendar objects
Fab Mars, Oct 16, 2007 9:09 AM (in response to Fab Mars)

Hi

I'm sorry but this is not the case. I checked with last Friday's snapshot, and here is what I have:

/pages/test.xhtml @15,49 value="#{fakeData.calValue}": Can't set property 'calValue' of type 'java.util.Calendar' on class 'com.xxx.xxx.xxx.fake.FakeData' to value '4/10/07 0:00'.

Here is my test.xhtml page:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<html xmlns="" xmlns:
  <ui:composition
    <ui:define
      <h:form>
        <h:panelGroup>
          <rich:calendar</rich:calendar>
          <h:commandButton</h:commandButton>
        </h:panelGroup>
      </h:form>
    </ui:define>
  </ui:composition>
</html>

Here is my backing bean:

package com.xxx.xxx.xxx.fake;

import java.util.Calendar;

public class FakeData {

    private Calendar calValue;

    public Calendar getCalValue() {
        return calValue;
    }

    public void setCalValue(Calendar calValue) {
        this.calValue = calValue;
    }
}

Just in case, I checked using a more specific GregorianCalendar instead of just Calendar, and it doesn't work either. Is there anything I'm doing that's wrong? Can you help me?

3. Re: rich:calendar and calendar objects
Fab Mars, Oct 23, 2007 7:47 AM (in response to Fab Mars)

No answer?

4. Re: rich:calendar and calendar objects
Ilya Shaikovsky, Oct 23, 2007 7:51 AM (in response to Fab Mars)

What is the precise version number?
I'm successfully using the calendar type from 3.1.2 GA.

5. Re: rich:calendar and calendar objects
Fab Mars, Oct 23, 2007 12:45 PM (in response to Fab Mars)

October 12, but it was after you told me it was fixed. We've upgraded to RF 3.1.2 now. I'll retest asap. Thanks.
https://developer.jboss.org/thread/5149
The vwscanf() function is defined in the <cwchar> header file.

vwscanf() prototype

int vwscanf( const wchar_t* format, va_list vlist );

The vwscanf() function reads the data from stdin and stores the values into the respective locations as defined by vlist.

vwscanf() Parameters

- format: Pointer to a null-terminated wide string that specifies how to read the input. A conversion specification in the format string may contain:
  - An optional assignment-suppressing character *. If it is present, vwscanf() does not assign the result to any receiving argument.
  - An optional positive integer number that specifies the maximum field width, i.e. the maximum number of characters that vwscanf() may consume for that conversion.
- vlist: A list of arguments containing the receiving locations.

vwscanf() Return value

- If successful, the vwscanf() function returns the number of arguments successfully read.
- On failure, EOF is returned.

Example: How vwscanf() function works

#include <cwchar>
#include <cstdarg>
#include <clocale>

void read(const wchar_t* format, ...)
{
    va_list args;
    va_start(args, format);
    vwscanf(format, args);
    va_end(args);
}

int main()
{
    setlocale(LC_ALL, "en_US.UTF-8");
    wchar_t name[50];
    wprintf(L"What is your name? ");
    read(L" %ls", name);
    wprintf(L"Hello %ls\n", name);
    return 0;
}

When you run the program, a possible output will be:

What is your name? Götz
Hello Götz
https://www.programiz.com/cpp-programming/library-function/cwchar/vwscanf
[hackers] [slock] Simplify the oom-taming-function || FRIGN

From: <git_AT_suckless.org>
Date: Sun, 14 Feb 2016 01:34:51 +0100 (CET)

commit 3abbffa4934a62146e995ee7c2cf3ba50991b4ad
Author:     FRIGN <dev_AT_frign.de>
AuthorDate: Sun Feb 14 01:28:37 2016 +0100
Commit:     FRIGN <dev_AT_frign.de>
CommitDate: Sun Feb 14 01:28:37 2016 +0100

    Simplify the oom-taming-function

    There really is no need to source a defined variable from a linux
    header. The OOM-rank ranges from -1000 to 1000, so we can safely
    hardcode -1000, which is a sane thing to do given slock is suid and
    we don't want to play around too much here anyway.

    On another notice, let's not forget that this still is a shitty
    heuristic. The OOM-killer still can kill us (thus I also changed
    the wording in the error-message). We do not disable the
    OOM-killer, we're just hiding.

diff --git a/slock.c b/slock.c
index cf49555..3188ff7 100644
--- a/slock.c
+++ b/slock.c
@@ -60,28 +60,20 @@ die(const char *errstr, ...)
 #ifdef __linux__
 #include <fcntl.h>
-#include <linux/oom.h>

 static void
 dontkillme(void)
 {
 	int fd;
-	int length;
-	char value[64];

 	fd = open("/proc/self/oom_score_adj", O_WRONLY);
-	if (fd < 0 && errno == ENOENT)
+	if (fd < 0 && errno == ENOENT) {
 		return;
-
-	/* convert OOM_SCORE_ADJ_MIN to string for writing */
-	length = snprintf(value, sizeof(value), "%d\n", OOM_SCORE_ADJ_MIN);
-
-	/* bail on truncation */
-	if (length >= sizeof(value))
-		die("buffer too small\n");
-
-	if (fd < 0 || write(fd, value, length) != length || close(fd) != 0)
-		die("cannot disable the out-of-memory killer for this process (make sure to suid or sgid slock)\n");
+	}
+	if (fd < 0 || write(fd, "-1000\n", (sizeof("-1000\n") - 1)) !=
+	    (sizeof("-1000\n") - 1) || close(fd) != 0) {
+		die("can't tame the oom-killer. is suid or sgid set?\n");
+	}
 }
 #endif

Received on Sun Feb 14 2016 - 01:34:51 CET
http://lists.suckless.org/hackers/1602/9897.html
On Tue, Feb 17, 2009 at 2:47 AM, Christian Tanzer <tanzer at swing.co.at> wrote:

> Please note that the right-hand operand can be a dictionary
> (more specifically, any object implementing `__getitem__()`)
>
> For objects implementing `__getitem__`, the stuff inside the
> parentheses of `%(...)s` can get pretty fancy.

Indeed it can. I had some functionality for templating C programs that relied on just this. The files would be mostly C code, but then %-formatting was used to specify certain chunks to be generated by Python code.

The custom class I wrote implemented a __getitem__ method that broke down the given key into arguments to an indicated function, eval-ed Python code, and generated the desired C code. This was super-useful for things like spitting out static const arrays and specifying the array sizes in the header file without requiring duplicate human effort.

An example would be the following snippet from a C template, num_to_words.ctemplate:

%(array; char * ones; "zero one two three four five six seven eight nine".split())s

Taking this file, reading it in, and doing %-interpolation with my custom class would result in the following output:

num_to_words.h:

extern char * ones[10];

num_to_words.c:

#ifndef NUM_TO_WORDS_C
#define NUM_TO_WORDS_C

#include "num_to_words.h"

char * ones[10] = {
    "zero", "one", "two", "three", "four",
    "five", "six", "seven", "eight", "nine"
};

#endif

So there are some pretty crazy things possible with %-formatting and custom __getitem__ classes. As long as format can do similar things, though, I don't think there is a problem.

Brandon
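The mechanism Brandon describes can be sketched in a few lines. The class below is a made-up miniature, not his actual template class: the point is that __getitem__ receives everything between the parentheses of %(...)s as a single string and is free to parse it however it likes.

```python
class FuncMap:
    """Dispatch '%(op;args)s' keys to tiny text generators via __getitem__."""

    def __getitem__(self, key):
        # The whole '%(...)s' key arrives as one string; split it ourselves.
        op, _, arg = key.partition(";")
        if op == "upper":
            return arg.upper()
        if op == "array":
            # e.g. 'array;ones;zero one two' -> a C array definition
            name, _, words = arg.partition(";")
            items = ", ".join('"%s"' % w for w in words.split())
            return "char * %s[%d] = { %s };" % (name, len(words.split()), items)
        raise KeyError(key)

print("%(upper;hello)s" % FuncMap())
# HELLO
print("%(array;ones;zero one two)s" % FuncMap())
# char * ones[3] = { "zero", "one", "two" };
```

Since %-formatting only requires the right-hand operand to support mapping lookups, any object with __getitem__ works, which is what makes this templating trick possible.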
https://mail.python.org/pipermail/python-ideas/2009-February/002971.html
Over learning AMI that (1) I use often and (2) is publicly available to any PyImageSearch reader who wants to utilize it in their own projects.

And while I'm not a fan of Amazon's AWS user interface, I've gotten used to it over the years. I suppose there is a sort of "familiarity" in its clunky complexity.

But I had heard such good things about the Ubuntu DSVM that I decided to test it out.

I was incredibly impressed. The interface was easier to use. The performance was great. The price was on point. ...and it didn't hurt that all code from Deep Learning for Computer Vision with Python ran on it without a single change.

Microsoft even graciously allowed me to author a series of guest posts on their Machine Learning Blog and share my experiences while I was using it, testing it, and evaluating it:

- Deep Learning & Computer Vision in the Microsoft Azure Cloud
- 22 Minutes to 2nd Place in a Kaggle Competition, with Deep Learning & Azure
- Training state-of-the-art neural networks in the Microsoft Azure cloud

Microsoft is serious about establishing themselves as the "go to" cloud environment for deep learning, machine learning, and data science. The quality of their DSVM product shows it.

In the remainder of today's special edition blog post I'll be sharing my thoughts on the DSVM and even demonstrating how to start your first instance and run your first deep learning example on it.

To learn more about Microsoft's deep learning virtual machine (and whether it's right for you), keep reading!

A review of Microsoft's deep learning virtual machine

When I first evaluated Microsoft's data science and deep learning virtual machine (DSVM) I took all code examples from Deep Learning for Computer Vision with Python and ran each and every example on the DSVM.
The process of manually running each example and inspecting the output was a bit tedious, but it was also a great way to put the DSVM through the wringer and assess it for:

- Beginner usage (i.e., just getting started with deep learning)
- Practitioner usage, where you're building deep learning models and need to quickly evaluate performance
- Research usage, where you're training deep neural networks on large image datasets

The codebase of Deep Learning for Computer Vision with Python complements this test perfectly.

The code inside the Starter Bundle is meant to help you take your first step with image classification, deep learning, and Convolutional Neural Networks (CNNs). If the code ran without a hitch on the DSVM, then I could certainly recommend it to beginners looking for a pre-configured deep learning environment.

The chapters + accompanying code in the Practitioner Bundle cover significantly more advanced techniques (transfer learning, fine-tuning, GANs, etc.). These are the techniques a deep learning practitioner or engineer would be applying in their day-to-day work. If the DSVM handled these examples, then I knew I could recommend it to deep learning practitioners.

Finally, the code inside the ImageNet Bundle requires GPU horsepower (the more the better) and I/O performance. Inside this bundle I demonstrate how to replicate the results of state-of-the-art publications (ex. ResNet, SqueezeNet, etc.) on massive image datasets, such as the 1.2 million image ImageNet dataset. If the DSVM could handle reproducing the results of state-of-the-art papers, then I knew I could recommend the DSVM to researchers.

In the first half of this blog post I'll summarize my experience with each of these tests. From there I'll show you how to launch your first deep learning instance in the Microsoft cloud and then run your first deep learning code example in the DSVM.
Comprehensive deep learning libraries

Figure 1: The Microsoft Azure Data Science Virtual Machine comes with all packages shown pre-installed and pre-configured for your immediate use.

Microsoft's deep learning virtual machine runs in their Azure cloud. It can technically run either Windows or Linux, but for nearly all deep learning projects, I would recommend you use their Ubuntu DSVM instance (unless you have a specific reason to use Windows).

The list of packages installed on the DSVM is quite comprehensive — you can find the full list here. I have included the most notable deep learning and computer vision packages (particularly for PyImageSearch readers) below to give you an idea of how comprehensive this list is:

- TensorFlow
- Keras
- mxnet
- Caffe/Caffe2
- Torch/PyTorch
- OpenCV
- Jupyter
- CUDA and cuDNN
- Python 3

The DSVM team releases a new, updated DSVM every few months with the most up-to-date packages pre-configured and pre-installed. This is a huge testament not only to the DSVM team for keeping this instance running seamlessly (keeping the DSVM free of package conflicts must be a painful process, but it's totally transparent to the end user), but also to Microsoft's desire to have users enjoying the experience as well.

What about GPUs?

The DSVM can run in both CPU-only and GPU instances. For the majority of the experiments and tests I ran below, I utilized an Ubuntu GPU instance with the standard NVIDIA K80 GPU.

Additionally, Microsoft granted me access to their just-released NVIDIA V100 behemoth, with which I ran a few additional quick spot checks (see results below — it's fast!).

For all Starter Bundle and Practitioner Bundle experiments I opted to test out Microsoft's Jupyter Notebook. The process was incredibly easy. I copied and pasted the Jupyter Notebook server URL in my browser, launched a new notebook, and within a few minutes I was running examples from the book.
For the ImageNet Bundle experiments I used SSH, as replicating the results of state-of-the-art papers required days of training time and I personally do not think that is a proper usage of Jupyter Notebooks.

Easy for deep learning beginners to use

Figure 2: Training the LeNet architecture on the MNIST dataset. This combination is often referred to as the "hello world" example of deep learning.

In my first guest post on the Microsoft blog, I trained a simple Convolutional Neural Network (LeNet) on the MNIST handwritten digit dataset. Training LeNet on MNIST is likely the first "real" experiment for a beginner studying deep learning. Both the model and dataset are straightforward, and training can be performed on a CPU or GPU as well.

I took the code from Chapter 14 of Deep Learning for Computer Vision with Python (Starter Bundle) and executed it in a Jupyter Notebook (which you can find here) on the Microsoft DSVM. The results can be seen in Figure 2 above. I was able to obtain 98% classification accuracy after 20 epochs of training.

All other code examples from the Starter Bundle of Deep Learning for Computer Vision with Python ran without a hitch as well. Being able to run the code in the browser via a Jupyter Notebook on the Azure DSVM (with no additional configuration) was a great experience, and one that I believe users new to deep learning would enjoy and appreciate.

Practical and useful for deep learning practitioners

Figure 3: Taking 2nd place on the Kaggle leaderboard for the dogs vs. cats challenge is a breeze with the Microsoft Azure DSVM (pre-configured) using code from Deep Learning for Computer Vision with Python.

My second post on the Microsoft blog was geared towards practitioners. A common technique used by deep learning practitioners is to apply transfer learning, and in particular feature extraction, to quickly train a model and obtain high accuracy.
To demonstrate how the DSVM can be used by practitioners looking to quickly train a model and evaluate different hyperparameters, I:

- Utilized feature extraction using a pre-trained ResNet model on the Kaggle Dogs vs. Cats dataset.
- Applied a Logistic Regression classifier with grid-searched hyperparameters on the extracted features.
- Obtained a final model capable of capturing 2nd place in the competition.

I also wanted to accomplish all of this in under 25 minutes.

The end result was a model capable of sliding into 2nd place with only 22 minutes of computation (as Figure 3 demonstrates).

You can find a full writeup on how I accomplished this task, including the Jupyter Notebook + code, in this post.

But could it be done faster?

After I ran the Kaggle Dogs vs. Cats experiment on the NVIDIA K80, Microsoft allowed me access to their just-released NVIDIA V100 GPUs. I had never used an NVIDIA V100 before, so I was really excited to see the results.

I was blown away. While it took 22 minutes for the NVIDIA K80 to complete the pipeline, the NVIDIA V100 completed the task in only 5 minutes — a massive improvement of over 340%!

I believe deep learning practitioners will get a lot of value out of running their experiments on a V100 vs. a K80, but you'll also need to justify the price (covered below).

Powerful enough for state-of-the-art deep learning research

The DSVM is perfectly suitable for deep learning beginners and practitioners — but what about researchers doing state-of-the-art work? Is the DSVM still useful for them?

To evaluate this question, I:

- Downloaded the entire ImageNet dataset to the VM
- Took the code from Chapter 9 of the ImageNet Bundle of Deep Learning for Computer Vision with Python, where I demonstrate how to train SqueezeNet on ImageNet

I chose SqueezeNet for a few reasons:

- I had a local machine already training SqueezeNet on ImageNet for a separate project, enabling me to easily compare results.
- SqueezeNet is one of my personal favorite architectures.
- The resulting model size (< 5MB without quantization) is more readily used in production environments where models need to be deployed over resource-constrained networks or devices.

I trained SqueezeNet for a total of 80 epochs on the NVIDIA K80. SGD was used to train the network with an initial learning rate of 1e-2 (I found the Iandola et al. recommendation of 4e-2 to be far too large for stable training). Learning rates were lowered by an order of magnitude at epochs 50, 65, and 75, respectively.

Each epoch took approximately 140 minutes on the K80, so the entire training time was ~1 week. Using multiple GPUs could have easily reduced training time to 1-3 days, depending on the number of GPUs utilized.

After training was complete, I evaluated on a 50,000 image testing set (which I sampled from the training set so I did not have to submit the results to the ImageNet evaluation server). Overall, I obtained 58.86% rank-1 and 79.38% rank-5 accuracy. These results are consistent with the results reported by Iandola et al. The full post on SqueezeNet + ImageNet can be found on the Microsoft blog.

Incredibly fast training with the NVIDIA V100

After I trained SqueezeNet on ImageNet using the NVIDIA K80, I repeated the experiment with a single V100 GPU.

The speedup in training was incredible. Compared to the K80 (~140 minutes per epoch), the V100 was completing a single epoch in 28 minutes — a speedup of over 400%!

I was able to train SqueezeNet and replicate the results of my previous experiment in just over 36 hours.

Deep learning researchers should give the DSVM serious consideration, especially if you do not want to own and maintain the actual hardware.

But what about price?

On Amazon's EC2, for the p2 instance family, you'll pay $0.90/hr (1x K80), $7.20/hr (8x K80), or $14.40/hr (16x K80). That is $0.90/hr per K80.
On Microsoft Azure, prices are exactly the same: $0.90/hr (1x K80), $1.80/hr (2x K80), and $3.60/hr (4x K80). This also comes out to $0.90/hr per K80.

Amazon has V100 machines ready, priced at $3.06/hr (1x V100), $12.24/hr (4x V100), and $24.48/hr (8x V100). Be prepared to spend $3.06/hr per V100 on Amazon EC2.

The recently released V100 instances on Azure are priced competitively at $3.06/hr (1x V100), $6.12/hr (2x V100), and $12.24/hr (4x V100). This also comes out to $3.06/hr per V100.

Microsoft offers Azure Batch AI pricing, similar to Amazon's spot pricing, enabling you to potentially get a better deal on instances.

It wouldn't be a complete (and fair) price comparison unless we look at Google, Paperspace, and Floydhub as well.

Google charges $0.45/hr (1x K80), $0.90/hr (2x K80), $1.80/hr (4x K80), and $3.60/hr (8x K80). This is clearly the best pricing model for the K80, at half the cost of Microsoft/EC2. Google does not have V100 machines available from what I can tell. Instead they offer their own breed, the TPU, which is priced at $6.50/hr per TPU.

Paperspace charges $2.30/hr (1x V100), and they've got API endpoints. Floydhub pricing is $4.20/hr (1x V100), but they offer some great team collaboration solutions.

When it comes to reliability, EC2 and Azure stick out. And when you factor in how easy it is to use Azure (compared to EC2), it becomes harder and harder to justify sticking with Amazon for the long run.

If you're interested in giving the Azure cloud a try, Microsoft offers free trial credits as well; however, the trial cannot be used for GPU machines (I know, this is a bummer, but GPU instances are at a premium).

Starting your first deep learning instance in the Microsoft cloud

Starting a DSVM instance is dead simple — this section will be your quick-start guide to launching one. For advanced configurations you'll want to refer to the documentation (as I'll mainly be selecting the default options).
Additionally, you may want to consider signing up for Microsoft's free Azure trial so you can test out the Azure cloud without committing to spending your funds. Note: Microsoft's trial cannot be used for GPU machines. I know, this is a bummer, but GPU instances are at a huge premium.

Let's begin!

Step 1: Create a user account or login at portal.azure.com.

Step 2: Click "Create Resource" in the top-left.

Step 3: Enter "Data Science Virtual Machine for Linux" in the search box and it will auto-complete as you type. Select the first Ubuntu option.

Step 4: Configure the basic settings:
- Create a Name (no spaces or special chars).
- Select HDD (do not select SSD).
- I elected to use a simple password rather than a key file, but this is up to you.
- Under "Subscription" check to see if you have any free credits you can use.
- You'll need to create a "Resource Group" — I used my existing "rg1".

Step 5: Choose a Region and then choose your VM. I selected the available K80 instance (NC6). The V100 instance is also available if you scroll down (NC6S_V3). One of my complaints is that I don't understand the naming conventions. I was hoping they were named like sports cars, or at least something like "K80-2" for a 2x K80 machine; instead they're named after the number of vCPUs, which is a bit confusing when we're interested in the GPUs.

Figure 8: The Microsoft Azure DSVM will run on a K80 GPU and V100 GPU.

Step 6: Review the Summary page and agree to the contract.

Step 7: Wait while the system deploys — you'll see a convenient notification when your system is ready.

Step 8: Click "All resources". You'll see everything you're paying for here. If you select the virtual machine, then you'll see information about your machine (open the screenshot below in a new tab so you can see a higher resolution version of the image, which includes the IP address, etc.).

Step 9: Connect via SSH and/or Jupyter.
Clicking the connect option will provide you with connectivity details for SSH, whether you're using a key file or password. Unfortunately, a convenient link to Jupyter isn't shown. To access Jupyter, you'll need to:
- Open a new tab in your browser
- Navigate to https://<your public IP>:8000 (the "s" after "http" is important). Make sure you fill in the URL with your public IP.

Running code on the deep learning virtual machine

Now, let's run the LeNet + MNIST example from my first Microsoft post in Jupyter. This is a two step process:

Step 1: SSH into the machine (see Step 9 in the previous section). Change directory into the ~/notebooks directory. Clone the repo:

$ git clone

Step 2: Fire up Jupyter in your browser (see Step 9 in the previous section). Click the microsoft-dsvm directory. Open the appropriate .ipynb file (pyimagesearch-training-your-first-cnn.ipynb).

But before running the notebook, I'd like to introduce you to a little trick. It isn't mandatory, but it can save some headache if you're working with multiple notebooks in your DSVM. The motivation for this trick is this: if you execute a notebook but leave it "running", the kernel still has a lock on the GPU. Whenever you run a different notebook, you'll see errors such as "resource exhausted". The quick fix is to place the following two lines in their very own cell at the very bottom of the notebook. Now, when you execute all the cells in the notebook, the notebook will gracefully shut down its own kernel. This way you won't have to remember to manually shut it down.

From there, you can click somewhere inside the first cell and then click "Cell > Run all". This will run all cells in the notebook and train LeNet on MNIST. You can watch the output in the browser and obtain a result similar to mine below:

Figure 12: Training LeNet on MNIST in the Microsoft Azure cloud and the Data Science Virtual Machine (DSVM).

I like to clear all output when I'm finished, or before starting new runs after modifications.
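The two lines themselves did not survive in this text, so what follows is an assumption on my part rather than the post's exact snippet: in the classic Jupyter Notebook, a commonly used version of this self-shutdown trick is a JavaScript cell magic that deletes the notebook's session (and with it the kernel's lock on the GPU):

```
%%javascript
Jupyter.notebook.session.delete();
```

Whatever the original pair of lines was, the effect described is the same: the kernel shuts down once the last cell runs, so the GPU is freed for other notebooks.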
You can do this from the "Kernel > Restart & Clear Output" menu selection.

Summary

In today's blog post, I reviewed and discussed my personal experience with Microsoft's data science and deep learning virtual machine (DSVM). I also demonstrated how to launch your first DSVM instance and run your first deep learning example on it.

I'll be the first to admit that I was a bit hesitant when trying out the DSVM — but I'm glad I did. Every test I threw at the DSVM, ranging from beginner usage to replicating the results of state-of-the-art papers, was handled with ease. And when I was able to use Microsoft's new NVIDIA V100 GPU instances, my experiments flew, seeing a whopping 400% speedup over the NVIDIA K80 instances.

If you're in the market for a deep learning cloud-based GPU instance, I would encourage you to try out Microsoft's DSVM — the experience was great, Microsoft's support was excellent, and the DSVM itself was powerful yet easy to use.

Additionally, Microsoft and the DSVM team will be sponsoring PyImageConf 2018, PyImageSearch's very own computer vision and deep learning conference. PyImageConf attendees will have free access to DSVM GPU instances while at the conference, allowing you to:
- Follow along with talks and workshops
- Train your own models
- Better learn from speakers

To learn more about PyImageConf 2018, just click here. I hope to see you there!

Does it make sense to still be looking at VMs at this point, when Docker is taking over a lot of the functionality of VMs?

I think there might be a bit of confusion regarding the terminology. Microsoft calls their DSVM a "virtual machine" but it's actually more similar to a super lightweight hypervisor or even container virtualization. It still runs as close "to the metal" as it possibly can, giving you access to the GPU, better CPU utilization, I/O, etc.

Not true. It is actually a virtual machine. It is not a container.

See my reply to "TimY".
I was addressing what most people think of when they hear the term "VM".

This is splitting hairs. Docker for Mac and Windows actually runs a thin VM layer (Alpine Linux) under the hood, so Docker on Linux is the most 'pure' non-VM scenario. The key to Docker is not VM or not, but the ability to easily layer system level dependencies in such a flexible and maintainable way.

I agree, it's splitting hairs at this point. But most people will tend to think of VMs as big bloated images that typically run on VMware or VirtualBox. VMs encompass a wide range of applications and implementations. I was commenting on what most people think of when they hear the term "VM" and didn't want others to get confused.

Very interesting document! Thanks!

Thanks for the great post! I actually had quite a bad experience with Azure: I used it for the first time to compete in Kaggle, but on the second day the machine just wouldn't connect with SSH. The whole affair cost me about two thirds of a work day, in which I couldn't access the previous day's work. After the fact I thought that there must be a way to make a code directory easily available from any machine, thus serving two purposes:
1. Backup, in cases like mine.
2. The ability to develop and test the code on a low end machine, and just activate the high end machine for the final run.
It should be easy, but I couldn't find any obvious way to do so. Does anyone know how to do that?

I'm sorry to hear about the bad experience, Yovel. That's definitely a bummer. Did you try reaching out to Microsoft's support directly?

Yeah, they took their time… It was weird, there was probably a problem with my contact information. My question still stands, though: do you know of any such way? It should be very helpful to anyone running code. Right now I'm sending files from one computer to the other using FTP, which is *very* fast, but it gets tedious after some time, and keeping a shared directory would be much more helpful.
Take a look at Tom's reply to your question. You can create a storage resource that different VMs can mount so it appears like a local folder.

There's also a desktop client (Azure Storage Explorer) to upload/download files.

Good coverage Adrian. I would just add that with the free trial you can request access to GPU VMs by sending a support request to Azure customer service. When on a paid plan you still need to contact them to get access to the NCxS_V3 machines. Plus, if you want to leave a process running for a long time, I recommend doing it in a 'screen session' that is spawned from the SSH terminal but runs independently. Cheers, Tom

Thanks for the tip, Tom! I also concur — for long running sessions absolutely use screen.

What environment did you use? The main problem with their environment is that their OpenCV doesn't support reading from video. I specifically used their conda environment, but I'm pretty sure that the others don't support it either. I mean, I could just install from source, but then I would lose the whole advantage of the pre-built environment. The conda environment didn't cooperate, and even when I installed from conda-forge it didn't install a video supporting version.

I used the Python 3 environment included with the DSVM. I didn't try accessing a video file or video stream from the DSVM, so I can't comment on that. I would also disagree with your statement about losing "the whole advantage of the pre-built environment". That's not entirely true. If you need to compile and install 1% of the software on the machine to achieve additional functionality, and not change 99% of the other packages, then I would say you are not losing very much. That's not to say that compiling OpenCV isn't a bit of a task. I do understand where you're coming from.

Well, theoretically it's easier, but you know how it is… You reinstall one thing and you ruin everything else's dependencies for some obscure reason.
In the end I exported my native conda environment to run my code (if it will ever interest anyone). I just hoped you had already done it the "right" way. I will try to update if I ever do it the right way myself.

Thanks for this info. I would be very interested to learn how to 'process' a video stream with Azure. I think the MS IoT hub would be too slow, but I've not tried that route.

The link to the third blog post, titled "Training state-of-the-art neural networks in the Microsoft Azure cloud", points to the same page as the previous one, "22 minutes to 2nd place…".

Thanks Prafull, I have fixed the links 🙂

Hey Adrian, very cool, thanks for sharing the information. Somehow I got used to the AWS EC2; their GUI designer must have travelled from 20 years ago :p. I've stopped using AWS since I tried IBM PowerAI 8 with 4 P100 GPUs. Maybe I'm a little biased to say that?

Ha! I don't think you're biased at all 😉

Good work Adrian for giving a wide menu of existing approaches. My only hope is that Microsoft will not entice with its DSVM only to do "vendor lock-in".

Hey Abkul — sorry for any confusion here, but the DSVM is meant to be executed in the Azure environment, just like Amazon's AMIs are meant to be executed only in the Amazon ecosystem. You can't move the two between the platforms.

It is super painful to get the GPUs… How is this "easier to use" than AWS?

The user interface itself is significantly easier to use than AWS.

I use the DSVM all the time… I can access it anytime from anywhere. I also use Azure File Share, and have all my ML files and data in one place, so I no longer need to copy or move files from one VM to another, regardless of Windows or Linux.

Can the DSVM utilize built-in models like we see in ML Studio? I prefer ML Studio as it provides its own models, like boosted decision trees and ANNs, and a very clear workflow from training to evaluating and then deploying models that can be called via web services. It also allows us to visualize each step of our ML pipeline.
However, ML Studio also lacks certain features, mainly due to the fact that it takes a very high-level approach, so I'm wondering if the DSVM would work better when there is more customization involved (e.g. freezing NN inputs to constant values during model scoring, which is not possible in ML Studio, or repetitively scoring a model until a certain output constraint is satisfied) while still being able to use Azure's native models. What are your thoughts? PS. Thanks for the tutorial! Extremely helpful as always.

I've only played with ML Studio. I don't have enough experience to fully comment on it. But I do know that you can spin up an Azure instance (including the DSVM) and then connect it to ML Studio and their experiment workbench and run experiments that way. But again, I'm not well versed in ML Studio, so I don't think I'm equipped to answer that question.

Thanks a lot for the guide. I am trying to replicate it, and I have succeeded in:
- getting a K80 VM in Azure
- interfacing with it through SSH using PuTTY
- connecting through Jupyter, and viewing code in it
I have so far not been able to accomplish the following:
- Run the example above and get the shown output.
I have tried a lot; here is what I do:
1. Under 'Kernel' choose 'Restart & Clear Output'.
2. Click in the top cell and under 'Cell' choose 'Run All'.
First, very little happens. Under the first cell I get a warning, and then, after a while:

ConnectionResetError Traceback (most recent call last)
…
14 dataset = datasets.fetch_mldata("MNIST Original")
15 data = dataset.data

and then a whole lot of other traceback. What could be the problem?

The scikit-learn library uses the mldata.org website to download datasets. MLData hosts these datasets for free. Sometimes the website does go down, which is why you are seeing an error related to the connection. To verify whether the mldata.org website is down, you can try to load it in your browser.
The website does come back though 🙂 In the meantime you can use the Keras helper utilities to load the MNIST dataset. See this post for an example of such. If you swap out the scikit-learn MNIST helpers for the Keras helpers, you'll be good to go.

Hello, I discovered that my connection to Jupyter was completely unsecure, as the browser flatly refused to use the https://[ip-address]:8000 as you recommend above; it just became //[ip-address]:8000, with a warning that the connection was unsecure. So it seems that it didn't use any SSH tunnel or anything. However, I found a guide here, which made it possible for me to connect without the warning, and I assume that it is now actually tunnelling through SSH:

Nice, thanks for sharing Torben!

When using Jupyter to run "pyimagesearch-training-your-first-cnn" on an NC6, if someone gets the same error as me, it can be fixed by changing the code in another way. I don't know why. From:

from sklearn import datasets
dataset = datasets.fetch_mldata("MNIST Original")

change to:

from sklearn.datasets import fetch_mldata
dataset = fetch_mldata("MNIST Original")

Wow, that's incredibly odd. I can't imagine how that would make a difference, but thanks for sharing Davis 🙂

Hi again. Another thing I am struggling with is how to interface with the VM in Azure through a GUI. This is for convenience, and for transferring the DL4CV code and data from Google Drive, where I have it, to the VM, using the Firefox browser. It is virtually impossible to upload them from my local PC because I have a very slow upload speed. I found some guidance on the internet, and installed something called xrdp and Ubuntu desktop on the VM. But when I try to connect via Remote Desktop from Windows, I get a grey desktop for a few seconds, and then it closes down again. Maybe you could point to a good guide on how to get a remote desktop running towards the DSVM?
I haven't tried to use a GUI interface for the DSVM for an extended period of time, and I have never used Windows to access the DSVM, so I unfortunately do not have any suggestions for remote desktop. I would contact Azure support and mention the concern to them — I'm sure they have many guides on how to remote desktop to instances in the Azure cloud.

1. pip install xrdp
2. On the Azure portal, go to the DSVM Networking settings and add a new inbound rule to accept traffic on port 3389 (the well-known Remote Desktop port) for both TCP and UDP from any address.
3. Press Connect (to VM) on the toolbar and save the RDP connection file, or just copy the FQDN of your machine and append ":3389" to it when connecting to it from a PC.
4. Launch the RDP file.
5. Alternatively, you could run mstsc.exe with IP_or_DNS_address_of_your VM:3389 username
6. Please note that unlike a PC RDP connection, you'll have to reauthenticate yourself within that session.

Off: Please note that XRDP by default supports the US keyboard layout. However, there are workarounds by defining new layouts. E.g. this post describes how to switch to the Slovenian layout. Following that, I managed to switch to Hungarian, but the logon screen layout will still be US:

Hi Torben, it'll work only if you add a new network rule to let your DSVM accept RDP packets on port 3389. To do that, follow the steps of this guide. Networking config is at #7. Please note that the RDP login screen uses the US keyboard layout by default, regardless of your locale settings. When you connect, you will most probably find that the clipboard is not working. Known XRDP problem. I read that disabling every other RDP channel, like printer or audio, helps, but in my case nope. Investigating.

Pro tip: You could also subscribe to Azure Storage (and select File Storage). You'll get an SMB file share that you can access from either your local PC or the cloud. I am using this to transfer training sets and project files.
BR, Z

Forgot to add the URL of the guide:

Awesome, thank you so much for sharing this information Zoltán!

That worked perfectly! Now I could download the PB code etc. directly from Firefox into the VM, circumventing my slow internet connection at home. Thanks Zoltán!

Hello Adrian. I successfully ran the lenet_mnist.py file on my VM in Azure, but it took 35 sec per epoch. It also noted at the start that "Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA". Both the time used per epoch and the message above seem to indicate that the TensorFlow binary is not using the K80 GPU that was supposed to be in the VM, and that the binary is not compiled to use some special instructions of the CPU. So it appears that I am using the wrong TensorFlow binary? I didn't install anything of this on my own; everything was preinstalled.

Hello Adrian, congrats on the wedding! I just found the answer to my previous question: in order to utilize the GPU, you have to create a 'DLVM' (Deep Learning Virtual Machine), rather than a 'DSVM' (Data Science Virtual Machine). There is a surcharge of approx. 30% on the DLVM compared to the DSVM. I tried the NC6 / K80 VM with the LeNet code shown above. It ran at 5 sec per epoch.

Thanks Torben! And thank you for providing the answer to your question.

Hi Adrian! Thanks for this guide — it's unfortunate that I am unable to load the Jupyter notebook interface. I have started the VM and tried to open it up, but the page never loads. Is there something obvious that I might have missed?

The port of the server may be blocked from serving web traffic. I would contact Microsoft support and ask them if the port is blocked.
https://www.pyimagesearch.com/2018/03/21/my-review-of-microsofts-deep-learning-virtual-machine/
In this tutorial, we are going to solve the following problem: given an integer n, find (1^n + 2^n + 3^n + 4^n) % 5.

The number (1^n + 2^n + 3^n + 4^n) will be very large if n is large; it cannot fit even in a long integer. So, we need to find an alternative solution. If you evaluate the expression for n = 1, 2, 3, 4, 5, 6, 7, 8, 9 you will get the values 10, 30, 100, 354, 1300, 4890, 18700, 72354, 282340 respectively. Observe these results carefully. You will find that the last digit of the result repeats for every 4th number; this is the periodicity of the expression. So, without actually evaluating the expression, we can say that if n % 4 == 0 then (1^n + 2^n + 3^n + 4^n) % 5 is 4, otherwise it is 0. Let's see the code.

#include <bits/stdc++.h>
using namespace std;

int findSequenceMod5(int n) {
    // the sum is 4 (mod 5) when n is a multiple of 4, and 0 otherwise
    return (n % 4) ? 0 : 4;
}

int main() {
    int n = 343;
    cout << findSequenceMod5(n) << endl;
    return 0;
}

If you run the above code, then you will get the following result.

0

If you have any queries about the tutorial, mention them in the comment section.
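The periodicity observed above can be justified with Fermat's little theorem; this short derivation is an addition, not part of the original tutorial:

```latex
% For k in {1,2,3,4} we have gcd(k,5)=1, so Fermat's little theorem gives
% k^4 \equiv 1 \pmod{5}, hence k^{n+4} \equiv k^n \pmod{5}: each term,
% and therefore the sum, is periodic in n with period 4.
\begin{align*}
n \equiv 1 \pmod{4} &: \quad 1 + 2 + 3 + 4 = 10 \equiv 0 \pmod{5}\\
n \equiv 2 \pmod{4} &: \quad 1 + 4 + 9 + 16 = 30 \equiv 0 \pmod{5}\\
n \equiv 3 \pmod{4} &: \quad 1 + 8 + 27 + 64 = 100 \equiv 0 \pmod{5}\\
n \equiv 0 \pmod{4},\ n > 0 &: \quad 1 + 1 + 1 + 1 = 4 \equiv 4 \pmod{5}
\end{align*}
```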
https://www.tutorialspoint.com/find-1-n-plus-2-n-plus-3-n-plus-4-n-mod-5-in-cplusplus
Some time ago I was working on a simple Python script. What the script did is not very important for this article. What is important is the way it parsed arguments, and the way I managed to improve it. All the examples below look similar to that script; however, I cut most of the code, and changed the sensitive information, which I cannot publish.

The main ideas for the options management are:

- The script reads all config values from a config file, which is a simple ini file.
- The config file values can be overwritten by the command line values.
- There are special command line arguments which don't exist in the config file, like:
  - --help - shows help in the command line
  - --create-config - creates a new config file with default values
  - --config - the path to the config file which should be used
- If there is no value for a setting in the config file, nor in the command line arguments, then a default value should be taken.
- The option names in the configuration file and the command line must be the same. If there is repo-branch in the ini file, then there must be --repo-branch in the command line. However, the variable where it will be stored in Python will be named repo_branch, as we cannot use - in a variable name.
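As a standalone sketch of those precedence rules (Python 3 here, while the post's code is Python 2; the dictionaries and values below are illustrative, not the real script's options), the layering looks like this:

```python
# Hedged sketch of the precedence rules described above:
# defaults are overridden by config-file values, which are in turn
# overridden by command-line values -- but only where the user
# actually supplied one (None means "not given on the command line").
defaults = {"repo_branch": "something", "config": "/tmp/example.cfg"}
from_config_file = {"repo_branch": "another"}
from_command_line = {"repo_branch": None, "config": None}

merged = dict(defaults)
merged.update(from_config_file)
merged.update({k: v for k, v in from_command_line.items() if v is not None})

print(merged["repo_branch"])  # "another": the config file wins when the CLI is silent
```

The rest of the post builds exactly this layering, plus validation of the config file keys, out of small functions.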
The Basic Implementation

The basic config file is:

[example]
repo-branch = another

The basic implementation was:

#!/usr/bin/env python

import sys
import argparse
import ConfigParser
import logging

logger = logging.getLogger("example")
logger.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s - %(name)s : %(lineno)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
logger.addHandler(ch)

class Options:

    def __init__(self, args):
        self.parser = argparse.ArgumentParser(description="Example script.")
        self.args = args
        self.parser.add_argument("--create-config", dest="create_config",
                                 help="Create configuration file with default values")
        self.parser.add_argument("--config", dest="config",
                                 default="/tmp/example.cfg",
                                 help="Path to example.cfg")
        self.parser.add_argument("--repo-branch", dest="repo_branch",
                                 default="something",
                                 help="git branch OR git tag from which to build")
        # HERE COME OVER 80 LINES WITH DECLARATION OF THE NEXT 20 ARGUMENTS
        self.options = self.parser.parse_args()
        print "repo-branch from command line is: {}".format(self.options.repo_branch)

    def get_options(self):
        return self.options

    def get_parser(self):
        return self.parser

class UpgradeService():

    def __init__(self, options):
        if not options:
            exit(1)
        self.options = options
        if self.options.config:
            self.config_path = self.options.config
        self.init_config_file()
        self.init_options()

    def init_config_file(self):
        """ This function is to process the values provided in the config file """
        self.config = ConfigParser.RawConfigParser()
        self.config.read(self.config_path)
        self.repo_branch = self.config.get('example', 'repo-branch')
        # HERE COME OVER 20 LINES LIKE THE ABOVE
        print "repo-branch from config is: {}".format(self.repo_branch)

    def init_options(self):
        """ This function is to process the command line options.
            Command line options always override the values given
            in the config file.
        """
        if self.options.repo_branch:
            self.repo_branch = self.options.repo_branch
        # HERE COME OVER 20 LINES LIKE THE TWO ABOVE

    def run(self):
        pass

if __name__ == "__main__":
    options = Options(sys.argv).get_options()
    upgrade_service = UpgradeService(options)
    print "repo-branch value to be used is: {}".format(upgrade_service.repo_branch)
    upgrade_service.run()

The main ideas of this code were:

- All the command line argument parsing is done in the Options class.
- The UpgradeService class reads the ini file.
- The values from the Options class and the ini file are merged into the UpgradeService fields, so a config option like repo-branch will be stored in the upgrade_service.repo_branch field.
- The upgrade_service.run() method does all the script's magic; however, this is not important here.

This way I can run the script with:

- ./example.py - which will read the config file from /tmp/example.cfg, and the repo_branch should contain another.
- ./example.py --config=/tmp/a.cfg - which will read the config from /tmp/a.cfg.
- ./example.py --help - which will show the help (this is supported automatically by the argparse module).
- ./example.py --repo-branch=1764 - and the repo_branch variable should contain 1764.

The Problems

First of all, there is a lot of repeated code, and repeated option names. Repeated code is a great way to produce lots of bugs. Each option name is mentioned in the command line argument parser (the add_argument calls in Options.__init__), and repeated later in the config file parser (the config.get calls in init_config_file). The variable name used for storing each value is repeated a couple of times: first in the argparse declaration, then in the init_options function. The conditional assignment (like the one for repo_branch in init_options) is repeated for each option, and for some options it is a little bit different. This makes the code hard to update when we change an option name, or want to add a new one. Another thing is a simple typo bug.
There is no check whether an option in the config file is a proper one. When a user, by mistake, writes repo_branch in the config file instead of repo-branch, it will be silently ignored.

The Bug

One question: can you spot the bug in the code?

The problem is that the script reads the config file, and then overwrites all the values with the command line ones. What if there is no command line argument for --repo-branch? Then the default one will be used, and it will overwrite the config one:

./example.py --config=../example.cfg
repo-branch from command line is: something
repo-branch from config is: another
repo-branch value to be used is: something

Fixing Time

The code for the two implementations (the one described above, and the one described below) can be found on github:

I tried to implement a better solution. It should fix the bug, inform the user about bad config values, and be easier to change later, while giving the same result: the values should be available as UpgradeService fields.

The class Options is not that bad. We need to store the argparse configuration somewhere. I'd just like to have the option names and the default values declared in one place, without repeating them in different places. I left the Options class, however I moved all the default values to another dictionary. There is no default value for any option in the argparse configuration. So now, if there is no command line option, e.g. for --repo-branch, then the repo_branch field in the object returned by the method Options.get_options() will be None.
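The root cause can be reproduced in a few lines (a Python 3 sketch, while the post's code is Python 2; parse_args([]) simulates an empty command line):

```python
# Why the bug happens: argparse fills in the default when the option is
# absent, so later code cannot distinguish "the user typed --repo-branch"
# from "argparse supplied the default".
import argparse

buggy = argparse.ArgumentParser()
buggy.add_argument("--repo-branch", dest="repo_branch", default="something")
print(buggy.parse_args([]).repo_branch)   # "something" -- clobbers the config value

# Without a default, an unset option stays None, so the merging code can
# tell it apart from a value the user actually provided:
fixed = argparse.ArgumentParser()
fixed.add_argument("--repo-branch", dest="repo_branch")
print(fixed.parse_args([]).repo_branch)   # None
```

This is exactly why the improved version keeps the defaults out of argparse and in a separate dictionary.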
After the changes, this part of the code is:

DEFAULT_VALUES = dict(
    config="/tmp/example.cfg",
    repo_branch="something",
)

class Options:

    def __init__(self, args):
        self.parser = argparse.ArgumentParser(description="Example script.")
        self.args = args
        self.parser.add_argument("--create-config", dest="create_config",
                                 help="Create configuration file with default values")
        self.parser.add_argument("--config", dest="config",
                                 help="Path to example.cfg")
        self.parser.add_argument("--repo-branch", dest="repo_branch",
                                 help="git branch OR git tag from which to build")
        self.options = self.parser.parse_args()
        print "repo-branch from command line is: {}".format(self.options.repo_branch)
        # Here comes the next about 20 arguments

    def get_options(self):
        return self.options

    def get_parser(self):
        return self.parser

So I have a dictionary with the default values. If I also had a dictionary with the config values, and a dictionary with the command line ones, then it would be quite easy to merge them, and compare them.

Get Command Line Options Dictionary

First let's make a dictionary with the command line values. This can be made with a simple:

def parse_args():
    return Options(sys.argv).get_options().__dict__

However, there are two things to remember:

- There is the command --create-config which should be supported, and this is the best place to do it.
- The arguments returned by __dict__ will have underscores in the names, instead of dashes.

So let's add creation of the new config file:

def parse_args():
    """ Parses the command line arguments, and returns
        a dictionary with all of them.

        The arguments have dashes in the names, but they are
        stored in fields with underscores.

        :return: arguments
        :rtype: dictionary
    """
    options = Options(sys.argv).get_options()
    result = options.__dict__
    logger.debug("COMMAND LINE OPTIONS: {}".format(result))

    if options.create_config:
        logger.info("Creating configuration file at: {}".format(options.create_config))
        with open(options.create_config, "w") as c:
            c.write("[{}]\n".format("example"))
            for key in sorted(DEFAULT_VALUES.keys()):
                value = DEFAULT_VALUES[key]
                c.write("{}={}\n".format(key, value or ""))
        exit(0)

    return result

The above function first gets the options from an Options class object, then converts them to a dictionary. If the option create_config is set, it creates the config file. If not, the function returns the dictionary with the values.

Get Config File Dictionary

The config file converted to a dictionary is also quite simple. However, what we get is a dictionary with the keys exactly as they are written in the config file. These will contain dashes, like repo-branch, but in the other dictionaries we have underscores, like repo_branch, so I will also convert all the keys to use underscores instead of dashes.

CONFIG_SECTION_NAME = "example"

def read_config(fname, section_name=CONFIG_SECTION_NAME):
    """ Reads a configuration file.

        The field names in the config file contain dashes, while in the
        args parser and the default values we have underscores, so
        additionally I will convert the dashes to underscores here.

        :param fname: name of the config file
        :return: dictionary with the config file content
        :rtype: dictionary
    """
    config = ConfigParser.RawConfigParser()
    config.read(fname)
    result = {key.replace('-', '_'): val for key, val in config.items(section_name)}
    logger.info("Read config file {}".format(fname))
    logger.debug("CONFIG FILE OPTIONS: {}".format(result))
    return result

And yes, I'm using a dictionary comprehension there.

Merging Time

Now I have three dictionaries with configuration options:

- The DEFAULT_VALUES.
- The config values, returned by the read_config function.
- The command line values, returned by the parse_argsfunction. And I need to merge them. Merging cannot be done automatically, as I need to: - Get the DEFAULT_VALUES. - Overwrite or add values read from the config file. - Overwrite or add values from command line, but only if the values are not None, which is a default value when an argument it not set. - At the end I want to return an object. So I can call the option with settings.branch_nameinstead of the settings['branch_name']. For merging I created this generic function, it can merge the first with the second dictionary, and can use the default values for the initial dictionary. At the end it uses the namedtuple to get a nice object with fields' names taken from the keys, and filled with the merged dictionary values. def merge_options(first, second, default={}): """ This function merges the first argument dictionary with the second. The second overrides the first. Then it merges the default with the already merged dictionary. This is needed, because if the user will set an option `a` in the config file, and will not provide the value in the command line options configuration, then the command line default value will override the config one. With the three-dictionary solution, the algorithm is: * get the default values * update with the values from the config file * update with the command line options, but only for the values which are not None (all not set command line options will have None) As it is easier and nicer to use the code like: options.path then: options['path'] the merged dictionary is then converted into a namedtuple. 
    :param first: first dictionary with options
    :param second: second dictionary with options
    :return: object with both dictionaries merged
    :rtype: namedtuple
    """
    from collections import namedtuple
    options = dict(default)  # copy, so the caller's dictionary is not mutated
    options.update(first)
    options.update({key: val for key, val in second.items() if val is not None})
    logger.debug("MERGED OPTIONS: {}".format(options))
    return namedtuple('OptionsDict', options.keys())(**options)

Dictionary Difference

The last utility function I need is something to compare dictionaries. I think it is a great idea to inform the user that he has a strange option name in the config file. Let's assume that:

- The main list of options is the argparse option list.
- The config file can contain fewer options, but cannot contain options which are not in the argparse list.
- There are some options which can appear on the command line but not in the config file, like --create-config.

The main idea behind the function is to convert the keys of the dictionaries to sets, and then take the difference of the sets. This must be done for the setting names in both directions:

- config.keys - commandline.keys: if the result is not an empty set, then it is an error.
- commandline.keys - config.keys: if the result is not an empty set, then we should just show some information about it.

The function below takes two arguments, first and second, and returns a tuple like (first - second, second - first). There is also a third argument: a list of keys which should be ignored, like create_config.

def dict_difference(first, second, omit_keys=[]):
    """
    Calculates the difference between the keys of the two dictionaries,
    and returns a tuple with the differences.
    :param first: the first dictionary to compare
    :param second: the second dictionary to compare
    :param omit_keys: the keys which should be omitted, as for example
        we know it's fine that one dictionary has such a key and the
        other doesn't
    :return: the keys which are different between the two dictionaries
    :rtype: tuple (first - second, second - first)
    """
    keys_first = set(first.keys())
    keys_second = set(second.keys())
    keys_f_s = keys_first - keys_second - set(omit_keys)
    keys_s_f = keys_second - keys_first - set(omit_keys)
    return (keys_f_s, keys_s_f)

Build The Options

And now the end: the main function for building the options, which uses all the above code. This function:

- Gets a dictionary with the command line options from the parse_args function.
- Finds the path to the config file (from the command line, or from the default value).
- Reads the dictionary with the config file options from the read_config function.
- Calculates the differences between the dictionaries using the dict_difference function.
- Prints information about the options which can be set in the config file but are not currently set. These options are in the Options class, but not in the config file.
- Prints information about the options which are in the config file but shouldn't be there, because they are not declared in the argparse options list in the Options class.
- If there are any options which cannot be in the config file, the script exits with an error code.
- Then it merges all three dictionaries using the merge_options function, and returns the named tuple.

def build_options():
    """
    Builds an object with the merged options from the command line
    arguments and the config file.

    If there is an option on the command line which doesn't exist in
    the config file, then the command line default value will be used.
    That's fine; the script will just print an info message about it.

    If there is an option in the config file which doesn't exist in the
    command line options, then it looks like an error.
    This time the script will show it as an error message, and will
    exit. If the same option is set both on the command line and in the
    config file, then the command line one overrides the config one.
    """
    options = parse_args()
    config = read_config(options['config'] or DEFAULT_VALUES['config'])
    (f, s) = dict_difference(options, config, COMMAND_LINE_ONLY_ARGS)
    if f:
        for o in f:
            logger.info("There is an option which is missing in the config file; "
                        "that's fine, I will use the value: {}".format(DEFAULT_VALUES[o]))
    if s:
        logger.error("There are options which are in the config file, but are not supported:")
        for o in s:
            logger.error(o)
        exit(2)
    merged_options = merge_options(config, options, DEFAULT_VALUES)
    return merged_options

Other Changes

There are some additional changes. I had to add a list of the command line arguments which are fine to omit in the config file:

COMMAND_LINE_ONLY_ARGS = ["create_config"]

The UpgradeService class is much simpler now:

class UpgradeService():

    def __init__(self, options):
        if not options:
            exit(1)
        self.options = options

    def run(self):
        pass

The runner part also changed a little bit:

if __name__ == "__main__":
    options = build_options()
    upgrade_service = UpgradeService(options)
    print "repo-branch value to be used is: {}".format(upgrade_service.options.repo_branch)
    upgrade_service.run()

The only major difference between the two implementations is that in the first, the options could be accessed as upgrade_service.repo_branch, while in the second they need to be accessed as upgrade_service.options.repo_branch.

Full Code

The code for the two implementations can be found on GitHub.
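To see the merge order end to end, here is a self-contained sketch of the merging logic with small hypothetical dictionaries (the values in DEFAULT_VALUES, config_file and command_line below are made up for illustration; they are not from the article's full code):

```python
from collections import namedtuple

def merge_options(first, second, default={}):
    # copy, so the caller's default dictionary is not mutated
    options = dict(default)
    options.update(first)
    # command line values of None mean "not set" and must not override
    options.update({key: val for key, val in second.items() if val is not None})
    return namedtuple('OptionsDict', options.keys())(**options)

DEFAULT_VALUES = {"repo_branch": "master", "config": "service.conf"}
config_file = {"repo_branch": "develop"}                      # from the config file
command_line = {"repo_branch": None, "config": "other.conf"}  # None = not set

opts = merge_options(config_file, command_line, DEFAULT_VALUES)
print(opts.repo_branch)  # develop: the config file overrides the default
print(opts.config)       # other.conf: the command line overrides everything
```

The filtering of None values is what lets a config-file setting survive an unset command line flag.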
http://blog.endpoint.com/2015/04/manage-python-script-options.html
In this article, I am going to discuss how to use Swagger in a Web API application to document and test RESTful Web API services.

What is Swagger?

Swagger is a language-agnostic specification for describing REST APIs. The Swagger project was donated to the OpenAPI Initiative, where it's now referred to as OpenAPI. Both names are used interchangeably; however, the OpenAPI Specification is an important part of the Swagger flow. By default, a document named swagger.json is generated by the Swagger tool based on our API. It describes the capabilities of our API and how to access it via HTTP.

Steps for implementing Swagger

Step 1 – Create a Web API application using Visual Studio.

Step 2 – Install Swashbuckle from the NuGet Package Manager.

Step 3 – Modify the SwaggerConfig class which is present in the App_Start folder:

public class SwaggerConfig
{
    public static void Register()
    {
        GlobalConfiguration.Configuration
            .EnableSwagger(c => c.SingleApiVersion("v1", "SwaggerImplementation"))
            .EnableSwaggerUi();
    }
}

Step 4 – Run the project and go to the URL: [PORT_NUM]/swagger

How to enable Swagger to use XML comments in an ASP.NET Web API application?

First enable the XML documentation file in the project's build properties so that it is written to bin\SwaggerImplementation.XML, then point Swagger at that file in the SwaggerConfig:

GlobalConfiguration.Configuration
    .EnableSwagger(c =>
    {
        c.SingleApiVersion("v1", "SwaggerImplementation");
        c.IncludeXmlComments(string.Format(@"{0}\bin\SwaggerImplementation.XML",
            System.AppDomain.CurrentDomain.BaseDirectory));
    })
    .EnableSwaggerUi();

Let's add some XML documentation to our API methods, as shown below. Here we are adding an XML document comment to the Get method. Modify the Get method as shown below.

namespace SwaggerImplementation.Controllers
{
    public class ValuesController : ApiController
    {
        /// <summary>
        /// Get All the Values
        /// </summary>
        /// <remarks>
        /// Get All the String Values
        /// </remarks>
        /// <returns></returns>
        public IEnumerable<string> Get()
        {
            return new string[] { "value1", "value2" };
        }
    }
}

Now run the application and check the result.
https://blog.codehunger.in/implementing-swagger-in-web-api/
In this article, I will try to provide some help in creating photo resizing software using C# and its basic namespaces. The software will try to fit all types and sizes of images into a 'web' version, giving the user the option to choose the source folder and destination path, and also the option to insert subtitles and a transparent banner. The .NET Framework is essential.

This software requires attention to multi-language support: we declare public string variables for all the label and button titles, and we change their contents when the user changes the software's language.

To resize the pictures, I got information from the MSDN forums and the C# documentation; using System.Drawing it's possible to create 'big thumbnails' from the raw images without losing the original resolution. To insert the subtitles and manage the transparent banners on the pictures, I used the article Creating a Watermarked Photograph with GDI+ for .NET as my source. The transparent banner is inserted using two color manipulations: first we replace the background color with a transparent one, and then we change its opacity by applying a 5x5 matrix that contains the coordinates for the RGBA space.

The pictures are resized using a scale scan, which means that different scales need different sizes (pictures can be 640x480 or 480x640): Is the file a valid picture file? What is the main scale to use?

Starting with ReziseImage(), which basically checks the size, container and scale, we set the new size and put a 'new-picture-size' image into memory. We will use this new Image variable to add the banner or text (if the user wants). This function also calls, at its end, InsertBannerAndText(), which manages the image attributes to add (or not add) the banner and text at the bottom of the file. We also check the size of the picture to know which font size we will use.
A good thing to pay attention to is that when we treat these images as Image objects, we also have to copy the camera details manually. This means that, if you don't want to lose the EXIF data, you need code like the following in InsertBannerAndText() to copy all the picture information, such as camera type, picture date, camera functions used, etc.:

foreach (PropertyItem e in imagem.PropertyItems)
{
    imgPhoto.SetPropertyItem(e);
}

For more information, read this.

Thanks to Matthis Kiechle for the German translation, software tests and bug hunting.

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).
http://www.codeproject.com/Articles/23700/C-Batch-Photo-Resize?msg=3317246
A long time ago I wrote an article on using double for money. However, it is still a common fear for many developers, even though the solution is fairly simple.

The problem with using double for money

double has two types of errors. It has representation error, i.e. it cannot represent all possible decimal values exactly. Even 0.1 is not exactly this value. It also has rounding error from calculations, i.e. as you perform calculations, the error increases.

double[] ds = { 0.1, 0.2, -0.3, 0.1 + 0.2 - 0.3};
for (double d : ds) {
    System.out.println(d + " => " + new BigDecimal(d));
}

prints

0.1 => 0.1000000000000000055511151231257827021181583404541015625
0.2 => 0.200000000000000011102230246251565404236316680908203125
-0.3 => -0.299999999999999988897769753748434595763683319091796875
5.551115123125783E-17 => 5.5511151231257827021181583404541015625E-17

You can see that the representations of 0.1 and 0.2 are slightly higher than those values, and -0.3 is also slightly higher. When you print them, you get the nicer 0.1 instead of the actual value represented, 0.1000000000000000055511151231257827021181583404541015625. However, when you add these values together, you get a value which is slightly higher than 0.

The important thing to remember is that these errors are not random. They are manageable and bounded.

Correcting for rounding error

Like many data types, such as dates, a double has an internal representation for a value and a way of representing that value as a string. You need to control how the value is represented as a string. This can come as a surprise, because Java does a small amount of rounding for representation error that is not obvious; once you have rounding error from operations as well, it can come as a shock. A common reaction is to assume there is nothing you can do about it, that the error is uncontrollable, unknowable and dangerous, and to abandon double and use BigDecimal. However, the error is bounded by the IEEE-754 standard and accumulates slowly.
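Python's float is the same IEEE-754 double, so the representation error and its bounded size can be reproduced in a few lines of Python (a sketch of my own, not from the original article):

```python
from decimal import Decimal

# The closest double to 0.1 is slightly above 0.1.
print(Decimal(0.1))

# Adding 0.1 ten times accumulates a tiny, bounded error.
total = sum([0.1] * 10)
print(total == 1.0)              # False
print(abs(total - 1.0) < 1e-9)   # True: the error is tiny and correctable
```

The error after ten additions is on the order of 1e-16, far below any sensible rounding threshold for money.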
Round the result

Just as you need a TimeZone and Locale for dates, you need to determine the precision of the result before converting it to a String. To resolve this issue, you need to apply appropriate rounding. With money this is easy, as you know how many decimal places are appropriate, and unless you have $70 trillion you won't get a rounding error large enough that you cannot correct it.

// uses round half up
public static double roundToTwoPlaces(double d) {
    return Math.round(d * 100) / 100.0;
}

// OR
public static double roundToTwoPlaces(double d) {
    return ((long) (d < 0 ? d * 100 - 0.5 : d * 100 + 0.5)) / 100.0;
}

If you add this to the result, there is still a small representation error, however it is not so large that Double.toString(d) cannot correct for it.

double[] ds = { 0.1, 0.2, -0.3, 0.1 + 0.2 - 0.3};
for (double d : ds) {
    System.out.println(d + " to two places " + roundToTwoPlaces(d)
        + " => " + new BigDecimal(roundToTwoPlaces(d)));
}

prints

0.1 to two places 0.1 => 0.1000000000000000055511151231257827021181583404541015625
0.2 to two places 0.2 => 0.200000000000000011102230246251565404236316680908203125
-0.3 to two places -0.3 => -0.299999999999999988897769753748434595763683319091796875
5.551115123125783E-17 to two places 0.0 => 0

Conclusion

If you have a project standard which says you should use BigDecimal or double, that is what you should follow. However, there is no good technical reason to fear using double for money.

Reference: "Double your money again" from our JCG partner Peter Lawrey at Vanilla Java.
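The two-decimal-place correction can also be checked in Python, whose float is the same IEEE-754 double. This sketch mirrors the second Java variant above (the cast to int truncates toward zero, like Java's (long) cast); the function name is my own:

```python
def round_to_two_places(d):
    # mirror the Java (long) cast variant: shift the value by half a cent
    # away from zero, then truncate toward zero and rescale
    scaled = d * 100 - 0.5 if d < 0 else d * 100 + 0.5
    return int(scaled) / 100.0

values = [0.1, 0.2, -0.3, 0.1 + 0.2 - 0.3]
print([round_to_two_places(v) for v in values])  # [0.1, 0.2, -0.3, 0.0]
```

The accumulated error of 5.55e-17 from 0.1 + 0.2 - 0.3 disappears once the result is rounded to cents.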
http://www.javacodegeeks.com/2011/08/double-your-money-again.html
vtkXMLTableWriter: Write VTK XML Table files.

#include <vtkXMLTableWriter.h>

Detailed Description

Write VTK XML Table files. vtkXMLTableWriter provides functionality for writing a vtkTable as an XML .vtt file.

Definition at line 31 of file vtkXMLTableWriter.h.

Member Documentation

- Definition at line 34 of file vtkXMLTableWriter.h. Reimplemented from vtkXMLWriter.
- PrintSelf: Methods invoked by print to print information about the object including superclasses. Typically not called by the user (use Print() instead) but used in the hierarchical print process to combine the output of several classes. Reimplemented from vtkAlgorithm.
- Get/Set the number of pieces used to stream the table through the pipeline while writing to the file.
- Get/Set the piece to write to the file. If this is negative or equal to the NumberOfPieces, all pieces will be written.
- ProcessRequest: See vtkAlgorithm for a description of what these do. Reimplemented from vtkAlgorithm.
- FillInputPortInformation: Fill the input port information objects for this algorithm. This is invoked by the first call to GetInputPortInformation for each port so subclasses can specify what they can handle. Reimplemented from vtkAlgorithm. Implements vtkXMLWriter.
- GetDefaultFileExtension: Get the default file extension for files written by this writer. Implements vtkXMLWriter.

Protected Attributes

- Number of pieces used for streaming. Definition at line 104 of file vtkXMLTableWriter.h.
- Which piece to write, if not all. Definition at line 109 of file vtkXMLTableWriter.h.
- Positions of attributes for each piece. Definitions at lines 114 and 115 of file vtkXMLTableWriter.h.
- For TimeStep support. Definitions at lines 120 and 122 of file vtkXMLTableWriter.h.
https://vtk.org/doc/nightly/html/classvtkXMLTableWriter.html
The setup() function accepts the following arguments:

- width: either a size in pixels or a fraction of the screen. The default is 50% of the screen.
- height: either a size in pixels or a fraction of the screen. The default is 50% of the screen.
- startx: starting position in pixels from the left edge of the screen. None is the default value and centers the window horizontally on screen.
- starty: starting position in pixels from the top edge of the screen. None is the default value and centers the window vertically on screen.

Examples:

# Uses default geometry: 50% x 50% of screen, centered.
setup()

# Sets window to 200x200 pixels, in upper left of screen
setup(width=200, height=200, startx=0, starty=0)

# Sets window to 75% of screen by 50% of screen, and centers it.
setup(width=.75, height=0.5, startx=None, starty=None)

The drawing speed can be one of 'fastest' (no delay), 'fast' (delay 5ms), 'normal' (delay 10ms), 'slow' (delay 15ms), and 'slowest' (delay 20ms). New in version 2.5.

Call fill(1) before drawing a path you want to fill, and call fill(0) when you finish drawing the path. New in version 2.5.

Coordinates are given as an (x, y) pair. New in version 2.3.

This module also does from math import *, so see the documentation for the math module for additional constants and functions useful for turtle graphics. For examples, see the code of the demo() function.

This module defines the following classes: Pen and Turtle; Turtle is an empty subclass of Pen.
http://docs.python.org/release/2.5.1/lib/module-turtle.html
marble

#include <TextureTile.h>

Detailed Description

A class that resembles an image tile (extends Tile). This tile provides a bitmap image.

Definition at line 57 of file TextureTile.h.

Constructor & Destructor Documentation

- Constructor: definition at line 28 of file TextureTile.cpp.
- Destructor: definition at line 36 of file TextureTile.cpp.

Member Function Documentation

- Returns the kind of blending used for the texture tile. If no blending is set the pointer returned will be zero. Definition at line 92 of file TextureTile.h.
- Definition at line 97 of file TextureTile.h.
- Returns the QImage that describes the look of the Tile. Definition at line 87 of file TextureTile.h.

The documentation for this class was generated from the following files.

Documentation copyright © 1996-2019 The KDE developers. Generated on Sun Oct 13 2019 02:56:00 by doxygen 1.8.7, written by Dimitri van Heesch, © 1997-2006. KDE's Doxygen guidelines are available online.
https://api.kde.org/4.x-api/kdeedu-apidocs/marble/html/classMarble_1_1TextureTile.html
Whoosh is a library of classes and functions for indexing text and then searching the index. It allows you to develop custom search engines for your content. For example, if you were creating blogging software, you could use Whoosh to add a search function that allows users to search blog entries.

>>> from whoosh.index import create_in
>>> from whoosh.fields import *
>>> schema = Schema(title=TEXT(stored=True), path=ID(stored=True), content=TEXT)
>>> ix = create_in("indexdir", schema)
>>> writer = ix.writer()
>>> writer.add_document(title=u"First document", path=u"/a",
...                     content=u"This is the first document we've added!")
>>> writer.add_document(title=u"Second document", path=u"/b",
...                     content=u"The second one is even more interesting!")
>>> writer.commit()
>>> from whoosh.qparser import QueryParser
>>> with ix.searcher() as searcher:
...     query = QueryParser("content", ix.schema).parse("first")
...     results = searcher.search(query)
...     results[0]
...
{"title": u"First document", "path": u"/a"}

To begin using Whoosh, you need an index object. The first time you create an index, you must define the index's schema. The schema lists the fields in the index. A field is a piece of information for each document in the index, such as its title or text content. A field can be indexed (meaning it can be searched) and/or stored (meaning the value that gets indexed is returned with the results; this is useful for fields such as the title). This schema has two fields, "title" and "content":

from whoosh.fields import Schema, TEXT

schema = Schema(title=TEXT, content=TEXT)

You only need to create the schema once, when you create the index. The schema is pickled and stored with the index.

When you create the Schema object, you use keyword arguments to map field names to field types. The list of fields and their types defines what you are indexing and what's searchable. Whoosh comes with some very useful predefined field types, and you can easily create your own.
(As a shortcut, if you don't need to pass any arguments to the field type, you can just give the class name and Whoosh will instantiate the object for you.)

from whoosh.fields import Schema, STORED, ID, KEYWORD, TEXT

schema = Schema(title=TEXT(stored=True), content=TEXT,
                path=ID(stored=True), tags=KEYWORD, icon=STORED)

See Designing a schema for more information.

Once you have the schema, you can create an index using the create_in function:

import os.path
from whoosh.index import create_in

if not os.path.exists("index"):
    os.mkdir("index")
ix = create_in("index", schema)

(At a low level, this creates a Storage object to contain the index. A Storage object represents the medium in which the index will be stored. Usually this will be FileStorage, which stores the index as a set of files in a directory.)

After you've created an index, you can open it using the open_dir convenience function:

from whoosh.index import open_dir

ix = open_dir("index")

OK, so we've got an Index object; now we can start adding documents. The writer() method of the Index object returns an IndexWriter object that lets you add documents to the index.
The IndexWriter's add_document(**kwargs) method accepts keyword arguments where the field name is mapped to a value:

writer = ix.writer()
writer.add_document(title=u"My document", content=u"This is my document!",
                    path=u"/a", tags=u"first short", icon=u"/icons/star.png")
writer.add_document(title=u"Second try", content=u"This is the second example.",
                    path=u"/b", tags=u"second short", icon=u"/icons/sheep.png")
writer.add_document(title=u"Third time's the charm", content=u"Examples are many.",
                    path=u"/c", tags=u"short", icon=u"/icons/book.png")
writer.commit()

Two important notes:

If you have a text field that is both indexed and stored, you can index a unicode value but store a different object if necessary (it's usually not, but sometimes this is really useful) using this trick:

writer.add_document(title=u"Title to be indexed", _stored_title=u"Stored title")

Calling commit() on the IndexWriter saves the added documents to the index:

writer.commit()

See How to index documents for more information.

Once your documents are committed to the index, you can search for them. To begin searching the index, we'll need a Searcher object:

searcher = ix.searcher()

The Searcher's search() method takes a Query object. You can construct query objects directly or use a query parser to parse a query string. For example, this query would match documents that contain both "apple" and "bear" in the "content" field:

# Construct query objects directly
from whoosh.query import *
myquery = And([Term("content", u"apple"), Term("content", u"bear")])

To parse a query string, you can use the default query parser in the qparser module. The first argument to the QueryParser constructor is the default field to search. This is usually the "body text" field.
The second optional argument is a schema to use to understand how to parse the fields:

# Parse a query string
from whoosh.qparser import QueryParser
parser = QueryParser("content", ix.schema)
myquery = parser.parse(querystring)

Once you have a Searcher and a query object, you can use the Searcher's search() method to run the query and get a Results object:

>>> results = searcher.search(myquery)
>>> print(len(results))
1
>>> print(results[0])
{"title": "Second try", "path": "/b", "icon": "/icons/sheep.png"}

The default QueryParser implements a query language very similar to Lucene's. It lets you connect terms with AND or OR, eliminate terms with NOT, group terms together into clauses with parentheses, do range, prefix, and wildcard queries, and specify different fields to search. By default it joins clauses together with AND (so by default, all terms you specify must be in the document for the document to match):

>>> print(parser.parse(u"render shade animate"))
And([Term("content", "render"), Term("content", "shade"), Term("content", "animate")])

>>> print(parser.parse(u"render OR (title:shade keyword:animate)"))
Or([Term("content", "render"), And([Term("title", "shade"), Term("keyword", "animate")])])

>>> print(parser.parse(u"rend*"))
Prefix("content", "rend")

Whoosh includes extra features for dealing with search results. See How to search for more information.
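The examples above require Whoosh to be installed. To build intuition for what an index stores, here is a toy inverted index in pure Python (a deliberately simplified sketch of the general idea, not Whoosh's actual implementation: no analyzers, scoring, or on-disk storage; the documents are made up):

```python
from collections import defaultdict

docs = {
    "/a": "this is the first document we have added",
    "/b": "the second one is even more interesting",
}

# term -> set of document paths containing that term
index = defaultdict(set)
for path, text in docs.items():
    for term in text.lower().split():
        index[term].add(path)

def search_and(*terms):
    # AND semantics: every term must appear, like the default QueryParser
    sets = [index[t] for t in terms]
    return set.intersection(*sets) if sets else set()

print(search_and("first"))          # {'/a'}
print(search_and("the", "second"))  # {'/b'}
```

A real engine like Whoosh adds tokenization rules, stored fields, scoring, and persistence on top of this basic term-to-documents mapping.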
http://pythonhosted.org/Whoosh/quickstart.html
You can now write ModSecurity rules as Lua scripts! Lua can also be used as an @exec target as well as with @inspectFile. This feature should be considered experimental and the interface to it may change as we get more feedback. Go to for more information.

ModSecurity:

SecRuleScript /path/to/script.lua [ACTIONS]

Lua Script:

function main()
    -- Retrieve script parameters.
    local d = m.getvars("ARGS", { "lowercase", "htmlEntityDecode" } );

    -- Loop through the parameters.
    for i = 1, #d do
        -- Examine parameter value.
        if (string.find(d[i].value, "<script")) then
            -- Always specify the name of the variable where the
            -- problem is located in the error message.
            return ("Suspected XSS in variable " .. d[i].name .. ".");
        end
    end

    -- Nothing wrong found.
    return nil;
end

You can now look up and act on geographical information from an IP address. The GEO collection will extract the Country, Region, City, Postal Code and Coordinates, as well as DMA and Area codes in the US.

SecRule REMOTE_ADDR "@geoLookup" "chain,drop,msg:'Non-UK IP address'"
SecRule GEO:COUNTRY_CODE "!@streq UK"

New actions allow for easier logging of raw data (logdata), easier rule flow by skipping after a given rule/marker instead of by a rule count (skipAfter and SecMarker), and more flexible rule exceptions based on any ModSecurity variable (ctl:ruleRemoveById). Additionally, the "allow" action has been made more flexible by allowing you to specify allowing the request for only the current phase (the old default), for only the request portion, or for both the request and response portions (the new default).

Enhancements

Along with all the new ModSecurity 2.5 features, many existing features have been enhanced.

Processing Partial Bodies

In previous releases, ModSecurity would deny a request if the response body was over the limit. It is now configurable to allow processing of the partial body (SecResponseBodyLimitAction).
Additionally, request body sizes can now be controlled without including the size of uploaded files (SecRequestBodyNoFilesLimit).

Better support for 64-bit OSes

ModSecurity now compiles cleanly on Solaris 10 and other 64-bit operating systems. As Apache (and thus ModSecurity) runs on such a wide variety of OSes, I am asking that you help provide feedback on any portability issues that may arise.

Logging

There have been numerous enhancements to both auditing and debug logging.

Matched Rules Audited

A new audit log part, K, is now available. Every rule that matched will be logged to this section of the audit log (one per line) if enabled. This enhances auditing, helps determine why an alert was generated, and helps track down any false positives that may occur.

To help support migration from ModSecurity 2.1 to 2.5, you can now use the Apache <IfDefine> directive to exclude 2.5-specific rules and directives.

<IfDefine MODSEC_2.5>
    SecAuditLogParts ABCDEFGHIKZ
</IfDefine>
<IfDefine !MODSEC_2.5>
    SecAuditLogParts ABCDEFGHIZ
</IfDefine>

Feedback

As always, send questions/comments to the community support mailing list. You can download the latest releases, view the documentation and subscribe to the mailing list at.
https://www.trustwave.com/Resources/SpiderLabs-Blog/Initial-Release-Candidate-for-ModSecurity-2-5-0-(2-5-0-rc1)/?page=1&year=2007&month=12&tag=ModSecurity&LangType=1033
Web services are application components which communicate system to system using open protocols such as HTTP. Web services work with the help of two technologies. One is XML, which provides a way to communicate between different platforms and languages while still expressing complex functions and objects; the other is HTTP (Hyper Text Transfer Protocol), the most widely used internet protocol.

In this demonstration I am showing you a simple web application which uses a web service for inserting a student record into a database. For this purpose I have created a web service named Service1.asmx and a web application named "Web Service used".

For creating the web service follow these steps:

- Open Visual Studio 2010.
- Go to the File menu, then the New Web Site menu option, and click it.
- The New Web Site dialog box opens. Go to the .NET Framework template section and select .NET Framework version 3.5.
- From the template section select ASP.NET Web Service.
- Click the OK button.

After following these steps a file named Service1.asmx.cs is opened. I am going to create a web service which inserts a record into a database, so first I use the namespace System.Data.SqlClient. This namespace contains all the classes and interfaces you need to work with the database.

// This method returns a SqlConnection object.
private SqlConnection createSqlConnection()
{
    SqlConnection con = new SqlConnection("connection string of your datasource");
    return con;
}

// This method accepts a SqlConnection object as a parameter and closes that connection.
private void closeConnection(SqlConnection con)
{
    try
    {
        if (con != null)
        {
            con.Close();
        }
    }
    catch (Exception e) { }
}

// The [WebMethod] attribute tells IIS that this method can be called by other web applications over the internet.
// If you want to make any method invokable by other web applications over the
// internet, write this attribute before the method, like this:
[WebMethod]
public int insertRecord(string sid, string sname, int sage, string scity)
{
    int rows = 0;
    SqlConnection tempConnection = createSqlConnection();
    tempConnection.Open();
    string query = "insert into Student values('" + sid + "','" + sname + "'," + sage + ",'" + scity + "')";
    SqlCommand cmd = new SqlCommand(query, tempConnection);
    rows = cmd.ExecuteNonQuery();
    closeConnection(tempConnection);
    return rows;
}

Now after performing these steps, build the web service so that its DLL is created. After building, execute the web service and copy the address displayed in the address bar of the browser. When you execute the web service, it displays all the methods that are marked with [WebMethod].

After creating the web service, create a web application which uses it.

<body>
    <form id="form1" runat="server">
    <div>
        Student Id : <asp:TextBox ID="txtStudentId" runat="server" /><br/>
        Student Name : <asp:TextBox ID="txtStudentName" runat="server" /><br/>
        Student Age : <asp:TextBox ID="txtStudentAge" runat="server" /><br/>
        Student City : <asp:TextBox ID="txtStudentCity" runat="server" /><br/>
        <asp:Button ID="btnInsert" runat="server" Text="Insert Record" OnClick="btnInsert_Click" /><br/><br/>
        <asp:Button runat="server" /><br/><br/>
    </div>
    </form>
</body>

After designing the web application that uses the web service, add a reference to the web service in your web application by following these steps:

- Open the Solution Explorer of your web application.
- Right-click in the Solution Explorer and then click the Add Web Reference option.
- A new page opens in which you add the web reference. The screenshot of that page is as follows.
Notice that after performing these steps a new folder named App_WebReferences appears in Solution Explorer; it contains the web references. Finally, double-click the Insert Record button and add the following code:

    protected void btnInsert_Click(object sender, EventArgs e)
    {
        // Create an object of the service class. Here InsertRecord is the name
        // of the web reference and Service1 is the name of the service class.
        InsertRecord.Service1 ins = new InsertRecord.Service1();
        // Call the insertRecord web method.
        int rows = ins.insertRecord(txtStudentId.Text, txtStudentName.Text,
            Convert.ToInt32(txtStudentAge.Text), txtStudentCity.Text);
        if (rows > 0)
        {
            // Show a success message.
            Response.Write("<script>alert('Record Inserted Successfully.')</script>");
            // Clear all the text boxes.
            txtStudentId.Text = "";
            txtStudentName.Text = "";
            txtStudentAge.Text = "";
            txtStudentCity.Text = "";
        }
    }
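Building an insert statement by concatenating user input into the SQL string is vulnerable to SQL injection; the fix is to pass the values as parameters and let the driver escape them. To make that idea concrete and runnable outside the article's C#/SQL Server stack, here is a minimal hypothetical sketch using Python's sqlite3 module (only the Student table name comes from the article; everything else is illustrative):

```python
import sqlite3

def insert_record(con, sid, sname, sage, scity):
    """Insert a student row using placeholders instead of string concatenation."""
    cur = con.execute(
        "INSERT INTO Student VALUES (?, ?, ?, ?)",  # the driver escapes each value
        (sid, sname, sage, scity),
    )
    con.commit()
    return cur.rowcount  # rows affected, analogous to ExecuteNonQuery()

# In-memory database purely for demonstration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Student (id TEXT, name TEXT, age INTEGER, city TEXT)")
rows = insert_record(con, "S1", "O'Brien", 21, "Delhi")  # the quote is handled safely
```

With concatenation, a name like O'Brien would have broken the statement (or worse); with placeholders it is stored verbatim.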
https://www.mindstick.com/Beginner/431/inserting-records-in-table-by-using-web-services
-----Original Message-----
From: Ian Bicking [mailto:ianb@...]
Sent: Monday, January 19, 2004 10:38 PM
To: sqlobject-discuss@...
Subject: [SQLObject] API Redesign for 0.6

SQLObject 0.6
=============

*A tentative plan, 20 Jan 2004*

Introduction
------------

During vacation I thought about some changes that I might like to make to SQLObject. Several of these change the API, but not too drastically, and I think they change the API for the better. And we're not at 1.0 yet, changes are still allowed! Here's my ideas...

Editing Context
---------------

Taken from Modeling, the "editing context" is essentially a transaction, though it also encompasses some other features. Typically it is used to distinguish between separate contexts in a multi-threaded program. This is intended to separate several distinct concepts:

* The database backend (MySQL, PostgreSQL, etc), coupled with the driver (MySQLdb, psycopg, etc). (Should the driver be part of the connection parameters?)

* The connection parameters. Typically these are the server host, username, and password, but they could also be a filename or other path. Perhaps this could be represented with a URI, a la PEAK, but I also dislike taking structured data and destructuring it (i.e., packing it into a string). OTOH, URLs are structured, even if they require some parsing. Serialization of URLs is free and highly transparent. Python syntax is well structured and *programmatically* considerably more transparent (in a robust fashion), but also programmatically fairly read-only (because it is embedded in the structure of Python source code). We can also have both.

* The database transactional context.

* The application transactional context (preferably these two would be seamless, but they still represent somewhat distinct entities, and a portability layer might be nice).
The application's transactional context may include other transactions -- e.g., multiple databases, a ZODB transaction, etc.

* The cache policy. There are many different kinds of caches potentially involved, including write batching, per-object and per-table caches, connection pooling, and so on.

* Classes, which on the database side are typically tables. (This proposal does not attempt to de-couple classes and tables)

Example::

    from SQLObject import EditingContext
    ec = EditingContext()
    # every editing context automatically picks up all the SQLObject
    # classes, all magic like.
    person = ec.Person.get(1)  # by ID
    ec2 = EditingContext()  # separate transaction
    person2 = ec2.Person.get(1)
    assert person is not person2
    assert person.id == person2.id
    assert person.fname == 'Guy'
    person.fname = 'Gerald'
    assert person2.fname == 'Guy'
    ec.commit()  # SQL is not sent to server
    assert person2.fname == 'Guy'  # Doesn't see changes
    person2.fname = 'Norm'
    # raises exception if locking is turned on; overwrites if locking
    # is not turned on. (Locking enabled on a per-class level)

I'm not at all sure about that example. Mostly the confusing parts relate to locking and when the database lookup occurs (and how late a conflict exception may be raised).

Somewhere in here, process-level transactions might fit in. That is, even on a backend that doesn't support transactions, we can still delay SQL statements until a commit/rollback is performed. In turn, we can create temporary "memory" objects, which is any object which hasn't been committed to the database in any way. To do this we'll need sequences -- to preallocate IDs -- which MySQL and SQLite don't really provide :(

Nested transactions...? Maybe they'd fall out of this fairly easily, especially if we define a global context, with global caches etc.; then further levels of context will come for free. We still need to think about an auto-commit mode. Maybe the global context would be auto-commit.
Caching
-------

Really doing transactions right means making caching significantly more complex. If the cache is purely transaction-specific, then we'll really be limiting the effectiveness of the cache. With that in mind, a copy-on-write style of object is really called for -- when you fetch an object in a transaction, you can use the globally cached instance until you write to the object. Really this isn't copy-on-write, it's more like a proxy object. Until the object is changed, it can delegate all its columns to the global object for which it is a proxy. Of course, traversal via foreign keys or joins must also return proxied objects. As the object is changed -- perhaps on a column-by-column basis, or as a whole on the first change -- the object takes on the personality of a full SQLObject instance. When the transaction is committed, this transactional object copies itself to the global object, and becomes a full proxy.

These transactional caches themselves should be pooled -- so that when another transaction comes along you have a potentially useful set of proxy objects already created for you. This is a common use case for web applications, which have lots of short transactions, which are often very repetitive.

In addition to this, there should be more cache control. This means explicit ways to control things like:

1. Caching of instances:
   + Application/process-global definition.
   + Database-level definition.
   + Transaction/EditingContext-level definition.
   + Class-level definition.
2. Caching of columns:
   + Class-level.
3. Cache sweep frequency:
   + Application/process-global.
   + Database-level.
   + Class-level.
   + Doesn't need to be as complete as 1; maybe on the class level you could only indicate that a certain class should not be swept.
   + Sweep during a fetch (e.g., every 100 fetches), by time or fetch frequency, or sweep with an explicit call (e.g., to do sweeps in a separate thread).
4. Cache sweep policy:
   + Maximum age.
   + Least-recently-used (actually, least-recently-fetched).
   + Random (the current policy).
   + Multi-level (randomly move objects to a lower-priority cache, raise level when the object is fetched again).
   + Target cache size (keep trimming until the cache is small enough).
   + Simple policy (if enough objects qualify, cache can be of any size).
   + Percentage culling (e.g., kill 33% of objects for each sweep; this is the current policy).
5. Batching of updates (whether updates should immediately go to the database, or whether they would be batched until a commit or other signal).
6. Natural expiring of objects. Even if an object must persist because there are still references, we could expire it so that future accesses re-query the database, to avoid stale data.

Expose some methods of the cache, like getting all objects currently in memory. These would probably be exposed on a class level, e.g., all the Addresses currently in memory via ``Address.cache.current()`` or something. What about when there's a cached instance in the parent context, but not in the present transaction?

Columns as Descriptors
----------------------

Each column will become a descriptor. That is, ``Col`` and subclasses will return an object with ``__get__`` and ``__set__`` methods. The metaclass will not itself generate methods. A metaclass will still be used so that the descriptor can be tied to its name, e.g., so that with ``fname = StringCol()``, the resultant descriptor will know that it is bound to ``fname``. By using descriptors, introspection should become a bit easier -- or at least more uniform with respect to other new-style classes. Various class-wide indexes of columns will still be necessary, but these should be able to remain mostly private.
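The column-as-descriptor design sketched in this proposal can be illustrated in a few lines of modern Python. This is a hypothetical sketch, not SQLObject's actual code; note that today the ``__set_name__`` hook gives a descriptor its attribute name without needing a metaclass at all:

```python
class Col:
    """Minimal column descriptor that learns the name it is bound to."""
    def __set_name__(self, owner, name):
        self.name = name                     # e.g. 'fname' for fname = Col()

    def __get__(self, obj, objtype=None):
        if obj is None:                      # accessed on the class: return the
            return self                      # descriptor itself, for introspection
        return obj.__dict__.get(self.name)

    def __set__(self, obj, value):
        obj.__dict__[self.name] = value      # a real ORM would also mark the row dirty

class Person:
    fname = Col()

p = Person()
p.fname = "Guy"
```

Introspection becomes uniform, as the proposal hopes: ``Person.fname`` returns the descriptor object, and ``Person.fname.name`` is ``'fname'``.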
To customize getters or setters (which you currently do by defining a ``_get_columnName`` or ``_set_columnName`` method), you will pass arguments to the ``Col`` object, like::

    def _get_name(self, dbGetter):
        return dbGetter().strip()
    name = StringCol(getter=_get_name)

This gets rid of ``_SO_get_columnName`` as well. We can transitionally add something to the metaclass to signal an error if a spurious ``_get_columnName`` method is sitting around.

Construction and Fetching
-------------------------

Currently you fetch an object with class instantiation, e.g., ``Address(1)``. This may or may not create a new instance, and does not create a table row. If you want to create a table row, you do something like ``Address.new(city='New York', ...)``. This is somewhat in contrast to normal Python, where class instantiation (calling a class) will create a new object, while objects are fetched otherwise (with no particular standard interface).

To make SQLObject classes more normal in this case, ``new`` will become ``__init__`` (more or less), and classes will have a ``get`` method that gets an already-existent row. E.g., ``Address.get(1)`` vs. ``Address(city='New York', ...)``. This is perhaps the most significant change in SQLObject usage. Because of the different signatures, if you forget to make a change someplace you will get an immediate exception, so updating code should not be too hard.

Extra Table Information
-----------------------

People have increasingly used SQLObject to create tables, and while it can make a significant number of schemas, there are several extensions of table generation that people occasionally want. Since these occur later in development, it would be convenient if SQLObject could grow as the complexity of the programs using it grows. Some of these extensions are:

* Table name (``_table``).
* Table type for MySQL (e.g., MyISAM vs. InnoDB).
* Multi-column unique constraints. (Other constraints?)
* Indexes. (Function or multi-column indexes?)
* Primary key type. (Primary key generation?)
* Primary key sequence names (for Postgres, Firebird, Oracle, etc).
* Multi-column primary keys.
* Naming scheme.
* Permissions.
* Locking (e.g., optimistic locking).
* Inheritance (see Daniel Savard's recent patch).
* Anything else?

Some of these may be globally defined, or defined for an entire database. For example, typically you'll want to use a common MySQL table type for your entire database, even though it's defined on a per-table basis. And while MySQL allows global permission declarations, Postgres does not and requires tedious repetitions of the permissions for each table -- so while it's applied on a per-table basis, it's likely that (at least to some degree) a per-database declaration is called for. Naming schemes are also usually database-wide.

As these accumulate -- and by partitioning this list differently, the list could be even longer -- it's messy to do these all as special class variables (``_idName``, etc). It also makes the class logic and its database implementation details difficult to distinguish. Some of these can be handled elegantly like ``id = StringCol()`` or ``id = ("fname", "lname")``. But the others perhaps should be put into a single instance variable, perhaps itself a class::

    class Address(SQLObject):
        class SQLMeta:
            mysqlType = 'InnoDB'
            naming = Underscore
            permission = {'bob': ['select', 'insert'],
                          'joe': ['select', 'insert', 'update'],
                          'public': ['select']}
        street = StringCol()
        ....

The metadata is found by its name (``SQLMeta``), and is simply a container. The class syntax is easier to write and read than a dictionary-like syntax. Or, it could be a proper class/instance and provide a partitioned way to handle introspection. E.g., ``Address.SQLMeta.permission.get('bob')`` or ``Address.SQLMeta.columns``.
In this case values that weren't overridden would be calculated from defaults (like the default naming scheme and so on). I'm not at all certain about how this should look, or if there are other things that should go into the class-meta-data object.

Joins, Foreign Keys
-------------------

First, the poorly-named ``MultipleJoin`` and ``RelatedJoin`` (which are rather ambiguous) will be renamed ``ManyToOneJoin`` and ``ManyToManyJoin``. ``OneToOneJoin`` will also be added, while ``ForeignKey`` remains the related column type. (Many2Many? Many2many? many2many?) ForeignKey will be driven by a special validator/converter. (But will this make ID access more difficult?)

Joins will return smart objects which can be iterated across. These smart objects will be related to ``SelectResults``, and allow the same features like ordering. In both cases, an option to retrieve IDs instead of objects will be allowed. These smarter objects will allow, in the case of ManyToManyJoin, ``Set``-like operations to relate (or unrelate) objects. For ManyToOneJoin the list/set operations are not really appropriate, because they would reassign the relation, not just add or remove relations.

It would be nice to make the Join protocol more explicit and public, so other kinds of joins (e.g., three-way) could be more accessible.

_______________________________________________
sqlobject-discuss mailing list
sqlobject-discuss@...

Brad Bollenbach wrote:
> [Gah, very annoying that I can't reply on list. I'll try to get in touch
> with somebody at SourceForge today to figure out what's going wrong here.]
>
> On Tuesday, 20 Jan 2004, at 12:23 Canada/Eastern, Ian Bicking wrote:
>> Sidnei da Silva wrote:
> [snip]
>>> * Enforcing constraints in python. Brad B.
was chatting to me on irc
>>> yesterday and we came to agree on an API. He's writing a proposal (with
>>> a patch) and is going to present it soon. Basically, when you create
>>> a column you would provide a callable object as a keyword 'constraint'
>>> parameter. This constraint would then be used to enforce some
>>> restrictions.
>>>
>>>     def foo_constraint(obj, name, value, values=None):
>>>         # name is the column name
>>>         # value is the value to be set for this column
>>>         # values is a dict of the values to be set for other columns,
>>>         # in the case you are creating an object or modifying more than
>>>         # one column at a time
>>>         # returns True or False
>>>
>>>     age = IntCol(constraint=foo_constraint)
>>>
>>>     class Col:
>>>         def __init__(self, name, constraint):
>>>             self.name = name
>>>             self.constraint = constraint
>>>         def __set__(self, obj, value):
>>>             if not self.constraint(obj, self.name, value):
>>>                 raise ValueError, value
>>>             # Set the value otherwise
>>
>> We already have Python constraints available through the
>> validator/converter interface, which I hope to fill out some more, and
>> provide some more documentation and examples.
>
> These constraints are only useful in trivial cases though. I have at
> least one specific case where I need to cross-reference column values in
> the object, which may currently be set, or about to be changed to a new
> value. So, there are other parameters that must be supplied to the
> callback.

Well, what we need is schema-level/instance-level validation. The validator interface allows for this, but SQLObject doesn't currently call a validator for the entire instance (so there would have to be some changes).
So, if DBI has the values new_value, target_object, name_of_column, and all_new_values, then the validator would look something like:

    class MyValidator(Validator):
        def validate(self, fields, state):
            # we don't know the new value or name_of_column, which doesn't
            # really apply in this case
            target_object = state.soObject
            all_new_values = fields

Through state.soObject you can check a single column for consistency with other columns, but in the case of a .set() call you won't see all of the new values; in that case you may want to have symmetric validators, so if A and B are dependent, then when A is changed it checks B, and when B is changed it checks A. Or use an instance validator, which should

So, somewhere in .set() (or probably a new method, called by both .set() and ._SO_setValue()) we'd check any instance validators, probably being more careful that they don't see the object while it's in the middle of having values set (i.e., convert and collect all the values, then set them all at once).

I want to use validators more heavily, and have them translate into database-side constraints as well. So, for instance, ForeignKey would become a validator/converter, and would also create the "REFERENCES ..." portion of the SQL. An example of an instance validator might be a multi-column unique constraint, which again could create the necessary SQL.

> Essentially, I need an interface like Perl's Class::DBI:
>
> S
>
> Class::DBI is excellent, but I have little knowledge of its internals,
> and so it may suffer from the same performance problems inherent in
> SQLObject, but it's definitely a project every SQLObject developer
> should be well aware of for a 0.6 redesign, because we might as well do
> what everybody else does and steal ideas and improve upon them.

DBI does seem like a good system.
Maybe because (besides being in Perl) it actually reminds me a lot of SQLObject ;) Though SQLObject actually seems to be significantly larger in scope than DBI, which doesn't seem to address transactions or caching in any way. This seems particularly significant with respect to transactions, as transactions are one of the things 0.6 tries to resolve more cleanly, and it's not a trivial addition.

> In other news, the patch is necessarily on hold until we resolve the
> database backend versions vs. SQLObject issue. I've got tons of tests
> that have errors in them. The next thing that should be checked into the
> repository is something that makes those tests pass, but that will
> depend on what the consensus is among the users about how to handle
> those versioning problems.

Ian Sparks wrote:
>>.

Well, right now you can look at _columns, which should have most of the stuff you'd want. Ideally the SQLMeta object (in this redesign) would have a clear set of methods for doing introspection. For this case, I think it's better for the object to introspect itself, and that you add a method to it like .fields() or something. Of course, you still need some form of introspection, even if the object is introspecting itself.

I think for joins, you are really thinking of ForeignKey (or maybe RelatedJoin, I suppose), where you want to find all the possible objects that this object could point to. In this case I think you want to look at the ForeignKey object (which should be available through _columns), and do something like SQLObject.findClass(ForeignKeyColObj.foreignKey).search() to get the possible options.

Ian
http://sourceforge.net/p/sqlobject/mailman/sqlobject-discuss/thread/400DBBF3.7080706%40colorstudy.com/
+1 keeping the body as a ~dict will help with all existing asserts comparing dicts in tests. Andrea On 30 Aug 2014 06:45, "Christopher Yeoh" <cbky...@gmail.com> wrote: > On Fri, 29 Aug 2014 11:13:39 -0400 > David Kranz <dkr...@redhat.com> wrote: > > > On 08/29/2014 10:56 AM, Sean Dague wrote: > > > On 08/29/2014 10:19 AM, David Kranz wrote: > > >> While reviewing patches for moving response checking to the > > >> clients, I noticed that there are places where client methods do > > >> not return any value. This is usually, but not always, a delete > > >> method. IMO, every rest client method should return at least the > > >> response. Some services return just the response for delete > > >> methods and others return (resp, body). Does any one object to > > >> cleaning this up by just making all client methods return resp, > > >> body? This is mostly a change to the clients. There were only a > > >> few places where a non-delete method was returning just a body > > >> that was used in test code. > > > Yair and I were discussing this yesterday. As the response > > > correctness checking is happening deeper in the code (and you are > > > seeing more and more people assigning the response object to _ ) my > > > feeling is Tempest clients should probably return a body obj that's > > > basically. > > > > > > class ResponseBody(dict): > > > def __init__(self, body={}, resp=None): > > > self.update(body) > > > self.resp = resp > > > > > > Then all the clients would have single return values, the body > > > would be the default thing you were accessing (which is usually > > > what you want), and the response object is accessible if needed to > > > examine headers. > > > > > > -Sean > > > > > Heh. I agree with that and it is along a similar line to what I > > proposed here but using a > > dict rather than an attribute dict. I did not propose this since it > > is such a big change. 
All the test code would have to be changed to
> > remove the resp or _ that is now receiving the response. But I think
> > we should do this before the client code is moved to tempest-lib.
>
> +1. this would be a nice cleanup.
>
> Chris
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
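Sean's quoted ResponseBody sketch can be made runnable as below. This is only an illustration of the proposal, not the final Tempest implementation, and it also sidesteps the mutable default argument (``body={}``) in the quoted snippet:

```python
class ResponseBody(dict):
    """Decoded body as a dict, with the HTTP response kept reachable as .resp."""
    def __init__(self, body=None, resp=None):
        super().__init__(body or {})  # build a fresh dict each call; no shared default
        self.resp = resp              # headers etc., for the rare tests that need them

# A client method can now have a single return value; the body is the default
# thing you access, while the response object stays available when needed.
result = ResponseBody({"id": "42", "name": "demo"}, resp={"status": "200"})
```

Because the object still compares equal to a plain dict, existing asserts that compare bodies to dicts keep working, which is the point Andrea makes at the top of the thread.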
https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg33702.html
#include <hallo.h> Pietro Cagnoni wrote on Wed Apr 03, 2002 um 12:50:52PM: > the floppies don't use a ms-dos readable format. you can read them > properly on a linux machine. No, you cannot. The disk contains a dump of the RAM filesystem for the installation environment. It is a) compressed and b) written in raw mode, without any filesystem on the floppy itself. Gruss/Regards, Eduard. -- There is a difference between knowing the path and walking the path. -- To UNSUBSCRIBE, email to debian-user-request@lists.debian.org with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org
https://lists.debian.org/debian-user/2002/04/msg00351.html
I am writing a program to take 3 command line arguments and perform simple arithmetic. The second argument is the operator, and the first and third are operands. They all work fine except for the asterisk. It is converted to a list of files in the current directory. I am working under the command line in Windows XP. Does anybody know of a way to fix this problem and have an asterisk just represent an asterisk? Thanks. Code:#include <iostream> using namespace std; int main(int argc, char* argv[]) { for (int i = 0; i < argc; i++) { cout << argv[i] << endl; } return 0; } /* Test Run: c:\test>test.exe 5 * 8 test.exe 5 test.cpp test.exe 8 where test folder contains test.cpp and test.exe */
https://cboard.cprogramming.com/cplusplus-programming/80903-command-line-asterisk-converted-list.html
You May Be Sitting in Your Office Again This Year 6 Feet Apart and Masked, But In Person I recently paid 100 Amazon MTurk workers to participate in my five-minute survey that asked about their job, employer, and the plans their employer may have in the future to navigate the pandemic. The data revealed many exciting trends, such as if you can expect to return to your office this year or not. First, asking for some basic information about each respondent’s employer, I learned that 40% worked at a company with 50 or fewer employees. In comparison, the other 60% worked at medium and larger companies such as chains and national manufacturing companies. My next question asked respondents if they worked for a public or private company. While this detail may seem trivial, I wanted to see if there was a correlation between how public and private companies responded to the pandemic. Spoiler: There is no significant data to suggest public and private companies reacted differently. I then wanted to capture employment time and salary levels to use in further analysis, such as if higher-paid workers were offered the option to continue work online. The logic behind this is that many customer-facing roles such as cashiers and salespeople tend to earn less money than people in leadership roles, who might only communicate with their team. The analysis continued when I asked people if they could complete all or most of their job duties when not on location. 67% said their employer had systems in place, while 32% said they must be at their physical work location to complete their responsibilities. One person said they were unsure if they could work remotely. Asking this question allowed me to see which proportion of people could work remotely, which would allow me to compare different figures. The purpose of this survey was to measure the possibility of employers allowing their teams to return to the office. 
Based on job postings, press releases, and other info, I was under the impression that many employers had no plans for in-person work this year, yet the data shows a different trend. A surprising 49% of people responded that their employer has plans in motion for them to return. Another 29% have already returned, a group that likely comprises retail and manufacturing workers. The other 22% have either received no information from their employer or likely won't be in the physical office this year. Unfortunately, there's more Zoom exhaustion on the horizon for them. Maybe look into purchasing a pair of blue light glasses?

I also asked respondents what state their employer was located in. This data was used to find how many people in each state had returned to their physical office. I then compared that data with the infection rate in their respective states to see if employers were weighing it when considering a physical office return. The data made for a pretty fantastic graph but showed little correlation between a state's infection rate and the share of respondents who had returned to the office. To be sure, I ran a simple regression analysis comparing the two data points. An R squared value of .3825 suggests the data may have some correlation, but interpretation is difficult because the maximum value for "returned to work" is one, which imposes a ceiling. If I wanted to confirm or deny that state infection rate was a factor, further testing would be required.

I asked a few more questions of those who still had not returned, including their opinion on COVID-19 protocols. An overwhelming 80% of people reported they would return (or consider returning) to the office if COVID-19 protocols were observed. The other 20% said they would not, even if offered. I assumed a percentage of the population would not want to return, so I asked what protocols they would like to see in place before returning. Respondents were given four primary choices, as shown above, with the option to add their own as needed.
Based on this chart, it seems that more people would prefer to physically distance and have plenty of cleaning supplies than wear a mask while working all day. One person even mentioned that they would not go to work if they were required to wear a mask. People had a wide range of responses when asked for open-ended information and opinions relating to their return to work. The most popular keywords were “uncertain,” “returning shortly,” “never worked remotely,” and “following COVID-19 protocol.” This chart also revealed that some companies might require their employees to be inoculated before returning to the office. One person had also mentioned their employer asked them for feedback on how best to return. As I played with charts and figures, I found an intriguing but predictable trend. When income rose, people’s overall view of their company and their new work environment rose. People earning less than $35,000 were generally retail workers who are customer facing and at a higher risk than people in managerial roles and typically only interact with a team. It’s simple to move a whole team online but impossible for a retail worker to push freight while on Zoom. This analysis had some surprising findings, yet also provided a general outlook on the next year. It seems both employees and employers are willing to return to the office if safety protocols are followed. In a recent class session, we spoke about this topic and the benefits of digital meetings. My professor pointed out that no shoes were on his feet while he at least had pants on. (Thank god!) We also discussed the negatives: unintentional social interactions and “water cooler talk” that don’t happen nearly as often while looking at a screen. Here’s to 2021 in the hopes that we return to a semi-normal work environment! I’m a business student always looking for more opportunities. If you’re like me Izzie Ramirez shares how to freelance and network while still in school.
https://medium.com/lets-be-leaders/you-may-be-sitting-in-your-office-again-this-year-7ebc47adb6d4?source=user_profile---------0----------------------------
In all three states the graphic context can be modified, but in different ways. In the disabled state the graphic context values form a template: when an object enters the information or the enabled state, those values are carried over, and when the object returns to the disabled state, the graphic context is restored to the values last assigned before the new state was entered. The code example below illustrates the idea:

    $d = Prima::Drawable-> create;
    $d-> lineWidth( 5);
    $d-> begin_paint_info;
    # lineWidth is 5 here
    $d-> lineWidth( 1);
    # lineWidth is 1
    $d-> end_paint_info;
    # lineWidth is 5 again

( Note: the ::region, ::clipRect and ::translate properties are exceptions. They can not be used in the disabled state; their values are neither recorded nor used as a template).

That is, in the disabled state a Drawable maintains only the graphic context. To draw on a canvas, the object must enter the enabled state by calling begin_paint(). This call can fail, because the object binds with system resources at this stage. Only after the enabled state is entered is the canvas accessible:

    $d = Prima::Image-> create( width => 100, height => 100);
    if ( $d-> begin_paint) {
        $d-> color( cl::Black);
        $d-> bar( 0, 0, $d-> size);
        $d-> color( cl::White);
        $d-> fill_ellipse( $d-> width / 2, $d-> height / 2, 30, 30);
        $d-> end_paint;
    } else {
        die "can't draw on image:$@";
    }

Different objects map to different types of canvases - a Prima::Image canvas retains its content after end_paint(), Prima::Widget maps its canvas to a screen area, whose content is of a more transitory nature, and so on.

The information state is the same as the enabled state, except that changes to the canvas are not visible. Its sole purpose is to read information, not to write it. Because begin_paint() requires some amount of system resources, there is a chance that the resource request fails, for any reason.
begin_paint_info() requires resources as well, but usually far fewer, so when only information is desired it is usually faster and cheaper to obtain it in the information state. A notable example is the get_text_width() method, which returns the length of a text string in pixels. It works in both the enabled and the information state, but the code

    $d = Prima::Image-> create( width => 10000, height => 10000);
    $d-> begin_paint;
    $x = $d-> get_text_width('A');
    $d-> end_paint;

is much more 'expensive' than

    $d = Prima::Image-> create( width => 10000, height => 10000);
    $d-> begin_paint_info;
    $x = $d-> get_text_width('A');
    $d-> end_paint_info;

for the obvious reasons. Note that some information methods like get_text_width() work even in the disabled state; the object is switched to the information state implicitly when necessary.

Graphic context and canvas operations rely completely on the system implementation. The internal canvas color representation is therefore system-specific and usually cannot be described in standard terms; often the only information available about a color space is its color depth. Therefore all color manipulations, including dithering and antialiasing, are subject to the system implementation and cannot be controlled from perl code.

When a property is set in the object's disabled state, it is recorded verbatim; color properties are no exception. After the object switches to the enabled state, a color value is transformed into the system color representation, which might differ from Prima's. For example, if the display color depth is 15 bits, 5 bits per component, then the white color value 0xffffff is mapped to

    11111000 11111000 11111000
    --R----- --G----- --B-----

which equals 0xf8f8f8, not 0xffffff ( see Prima::gp-problems for a discussion of non-obvious graphic issues ).

The Prima::Drawable color format is RRGGBB, with 8 bits per component, thus allowing 2^24 color combinations.
If the device color space depth is different, the color is truncated or expanded automatically. In case the device color depth is small, dithering algorithms might apply.

Note: not only color properties, but all graphic context properties allow all possible values in the disabled state; these are transformed into system-allowed values in the enabled and the information states. This feature can be used to test whether a graphic device is capable of performing certain operations ( for example, whether it supports raster operations - printers usually do not ). Example:

    $d-> begin_paint;
    $d-> rop( rop::Or);
    if ( $d-> rop != rop::Or) {
       # this assertion is always false without
       # begin_paint/end_paint brackets
       ...
    }
    $d-> end_paint;

There are ( at least ) two color properties on each drawable - ::color and ::backColor. The values they operate on are integers in the RRGGBB format discussed above; however, the toolkit defines some mnemonic color constants:

    cl::Black     cl::Blue         cl::Green     cl::Cyan
    cl::Red       cl::Magenta      cl::Brown     cl::LightGray
    cl::DarkGray  cl::LightBlue    cl::LightGreen cl::LightCyan
    cl::LightRed  cl::LightMagenta cl::Yellow    cl::White
    cl::Gray

As stated before, it is not unlikely that, if the device color depth is small, primitives plotted in particular colors will be drawn with dithered or incorrect colors. This usually happens on paletted displays with 256 or fewer colors. There exist two methods that facilitate correct color representation. The first is to get as much information as possible about the device: the get_nearest_color() and get_physical_palette() methods make it possible to avoid drawing with mixed colors, by obtaining indirect information about the solid colors supported by the device. The other method is to use the ::palette property. It works by inserting the colors into the system palette, so if an application knows the colors it needs beforehand, it can employ this method - however, this might result in a system palette flash when the window focus toggles.
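The component truncation described above can be checked with plain bit arithmetic. The following Python snippet is an illustration only (not Prima code); it reproduces the white → 0xf8f8f8 example for a 5-bit-per-channel display by keeping only the top 5 bits of each 8-bit component:

```python
def truncate_to_15bit(rrggbb):
    # Keep the top 5 bits of each 8-bit component, as a display
    # with 5 bits per color channel would.
    r = (rrggbb >> 16) & 0xF8
    g = (rrggbb >> 8) & 0xF8
    b = rrggbb & 0xF8
    return (r << 16) | (g << 8) | b

print(hex(truncate_to_15bit(0xFFFFFF)))  # 0xf8f8f8
```

A color whose components already fit in 5 bits, such as 0x102030, passes through unchanged.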
Both of these methods are applicable both to drawing routines and to image output. An image desired to be output with the least distortion is advised to export its palette to the output device, because images usually are not subject to automatic dithering algorithms. The Prima::ImageViewer module employs this scheme.

Prima offers primitive gradient services to draw gradually changing colors. A gradient is requested by setting at least two colors and optionally a set of cubic spline points that, when projected, generate the transition curve between the colors. The example below fills a rectangle with a gradient calculated between a background color and its pre-lighted variant:

    $canvas-> gradient_bar( $left, $bottom, $right, $top, {
       palette => [ $self->prelight_color($back_color), $back_color ],
    });

Here are the keys understood in the gradient request:

palette - Each color is a cl:: value. The gradient is calculated as a polyline where each of its vertexes corresponds to a certain blend between two neighbouring colors in the palette. F.ex. the simplest palette, going from cl::White to cl::Black over a polyline 0..1 (default), produces pure white at the start and pure black at the end, filling in all available shades of gray in between, and changing monotonically.

poly - A set of 2-integer polyline vertexes where the first integer is a coordinate (x, y, or whatever is required by the drawing primitive) between 0 and 1, and the second is the color blend value between 0 and 1. Default: ((0,0),(1,1))

spline - Serves the same purpose as poly, but the vertexes are first projected to a cubic spline using render_spline and the current value of splinePrecision. The resulting polyline is treated as poly.

vertical - Only used in gradient_bar, to set the gradient direction.

See also: gradient_bar, gradient_realize3d.

.. This means that the following code

    $bitmap-> color(0);
    $bitmap-> line(0,0,100,100);
    $target-> color(cl::Green);
    $target-> put_image(0,0,$bitmap);

produces a green line on $target.
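The per-vertex color blending that the gradient section describes - each vertex corresponding to a blend between two neighbouring palette colors - can be sketched numerically. This Python snippet illustrates the idea only; it is not Prima's implementation:

```python
def blend(c1, c2, t):
    # Linearly interpolate each RRGGBB component;
    # t = 0 gives c1, t = 1 gives c2.
    def channel(shift):
        a = (c1 >> shift) & 0xFF
        b = (c2 >> shift) & 0xFF
        return int(round(a + (b - a) * t)) << shift
    return channel(16) | channel(8) | channel(0)

WHITE, BLACK = 0xFFFFFF, 0x000000
# Midway between cl::White and cl::Black: a middle gray.
print(hex(blend(WHITE, BLACK, 0.5)))  # 0x808080
```

Sampling t over 0..1 yields the monotonically changing shades of gray described above.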
When using monochrome bitmaps for logical operations, note that the target colors should not be the explicit 0 and 0xffffff, nor cl::Black and cl::White, but cl::Clear and cl::Set instead. The reason is that on paletted displays the system palette may not necessarily contain the white color under palette index (2^ScreenDepth-1). cl::Set thus signals that the value should be "all ones", no matter what color it represents, because it will be used for logical operations.

Prima defines the following font style constants:

    fs::Normal
    fs::Bold
    fs::Thin
    fs::Italic
    fs::Underlined
    fs::StruckOut
    fs::Outline

These can be OR-ed together to express the font style. fs::Normal equals 0 and is usually never used. If some styles are not supported by the system-dependent font subsystem, they are ignored.

The font pitch is one of three constants:

    fp::Default
    fp::Fixed
    fp::Variable

fp::Default specifies no interest in the font pitch selection. fp::Fixed is set when a monospaced (all glyphs are of the same width) font is desired. fp::Variable specifies a font with different glyph widths. The pitch key has the highest priority; all other keys may be altered for consistency with the pitch key.

The encodings provided by different systems differ; in addition, the only encodings recognizable by the system are those represented by at least one font in the system. Unix systems and the toolkit PostScript interface usually provide the following encodings:

    iso8859-1
    iso8859-2
    ... other iso8859 ...
    fontspecific

Win32 returns literal strings like

    Western
    Baltic
    Cyrillic
    Hebrew
    Symbol

The hash that ::font returns is a tied hash, whose keys are also available as separate properties.
For example,

    $x = $d-> font-> {style};

is equivalent to

    $x = $d-> font-> style;

While the latter gives nothing but arguable coding convenience, its usage in a set-call is much more convenient:

    $d-> font-> style( fs::Bold);

instead of

    my %temp = %{$d-> font};
    $temp{ style} = fs::Bold;
    $d-> font( \%temp);

The properties of a font tied hash are also accessible through the set() call, as in Prima::Object:

    $d-> font-> style( fs::Bold);
    $d-> font-> width( 10);

is equivalent to

    $d-> font-> set(
       style => fs::Bold,
       width => 10,
    );

When get-called, the ::font property returns a hash in which more entries than those described above can be found. These keys are read-only; their values are discarded if passed to ::font in a set-call.

In order to query the full list of fonts available to a graphic device, the ::fonts method is used. This method is not present in the Prima::Drawable namespace; it can be found in two built-in class instances, Prima::Application and Prima::Printer. Prima::Application::fonts returns metrics for the fonts available to the screen device, while Prima::Printer::fonts ( or its substitute Prima::PS::Printer ) returns fonts for the printing device. The result of this method is an array of font metrics, fully analogous to those returned by the Prima::Drawable::font method.

A and C are negative if a glyph 'hangs' over its neighbors, as shown in the picture on the left. A and C values are positive if a glyph contains empty space in front of or behind the neighbor glyphs, as in the picture on the right. As can be seen, B is the width of a glyph's black part. The ABC metrics are returned by the get_font_abc() method. The corresponding vertical metrics, called DEF metrics in Prima, are returned by the get_font_def() method.

A drawable has two raster operation properties: ::rop and ::rop2. These define how the graphic primitives are plotted. ::rop deals with the foreground color drawing, and ::rop2 with the background.
The toolkit defines the following operations:

Usually, however, graphic devices support only a small part of the above set, limiting ::rop to the most important operations: Copy, And, Or, Xor, NoOp. ::rop2 is usually even more restricted and supports only Copy and NoOp. The raster operations apply to all graphic primitives except SetPixel.

Note for layering: layered images and device bitmaps used with put_image and stretch_image can only use the rop::SrcCopy and rop::SrcOver raster operations on OS-provided surfaces.

Additionally, Prima implements extra features for compositing on images outside the begin_paint/end_paint brackets. It supports the following 12 Porter-Duff operators:

and a set of constants to apply a constant source and destination alpha to override the existing alpha channel, if any:

    rop::SrcAlpha
    rop::SrcAlphaShift
    rop::DstAlpha
    rop::DstAlphaShift

To override the alpha channel(s), combine the rop constant using this formula:

    $rop = rop::XXX
           | rop::SrcAlpha | ( $src_alpha << rop::SrcAlphaShift )
           | rop::DstAlpha | ( $dst_alpha << rop::DstAlphaShift )

Also, the function rop::blend($alpha) creates a rop constant for simple blending of two images by the following formula:

    $dst = ( $src * $alpha + $dst * ( 255 - $alpha ) ) / 255

In addition to that, the rop::AlphaCopy operation is available for accessing the alpha bits only. When used, the source image is treated as an alpha mask, and therefore has to be grayscale. It can be used to apply the alpha bits independently, without the need to construct an Icon object.

The Prima toolkit employs a geometrical XY grid, where X ascends rightwards and Y ascends upwards. There the (0,0) location is the bottom-left pixel of the canvas. All graphic primitives use inclusive-inclusive boundaries. For example,

    $d-> bar( 0, 0, 1, 1);

plots a bar that covers 4 pixels: (0,0), (0,1), (1,0) and (1,1). The coordinate origin can be shifted using the ::translate property, which translates the (0,0) point to the given offset.
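The rop::blend formula above is plain per-channel integer arithmetic and can be verified outside Prima. A Python sketch (illustrative only, not the toolkit's implementation):

```python
def blend_channel(src, dst, alpha):
    # dst = ( src * alpha + dst * ( 255 - alpha ) ) / 255,
    # exactly as in the rop::blend($alpha) formula above.
    return (src * alpha + dst * (255 - alpha)) // 255

# alpha = 255 keeps the source, alpha = 0 keeps the destination.
print(blend_channel(200, 100, 255), blend_channel(200, 100, 0))  # 200 100
```

An alpha of 128 gives a near-even mix of the two channel values.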
Calls to ::translate, ::clipRect and ::region always use the 'physical' (0,0) point, whereas the plotting methods use the transformation result, the 'logical' (0,0) point. As noted before, these three properties can not be used when an object is in its disabled state.

    $d-> clipRect( 1, 1, 2, 2);
    $d-> bar( 0, 0, 1, 1);

thus affects only one pixel, at (1,1). A set-call discards the previous ::region value. Note: ::clipRect can not be used while the object is in the paint-disabled state; its context is neither recorded nor used as a template ( see "Graphic context and canvas").

Affects the filling style of complex polygonal shapes filled by fillpoly(). If 1, the filled shape contains no holes; otherwise, holes are present where the shape edges cross. Default value: false

Selects an 8x8 fill pattern that affects the primitives that plot filled shapes: bar(), fill_chord(), fill_ellipse(), fillpoly(), fill_sector(), floodfill(). Accepts either a fp:: constant or a reference to an array of 8 integers, each representing 8 bits of one line in the pattern, where the first integer is the topmost pattern line and the bit 0x80 is the leftmost pixel in the line. There are some predefined patterns that can be referred to via fp:: constants ( the actual patterns are hardcoded in primguts.c ). The default pattern is fp::Solid. The example below shows the encoding of the fp::Parquet pattern:

    #  76543210  84218421  Hex
    0  .$.$...$  01010001  51
    1  ..$...$.  00100010  22
    2  ...$.$.$  00010101  15
    3  $...$...  10001000  88
    4  .$...$.$  01000101  45
    5  ..$...$.  00100010  22
    6  .$.$.$..  01010100  54
    7  $...$...  10001000  88

    $d-> fillPattern([ 0x51, 0x22, 0x15, 0x88, 0x45, 0x22, 0x54, 0x88 ]);

On a get-call this property always returns an array, never a fp:: constant.

Selects the line ending cap for plotting primitives. VALUE can be one of the le:: constants. le::Round is the default value.

Selects the line joining style for polygons. VALUE can be one of the lj:: constants. lj::Round is the default value.

Selects the line pattern for plotting primitives.
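The fp::Parquet encoding shown above can be decoded mechanically. This Python snippet (an illustration, not part of Prima) renders the 8 pattern integers, treating bit 0x80 as the leftmost pixel of each line:

```python
parquet = [0x51, 0x22, 0x15, 0x88, 0x45, 0x22, 0x54, 0x88]

def render(pattern):
    # Walk each line's bits from 0x80 (leftmost pixel) down to 0x01,
    # emitting '$' for a set bit and '.' for a clear one.
    return [''.join('$' if line & (0x80 >> i) else '.' for i in range(8))
            for line in pattern]

for row in render(parquet):
    print(row)
# .$.$...$
# ..$...$.
# ...$.$.$
# $...$...
# .$...$.$
# ..$...$.
# .$.$.$..
# $...$...
```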
PATTERN is either a predefined lp:: constant, or a string where each even byte is the length of a dash and each odd byte is the length of a gap. The predefined constants are:

Not all systems are capable of accepting user-defined line patterns, and in such situations the lp:: constants are mapped to system-defined patterns. In Win9x, for example, lp::DashDotDot is therefore much different from its string definition. The default value is lp::Solid.

Fills a rectangle in the alpha channel with the ALPHA value (0-255) within the (X1,Y1) - (X2,Y2) extents. Can be called without parameters, in which case it fills the whole canvas area. Has an effect only on layered surfaces.

Plots an arc with center (X, Y) and axes DIAMETER_X and DIAMETER_Y, from START_ANGLE to END_ANGLE. Context used: color, backColor, lineEnd, linePattern, lineWidth, rop, rop2

Draws a filled rectangle within the (X1,Y1) - (X2,Y2) extents. Context used: color, backColor, fillPattern, rop, rop2

Draws a set of filled rectangles. RECTS is an array of integer quartets in the format (X1,Y1,X2,Y2).

draw_text is a convenience wrapper around text_wrap for drawing wrapped text, and it also provides support for underlining via the tilde ( ~ ) character.
The FLAGS is a combination of the following constants:

    dt::Left               - text is aligned to the left boundary
    dt::Right              - text is aligned to the right boundary
    dt::Center             - text is aligned horizontally in center
    dt::Top                - text is aligned to the upper boundary
    dt::Bottom             - text is aligned to the lower boundary
    dt::VCenter            - text is aligned vertically in center
    dt::DrawMnemonic       - tilde-escapement and underlining is used
    dt::DrawSingleChar     - sets tw::BreakSingle option to Prima::Drawable::text_wrap call
    dt::NewLineBreak       - sets tw::NewLineBreak option to Prima::Drawable::text_wrap call
    dt::SpaceBreak         - sets tw::SpaceBreak option to Prima::Drawable::text_wrap call
    dt::WordBreak          - sets tw::WordBreak option to Prima::Drawable::text_wrap call
    dt::ExpandTabs         - performs tab character ( \t ) expansion
    dt::DrawPartial        - draws the last line, if it is visible partially
    dt::UseExternalLeading - text lines positioned vertically with respect to the font external leading
    dt::UseClip            - assign ::clipRect property to the boundary rectangle
    dt::QueryLinesDrawn    - calculates and returns number of lines drawn ( contrary to dt::QueryHeight )
    dt::QueryHeight        - if set, calculates and returns vertical extension of the lines drawn
    dt::NoWordWrap         - performs no word wrapping by the width of the boundaries
    dt::WordWrap           - performs word wrapping by the width of the boundaries
    dt::BidiText           - use bidirectional formatting, if available
    dt::Default            - dt::NewLineBreak|dt::WordBreak|dt::ExpandTabs|dt::UseExternalLeading

Context used: color, backColor, font, rop, textOpaque, textOutBaseline

SINGLEBORDER = 0: The fill area is bounded by the color specified by the COLOR parameter.

SINGLEBORDER = 1: The fill area is defined by the color specified by COLOR. Filling continues outward in all directions as long as that color is encountered. This style is useful for filling areas with multicolored boundaries.
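The two SINGLEBORDER modes correspond to the two classic flood-fill variants: border fill (stop at a given boundary color) and surface fill (spread for as long as the seed color is encountered). Below is a minimal Python sketch of the surface-fill variant; it operates on a plain grid of color values rather than a real canvas, and is an illustration of the algorithm only, not Prima's implementation:

```python
from collections import deque

def surface_fill(grid, x, y, new_color):
    # Fill the 4-connected region sharing the seed pixel's color,
    # i.e. the SINGLEBORDER = 1 behaviour described above.
    old = grid[y][x]
    if old == new_color:
        return grid
    queue = deque([(x, y)])
    while queue:
        cx, cy = queue.popleft()
        if 0 <= cy < len(grid) and 0 <= cx < len(grid[0]) and grid[cy][cx] == old:
            grid[cy][cx] = new_color
            queue.extend([(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)])
    return grid

canvas = [[0, 0, 1],
          [0, 1, 1],
          [1, 1, 0]]
surface_fill(canvas, 0, 0, 9)
print(canvas)  # [[9, 9, 1], [9, 1, 1], [1, 1, 0]]
```

Note that the isolated 0 in the bottom-right corner is untouched: it shares the seed color but is not connected to the seed region.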
Context used: color, backColor, fillPattern, rop, rop2

Draws a filled rectangle within the (X1,Y1) - (X2,Y2) extents using a gradient (see Gradients). Context used: splinePrecision, fillPattern, rop, rop2

Draws a filled ellipse with center (X,Y) and diameters (DIAM_X,DIAM_Y) using a gradient (see Gradients). Context used: splinePrecision, fillPattern, rop, rop2

Returns an array consisting of integer pairs, where the first value is a color and the second is the breadth of the color strip. gradient_bar uses this information to draw a gradient fill, where each color strip is drawn with its own color. Can be used for implementing other gradient-aware primitives (see examples/f_fill.pl ). Context used: splinePrecision

Called implicitly by ::font on a set-call, allowing the following example to work:

    $d-> font-> set( size => 10);
    $d-> font-> set( style => fs::Bold);

In the example, the hash 'style => fs::Bold' does not overwrite the previous font context ( 'size => 10' ) but gets added to it ( by font_match() ), providing the resulting font with both font properties set.

A member of Prima::Application and Prima::Printer; not present in Prima::Drawable. Returns an array of font metric hashes for the given font FAMILY and ENCODING. Every hash has the full set of elements described in "Fonts". If called without parameters, returns an array of the same hashes where each hash represents a member of a font family from every system font set. In this special case, each font hash contains an additional 'encodings' entry, which points to an array of the encodings available for the font. If called with the FAMILY parameter set but no ENCODING, enumerates all combinations of the fonts with all available encodings. If called with FAMILY set to an empty string but ENCODING specified, returns only the fonts that can be displayed with that encoding. Example:

    print sort map {"$_->{name}\n"} @{$::application-> fonts};

Same as get_font_abc, but for vertical metrics.
This is expensive on bitmap fonts, because to find out the correct values Prima has to render glyphs on bitmaps and scan for black and white pixels. Vector fonts are not subject to this, and the call is as efficient as get_font_abc.

Returns an array of integer pairs denoting the unicode indices of the glyphs covered by the currently selected font. Each pair is the first and the last index of a contiguous range. Context used: font

Returns the nearest possible solid color in the representation of the object-bound graphic device. Always returns the same color if the device bit depth is equal to or greater than 24.

Returns the paint state value, one of the ps:: constants - ps::Disabled if the object is in the disabled state, ps::Enabled for the enabled state, ps::Information for the information state. For brevity, ps::Disabled is equal to 0, which allows simple boolean testing of whether one can get or set graphical properties on an object. See "Graphic context and canvas" for more.

Returns an anonymous array of integers in (R,G,B) format, with every color entry described by three values in the range 0 - 255. The physical palette array is non-empty only on paletted graphic devices; true color devices return an empty array. The physical palette reflects the solid colors currently available to all programs in the system. The information is volatile if the system palette can change colors, since any other application may change the system colors at any moment.

Context used: font

The result is an anonymous array of 5 points ( 5 integer pairs in (X,Y) format). These 5 points are the offsets for the following string extents, given the string is plotted at (0,0):

    1: start of string at ascent line ( top left )
    2: start of string at descent line ( bottom left )
    3: end of string at ascent line ( top right )
    4: end of string at descent line ( bottom right )
    5: concatenation point

The concatenation point coordinates (XC,YC) are the coordinates passed to a subsequent text_out() call so that the conjoint string is plotted as if it were a part of TEXT.
Depending on the value of the textOutBaseline property, the concatenation point is located either on the baseline or on the descent line. Context used: font, textOutBaseline

If OPTIONS has the tw::CalcMnemonic or tw::CollapseTilde bits set, then the last scalar in the array result is a special hash reference. The hash contains extra information regarding the 'hot key' underline position - it is assumed that the '~' escapement denotes an underlined character. The hash contains the following keys:
http://search.cpan.org/~karasik/Prima/pod/Prima/Drawable.pod
Binding editors to data stored in a database or a file is not the only option. Data can also be created and supplied at runtime. This topic describes how to do this. To learn about other data binding methods, refer to the Data Binding Overview topic. This data binding mode requires that the data source is an object holding a list of "records". Each "record" is an object whose public properties represent record fields and property values are field values. In general, to supply your editor with data created at runtime, you will need to follow the steps below: Declare a class implementing the IList, ITypedList or IBindingList interface. This class will represent a list serving as the editor's data source. This list's elements must be record objects. Note: if you don't want to create your own list object, you can use any of the existing objects implementing these interfaces. For instance, a DataTable object can serve as the data source. Therefore, this step is optional. Note: only data sources implementing the IBindingList interface support data change notifications. When using a data source that doesn't support this interface, you may need to update editors manually. This can be done using the BaseControl.Refresh method, for instance. The code below creates a class whose instances will represent records. The class declares two public properties (ID and Name), so that you can bind editors to their data. 
The code below creates a class whose instances will represent records. The class declares two public properties (ID and Name), so that you can bind editors to their data.

    public class Record {
        int id;
        string name;

        public Record(int id, string name) {
            this.id = id;
            this.name = name;
        }

        public int ID {
            get { return id; }
        }

        public string Name {
            get { return name; }
            set { name = value; }
        }
    }

    Public Class Record
        Dim _id As Integer
        Dim _name As String

        Sub New(ByVal id As Integer, ByVal name As String)
            Me._id = id
            Me._name = name
        End Sub

        Public ReadOnly Property ID() As Integer
            Get
                Return _id
            End Get
        End Property

        Public Property Name() As String
            Get
                Return _name
            End Get
            Set(ByVal Value As String)
                _name = Value
            End Set
        End Property
    End Class

Once a record class has been declared, you can create a list of its instances. This list serves as the data source. To bind an editor to this list, use the editor's DataBindings property. For most editors, you need to map data source fields to the BaseEdit.EditValue property. The code below shows how to bind a text editor to the Name field.

    ArrayList list = new ArrayList();
    list.Add(new Record(1, "John"));
    list.Add(new Record(2, "Michael"));
    textEdit1.DataBindings.Add("EditValue", list, "Name", true);

    Dim List As New ArrayList()
    List.Add(New Record(1, "John"))
    List.Add(New Record(2, "Michael"))
    TextEdit1.DataBindings.Add("EditValue", List, "Name", True)
https://documentation.devexpress.com/WindowsForms/618/Controls-and-Libraries/Editors-and-Simple-Controls/Simple-Editors/Editors-Features/Data-Binding-Overview/Binding-to-Data-Created-at-Runtime
How to Get the First Element From a List in Camel JXPath

Apache's Camel is a development framework, or programming resource library, that enables programmers to define how their applications route messages and other information from their data sources. Camel's JXPath support enables the framework to use XPath commands to filter data. If you only need the first bit of information from your data source, you can use JXPath to retrieve only what you need.

Instructions
- 1 Open your project's Spring XML file, using Microsoft Notepad or an XML editor. Add a reference to the JavaBean that contains the data you'd like to retrieve. Type "public class beanname {". Substitute "beanname" with the name of the bean you'd like to call on.
- 2 Type in the desired method of retrieval on the next line, using hanging indentation. Type "public subclass getobject() {". Substitute "subclass" with the sub-group of data you'd like to retrieve from the bean. Substitute "object" in "getobject" with the element you'd like to retrieve.
- 3 Close off your statements. Type "..." on the next line, using two hanging indents. Type "}" on the next line, using one hanging indent. Type "}" on the next line without any indentation.
- 4 Type "Beanname abbreviation = new Beanname();" onto the next line of the document. Substitute "Beanname" and "abbreviation" with the name of the bean and its abbreviation. Type "..." onto the next line.
- 5 Route your data request through the JXPath context. Type "JXPathContext context = JXPathContext.newContext(beanabbreviation);" onto the next line of the file -- substitute "beanabbreviation" with the abbreviation of the bean.
- 6 Parse your string of data. Type 'String parsedrequest = (String)context.getValue("fullrequest");' onto the next line. Substitute "parsedrequest" with the parsed Java string version of the data you'd like to retrieve -- for example, you'd parse a request for "last name" to "lname."
Replace "fullrequest" with the unparsed version of the request.
- 7 Save your XML file, then close it.
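JXPath itself is a Java library, so the steps above cannot be run here as-is; but the core idea - selecting the first element of a collection with an XPath-style index predicate, as a path like /records/record[1] would do in JXPath - can be sketched against an XML document with Python's standard library. The tag names below are invented purely for illustration:

```python
import xml.etree.ElementTree as ET

# A small XML document standing in for the bean data.
doc = ET.fromstring(
    '<records>'
    '<record><lname>Smith</lname></record>'
    '<record><lname>Jones</lname></record>'
    '</records>'
)

# The [1] predicate selects the first matching element,
# mirroring the first-element selection described above.
first = doc.find('./record[1]/lname')
print(first.text)  # Smith
```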
http://www.ehow.com/how_12196865_first-element-list-camel-jxpath.html
New firmware release 1.6.7.b1 (LoRaWAN Nano Gateway with TTN example!!)

@Colateral I also wait for this but still do not know how to work with it. STA works, but AP is not able to connect.

If the WiPy WLAN is configured simply as STA_AP, the ftp is not working:

    from network import WLAN
    wlan = WLAN(mode=WLAN.STA_AP, ssid='Cola', auth=(WLAN.WPA2, 'abc12345678'))
    wlan.ifconfig(id=1, config=('192.168.10.1', '255.255.255.0', '192.168.10.1', '192.168.10.1'))

I connect to 'Cola', and ftp (with micro/python) connects on 192.168.10.1 but fails on the root dir (/flash).

@johncaipa, for sure, I'll work on an example for US frequencies and will add it to the same repo as the others. We'll do our best to get it done today or tomorrow. Cheers, Daniel

@robert-hh, no worries, our only focus from today onwards is code maturing and doing the MicroPython migration. We'll also update our own repo, as we were sorting out the way to publish Sigfox libraries.

@daniel Hello Daniel, I do not want to impose any kind of pressure on you, since you are doing great work in improving the device, but at what time do you plan the next update of the github repository with the micropython files?

Can someone help me with an example of how to use it on US or AU frequencies? I still can not do it on my own. Thanks

Thanks for the thumbs up!! :-)

@daniel Really nice update! NanoGateway and OTAA node work like a charm! Thank you Pycom Dev Team! :)
- gertjanvanhethof Pybytes Beta last edited by This is really fantastic news. I immediately uploaded the code to by LoPy. TTN Dashboards shows gateway is connected! Now I have to test to send messages to it with my other LoPy. Very good news, we were expecting this !. Congratulations to the entire Pycom team. @daniel Congratulations for you and the whole team! I've just upgraded to 1.6.7.b1, and tried the examples (NanoGateway and OTAA node) and... it works like a charm! I'm currently sending my room temperature (from a DS18B20 sensor) on TTN using these examples! For now, there is no TTN gateway at range at home, I have to move away from my home to find some signal. The possibility to create a "dev" nano-gateway that connects to TTN allows me to work on my projects from home, this is wonderful! @jmarcelino thanks for the suggestion! I'll include that on the next nano-gateway release along with the other planned improvements :-) @zmrzli the answer from @robert-hh is correct, it just works like that, but some devices take a while to scan for networks again or to update the ssid if the MAC address is the same. Try switching the WiFi of your iPhone on and off as suggested. Cheers, Daniel Hello @zmrzli, no idea what you did, but this sequence works for me: from network import WLAN wl = WLAN(AP) wl.init(mode=wl.AP, ssid="My_lopy", auth=(wl.WPA2, "This is a Test")) It takes a while before the change gets visible on the other devices, if these do not scan the networks frequently. Sometimes on cycling Wifi Off/On helps. I am not sure why there is no response to this problem... previously posted, so here it is again. Am I the only trying to do this, or I am missing something obvious...? Is is possible to change SSID of wipi in AP mode? 
Following:

    import machine
    import os
    import pycom

    pycom.heartbeat(True)

    uart = machine.UART(0, 115200)  # disable these two lines if you don't want serial access
    os.dupterm(uart)

    from network import WLAN
    wl = WLAN()

    # trying to change AP properties:
    original_ssid = "MyNode"
    wl.ssid(original_ssid)
    original_auth = (3, 'bewell')
    wl.init(mode=WLAN.AP, ssid=original_ssid, auth=original_auth, channel=6, antenna=WLAN.INT_ANT)

does not change the SSID visible on my iPhone. It remains wipy-wlan-xxx. Am I doing something wrong? Or is the SSID hardcoded, regardless of what the documentation says:

    wlan.ssid([ssid])
    Get or set the SSID when in AP mode.
https://forum.pycom.io/topic/810/new-firmware-release-1-6-7-b1-lorawan-nano-gateway-with-ttn-example/31
I'm starting my GCSE Computing next year and it is on Python. I am using Python 2.7.3 and know all about Tkinter. I found this code on the internet on how to link Python and Scratch. I know how to use it, but my teacher said I can only use it if I understand how it works. Here's the code:

    from array import array
    import socket
    from Tkinter import Tk
    from tkSimpleDialog import askstring

    root = Tk()
    root.withdraw()

    PORT = 42001
    HOST = '127.0.0.1'

    print("Connecting...")
    scratchSock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    scratchSock.connect((HOST, PORT))
    print("Connected!")

    def sendScratchCommand(cmd):
        n = len(cmd)
        a = array('c')
        a.append(chr((n >> 24) & 0xFF))
        a.append(chr((n >> 16) & 0xFF))
        a.append(chr((n >> 8) & 0xFF))
        a.append(chr(n & 0xFF))
        scratchSock.send(a.tostring() + cmd)

    while True:
        msg = askstring('Scratch Connector', 'Send Broadcast:')
        if msg:
            sendScratchCommand('broadcast "' + msg + '"')

I have edited it slightly, but generally it is the same. The bit I don't understand is the sendScratchCommand(cmd) definition. Can anyone explain it to me? Thanks
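For what it's worth, the part in question can be pinned down: Scratch's remote-sensor protocol expects every message to be preceded by its length as a 4-byte big-endian integer, and the four chr((n >> ...) & 0xFF) lines in sendScratchCommand build exactly that header byte by byte. The sketch below (Python 3 syntax for clarity) shows that the hand-built header equals what struct.pack('>I', n) produces:

```python
import struct

def length_prefix(n):
    # Build the 4-byte big-endian length header by hand, mirroring
    # the chr((n >> 24) & 0xFF) ... chr(n & 0xFF) lines above.
    return bytes([(n >> 24) & 0xFF,
                  (n >> 16) & 0xFF,
                  (n >> 8) & 0xFF,
                  n & 0xFF])

cmd = b'broadcast "hello"'          # 17 bytes long
header = length_prefix(len(cmd))

# struct.pack('>I', n) produces the same four bytes: '>' means
# big-endian (most significant byte first), 'I' a 4-byte unsigned int.
assert header == struct.pack('>I', len(cmd))

# The full packet sent to Scratch: length header, then the command.
packet = header + cmd
print(packet)  # b'\x00\x00\x00\x11broadcast "hello"'
```

So the while loop simply sends each broadcast as "length header + command text" over the socket on port 42001.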
http://python-forum.org/viewtopic.php?f=10&t=1142&p=1850
Changelog for package sensor_msgs

1.13.0 (2020-05-21)
* Update BatteryState.msg (#140)
* Use setuptools instead of distutils (#159)
* Bump CMake version to avoid CMP0048 warning (#158)
* Fix TabError: inconsistent use of tabs and spaces in indentation (#155). Python 3 is much more strict for spacing.
* Contributors: Ramon Wijnands, Rein Appeldoorn, Shane Loretz

1.12.7 (2018-11-06)
* Include sstream on header that needs it (#131)
* Included missing import for the read_points_list method (#128)
* Merge pull request #127 from ros-1/fix-typos
* Merge pull request #85 from ros/missing_test_target_dependency: fix missing test target dependency
* Contributors: Dirk Thomas, Jasper, Kuang Fangjun, Tully Foote, chapulina

1.12.6 (2018-05-03)
* Return default value to prevent missing return warning.
* Add function to convert PointCloud2 to namedtuples
* Add new function read_points_list that converts a PointCloud2 to a list of named tuples. It works on top of read_points, which generates lists containing the values. In consequence read_points_list is slower than read_points.
* Added equidistant distortion model const
* Added test_depend on rosunit in sensor_msgs
* fix catkin_lint warnings
* add migration rule, copied from common_msgs-1.6
* Add missing include for atoi. Fixes #97
* Contributors: 2scholz, Adam Allevato, Ivor Wanders, Kei Okada, Tully Foote, alexzzhu

1.12.5 (2016-09-30)
* Deal with abstract image encodings
* Fix spelling mistakes
* Fix year
* Contributors: Jochen Sprickerhof, Kentaro Wada, trorornmn

1.12.4 (2016-02-22)
* added type mapping and support for different types of points in point clouds
* remove boost dependency, fixes #81
* adding a BatteryState message
* fix iterator doc
* remove warning due to anonymous namespace
* Contributors: Sebastian Pütz, Tully Foote, Vincent Rabaud

1.12.3 (2015-04-20)

1.12.2 (2015-03-21)

1.12.1 (2015-03-17)

1.12.0 (2014-12-29)

1.11.6 (2014-11-04)
* Fix compilation with Clang
* Contributors: jmtatsch

1.11.5 (2014-10-27)
* add a test for the operator+ fix
* The behavior of that operator also had to be fixed to return a proper child class
* fix critical bug with operator+
* Contributors: Michael Ferguson, Vincent Rabaud

1.11.4 (2014-06-19)
* Fix bug caused by use of va_arg in argument list.
* Contributors: Daniel Maturana

1.11.3 (2014-05-07)
* clean up documentation of sensor_msgs so that wiki generated doxygen is readable
* Export architecture_independent flag in package.xml
* Contributors: Michael Ferguson, Scott K Logan

1.11.2 (2014-04-24)

1.11.1 (2014-04-16)
* fix missing include dirs for tests
* Contributors: Dirk Thomas

1.11.0 (2014-03-04)
* add a PointCloud2 iterator and modifier
* Contributors: Tully Foote, Vincent Rabaud

1.10.6 (2014-02-27)

1.10.5 (2014-02-25)

1.10.4 (2014-02-18)
* Fix roslib import for message module, remove roslib.load_manifest
* Contributors: Bence Magyar

1.10.3 (2014-01-07)
* python 3 compatibility
* line wrap Imu.msg comments

1.10.2 (2013-08-19)
* adding __init__.py #11

1.10.1 (2013-08-16)
* setup.py for #11
* adding installation of point_cloud2.py to sensor_msgs, fixes #11

1.10.0 (2013-07-13)
* adding MultiDOFJointState message

1.9.16 (2013-05-21)
* update email in package.xml

1.9.15 (2013-03-08)

1.9.14 (2013-01-19)

1.9.13 (2013-01-13)
* YUV422 actually has an 8 bit depth

1.9.12 (2013-01-02)
* do not consider YUV422 to be a color anymore
* added missing license header

1.9.11 (2012-12-17)
* modified dep type of catkin

1.9.10 (2012-12-13)
* add missing downstream depend
* switched from langs to message_* packages

1.9.9 (2012-11-22)
* Added Low-Cost/Android Sensors reviewed messages
* Updated comments to reflect REP 117 for fixed-distance rangers
* Adding reviewed MultiEchoLaserScan message.

1.9.8 (2012-11-14)

1.9.7 (2012-10-30)
* fix catkin function order

1.9.6 (2012-10-18)
* updated cmake min version to 2.8.3, use cmake_parse_arguments instead of custom macro
* fix the bad number of channels for YUV422 UYVY

1.9.5 (2012-09-28)
* fixed missing find genmsg

1.9.4 (2012-09-27 18:06)

1.9.3 (2012-09-27 17:39)
* cleanup
* add precision about YUV422
* add YUV422 to some functions
* cleaned up package.xml files
* updated to latest catkin
* fixed dependencies and more
* updated to latest catkin: created package.xmls, updated CmakeLists.txt

1.9.2 (2012-09-05)
* updated pkg-config in manifest.xml
* updated catkin variables

1.9.1 (2012-09-04)
* use install destination variables, removed manual installation of manifests

1.9.0 (2012-08-29)
* update the docs

1.8.13 (2012-07-26 18:34:15 +0000)
* made inline functions static inline
* fix ODR violation and missing headers
* moved c++ code from sensor_msgs to headers

1.8.8 (2012-06-12 22:36)
* simplifying deps
* make find_package REQUIRED
* removed obsolete catkin tag from manifest files
* fixed package dependency for another common message (#3956), removed unnecessary package name from another message
* fixed package dependencies for several common messages (fixed #3956)
* clarify NavSatFix message comments
* normalize shared lib building, #3838
* adding TimeReference to build
* TimeReference decl was invalid
* adding point_cloud2 as reviewed
* TimeReference msg as reviewed #ros-pkg5355
* install headers
* adding manifest exports
* fix boost-finding stuff
* removed depend, added catkin
* adding roscpp_core dependencies
* stripping depend and export tags from common_msgs manifests as msg dependencies are now declared in cmake and stack.yaml. Also removed bag migration exports
* install-related fixes
* common_msgs: removing migration rules as all are over a year old
* sensor_msgs: removing old octave support now that rosoct is gone
* bye bye vestigial MSG_DIRS
* sensor_msgs: getting rid of other build files
* adios rosbuild2 in manifest.xml
* catkin updates
* catkin_project
* Updated to work with new message generation macros
* adios debian/ hello stack.yaml. (sketch/prototype/testing).
* More tweaking for standalone message generation
* Getting standalone message generation working... w/o munging rosbuild2
* more rosbuild2 hacking
* rosbuild2 tweaks
* missing dependencies
* sensor_msgs: Added YUV422 image encoding constant.
* adding in explicit ros/console.h include for ros macros now that ros::Message base class is gone
* adding JoyFeedback and JoyFeedbackArray
* updating manifest.xml
* adding Joy.msg
* Add image encodings for 16-bit Bayer, RGB, and BGR formats.
* Update isMono(), isAlpha(), isBayer(), etc.
* rosbuild2 taking shape
* sensor_msgs: Source-compatible corrections to fillImage signature.
* sensor_msgs: Functions for distinguishing categories of encodings. From cv_bridge redesign API review.
* applying patch to this method like josh did in r33966 in rviz
* sensor_msgs (rep0104): Migration rules for CameraInfo, RegionOfInterest.
* sensor_msgs (rep0104): Doc improvements for CameraInfo.
* sensor_msgs (rep0104): Cleaned up PointCloud2 msg docs. Restored original meaning of 'no invalid points' to is_dense (#4446).
* sensor_msgs (rep0104): Documented u,v channel semantics for PointCloud msg (#4482).
* sensor_msgs (rep0104): Added distortion model string constants.
* sensor_msgs (rep0104): Include guard for image_encodings.h.
* sensor_msgs (rep0104): Applied changes to CameraInfo and RegionOfInterest messages.
* Clarify frame of reference for NavSatFix position covariance.
* Add new satellite navigation messages approved by GPS API review.
adding Range message as reviewed #4488 adding missing file cleaner fix for point_cloud_conversion definitions for #4451 inlining implementation in header for #4451 sensor_msgs: Fixed URL in CameraInfo.msg and indicated how to mark an uncalibrated camera. #4105 removing all the extra exports add units to message description bug fix in PC->PC2 conversion include guards for point_cloud_conversions.h #4285 Added Ubuntu platform tags to manifest added PointCloud2<->PointCloud conversion routines. Updating link to camera calibration updating message as per review sensor_msgs: Added size (number of elements for arrays) to PointField. pushing the new PointCloud structure in trunk Changed wording of angle convention for the LaserScan message. We are now specifying how angles are measured, not which way the laser spins. Remove use of deprecated rosbuild macros Added exporting of generated srv includes. Added call to gen_srv now that there is a service. Added the SetCameraInfo service. octave image parsing function now handles all possible image format types changing review status adding JointState documentation ticket:3006 Typo in comments updated parsing routines for octave Adding 1 more rule for migration point clouds and bringing test_common_msgs back from future. Adding JointState migration rule. replace pr2_mechanism_msgs::JointStates by new non-pr2-specific sensor_msgs::JointState. Door test passes better documentation of the CameraInfo message updated url sensor_msgs: Added rule to migrate from old laser_scan/LaserScan. sensor_msgs: Added string constants for bayer encodings. clearing API reviews for they've been through a bunch of them recently. Removed the Timestamp message. Updating migration rules to better support the intermediate Image message that existed. Adding a CompressedImage migration rule. Fixing robot_msgs references Changing the ordering of fields within the new image message so that all meta information comes before the data block. 
Migration of RawStereo message. Migration rule for CameraInfo. First cut at migration rules for images. Moving stereo messages out of sensor_msgs to stereo/stereo_msgs Getting rid of PixelEncoding since it is encompassed in Image message instead. update to IMU message comments and defined semantics for covariance Changing naming of bag migration rules. Image message and CvBridge change moving FillImage.h to fill_image.h for Jeremy Adding image_encodings header/cpp, since genmsg_cpp doesn't actually support constant string values fixing spelling Message documentation Switching IMU to sensor_msgs/Imu related to #2277 adding IMU msg Took out event_type field, as that would indeed make it more than a timestamp. adding OpenCV doc comment Rename rows,cols to height,width in Image message Adding more migration rule tests and fixing assorted rules. Added a timestamp message. (Will be used to track camera and perhaps some day hokuyo trigger times.) sensor_msgs: Updates to CameraInfo, added pixel encoding and ROI. New sensor_msgs::Image message PointCloud: * pts -> points * chan -> channels ChannelFloat32: * vals -> values sensor_msgs: Added explanation of reprojection matrix to StereoInfo. sensor_msgs: Cleaned up CompressedImage. Updated image_publisher. Blacklisted jpeg. merging in the changes to messages see ros-users email. THis is about half the common_msgs API changes sensor_msgs: Comments to better describe CameraInfo and StereoInfo. Renamed CamInfo message to CameraInfo. sensor_msgs_processImage can now process empty images update openrave and sensor_msgs octave scripts Image from image_msgs -> sensor_msgs #1661 updating review status moving LaserScan from laser_scan package to sensor_msgs package #1254 populating common_msgs
http://docs.ros.org/noetic/changelogs/sensor_msgs/changelog.html
Here are my current thoughts.... not that they necessarily mean much.

1. Whether plugingroups are in a special svn area (rather than in framework and plugins directly), we need the c-m-p functionality for at least the framework plugingroup.

2. IIUC Joe's idea was to make it so a dependency on a classloader-free plugin turned into dependencies on all its dependencies. I would like to see an extremely convincing and specific use case for this before we consider implementing it. I could easily be wrong but fear this would introduce hard-to-understand complexity with little gain.

3. I have a suspicion that many of the plugingroups will turn out to only have one plugin in them when all the irrelevant stuff is trimmed out and stuff brought in by transitive dependencies is not listed explicitly. If this turns out to be true then having more than the framework plugingroup may not add much usefulness. We'll have to see.

4. As soon as we have numerous plugin groups, we'll have the same problem we do now in that it will be hard to find the plugingroups you want.

5. I think I argued against it in the past but I'm starting to think that perhaps including a list of tags with a plugin and having the select-a-plugin functionality organized by queries on these tags might be worth investigating. You'd pick say "ejb" and see all the ejb related plugins -- openejb, openejb-builder, openejb-console-jetty/tomcat, cxf-ejb, etc etc.

6. It might be worth displaying plugin dependencies in the select-a-plugin functionality.

thanks
david jencks

On Oct 8, 2008, at 2:13 PM, Lin Sun wrote:

> See in-line below...
>
> On Wed, Oct 8, 2008 at 4:37 PM, Joe Bohn <joe.bohn@earthlink.net> wrote:
>
>>>> Thanks for making the suggestions. It is always good to hear feedback and challenge our thinking! :)
>>>
>>> Yep, I wish we had more than 4 people actively looking/discussing this :-(
>>
>> Ok ... you asked for it ;-) ... Also see my response on the other branch of this thread.
> We had over 4 people in the past... I am sure :)
>
>>>> My initial thought of grouping the plugins together was by category, however I think it has the following limitations -
>>>>
>>>> 1. A user can specify his module to any category he likes too, thus it could interfere with the resulting tree. For example, a user has a module that is also categorized as console or development tools or administration.
>>>
>>> Don't see this as an issue, since we control the default repository-list and what goes into the geronimo-plugins.xml for the server and samples. If a user created their own repo (like Liferay) then their plugins would be listed under the "Liferay Repo" instead of the "Geronimo Server Repo" and they could use whatever categories they want.
>
> The user can grow the repo-list easily by deploying an app onto the local G server. There isn't a geronimo-plugins.xml for the local geronimo server repo, but the server is capable to build one on the fly.
>
>> I see both points here. As Donald mentions, the repository should be in control of the namespace for the categories. However, that only works with an external repository. However, at the moment the assemble-a-server functions only work with the local, server repository which can include plugins from multiple external repositories. To have the assembly function work correctly to build up a server it will eventually have to let the user choose plugins and plugingroups from multiple external repositories. That should be interesting.
>
> I agree currently it is local server only, and the prob still exists with local server, as a user can install the plugin onto G server, which will make the user's plugin reside in our local repo.
>
>>>> 2. Group by category doesn't offer any integration into maven. As you know, we are using plugin groups for all of our G assemblies now.
>>>
>>> I'm questioning if this "maven integration" is worth the added source complexity. I'm starting to lean heavily towards "No" and wondering if we should remove most of the pluginprofiles for 2.2 and only keep - framework, minimal and jee5. Once we get user feedback on what groupings "are missing" then we can consider adding more.
>
> I am missing the added source complexity here. We only added very little code to support plugin group as David reminded me the function is already there. In fact, having functions divided by plugin groups allow me to clean up our long list of little G assemblies & javaee5 assemblies (there are lots of unnecessary dependencies) and only use what is really needed (others can be pulled via transitive dependencies). I think we should keep most of them as they are today and revise them upon users' request. The only ones I had debated myself if I should create them are jms, jsf and console. jms and jsf plugin groups each contain only one plugin as the other plugins can be pulled via transitive dependencies. However, that doesn't mean users know which ones to pick easily. About console plugin group, I'd be totally ok removing it when we switch to optional console.
>
>> I think this can be worth the effort if we keep things simple. Only create plugingroups when they are really necessary and leverage the groups consistently. I personally like the idea of the groups so that a user can incrementally grow function in a new or existing server in logical chunks without having to understand all of the detailed plugins that we generate.
>
> Me too.
>
>>>> 3. Group by category doesn't help with the command line install-plugin command. Currently you can install a plugin group by using "deploy install-plugin <plugingroup id>"
>>>
>>> It would have to be enhanced, which it greatly needs IMO.
>>>
>>>> 4. Group by category doesn't help with the "gshell deploy/assemble" command. A user is unlikely to have some fancy GUI-like tree in a command line env.
>>>
>>> If a user is trying to assemble from cmdline, then they will suffer and should either write a gsh script, use c-m-p or the console.
>>
>> Alternatively, part of the enhancement of the command line could be to allow the user to filter the list of plugins returned by category.
>
> That is possible to enhance, IMO.
>
>>>> With plugin groups, you can still group plugins by category. In fact, in the install plugin portlet we allow users to sort plugins by category, name, etc. I disabled the sort function in the assemble server portlet as it needs more work but the plugins are sorted by category right now.
>>>>
>>>> I think I agree with you that the console-xxx plugin groups are not that useful and I think they can be removed after we make most of the console components optional. Currently, all these console components are under application plugins, and I'm updating their names, desps, and categories to enable users to select them easily. For example, if a user wants little G tomcat + JMS, he can select web-tomcat, JMS plugin group, and Console: JMS from application plugins if he wants console support for JMS.
>>
>> I think it would make sense to have several plugingroups (or aggregate plugins) when dealing with the console extensions. One possible pattern would be to create a plugingroup for the core function and then another plugingroup which includes the core function plugingroup and the console extension. For example (PG indicates a plugingroup, P just a plugin):
>>
>> PG - JMS + Console
>>   includes: PG - JMS
>>             P - JMS console
>>
>> PG - JMS
>>   includes: P - the JMS and associated plugins that are necessary.
>>
>> Here a user could choose to include the plugingroup for JMS + Console or just the plugingroup for JMS.
>
> Joe, I had thought about this too. The other way to do things is pretty straightforward too, either pick
>
> PG - JMS + P - Console, JMS (this is only one plugin thus I don't feel we need a group for it).
>
> or
>
> PG - JMS
>
> Lin
http://mail-archives.apache.org/mod_mbox/geronimo-dev/200810.mbox/%3CF8CD7261-2C37-4377-B09E-469B8241E53C@yahoo.com%3E
Hacker News Reader in Elm

While the code for the app is freely available (just clone and go…), this README is meant to be a tutorial on how to create a simple application in Elm. Note: The icon for the app was taken from The Pictographers, who make some pretty slick icons! I assume you have Elm installed, your favorite editor configured, and a basic understanding of the Elm syntax (from Haskell or ML). If not, there are other good tutorials introducing you to the syntax.

Quickstart

If all you care about is the application, after cloning the repository, you'll need to download and install all the required modules that are used. This can be done on the command line with elm package install. Once the dependencies are installed, you can build the app with elm make Main.elm --output=elm.js. The index.html can then be opened and away you go. If you have Electron installed, you can also launch it that way: electron . .

Introduction to The Elm Architecture

Okay, with the above out of the way, let's dive into making a Hacker News reader from scratch…

Create a Basic Project

First, create a new project folder. Name it anything you like, cd into it, and install the core Elm package.

$ elm package install

And you'll need a couple other packages, too…

$ elm package install elm-lang/html

Finally, let's create a simple Hello, world! Elm file that we can build, run, and see in the browser.

module Main exposing (main)

import Html
import Html.App

main =
    Html.text "Hello, world!"

Save the above file as Main.elm, and build it with elm make Main.elm. It should successfully compile an index.html file that you can open in your browser. Let's improve the edit-compile-test loop, though, with Elm Reactor, which will auto-compile for us after we make changes and refresh the page.

$ elm reactor
Listening on

Now, open your browser to the URL. You should see Main.elm in a list of files, and your package information + dependencies on the right.
Simply click Main.elm, and Elm Reactor will recompile and open it. From here, after every change made, simply refresh the page to have it auto-recompile. Without further ado…

The Elm Architecture

If you haven't yet skimmed through the Elm Guide, it's worth doing. But, once you have the language basics down, the most important section is The Elm Architecture. In a nutshell, every Elm application is built around a Model, View, Update pattern. You define the data (Model), how it is rendered (View), and what messages can be sent to the application in order to Update it.

Currently, the main function merely returns an Html.Html.Node. This is fine if all we want is a static page. But, since we'll want a dynamic page, we need to have it – instead – return a Html.App.Program. Let's start with a simple skeleton that still outputs Hello, world!.

main : Program Never
main =
    Html.App.beginnerProgram
        { model = "Hello, world!"
        , view = Html.text
        , update = identity
        }

Simple enough, but let's take stock of what's happening:

- Our Model (data) is just a string that we'll render.
- We render it by converting it to an Html text node.
- The Update function takes the existing model and returns it.

So, while technically we're running a "dynamic" Html.App.Program, it's not going to do anything special.

A Closer Look…

While Html.App.beginnerProgram wraps some things for us, it doesn't allow us to see what's really going on. So, let's peel back a layer and see where it leads…

import Platform.Cmd as Cmd
import Platform.Sub as Sub

main : Program Never
main =
    Html.App.program
        { init = ("Hello, world!", Cmd.none)
        , view = Html.text
        , update = update
        , subscriptions = always Sub.none
        }

update : msg -> Model -> (Model, Cmd msg)
update msg model =
    (model, Cmd.none)

Okay, a lot has changed, but the output is the same… First, notice that we've imported a couple new modules: Platform.Cmd and Platform.Sub.
These two modules are at the very heart of The Elm Architecture's application Update pattern. More on that in a bit…

Next, instead of passing in model, we pass in init, which consists of both the Model and an initial Cmd (which we don't want to use yet). Also, our update function (which we've refactored out) has changed its signature as well. Not only does it take a mysterious msg parameter, which we're currently ignoring, but it also returns the model and a Cmd, just like the init. Finally, there's subscriptions. We'll get back to those later, but for now, we don't want any.

So What is Cmd?

The first part of The Elm Architecture that you need to fully understand is the Cmd type. It is defined as…

type Cmd msg

Internally, a Cmd is an operation that the Elm runtime will perform. Presumably this operation is native JavaScript, but it could also be an asynchronous operation and/or something that could fail. It then returns the result of that operation back to our application. However, the only way for our application to receive this value is via our update function. But, this poses a problem, since our update function is defined as

update : msg -> Model -> (Model, Cmd msg)

Notice that the first input to update is of type msg? This could be anything we want, but the type has to remain consistent throughout the entire program. We can't have the Elm runtime call update with a Time value from one operation, but then an Http result from another.

Now, the astute reader will notice that the Cmd type wraps our msg type. This enables us – when we perform an operation – to provide a function that converts the return value of that operation into a msg. That way, at a later point, when the operation is executed, the runtime can transform it into a msg, and then eventually pass that msg to our update function.

Let's put this into practice by defining our Msg type to just be a String.
Whenever our application receives a Msg, it updates the current model to the value of the Msg.

type alias Model = String
type alias Msg = String

Next, let's change the definition of our update function to properly accept our new Msg type, and update the model appropriately.

update : Msg -> Model -> (Model, Cmd Msg)
update new model =
    (new, Cmd.none)

Okay, now we just need to tell the Elm runtime to perform an operation that will eventually result in our update being called with a Msg. There are many ways of doing this, but for this tutorial we'll perform a Task. Here's what our current program looks like – in full – now…

module Main exposing (main)

import Html
import Html.App
import Platform.Cmd as Cmd
import Platform.Sub as Sub
import Task

type alias Model = String
type alias Msg = String

main : Program Never
main =
    Html.App.program
        { init = ("Hello, world!", changeModel "It changed!")
        , view = Html.text
        , update = update
        , subscriptions = always Sub.none
        }

update : Msg -> Model -> (Model, Cmd Msg)
update new model =
    (new, Cmd.none)

changeModel : String -> Cmd Msg
changeModel string =
    let
        onError = identity
        onSuccess = identity
    in
        Task.perform onError onSuccess (Task.succeed string)

Now, in the init of our application, we create an initial Cmd operation, which the Elm runtime will execute in the background. We did this by calling Task.perform. And the task we created to be performed is Task.succeed string. Along with the task, we tell Elm how to transform failure and success return values into a Msg. Since we know Task.succeed can't fail, and the result of the operation is a Msg already, we can use the identity function.

Now, if we run the program, we'll see that it says "Hello, world!" ever so briefly, but then quickly changes to "It changed!".

A More Complex Msg

Usually, your Msg type won't be so simple. Let's modify our Msg data type so that instead of a String, it is a Maybe.
type alias Msg = Maybe String

Now, our update function needs to understand that maybe (ha!) the Msg doesn't have anything for us…

update msg model =
    case msg of
        Just new ->
            (new, Cmd.none)

        Nothing ->
            (model, Cmd.none)

Last, let's fix our changeModel function so that it properly transforms the resulting task into our new Msg type based on whether or not the task succeeds or fails.

changeModel : String -> Cmd Msg
changeModel string =
    let
        onError = always Nothing
        onSuccess = Just
    in
        Task.perform onError onSuccess (Task.succeed string)

Excellent! If we run, we should see everything still works. And, just for kicks, let's make sure it does the right thing if the task fails. We'll do this by creating a Task that we know will fail.

Task.perform onError onSuccess (Task.fail string)

And, just as it should, the model doesn't change.

Quick Summary

Let's recap…

- We initialize our program with an initial Model and Cmd.
- A Cmd is an operation performed by the Elm runtime sometime later.
- For type safety, the result of an operation is transformed into a Msg type.
- The runtime then sends the resulting Msg to our update function.
- Most Cmd operations can succeed or fail.

So, when you see a return value from an Elm function that is a Cmd, you know that it is an operation that will be executed sometime later by the Elm runtime, and the result of which will eventually make it to your update function.

Subscriptions

Besides Cmd, another way of getting a Msg to our update function is via subscriptions (the Sub type). If you understand Cmd, though, subscriptions are a walk in the park. The Sub type represents an event that the application listens to, and the Elm runtime will forward to the update function with the data associated with that event. But, just like the results of operations, events contain data of all different types. So, when we subscribe to one, we also need to tell the Elm runtime how to transform the data of that event into our application's Msg type.
As an example, let’s modify our program to create a simple subscription that updates our Model with the current time about every second. main : Program Never main = Html.App.program { init = ("Hello, world!", Cmd.none) , view = Html.text , update = update , subscriptions = subscriptions } subscriptions : Model -> Sub Msg subscriptions model = Time.every Time.second (Just << toString) When our application begins, and whenever the model changes, the subscriptions function is called. The event we’re going to listen to is Time.every Time.second : an event that will fire once every second, and whose result is the current time. And the function we’re using to transform the event’s result into a Msg is Just << toString . When our program starts, we’ll start listening for the event, and when it trips, we’ll transform the current time into our Msg type, which will then get routed along by the runtime into our update function. That’s it. Note: if you have many events you’d like to subscribe to, use Sub.batch to aggregate multiple subscriptions into a single subscription. Summarizing The Elm Architecture - TEA is the method of building applications in Elm. - It wraps your program in the Model, View, Updatepattern. - You initialize the program with the Modeland Cmd. - You provide the program with a function to render the Model(the View). - You define a message type that is used to Updatethe Model. - A Cmdan operation that will be performed later by the Elm runtime. - A Subis a subscription to an event. - You transform operation results and event data into your message type. - The Updateis called by the Elm runtime with your transformed message. That’s it! It’s very important that you understand this moving forward. And once it "clicks", Elm is wonderful to use. 评论 抢沙发
http://www.shellsec.com/news/22809.html
Python source files use the extension .py. Type the following source code in a test.py file −

print ("Hello, Python!")

We assume that you have the Python interpreter set in the PATH variable. Now, try to run this program as follows −

Linux

$ python test.py

This produces the following result −

Hello, Python!

Windows

C:\Python35>Python test.py

This produces the following result −

Hello, Python!

Let us try another way to execute a Python script in Linux. Here is the modified test.py file −

#!/usr/bin/python3

print ("Hello, Python!")

We assume that you have the Python interpreter available in the /usr/bin directory. Now, try to run this program as follows −

$ chmod +x test.py   # Make the file executable
$ ./test.py

This produces the following result −

Hello, Python!

Here is a longer example that reads a file whose name the user supplies −

import sys

file_name = input("Enter filename: ")
if len(file_name) == 0:
    print ("Next time please enter something")
    sys.exit()
try:
    file = open(file_name, "r")
except IOError:
    print ("There was an error reading file")
    sys.exit()
file_text = file.read()
file.close()
print (file_text)

Multi-Line Statements

Statements in Python typically end with a new line. Python does, however, allow a statement to continue onto the next line with the line continuation character (\), and statements inside [], {}, or () brackets can span lines without it.

Comments in Python

# This is a comment.
# This is a comment, too.
# This is a comment, too.
# I said that already.

Using Blank Lines

A line containing only whitespace, possibly with a comment, is known as a blank line and Python totally ignores it. In an interactive interpreter session, you must enter an empty physical line to terminate a multiline statement.

Waiting for the User

The following line of the program displays the prompt and, the statement saying "Press the enter key to exit", and then waits for the user to take action −

#!/usr/bin/python3

input("\n\nPress the enter key to exit.")

Here, "\n\n" is used to create two new lines before displaying the actual line. Once the user presses the key, the program ends. This is a nice trick to keep a console window open until the user is done with an application.

Multiple Statements on a Single Line

The semicolon ( ; ) allows multiple statements on a single line given that no statement starts a new code block. Here is a sample snip using the semicolon −

import sys; x = 'foo'; sys.stdout.write(x + '\n')

Multiple Statement Groups as Suites

Groups of individual statements that make up a single code block are called suites in Python.

Command Line Arguments

Many programs can be run to provide you with some basic information about how they should be run. Python enables you to do this with -h −

$ python -h
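Beyond -h, a script can inspect its own command-line arguments through sys.argv. The small sketch below is my own addition, not part of the original tutorial −

import sys

def describe_args(argv):
    # argv[0] is the script name; everything after it is a user-supplied argument
    return "script: %s, args: %d" % (argv[0], len(argv) - 1)

if __name__ == "__main__":
    print(describe_args(sys.argv))

Running it as python test.py one two would report the script name and two arguments.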
https://evileg.com/en/post/333/
0 Members and 1 Guest are viewing this topic. What are you stuck on, what is your approach? Show your steps? There are subsequent questions too so perhaps all the info is related to those q's. From your statement, "you don't need a calculator" then I'm starting to think that the impedance approach is not the correct way to go about this question. If both capacitors have the same value, then at all frequencies they have the same reactance and the same impedance. Continue from there. Do they still teach engineers how to use a Smith Chart these days or is it now all based around computers? Yep, I have done resistance networks. Which is why my approach was to use the impedances to go for the voltage divider. I'm a little confused though, is this the (a) correct approach? So I have Ztotal = 50 - j5 (+ C)then Vo(t) = Vs(t) * C/(50-j5)+CBut I'm not sure where to go from here (if this is correct) import sympy, numpyfrom sympy import I as jfrom numpy import pi# Define symbolsR,C,Co,w,Vs = sympy.symbols('R C Co w Vs')# Impedances across the two capsZc = 1./(j*w*C)Zco = 1./(j*w*Co)# Total impedanceZ = R+Zc+Zco# CurrentIz = Vs/Z# Voltage across ZcoVo = Zco*Iz# VcVc = Vs-Iz*R# Solve for C at Vo = Vc/2 | Co=0.2e-6; R=50; w=10MHz; Vs=2.0solution = sympy.solve(sympy.Eq(Vo, Vc/2.0).subs({Co: 0.2e-6, Vs: 2.0, R: 50.0, w: 10e6*2*numpy.pi}), {C})print solution [2.00000000000000e-7]
https://www.eevblog.com/forum/beginners/complex-impedances/msg677616/
Out of date: This is not the most recent version of this page. Please see the most recent version

DigitalIn

Use the DigitalIn class to read the value of a digital input pin. Any of the numbered mbed pins can be used as a DigitalIn.

API summary

#include "mbed.h"

DigitalIn mypin(SW2); // change this to the button on your board
DigitalOut myled(LED1);

int main() {
    // check mypin object is initialized and connected to a pin
    if(mypin.is_connected()) {
        printf("mypin is connected and initialized! \n\r");
    }

    // Optional: set mode as PullUp/PullDown/PullNone/OpenDrain
    mypin.mode(PullNone);

    // press the button and see the console / led change
    while(1) {
        printf("mypin has value : %d \n\r", mypin.read());
        myled = mypin; // toggle led based on value of button
        wait(0.25);
    }
}

Related API

To handle an interrupt, see InterruptIn.

Examples of logical functions - boolean logic NOT, AND, OR, XOR:

#include "mbed.h"

DigitalIn a(D0);
DigitalIn b(D1);

DigitalOut z_not(LED1);
DigitalOut z_and(LED2);
DigitalOut z_or(LED3);
DigitalOut z_xor(LED4);

int main() {
    while(1) {
        z_not = !a;
        z_and = a && b;
        z_or = a || b;
        z_xor = a ^ b;
    }
}
https://docs.mbed.com/docs/mbed-os-api-reference/en/latest/APIs/io/DigitalIn/
Question

Emergency room arrivals in a large hospital showed the statistics below for 2 months. At α = .05, has the variance changed? Show all steps clearly, including an illustration of the decision rule.
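The exercise's data is in an image that did not survive extraction, but the method it asks for (a two-sample F test for equal variances) can still be sketched. The sample arrays below are made-up illustrative numbers, not the exercise's data, and the critical value assumed is F for df = (9, 9) at a two-tailed α = .05.

```python
# Hypothetical illustration only: the question's actual data is in an image
# that did not survive extraction. Two months of daily ER arrival counts:
month1 = [12, 15, 11, 19, 14, 13, 17, 16, 12, 18]
month2 = [14, 13, 15, 14, 16, 15, 13, 14, 15, 14]

def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

s1, s2 = sample_variance(month1), sample_variance(month2)

# Two-tailed F test for equal variances: put the larger variance on top
F = max(s1, s2) / min(s1, s2)
df1 = len(month1) - 1
df2 = len(month2) - 1

# Decision rule at alpha = .05 (two-tailed): reject H0 if F > F_{.025, df1, df2}.
# For df1 = df2 = 9 the critical value is about 4.03 (from an F table).
F_crit = 4.03
print(F, "reject H0" if F > F_crit else "fail to reject H0")
```

With these toy numbers F is well above the critical value, so the decision rule would reject H0 and conclude the variance has changed; the real exercise follows the same steps with its own data.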
http://www.solutioninn.com/emergency-room-arrivals-in-a-large-hospital-showed-the-statistics
Introduction: Christmas Tree PCB

This is a fun way to light up a Christmas card. It will help if you have some experience in any of the following areas: chemistry, PCB etching, AVR programming, writing simple C programs, and/or soldering (surface mount soldering). If you don't, then the trickiest part will be the soldering, since it is tougher than through-hole components. If you do know what you are doing, it only takes half an hour (tops) to finish each card. Here we go!

Step 1: PCB Etching

You will want to be in a well ventilated area (preferably outside) and try not to breathe the gases or let any spills find your skin or clothes... with that said, it isn't the end of the world or your life if that does happen. Just wash thoroughly with water for a couple of minutes. I personally use a well ventilated indoor room (a kitchen would work) with a sink nearby (a porcelain sink, not metal, so it doesn't etch the sink).

Materials / equipment:
- pyrex baking dish
- acetone
- HCl (muriatic acid)
- hydrogen peroxide (3%)
- stir plate [optional]
- sink
- needlenose pliers or tongs or something similar
- Cu coated fiberglass board with pattern on

The chemicals used are 2 parts hydrogen peroxide (3%) and 1 part muriatic acid (I think 68% HCl? whatever you find at most hardware stores). Fill the pyrex dish with enough of this 2:1 solution to cover the PCB you want to etch (getting to this part). Let it sit / stir for 5-10 minutes and use the pliers to remove the board. Wash with water. Then get a paper towel, dab on some acetone, and wipe off the permanent marker mask. Done!

Step 2: Draw the Design

Once you know what you plan on drawing, use a permanent marker (Sharpie) to draw it on your board. The Sharpie will act as a mask during the etching process and can later be cleanly removed with acetone.
The final design that I used included 6 LEDs, 4 resistors, an ATtiny13a, and a coin cell to power it all. A schematic is also included in the .zip file attached.

Components List: (all components are surface mount)

Step 3: Soldering

This is the toughest step just because it is so hard to work with such small parts. I despise myself for using 0402 components. I will never do that again (hopefully). The 0603 SMD (surface mount device) components were just fine to work with, though. To be even safer, I would recommend the 0805 parts. See the previous step for a parts listing.

Make sure you know which side of the LED connects to - and which to +. For SMD LEDs, there is a small green dot on the top of the LED that indicates that side connects to the - lead. There is also an arrow, generally on the bottom, that points from the + to the - side. It doesn't matter for the resistors. Also, if your SMD ATtiny13a doesn't come with a dot on it to denote pin 1, figure out which way the label reads (Atmel, etc.). Whatever it says, you just need to know the direction the words go in. Pin 1 will be the one below the first letter (bottom left as you are reading it). It can be tough with these small guys if you don't have a magnifying glass. Make sure to be extra careful with the microcontroller not to apply too much heat and ruin it. Also, be careful not to connect pins to each other.

Step 4: Prepare for Programming

- Reset (pin 1)
- ground (pin 4)
- Vcc (pin 8)
- SCK (pin 7)
- MISO (pin 6)
- MOSI (pin 5)

You will need an AVR programmer (again, I used the one from Adafruit, the USBtiny). If you are using such a programmer, there is either (or both) a 6-pin and a 10-pin ISP connector. You can use either, but you only need 6 of the wires, those being the same 6 listed above. I have included a picture of the layout of each of these ISP connectors for reference so you know how to wire them up to the circuit board.
I would suggest using a target board or breadboard, but just stray wires will work fine if you want to jam them into the connector. Also, if you are using your programmer to supply power to the chip, you will want to bring the 5V that it supplies down to something closer to 2.8V so that you don't blow the LED on the top. To do this, just attach a resistor (~240 ohm) in series on either the Vcc or ground line between the programmer and the microcontroller / PCB.

Step 5: Programming & Code

Hook up the programmer to match the configuration in the previous step. You will need the .hex file (I attached it in the .zip file so you can download it). Or you can also compile it yourself if you want to modify the code. If you don't know what software to use, you can always use the free Atmel Studio 6 software.

The code basically uses XOR to flip / flop the state of the LED. It chooses which LED to flip based on a random number modulus 3. This way LEDs can be on at the same time and each have their own random chance to stay on. It makes a flip / flop decision every x seconds, where x is another random number. I know it is pointless, but it also scrambles the seed every time it loops. So it still uses the same random number sequence every time it gets turned on. A better way would be to read an input off of pin 3 or 2 (PB3 or PB4, respectively). The code is pretty straightforward:

```c
#include <avr/io.h>
#include <util/delay.h>
#include <stdlib.h>

int main(void)
{
    DDRB  = 1 << PB0;
    DDRB |= 1 << PB1;
    DDRB |= 1 << PB2;
    PORTB = 0;
    int n = 0;
    while (1) {
        n = ((n + 57) * 13) % 10057;
        srand(n);
        int r = rand() % 3;
        if (r == 0) PORTB ^= 1 << PB0;
        if (r == 1) PORTB ^= 1 << PB1;
        if (r == 2) PORTB ^= 1 << PB2;
        int r3 = rand() % 10;
        for (int t = 0; t < r3; t++) {
            _delay_ms(75);
        }
    }
    return 0;
}
```

Attachments

Step 6: Uploading Code

Assuming you have downloaded what you need for avrdude, let's open up a command prompt window (for you Windows users, that is).
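The ~240 ohm figure above can be sanity-checked with Ohm's law. This is my own back-of-the-envelope sketch, assuming the programmer supplies 5 V and the LED side should sit near 2.8 V at a few milliamps:

```python
# Sanity check of the ~240 ohm series resistor suggested above.
# Assumptions (mine, not the author's): programmer supplies 5 V,
# and the LED side should see roughly 2.8 V.
V_supply = 5.0
V_target = 2.8
R = 240.0

I = (V_supply - V_target) / R    # Ohm's law: current through the resistor
print(round(I * 1000, 2), "mA")  # roughly 9 mA, a safe current for a small LED
```

So the resistor drops the extra 2.2 V while limiting the current to single-digit milliamps, which is comfortably below a typical SMD LED's maximum.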
If you are using the USBtinyISP that Adafruit sells, use the command exactly as I have it (skip down); otherwise you may have to find out what your programmer is called. To figure out what avrdude calls your programmer, type in:

avrdude -c ?

Once you have found your programmer, you will also need to know what microcontroller you are using. If you are using the attiny13 like I am, use the command I have; otherwise you will have to find it by searching the list generated by typing in:

avrdude -p ?

To navigate to where your .hex file is, use the change directory command "cd". For example, let's say you were in the directory "C:\Users\" and you wanted to get to the folder Documents (which is in the folder "Sasha" [my name], which is in Users); you could either use one command "cd Sasha\Documents" or use two consecutive commands "cd Sasha" + "cd Documents" to get there. I'll let you figure it out from there. It is helpful to know the command "ls", which lists what files and folders are in your current directory. Once you have navigated to the directory with xmas_card.hex you can proceed to the avrdude command that comes next. You can see the directory for my computer in the top line of the picture on this step. If you have named the .hex file something else, you should change your command to match that.

Assuming you are in the right directory, have done everything above, and have plugged in the programmer properly, execute the following command:

avrdude -c usbtiny -p t13 -U flash:w:xmas_card.hex

You may now desolder the wires we connected for the programmer. This is fairly straightforward, I believe.

Step 7: Plug and Play

To get a pdf of this project, go to my webpage here.
https://www.instructables.com/id/Christmas-Tree-PCB/
BootBot is a simple but powerful JavaScript Framework to build Facebook Messenger's Chat bots.

💬 Questions / Comments? Join our Slack channel!

Features

- Helper methods to send any type of message supported by Facebook.
- Subscribe to a particular type of message, or to certain keywords sent by the user.
- Start conversations, ask questions and save important information in the context of the conversation.
- Organize your code in modules.
- Send automatic or manual typing indicators.
- Set your bot's properties, such as a persistent menu, a greeting text or a get started CTA.

Usage

```
$ npm install bootbot --save
```

```js
'use strict';
const BootBot = require('bootbot');

const bot = new BootBot({
  accessToken: 'FB_ACCESS_TOKEN',
  verifyToken: 'FB_VERIFY_TOKEN',
  appSecret: 'FB_APP_SECRET'
});

bot.hear('hello', (payload, chat) => {
  chat.say('Hello, human friend!');
});

bot.start();
```

Video Example

Creating a Giphy Chat Bot in 3 minutes:

Getting Started

- Install BootBot via NPM, create a new index.js, require BootBot and create a new bot instance using your Facebook Page's / App's accessToken, verifyToken and appSecret:

Note: If you don't know how to get these tokens, take a look at Facebook's Quick Start Guide or check out this issue.

```js
// index.js
'use strict';
const BootBot = require('bootbot');

const bot = new BootBot({
  accessToken: 'FB_ACCESS_TOKEN',
  verifyToken: 'FB_VERIFY_TOKEN',
  appSecret: 'FB_APP_SECRET'
});
```

- Subscribe to messages sent by the user with the bot.on() and bot.hear() methods:

```js
bot.on('message', (payload, chat) => {
  const text = payload.message.text;
  console.log(`The user said: ${text}`);
});

bot.hear(['hello', 'hi'], (payload, chat) => {
  console.log('The user said "hello" or "hi"');
});
```

- Reply to user messages using the chat object:

```js
bot.hear('hello', (payload, chat) => {
  chat.say('Hello, human friend!');
});

bot.hear('help', (payload, chat) => {
  chat.say({
    text: 'What do you need help with?',
    buttons: [
      { type: 'postback', title: 'Settings', payload: 'HELP_SETTINGS' },
      { type: 'postback', title: 'FAQ', payload: 'HELP_FAQ' }
    ]
  });
});
```

- Start a conversation and keep the user's answers in context:

```js
bot.hear('order', (payload, chat) => {
  chat.conversation((convo) => {
    convo.ask(`What's your name?`, (payload, convo) => {
      const text = payload.message.text;
      convo.set('name', text);
      convo.say(`Welcome, ${text}!`).then(() => convo.end());
    });
  });
});
```

- Set up webhooks and start the express server:

```js
bot.start();
```

- Start up your bot by running node:

```
$ node index.js
> BootBot running on port 3000
> Facebook Webhook running on localhost:3000/webhook
```

- If you want to test your bot locally, install a localhost tunnel like ngrok and run it on your bot's port:

```
$ ngrok http 3000
```

Then use the provided HTTPS URL to config your webhook on Facebook's Dashboard. For example if the URL provided by ngrok is, use.
Documentation

BootBot Class

new BootBot(options)

Creates a new BootBot instance. Instantiates the new express app and all required webhooks. The options param must contain all tokens and the app secret of your Facebook app. Optionally, set broadcastEchoes to true if you want the messages your bot sends to be echoed back to it (you probably don't need this feature unless you have multiple bots running on the same Facebook page). If you want to specify a custom endpoint name for your webhook, you can do it with the webhook option.

.start([ port ])

Starts the express server on the specified port. Defaults port to 3000.

.close()

Closes the express server (calls .close() on the server instance).

Receive API

Use these methods to subscribe your bot to messages, attachments or anything the user might send.

.on(event, callback)

Subscribe to an event emitted by the bot, and execute a callback when those events are emitted. Available events include message, postback, quick_reply and attachment. You can also subscribe to specific postbacks and quick replies by using a namespace. For example postback:ADD_TO_CART subscribes only to the postback event containing the ADD_TO_CART payload. If you want to subscribe to specific keywords on a message event, see the .hear() method below.

When these events occur, the specified callback will be invoked with 3 params: (payload, chat, data)

.on() examples:

```js
bot.on('message', (payload, chat) => {
  chat.say('Thanks for your message!');
});

bot.on('attachment', (payload, chat) => {
  chat.say('Nice attachment!');
});

bot.on('postback:ADD_TO_CART', (payload, chat) => {
  chat.say('Added to cart!');
});

bot.on('message', (payload, chat, data) => {
  if (data.captured) { return; } // already handled by a .hear() subscription
  chat.say('Sorry, I did not understand that.');
});
```
.hear() examples: bot;bot;bot; Note that if a bot is subscribed to both the message event using the .on() method and a specific keyword using the .hear() method, the event will be emitted to both of those subscriptions. If you want to know if a message event was already captured by a different subsciption, you can check for the data.captured flag on the callback. Send APISend API BootBot provides helper methods for every type of message supported by Facebook's Messenger API. It also provides a generic sendMessage method that you can use to send a custom payload. All messages from the Send API return a Promise that you can use to apply actions after a message was successfully sent. You can use this to send consecutive messages and ensure that they're sent in the right order. Important Note:Important Note: The Send API methods are shared between the BootBot, Chat and Conversation instances, the only difference is that when you use any of these methods from the Chat or Conversation instances, you don't have to specify the userId. Example - These two methods are identical: bot;// is the same as...bot; You'll likely use the Send API methods from the Chat or Conversation instances (ex: chat.say() or convo.say()), but you can use them from the BootBot instance if you're not in a chat or conversation context (for example, when you want to send a notification to a user). .say() Send a message to the user. The .say() method can be used to send text messages, button messages, messages with quick replies or attachments. If you want to send a different type of message (like a generic template), see the Send API method for that specific type of message. The message param can be a string an array, or an object: - If messageis a string, the bot will send a text message. - If messageis an array, the .say()method will be called once for each element in the array. 
- If message is an object, the message type will depend on the object's format:

```js
// Send a text message
chat.say('Hello world!');

// Send a text message with quick replies
chat.say({
  text: 'Favorite color?',
  quickReplies: ['Red', 'Green', 'Blue']
});

// Send a button template
chat.say({
  text: 'How can I help you?',
  buttons: [
    { type: 'postback', title: 'Settings', payload: 'HELP_SETTINGS' },
    { type: 'postback', title: 'FAQ', payload: 'HELP_FAQ' }
  ]
});

// Send a list template (see .sendListTemplate() below)
chat.sendListTemplate(elements, buttons);

// Send a generic template (see .sendGenericTemplate() below)
chat.sendGenericTemplate(elements);

// Send an attachment
chat.say({
  attachment: 'image',
  url: 'http://example.com/image.png'
});

// Passing an array will make subsequent calls to the .say() method
// For example, calling:
chat.say(['Hello', 'How are you?']);
// is the same as:
chat.say('Hello').then(() => chat.say('How are you?'));
```

The options param can contain additional settings, such as the typing option (see .sendTypingIndicator() below).

.sendTextMessage()

The text param must be a string containing the message to be sent. The quickReplies param can be an array of strings or quick_reply objects. The options param is identical to the options param of the .say() method.

.sendButtonTemplate()

The text param must be a string containing the message to be sent. The buttons param can be an array of strings or button objects. The options param is identical to the options param of the .say() method.

.sendGenericTemplate()

The elements param must be an array of element objects. The options param extends the options param of the .say() method with an imageAspectRatio property.

.sendListTemplate()

The elements param must be an array of element objects. The buttons param can be an array with one element: a string or a button object. The options param extends the options param of the .say() method with a topElementStyle property.

.sendTemplate()

Use this method if you want to send a custom template payload, like a receipt template or an airline itinerary template. The options param is identical to the options param of the .say() method.

.sendAttachment()

The type param must be 'image', 'audio', 'video' or 'file'. The url param must be a string with the URL of the attachment. The quickReplies param can be an array of strings or quick_reply objects. The options param is identical to the options param of the .say() method.

.sendAction()

The action param must be 'mark_seen', 'typing_on' or 'typing_off'. To send a typing indicator in a more convenient way, see the .sendTypingIndicator method.
The options param is identical to the options param of the .say() method.

.sendMessage()

Use this method if you want to send a custom message object. The options param is identical to the options param of the .say() method.

.sendTypingIndicator()

Convenient method to send a typing_on action and then a typing_off action after milliseconds to simulate that the bot is actually typing. Max value is 20000 (20 seconds). You can also use this method via the typing option (see the .say() method).

.getUserProfile()

This method is not technically part of the "Send" API, but it's listed here because it's also shared between the bot, chat and convo instances. Returns a Promise that contains the user's profile information.

```js
bot.getUserProfile(userId).then((user) => {
  console.log(`Hello, ${user.first_name}!`);
});
```

Conversations

Conversations provide a convenient method to ask questions and handle the user's answers. They're useful when you want to set up a flow of different questions/answers, like an onboarding process or when taking an order, for example. Conversations also provide a method to save the information that you need from the user's answers, so the interaction is always in context. Messages sent by the user won't trigger a global postback, attachment or quick_reply event if there's an active conversation with that user. Answers must be managed by the conversation.

bot.conversation()

Starts a new conversation with the user. The factory param must be a function that is executed immediately, receiving the convo instance as its only param:

```js
bot.on('hello', (payload, chat) => {
  chat.conversation((convo) => {
    // convo is available here...
    convo.ask( ... );
  });
});
```

convo.ask(question, answer, [ callbacks, options ])

If question is a string or an object, the .say() method will be invoked immediately with that string or object; if it's a function, it will also be invoked immediately with the convo instance as its only param.
The answer param must be a function that receives the payload, convo and data params (similar to the callback function of the .on() or .hear() methods, except it receives the convo instance instead of the chat instance). The answer function will be called whenever the user replies to the question with a text message or quick reply.

The callbacks array can be used to listen for specific types of answers to the question. You can listen for postback, quick_reply and attachment events, or you can match a specific text pattern. See the example below.

The options param is identical to the options param of the .say() method.

convo.ask() example:

```js
const question = {
  text: `What's your favorite color?`,
  quickReplies: ['Red', 'Green', 'Blue']
};

const answer = (payload, convo) => {
  const text = payload.message.text;
  convo.say(`Oh, you like ${text}!`).then(() => convo.end());
};

const callbacks = [
  {
    event: 'quick_reply',
    callback: () => { /* User replied using a quick reply */ }
  },
  {
    event: 'attachment',
    callback: () => { /* User replied with an attachment */ }
  },
  {
    pattern: ['black', 'white'],
    callback: () => { /* User said "black" or "white" */ }
  }
];

const options = {
  typing: true // Send a typing indicator before asking the question
};

convo.ask(question, answer, callbacks, options);
```

convo.set(property, value)

Save a value in the conversation's context. This value will be available in all subsequent questions and answers that are part of this conversation, but the values are lost once the conversation ends.

```js
convo.set('favoriteColor', 'blue');
```

convo.get(property)

Retrieve a value from the conversation's context.

convo.end()

Ends a conversation, giving control back to the bot instance. All .on() and .hear() listeners are now back in action. After you end a conversation, the values that you saved using the convo.set() method are lost. You must call convo.end() when you no longer wish to interpret the user's messages as answers to one of your questions. If you don't, and a message is received with no answer callback listening, the conversation will be ended automatically.

Modules

Modules are simple functions that you can use to organize your code in different files and folders.
.module(factory)

The factory param is a function that gets called immediately and receives the bot instance as its only parameter. For example:

```js
// help-module.js
module.exports = (bot) => {
  bot.hear('help', (payload, chat) => {
    // Send the help menu to the user...
  });
};

// index.js
const helpModule = require('./help-module');
bot.module(helpModule);
```

Take a look at the examples/module-example.js file for a complete example.

Messenger Profile API

.setGreetingText(text)

Set a greeting text for new conversations. The Greeting Text is only rendered the first time the user interacts with the Page on Messenger.

Localization support: text can be a string containing the greeting text, or an array of objects to support multiple locales. For more info on the format of these objects, see the documentation.

.setGetStartedButton(action)

React to a user starting a conversation with the bot by clicking the Get Started button. If action is a string, the Get Started button postback will be set to that string. If it's a function, that callback will be executed when a user clicks the Get Started button.

.deleteGetStartedButton()

Removes the Get Started button call to action.

.setPersistentMenu(buttons, [ disableInput ])

Creates a Persistent Menu that is available at any time during the conversation. The buttons param can be an array of strings, button objects, or locale objects. If disableInput is set to true, it will disable user input in the menu. The user will only be able to interact with the bot via the menu, postbacks, buttons and webviews.

Localization support: if buttons is an array of objects containing a locale attribute, it will be used as-is, expecting it to be an array of localized menus. For more info on the format of these objects, see the documentation.

.deletePersistentMenu()

Removes the Persistent Menu.

Bypassing Express

You may only want to use BootBot for the Facebook-related config and the simple-to-use Send API features, but handle routing from somewhere else.
Or there may be times when you want to send a message out of band, for example if you get a postback callback and need to end a conversation flow immediately. Or maybe you don't want to use express but a different HTTP server.

.handleFacebookData(data)

Use this to send a message from a parsed webhook payload directly to your bot.

```js
const linuxNewsBot = new BootBot(linuxArgz);
const appleNewsBot = new BootBot(appleArgz);
const windowsNewsBot = new BootBot(windowsArgz);

myNonExpressRouter.post('/webhook', (req, res) => {
  // Route the parsed webhook payload to the right bot yourself
  linuxNewsBot.handleFacebookData(req.body);
  res.sendStatus(200);
});
```

Examples

Check the examples directory to see more demos of:

- An echo bot
- A bot that searches for random gifs
- An example conversation with questions and answers
- How to organize your code using modules
- How to use the Messenger Profile API to set a Persistent Menu or a Get Started CTA
- How to get the user's profile information

To run the examples, make sure to complete the examples/config/default.json file with your bot's tokens, then cd into the examples folder and run the desired example with node. For example:

```
$ cd examples
$ node echo-example.js
```

Credits

Made with 🍺 by Maxi Ferreira - @Charca

License

MIT
https://www.npmjs.com/package/bootbot-ts
A split pane for dash based on react split pane.

Project description

Dash Split Pane

A Dash Split-Pane component; it can be nested or split vertically or horizontally! This is based on React Split Pane:

Usage

```python
import dash_split_pane
from dash import html  # dash_html_components in older Dash versions

dash_split_pane.DashSplitPane(
    children=[html.Div("LEFT PANE"), html.Div("RIGHT PANE")],
    id="splitter",
    split="vertical",
    size=50,
)
```

All the props from react-split-pane are exposed, except for defaultSize (only size needs to be used). The size property will be updated in Dash when a drag is complete. For more information see react-split-pane.
https://pypi.org/project/dash-split-pane/
1. Installation

Environment: CentOS 7

1. Uninstall old versions

Older versions of Docker were called docker or docker-engine. If you have installed these programs, uninstall them along with their related dependencies.

```shell
sudo yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine
```

2. Install using a repository

Note: there are three official installation methods. Here we choose the most common one, installing from a repository.

The Docker repository needs to be set up before Docker Engine is installed on a new host for the first time. Docker can then be installed and updated from the repository.

```shell
sudo yum install -y yum-utils

# Note: the official repo can be very slow; you can replace it with Alibaba's mirror
sudo yum-config-manager \
    --add-repo \
    #

# We use Alibaba's
sudo yum-config-manager --add-repo

# Install (if the source is not changed, the download will be very slow)
sudo yum install docker-ce docker-ce-cli containerd.io
```

3. Start and verify

```shell
systemctl start docker    # start Docker
systemctl status docker   # view status
docker version            # view version
docker run hello-world    # test
```

4. Set up a registry mirror (Alibaba Cloud)

Note: everyone has their own accelerator address. Open the Alibaba Cloud website -> Console -> search for Container Registry -> find the image accelerator in the lower right corner.

```shell
sudo mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": [" "]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
docker run hello-world
```

5. Uninstall

```shell
# Uninstall the Docker Engine, CLI and containerd packages
sudo yum remove docker-ce docker-ce-cli containerd.io

# Images, containers, volumes, and custom configuration files on the host
# are not removed automatically. To delete all images, containers, and volumes:
sudo rm -rf /var/lib/docker
```

2. Docker's three components

1. Repository

A repository is a place where images are stored centrally.
Note: there is a difference between a repository and a registry server. A registry server often hosts multiple repositories, and each repository contains multiple images; each image has a different tag.

Repositories come in two forms: public and private. The largest public registry is Docker Hub ( ), which stores a huge number of images for users to download. Domestic public registries include Alibaba Cloud and NetEase Cloud.

2. Image

An image is a read-only template used to create Docker containers. One image can create many containers. The relationship between containers and images is similar to that between objects and classes in object-oriented programming.

3. Container

A container is an application, or group of applications, that runs independently; it is a running instance created from an image. Containers can be created, started, stopped and deleted. Each container is isolated from the others to guarantee a secure platform. You can think of a container as a simplified Linux environment (including root user permissions, process space, user space, network space, etc.) plus the applications running in it. The definition of a container is almost the same as that of an image: it is also a unified view of a stack of layers. The only difference is that the top layer of a container is readable and writable.

3. A brief look at the underlying principles

1. How does Docker work?

Docker is a client-server system. The Docker daemon runs on the host, and clients connect to it through a socket. The daemon receives commands from the client and manages the containers running on the host.

2. Why is Docker faster than a VM?

(1) Docker has fewer abstraction layers than a virtual machine. Because Docker does not need a hypervisor to virtualize hardware resources, programs running in a Docker container use the hardware resources of the physical machine directly.
Therefore, Docker has clear advantages in CPU and memory utilization.

(2) Docker uses the host's kernel instead of a Guest OS. When creating a new container, Docker does not need to load an operating system kernel the way a virtual machine does, which avoids the time-consuming and resource-consuming process of locating and booting a guest kernel. Creating a new virtual machine requires the hypervisor to load a Guest OS, which is a minute-level process; because Docker uses the host's operating system directly, creating a Docker container takes only a few seconds.

4. Common commands

1. Help commands

```shell
docker version   # view docker version information
docker info      # detailed information
docker --help    # command help
```

2. Image commands

(1) List local images

```shell
docker images [OPTIONS] [REPOSITORY[:TAG]]

# OPTIONS:
#   -a: list all local images (including intermediate image layers)
#   -q: show only image IDs
#   --digests: show image digests
#   --no-trunc: show full image information

# Column meanings:
#   REPOSITORY: the repository the image comes from
#   TAG: the image's tag
#   IMAGE ID: the image's ID
#   CREATED: when the image was created
#   SIZE: the image's size
```

The same repository can have multiple tags, representing different versions. We use REPOSITORY:TAG to identify a specific image. If we do not specify a version tag, for example if we only write ubuntu, Docker will use the ubuntu:latest image by default.
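The defaulting rule just described (ubuntu expands to ubuntu:latest, and official images live under docker.io/library/) can be captured in a few lines. This is my own illustration of the naming convention, not part of the tutorial, and it ignores private registries and registry ports:

```python
def normalize_image_name(name):
    """Expand a short image name the way the docker CLI does:
    'mysql' -> 'docker.io/library/mysql:latest'.
    Illustration only: real resolution also handles private registries."""
    # Split off an explicit tag, if any (ignoring registry ports for simplicity)
    if ':' in name.rsplit('/', 1)[-1]:
        repo, tag = name.rsplit(':', 1)
    else:
        repo, tag = name, 'latest'
    # Official images live in the 'library' namespace on docker.io
    if '/' not in repo:
        repo = 'library/' + repo
    if not repo.startswith('docker.io/'):
        repo = 'docker.io/' + repo
    return f'{repo}:{tag}'

print(normalize_image_name('mysql'))      # docker.io/library/mysql:latest
print(normalize_image_name('mysql:5.7'))  # docker.io/library/mysql:5.7
print(normalize_image_name('ubuntu'))     # docker.io/library/ubuntu:latest
```

This is why the pull output in the next section reports docker.io/library/mysql:latest even though the command was simply docker pull mysql.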
(2) Search for images

```shell
docker search [OPTIONS] <image name>        # searches on Docker Hub
docker search mysql --filter=STARS=3000     # search for images with 3000 stars or more

# OPTIONS:
#   --no-trunc: display the full image description
#   -s: list images with no fewer than the specified number of stars
#   --automated: list only images of type "automated build"
```

(3) Pull images

```shell
docker pull <image name>[:TAG]
# If you do not write the TAG, latest is used by default, pulled through the
# Alibaba mirror we configured.
```

```shell
docker pull mysql   # get the latest version of mysql
Using default tag: latest
latest: Pulling from library/mysql
852e50cd189d: Pull complete   # layered download
29969ddb0ffb: Pull complete
a43f41a44c48: Pull complete
5cdd802543a3: Pull complete
b79b040de953: Pull complete
938c64119969: Pull complete
7689ec51a0d9: Pull complete
a880ba7c411f: Pull complete
984f656ec6ca: Pull complete
9f497bce458a: Pull complete
b9940f97694b: Pull complete
2f069358dc96: Pull complete
Digest: sha256:4bb2e81a40e9d0d59bd8e3dc2ba5e1f2197696f6de39a91e90798dd27299b093
Status: Downloaded newer image for mysql:latest
docker.io/library/mysql:latest
```

Note: docker pull mysql is equivalent to docker pull docker.io/library/mysql:latest.

```shell
docker pull mysql:5.7   # download the specified version
5.7: Pulling from library/mysql
852e50cd189d: Already exists   # union file system: existing layers are not downloaded again
29969ddb0ffb: Already exists
a43f41a44c48: Already exists
5cdd802543a3: Already exists
b79b040de953: Already exists
938c64119969: Already exists
7689ec51a0d9: Already exists
36bd6224d58f: Pull complete
cab9d3fa4c8c: Pull complete
1b741e1c47de: Pull complete
aac9d11987ac: Pull complete
Digest: sha256:8e2004f9fe43df06c3030090f593021a5f283d028b5ed5765cc24236c2c4d88e
Status: Downloaded newer image for mysql:5.7
docker.io/library/mysql:5.7
```

(4) Delete images

```shell
# Delete a single image
docker rmi -f <image name or ID>   # if no TAG is given, latest is deleted by default

# Delete multiple images
docker rmi -f name1 name2 ...
docker rmi -f id1 id2 ...
# Note: names and IDs cannot be mixed in one command

# Delete all images
docker rmi -f $(docker images -aq)
```

3. Container commands

A container can only be created from an image; this is the fundamental premise. (Download a CentOS image as a demo.)

(1) Create and start a container (interactive)

```shell
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
```
docker run -it --name mycentos centos // If you do not specify an alias, the system automatically assigns it # OPTIONS Description: # --Name: specifies a name for the container # -d: Run the container in the background and return the container ID, that is, start the daemon container # -i: Run the container in interactive mode, usually with - t # -t: Reassign a pseudo input terminal to the container, usually in conjunction with - i # -P: Random port mapping # -p: The specified port mapping has the following four formats ip:hostPort:containerPort ip::containerPort hostPort:containerPort containerPort # test [root@iz2zeaj5c9isqt1zj9elpbz ~]# docker run -it centos /bin/bash # Start and enter the container [root@783cb2f26230 /]# ls # View in container bin dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var [root@783cb2f26230 /]# exit # Exit container exit [root@iz2zeaj5c9isqt1zj9elpbz ~]# (2) Lists all currently running containers docker ps [OPTIONS] # Without OPTIONS, only running containers are listed # OPTIONS description (common): # -a: Lists all currently running containers and those that have been running in history # -l: Displays recently created containers # -n: Displays the last n containers created # -q: In silent mode, only the container number is displayed # --No TRUNC: do not truncate output (3) Exit container exit // Close and exit the container directly Reopen a terminal, execute docker ps -l, the container information we just created will be returned, and the STATUS will prompt that it has exited. So can you quit temporarily and come back later without closing the container interactively? Ctrl + P + Q # After execution, we will exit the container and return to the host. Using docker ps -l, we will find that the STATUS of the container we just exited is Up. (4) Start container docker start [OPTIONS] CONTAINER [CONTAINER...] 
# Multiple containers can be started at the same time, and container names and IDs can be mixed
# OPTIONS description (common):
# -i: interactive mode; only one container can be entered this way
Enter the container above, which we exited but which is still alive
docker start -i 186ae928f07c
(5) Restart container
docker restart [OPTIONS] CONTAINER [CONTAINER...]
# OPTIONS description:
# -t: seconds to wait for a stop before killing the container (default 10)
(6) Stop container
docker stop [OPTIONS] CONTAINER [CONTAINER...]
# OPTIONS description:
# -t: seconds to wait for a stop before killing (default 10)
docker kill [OPTIONS] CONTAINER [CONTAINER...] // Forced shutdown (equivalent to pulling the plug)
# OPTIONS description:
# -s: signal sent to the container (default "KILL")
(7) Delete container
docker rm [OPTIONS] CONTAINER [CONTAINER...]
# OPTIONS description:
# -f: force deletion, whether running or not
# The command above deletes one or more containers, but to delete all containers, do we really have to list every container name or ID?
docker rm -f $(docker ps -aq) # Delete all
docker ps -aq | xargs docker rm -f
(8) Start a daemon container
docker run -d container-name/container-ID
Note: when we check with docker ps -a, we find that the container we just started has exited. Why? The key point is that for a Docker container to keep running in the background, it must have a foreground process. If the container's command is not one that stays in the foreground (such as top or tail), the container exits automatically. This is simply how Docker works. For example, suppose we run a WEB container such as Nginx. Normally we would just start the corresponding service, e.g. service nginx start. But then Nginx runs as a background process, leaving no running application in the foreground of the Docker container.
After such a container starts in the background, it immediately kills itself because it decides there is nothing left for it to do. So the best solution is to run the program as a foreground process. How, then, do we keep a daemonized container from exiting automatically? We can run a command that never finishes.
docker run -d centos /bin/sh -c "while true;do echo hello Negan;sleep 2;done"
(9) View container logs
docker logs [OPTIONS] CONTAINER
# OPTIONS description:
# -t show timestamps
# -f follow the latest log output
# --tail show only the last N lines
(10) View the processes in the container
docker top CONTAINER
(11) View container internal details
docker inspect CONTAINER
(12) Enter the running container and interact with it on the command line
- exec opens a new terminal in the container and can start a new process
docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
docker exec -it CONTAINER /bin/bash // Enter the container and interact
# Act on the container from the outside, without entering it
docker exec -it 3d00a0a2877e ls -l /tmp # 3d00a0a2877e is the daemonized container started above
# The ls -l /tmp command runs inside the container and the result is returned to the host; the user interface stays on the host and never enters the container
- attach attaches directly to the terminal of the container's startup command and does not start a new process
docker attach CONTAINER
# Note: after attaching to the daemonized container above, we see hello Negan still being printed every two seconds, and we can't exit.
We can only reopen a terminal and run the docker kill command.
(13) File copy between container and host
docker cp CONTAINER:SRC_PATH DEST_PATH # Copy files from the container to the host
docker cp 90bd03598dd4:123.txt ~
docker cp SRC_PATH CONTAINER:DEST_PATH # Copy files from the host to the container
docker cp 12345.txt 90bd03598dd4:~
practice
Exercise 1: deploying Nginx
# Find an nginx image
docker search nginx
# Download the image
docker pull nginx
# Start it
docker run -d --name nginx01 -p 3344:80 nginx # -p 3344 (host):80 (container port)
# Test (browser access)
123.56.243.64:3344
5, Visualization
1,portainer
A graphical management tool for Docker; it provides a web panel for us to operate from.
docker run -d -p 8088:9000 \
--restart=always -v /var/run/docker.sock:/var/run/docker.sock --privileged=true portainer/portainer
6, Docker image
1. What is an image
An image is a lightweight, executable, self-contained software package used to bundle a software runtime environment and the software developed for it. It contains everything required to run the software, including code, runtime, libraries, environment variables and configuration files. Any application can be run directly by packaging it as a docker image.
How to obtain images
- Pull from a remote repository
- Copy from a friend
- Build an image yourself with a DockerFile
2. Image loading principle
(1) Union file system
Union fs (union file system): a layered, lightweight, high-performance file system. It supports stacking modifications to the file system as successive commits, and can mount different directories under the same virtual file system. The union file system is the foundation of Docker images: images can inherit through layering, and on top of a base image (an image without a parent), various specific application images can be made.
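The union-file-system idea above — several read-only layers stacked into one visible file system, with upper layers winning on conflicts — can be sketched with Python's ChainMap (purely an analogy, not how Docker or any real union filesystem is implemented):

```python
from collections import ChainMap

# Each dict stands in for one read-only image layer: path -> content.
base_layer = {"/bin/sh": "shell", "/etc/os-release": "centos 8"}
app_layer = {"/app/server.py": "print('hi')"}
patch_layer = {"/etc/os-release": "centos 8.3"}  # overrides a base file

# "Union mount": the first (upper) map wins on conflicts, like an
# image's top layer shadowing files from the layers beneath it.
unioned = ChainMap(patch_layer, app_layer, base_layer)

print(unioned["/etc/os-release"])  # upper layer shadows the base: centos 8.3
print(sorted(unioned))             # one merged view of all layers
```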
Features: multiple file systems can be loaded at the same time, but from the outside only one file system is visible. Union mounting overlays all the layers, so the final file system contains all the underlying files and directories.
(2) Docker image loading principle
A Docker image is actually composed of file systems stacked layer by layer — the union file system just described.
bootfs (boot file system) mainly contains the bootloader and the kernel; the bootloader's job is to boot and load the kernel. The bootfs file system is loaded when Linux starts. Once boot loading completes, the whole kernel is in memory, control passes from bootfs to the kernel, and the system unmounts bootfs.
rootfs (root file system), on top of bootfs, contains the standard directories and files of a typical Linux system, such as /dev, /proc, /bin and /etc. rootfs is what differs between operating system distributions such as Ubuntu and CentOS. For a stripped-down OS, rootfs can be very small: it only needs to contain the most basic commands, tools and program libraries. Because the bottom layer uses the Host kernel directly, only rootfs has to be provided. So for different Linux distributions, bootfs is basically the same while rootfs differs, which is why different distributions can share bootfs.
3. Hierarchical understanding
(1) Tiered images
When we download an image and watch the download log, we can see it being pulled layer by layer. Why do Docker images adopt this layered structure? The biggest advantage, I think, is resource sharing. For example, if multiple images are built from the same Base image, the host only needs to keep one copy of the Base image on disk and load one copy into memory to serve all containers, and every layer of an image can be shared in the same way.
(2) Understand
All Docker images begin with a base image layer. When content is modified or added, a new image layer is created on top of the current one.
If you create a new image based on Ubuntu Linux 16.04, this is the first layer of the new image; If you add a Python package to the image, a second image layer will be created on top of the basic image. If you continue to add a security patch, a third image layer will be created. Add an additional mirror layer, while the mirror always remains the combination of all current mirrors. Note: Docker images are read-only. When the container is started, a new writable layer is loaded to the top of the image. This layer is what we usually call the container layer. What is under the container is called the mirror layer. 4,commit docker commit The commit container becomes a new image docker commit -m="Description information submitted" -a="author" container id Target image name:[TAG] 7, Container data volume If the data is in the container, we will delete the container and lose the data! Requirements: data needs to be persistent. MySQL, delete the container, delete the database and run away? The data generated in the Docker container needs to be synchronized locally. This is volume technology, directory mounting. Mount the directory in our container to Linux. 1. Using data volumes - Mount directly using the command docker run -it -v Host Directory:In container directory # Two way binding, one party changes and the other automatically changes docker run -it -v /home/ceshi:/home centos /bin/bash docker inspect d2343e9d338a # View /* ... "Mounts": [ # Mount - v { "Type": "bind", "Source": "/home/ceshi", # Path within host "Destination": "/home", # Path within container "Mode": "", "RW": true, "Propagation": "rprivate" } ], ... */ 2. Actual combat: install MySQL Thinking: data persistence of MySQL docker run -d -p 13306:3306 -v /home/mysql/conf:/etc/mysql/conf.d -v /home/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 --name mysql01 mysql:5.7 3. Named mount and anonymous mount (1) . 
anonymous mount docker run -d -P --name nginx01 -v /etc/nginx nginx # Only paths within the container are specified docker volume ls # View all volume s DRIVER VOLUME NAME local 55050407d8fd052403bbf6ee349aa6268c2ec5c1054dafa678ac5dd31c59217a # Anonymous mount local fd074ffbcea60b7fe65025ebe146e0903e90d9df5122c1e8874130ced5311049 # This is anonymous mount. In -v, we only write the path inside the container, not the path outside the container. (2) . named mount docker run -d -P -v juming:/etc/nginx --name nginx02 nginx # Named mount docker volume ls DRIVER VOLUME NAME local 55050407d8fd052403bbf6ee349aa6268c2ec5c1054dafa678ac5dd31c59217a local fd074ffbcea60b7fe65025ebe146e0903e90d9df5122c1e8874130ced5311049 local juming # Our name on it # Pass -v volume name: path in container # All the volumes in the docker container are in the / var/lib/docker/volumes directory if no directory is specified. docker inspect juming [ { "CreatedAt": "2020-12-09T00:22:54+08:00", "Driver": "local", "Labels": null, "Mountpoint": "/var/lib/docker/volumes/juming/_data", "Name": "juming", "Options": null, "Scope": "local" } ] We can easily find the volume we mount through named mount. In most cases, we will use named mount. (3) , expand -v Path in container # Anonymous mount -v Volume name: path inside container # Named mount -v /Host path: path within container # Specified path mount # Change the read and write permissions through the path in the - V container: ro rw ro readonly # read-only rw readwrite # Readable and writable # Once read-only is set, the container limits the content we mount. The file can only be operated through the host, and the container cannot be operated. docker run -d -P --name nginx01 -v juming:/etc/nginx:ro nginx 4. Data volume container Information synchronization between containers. 
docker run -it -v /home --name c1 centos /bin/bash # Start c1 as the parent container (specify the mount directory inside the container)
docker run -it --name c2 --volumes-from c1 centos /bin/bash # Start c2 and mount the volumes of c1; from now on, operations in /home in the two containers are synchronized
# The containers below are built by ourselves; you can refer to the dockerfile that follows for how they are built
docker run -it --name docker01 negan/centos # Start a container as the parent (mounted) container
docker run -it --name docker02 --volumes-from docker01 negan/centos # Start container 2 and mount container 1's volumes
docker run -it --name docker03 --volumes-from docker02 negan/centos # Start container 3 and mount container 2's volumes
# Container 3 above could also be mounted directly on container 1. Enter any of these containers and work in the mounted volumes volume01/volume02, and the data is synchronized between the containers automatically.
Conclusion: for passing configuration between containers, the life cycle of a data volume container lasts until no container uses it any more. However, once data has been persisted to the local disk, the local copy is not deleted.
8, DockerFile
1. First acquaintance with DockerFile
DockerFile is the build file used to construct a docker image; it is a script of commands and parameters, and from this script an image can be generated. Images are built layer by layer: the script is a series of commands, and each command is one layer.
# dockerfile1 content; all instructions are uppercase
FROM centos # based on centos
VOLUME ["volume01","volume02"] # Mount data volumes (anonymous mount)
CMD echo "----end-----"
CMD /bin/bash
# Build
docker build -f dockerfile1 -t negan/centos .
# -f specify the file path; -t specify the name (without a tag, latest is the default)
Sending build context to Docker daemon 2.048kB
Step 1/4 : FROM centos
---> 0d120b6ccaa8
Step 2/4 : VOLUME ["volume01","volume02"]
---> Running in 0cfe6b5be6bf
Removing intermediate container 0cfe6b5be6bf
---> 396a4a7cfe15
Step 3/4 : CMD echo "----end-----"
---> Running in fa535b5581fa
Removing intermediate container fa535b5581fa
---> 110d9f93f827
Step 4/4 : CMD /bin/bash
---> Running in 557a2bb87d97
Removing intermediate container 557a2bb87d97
---> c2c9b92d50ad
Successfully built c2c9b92d50ad
Successfully tagged negan/centos:latest
docker images # view the result
2. DockerFile build process
(1) Basic knowledge
Each reserved keyword (instruction) must be uppercase
Execution goes from top to bottom
# indicates a comment
Each instruction creates and commits a new image layer
Dockerfile is development oriented: to publish a project as an image, we have to write a dockerfile. Docker images have gradually become a standard of enterprise delivery.
(2) Basic instructions
FROM # Who is the mother of this image? (the base image; everything starts from here)
MAINTAINER # Who is responsible for raising it? (who wrote the image; maintainer information, name + email)
RUN # What do you want it to do? (commands run while building the image)
ADD # Give it some start-up capital (copy files in; archives are decompressed automatically)
WORKDIR # The image's working directory
VOLUME # Give it a place to store luggage (set up a volume, mounted from the container to the host; anonymous mount)
EXPOSE # What's the house number? (specify the exposed port)
CMD # Specify the command to run when the container starts.
Only the last one takes effect, and it can be replaced
ENTRYPOINT # Specify the command to run when the container starts; further commands can be appended to it
ONBUILD # When building a DockerFile that inherits from this one, the ONBUILD instruction is run
COPY # Similar to ADD; copies our files into the image
ENV # Set environment variables during the build
3. Actual operation
(1) Create your own CentOS
# vim Dockerfile
FROM centos
MAINTAINER Negan<huiyichanmian@yeah.net>
ENV MYPATH /usr/local
WORKDIR $MYPATH
RUN yum -y install vim
RUN yum -y install net-tools
EXPOSE 80
CMD echo $MYPATH
CMD echo "---end---"
CMD /bin/bash
# Build
docker build -f Dockerfile -t negan/centos .
# Test
docker run -it negan/centos
[root@ffae1f9eb97e local]# pwd
/usr/local # We land in the working directory set in the dockerfile
# View the image build history
docker history image-name/ID
(2) Difference between CMD and ENTRYPOINT
Both specify commands to execute when the container starts. With CMD, only the last CMD instruction takes effect, and arguments appended at run time replace it rather than being added to it. ENTRYPOINT is not replaced; arguments can be appended to it.
CMD
# vim cmd
FROM centos
CMD ["ls","-a"]
# Build
docker build -f cmd -t cmd_test .
# Run, and find that our ls -a command takes effect
docker run cmd_test
. .. .dockerenv bin dev etc
......
# Append an argument at run time
docker run cmd_test -l
# An error is thrown; the argument cannot simply be appended. The original ls -a command is replaced by -l, and -l alone is not a valid command
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"-l\": executable file not found in $PATH": unknown.
# Append the complete command instead
docker run cmd_test ls -al
total 56
drwxr-xr-x 1 root root 4096 Dec 10 14:36 .
drwxr-xr-x 1 root root 4096 Dec 10 14:36 ..
-rwxr-xr-x 1 root root 0 Dec 10 14:36 .dockerenv
lrwxrwxrwx 1 root root 7 Nov 3 15:22 bin -> usr/bin
drwxr-xr-x 5 root root 340 Dec 10 14:36 dev
......
ENTRYPOINT
# vim entrypoint
FROM centos
ENTRYPOINT ["ls","-a"]
# Build
docker build -f entrypoint -t entrypoint_test .
# Run
docker run entrypoint_test
. .. .dockerenv bin dev etc
# Append an argument at run time
docker run entrypoint_test -l
total 56
drwxr-xr-x 1 root root 4096 Dec 10 14:41 .
drwxr-xr-x 1 root root 4096 Dec 10 14:41 ..
-rwxr-xr-x 1 root root 0 Dec 10 14:41 .dockerenv
lrwxrwxrwx 1 root root 7 Nov 3 15:22 bin -> usr/bin
drwxr-xr-x 5 root root 340 Dec 10 14:41 dev
drwxr-xr-x 1 root root 4096 Dec 10 14:41 etc
......
4. Practical construction of tomcat
(1) Environmental preparation
ll
total 166472
-rw-r--r-- 1 root root 11437266 Dec 9 16:22 apache-tomcat-9.0.40.tar.gz
-rw-r--r-- 1 root root 641 Dec 10 23:26 Dockerfile
-rw-r--r-- 1 root root 159019376 Dec 9 17:39 jdk-8u11-linux-x64.tar.gz
-rw-r--r-- 1 root root 0 Dec 10 22:48 readme.txt
(2) Build image
# vim Dockerfile (Dockerfile is the officially recommended name)
FROM centos
MAINTAINER Negan<huiyichanmian@yeah.net>
COPY readme.txt /usr/local/readme.txt
ADD jdk-8u11-linux-x64.tar.gz /usr/local/
ADD apache-tomcat-9.0.40.tar.gz /usr/local/
RUN yum -y install vim
ENV MYPATH /usr/local
WORKDIR $MYPATH
ENV JAVA_HOME /usr/local/jdk1.8.0_11
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV CATALINA_HOME /usr/local/apache-tomcat-9.0.40
ENV CATALINA_BASE /usr/local/apache-tomcat-9.0.40
ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/lib:$CATALINA_HOME/bin
EXPOSE 8080
CMD /usr/local/apache-tomcat-9.0.40/bin/startup.sh && tail -F /usr/local/apache-tomcat-9.0.40/logs/catalina.out
# Build
docker build -t tomcat .
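Stepping back to the cmd_test and entrypoint_test experiments earlier in this section, the rule for how Docker assembles a container's startup command can be condensed into a tiny model (a simplification covering exec-form ENTRYPOINT/CMD only — not Docker's actual code):

```python
def startup_argv(entrypoint, cmd, run_args):
    """Model how `docker run IMAGE [ARG...]` builds the process argv.

    Simplified rule: arguments given on `docker run` REPLACE CMD
    entirely, but are APPENDED after ENTRYPOINT.
    """
    entrypoint = entrypoint or []
    effective_cmd = run_args if run_args else (cmd or [])
    return entrypoint + effective_cmd

# CMD-only image: `docker run cmd_test -l` replaces ["ls", "-a"], so the
# container tries to exec "-l" and fails, as in the experiment above.
print(startup_argv(None, ["ls", "-a"], ["-l"]))  # ['-l']
# ENTRYPOINT image: `docker run entrypoint_test -l` appends -> ls -a -l
print(startup_argv(["ls", "-a"], None, ["-l"]))  # ['ls', '-a', '-l']
```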
(3) Start container
docker run -d -P --name tomcat01 -v /home/Negan/tomcat/test:/usr/local/apache-tomcat-9.0.40/webapps/test -v /home/Negan/tomcat/logs:/usr/local/apache-tomcat-9.0.40/logs tomcat
9, Publish your own image
1,docker hub
First, register an account on DockerHub and make sure you can log in with it. Log in on our server and push the image after a successful login.
# Sign in
docker login [OPTIONS] [SERVER]
Log in to a Docker registry. If no server is specified, the default is defined by the daemon.
Options:
-p, --password string Password
--password-stdin Take the password from stdin
-u, --username string Username
# Push our image after successful login
docker push [OPTIONS] NAME[:TAG]
Push an image or a repository to a registry
Options:
--disable-content-trust Skip image signing (default true)
docker tag tomcat huiyichanmian/tomcat # If the name needs changing, push under the new name (prefix it with your own user name)
docker push huiyichanmian/tomcat
2. Alibaba cloud
Log in to Alibaba cloud, find the container image service and use the image repository. Create a namespace, then create an image repository and choose a local repository. Alibaba cloud documents the steps in great detail, so they are not repeated here.
10, Docker network
1. Understand docker0
(1) View local network card information
ip addr
# 1: lo — local loopback (output omitted)
# 2: eth0 — Alibaba cloud intranet address (output omitted)
# 3: docker0 address
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:5e:2b:4c:05 brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever
(2) View container network card information
We get a tomcat image for testing.
docker run -d -P --name t1 tomcat
docker exec -it t1 ip addr
# We find that when the container starts, it gets an eth0@ifxxx interface, and its ip address is in the same network segment as docker0 above.
233: eth0@if234: (output omitted)
(3).
View the local network card information again
ip addr (earlier entries omitted)
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether 02:42:5e:2b:4c:05 brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever
# We find an extra network card that corresponds to the one inside the container (the 233/234 pair)
234: veth284c2d9@if233: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
Repeating the operations above, it is easy to see that as soon as docker is installed, an extra card, docker0, appears locally. Moreover, every time we start a container, docker assigns it a network card, and one more card appears locally, corresponding to the one inside the container. This is veth-pair technology: a pair of virtual device interfaces that always come in pairs, with one end connected to the protocol stack and the two ends connected to each other. Because of this property, a veth pair usually acts as a bridge connecting virtual network devices.
When containers do not specify a network, they are routed through docker0, and Docker assigns each container a default ip. Docker uses the Linux bridge; docker0 on the host is the bridge for docker containers, and all these network interfaces are virtual. As soon as a container is deleted, its corresponding veth pair is deleted as well.
2,--link
Question: every time we restart a container, its ip address changes, so any fixed ip written into our project's configuration has to change with it. Could we just put the service name in the configuration instead, so that after a restart the configuration still finds the service by name?
Let's start two tomcats and test whether they can ping each other by name.
docker exec -it t1 ping t2
ping: t2: Name or service not known
# They cannot: t1 does not recognize the name t2. How do we solve this?
# We use --link to connect them
docker run -d -P --name t3 --link t2 tomcat
# We try to ping t2 from t3
docker exec -it t3 ping t2
# We find that it connects
PING t2 (172.17.0.3) 56(84) bytes of data.
64 bytes from t2 (172.17.0.3): icmp_seq=1 ttl=64 time=0.099 ms
64 bytes from t2 (172.17.0.3): icmp_seq=2 ttl=64 time=0.066 ms
......
So what did --link actually do?
docker exec -it t3 cat /etc/hosts # Let's look at the hosts file of t3
172.17.0.3 t2 6bf3c12674c8 # Here's the reason: t2 is recorded here, so pinging t2 automatically goes to 172.17.0.3
172.17.0.4 b6dae0572f93
3. Custom network
(1) View docker networks
docker network ls
NETWORK ID NAME DRIVER SCOPE
10684d1bfac9 bridge bridge local
19f4854793d7 host host local
afc0c673386f none null local
# bridge: bridging (docker default)
# host: share with the host
# none: not configured
(2) Default network at container startup
docker0 is our default network and does not support access by domain name; --link can work around that.
# Normally we start a container like this, using the default network; the default network is bridge, so the following two commands are the same
docker run -d -P --name t1 tomcat
docker run -d -P --name t1 --net bridge tomcat
(3) Create a network
# --driver bridge is the default and can be omitted
# --subnet 192.168.0.0/16 subnet mask
# --gateway 192.168.0.1 default gateway
docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
docker network ls
NETWORK ID NAME DRIVER SCOPE
10684d1bfac9 bridge bridge local
19f4854793d7 host host local
0e98462f3e8e mynet bridge local # Our own network
afc0c673386f none null local
ip addr
.....
# Our own network
239: br-0e98462f3e8e: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:b6:a7:b1:96 brd ff:ff:ff:ff:ff:ff inet 192.168.0.1/16 brd 192.168.255.255 scope global br-0e98462f3e8e valid_lft forever preferred_lft forever
.....
Start two containers using the network we created ourselves
docker run -P -d --name t1 --net mynet tomcat
docker run -P -d --name t2 --net mynet tomcat
# View the network information we created
docker network inspect mynet
# We find that the two containers we just started use the network we just created
......
"Containers": { "1993703e0d0234006e1f95e964344d5ce01c90fe114f58addbd426255f686382": { "Name": "t2", "EndpointID": "f814ccc94232e5bbc4aaed35022dde879743ad9ac3f370600fb1845a862ed3b0", "MacAddress": "02:42:c0:a8:00:03", "IPv4Address": "192.168.0.3/16", "IPv6Address": "" }, "8283df6e894eeee8742ca6341bf928df53bee482ab8a6de0a34db8c73fb2a5fb": { "Name": "t1", "EndpointID": "e462941f0103b99f696ebe2ab93c1bb7d1edfbf6d799aeaf9a32b4f0f2f08e01", "MacAddress": "02:42:c0:a8:00:02", "IPv4Address": "192.168.0.2/16", "IPv6Address": "" } },
.......
So what are the benefits of using our own network? Let's go back to the earlier problem: pinging by name failed.
docker exec -it t1 ping t2
PING t2 (192.168.0.3) 56(84) bytes of data.
64 bytes from t2.mynet (192.168.0.3): icmp_seq=1 ttl=64 time=0.063 ms
......
docker exec -it t2 ping t1
PING t1 (192.168.0.2) 56(84) bytes of data.
64 bytes from t1.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.045 ms
......
We find that names now resolve, which means our custom network maintains the name-to-address mapping for us. This way, different clusters can use different networks, which also keeps each cluster safe and healthy.
4. Network connectivity
Now there is a requirement: t1 and t2 use our own network, while t3 and t4 use the default docker0 network.
Can t3 and t1 or t2 communicate now? We know that the default gateway of docker0 is 172.17.0.1 and that of mynet is 192.168.0.1. They directly belong to different network segments and cannot communicate. So how to solve the above problems now? So can mynet assign an ip address to t3? If it can be allocated, the problem should be solved. docker network connect [OPTIONS] NETWORK CONTAINER Connect a container to a network Options: --alias strings Add network-scoped alias for the container --driver-opt strings driver options for the network --ip string IPv4 address (e.g., 172.30.100.104) --ip6 string IPv6 address (e.g., 2001:db8::33) --link list Add link to another container --link-local-ip strings Add a link-local address for the container # One container two ip addresses docker network connect mynet t3 # Join t3 to mynet network # View mynet information docker network inspect mynet "Containers": { ...... "d8ecec77f7c1e6d26ad0fcf9107cf31bed4b6dd553321b737d14eb2b497794e0": { "Name": "t3", # We found t3 "EndpointID": "8796d63c1dd1969549a2d1d46808981a2b0ad725745d794bd3b824f278cec28c", "MacAddress": "02:42:c0:a8:00:04", "IPv4Address": "192.168.0.4/16", "IPv6Address": "" } }, ...... At this time, t3 can communicate with t1 and t2. 5. 
Deploy Redis cluster
# Create network
docker network create redis --subnet 172.38.0.0/16
# Create six redis configurations through a script
for port in $(seq 1 6); \
do \
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
# Start the containers
vim redis.py
import os
for i in range(1, 7):
    cmd = "docker run -p 637{}:6379 -p 1637{}:16379 --name redis-{} \
    -v /mydata/redis/node-{}/data:/data \
    -v /mydata/redis/node-{}/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.1{} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf".format(i,i,i,i,i,i)
    os.system(cmd)
python redis.py
# Create cluster
docker exec -it redis-1 /bin/sh # Enter the redis-1 container
redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1 # Create the cluster
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460 Master[1] -> Slots 5461 - 10922 Master[2] -> Slots 10923 - 16383 Adding replica 172.38.0.15:6379 to 172.38.0.11:6379 Adding replica 172.38.0.16:6379 to 172.38.0.12:6379 Adding replica 172.38.0.14:6379 to 172.38.0.13:6379 M: 875f0a7c696fcd584c4f5a7fd5cc38b343acbc49 172.38.0.11:6379 slots:[0-5460] (5461 slots) master M: 9d1d33301aea7e4cc9eb41ec5404e2199258e94e 172.38.0.12:6379 slots:[5461-10922] (5462 slots) master M: d63e90423a034f9c42e72cc562706919fd9fc418 172.38.0.13:6379 slots:[10923-16383] (5461 slots) master S: a89026d4ea211d36ee04f2f3762c6e3cd9692a28 172.38.0.14:6379 replicates d63e90423a034f9c42e72cc562706919fd9fc418 S: bee27443cd5eb6f031115f19968625eb86c8440b 172.38.0.15:6379 replicates 875f0a7c696fcd584c4f5a7fd5cc38b343acbc49 S: 53d6196c160385181ff23b15e7bda7d4387b2b17 172.38.0.16:6379 replicates 9d1d33301aea7e4cc9eb41ec5404e2199258e94e Can I set the above configuration? (type 'yes' to accept): yes >>> Nodes configuration updated >>> Assign a different config epoch to each node >>> Sending CLUSTER MEET messages to join the cluster Waiting for the cluster to join .... 
>>> Performing Cluster Check (using node 172.38.0.11:6379)
M: 875f0a7c696fcd584c4f5a7fd5cc38b343acbc49 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: a89026d4ea211d36ee04f2f3762c6e3cd9692a28 172.38.0.14:6379
   slots: (0 slots) slave
   replicates d63e90423a034f9c42e72cc562706919fd9fc418
S: 53d6196c160385181ff23b15e7bda7d4387b2b17 172.38.0.16:6379
   slots: (0 slots) slave
   replicates 9d1d33301aea7e4cc9eb41ec5404e2199258e94e
M: 9d1d33301aea7e4cc9eb41ec5404e2199258e94e 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: bee27443cd5eb6f031115f19968625eb86c8440b 172.38.0.15:6379
   slots: (0 slots) slave
   replicates 875f0a7c696fcd584c4f5a7fd5cc38b343acbc49
M: d63e90423a034f9c42e72cc562706919fd9fc418 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

# Test
redis-cli -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:160
cluster_stats_messages_pong_sent:164
cluster_stats_messages_sent:324
cluster_stats_messages_ping_received:159
cluster_stats_messages_pong_received:160
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:324

11, Docker Compose

1. Introduction

Compose is an official open-source Docker project and needs to be installed separately. It is a tool for defining and running multi-container Docker applications. With Compose, you configure your application's services in a YAML file; then, with a single command, you create and start all of the services from that configuration.

Using Compose is basically a three-step process:

- A Dockerfile, so the project runs anywhere
- A docker-compose.yml file defining the services
- Start everything up

2.
Quick start

(1) Installation

curl -L "https://github.com/docker/compose/releases/download/1.27.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
docker-compose --version
# Installation succeeded:
# docker-compose version 1.27.4, build 40524192

(2) Use

# Create a directory for the project
mkdir composetest
cd composetest

# Write a Flask program
vim app.py

import time

import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)  # The host name "redis" is used directly instead of an IP address


def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)


@app.route('/')
def hello():
    count = get_hit_count()
    return 'Hello World! I have been seen {} times.\n'.format(count)

# Write the requirements.txt file (no versions pinned, so the latest releases are downloaded)
flask
redis

# Write the Dockerfile
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run"]

# Write the docker-compose.yml file
# The file defines two services, web and redis. web is built from the Dockerfile; redis uses a public image
version: "3.9"
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
  redis:
    image: "redis:alpine"

# Run
docker-compose up

3. Build a blog

# Create and enter a directory
mkdir wordpress && cd wordpress

# Write the docker-compose.yml file, with a separate MySQL instance and a volume mount for data persistence
version: '3.3'
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
volumes:
  db_data: {}

# Start
docker-compose up -d

12, Docker Swarm

1. Environment preparation

Prepare four servers and install Docker on each.

2.
swarm cluster construction

docker swarm COMMAND

Commands:
  ca          Display and rotate the root CA
  init        Initialize a swarm                       # Initialize a node (management node)
  join        Join a swarm as a node and/or manager    # Join a cluster as a node
  join-token  Manage join tokens                       # Join the cluster through a token
  leave       Leave the swarm                          # Leave the cluster
  unlock      Unlock swarm
  unlock-key  Manage the unlock key
  update      Update the swarm

# First, initialize a node
# docker swarm init --advertise-addr <own ip>  (the intranet address is used here to save traffic costs)
docker swarm init --advertise-addr 172.27.0.4

# A prompt tells us the node was created successfully
Swarm initialized: current node (yamss133bil4gb59fyangtdmm) is now a manager.

To add a worker to this swarm, run the following command:
# (execute the printed join command on the other machines to join this node)

# To generate a join token for a management node instead:
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

# Run the worker join command on machine 2, then view the node information on machine 1
docker node ls

# A management node and a work node are both in the Ready state
ID                            HOSTNAME        STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
yamss133bil4gb59fyangtdmm *   VM-0-4-centos   Ready    Active         Leader           20.10.0
mfxdgj1pobj0idbl9cesm2xnp     VM-0-7-centos   Ready    Active                          20.10.0

# Now add machine 3 as another work node.
# Only machine 4 is left; this time we want to make it a management node.
# Generate the manager join command on machine 1:
docker swarm join-token manager
# Execute the generated command on machine 4
# Machine 4 is now a manager as well
This node joined a swarm as a manager.

# From now on, cluster operations can be performed on machine 1 or machine 4
# (you can only operate the cluster from a management node)

3.
Raft protocol

In the previous steps we built a cluster with two managers and two workers. The Raft protocol requires a majority of the manager nodes to be alive for the cluster to remain usable: more than one manager must survive, so a highly available cluster needs at least three manager nodes.

Experiment 1

Stop Docker on machine 1, leaving only one management node in the cluster, and check whether the cluster is still available.

# Check the node information on machine 4
docker node ls

# The cluster is no longer available
Error response from daemon: rpc error: code = DeadlineExceeded desc = context deadline exceeded

# After restarting Docker on machine 1, the cluster works again, but machine 1 is no longer
# the leader; leadership was automatically transferred to machine 4.

Experiment 2

Make the work node leave the cluster and view the cluster information.

# Execute on machine 2
docker swarm leave

# View the node information on machine 1
docker node ls

# The state of machine 2 is now Down
ID                            HOSTNAME        STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
yamss133bil4gb59fyangtdmm *   VM-0-4-centos   Ready    Active         Reachable        20.10.0
mfxdgj1pobj0idbl9cesm2xnp     VM-0-7-centos   Down     Active

Experiment 3

Now make machine 2 a management node too (the cluster then has three management nodes), bring one management node down at random, and check whether the cluster still runs normally.

# Run the manager join command on machine 2

# Shut down Docker on machine 1
systemctl stop docker

# View the node information on machine 2
docker node ls

# The cluster is still normal, and machine 1 shows as unreachable
ID                            HOSTNAME        STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
yamss133bil4gb59fyangtdmm     VM-0-4-centos   Ready    Active         Unreachable      20.10.0
mfxdgj1pobj0idbl9cesm2xnp     VM-0-7-centos   Down     Active         20.10.0
vdwcwr3v6qrn6da40zdrjkwmy *   VM-0-7-centos   Ready    Active         Reachable

4.
Elastic service creation

docker service COMMAND

Commands:
  create    Create a new service                                   # Create a service
  inspect   Display detailed information on one or more services   # View service information
  logs      Fetch the logs of a service or task                    # Logs
  ls        List services                                          # List services
  ps        List the tasks of one or more services                 # View the tasks of our services
  rm        Remove one or more services                            # Delete a service
  rollback  Revert changes to a service's configuration
  scale     Scale one or multiple replicated services              # Dynamic scaling
  update    Update a service                                       # Update

# Create a service; its task is scheduled somewhere in the cluster
docker service create -p 8888:80 --name n1 nginx

kj0xokbxvf5uw91bswgp1cukf
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged

# Scale our service to three replicas
docker service update --replicas 3 n1

# Dynamic scaling
docker service scale n1=10   # Same effect as the update above, but more convenient

docker ps  # view

A service can be reached through any node in the cluster, and it can run multiple replicas that are dynamically scaled up and down to achieve high availability.

5. Deploying a blog using Docker stack

Now we want to run the blog from above in our cluster, with ten replicas.

# Edit the docker-compose.yml file
version: '3.3'
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    deploy:
      replicas: 10
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
volumes:
  db_data: {}

# Start
docker stack deploy -c docker-compose.yml wordpress

# View
docker service ls
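Looking back at the three Raft experiments in section 3, they all reduce to one rule: the swarm's control plane stays writable only while strictly more than half of its manager nodes are reachable. A tiny illustrative helper makes the arithmetic explicit (the function name is my own, not part of the Docker CLI or API):

```python
def has_quorum(total_managers: int, reachable_managers: int) -> bool:
    """Raft majority rule: strictly more than half of the managers must be up."""
    return reachable_managers > total_managers // 2


# Experiment 1: two managers, one stopped -> quorum lost, cluster unusable
# Experiment 3: three managers, one stopped -> quorum held, cluster kept working
```

This is why at least three managers are recommended: with two, losing either one freezes cluster management, while three managers tolerate a single failure.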
https://algorithm.zone/blogs/introduction-to-docker-basics.html
User talk:Procopius From Uncyclopedia, the content-free encyclopedia test Hello, Procopius,) 00:15, 3 July 2006 (UTC) So you've decided to make a game... However... We have a namespace for things like that... It's called ... Game:... So, be a dear, will you, and move all them pages you made to [[Game:Pagename pagenumber]] . I'll take care of the redirects that get created with each move. :) --00:44, 6 July 2006 (UTC) Left on MoneySign's talk page originally:56, 6 July 2006 (UTC) - Oh, and sorry about the ban... I just find it's the most effective way to get a message to the person before he/she/it does anymore "damage". - No problem -- I definitely got the message. :)--Procopius 00:57, 6 July 2006 (UTC) - Heh... Don't worry about the what-you-might-think-is smudge on your record (the block-log). Every good Uncyclopedian gets blocked at least once. (Even those who forget to leave their name when putting a message on someone's talk pages). --⇔ Sir Mon€¥$ignSTFU F@H|VFP|+S 01:01, 6 July 2006 (UTC) Oh.. Heh... No... They're running some tests on Wikia right now, so any such occurances just might be due to that. Guess you'll have to wait 'til later on or tomorrow. --⇔ Sir Mon€¥$ignSTFU F@H|VFP|+S 01:10, 6 July 2006 (UTC) - OK. Thanks. Believe it or not, I appreciate your help tonight. :)--Procopius 01:11, 6 July 2006 (UTC) - Awwwww, no problem. Believe it or not, but that's why I'm (or should be) here: to help those who need it. --⇔ Sir Mon€¥$ignSTFU F@H|VFP|+S 01:15, 6 July 2006 (UTC) So I went and moved them for you anyways. ^_^ However, I feel guilty because I would have liked to see you learn something from this whole situation. So, I guess, to make it up to you, I'll describe what to keep an eye out for, should you ever move a page yourself. - 1. Do not break existing links - Be sure to click on What links here in the toolbox (left of the screen, just under the search box). If there are pages that link to the current name of the article, adjust them accordingly. 
Exceptions to this are if the page containing the link is actually about the moving (e.g.: the links in your talk page to Choose Your Own Adventure); or if you plan to keep the redirects (e.g.: you want to change the article name from "WWW" to "World Wide Web"). - 2. Have unwanted resulting redirects deleted - When you move a page, a redirect to the new article space is created on the orignal space. If these are unwanted or disallowed redirects (e.g.: moved "Chirst" to "Christ"; moved "Procopius" to "User:Procopius"), be sure to have a sysop delete them by placing them on QuickVFD (read the rules first) or contact a sysop on his talk page. - 3. Be careful what you're doing - If the history goes back quite a bit (possibly with various contributors), then the article might not want to be moved. Perhaps the name/spelling is part of the joke, for example. In any case, you should read the discussion page and/or contact the main contributor or a sysop if you need clarification. Also, to save yourself from trouble, be sure your destination article name is not already taken. - 4. Do not abuse - The move function should only really be used when there is a better, more suitable name for an article. Moving an article for humorous reasons only, is usually not as funny as you first thought (e.g. moving Teachers to Dickheads). It will likely be seen as vandalism and any consequences that may have. Of course, once in a decade, an exception occurs ... (e.g.: 12:34 . . Spang (Talk | contribs) (moved Jews to Concentration Camp)). There, hope that helps you out a bit. Good luck and happy editing! --⇔ Sir Mon€¥$ignSTFU F@H|VFP|+S 06:37, 6 July 2006 (UTC) - Thanks. I've done moves before on Wikipedia, but this is a handy reminder.--Procopius 16:46, 6 July 2006 (UTC) - Oh geez... I'm sorry... I just thought... Guh... Sorry... 
I shouldn't have assumed that just because you didn't know about the Game namespace and what the local norm is for talk page correspondence, you're a total wiki-noob... Didn't mean to sound condescending or anything. Hmmz... *kicks self* - Well then... I guess I'll just leave it at the clichéd "Good luck and happy editing" well wishes yet again! :D --⇔ Sir Mon€¥$ignSTFU F@H|VFP|+S 20:40, 6 July 2006 (UTC) Whoa, I had almost the exact same idea about the godzilla and missile crsis connection for UnNews.. except i was going to write an article about how it was a calculated attack by the ghost of stalin but oh well. Good job. --Upascal 18:00, 14 July 2006 (UTC) - Hey -- thanks a lot! You know, I wouldn't mind reading a story about the ghost of Stalin . . .--Procopius 18:35, 14 July 2006 (UTC) UnNews:Diogenes searches world for honest mechanic The above UnNews story — that's your doing, correct? Diogenes of Sinope? Jesus, you weren't educated in an American public school, were you? This UnNews story is outstanding... I laughed my ass off. Now I'm assless, which is going to hinder my ability to get jiggy with it, but 'twas well worth it. I actually just used Diogenes in Euripides and it's talk page, but those were much briefer references. Anyway, just wanted to thank you for the laughs. -- Imrealized IMme 17:50, 18 July 2006 (UTC) - Thanks a lot -- I thoroughly enjoyed that Euripides article, by the way. Oh, and as to your question -- I was educated in an American Catholic School. --Procopius 22:36, 18 July 2006 (UTC) - Wow, looks like we're not the only Diogenes fans, huh? Congratulations on the nom and the doing well with the nom (last I checked). Catholic school? That may explain it. Although as much as I bad mouth American public schools, I was educated in one (well, not really, but I went to one) and I turned out alright. Though that may have had more to do with my after school activities. Oh, and thanks for the vote on Euripides. 
-- Imrealized 15:22, 19 July 2006 (UTC) - De nada and thanks for your vote on Diogenes. Something about Greek philosophers and modern society seems inherently funny to me.--Procopius 17:35, 19 July 2006 (UTC) UnNews Bono piece Excellent, if only life could really work like this! :) -- Brigadier Sir Mordillo GUN UotY WotM FP UotM AotM MI3 AnotM VFH +S 12:29, 19 July 2006 (UTC) - Thanks for the kind words -- I love the template. Where are you in relation to the fighting?--Procopius 17:35, 19 July 2006 (UTC) - Sorry for the long delay, wasn't paying attention. I'm about an hour's drive away from the big mess...waiting for a possible reserve duty...Oh, and thanks a lot for the Foolitzer nomination...appreciate it! Mordillo Battle of Gettysburg One of the funniest articles I've read here, just wanted to let you know how amused you made me. Especially with the aftermath section. Since I'm new, I don't really know if I'm allowed to post here or whatever, or if I'm allowed to nominate this for featured article, but boy do I want to. --Palsworth 04:10, 25 July 2006 (UTC) Snopes Just letting you know I love this article and nommed it for VFH. As a regular visitor to Snopes, I found it a great parody of some of the stuff on there. Copyright issues aside, I think it should do well - fingers crossed. I would also like to take this opportunity to thank you for the vote on HowTo: Be a Tramp at VFH, and if you have not already voted, may I cheekily point you in the direction of Adobe Potatochop? Cheers and good luck. --Hindleyite | PL | GUN | WOTM | Image Review - Use it | Converse 13:03, 29 July 2006 (UTC) - Vote registered for Adobe Potatochop. I held off on voting because I thought the article was good, but didn't feel qualified to speak about Photoshop (which I haven't used regularly in ages). But hell, all the work that went into it means it ought to be good. 
Thanks for the nod!--Procopius 14:42, 29 July 2006 (UTC) UnNews:Baby not taking baptism well Are the Rolfsons people you know, with their real names? Just curious, I'm going to hack it anyway... and do an:39, 30 July 2006 (UTC) - No, I just kind of picked those names out of a hat (I'm not from Minnesota). But something like that happened to me recently at a Mass. I'm sure I did it myself.--Procopius 20:48, 30 July 2006 (UTC) N00b Of The Month Hey man, I just nominated you for NOTM. I have no idea why u werent nommed last month. I really like the work you've done here so far, so I think you're deserving. Good Luck! --:10, 1 August 2006 (UTC) Children's Crusade This is awesome. I love the way you can make fun of your own religion without being too offensive. The Children's Crusade was possibly the worst moment in Catholic history (heck, the whole period from 1000-1600 was horrible). Great article. Damn you! Up until know I thought I had a pretty good chance with scoring the Poo-Lit. But yer cover letter thingy is too good! DAMN YOU TO HELL! oh, and...good show ch:46, 11 August 2006 (UTC) - Bah. The Hebrew story is great, and you've got audio, too, which should more than outweigh the pictures I stole from Wikipedia.--Procopius 19:51, 11 August 2006 (UTC) YOU WITH THE FACE! Well, well, well, it appears you've made quite a name for yourself here at Uncyclopedia. Great job! FYI, the War of 1812 is nominated for VFH as well! Considering you wrote a large portion of it (or so you claim), most of that credit may go to you, and you might have yourself another little tag to put on your page to boast about it. I highly recommend that you make an "awards" page or something in the near feature. Speaking of which: Keep up the good work...or else... --Hotadmin4u69 [TALK] 21:04, 15 August 2006 (UTC) - Hey, thank ye kindly -- Grue's Clues was fantastic. Oh, and Diogenes thanks you for your change of heart. 
:)--Procopius 00:52, 17 August 2006 (UTC) Fawning Sycophancy I haven't made a template for shameless sucking up to people, but if I had I'd present one here - I enjoy your stuff, well done. FreeMorpheme 16:23, 17 August 2006 (UTC) - Thank ye kindly. I'm a big fan of Disco Jesus myself -- the picture is priceless. Has it gone through VFP?--Procopius 01:44, 18 August 2006 (UTC) - How kind of you to say. No, the dancing one has never faced the firing line, although feel free to set him up if you like, with a smile like that, how can he fail? FreeMorpheme 19:51, 18 August 2006 (UTC) VFP Thanks for the vote. You just helped fulfill one of the more obscure prophecies in the Book o' Revelations.--Sir Modusoperandi Boinc! 04:54, 20 August 2006 (UTC) - Hmmm, I was wondering why all these overall-clad kittens were chewing my face off.--Procopius 12:48, 20 August 2006 (UTC) - Are you sure it's not just your ham sombrero?--Sir Modusoperandi Boinc! 13:58, 20 August 2006 (UTC) Reverts Hi, thanks for the award. I need more of those on my page!--Shandon 15:36, 20 August 2006 (UTC) Let me be the first to congratulate you on your Poo Lit Surprise win. :) Excellent article, in what was a very difficult to judge) - Dammit Mhaille, I was going to write the exact same thing: Let me be the first to congratulate. You enjoy crushing my dreams, don't you? Anyway, Let me be the second to congratulate you on an ace job Procopius.--Sir ENeGMA (talk) GUN WotM PLS 00:00, 21 August 2006 (UTC) That's fantastic! Thanks a lot. I agree -- there were a lot of excellent entries in the contest, which makes this humbling.--Procopius 11:38, 21 August 2006 (UTC) - Congratulations! This is miles better than last year's winner... I'm not saying that was bad, just not as good as HowTo:Write a Cover Letter. From a past winner to a present one, again congrats. 
-- Hindleyite 12:13, 21 August 2006 (UTC) Another Honor* Showered Lovingly upon the great and wondrous Procopius Rather worthless, but I try my best to praise you where I can. -- §. | WotM | PLS | T | C | A06:44, 23 August 2006 (UTC) - I shall tatoo it to my face. Incidentally, did I thank you for nominating Battle of Gettysburg? If not, thanks.--Procopius 11:29, 23 August 2006 (UTC) Congrats! Poo Lit Surprise Prize! (at last) So I guess a prize is in order.... To receive your sooperdooper first prize, and the less sooper $25 for coming top of your category, please send a name and address to Wikia community support. A bright shiny $25 coin and a wonderful and valuable wonderful first prize will be on their way to your door. Congratulations! -- sannse (talk) 13:35, 24 August 2006 (UTC) CD MP3 players The last time I took a Greyhound across California, I got a GPX MP3 CD player at the SF Greyhound station for $10.99+tax. Now, GPX:gadgets::GPC:cigarettes, so I wouldn't recommend it, but if you're willing to spend a few dollars more, and/or shop somewhere besides the bus depot, you can probably get something much better. -- !!! ??? 19:13, 1 September 2006 (UTC) - Ah, that's what I figured. Good to know, though. I think I'll go to Best Buy.--Procopius 20:08, 1 September 2006 (UTC) Duped Read the section in the Potatore with your name in it, then edit it...tell me what you see.--:04, 1 September 2006 (UTC) - I see a box with USERNAME inbetween it, and I feel very embarrassed. Good joke.--Procopius 20:08, 1 September 2006 (UTC) - Don't feel bad, I almost did the same:09, 1 September 2006 (UTC) Another award seems to be just what the doctor ordered Thanks for the VFH nomination, and congrats on the PLS and NOTM and just about every other prize they got! :) keep on the great work... - Woo-hoo! Thanks! I'm gonna go and have me a Shabbot meal! :D --Procopius 21:05, 2 September 2006 (UTC) VFH Oi! 
Stop using --USERNAME--that really screwed me up!!--Shandon 05:06, 3 September 2006 (UTC) - But you inspired me.--Procopius 13:52, 3 September 2006 (UTC) Self-hating Jew For what it's worth, I agree that the pics are weak. (Recycled, no--half of the pics have never appeared in the main UN namespace; not funny enough, definitely. And that's what matters.) However, your comment about the Lieberman pic being a nonsequitur worries me. Seeing Joe Lieberman give a Nazi salute during his concession speech is half of what inspired the article. I think the pic is inherently funny, the caption makes the Nazi connection obvious, and it appears in the appropriate part of the text. So, if you think it's a non-sequitur, I must have done something wrong, and I'd like to fix it. Anyway, thanks for the vote; if you can help me improve it, thanks even more. -- !!! ??? 16:09, 7 September 2006 (UTC) - First, thanks for showing me how to spell "non-sequitur" correctly. :) The problem with the Lieberman picture, for me anyway, is that I didn't see the Nazi salute in it. It looks like a straight wave to me. And the caption was just OK. It might be because I've lived in the bad areas of Connecticut and not the places of WASPdom (like Greenwich). If there was something closer to an actual salute, I might like it. And I'm a fan of Dr. Zoidberg, and I know where his accent comes from, but wouldn't he be more a self-hating Crustacean? :)--Procopius 12:08, 8 September 2006 (UTC) - The problem is that the time I clearly saw Lieberman give a Nazi salute was near the end of his concession speech, but every since press photo or video clip I can find clips his arm so you can't see it. (It must be a conspiracy by those Jews that control Hollywood, or at least CT.) So this was the best pic I could find; it sort of looks like a heil, but you're right, it's not great. 
- As for Zoidberg, I think the joke is pretty obvious; the real reason it's not that funny is that too many of the jokes come from animated shows (I guess I don't watch enough live-action shows?), and this is the weakest of the bunch. - Anyway, I understand the weak for votes. I think it's pretty funny as-is, but could be much funnier, and I probably shouldn't have nommed it yet. But, as I mentioned on the talk page, I don't want to work on it too heavily while it's in the middle of a vote. Plus, almost all of the changes people have tried to make have made it less funny (especially the anonymous users trying to make pro-Israel or anti-Israel political points). At any rate, thanks for your vote and your constructive criticism. -- !!! ??? 00:57, 13 September 2006 (UTC) - I'm glad you didn't destroy my pic...lol --)}" > 02:14, 13 September 2006 (UTC) You know (of) me? Whilst browsing through VFH I noticed the article Suburban Homeboy. I read through it, thought it was great and was about to vote til my eye caught something: "For. Great stuff. Might even be better than all the stuff The Spith has done (and that's saying a lot).--Procopius 20:47, 2 September 2006 (UTC)" I'm assuming you must mean someone else cos I'm a bit of a n00b and haven't written much stuff. The Spith 12:39, 12 September 2006 (UTC) - Click on the "Edit" tab and look at the line where I put my for vote. It's getting to be an old joke around here. On an unrealted note, welcome!--Procopius 13:13, 12 September 2006 (UTC) Oh I see ^^ That's really clever. Messed me up when I first saw it. --The Spith 17:34, 12 September 2006 (UTC) Suburban homeboy In honour of your good taste and breeding for selecting a deep, socially conscious and literate satire as Suburban homeboy for featuring, I tip my cap to you. Hurrah!--Sir Modusoperandi Boinc! 06:10, 19 September 2006 (UTC) - Ah, gracias -- and excellent work on that article. It helps me deal with the Steelers' loss. 
Almost.--Procopius 11:41, 19 September 2006 (UTC) - That's football, right?--Sir Modusoperandi Boinc! 15:05, 19 September 2006 (UTC) - Yep -- and in this instance, painful football. :)--Procopius 15:06, 19 September 2006 (UTC) - Don't worry, I'm sure they'll take the pennant this year. Or cup. Or trophy. Purse?--Sir Modusoperandi Boinc! 15:13, 19 September 2006 (UTC):17, 23 September 2006 (UTC) Just a reminder You still have to vote for user:Armando for the foolitzer! After all, he did share the info about the red:47, 27 September 2006 (UTC) - Thanks! you're a good man:58, 27 September 2006 (UTC) - That Mahmoud looks like a real ladies man. I can see him staring in kind of a low rent version of "Stayin' Alive". You know he's got a closet full of red satin shirts...--Sir Modusoperandi Boinc! 14:10, 27 September 2006 (UTC) secret project psstt....take a look here and let me know if you'd like to:56, 28 September 2006 (UTC) UGotM Thanks for helping to defend me from the foolish unhumour of a user who, by now, knows better. Hopefully. Even on a wiki with as widely varying tastes as Uncyc, it's nice to know that there's some semblance of community.--Sir Modusoperandi Boinc! 05:11, 1 October 2006 (UTC) Hey Procope Thanks for the Foolitzer vote. Glad you like my stuff and I wish you coulda seen the red:36, 3 October 2006 (UTC) Why Solid Gold? Thanks for the vote. As for your confusion see "The Mysterious Answer to Your Confusion".--Sir Modusoperandi Boinc! 19:36, 9 October 2006 (UTC) - Ah! Got it. Thanks. Nice work (and thanks for the Hays Code vote and WotM nod).--Procopius 19:48, 9 October 2006 (UTC) - What little flak I caught for American Fundie Magazine was for its, ahem, lack of subtlety, so for this I tried to stretch a bit...and I vote for what I like. As the Poo Lit says, "You're write good".--Sir Modusoperandi Boinc! 19:55, 9 October 2006 (UTC) Defender of the Clown Damn, you brought back some good memories...you must be a veteran gamer as well no? 
Anyway, great piece! I think another nom is on the way....--:49, 12 October 2006 (UTC)
- Thanks! Yeah, getting an Amiga was an unfulfilled dream of my childhood. But now Cinemaware puts the complete Defender of the Crown on its website, and I can happily regress.--Procopius 12:54, 12 October 2006 (UTC)
- I actually made (as a school project) a board game out of it, what a geek I:57, 12 October 2006 (UTC)

Hays Code Mishap

"replacing it with today's current MPAA rating system of G, PG, R and X."? Nowadays, we get PG-13, and X has become NC-17. I guess you don't get out much, but the Marx Brothers totally make up for it. --Donut Buy One!|F@H|MUN|NS|Please Help Me|. 03:33, 16 October 2006 (UTC)
- I should have known that -- anyway, it's fixed. And hey, if you like it, why not vote for it over at VFH?--Procopius 12:06, 16 October 2006 (UTC)

Award

Keep up the good work, and I'm nominated, mind you, as is one of my books. --Hotadmin4u69 [TALK] 02:51, 17 October 2006 (UTC)

God the Wholly Incompetent
- Thanks for the message, Procopius. I understand why you're doing this, but I just want to help. Let's talk about it when you come back. Sir Master Pain (also known as Betty) 20:29, 25 October 2006 (UTC)
- Okay! Sounds good to me. But, wait, does that mean that everyone that already voted would have to vote again? Sir Master Pain (also known as Betty) 23:55, 25 October 2006 (UTC)

This is a persistent problem with this user. Well, *was* a persistent problem with this user. Your articles are safe from massive revert-warring now. Sir Famine, Gun ♣ Petition » 26/10 01:38
- Thank you very much. Oy. I almost negotiated a truce with a troll . . .--Procopius 01:42, 26 October 2006 (UTC)
- So I wasn't the only one...and Famine, I...I love you.--Sir Modusoperandi Boinc! 01:46, 26 October 2006 (UTC)
- I did a bit of reverting, as most of his edits obviously weren't in tune with what others wanted. Feel free to rev back anything I missed. I assumed that one ban for edit-warring would indicate that it was a bad thing. I assumed that two other bans for being a dick would make him sit up and take notice. We'll see if the fourth time is a charm. Sir Famine, Gun ♣ Petition » 26/10 02:05

God the Wholly Incompetent is likely the best article I've seen on this cursed site. If you're female, just stay where you are and wait for me to arrive. If you're male, I'll pay for your sex change. We's gettin' married. --Concernedresident 10:13, 20 March 2009 (UTC)
- Thanks. Unfortunately, I'm a dude.--Procopius 14:00, 20 March 2009 (UTC)
- To be honest it's a lucky escape for you. --Concernedresident 17:10, 20 March 2009 (UTC)

Thanks for the great image on User:Tooltroll/The_Great_Aspie_War_of_Ought_Six

That headline is perfect! That's exactly what we need, stuff that makes this look like it was a real war with real life political consequences. That way, we can parody the fact that the real wars are actually as stupid as what happened in the Aspergers' rant forum. Super work! --Hrodulf 20:17, 27 October 2006 (UTC)
- Thanks! I was aiming for a parody of the famous "Ford to City: Drop Dead" headline, but if it speaks to eternal truths, then, uh, good. :)--Procopius 22:03, 27 October 2006 (UTC)
- I'm a NY'er so I recognized the headline; I was just commenting on the enhanced realism, and therefore humor, your image provided to the article. I'm actually overwhelmed by how well it's turned out so far; it's exceeded my expectations significantly in terms of the quality and the humor. I've never worked with this many other users before, either; that's also very cool. --Hrodulf 22:06, 27 October 2006 (UTC)

Awards

If you keep this up, we're going to have to get you a bigger "awards page"...something nice, with big bay windows and a view of the ocean, maybe?--Sir Modusoperandi Boinc! 19:48, 1 November 2006 (UTC)
- Ah, thank ye kindly for the nod and the support. Being presented with a potato and a trophy does my Irish heart proud.--Procopius 21:15, 1 November 2006 (UTC)
- As a fellow Irishman, I only have one bit of advice: wait a few months before making mead out of it. If you mash it up too quickly, it offends the voters. Later, you can substitute a plastic potato into the trophy, and no-one's the wiser. ;-) ~ T. (talk) 21:26, 1 November 2006 (UTC)
- I was actually gonna make it into a stew, but that's a good point. :)--Procopius 23:55, 1 November 2006 (UTC)

Nom

Thanks for the WotM nom. The last four months certainly have been productive...the odd part is that in September I resolved to do less, as I was starting to feel burned out after Poo Lit. Funny how things work out, eh?--Sir Modusoperandi Boinc! 23:29, 1 November 2006 (UTC)
- Yes, indeed -- I went dark for a few weeks after LBJ elevated into Holy Trinity because of work/lack of creativity, but the pump got primed again. Weird. Anyway, you are more than worthy of the Potato. As is Composure1 -- but I give the nod to you.--Procopius 23:55, 1 November 2006 (UTC)
- I, meanwhile, took a self-imposed three day vacation; off work, no writing/chopping here, and read a book about zombies outside under the warm sun with a cool beer. It did wonders.
- The last quarter has definitely been quite the change from my first six months here, when I mostly tinkered in anonymity with existing pages...I'm a little stunned that there are three of my pages on VFH at the moment, and only one's a selfnom (so I must be doing something right, methinks).
- But enough about me; your WotM was well earned, as were the eight featured pages, and whatever else you've received in the two minutes since I started typing. --Sir Modusoperandi Boinc! 00:07, 2 November 2006 (UTC)

Nom: a very special conclusion

Thanks again for the WotM nom. I've been writing a ton of stuff (most of which isn't finished yet, or is but is on PeeReview, or is but is "pre-loaded" on UnNews and hasn't appeared yet, or is but isn't very good, or is but this aside is going on way too long...etc) to keep up with Squiggle, but his star shines too brightly this month. I hear that he's already overcome with whatever you get overcome with when getting one of them things. He's down to 80lbs and running a 105 fever. Poor kid. I hope he pulls through. I visited him in Uncyc Children's hospital and he asked me to hit him a home run. I gotta, just can't let him down. --Sir Modusoperandi Boinc! 20:30, 30 November 2006 (UTC)
- Yep, Squiggle is deserving. (He's 13, huh? When I was 13, I was thrilled to have EGA graphics. Anyway . . .) But I'll nominate you again first thing I can.--Procopius 20:32, 30 November 2006 (UTC)
- EGA, I remember that. Oh Mechwarrior, we hardly knew ye. I still see Simcity when I close my eyes (Oh, no! Godzilla is attacking my city!). I troubleshot my parents' computer awhile ago (the fault was: "Your father got a new game, and it doesn't all fit on the screen"), and it still had an Oak 256k vga card...ah, good times--Sir Modusoperandi Boinc! 20:56, 30 November 2006 (UTC)

Congrats

I think you're still missing the EGADM award and the foolitzer...but other than that I'd say you got just about everything...:) OH! and congr:16, 2 November 2006 (UTC)
- Well, if you want to nominate me . . . :) Thanks a lot for the vote, too. 'Twas flattering.--Procopius 12:46, 2 November 2006 (UTC)

Em, Trojan War

I changed the Trojan War article a lot since you last read it... could you tell me if you think it is better, or what? I tried to make it less random, but I don't know what other people think now that I changed it around a lot. For one thing, I got rid of Gandhi. Thanks! --The Llama Llover!!!

Sexual harassment

Nice one! But I'd rather take you to dinner,:01, 11 November 2006 (UTC)
- Thanks! Um . . . do we have a sexual harassment policy on Uncyclopedia?--Procopius 18:51, 11 November 2006 (UTC)
- If not, we should. There's an idea for yet another ignorable policy. By the way, you smell amazing today. --:52, 11 November 2006 (UTC)

Blatantly soliciting feedback.

Since I know you've expressed interest in my Presidential Election articles in the past, I'd like to know what you thought of the new one. You see, I'm a very impatient person, and the folks at Pee Review just aren't responding fast enough for my tastes. And yes, I know I'm an asshole, but I'm also a Republican, and they require you to be an asshole to get in the club. --Kwakerjak 03:58, 22 November 2006 (UTC)
- Hee hee, bitch-slapping Harold Stassen -- I enjoyed it quite a bit. I might put more in about the Thurmond and Wallace campaigns, although that could be distracting from the narrative you've built up here. I liked it. If you do go the Thurmond route, I give you permission to use images from this article.--Procopius 13:33, 22 November 2006 (UTC)

Your request

I've made an image for you (not made, more like, photoshopped). Please check it out on RadicalX's corner. It's gneomI 12:30, 26 November 2006 (UTC)

Congrats for whatever it was that you won this month

Checks above Proc's fireplace...ah...no...Foolitzer! Congrats on the Foolitzer. You earned it. So, what's left for you, Benson of the Month? Employee of the Month?--Sir Modusoperandi Boinc! 20:57, 1 December 2006 (UTC)

Jew of the 22:55, 24 December 2006 (UTC)

Hey there

I was wondering, since you've won one already, if you'd be interested in judging in the PLS this go around? Please let me know one way or the other at my talk page (as I've been known to completely forget about these things).--<< >> 00:42, 3 December 2006 (UTC)

Bucket of Piss!

Damn you and all of the good that you have brought to Uncyclopedia. Damn it all! --Hotadmin4u69 [TALK] 02:45, 8 December 2006 (UTC)

Wow

I just read Battle of Gettysburg and I have to tell you, it is possibly the finest article I've read yet on Uncyc. --Super90 21:33,:39, 14 December 2006 (UTC):58, 18 December 2006 (UTC)

PLS

!!! Your judge packet, sir. :)--<< >> 21:57, 17 December 2006 (UTC)

Heh

Muchos thankos for writing me in to the Uncyclopedian Christmas Tale. It's the sort of thing I'd feel too awkward to do myself, but at the same time feel sad for not appearing in. ;-) --Sir Todd GUN WotM MI UotM NotM MDA VFH AotM Bur. AlBur. CM NS PC (talk) 01:02, 18 December 2006 (UTC)

3 Thessalonians and the Church of God the Wholly Incompetent
- Procopius, you might want to check out the 3 Thessalonians page. The epistle is now part of the official canon of the Church of God the Wholly Incompetent. Have a look; I hope you enjoy. The Humbled Master 20:53, 22 December 2006 (UTC)
- 3 Thessalonians is now featured on the front page as being written on this day, and I have just nominated it for article of the day. Enjoy! The Humbled Master 02:09, 23 December 2006 (UTC)

Merry Christmas

If you are another child that thinks they need a present, leave a message here (Santa never forgets, but he is getting on a bit.) Ho Ho Ho from Santa Claus 15:21, 23:00, 24 December 2006 (UTC)

A Reward from the Humbled Master

For you, my Saviour...
Hello,

Can you lend me your Python programming skills? I think this will be an easy one. I will try to be quick.

What I have: a stack of images, and a macro that does the job for a folder with the individual images (not a stack).

What I wish I had: the program to do the job with a stack instead of individual images.

What this all means: I have bone images (slices) with a thickness of 0.041 mm and I would like to have slices with a thickness of 1 mm (and 3 mm), too. For this I would like to make a new stack out of the sum of x images (to set the new thickness) from the original stack, increasing the thickness of the slices. That means, to get a 1 mm thick slice I would need x to be 24.

The macro (I did not write this myself and I am still looking for where I got it from — please excuse my stupidity):

    from ij import IJ, ImageStack, ImagePlus
    from ij.plugin import ZProjector, FolderOpener

    view_size = 24

    # Open all images from a folder
    #opener = FolderOpener()
    #opener.run("")

    # Get image stack info, and create a stack to receive the merged images
    tif_stack = IJ.getImage()
    image_x_size, image_y_size, null0, stack_size, null1 = tif_stack.getDimensions()
    new_stack = ImageStack(image_x_size, image_y_size)

    # Create the Projector that will process the stack portions
    zp = ZProjector(tif_stack)
    zp.setMethod(ZProjector.AVG_METHOD)

    # Now create a slice for each block of size view_size, and add it to the new_stack
    for view in range(stack_size - view_size + 1):
        zp.setStartSlice(view + 1)
        zp.setStopSlice(view + view_size)
        zp.doProjection()
        new_stack.addSlice(zp.getProjection().getProcessor())

    # Create and show the final image stack
    final_stack = ImagePlus("imp", new_stack)
    final_stack.show()

Thank you for taking a bit of your time for me. I really appreciate it.
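One thing worth noting about the macro above: its loop advances one slice at a time, so it produces overlapping (sliding-window) projections rather than non-overlapping thick slices. For re-binning into 1 mm slices, the window should step by view_size instead (and ZProjector also has a SUM_METHOD constant if a true sum rather than an average is wanted). The grouping arithmetic can be sketched in plain Python, with numbers standing in for 2-D slices; rebin_stack is a hypothetical helper name for illustration, not part of the ImageJ API:

```python
def rebin_stack(slices, block):
    """Sum consecutive non-overlapping groups of `block` slices.

    A trailing partial group (fewer than `block` slices) is dropped,
    mirroring a projection over full blocks only.
    """
    n_full = len(slices) // block  # number of complete blocks
    return [sum(slices[i * block:(i + 1) * block]) for i in range(n_full)]

# 10 thin slices re-binned into thick slices of 4 -> two full blocks remain.
print(rebin_stack(list(range(10)), 4))  # [6, 22]
```

In the real macro, the same indexing would translate to `zp.setStartSlice(i * block + 1)` and `zp.setStopSlice((i + 1) * block)` inside a loop over full blocks.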
lazy-reload 1.1

The Lazy Python Reloader

This is one way to control what happens when you reload. Modules are reloaded in the same order they would be if they were being loaded for the first time, and for the same reasons, thus eliminating some of the unpredictability associated with circular module references.

Usage

    from lazy_reload import lazy_reload
    import foo.bar.baz
    lazy_reload(foo)
    from foo.bar import baz  # <= foo, bar, and baz reloaded here

Motivation

The problems with reloading modules in Python are legion and well-known. During the course of ordinary execution, references to objects in the modules and to the modules themselves end up distributed around the object graph in ways that can be hard to manage and hard to predict. As a result, it's very common to have old code hanging around long after the reload, possibly referencing things you expect to have reloaded. This is not necessarily Python's fault: it's just a hard problem to solve well.

As a result, most applications that need to update their code dynamically find a way to start up a new Python process for that purpose. I strongly recommend you do that if it's an option for you; you'll save yourself lots of debugging headaches in the long run. For the rest of us, there's lazy_reload.

What Python's __builtin__.reload Does

The reload() function supplied by Python is very simple-minded: it causes the module's source file to be interpreted in the context of the existing module object. Any attributes of the module that aren't overwritten by that interpretation remain in place. So, for example, a module can detect that it's being reloaded as follows:

    if 'already_loaded' in globals():
        print 'I am being reloaded'
    already_loaded = True

Also, Python makes no attempt to update references to that module elsewhere in your program. Because the identity of the module object doesn't change, direct module references will still work. However, any existing references to functions or classes defined within that module will still point to the old definitions. Objects created before the reload still refer to outdated classes via their __class__ attribute, and any local names that have been imported into other modules still reference their old definitions.

What lazy_reload Does

lazy_reload(foo) (or lazy_reload('foo')) removes foo and all of its submodules from sys.modules, and arranges that the next time any of them are imported, they will be reloaded.

What lazy_reload Doesn't Do

It doesn't eliminate references to the reloaded module from other modules. In particular, having loaded this:

    # bar.py
    import foo
    def f():
        return foo.x

the reference to foo is already present in bar, so after lazy_reload(foo), a call to bar.f() will not cause foo to be reloaded even though it is used there. Thus, you are safest using lazy_reload on top-level modules that are not known to other parts of your program by name.

It doesn't immediately cause anything to be reloaded. Remember that the reload operation is lazy, and only happens when the module is being imported.

It also doesn't cause anything to be "unloaded," nor does it do anything explicit to reclaim memory. If the program is holding references to functions and classes, don't expect them to be garbage-collected. (Watch out for backtraces; information from the last exception raised can keep things alive longer than you'd like.)

It doesn't fold your laundry or wash your cats. If you don't enjoy these activities yourself, consider the many affordable alternatives to pets and clothes.

- Author: Dave Abrahams
- License: Boost License 1.0
- Categories - Package Index Owner: bewst
- DOAP record: lazy-reload-1.1.xml
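The sys.modules bookkeeping that the package description sketches can be approximated in a few lines. The following is a hypothetical re-implementation of the core idea (not the package's actual code), using synthetic module objects so it runs standalone; note the prefix check with a trailing dot, so that a sibling module with a similar name survives:

```python
import sys
import types

def purge_package(name):
    """Drop a top-level package and all of its submodules from sys.modules,
    so the next import of any of them re-executes the module source.
    (Sketch of lazy_reload's core idea, not its real implementation.)"""
    prefix = name + "."
    for mod_name in list(sys.modules):  # copy: we mutate while iterating
        if mod_name == name or mod_name.startswith(prefix):
            del sys.modules[mod_name]

# Demo with a synthetic module graph.
sys.modules["foo"] = types.ModuleType("foo")
sys.modules["foo.bar"] = types.ModuleType("foo.bar")
sys.modules["food"] = types.ModuleType("food")  # similar prefix, must survive
purge_package("foo")
print("foo" in sys.modules, "foo.bar" in sys.modules, "food" in sys.modules)
```

As the description says, this only forces a re-import on the *next* import; any names already bound elsewhere keep pointing at the old objects.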
In response to my previous article, some folks have been asking about the JIT optimizations I listed, as well as a lot of other interesting questions. I'm not sure I can address all of the questions here. But on the topic of JIT optimizations, I can provide more insight on what they are as well as why hardware cannot implement them.

Before I get started, just to be clear, I'm not personally against hardware Java processors. I certainly think that they fit nicely in some domains. I am also not against any vendors who make Java processors out there. I applaud them for serving the needs of a market that a JIT may not fit. Also, just because a JIT fits doesn't mean that it is always the best solution to deploy. In a previous article, I've made the case that engineering decisions should always be made on a case by case basis. A "one size fits all" mentality can work, but may not always yield the best solution.

However, I do want to debunk the myth that a hardware processor can be faster than an optimizing JIT. But, of course, the JIT isn't free. There is some cost to it in terms of CPU cycles and memory, though it is often a lot less than most people believe. I will address the JIT cost issue in a future article. For today, let's look at JIT optimizations. Since I work on the phoneME Advanced VM for CDC (aka CVM), along the way, I'll point out if these optimizations are available in CVM as it exists today (for those who are interested in CVM details).

Resources: When is Software faster than Hardware?

JIT Optimizations

In my last entry, I rambled off a random list of JIT compiler optimizations. The list is by no means comprehensive nor necessarily indicative of the most desirable optimizations to have in a JIT. Previously, I have explained how more performance isn't always a good thing. Each optimization comes with a cost of some sort. The VM/JIT engineer must weigh the cost against the benefits in choosing to include or leave out an optimization.
That said, let's go over the optimizations I've already mentioned as examples to illustrate why a JIT has the advantage over Java processors when performance is the criterion of comparison. The list again is: inlining, constant folding, loop unrolling, loop invariant hoisting, common subexpression elimination, and intrinsic methods.

Inlining

Consider this example:

    public class MyProperty {
        protected int value;
        public int getValue() {
            return value;
        }
    }

    public class User {
        public void doStuff(MyProperty p) {
            System.out.println(p.getValue());
        }
    }

This example shows a common coding pattern in the Java programming language, i.e. the use of getter/setter methods to access private data. This is done to achieve better encapsulation. We use getter methods like getValue() because accessing fields like value directly would introduce a whole slew of software engineering problems which I won't go into here.

While using a getter method is good for encapsulation, it is bad for performance because you will have to incur the cost of a method call. The cost of a method call includes pushing arguments (e.g. the this pointer), setting up and tearing down a stack frame for the target method (getValue() in this case), and popping the return value off the stack. Inside the target method, there's also the added cost of more pushing and popping of operands and results. In this trivial example, we only need to push the result inside getValue(). In a more complex example, there can be other costs not shown here. This cost adds up to somewhere between 10s and 100+ machine instructions. Note that these instructions are all method overhead. The getValue() method still has to do the real work of accessing the field, which can take as little as 2 machine instructions.
To deal with this, when compiling doStuff(), a JIT compiler would inline the call to getValue() to effectively get the following code:

    public void doStuff(MyProperty p) {
        System.out.println(p.value);
    }

In so doing, you still get the benefits of encapsulation (in the source code, at development time) for good software engineering practice, but also get optimal performance (at runtime) as if you accessed the field directly. All the cost of method invocation is removed. The access to p.value takes less than 10 instructions. Compare this with the extra 10s to 100+ instructions needed to do the method invocation.

Note that I said earlier that getValue() could do the work of accessing the field in as little as 2 instructions. But in the inlined case, I said it will take less than 10 instructions. Why the discrepancy? Well, getValue() is a virtual method. Hence, there may be some added cost to check whether we're actually going to end up invoking MyProperty.getValue() as opposed to an overriding method in a subclass. This is the reason for the 10 or so instruction estimate. However, in the case where this method is not overridden, the JIT can truly optimize this down to the minimum 2 instructions.

I pointed out the added complexity of dealing with virtual methods because I want you to understand that there is more to doing inlining correctly than meets the eye. There are many other details to the implementation of inlining that I can't go into here.

Hardware Method Invocation

Now, let's consider the Java processor (JPU). When executing doStuff(), the JPU will encounter an invokevirtual bytecode where it tries to call getValue(). By definition, the JPU will treat the invokevirtual bytecode as its machine instruction and execute it. However, the JPU won't know how a VM structures its stack. Hence, it will need to trap to software to do all the work that I pointed out above as overhead.
One might argue that a really advanced JPU will define the stack structure, and the VM software will just have to conform to that so that the hardware will know how to push and pop a frame itself. But even without the stack issues, there are a few other things that make it really hard for the JPU to do a method invocation purely in hardware.

For one, the invokevirtual bytecode specifies an index into the class constant pool (CP). The JPU will also need to be able to understand the structure of the CP. But the class constant pool has symbolic references to the method to be invoked. These will need to be resolved first. Resolution will trigger class lookup. In the case of invoking static methods, resolution can trigger classloading, class initialization, garbage collection, and exceptions being thrown. As you can see, invoking a method is not a trivial thing. It would take a seriously advanced and extremely complex JPU to do method invocations in hardware.

Note, you don't actually have to do classloading, garbage collection, etc. in hardware in order to do method invocations in hardware. You just need to be able to find some way to trap to these when the hardware can't handle it. If the JPU can just execute the common invocation cases in hardware (and leave the rest to software), then that's a big win. However, in order to achieve this, in addition to having to specify the stack structure, the JPU will at least also have to specify a constant pool structure that the hardware can understand.

Using Miraculous Hardware

Now, let's grant you that the hardware designer is relentless and gives you all that. With that, the JPU will still have to execute the method invocation, which involves all the overhead I pointed out. Executing it in hardware doesn't mean that the overhead is gone. The work done in the overhead incurs a lot of memory accesses. What is the chance that you will never have a cache miss?
And if you have a large enough cache to make cache misses improbable, then what would it take to be able to move multiple words of data (for the arguments, stack frame values, and result) around the cache without incurring multiple machine cycles? Chances are, the number of cycles incurred by the JPU will be non-zero. Now compare that with the JIT, where that cost can be 0. There's no beating inlining when it comes to performance.

If you're still an optimist for the JPU, the next thing you may ask is whether we can have the JPU do inlining too. But remember what I said about having to do a check in some cases when we're dealing with inlining virtual method calls (not to mention the other complexities that I did not talk about)? It will be a whole lot of extra work to be able to handle all those cases in hardware. Yes, theoretically, anything one can do in software, you can also do in hardware. But doing it in hardware is significantly more difficult and costly (in terms of hardware design, manufacturing, etc.) compared to a software solution. So, a real-world JPU would probably trap to software to do method invocation. At best, it can do something to help the software do less work, but not reduce the work to 0 as a JIT can in this case.

Inlining is available in the CVM JIT.

Constant Folding

Consider this example:

    public class O1 {
        public static final int OFFSET = 5;
    }

    public class O2 {
        public static final int OFFSET = 3;
    }

    public class MyClass {
        int calcValue(int v1, int v2) {
            return (v1 + O1.OFFSET) + (v2 + O2.OFFSET);
        }
    }

The JIT can effectively compile calcValue() into:

    int calcValue(int v1, int v2) {
        return (v1 + v2 + 8);
    }

Constant folding is basically an optimization where we fold the constants together to reduce the amount of work that needs to be done to compute a result. In this case, the JIT takes advantage of the algebraic properties of addition and pre-adds the 2 constants together instead of having to do it every time this method is called.
Hence, only 2 add operations are needed when the method is called. A JPU, by definition, will execute its instructions, which are the bytecodes. In this case, the bytecodes for the constants will include pushing 2 constants and doing 3 additions. With the possible hardware feature where the top N operands of the stack are mirrored in registers, the JPU can avoid some of the pushing and popping cost. However, it still needs to initialize the values of those registers. Compared to the JIT, the JPU will incur these additional register initialization costs plus one extra addition. The JIT can not only eliminate the add, but also encode the constant (in this case, the value 8), if it is not too big, into one of the add instructions. This allows it to avoid the register initialization altogether.

OK, you may ask: won't javac be smart enough to do the constant folding when the Java source code is compiled into bytecode? Maybe. I didn't check. In practice, constant folding usually becomes more meaningful when used in conjunction with inlining. Inlining may yield opportunities for constant folding that don't exist at the source level. For example:

    int adjustValue(int value) {
        return value + 5;
    }

    int adjustMore(int value) {
        return adjustValue(value) + 3;
    }

After inlining adjustValue() into adjustMore(), the JIT can also fold the constants as follows:

    int adjustMore(int value) {
        return value + 8;
    }

Some types of constant folding are available in the CVM JIT. In practice, constant folding has not yielded a lot of performance gains in real-world benchmarks. Hence, accordingly, we didn't put a lot of effort into implementing every possible type.

Loop Unrolling

Consider this example:

    int a = ... // some value.
    for (int i = 0; i < 3; i++) {
        a = a + i;
    }

The anatomy of the above loop includes the following operations:
1. initialize the iterator i to 0.
2. check to see if the iterator has exceeded the limit (i.e. 3).
3. execute the addition within the loop.
4. increment the iterator.
5. branch back to the top of the loop.

Again, by definition, a JPU will execute the bytecode as its own native instruction set. Since the bytecode basically expresses the above operations, the JPU will execute the above steps 3 times. With loop unrolling, the JIT can compile the above code fragment into the following:

    int a = ... // some value.
    int i = 0;
    a = a + i;
    i++;
    a = a + i;
    i++;
    a = a + i;

With a little extra smarts, the JIT can further optimize the above to:

    int a = ... // some value.
    a = a + 0; // i is 0.
    a = a + 1; // i is 1.
    a = a + 2; // i is 2.

Add constant folding:

    int a = ... // some value.
    a = a + 3; // 0 + 1 + 2.

Loop unrolling, in and of itself, works to remove loop overhead like the branch back to the top of the loop, and possibly the iterator incrementing, as well as the limit check. But when combined with other optimizations, as we can see, the performance gains can be dramatic. It's not possible for the JPU to implement this optimization because, by contract, the JPU needs to execute the bytecodes as specified.

In practice, loop unrolling is not as trivial as the example shown above. Consider what happens if the loop iterator limit is a variable (as opposed to a constant) that is passed into the method. How many iterations do we unroll the loop into then? Alternatively, what if the limit is a very large constant? Unrolling all the way to the limit can result in some serious code bloat, which in turn reduces cache locality and can hurt performance. What if the code inside the loop can throw an exception, e.g. indexing into an array beyond its bounds? I won't go into the details of how a JIT deals with all these. I just want to point out that there is a lot of extra complexity to this optimization beyond what is initially apparent.

Loop unrolling is not currently available in the CVM JIT.
It is not easy to implement, and it is not an important nor cost-effective optimization to implement for the CDC space, based on our previous experience. That's not to say that things won't change in the future.

Loop Invariant Hoisting

Consider this example:

    void foo(int[] data) {
        int a = ... // some value.
        for (int i = 0; i < data.length; i++) {
            ...;
        }
    }

In the above example, the length of the array is fetched in every iteration of the loop. If the JIT can determine that the array data won't change in length inside the loop, we can hoist the fetching of its length outside of the loop so that we don't incur the cost repeatedly for each iteration. The JIT effectively emits code that does the following:

    void foo(int[] data) {
        int a = ... // some value.
        // pre-fetch the array length into a register:
        int tempReg = data.length;
        for (int i = 0; i < tempReg; i++) {
            ...;
        }
    }

This type of optimization is called loop invariant hoisting. In the JIT's case, fetching the array length requires accessing the array's data structure in memory (and memory accesses are expensive). Prefetching it into a register allows the JIT to avoid this cost on every loop iteration. The JPU, on the other hand, has to execute the bytecode verbatim. As a result, it will fetch the array length on every loop iteration.

More advanced cases of loop invariant hoisting include interactions with inlining. Let's say the body of the loop invokes some method that gets inlined. If the method happens to perform some operation that is invariant, that operation can be hoisted out of the loop to avoid unnecessary redundant work. This is, of course, not possible for the JPU to implement because of the inlining issues.

Loop invariant hoisting is not currently available in the CVM JIT. It isn't easy to implement in a generic way. Again, it isn't the most important optimization to have for applications in the CDC space.
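As an aside, the payoff of hoisting an invariant computation can be demonstrated in any language. Below is a hedged Python analogue (my own illustration, not CVM code or JIT output): the first version re-evaluates len(data) in the loop condition on every iteration, while the second hoists it into a local once. Both compute the same result; the hoisted form simply does one built-in call instead of one per iteration.

```python
def sum_with_invariant_in_loop(data):
    total = 0
    i = 0
    while i < len(data):  # len(data) is re-evaluated on every iteration
        total += data[i]
        i += 1
    return total

def sum_with_hoisted_invariant(data):
    total = 0
    i = 0
    n = len(data)  # invariant hoisted out of the loop: evaluated exactly once
    while i < n:
        total += data[i]
        i += 1
    return total

data = list(range(100))
print(sum_with_invariant_in_loop(data))  # 4950
print(sum_with_hoisted_invariant(data))  # 4950
```

A JIT performs this transformation invisibly on the compiled code; in hand-written interpreted code, the programmer has to do the hoisting by hand to get the same effect.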
Common Subexpression Elimination

Consider this example:

    int a = p.value + p.value;

The bytecodes for the above include 2 fetches of the field value from the object p. Field accesses result in memory accesses, which can be expensive. The JIT recognizes that the above code can be expressed as follows:

    int tempReg = p.value;
    int a = tempReg + tempReg;

In this case, the fetching of the field is a subexpression of the addition expression. The JIT eliminated one subexpression by fetching the field only once and reusing its value as the second operand in the addition. In this case, it saves one memory access. This optimization is called common subexpression elimination (aka CSE). In contrast, a JPU will have to execute the bytecode verbatim and do the field access twice.

The above is only a very simple form of CSE. More complex forms exist, and those take a lot more effort to implement in the JIT. Some block-local types of CSE are available in CVM's JIT.

Intrinsic Methods

Consider the following example:

    ... time = System.currentTimeMillis(); ...

The JPU will execute the above as a method invocation of a native method that gets the system's millisecond timer value. Let's say we have a system where the milliseconds timer is a 64-bit hardware timer/counter that is memory mapped. In other words, software can read from it directly at some address in memory.

A JIT can take advantage of this knowledge. Instead of emitting code that invokes the System.currentTimeMillis() method, it emits a single memory load from the location of the hardware timer. The gain here is that we need not incur all the method call overhead, as well as other costs for invoking a native method (see Beware of the Natives). In other words, the JIT can eliminate many hundreds of machine cycles down to a single 64-bit memory access. This optimization is called intrinsifying the method, or using intrinsic methods.
The idea is basically that there are certain standard library methods whose semantics/behavior the JIT knows. This special knowledge allows the JIT to emit code that implements the semantics of the method without doing an actual method call, or alternatively, to do the method call in a less expensive manner. Intrinsics are also one way that the JIT can make use of special hardware features instead of calling a software method. For example, Math.cos() can be replaced with a cos instruction if the hardware provides such a feature.

A JPU can't implement this optimization because it has to execute the invoke bytecode as specified. There's also the hurdle of needing to understand the VM's constant pool structure, and having to deal with resolution, class initialization, etc., that I mentioned earlier. At the least, a JPU cannot afford to implement as many intrinsics (in number and types) as a JIT.

Intrinsics are available in CVM's JIT.

Closing Thoughts

Again, theoretically it is possible to implement any software feature in hardware. However, the cost of doing so makes it impractical, and therefore effectively impossible. Also, so far, I've been saying that a JPU can't implement all these optimizations because it has to execute the bytecodes verbatim. You might ask: why can't the JPU solution employ some sort of code transformation like the JIT, so that it doesn't have to execute bytecode in a simple-minded way, i.e. verbatim? Well, if you do that, then what you have is a JIT. Code transformation is what a JIT does. It transforms bytecodes into a form that is optimal for the CPU to execute. Hence, by definition, a JPU (without a JIT) must execute the bytecode verbatim, and consequently, will not be able to implement JIT-type optimizations.

Another reminder: the above is only a sampling of possible JIT optimizations.
This list is neither exhaustive nor representative of all the most important / cost-effective optimizations that a JIT can implement, though some of these are really important. Inlining is one that yields a lot of performance gain without too much cost when applied in a JIT.

OK, time to stop. I hope this article helps shed some additional light on this topic. Have a nice day. :-)

BTW, regarding JavaOne, I will probably be there on one or more days. If folks are interested in getting together to have a little technical discussion, I'd be happy to oblige (assuming schedules will allow it).

Hi, Many of the optimizations you present, apart from inlining, which requires knowledge of the runtime environment because new classes could have been loaded, are classical and are already implemented in static compilers like gcc. Could we think that Sun, for unknown reasons, had not implemented them in javac to get simpler bytecode? I imagine that the JIT compiler uses pattern matching to decide where it can apply a JIT optimization, and the simpler the bytecode is, the easier it finds instruction patterns to match. If these kinds of optimizations (loop unrolling, constant folding...) had been applied in the bytecode generation by javac, the JPU would have a simpler job. And on other platforms, I think that the JIT compiler would not spend 5% of the CPU time but only 3%, to apply the remaining inlining optimization. JIT compilers would be simpler to write, so use less runtime memory, and Java applications would start quicker... A problem with JIT compilers is that they do the same optimizations again and again, at every new run. If they had some kind of memory of the previous runs, the optimization decisions would be more straightforward.

Posted by: genepi on February 16, 2007 at 09:47 AM

Great and very clear explanation of some difficult subjects!
About loop unrolling with a variable, I understood Java SE can do something like this (which is probably not an option for ME since it increases code size):

    for (int i = 0; i < x; i++) {
        doSomething with i..;
    }

So the JIT would convert it to something like this, which can potentially reduce conditional statements by a factor of 3, at the cost of one extra check:

    if (x % 3 == 0) {
        for (int i = 0; i < x; i++) {
            doSomething with i..;
            i++;
            doSomething with i..;
            i++;
            doSomething with i..;
        }
    } else {
        for (int i = 0; i < x; i++) {
            doSomething with i..;
        }
    }

I also noticed this Sun announcement which gives the best of both worlds, Jazelle acceleration and a JIT. Does the CVM support something like this?

Posted by: vprise on February 16, 2007 at 10:01 AM

Dear genepi,

Yes, most of those are classical optimizations (discussed in most compiler textbooks). I don't know the reason why javac does not apply these. One reason may be to preserve symbolic information in the bytecodes. But there are other cases where javac could have optimized the bytecodes but did not. I'm sure the javac team had reasons for doing so. I'm just not aware of them.

But I think you missed an important point: even an optimizing javac can only go so far in applying these classical optimizations. Inlining is determined at runtime, and inlining yields opportunities for applying these classic optimizations where javac could not apply them before. Hence, there can be a good reason to support these in the JIT.

As for the complexity of JITs and the amount of work they do, there are different tradeoffs made in each JIT implementation (for CLDC, CDC, and JavaSE). It is true that it is possible to write really simple JITs, but such JITs may not yield as much performance. It's all a tradeoff.

Regards, Mark

Posted by: mlam on February 16, 2007 at 12:24 PM

Dear vprise,

Thanks for your more complete example of how loop unrolling is done. I was aware of it. As for Jazelle support on CVM, I am not at liberty to comment on that. Sorry.
If there are parties who are interested in this feature, they should inquire with their Sun sales rep.

Posted by: mlam on February 16, 2007 at 12:39 PM

I would also add devirtualization and speculative devirtualization to the list.

Posted by: olegpliss on February 16, 2007 at 04:31 PM

Dear Mark, I would also be very interested in an explanation of the drawbacks of caching JITs; if you could go into that in a future post, that would be great! I recall Symantec had something like that in the early days of Java and I still don't understand why this hasn't caught on. My only guess is that it is due to space constraints in devices and complexity on the PC.

Posted by: vprise on February 16, 2007 at 10:50 PM

Hi Mark, thanks for continuing this thread -- I love these blog entries, both informative and useful. You write well, please keep it up! A question that's interested me for a while now is the impact of the final keyword on VM optimizations. I believe that early on it was recommended to use final liberally, because it allowed the VM to optimize certain algorithms (e.g. no need to worry about a new subclass appearing). However, I believe that this is no longer recommended best practice -- can you comment on how/if final actually helps the CVM in optimizing code? Thanks, Patrick

Posted by: pdoubleya on February 19, 2007 at 07:37 AM

Hi Patrick, since Mark didn't answer this I'll take the liberty. I have no idea whether CVM implements it, but Java SE has a feature where HotSpot automatically marks classes as final. So if at runtime a class is detected to have no subclasses, it is marked as final without much of a performance penalty. A JavaOne presentation from a couple of years back (I forget which) showed that using final doesn't get you much of anything in terms of performance. Maybe this is different for Java ME though.

Posted by: vprise on February 19, 2007 at 11:11 PM

For JavaOne you could grab some time in the java.net pavilion.
They've got chairs and a couch -- good for discussion. Or you could take over part of a local bar.

Posted by: dwalend on February 20, 2007 at 09:55 AM

Hi Patrick (pdoubleya),

Regarding the final keyword, I think vprise is correct. I haven't thought through the issues in great detail, but on the surface, for methods, I don't think the final keyword adds much. The devirtualization and speculative devirtualization optimizations that my colleague Oleg mentioned will make non-final methods look like final ones. CVM implements these optimizations as well (and so does JavaSE HotSpot). Hence, whether you specify final or not, it may not make a big difference.

In general, my advice to folks who are writing Java code is that they should primarily be concerned with writing good code based on sound software design (i.e. judicious use of OO principles and good design patterns, avoiding anti-patterns, etc.) instead of worrying about whether the VM or JIT will optimize something or not. You can never be sure what VM your code will be deployed on. So, it's unwise to make tradeoffs based on the expected behavior of one VM or another. That's not to say that one should write bad code and just expect the VM to fix its performance problems. But something fine-grained like final is probably not going to make a lot of difference. So, use final when your software design intends the method to be final, and not because of any potential performance gains.

Thanks for the good question.

Posted by: mlam on February 21, 2007 at 09:54 AM

Constant folding is supported by the JLS, since modifying a static final field is not binary compatible.

Posted by: konrad_schwarz on March 02, 2007 at 06:39 AM

Could you comment on the usefulness of the "Jazelle" instruction set extension for ARM?

Posted by: konrad_schwarz on March 02, 2007 at 06:40 AM

Hi Konrad,

Thanks for your clarification about the JLS. I recall that that was the case, but was too lazy to look it up when I was writing the entry.
Regarding Jazelle, I'm not sure I am at liberty to comment on details due to various restrictions. However, in this article (as well as the previous and the next), I tried to outline the technical considerations that one would make (from the perspective of constraints in any Java implementation) when considering any JPU. I also talked about the downsides as well as some upsides based on the general principles of a JPU. I hope that this will give you (the developer) one side of the information you will need to make an informed decision. Of course, the other bit of information you will need is the JPU's specific features and if/how they overcome some of these constraints. The usefulness of any JPU will depend on these factors, as well as on how well it addresses your requirements in terms of performance, startup time, memory usage, power conservation, etc. I'll have to leave that determination to the individual developer, as requirements can vary.

Posted by: mlam on March 02, 2007 at 12:59 PM

Hi Mark, sorry if my thoughts were not correctly interpreted. I don't want to reproduce a static compiler inside the JVM. I think that the JVM optimizer could make better decisions if it remembered the context, and not necessarily the compiled result, from previous runs. Much like profiling information, but for runtime use. For instance, when it starts, if the JVM can obtain from a previous run the list of hot-spot candidate methods, it won't have to wait a few thousand CPU cycles to decide which methods it should optimize first. I know that we can't naively keep an image of the previous compilation result (or a JVM dump) and start a new run from it. There are so many parameters, like static initializers or changes in the classpath/class loader for instance, that make the process more difficult than with a static compiler. But there are also patterns which can be detected, like some final classes for which the JVM could reuse compiled code from the previous run...
I think the static Intel C++ compiler has such an option. You start your application with a special runtime flag and it generates a profile dump. Then you submit this dump to the compiler, which recompiles and optimizes your application code for real-life use. So even static compilers need dynamic information to do the best optimizations! Is there some work done in the JVM team to share dynamic and static information for optimization decisions?

Posted by: genepi on March 05, 2007 at 11:02 AM

Hi genepi,

Thanks for the clarification. You are correct that such information can be used as compilation hints for the JIT. I am still wary of the potential problem of over-eager compilation, though. By over-eager, I mean that compilation may take place before the method has had adequate time to warm up. Warming up, in this case, can mean more than a single run. It requires that all critical code paths be exercised before compilation. Otherwise, those code paths will get sub-optimal code generated for them.

Regardless, I can see how your idea can help. It is at least worthy of some exploration (for possible refinement) and experimentation to get empirical data on how beneficial it is. There's always the chance that this may not make any noticeable difference. But the idea is interesting enough to warrant an investigation. As for work done on this, I am not personally aware of such work in CVM, though it is entirely possible that others at Sun have already attempted this. I would be tempted to explore this as soon as my schedule frees up a little. But since this is open source, anyone who is able and willing is welcome to try this if I don't get to it first (which may not be for a long time yet).

Posted by: mlam on March 05, 2007 at 11:29 AM
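The loop-unrolling transformation vprise described in an earlier comment can be written out as runnable Java. This is purely an illustration (the class and method names are made up); it uses a clean-up loop for the leftover iterations, which is slightly more general than the x % 3 == 0 test in the comment:

```java
// Illustration of 3x loop unrolling: sum() is the original loop,
// sumUnrolled() is roughly the shape a JIT could generate, testing
// the loop condition once per three iterations.
public class UnrollDemo {
    static int sum(int x) {
        int total = 0;
        for (int i = 0; i < x; i++) {
            total += i;
        }
        return total;
    }

    static int sumUnrolled(int x) {
        int total = 0;
        int i = 0;
        for (; i + 2 < x; i += 3) {   // 3 iterations per condition test
            total += i;
            total += i + 1;
            total += i + 2;
        }
        for (; i < x; i++) {          // clean-up loop: 0-2 leftover iterations
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        for (int x = 0; x <= 100; x++) {
            if (sum(x) != sumUnrolled(x)) {
                throw new AssertionError("mismatch at x=" + x);
            }
        }
        System.out.println("unrolled loop matches original");
    }
}
```

The point of the transformation is the same as in the comment thread: fewer evaluations of the loop condition per useful iteration, at the cost of larger generated code.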
http://weblogs.java.net/blog/mlam/archive/2007/02/software_territ_1.html
In this blog post, we will be discussing how to use tf.variable_scope in TensorFlow 2.0. We will go over the syntax and a few examples of how to use this function.

What is tf.variable_scope?

tf.variable_scope is used to share variables between different parts of the graph. (It is a TensorFlow 1.x API; in TensorFlow 2.0 it is available as tf.compat.v1.variable_scope.) It allows you to create new variables and access existing ones. It also keeps track of the variables in the current scope, so that you can reuse them later.

What are the benefits of using tf.variable_scope?

TensorFlow 2.0 offers many benefits over previous versions of the software, including improved performance and ease of use. Variable scope makes it easier to manage variables in TensorFlow programs: it lets you create new variables and access existing ones in a more organized way, and it helps prevent collisions between the names of different variables. In general, using variable scope can make your TensorFlow programs more readable and easier to debug.

tf.variable_scope is used as a context manager in a "with" statement, which helps ensure that the scope is properly closed when it is no longer needed. For example:

    with tf.variable_scope("foo"):
        v = tf.get_variable("v", [1])
    # the "foo" scope ends here when the with block exits

In this example, the "v" variable is created within the "foo" variable scope, so its full name is "foo/v:0". If you try to create another variable with the same name in the same scope (without reuse), an error will be raised.

How to use tf.variable_scope in TensorFlow 2.0

The tf.variable_scope API makes it easier to create and manage variables in TensorFlow, and can be helpful in a number of different situations. In this tutorial, we'll show you how to use tf.variable_scope in TensorFlow 2.0, and how it can be helpful in managing variables in your models.
What are some best practices for using tf.variable_scope?

There are a few best practices to keep in mind when using tf.variable_scope:

1. Use tf.variable_scope to share variables between different parts of your code. This can make your code more modular and easier to understand.

2. Be explicit about reuse. By default, tf.get_variable raises an error if a variable with the same scope-qualified name already exists; pass the "reuse" keyword argument to the scope when you explicitly want to share an existing variable. Sharing instead of recreating variables saves memory.

3. Pay attention to the scope names that you use. If two different parts of your code use the same scope name with reuse enabled, TensorFlow will hand variables from the first part of the code to the second part, which may not be what you want. To avoid this, make sure to use unique scope names.

How can tf.variable_scope be used to improve code clarity?

Using tf.variable_scope allows for much better code clarity when working with TensorFlow. By giving each section of code its own variable scope, it becomes much easier to see what is happening and to debug if something goes wrong.

How can tf.variable_scope be used to improve code organization?

In TensorFlow 2.0, tf.variable_scope has been deprecated; use tf.name_scope instead (or tf.compat.v1.variable_scope for legacy code). tf.variable_scope allows you to create new variables and also gives you the ability to reuse existing ones, while caching the information about their compatibility. This can come in handy when you want to share a set of weights between two different models but don't want duplicate copies of the weights in memory. Another use case is creating separate training and testing graphs that share the same weights.
To use tf.variable_scope, simply wrap the code in which you want to share variables with a call to tf.variable_scope:

    with tf.variable_scope("foo"):
        with tf.variable_scope("bar"):
            v = tf.get_variable("v", [1])
            assert v.name == "foo/bar/v:0"

This will create a new variable called "v" with the given shape and add it to the collection of variables in scope "foo/bar". If there was already a variable with that name, it would be reused instead of creating a new one (provided the scope was entered with reuse enabled).

How can tf.variable_scope be used to improve code reuse?

TensorFlow 2.0 no longer supports tf.variable_scope directly. In its place, there is now a tf.name_scope context manager that can be used to organize names. When using tf.name_scope, ops and variables created with tf.Variable within its context are prefixed with the given name (followed by a slash). For example, in graph mode:

    with tf.name_scope("foo"):
        v = tf.Variable([1.0], name="var")
    assert v.name == "foo/var:0"

(Note that tf.name_scope prefixes op names and tf.Variable names; it does not affect names created with tf.get_variable.) This can be useful if you want to create multiple instances of a model and need to ensure that each instance has its own set of variables. By using a unique name prefix, you can easily distinguish between variables belonging to different instances of the same model.

How can tf.variable_scope be used to improve training time?

tf.variable_scope() can help reduce overhead by managing variables in a more organized way; in particular, sharing variables instead of recreating them avoids redundant allocation.

How can tf.variable_scope be used to improve model performance?

tf.variable_scope allows us to create variables and hold them in a collection. We can then use the collection to initialize or reuse variables in our model.
This is especially useful when we want to create multiple networks in one model, or when we want to share weights between networks.

To use tf.variable_scope, we first need to open a scope. We can do this by calling tf.variable_scope with a name argument:

    with tf.variable_scope("my_scope"):
        # do something here...

We can also specify whether we want to reuse variables in the scope:

    with tf.variable_scope("my_scope", reuse=True):
        # do something here...

What are some other tips for using tf.variable_scope?

When you're using tf.variable_scope, there are a few potential gotchas that can trip you up:

- If you create a new tf.variable_scope within an existing one, any new variables created will inherit the properties of the parent scope, including its name prefix. Pass reuse=True when re-entering a nested scope whose variables already exist.

- Variables created with tf.get_variable have a different namespace than those created with tf.Variable(). This can lead to confusion if you're not careful, so make sure to use the right function for each situation.

- If you want variables to be retrievable by name (rather than holding on to the tf.Variable objects yourself), you need to use tf.get_variable(). This can be helpful if you want to share weights between different parts of your model
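To make the name-qualification and reuse rules concrete without needing TensorFlow installed, here is a small self-contained Python sketch of a variable store. It is purely illustrative — these are not TensorFlow's internals, and all class and method names here are made up — but it mimics how variable_scope plus get_variable share variables by fully-qualified name:

```python
from contextlib import contextmanager

class VariableStore:
    """Toy model of variable_scope + get_variable name sharing."""

    def __init__(self):
        self._vars = {}      # maps "scope/name" -> value
        self._scopes = []    # stack of active scope names
        self._reuse = False

    @contextmanager
    def variable_scope(self, name, reuse=False):
        # Entering a scope pushes its name; reuse is inherited by inner scopes.
        self._scopes.append(name)
        old_reuse = self._reuse
        self._reuse = self._reuse or reuse
        try:
            yield
        finally:
            self._scopes.pop()
            self._reuse = old_reuse

    def get_variable(self, name, initial=0.0):
        full_name = "/".join(self._scopes + [name])
        if full_name in self._vars:
            if not self._reuse:
                raise ValueError(f"Variable {full_name} already exists")
            return full_name          # reuse the existing variable
        if self._reuse:
            raise ValueError(f"Variable {full_name} does not exist")
        self._vars[full_name] = initial
        return full_name              # create a new variable

store = VariableStore()
with store.variable_scope("foo"):
    with store.variable_scope("bar"):
        v = store.get_variable("v")
assert v == "foo/bar/v"

with store.variable_scope("foo", reuse=True):
    with store.variable_scope("bar"):
        v2 = store.get_variable("v")  # reused, not recreated
assert v2 == v and len(store._vars) == 1
```

Real TensorFlow 1.x behaves analogously: get_variable either registers a new variable under the scope-qualified name or, when reuse is enabled, returns the one already registered.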
https://reason.town/tf-variable_scope-tensorflow-2-0/
Node: Surface Hiding, Previous: Focuses Getstart, Up: Pictures

In [next figure], Circle c lies in front of Rectangle r. Since c is drawn and not filled, r is visible behind c.

    default_focus.set(1, 3, -5, 0, 3, 5, 10);
    Point p(0, -2, 5);
    Rectangle r(p, 3, 4, 90);
    r.draw();
    Point q(2, -2, 3);
    Circle c(q, 3, 90);
    c.draw();
    current_picture.output();

Fig. 64.

If instead, c is filled or filldrawn, only the parts of r that are not covered by c should be visible:

    r.draw();
    c.filldraw();

Fig. 65.

What parts of r are covered depends on the point of view, i.e., the position and direction of the Focus used for outputting the Picture:

    default_focus.set(8, 0, -5, 5, 3, 5, 10);

Fig. 66.

Determining what objects cover other objects in a program for 3D graphics is called surface hiding, and is performed by a hidden surface algorithm. 3DLDF currently has a very primitive hidden surface algorithm that only works for the most simple cases. The hidden surface algorithm used in 3DLDF is a painter's algorithm, which means that the objects that are furthest away from the Focus are drawn first, followed by the objects that are closer, which may thereby cover them.

In order to make this possible, the Shapes on a Picture must be sorted before they are output. They are sorted according to the z-values in the projective_coordinates of the Points belonging to the Shape. This may seem strange, since the projection is two-dimensional and only the x and y-values from projective_coordinates are written to out_stream. However, the perspective transformation also produces a z-coordinate, which indicates the distance of the Points from the Focus in the z-dimension. The problem is that all Shapes, except Points themselves, consist of multiple Points, which may have different z-coordinates. 3DLDF currently does not yet have a satisfactory way of dealing with this situation.
In order to try to cope with it, the user can specify four different ways of sorting the Shapes: they can be sorted according to the maximum z-coordinate, the minimum z-coordinate, the mean of the maximum and minimum z-coordinates ((max + min) / 2), or not sorted. In the last case, the Shapes are output in the order of the drawing and filling commands in the user code. The z-coordinates referred to are those in projective_coordinates, and will have been calculated for a particular Focus.

The function Picture::output() takes a const unsigned short sort_value argument that specifies which style of sorting should be used. The namespace Sorting contains the following constants which should be used for sort_value: MAX_Z, MIN_Z, MEAN_Z, and NO_SORT. The default is MAX_Z.

3DLDF's primitive hidden surface algorithm cannot work for objects that intersect. The following examples demonstrate why not:

    using namespace Sorting;
    using namespace Colors;
    using namespace Projections;
    default_focus.set(5, 3, -10, 3, 1, 1, 10, 180);
    Rectangle r0(origin, 3, 4, 45);
    Rectangle r1(origin, 2, 6, -45);
    r0.draw();
    r1.draw();
    current_picture.output(default_focus, PERSP, 1, MAX_Z);

    r0.show("r0:");
    -| r0: fill_draw_value == 0
       (-1.5, -1.41421, -1.41421) -- (1.5, -1.41421, -1.41421)
       -- (1.5, 1.41421, 1.41421) -- (-1.5, 1.41421, 1.41421) -- cycle;

    r0.show("r0:", 'p');
    -| r0: fill_draw_value == 0
       Perspective coordinates.
       (-5.05646, -4.59333, -0.040577) -- (-2.10249, -4.86501, -0.102123)
       -- (-1.18226, -1.33752, 0.156559) -- (-3.51276, -1.2796, 0.193084) -- cycle;

    r1.show("r1:");
    -| r1: fill_draw_value == 0
       (-1, 2.12132, -2.12132) -- (1, 2.12132, -2.12132)
       -- (1, -2.12132, 2.12132) -- (-1, -2.12132, 2.12132) -- cycle;

    r1.show("r1:", 'p');
    -| r1: fill_draw_value == 0
       Perspective coordinates.
       (-5.09222, -0.995681, -0.133156) -- (-2.98342, -1.03775, -0.181037)
       -- (-1.39791, -4.05125, 0.208945) -- (-2.87319, -3.93975, 0.230717) -- cycle;

Fig. 67.
In [the previous figure], the Rectangles r0 and r1 intersect along the x-axis. The z-values of the world_coordinates of r0 are -1.41421 and 1.41421 (two Points each), while those of r1 are 2.12132 and -2.12132. So r1 has two Points with z-coordinates greater than the z-coordinate of any Point on r0, and two Points with z-coordinates less than the z-coordinate of any Point on r0. The Points on r0 and r1 all have different z-values in their projective_coordinates, but r1 still has a Point with a z-coordinate greater than that of any of the Points on r0, and one with a z-coordinate less than that of any of the Points on r0.

In [next figure], the Shapes on current_picture are sorted according to the maximum z-values of the projective_coordinates of the Points belonging to the Shapes. r1 is filled and drawn first, because it has the Point with the positive z-coordinate of greatest magnitude. When subsequently r0 is drawn, it covers part of the top of r1, which lies in front of r0, and should be visible:

    current_picture.output(default_focus, PERSP, 1, MAX_Z);

Fig. 68.

In [next figure], the Shapes on current_picture are sorted according to the minimum z-values of the projective_coordinates of the Points belonging to the Shapes. r1 is drawn and filled last, because it has the Point with the negative z-coordinate of greatest magnitude. It thereby covers the bottom part of r0, which lies in front of r1, and should be visible.

    current_picture.output(default_focus, PERSP, 1, MIN_Z);

Fig. 69.

Neither sorting by the mean z-value in the projective_coordinates, nor suppressing sorting, does any good. In each case, one Rectangle is always drawn and filled last, covering parts of the other that lie in front of it. 3DLDF's hidden surface algorithm will fail wherever objects intersect, not just where one extends past the other in both the positive and negative z-directions.
    Rectangle r(origin, 3, 4, 45);
    Circle c(origin, 2, -45);
    r.filldraw();
    c.filldraw(black, gray);
    current_picture.output(default_focus, PERSP, 1, NO_SORT);

Fig. 70.

Even where objects don't intersect, their projections may. In order to handle these cases properly, it is necessary to break up the Shapes on a Picture into smaller Shapes, until there are none that intersect or whose projections intersect. Then, any of the three methods of sorting described above can be used to sort the Shapes, and they can be output. Before this can be done, 3DLDF must be able to find the intersections of all of the different kinds of Shapes. If 3DLDF converted solids to polyhedra and curves to sequences of line segments, this would reduce to the problem of finding the intersections of lines and planes; however, it does not yet do this. Even if it did, a fully functional hidden surface algorithm must compare each Shape on a Picture with every other Shape. Therefore, for n Shapes, there will be n!/(2!(n - 2)!) = n(n - 1)/2 (possibly time-consuming) pairwise comparisons.

Fig. 71.

Clearly, such a hidden surface algorithm would considerably increase run-time. Currently, all of the Shapes on a Picture are output, as long as they lie completely within the boundaries passed as arguments to Picture::output(). See Pictures; Outputting. It would be more efficient to suppress output for Shapes that are completely covered by other objects. This also requires comparisons, and could be implemented together with a fully functional hidden surface algorithm. Shadows, reflections, highlights and shading are all effects requiring comparing each Shape with every other Shape, and could greatly increase run-time.
http://www.gnu.org/software/3dldf/manual/user_ref/3DLDF/Surface-Hiding.html
Intro: Interfacing Brushless DC Motor (BLDC) With Arduino

This is a tutorial about how to interface and run a brushless DC motor using an Arduino. If you have any questions or comments, please reply in the comments or mail to rautmithil[at]gmail[dot]com. You can also get in touch with me @mithilraut on twitter. To know more about me:

Step 1: List of Components

- Arduino UNO
- BLDC outrunner motor (any other outrunner motor will work fine)
- Electronic Speed Controller (choose according to the current rating of the motor)
- LiPo battery (to power the motor)
- Male-male jumper cables * 3
- USB 2.0 cable type A/B (to upload the program and power the Arduino)

Note: Make sure you check the connectors of the battery, ESC and motor. In this case we have 3.5mm male bullet connectors on the motor, so I soldered 3.5mm female bullet connectors on the output of the ESC. The battery had a 4.0mm male/female connector, hence I soldered the appropriate female/male connectors on the input side of the ESC.

Step 2: Connections

Connect the motor to the output of the ESC. Here, the polarity doesn't matter. If you switch any 2 of the 3 wires, the motor will rotate in the opposite direction. Connect the '+' and '-' of the battery to the red (+) and black (-) wires of the ESC respectively. From the 3-pin servo cable coming out of the ESC, connect the brown cable to the 'GND' pin on the Arduino. Connect the yellow cable to any digital pin. In our case it's digital pin 12.

Step 3: Programming the Arduino UNO

If you are new to Arduino then you can download, install and set up the Arduino IDE from here. Connect the Arduino to the PC. Open the Arduino IDE and write this code. Under 'Tools' select:

Board: Arduino/Genuino UNO
Port: COM15 (select the appropriate COM port; to find out the COM port, open Device Manager and look for Arduino UNO under 'Ports')

Click the Upload button in the upper left corner.

    #include <Servo.h>

    Servo esc_signal;

    void setup()
    {
      esc_signal.attach(12); // Specify here the pin number to which the signal pin of the ESC is connected.
      esc_signal.write(30);  // ESC arm command. ESCs won't start unless the input speed is low during initialization.
      delay(3000);           // ESC initialization delay.
    }

    void loop()
    {
      esc_signal.write(55);  // Vary this between 40-130 to change the speed of the motor. Higher value, higher speed.
      delay(15);
    }

Step 4: Note

The correct way to run the motors is to:

1. Connect the battery to the ESC to power up the ESC.
2. Power the Arduino.

If you do it the other way round, the Arduino will run the arm sequence and the ESC will miss those commands, since it isn't powered up. In this case, press the reset button on the Arduino.

62 Discussions

Question (2 months ago): Hello, how can I control the brushless motor and ESC via my iPhone's Bluetooth? I already have an HM-10 Bluetooth module, but I need 1) the sketch for this project and 2) an iOS app. Can anyone please help me?

Question (2 months ago): Sir, I want to control the speed of a BLDC motor in different stages. Please help me, sir.

Reply (2 months ago): Follow the tutorial. Program the Arduino to control the motor.

Reply (2 months ago): Where, sir?

Reply (2 months ago): Refer to the program in step 3 of this instructable. That is where you would program it. The comments in the program will help you see what each command is doing.

Question (5 months ago): Hi! Great tutorial! What amp rating is recommended? Thanks.

Answer (4 months ago): Choose the amp rating according to the max current rating of the motor.

Comment (2 years ago): Nice work :) With the same procedure, can I control 4 BLDCs? So can I build an Arduino-based quadrocopter? Someone said to me the Arduino is not "fast enough" for it.

Reply (2 years ago): You can control 4 BLDC motors. But the clock rate of the Arduino is low, so even if you make one, it won't be stable. I would recommend you go for a different controller with a faster clock rate.

Reply (4 months ago): Which controller?

Reply (4 months ago): For the motor, select according to the max power rating of the motor.

Reply (2 years ago): Thank you for your fast response :)

Question (7 months ago): Hi, thanks for this instruction.
How can I control the direction of the motor? Thanks, Moshe

Answer (7 months ago): I am not aware of any systems to do that right away, but you could look online for ways to control the direction of a BLDC motor.

Answer (7 months ago): Electronically you cannot. Changing the direction of BLDC motors involves switching the connections on two of the three wires: so either yellow-black, red-black or red-yellow.

Reply (7 months ago): OK... So how is it possible? Which other component should I add in between to also control the direction? With a regular DC motor, I did it with an L298N and an Arduino. Is there something similar for BLDC? Thanks, Moshe

Question (7 months ago): Hi, I want to run the BLDC motor used in a Segway, using an Arduino Mega 2560 and swagbridge, with 36 V input from a power supply. Whenever we connect the whole circuit to the motor, the motor does not start. Please advise.

Answer (7 months ago): What is the ESC used with the motor? I can't find out what swagbridge is.

Question (9 months ago): When I paste the code into my program (Arduino), the program says: "Error compiling for board Arduino/Genuino Uno". What's wrong? Please answer fast.

Answer (9 months ago): That is a generic error response. If you can paste all the details of the error or make a video showing this error, it would be easier to figure out why it's not compiling.
https://www.instructables.com/id/Interfacing-Brushless-DC-Motor-BLDC-With-Arduino/
Hi everyone,

I am using an S7G2-SK board with e2 studio 6.2.0 and SSP 1.3.0, and I have a WINC1500 Xplained Pro. I would like to initialize the Wi-Fi module. I downloaded and installed the "ATWINC1500 WIFI MODULE DRIVER" found on the Renesas site. I created a new Synergy project and followed the steps in section 3 of the "ATWINC15X0 Wi-Fi Add-on Component User's Manual" found on the same page. I ended up with the following configuration:

I used PMOD B of the S7G2 to connect the ATWINC1500:

- MCU P411 connected to the ATWINC1500 pin SPI_MOSI
- MCU P410 connected to the ATWINC1500 pin SPI_MISO
- MCU P412 connected to the ATWINC1500 pin SPI_SCK
- MCU P413 connected to the ATWINC1500 pin SPI_SSN
- MCU GND connected to the ATWINC1500 pin GND
- MCU PMOD B VCC connected to the ATWINC1500 pin VBATT 3.3V
- MCU P400 connected to the ATWINC1500 pin IRQ_EN
- MCU P603 connected to the ATWINC1500 pin RESET_N
- MCU P604 connected to the ATWINC1500 pin CHIP_EN

I started with the basics, so this is just the initialization of the module. Here is my code:

    #include "wifi_thread.h"

    /* Wifi Thread entry function */
    void wifi_thread_entry(void)
    {
        ssp_err_t err;

        err = g_sf_wifi0.p_api->open(g_sf_wifi0.p_ctrl, g_sf_wifi0.p_cfg);
        if (SSP_SUCCESS != err)
        {
            tx_thread_sleep(10);
        }

        while (1)
        {
            tx_thread_sleep(1);
        }
    }

I set a breakpoint to catch the case where err is different from SSP_SUCCESS. Unfortunately, err is SSP_ERR_WIFI_CONFIG_FAILED, which means "Configuration of Wi-Fi driver failed".

Is there anything wrong with the setup/code above? Where can I find a fully working project with the ATWINC1500 + S7G2?
Kind regards

Hi Alexis,

Be very careful when mapping a different device to the Pmod connection of the SK-S7G2, as the designer did not use the 'standard' Digilent numbering convention.....see: LINK

In addition, you should connect both of the GND pins (2 and 19) on the WINC1500-XPRO extension header to GND on the PMOD B header. Also be aware that the WINC1500-XPRO extension header has a WAKE (active-high) host wake control signal to the wireless module on pin 10. Initially you can tie this pin HIGH to enable the module; later you may drive it from an I/O pin to allow the module to go to sleep when it isn't used.

Please take a look at the following post:

Cheers, Lou

In reply to Lou Leen:
http://renesasrulz.com/synergy/f/synergy---forum/10630/sk-s7g2-wi-fi-framework-with-winc1500-xplained-pro/36235
Blender 3D: Noob to Pro/Advanced Tutorials/Advanced Game Engine/Game Creating Techniques(GUI)/Creating Pop-Up Menus

In this tutorial, we will be making a Title Screen and a Main Menu Screen to come from that Title Screen. While most people hardly pay attention to these areas in a real video game, they do make the game seem much better to play, and they give the player a certain choice of what they can do before they play the game. Before beginning this, make sure you familiarize yourself with several logic sensors and actuators, such as "Scene" and "Property", as they will be heavily relied upon. It is also best if you have several pictures ready to use for your game or are ready for additional functions with certain programs (a word-processing program and an image-editing program that has a transparency setting can help move your game along).

Making An Easy Title Menu

The Title Screen will be very simple -- it will be the "Press Enter to Continue" kind. This will lead to a more interactive kind of Main Menu, in which you can use either key commands or the mouse pointer. This tutorial will not be the most visually pleasing, but then again, you can always add whatever graphics you would like. Again, it would be wise to have handy a word-processing program and a program like Microsoft Paint. "Print Screen" will be a useful function to use while following this tutorial -- the corresponding button is generally located to the right of the F12 key in the row of function keys on your keyboard. If you have never used different scenes before, you're about to get a crash course. Also, Blender hotkeys and actions will not be explained with (A-key) or (numpad-6); it is assumed you have already learned at least how to perform basic Blender functions.

Making the Menu

First, start up Blender and delete everything in the scene.
Then, move to top orthographic view (press numpad 7, or if you don't have a numpad, go to the hidden top menu and in the System & OpenGL section select "emulate numpad"). The first thing you should do is add a new scene. In the Information view panel at the top, click the SCE: drop-down menu and click on ADD NEW. Label this new scene something like "Main Menu". Now go back to your original scene and name it something like "Title Menu". These names will be important when it comes to changing the scenes around. So, on your "Title Menu" scene, begin by adding a Camera. Move this Camera a little in the Z direction, just to get it above the grid. Now, set the camera view to Orthographic. Now switch to Camera View. Add a plane that is scaled to cover the entire surface of the Camera's view-bounds. Move that plane just a little bit in the -Z direction. Then, add another plane, but scale this plane to be a thin rectangle about two-thirds of the way down the Y-length of the Camera, centered in the X direction. (Image shown with materials for visual purposes; DON'T ADD MATERIALS.) Now this next step isn't necessary, but if you want a good-looking game you will probably need this step. Let's add some UV textures. Open a good photo/image-editing program like Photoshop, GIMP, or Microsoft Photo Editor; something that preferably supports transparency. Type some text appropriate for your game such as, "Press any key to continue." What I do is add lots of extra space and add additional text that I will use in the main menu too, such as "New Game", "Continue", "Story Mode", etc., so it is all in one image for multiple UV textures. If your editor supports it, set the background color to transparent. If not, set it to black so it will register as 0 on the alpha channel in Blender. Save the image (GIF or PNG are reasonable choices for use with transparency). Now, in Blender, split the 3D screen into 2 screens, and change one screen over to UV face select.
Select your small rectangle, switch to Edit Mode and Unwrap the face in the 3D window, then open your saved image in the UV face window. Scale your face in the UV window so it holds only the "Press Any Key to Continue" graphic. If you wish, also add a starting image on the larger rectangle.

Setting Up The Actions

Now this step is necessary. Select the large background rectangle, and in the Buttons Window at the bottom switch to the Logic tab. This will be very simple. Add one Sensor, one Controller, and one Actuator. Make the sensor a Keyboard Sensor, depress the "True Pulse" button, and in the key area depress the "All Keys" button. Make the Controller an "AND" controller. Make the Actuator a Scene Actuator; select the drop-down box and make it "Set Scene" with "SCE:Main Menu". Before you test the game, make sure to save and add a Camera to the "Main Menu" scene. Otherwise the game WILL crash. If you want the option of returning to this scene, instead of one Actuator make two. Make both of them Scene actuators, but make one "Suspend Scene" with "SCE:Title Menu" and make the other "Add Overlay Scene" with "SCE:Main Menu". Also, that means in the Main Menu scene later you'll have to add some key to hit or some action to "Remove Scene" "SCE:Main Menu" and "Resume Scene" "SCE:Title Menu".

Author's Starting Menu Example

The following contains actual in-progress work by the original author of this page. The work and ideas are heavily copyrighted and it is highly encouraged (begged) that you do not rip this content off -- the author has submitted this work generously to help less advanced users more efficiently. The following is my work on a game I am making, and I hope it helps you too. This will get you to the next menu, which will be covered in the next section.

A Main Menu

This menu will be more difficult, using a Python mouse function to show the mouse pointer when you play your game, and a maze of entangled sensors and actuators.
Here is the first thing you need: a Python script to make the mouse pointer show. (This code is attributed to the author of the chapter "Blender Game Engine/A Simple Mouse Pointer" and is not my own code.)

import Rasterizer as r
r.showMouse(1)

Put this in the text editor, and save the text as something simple like "Mouse". Now, on your Main Menu scene, select the camera and add 1 sensor, 2 controllers, 1 actuator, and 1 property. Name the property "Switch", make it "Int" type, and set it to 0. Set up your logic as follows. A change will need to be made in order to make yours work: the Python controller should be Script:Mouse, not Script:showpointer. If you can't see the connections:

- Connect the Sensor to the 2 controllers
- Connect the AND controller to the actuator
- Leave the Python controller open-ended

If you play the game, you will see the mouse then! Now, back to the menu. There are multiple kinds of menus you can make when it comes to menus popping up. The one I believe works best is a 3-layered button. To explain what this means, I'll run you through the steps of making one. Let's start with the layers themselves. (For this to work best, make the background of any images you use have transparency, so you can make invisible-yet-clickable faces.) Create a rectangle facing the camera where you want your first menu option to be. Copy the object until you have 3 rectangles, and make each one very slightly in front of the last. Now, select the back rectangle, and on this one apply the UV face of the option's words. The second rectangle gets the UV face for a graphic outline around the words, shown when the option is selected in the game. It's not necessary, but it helps the player with selecting options and it looks nice. The top rectangle will be the selectable one -- apply its UV face only over a completely transparent area. Now, for the logic.

Text-Rectangle:
- No logic needed

Outline-Rectangle:
- Give it a Property named "Selected", Int type, starting at 0.
- Make a Property Sensor, True Pulse, "Selected" property Equal to 1, connected to an AND controller, connected to a Visibility Actuator set to "Visible"
- Make a Property Sensor, True Pulse, "Selected" property Equal to 0, connected to an AND controller, connected to a Visibility Actuator set to "Invisible"
- Make a Property Actuator: "Selected" property Assigned the amount 1
- Make a Property Actuator: "Selected" property Assigned the amount 0
- Create an AND Controller and connect both Mouse Sensors. Make this AND Controller connect to the Outline-Rectangle's Property Actuator "Selected" Assign=1.

Transparent-Rectangle:
- 2 Mouse sensors: "Mouse Over" and "Left Button". Set them to "True Pulse".

Now here is the trickier part to explain. To make things a little simpler, name that last Outline-Rectangle AND controller connected to the 2 mouse sensors. Select all 3 layers, and duplicate them 3 times. Make each one (in camera view) below the last, making sure to keep them parallel on the Z axis. Now, to make only one selected at a time, take the named controller of the first, and connect it to the Outline-Rectangle's Property Actuator "Selected" Assign=0 of the second and third buttons. Do the same from the second button to the first and third, and the same from the third to the first and second. It should look like the picture: This logic arrangement should yield (when the game is played) options in which if you click one it will light up, and if you click another, that one will go down and the other will light up. Now this looks nice aesthetically, but to make it useful all you have to do now is add some Scene Actuators. Add 2 more actuators to each of the Transparent planes. Use the same method as you did with the Properties, but instead of "Property Assign 1/0" make 1 AND controller, 1 Scene Overlay and 1 Scene Remove for each transparent button.
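The exclusive-selection wiring above is easier to check in code than in a screenshot of logic bricks. Below is a minimal plain-Python sketch of the behavior those bricks produce; the MenuButton class, the click helper, and the button names are illustrative stand-ins only and do not use the Blender Game Engine API:

```python
# Plain-Python model of the "only one selected at a time" logic above.
class MenuButton:
    def __init__(self, name):
        self.name = name
        self.selected = 0           # the "Selected" Int property
        self.outline_visible = False

    def update_visibility(self):
        # Mirrors the two Property sensors driving the Visibility actuators.
        self.outline_visible = (self.selected == 1)

def click(buttons, index):
    # Mouse-over + left-click on one button assigns 1 to its "Selected"
    # property and 0 to every other button's, like the cross-wired
    # Property actuators described above.
    for i, button in enumerate(buttons):
        button.selected = 1 if i == index else 0
        button.update_visibility()

buttons = [MenuButton("New Game"), MenuButton("Continue"), MenuButton("Options")]
click(buttons, 1)   # only "Continue" lights up
click(buttons, 2)   # "Continue" goes down, "Options" lights up
```

If clicking one button lights it up and dims the others when you trace through this, the logic-brick version is wired the same way.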
Now make 3 additional Scenes, and either make an entirely new menu for each, or to be easier just make colored rectangles on the scene, different colors for each scene -- either way, make sure that, if the scenes overlay, the additional scene won't cover up the buttons of the Main Menu Scene. So, now connect the 2 Mouse Sensors to the corresponding AND Controller, and connect that to the Corresponding "Overlay Scene" Actuator and 2 Opposing "Remove Scene" Actuators. Do this with the other 2 Transparent Rectangles. Make sure your new scenes also have cameras and that they are scaled properly. You can use this technique to make Menus and Sub-Menus Galore! It's fairly simple to make multiple sublevels of this and get back to the original scene with this method. I hope this helped you make your menus work well.
https://en.m.wikibooks.org/wiki/Blender_3D:_Noob_to_Pro/Advanced_Tutorials/Advanced_Game_Engine/Game_Creating_Techniques(GUI)/Creating_Pop-Up_Menus
One). I'd argue that $54 is the ridiculous price, not $16.49. Well, when Scott Hanselman posted on Twitter last night that the book was going for so cheap I had to try and order it, and to my surprise Amazon are shipping to South Africa again, so I'm stoked! Can't wait for it to arrive! Can't wait to grab the offer :-) Thanks to the Euro/Dollar exchange rate, it's even cheaper for us Europeans... Thanks for this tip, Scott! Damn... I got this book a few weeks ago. Great book! But it would have been better had the price been lower ;) Hi, the ASP.NET 3.5 book is a very good book, but I think ASP.NET 3.5 Unleashed is the king :-) Thank you. Hey Scott, are there any VB books that you would recommend (with regards to ASP.NET)? This looks like it has everything but the kitchen sink. Nice. I just ordered a copy. Thanks for the info! It's at #41 as I write this. Too good a price for me to pass up. Thanks for the heads-up and the shout out, Scott. It's now at #14!!! Bought this this morning for £8, thanks Scott, you are king! ORDERED. Scott's Book Club is almost outselling Oprah's Book Club. Developers unite! Thanks for the heads up. Not quite the same sale at Amazon.ca ($37.79 CAD) but good enough! :) Thanks Scott... I have ordered one! This book is now up to #8. WOW. Thanks for letting us know, Scott. Ordered mine today; the reviews are great and it should be a keeper for all .NET developers! Now at #8. Incredible! Thanks for the info, Scott! Read "Pro LINQ: Language Integrated Query in C# 2008". Great book. One of the best explanations of lambdas. Book ordered. Can't wait to read it.
I agree with Max: I have both books, and the "Unleashed" Walther book is perhaps three times as valuable, because it has perhaps that much more content that isn't just a restatement of the documentation. Still, $17 is a can't-miss price. The price just went up from $16 to $27 while I was looking at it! Maybe they'll discount it again. Good book. Looks like the price just rose. I got it before it went to $27.49, and even that is still a great deal. I had already added it to my cart, and by the time I got to checkout (shopped for something else), they'd changed the price. Luckily for me, a co-worker was already checking out and was able to add a second one without his price changing. Don't forget "ASP.NET 3.5 For Dummies", a really well written book on the same subject that is very "entertaining". I had a chance to meet the author at the MVP Summit and he is a great guy. Promotion over already? Dang, it's back up to $27.49. It seems that they've stopped selling the book for $16.00... It is now $27... Dang it! $27. I should get it before they raise the price again. It looks like the price just went up to $27 (still 50% off though). The book is currently ranked #5 for *all* items on Amazon. I suspect they raised the price since it is burning up the charts and selling a bit better than they originally planned. :-) Here are a few other .NET books still on the list: C# 2008: VB 2008: Price is up, but it's at #5 now. Someone please tell Amazon to make it USD 16.49 again, please :) I was about to order, and prices shot up, like gas!! Actually the price has gone up by $7.49. Details: Original price: $16 + $4 (shipping) = $20. Current price: $27.49 (free shipping) = $27.49. It is still a pretty good deal for the knowledge contained in it. Thanks for the info!
I can't wait for Scott's book on Silverlight :-) ..Ben. Now they are selling "Beginning ASP.NET 3.5: In C# and VB.NET" for only $17.99. That's a good deal too. I bought the book just because it was $16 and full of information. However, in general I do not buy Wrox series books as I don't find them useful at all. I couldn't beat this price though. Scott, how do you compare this book, "Professional ASP.NET 3.5: In C# and VB", with "ASP.NET 3.5 Unleashed" by Stephen Walther? Which one is the best? Any comments? Thanks. Can't I use LINQ with Oracle? I have been playing around with LINQ, thinking of adopting it for a new project with an Oracle database. I agree with Billkamm. Wrox series books are not that great. Most Wrox books contain unnecessary information, like how to navigate Visual Studio. Who has time to read 1600 pages? I think they are getting good, but not the best yet. However, you cannot beat this price. Basically, it's not programmer to programmer. It is cr*p to programmer. It looks like the price went up to $27.50, which is still better than $54.99! Does it have a chapter on how to use MasterPages without name/ID mangling in the generated HTML? It still grinds my gears that I cannot use MasterPages... :( What are the other books on ASP.NET 3.5 that you would recommend (other than the Wrox series)? It's a whopping £85 on Amazon UK. If that's not a ridiculous price I don't know what is... The price is something that made me order this book. Hopefully it's decent. First time I've ever bought a book based solely on price. Although these epic 1600-page programming books have always seemed silly to me. I mean, who really sits down and reads one of these all the way through? "Wrox Sucks" - this was how Wrox was before they got acquired by John Wiley.
Present day Wrox books are a lot better. Also, I would recommend buying a Wrox book which has fewer authors, like 2-3 max... not some book which is authored by a football team ;). Price is $27.49 now. The Amazon UK price was just £14.00. So really tempted, but at 1600 pages I find this rather daunting. As a RAILS fan, I have only just become interested in ASP.NET MVC, which does not seem to be covered :( Waiting for a great Silverlight 2.0 book (on ASP, MVC and SharePoint). AJAX is RIP AFAIK. Hi, there is no way to contact you or to add comments on old posts, so I am adding one here. This is for weblogs.asp.net/.../tip-trick-url-rewriting-with-asp-net.aspx. Please move it there. I have run into issues because of the wrong C# code that was posted in the comments of that page, and I wanted to contribute a corrected version so others will not go through this problem again. But it's difficult to contribute on this blog or contact you. Comment for the other blog post: Do not use the C# code someone posted above or you will run into problems; use this fixed version:

using System.Web;
using System.Web.UI;

public class RewriteFormHtmlTextWriter : HtmlTextWriter
{
    public RewriteFormHtmlTextWriter(System.IO.TextWriter writer)
        : base(writer)
    {
        base.InnerWriter = writer;
    }

    public override void WriteAttribute(string name, string value, bool fEncode)
    {
        // If the attribute we are writing is the "action" attribute, and we are not on a sub-control,
        // then replace the value to write with the raw URL of the request - which ensures that we'll
        // preserve the PathInfo value on postback scenarios
        if (name == "action")
        {
            HttpContext Context = HttpContext.Current;
            if (Context.Items["ActionAlreadyWritten"] == null)
            {
                // Because we are using the UrlRewriting.net HttpModule, we will use the
                // Request.RawUrl property within ASP.NET to retrieve the original URL
                // before it was re-written. You'll want to change the line of code below
                // if you use a different URL rewriting implementation.
                value = Context.Request.RawUrl;

                // Indicate that we've already rewritten the <form>'s action attribute to prevent
                // us from rewriting a sub-control under the <form> control
                Context.Items["ActionAlreadyWritten"] = true;
            }
        }
        base.WriteAttribute(name, value, fEncode);
    }
}

There's actually a way to do URL rewriting without having to modify the action attribute or anything - it involves multiple RewritePath calls. I'll try to post it on nathanaeljones.com as soon as I can. The book price is back at $27, btw. Just wanted to point out that some images are not showing up (the ones with URLs pointing to). The ones in the sidebar are one example, but what's worse is the ones in the tutorials are missing, making it much harder to understand them (for example, the articles about MVC). Is it possible to make those images available? Other than that, great blog, and thanks for all the expertise shared here. PS: Posted this here and not on the articles I mentioned because comments were disabled on those articles. I'm in the middle of reading this book and must say - unfortunately a lot of annoying mistakes/typos. At least too many for a book of such caliber. Another thing: I have to back "asrfarinha" (post above) - images from excellent past tutorials (e.g. the LINQ to SQL series) have disappeared, effectively making those tutorials useless. Can this please be restored? Evgeny. $16 USD? $27 USD?? Where? It's $35 USD now! At that price I go to Bookpool.com because I have never, ever, ever had a good experience with Amazon. More like Amadont for me... $16 would have been worth the trouble. Is there any way I could get the hardcopy for free?
I think I overspent on books (Amazon should be happy!) lately. Any good samaritan out there? Mine just arrived today. Save your money. 1500 pages of bad. I think this is just a case of Scott helping Scott with some shameless promoting. Well, I know why it's so cheap now. Great book so far, BUT the paper is dirt cheap. Feels like those copies of cheap textbooks you see for sale every once in a while, where they tell you it's made with cheap paper. Seems like they would have told you. Rick, your comment is in very poor taste. If you don't like the book, that's fine, but don't bad-mouth Scott Guthrie. Do you even know what it takes to be that man? Keep your stupid comments to yourself.
http://weblogs.asp.net/scottgu/archive/2008/05/06/professional-asp-net-3-5-book-only-16-on-amazon-for-a-short-time.aspx
Kill all Aliens! Hello. Just added my space shooter game if any of you are interested in testing it. It's far from completed, but it has a boss now. Things in need of improvement, and bugs:

- Boss only moves when being hit (I wanted it to move all the time)
- Enemy lasers do not always need to hit the ship (if they are really close it is considered a hit)
- Distance to boss needs to reset when the boss is killed, so the distance to the next boss is displayed
- PowerUps!!!
- A lot more, but these are the things I'm working on now

Please leave feedback for improvements. I'm only doing this because it's fun and I want to learn.

Hello again! I have created a gist and updated the game. Now there are power-ups that, when the player collects them, make the ship shoot green twin lasers. Still haven't fixed it so they shoot for a period of time, so for now they only fire once. If anybody has any idea how to make them better, please let me know. Here is the url: link text

Hello again! Game update:

- Power-up works fine now (they do not work on bosses)
- Added second boss
- Fixed some issues with enemy lasers and boss lasers
- Added score (just for fun)

Link to game:

ToDo:

- Add new enemies
- Add final boss or more bosses
- More power-ups
- Start up screen / end screen

Hi everybody.
I have a problem with a function and I'm wondering if anybody can help me with it. I have created "Bombs" that explode if you shoot them; however, my issue is that I would like the ship to die if it touches the explosion, and I thought that it should look something like this for it to work:

def bomb_gone(self, bomb):
    explode = SpriteNode('shp:Explosion00', parent=self)
    explode.scale = 0.5
    explode.position = bomb.position
    if self.ship.position == explode.frame:
        exit()
    explode.run_action(A.move_by(10, 10, 1.6, TIMING_EASE_OUT))
    explode.run_action(A.sequence(A.wait(1), A.remove()))
    bomb.remove_from_parent()

As you can see, if the ship and the explosion occupy the same frame the game is over, but it does not work. It works if the "bomb" has not exploded and I fly right into it, but not if it explodes. Any suggestions?

Regarding if self.ship.position == explode.frame:
- Position is a point
- Frame is a rect
- I would have written: if self.ship.position in explode.frame:

@ccc thanks for the help, but I still don't get it to work; maybe I have forgotten something else. Anyhow, thank you for your help, I will look through it all again.
@JonB yes, I want the ship to "die" if it touches the explosion, gonna try your solution when I get home, thank you
https://forum.omz-software.com/topic/4304/kill-all-aliens
Intro: Epic Basement Renovation!

A warning to the weak willed! A basement renovation can be trying to your patience, the patience of your co-dwellers, and that of your credit card! That said, it's also a whole lot of fun! After all, who wouldn't want to smash stuff with a giant hammer, fill a room with concrete dust, or paste huge sheets of styrofoam to the walls? Yeah, I couldn't think of anyone either.

Step 1: Planning

OK, put the sledge and saw back down, you won't need them for a while. Instead, grab a tape measure, some graph paper and a pencil. It's time to make some plans. First, draw a scale drawing of the space you're planning to renovate. Include the outside walls, interior supporting walls, existing doors and windows, existing ductwork, plumbing fixtures, and any other immovable objects in the room. You likely won't be touching these (and I won't tell you how!) so you'll have to plan around them. Next you have to consider what the new space will be used for. Are you going to put in a bathroom? An extra bedroom? Perhaps a workshop or office... Take your time to really think about what you want to build down there. After all, it will be more or less permanent, so you want it to be a useful space for a long time. You should also resist the urge to carve up a basement space into a lot of small rooms - it will end up feeling like a dungeon. Also remember that each type of room will have a few requirements stipulated by your local building code that you must follow. For instance, a bedroom will probably need a large window for egress (emergency exit) if there is no secondary exit in the basement. Take these requirements into consideration as you make your plans so that you don't get stuck later on (or worse yet, receive a failing grade from the building inspector!) In this instructable I will be using my own renovation as an example. I started with an 11x22 foot space that was divided into two rooms, a workshop and an empty room.
Well, empty aside from a rather quaint toilet stall! Step 2: Planning - Part 2 It's important to know what you're getting into before you start a renovation, both in terms of money and time spent. It's also very important that you determine beforehand whether you are capable of doing the work - if not, you should hire a contractor or at least get some help from someone who knows what they're doing. If you encounter any serious structural issues (say, a cracked foundation), you must get them fixed by a pro before continuing with the renovation. MONEY The renovation will probably cost you more than you planned for. Chances are, you'll need a few more 2x4s, a few more pieces of drywall, a few more tubes of glue and a few more boxes of screws. On the plus side, you only have to buy what you need, and no more. I'll go over estimating costs in detail in each of the sections, but if you're planning a complete renovation like I did, be prepared to spend thousands. Also don't forget to factor in the cost of tools. If you've got a fully stocked workshop then this cost will probably be negligible, but if you've only got a few screwdrivers, a hammer, and a power drill, you're going to have to get a whole lot more. Fortunately, now is a good time to go out and buy all those fun power tools you've always wanted! In most cases it will be cheaper to simply buy the tool than it would be to rent one, especially if your renovation spans a few months. Finally, I highly discourage you from starting a renovation unless you can afford to finish it. Don't buy anything with a credit card unless you've got the funds in your bank account to pay the bill that very day. One of the worst situations you can get yourself into is a half-finished renovation and nothing in the bank. TIME If you're like me, you have very little spare time. 
Between work and helping raise an infant daughter, I'm lucky if I can get an hour of time for myself these days - and usually I have to be super quiet because the little munchkin is napping! With that in mind, you must be prepared for two things. First, that the space you're working on will not be fully usable for several months. And second, that you will not have time for much else during the time you're renovating. Sure, you can work in stages, but that space won't be functional until the floors are installed, paint is on the wall, and outlet plates are screwed on - and all that stuff comes at the very end. Oh, one other thing - it's important that you prepare your co-dwellers (especially a spouse or significant other) for what's in store. Make sure they understand that their house will be torn apart for a while. Make sure they know that you'll be tracking dirt through their house, driving to Home Depot a lot, and staying up late hammering in nail after nail. The renovation will impact their life nearly as much as it does yours. Step 3: Get Approval So, you've got your plan all drawn out? Check! Do you have permission from your significant other to spend thousands of dollars at the local hardware store? Yay! It's almost time to pick up that sledge hammer, but there is one more thing left to do: Get a building permit. Where I live, a building permit is required for basement renovations. Without it, the city can force you to tear everything out if there's a problem, and your insurance company can refuse to insure your house. Fortunately, at least where I live, it's pretty easy to get a permit. All I needed was a scale drawing of the planned renovation, indicating the locations of walls, doors, windows and plumbing. The size of the windows and doors and the ceiling height also need to be marked down. The permit will cost you a few hundred dollars depending on the size of the renovation. 
Later on, the building inspector will want to see the permit when he/she comes by to inspect your work. They will probably come three times: once for structural work, once for plumbing and electrical, and once more for a final inspection. Of course, you can proceed with the renovation without a permit, but do so at your own (legal) peril.

Step 4: Demolition - Intro

Okay, NOW it's time for the fun part! Tools you may require:

- Sledge hammer - for taking down concrete block walls, for separating 2x4s, and generally hitting stuff really hard
- Large pry bar - for pulling nails and prying off drywall
- Small pry bar - for pulling nails
- A large hammer - for prying out nails, for hitting the pry bar, for smashing out drywall
- Demolition saw - for cutting through 2x4s and other framing members
- Power drill - for removing screws
- Side cutters - for cutting electrical wire
- Dustpan, garbage bags - for cleanup
- Vacuum with fine particle filter - for cleanup
- Angle grinder with masonry disc - for smoothing out concrete
- Fan - for fresh air, and blowing dust out the window
- Ear, eye, hand, foot and breathing protection - to save yourself from the pain of a construction injury

Depending on how "finished" the space already is, you may need to rent one of those large garbage bins for all the debris. Smaller jobs, or rooms with little work done to them, may only require a lot of industrial-strength garbage bags. Find out from the city what you can and can't throw out in the regular garbage.

Step 5: Demolition - Tearing Down Walls

At its most basic level, demolition consists of one basic rule: smash stuff until it's lying in a heap on the floor. But we are not so primal, so here is a more civilized way to go about it. 1. Start by removing anything from the space that isn't nailed down. The only thing that should be in the room you're tearing out should be the tools you need for the job. 2. Shut off the power to the space. Turn off the power at the breaker or fuse panel.
Better yet - turn off the power to the house and remove the space you're working on from the circuit entirely. This will keep you safe as you smash through walls, remove electrical fixtures and cut wires. Once the power is off, go around with a circuit tester and MAKE SURE there is no power present. From now on, your power tools will be running on extension cords plugged in elsewhere in the building.

3. Start smashing! Remove everything down to the concrete walls and floors - it's better to redo everything. It will also allow you to inspect the foundation for cracks and leaks - issues you'll want to fix before going any further. Most of the stuff you tear down can't or shouldn't be used again. Here is a short list of stuff you're likely to remove, and whether you should bother keeping it:

Drywall: Throw it out!
Trim (baseboards, etc): Keep if it's in good shape, and only the long pieces
Electrical outlets and switches: Keep if they are the style you wish to use. Throw out old dimmers.
Light fixtures: Throw out or give away on Freecycle
Electrical wire: Keep only newer, plastic-jacket wire, and long pieces only.
2x4s and other structural stuff: Keep long pieces, remove old nails. Note these pieces should not be used in new construction; they're good only for bracing and firewood.
Old nails, screws: Throw them out!
Concrete: Throw it out
Old flooring: Throw it out in a safe manner!
Old insulation: Throw it out in a safe manner!

4. Clean up your mess! Toss all your debris in garbage bags or in the big bin parked in your driveway. Sweep up and vacuum all the sawdust and drywall dust on the floor. Store any pieces you saved in a safe place.

Step 6: Demolition - Inspecting the Foundation

Now that you've stripped the walls and floor bare, you can inspect the foundation for leaks and cracks. These are issues that will only get worse with time, and will cause major headaches if left unfixed. Here are some things to look for:

Hairline fractures (or worse!)
that run along the wall
Cracks in the floor
Mold (black, red, green - it's all bad!)
Pools or drips of water
Condensation, or damp walls and floor

If you find any issues similar to those listed above, call in a pro to get it checked out! They are all signs of foundation movement and moisture penetration.

Step 7: Minor Foundation Repairs

During the process of demolition, I managed to pull a few chunks of concrete out of the wall. This happened where the previous owner had used concrete nails to attach strips of wood directly to the wall. The damage to the wall was minor, so I patched it up with some concrete repair compound. There were also a few drill holes from old screws, which I also filled in. Fortunately for me, the previous owner had already painted the concrete walls with a moisture-proofing paint. If your walls and floor are bare, then I'd suggest doing this prior to putting up any of the walls.

Step 8: Insulation - Intro

A few months before starting on my renovation project, I had an energy evaluation done on my home. The results were surprising, especially concerning the basement. As it turns out, most of the heat leaking out of the house was leaving through the basement walls, especially the section between the ground outside and the main floor. Just 18 inches of concrete wall above grade was responsible for 25% of my heat loss! Obviously, insulating the basement walls makes all kinds of sense. Unfortunately, insulating a basement wall is not as easy as slapping on some fiberglass batting. There are issues of moisture to contend with, and if you don't do it properly then you risk setting up a perfect little habitat for growing toxic mold.

THEORY: Concrete, despite its ability to crush your foot most effectively if dropped, is not solid. It is porous to water, and conducts heat pretty well, too. Drywall, vapor barrier and fiberglass insulation also permit the movement of moisture, though to a lesser extent.
It is important to remember this, because it means that moisture can enter from both sides of the interior wall - from the moist ground outside, and from the moist air inside! The goal here is to minimize the buildup of moisture between the concrete wall and the inside of the walls you're about to put up. That means you have to do two things: prevent moist interior air from reaching the cool concrete wall (thus preventing condensation), and prevent moisture that seeps through the concrete from building up inside the wall. After doing tons of research, I found the answer. What we're going to do is paste styrofoam panels directly onto the wall, creating an airtight barrier around the perimeter of the exterior walls. Moist warm interior air will not be able to reach the cold concrete walls, and the moisture that seeps through the concrete is stopped because it has no air gap to evaporate into. In addition to this, the foam inhibits mold growth, adding further protection. No vapor barrier is needed!

HOW MUCH INSULATION TO USE? In my case, I used three layers of insulation. I started with two layers of foam insulation, 2 inches and 0.5 inches thick, staggered for maximum restriction of air movement. Once the stud walls were installed I stuffed in 3.5 inches of fiberglass insulation. This provides an R-value of 27 on the top half of the wall, and R13 on the bottom half.

TOOLS:
Large Caulking Gun - used for dispensing the foam adhesive
Hand Saw - for cutting foam sheets and fiberglass batts
Utility Knife - for cutting foam sheets
Tape Measure - for measuring, of course!
Carpenter's square - useful for accurate measurements
Permanent Marker - for drawing cut lines on foam

MATERIALS:
4x8' 2" thick styrofoam sheets, as required
4x8' 0.5" thick styrofoam sheets, as required
Vapor barrier sealing tape - for sealing seams between foam sheets
Fiberglass batting, for 4" stud wall construction with 16" on-center spacing, rated for basement use
Foam adhesive, 800mL tubes, as required
Great Stuff insulating foam

Step 9: Insulation - Pasting on Foam Panels

Before you start, make sure the walls are clean and dry. If you're planning to expand the windows to comply with building codes, do all that first. If you have plumbing running directly along the wall, move it out a bit if you can. You don't want to bury pipes under layers of insulation. The process is pretty simple. The 2" foam panels will have lips along the long ends that are designed to fit together. When you measure each piece of foam, make sure that the panels are aligned properly for a good fit. Simply measure the space in which the foam panel will fit, and mark out anything that needs to be cut out. Try to leave as small a gap as possible between the edge of the foam panel and any surfaces it butts up against. Dry fit the foam panel, and trim as necessary. When you're satisfied with the fit, grab the caulking gun with foam adhesive and lay a 1/4" bead in a wave pattern along the back of the foam. Stick the foam into place, then gently peel it back again. Leave the glue "open" for a minute or two before sticking the foam back onto the wall. This process aids in fast and proper adhesion. Now, just work your way around the room, filling the entire outside wall surface with the foam panels. If you're putting on a second layer, as I did, the process is the same. Measure, cut, fit, glue, and stick. When fitting the second layer, make sure that it totally overlaps the seam of the layer beneath. This will further reduce airflow and improve the performance of the panels.
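Before moving on, it's worth sanity-checking the layered R-values from the intro. The sketch below tallies them up; the per-inch foam rating is an assumed ballpark (rigid foam runs roughly R4-6 per inch depending on the product), so the total lands near, not exactly on, the R27 quoted earlier.

```python
# Rough tally of the layered basement-wall R-values described above.
# FOAM_R_PER_INCH is an ASSUMPTION (~R5/inch); check your actual product.

FOAM_R_PER_INCH = 5.0   # assumed rigid foam rating
BATT_R_3_5_IN = 13.0    # typical rating of a 3.5" fiberglass batt

layers = [
    ('2" rigid foam', 2.0 * FOAM_R_PER_INCH),
    ('0.5" rigid foam', 0.5 * FOAM_R_PER_INCH),
    ('3.5" fiberglass batt', BATT_R_3_5_IN),
]

total = sum(r for _, r in layers)
for name, r in layers:
    print(f"{name:22} R-{r:g}")
print(f"{'total (top half)':22} R-{total:g}")  # ~R25 with these assumptions
```

The bottom half of the wall, without the full foam buildup, comes out around the R13 of the batt alone, which matches the figure given in the intro.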
Go one step further and seal all the seams with Tuck Tape. The insulation step is done for now, until the framing and electrical are installed. Step 10: Insulation - Adding Fiberglass Batts With the stud wall in place and all of your electrical wiring installed, you can finish off the insulation with a layer of fiberglass insulation. This isn't absolutely necessary, but it boosts thermal resistance to a healthy R27, in precisely the location where your house is leaking the most heat. This is one of the easiest steps of the renovation. However, you must wear adequate personal protective equipment. Wear eye and breathing protection, and make sure all your skin is covered, especially your hands! Fiberglass can give you a rash, and is very dangerous if you inhale it. With a knife, slit open the bag of insulation. It comes compressed so do this carefully - it will expand rather quickly! The walls in our renovation are framed using 2x4s with 16" spacing, so if you get the matching fiberglass size the batts will fit in there perfectly. Simply slide each batt between the studs, being careful not to compress the batt too much. If it doesn't fit, then use a knife or saw to cut the batt to size, don't try to squish it in. In spaces where an electrical box is located, cut a notch in the fiberglass to fit around it. That's all there is to it. Make sure you clean up carefully after you finish - you don't want to transfer fiberglass particles to other clothing, or track it around the house on your shoes. I suggest getting started with the drywall soon, so the fiberglass is exposed to the air for the least amount of time possible. Step 11: Framing - Intro In my opinion, framing is the most enjoyable part of this process, second only to smashing stuff with a sledge hammer. There's a certain joy in seeing the walls take shape, skeletal as they may be. ANATOMY OF A STUD WALL A basic wall consists of three members: the top plate, the bottom plate, and the studs. 
The top plate is horizontal and runs along the ceiling, and the bottom plate, also horizontal, runs along the floor. The studs are vertically aligned and run from ceiling to floor. There are two basic methods for erecting the walls. You can build wall sections flat on the ground and raise them into place, or you can attach the top and bottom plates and fit the studs in between. Since the walls, ceiling joists and floor in the room I'm working in are all somewhat uneven, I decided to use the second method to save my sanity.

TOOLS
Hammer - for hammering nails, of course.
Power drill - for turning in screws
Miter Saw - for cutting studs
Hammer Drill - for drilling holes in concrete
A high quality concrete drill bit - because the one included in the box of screws is garbage.
Measuring Tape - make sure it's one with metric markings
Four foot level - to make sure everything is level.
Laser Level - not necessary, but makes things so much easier!
Plumb bob - for aligning the top and bottom plates
Hand saw - for special cuts
A Few Clamps - to hold studs in place while you nail or screw them in
Marking implement - pencil, marker, whatever - as long as it can write on wood and on concrete.
Eye, ear and breathing protection - power tools are loud and make a mess. Protect yourself!
Nail Gun - this is optional. It's certainly faster than nailing by hand, but they're expensive to buy or rent.

MATERIALS
2x4x8 lumber - lots of it! You'll be placing one every 16 inches, and using even more around windows, doors and corners.
2x4x8 pressure treated lumber - for the bottom plate only
10D bright spiral nails - boxes of 'em.
2.5-3" long construction screws - useful for securing studs before finishing the job with a few nails
4" long Tapcon concrete screws - for attaching the bottom plate to the floor

Step 12: Framing - the Bottom Plate

The bottom plate is the base of all the walls you're about to build.
It will be secured to the floor using Tapcon concrete screws, and the studs will be nailed and/or screwed onto it. Since it will be in contact with the concrete floor, it must be pressure treated in case of moisture issues. When you select the lumber for the bottom plate, make sure it is straight, flat, and without any warps or twists. Before you load each piece onto your cart at Home Depot (or wherever you get your lumber), sight along the length to look for defects. If you spot anything, put it back. To reduce headaches later on in construction, the bottom plate (and the top plate) must be perfect. *Note: Don't go lumber shopping with a spouse or child. They'll go nuts with boredom as you spend an hour or two sorting through a skid of wood looking for the best pieces. Trust me.

LAYOUT
The bottom plate will determine where all of the walls are, so make sure you position the pieces carefully. Place the 2x4s close to the wall, with no more than a 1/4" gap between the wood and the wall/insulation. For long spans requiring more than one piece of wood, make sure that the pieces are parallel. At corners, double check your angles - 90, 45, etc. It pays to take your time here.

CUTTING
Nothing complicated here. Measure twice, cut once with the miter saw. Strive for an easy fit with no gaps between pieces.

DRILLING & SCREWING
Grab the hammer drill and install the masonry bit. The drill should have a gauge on the side that you can use to control the depth of the hole. In this case, the hole will be the length of the screw plus a bit of margin (say, 1/2" extra). With the bottom plate in place where you want it to be, drill straight through the center of the wood and into the concrete, perpendicular to the floor. Drill the first hole near the end of the piece, about 6" from the end. It may be necessary to pull the drill out a few times, so the concrete dust can escape. It helps to stand on the wood as you're drilling to keep it from moving around.
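The measurements here boil down to simple arithmetic, sketched below. The helper names and the exact spacing figure are my own illustration built from the numbers in the text (4" screws, ~1/2" extra hole depth, first screws about 6" from each end, then roughly one every two feet), not a code requirement.

```python
import math

# Layout math for fastening a bottom plate with Tapcon screws.
# All values follow the text; tweak them for your own hardware.

SCREW_LEN = 4.0    # inches - 4" Tapcons, per the materials list
MARGIN = 0.5       # extra hole depth so dust doesn't bottom out the screw
END_OFFSET = 6.0   # first/last screw distance from the plate ends
MAX_SPACING = 24.0 # "at the very least, place one every two feet or so"

def hole_depth():
    """Total drilling depth: screw length plus a bit of margin."""
    return SCREW_LEN + MARGIN

def screw_positions(plate_len_inches):
    """Screw centers along one plate: ends first, then evenly between."""
    span = plate_len_inches - 2 * END_OFFSET
    gaps = max(1, math.ceil(span / MAX_SPACING))
    return [round(END_OFFSET + span * i / gaps, 1) for i in range(gaps + 1)]

print(hole_depth())           # 4.5 inches of drilling
print(screw_positions(96.0))  # an 8-foot plate
```

For an 8-foot plate this works out to five screws, evenly spaced and never more than two feet apart.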
With the first hole drilled, drive in a Tapcon screw using the power drill. I suggest using a socket head bit to do this, it slips a lot less. Don't drive it all the way in just yet. Again, it helps to stand on the wood so that it stays flat on the ground. Now, go to the other end of the bottom plate, realign the wood if necessary, and drill a second hole. Drive in another Tapcon. Now the bottom plate won't move, and you can go ahead and drive in a few more screws along the length of the bottom plate. At the very least, place one every two feet or so. Continue in this manner with the rest of the room. Take extra care when aligning interior dividing walls - they should be perpendicular to the outside walls (unless you have something avant garde planned). In places where a door will be, place the bottom plate right across the gap - that section will be cut out later when the door frame is finished. Step 13: Framing - the Top Plate Obviously, the top plate must be absolutely parallel and in alignment with the bottom plate. If not, your headache will be enormous and your curses loud and profane. As with the bottom plate, it's essential that the top plate lumber be as perfect as possible. Take your time to pick good pieces, and you will be rewarded later with easier installation and less scrapped lumber. Take a moment to check out the joists above you. They will cross the room lengthwise or widthwise, and it's onto these joists that the top plate will be attached. Where the top plate is perpendicular to the joists, this process is easy. Just screw the top plate onto the joist wherever they cross. Where they're parallel, you may first have to nail in a few small pieces of wood that span the distance of the joists, and then attach the top plate onto them instead. ALIGNING THE TOP PLATE This is probably one of the trickier parts of the process. 
The best way that I found to do this is to line up one end using a plumb bob or laser level, screw it in, and then line up the other end. Check the alignment using a straight 2x4 held against the top and bottom plates, with a four foot level held against the 2x4. With everything lined up at the two ends, go ahead and drive screws into each joist. Move around the room, attaching a top plate directly above each bottom plate. Take your time, and get it right the first time. Step 14: Framing - Studs Okay, here's where everything will start to take shape! Most building codes in North America will specify a 16" on-center stud spacing. This means that each stud will be 16" away from its neighbour, if measured from the center of each stud. This will change when you hit corners of course - you may have a shorter section. When planning out the placement of the studs, it's important to remember the drywall step. Specifically, will you have somewhere to screw the drywall onto? This is critical for corners, because you will need to make sure a stud is in place - for outside corners, a stud placed right at the end, and for inside corners a stud at each end of the meeting walls. You can always shove in an extra stud if you make a mistake, but why waste wood? MARKING With placement rules in mind, start at one end of a wall and mark the center of where the first stud will be placed. I prefer to mark the side of the bottom plate. Stretch out your tape measure, and mark out the locations of the next studs along the wall, at 16" intervals. Your tape measure may even have these intervals conveniently marked for you. Now, using a laser or plumb, transfer the marks to the top plate as well. CUTTING Measure each stud position very carefully. You don't want it too tight or too loose. Too tight and you might pop some nails (if you can force in the stud in the first place!), too loose and you'll have to waste time with shims. That's why I suggest you use metric to measure the studs. 
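If you do work in metric, the layout marks are easy to generate ahead of time. A small sketch (the 3-meter wall is just an example; positions are stud centers measured from one end of the plate):

```python
# Converting the 16" on-center stud layout to whole millimeters.

MM_PER_INCH = 25.4
SPACING_MM = 16 * MM_PER_INCH   # 406.4 mm on-center

def stud_centers_mm(wall_length_mm):
    """Stud center marks along a wall, rounded to the nearest millimeter."""
    positions = []
    i = 0
    while i * SPACING_MM < wall_length_mm:
        positions.append(round(i * SPACING_MM))
        i += 1
    return positions

print(stud_centers_mm(3000))
# [0, 406, 813, 1219, 1626, 2032, 2438, 2845]
```

Remember that a real wall also needs a stud at the far end and extras at corners and openings; this only generates the regular on-center marks.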
It's much easier to remember an exact number down to the millimeter than it is to mess about with feet, inches and fractions. Mark the measurement on the stud with a pencil or marker. With the miter saw, cut the stud precisely down the outside of the mark. When you fit the stud back into place, it should stay upright but be easily removed.

FASTENING THE STUD
The easiest way to attach the stud is by first driving in a screw at both ends with a power drill. Holding the stud in place at the top with one hand, slowly drive in a screw so the stud doesn't shift. Then, move to the bottom. Once the screws are in place, quickly check the stud with a level. Now you can finish the job with a few nails at both ends of the stud, without worrying about the stud shifting left or right due to the force of the hammering. Put at least two nails at each end, on opposite sides of the stud. After driving in the nails, check to make sure the stud is level one last time. Now, repeat a few dozen times.

WHAT TO DO WITH ALL THE CUT ENDS?
The ceiling in my basement is only about 6'6", so I ended up with one 18" piece of cutoff for every stud. These pieces are perfect for placing horizontally between the studs! Cut each piece to the proper length (just under 15" or so) and nail in place in a staggered pattern. There's no requirement to do this, but it adds a significant amount of structural rigidity to the wall.

SPECIAL CASES
Windows and doors have special requirements when it comes to framing. Read up on how to handle them in their respective sections.

Step 15: Framing - Building a Soffit

The big, ugly return air vent cannot be allowed to make itself seen in our beautifully finished room! It will be enclosed in something called a soffit - a frame built around the vent that can be covered in drywall. This same method can be used to enclose steel H-beams, vent pipes, water pipes and other ugliness that would detract from the look of the room.
The soffit is built on the ground and then lifted into place. First take careful measurements, making sure to leave a bit of space between the inside edge of the soffit and the vent. Also remember that the wall underneath the soffit will need to be specially constructed, since the top plate can't be screwed to the ceiling - it will be secured to the wall instead. The soffit is constructed of 2x2s, 2x4s, and half inch plywood. Start by cutting a piece of plywood that extends from the ceiling to down below the vent. You may need to place more than one piece end-to-end. Then screw 2x2s onto the edges on either side of the plywood for the entire length. This will create a rigid, perfectly straight cover for the face of the vent. Next, create the "top plate" for the wall beneath the vent, which will extend the length of the soffit. This top plate will eventually be supported by studs on either side of the vent and by supports anchored into the wall. Between the top plate and the plywood piece, cut short pieces of 2x4 that rest on top of the top plate, and attach to the back of the plywood cover. Very careful measurement is required here! With the aid of a helper, hoist the soffit into place over the vent, and screw it into the ceiling joists and onto the studs at either end of the soffit. It should now support its own weight. Finish the wall below the soffit by installing studs as usual every 16 inches. To make the stud wall beneath the soffit stronger, use some scrap pieces of 2x4 to build a bracket. The bracket should be anchored to the wall using Tapcon screws, and screwed onto the stud close to the top plate. Use one or more brackets depending on the length of the soffit. There you go - the soffit can now be drywalled just like any other wall! Step 16: Electrical - Intro Things are really starting to come together now. The walls are framed and you can start to tell what the room will look like! It's time for the next step, electrical! 
Now, you may have heard from others that you must have a licensed electrician do this type of work. If you're not familiar with basic electrical theory, perhaps that's what you should do. However, most electrical work is pretty simple and straightforward, so all you really need is for a professional to inspect your work once it's done! You could have an actual electrician do this for you, or have the building inspector do it. Either way, it's a good thing to do if only for some peace of mind. THEORY I will be telling you how to hook up the three most basic elements of an electrical system: outlets, lights, and switches. Inside each jacketed wire you will find three wires, a bare copper wire (ground), a white insulated wire (neutral), and a black insulated wire (hot or live). We'll keep it simple and say that the electrical current flows from black to white. The ground wire is a safety net; if something inside an electrical device shorts out, current can flow through the ground wire instead of through you. Outlets in a circuit are all connected in parallel. This means that each outlet will get a ground, neutral and hot wire attached. A switch is connected differently. An ordinary single pole switch is connected in series with the hot wire, and the neutral wire passes right by. Turning off the switch will therefore cut off the current going to the light fixture - an important safety feature. 
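One more bit of background worth having: the 14-gauge cable in the materials list is the size normally paired with 15 A breakers in North America. A quick sketch of the capacity math (120 V nominal and the 80% continuous-load figure are the usual guideline values; verify against your local electrical code):

```python
# Capacity check for a 15 A branch circuit, the breaker size that
# 14-gauge cable is normally rated for in North America.

BREAKER_AMPS = 15
VOLTS = 120               # nominal North American line voltage
CONTINUOUS_FACTOR = 0.8   # usual guideline for continuous loads

max_watts = BREAKER_AMPS * VOLTS
continuous_watts = max_watts * CONTINUOUS_FACTOR

print(max_watts)         # 1800 W absolute limit
print(continuous_watts)  # 1440 W for sustained loads
```

In other words, even with outlets every three to four feet, everything on one circuit still shares that same ~1440 W continuous budget.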
TOOLS
Power Drill - for drilling access holes in studs
Spade Bits - for drilling large diameter holes that the wires can fit through
Wire cutter/Stripper - used for cutting electrical cable and stripping off insulation
Screwdriver - for mounting electrical boxes, and for securing wires to terminals
Hammer - for mounting electrical boxes, and for attaching cable clips

MATERIALS
14 gauge 3-conductor (2 conductor + ground) household wire
Rectangular junction boxes - for outlets and switches
Octagonal junction boxes - for branching cables, and for some light fixtures
Octagonal junction box cover plates - to cover boxes used for branching
Yellow and Orange Marrettes - used to join two or more wire ends together (note: Marrette is a brand name)
Cable Clamps - for attaching cables to the studs
10D nails or 1" construction screws - for attaching electrical boxes to studs

Step 17: Electrical - Attaching Junction Boxes

Picking where outlets and switches will go is sometimes harder than you may think. You want to make sure that outlets will be placed where they are needed, and that switches are in logical positions where anyone could find them in the dark.

OUTLETS
Back in the 50s and 60s, people didn't have many electronic gadgets. Often, one or two outlets for an entire room were enough. These days we're wired up to our eyeballs so more outlets are called for. In my renovation, I placed outlets about every three to four feet. I'm not planning to use all of them, but they are ready to go if I need them. Outlets are typically placed about a foot off the ground, except in some special cases. In my new workshop the walls are lined with tables, so the outlets are placed about a foot higher than the table top. In a bathroom, you may want to place an outlet above the sink so you can plug in a shaver or electric toothbrush.

SWITCHES
Switches are typically placed four feet off the ground.
You may only need a single switch, or a whole bank of switches for controlling different lights in the room. If the room has more than one point of entry, you may want to connect a switch near each of the entrances. In this case, you'll need to use three-way switches (single-pole, double-throw) and cable with an extra conductor (bare, white, black, red).

LIGHTS
The lights you choose for your project may come with special mounting hardware, as mine did, or they may simply attach to a standard octagonal junction box. Lights can be wall mounted or ceiling mounted depending on the style. When deciding where to locate your lights, you must first decide what the purpose of the light will be - is it meant to light a specific area, or is it simply accent lighting? Consider where furniture, doors, shelves and workspaces will be when the room is finished and plan ahead.

ATTACHING JUNCTION BOXES
The junction box may have a few spikes on one side - this is the side that goes against the stud. Line up the box so that the open end sticks out past the edge of the stud about 1/4". Hammer it in so that the box stays in place on its own, then make it permanent by driving a few nails through the holes at the top and bottom of the box. Octagonal junction boxes can be bought with or without spikes. In this case, once the box is in position, use construction screws instead of nails. Since you can easily access the inside of the box you can use shorter fasteners, and screws are faster. If your lights use special hardware, follow the supplied instructions when you're mounting the boxes. In my renovation I used flush mounted pot lights, which attached directly to the joists using special hanger brackets.

Step 18: Electrical - Running Wires

With your junction boxes in place, you can begin running wires. Before starting, I suggest drawing out the path that the wires will take. You can use a copy of your blueprints for this.
Try to make the paths as efficient as possible - this will save you time and money. In my renovation I used a junction box mounted in the ceiling to divide the power coming from the breaker panel into two circuits: one for the outlets and one for lights. From there, the outlets are all daisy chained, to keep the amount of wire used to a minimum. The wiring for the lights is a bit different - the wire passes through the switch before connecting to the lights. They are also daisy chained.

DRILLING HOLES
You will need to drill a few holes here and there, so the wires can pass through the studs and top plate. Using your wiring diagram, locate every place where a wire must pass through a stud, and drill a hole there. Try to drill the holes an inch from the rear of the stud, if possible. The hole you drill should be just large enough to fit the wire or wires that are passing through - and no larger.

RUNNING WIRES
In most cases, you can pull a long piece of wire off the reel without cutting it yet. Start at one end of the run (say, outside an outlet box), and run the wire up along the studs, feeding the wire through any drilled holes as you go. Try to avoid twisting the wire if possible. Leave about a foot of extra wire at each end of the run. Then, cut the cable from the reel. The junction box will have four or more tabs, two on the top and two on the bottom, which you can bend off to feed a cable through. Bend off a tab closest to the incoming wire, and feed the wire through. An integrated clamp inside the box can then be tightened down to hold the wire in place.

SECURING WIRES WITH WIRE CLIPS
With one wire fed into a junction box, you can begin nailing the wire to the stud using wire clips. Place the first one a few inches away from the junction box, and spaced every 12-16 inches afterward. Where the wire turns a corner, nail a clamp at the beginning and end of the turn, leaving a small amount of slack in the wire so it doesn't have to make a sharp angle.
Try to keep the wire nice and flat against the stud, avoiding twists. Work your way along the length of the wire until you reach the end of the wire. Note that you can run wires parallel to each other along a stud, but I wouldn't recommend more than two to a side. Also, never stack wires or use one clip for two wires.

RUNNING A WIRE FROM THE BREAKER PANEL
I often start with this wire. It is usually the longest run, especially if you have to cross the house. In my case, I had to fish the wire through the joists and across the rec room ceiling, a task accomplished using two long 1x1s with bent nails on the ends. You may wire it up to the outlets and switches in the room you're renovating, but don't connect it to the breaker panel just yet.

FINISHING OFF - FOR NOW
Since the outlets themselves will sit on top of the drywall, you can't hook up the outlets or switches just yet. For now, just take the foot-long pieces of wire hanging out of the junction boxes and roll them back over themselves, then tuck them into the box and out of the way.

Step 19: Electrical - Connecting Outlets and Switches

Once the drywall is up you can install the outlets and switches.

OUTLETS
The outlet will have five screws, two on each side and one on a tab near the bottom. The screw at the bottom is ground. The pair of screws on the right are "hot", and the pair on the left are "neutral." You can also tell the difference between the two by looking at the size of the outlet holes - the smaller slot is hot and the larger slot is neutral. The round hole is ground. You will be connecting the black wire(s) to the hot side, the white wire(s) to neutral, and the ground wire to (you guessed it!) ground. Take the wire and strip back the plastic jacket all the way to the edge of the outlet box. Once you get good at it, you won't need much wire to work with. If you're new at this, leave a bit of extra. Cut the black and white wires so that 6" or so protrudes from the outlet box.
With wire strippers, strip about three quarters of an inch of insulation off each end. Cut the ground wire so that about 8" protrudes from the wall. With needle nose pliers, bend a small loop into the end of each wire, including ground. Feed each wire end under the appropriate screw and tighten it down. There should be a minimum of copper exposed outside the edge of the screw when you're done. If there is any excess wire, trim it down with side cutters. With the outlet wired, you can tuck the wires back into the outlet box and screw in the outlet using the built-in screws. Make sure the ground wire isn't touching the neutral or hot screws on the sides of the outlet! When you tighten the outlet in place, make sure it's perpendicular to the floor.

SWITCHES
A regular single pole switch will have just two screws. The hot (black) wire of the incoming live wire will connect to one screw, and the hot wire from the outgoing wire will connect to the other. The white and ground wires are simply connected together, end-to-end, using marrettes. The process is almost identical to that of wiring up an outlet. Strip back the plastic jacket, and cut the wires to about 6 inches. Strip 1/2" off each end, but only put a loop in the black wires. Start with the white and ground wires: Hold the two white wires side by side, with the tips lined up. Twist an orange marrette onto the wires, until the wires begin twisting around each other. You should be able to tug on the wires without them pulling out of the marrette. Repeat for the ground wires. Then, tuck the white and ground wires back into the switch junction box. Now, secure the two black wires onto the screw terminals on the switch. Tuck the black wires into the switch box, then screw in the switch. Make sure the ground wire is not touching the screw terminals on the switch!

LIGHTS
Depending on the lights you're mounting, the connection method will be a little different.
Some lights, like basic single-bulb fixtures, are designed to be mounted directly to an octagonal junction box. They have screw terminals that the white and black wires attach to, and the ground wire is connected to a screw terminal inside the junction box. The light is then fastened to the junction box using the included screws. Other lights are designed to be mounted to the same junction box, but have wire leads instead of screw terminals. In this case it's just a matter of matching colours - white to white, black to black, and ground to ground. Use marrettes to tie the ends of the wires together. Then, attach the light fixture to the junction box using the supplied hardware. This hardware can differ, so follow the included instructions. The inset pot lights I used in my renovation used their own special junction box, which was mounted directly to the light fixture. The fixture attached to the joists in the ceiling using special brackets. If your lights are like this, then follow the supplied instructions. Step 20: Ductwork - Intro During my renovation I didn't have to do much with the ductwork. I installed a second cold air return (since the basement didn't have one), and installed a few register covers. The usual rules apply when working with sheet metal: measure a few times, and cut carefully. TOOLS Shears - for cutting sheet metal Nibbler cutter - for tricky cuts in sheet metal that you can't do with shears Power Drill - for drilling starter holes in existing ductwork Tape measure - for measurements Permanent marker - for writing on sheet metal MATERIALS Flat sheet metal for ductwork - the raw material that ducts are made of Pre-made ductwork pieces - for installing new vents or ducts Register covers - to cover and protect vents Aluminum duct tape - not the cloth stuff!
Straps - for securing ductwork to joists and studs Screws - the perfect thing to attach straps to wood Step 21: Ductwork - Install a New Cold Air Return When the energy efficiency inspector checked out our house, he suggested I add a cold air return in the basement. This improves air circulation throughout the house - in the summer, cool basement air can be pumped to upper levels to help cool the house. Step 22: Drywall - Intro Welcome to what is perhaps the most challenging part of a renovation. I must admit, before starting this project I knew virtually nothing about how to drywall, and at best I'm still a novice. All of what I learned (and indeed, much of the work) can be attributed to my father-in-law, who did this stuff for years. First, a bit of background. Drywall is basically a sheet of gypsum encased in paper. It comes in a number of different thicknesses, 1/2" being the most common. 5/8" drywall is often used in bedrooms because it resists the spread of fire a bit longer, thus giving any occupants more time to escape. There are also different varieties, most notably "greenboard", which is moisture resistant (but NOT waterproof). It is used in bathrooms, though not in the actual shower or tub. Drywall is fastened to stud walls using drywall screws, or it can be glued directly to foam insulation. It is typically 48 inches wide (three 16" stud spacings), and anywhere from 8 to 12 feet long. The gaps between drywall boards are filled with drywall compound, a sort of quick-drying plaster that is easily sanded smooth. In this renovation I chose to use plain old 1/2" drywall for the walls and ceiling. TOOLS Utility Knife - used for scoring and cutting drywall Drywall Saw - a small, aggressive handheld saw used for cutting drywall Power drill - for screwing in drywall screws Drywall dimpler bit - a special bit used for drywall screws, that creates a recessed "dimple" in the drywall surface surrounding the screw Measuring Tape - should be obvious by now!
Drywall T-square - used for drawing perfectly perpendicular lines on drywall Pencil Shims - used to elevate drywall off the floor during installation Deadman - a "T" shape made of 2x4s used to support drywall being mounted on the ceiling. Essential if you're working alone! 3", 6" and 10" drywall trowels - for laying down drywall compound Small and large corner trowels - for neat corners Metal Shears - for cutting corner braces Assorted sanding pads - for sanding drywall compound A Vacuum with a fine particle filter - For cleaning up the mess Breathing protection - because inhaling drywall dust can't be good for you. Laser Level (the type that draws a vertical line on the wall) - fantastically useful device, this is! MATERIALS Drywall - I used regular 1/2" sheets for everything. Estimate one sheet for every four feet, plus some extra Drywall Screws - Each 4x8 sheet gets about 30-35 screws. Drywall Compound - get the "dust control" stuff, it's nicer to work with. Durabond 90 - a high-strength drywall repair compound useful for filling large gaps and covering corner braces Drywall tape - used to bridge the gap between drywall sheets, giving the drywall compound something to adhere to Step 23: Drywall - Mounting Sheets on the Walls I actually think this part is pretty fun. Measure the space, cut the drywall sheet to fit, and screw it in place. Well, it's a bit trickier than that. ;) I'm going to explain how to mount a sheet of drywall on a wall first, even though you typically start with the ceiling. The reason for this is that at the corners, where the wall and ceiling meet, it's a lot easier to get a sheet of drywall on the wall to fit perfectly. Thus, the drywall on the wall covers any gaps left behind by a sheet on the ceiling. MEASURING Before mounting the first sheet of drywall, take a moment to plan where you are going to start. 
Chances are, most of your walls will not be an exact multiple of four feet, so at least one sheet of drywall will have to be shortened to fit. Ideally, that cut edge should be placed so that it's buried in an inside corner. Conversely, try to line up the nice factory-finished edges at outside corners. Start by measuring across the wall to determine how many sheets will be required. With that number in mind, make note of where the shortened sheet will be (ideally, in a corner). You can do this sheet first or last, it doesn't really matter. Lay a sheet of drywall on the floor. Make sure the floor is clean, and that you have room to work all the way around. With the tape measure, first determine the exact height and width of the wall section to be covered. Note that with 16" on-center stud wall construction, a full sheet of drywall should line up perfectly with the studs, with the edge of the drywall lining up with the center of the stud. Transfer those measurements onto the finished side of the drywall. Reduce the height measurement by about 1/4", so the drywall can be elevated off the floor. Next, locate any outlets in that section of wall, and mark out where a hole needs to be cut for the outlet. CUTTING DRYWALL There are two main ways to cut drywall. Each has its benefits. The first way is to score along the cut line with a utility knife, so that the paper and a bit of the gypsum are sliced through. A good, sharp blade is essential. Then, just snap the drywall board along the line. It happens very easily! With the knife, cut the paper backing to complete the cut. This results in a nice straight line with a relatively clean edge. The downside is that you can only do straight lines, and the cut must be from end to end. The second way is using a drywall saw. Operation is simple - just cut the board with the saw. For interior cuts (say, to cut a hole for an outlet), you don't even need to drill a hole - just push the point of the saw through the drywall. 
The drywall saw also allows you to cut corners and curves in the drywall, but at a price. The edge is more ragged than a scored cut, and it makes a lot more mess. MOUNTING THE DRYWALL This step is made a whole lot easier with the use of a laser line. This is a simple little tool that draws a perfectly level vertical line on the wall. Before putting the drywall on the wall, mark the locations of all the studs on the floor, using a marker or piece of tape. Place two shims on the floor where the drywall will go, then lift the drywall into place. It should fit evenly and flush against the corner or adjacent drywall sheet. Check for good fit around outlets and vents, and trim if necessary. Increase or decrease the thickness of the shims for proper fit against the ceiling. Once everything fits, drive in a few drywall screws along the edges, where you're sure a stud is located. Do the edges first, using about 7-10 screws along the height of the wall. Now, line up the laser line with the stud markings on the floor, and finish the job. The laser line should indicate the exact center of the studs behind the drywall sheet, eliminating any guesswork. Continue in this manner until all your walls are covered. Step 24: Drywalling - Ceilings The process of drywalling the ceiling is very similar to the walls. Measure to fit, and screw in place with drywall screws. The trick is holding the sheets in place while you're working. It can be done if you're working alone, with the right tools. MEASURING This process is similar to walls, with one big difference: To make it easier, place the sheet face down on the floor, and mark dimensions on the back of the drywall sheet instead. For me at least, it's easier to picture how the sheet will fit on the ceiling, simply by lifting it straight up and into place. Mark height and width on the sheet, and any holes that need to be cut (say, for recessed lighting). CUTTING Cut normally, but be extra careful when you score lines. 
Make sure that when you snap the sheet, you don't tear away any of the paper on the finished side. MOUNTING OK, here's where things get interesting. The easiest and fastest way to hold a sheet of drywall on the ceiling is with two helpers. Simply position the drywall and drive in screws. You can even use the laser line tool for screwing onto the "blind" joists - if you position it back far enough, it will draw a line on the ceiling as well. Line it up with the joists on either side of the drywall sheet, and use the line as a guide. If you're working alone or with just one other helper, you'll need to build something called a Deadman. It's basically a T-shaped support that is slightly taller than the height of the ceiling. It can be made with 2x4s left over from framing. To use a Deadman, measure and cut the drywall sheet as above. Lift the sheet into place, and have your helper shove a Deadman under each end of the sheet. It should wedge in place and hold the sheet firmly against the joists above. Once the sheet is screwed in place, unwedge the Deadman and move to the next section. You can do this if you're working alone too, but I don't recommend it. In this case, prop one Deadman up against the wall, and set the other one in a place where you can reach it from a standing position. Carefully lift the drywall sheet into place, and rest one end on the Deadman leaning against the wall. Quickly (before your arms give out), reach over and grab the second Deadman and shove it in place at the other end of the sheet. Align the sheet, and firmly wedge both ends in place. Screw in place with lots of screws before the crazy thing collapses on your head. ADDITIONAL NOTES: - The ceiling covers a large surface area, and it's possible that you may have to line up not only the nice factory-finish edges, but also the rougher ends. In this case, you may want to bevel the edge slightly with a very sharp knife, which will make finishing easier and neater later on. 
- You may end up in a situation where the end of a piece of drywall hangs in mid-air because there is no joist to screw it onto. In a case like this, attach pieces of scrap 2x4 between the joists at 12" intervals prior to screwing on the drywall sheet. With these in place, you'll have something to attach that loose end to. - When measuring the drywall for the ceiling, pay close attention to whether the walls actually meet at a perfect 90 degree angle. They might not, in which case you'll need to trim the sheet accordingly or it won't fit properly. Step 25: Drywalling - Corners and Taping CORNERS To protect outside corners from damage, metal corner braces are nailed onto the drywall. If something were to smash into the corner, the metal brace takes the majority of the impact, often without suffering much damage at all. A bare drywall corner would be damaged by even a light impact with something hard. To attach a corner brace, first cut it to size using metal shears. It should fit from the ceiling right to the floor. Press the brace onto the corner with one hand (or have a helper hold it), and nail it in place using drywall nails. The brace should be as flush with both sides of the wall as possible. Drive in nails at regular intervals, making sure not to damage the drywall or brace with an errant hammer blow. TAPING Some people hate this job, but it's not so bad. Drywall tape is used to bridge the gap between adjacent sheets of drywall, so the drywall compound has something to cling to. Otherwise, it would sink into the gap and you'd have to go over it half a dozen times. Drywall tape should be applied anywhere that two drywall edges meet, both on flat sections of wall and ceiling, and in inside corners. It's not necessary on outside corners, because the corner braces perform double-duty in that regard. It's important to be neat. There should be no folds or wrinkles in the tape, and it should be as straight as possible.
For corners, try pre-folding the tape before laying it in place. Use a putty knife to push the tape into the corner for a nice, sharp angle. Step 26: Drywalling - Where to Use Durabond Durabond, a brand of high-strength drywall repair compound, is pretty nifty stuff. When it dries it's very hard and strong. It comes in powder form, so you have to mix it yourself with water. The "90" in the name refers to the drying time - 90 minutes. In a tub you don't mind throwing out, mix the Durabond with tap water according to the instructions on the box. The result should be a smooth grey paste that sticks to walls without running. APPLY THE DURABOND Using a trowel or wide putty knife, apply the Durabond paste along the corner brace, covering the metal completely. With a long, even stroke, skim along the surface to achieve a smooth finish with no bumps. If you scrape too deep and reveal nail heads or edges of the brace, reapply some more Durabond and try again. Scrape any excess back into the tub, unless it gets contaminated with debris. Do both sides of the corner in this manner, working quickly before the Durabond dries. This process has a bit of a learning curve, so start someplace less noticeable if possible. To repair a large gap, use a smaller putty knife to force Durabond into the gap. Then, with a wider putty knife or trowel, skim along the length of the gap to smooth it out. Make sure the filled area is not raised above the surface of the drywall, or you'll spend a long time fixing it later. As soon as you're finished, immediately clean off your tools. Durabond will stick to metal and ruin the fine edge necessary for achieving a smooth finish. Wipe off the tools with a clean rag, then wash off any residue in water. Step 27: Drywalling - Applying Drywall Compound The process of applying drywall compound is often called mudding or skimming. The basic idea is to apply one or more layers of drywall compound (mud) to hide imperfections in the wall - most notably, gaps and screw heads.
Between each layer, the dried 'mud' is lightly sanded to maintain a smooth finish. PREPARING THE DRYWALL COMPOUND Drywall compound is usually sold pre-mixed in big tubs. Try to get the "dust control" kind if you can; it makes cleanup easier later on. Just open the tub, mix it up with a clean stir stick, and you're good to go! APPLYING THE FIRST LAYER You may have noticed that the edges of the drywall sheets are slightly beveled. This is done so that drywall compound can cover the gap and drywall tape, without creating a raised vertical stripe. Start with a narrower trowel; 6 inches is perfect. Just slop on the mud, roughly flattening it against the wall as you go. When the entire length of the gap is covered, skim lightly over it again to get everything smooth and even. Again, this takes a bit of practice to get just right, but I'm sure you'll figure it out soon enough. To cover the screw heads, use a small 3" putty knife and simply skim over the dimpled area. It takes very little mud to do this, and you can often do five or six screws with a single scoop of mud. When you're done, clean up any extra mud that slopped onto the wall or floor before it dries. Be sure to wash off your tools as well - mud doesn't stick to tools as much as Durabond, but it sure is easier to clean off while it's wet. MUDDING INSIDE CORNERS Corners are tricky. I like to use a 3" putty knife to glob a thick layer of mud all along the corner, then use a wide corner trowel to skim over it for a smooth finish. This is probably the hardest part of skimming, and it will take you a while to get it right. Again, I suggest starting in a dark corner first, and moving to the more visible locations when you're more confident in your skills. MUDDING OUTSIDE CORNERS Previously, you applied Durabond to the outside corners, to cover the metal corner bracing. Once the Durabond is fully dry, you can skim over it with mud for a nice smooth finish.
Use a wide trowel that covers from the corner to well past the edge of the Durabond layer. SANDING THE FIRST LAYER When drywall compound dries, it turns from a light grey colour to white. It will be hard to the touch, and will feel warmer than damp compound. Usually, this takes a few hours to a full day, depending on the temperature, humidity, and air movement in the room. The first layer can be sanded using a more aggressive sandpaper (100 grit) mounted on a sponge sanding block. It doesn't take a lot of pressure, so go easy. You only need to remove the worst of the irregularities. My father-in-law likes to use a pole sander for this, but I prefer to get up close and do it by hand. Go over everything, using your hand to feel for bumps. In corners, use a special sanding block with abrasive sides that meet at a 90 degree angle. By the time you finish, the room will be filled with clouds of dust and nearly every inch of you will be covered in a thin layer of it. When the dust has settled, sweep up as much of it as you can with a broom, then take care of the rest with a vacuum. SKIMMING THE SECOND LAYER The second layer of mud goes on much like the first, though you will be covering a wider area. Instead of the 6" trowel used for the first skim, use a 10" trowel. As before, glop on some mud and skim it smooth. If you work carefully, this may be the last layer you have to do. Some of the screws may need a second layer of mud as well. Use the same 3" putty knife as before. SANDING THE SECOND LAYER For this layer, switch to a higher grit sandpaper, such as 150 or 180. This will leave a smoother finish. Carefully sand everywhere, making sure not to remove too much material. Take special care to blend the edge of the mud into the wall, for a nice smooth transition. Use long sweeping motions for better blending. When the sanding is done, go ahead and clean yourself and the room once again.
Hopefully, you'll only need to do two skims, but in some areas three may be required. It shouldn't take more than that, though. Step 28: Painting - Intro Most people have done a little painting before. The biggest challenge here is preventing the paint from going places you don't want it to be. Go ahead and pick up a few dozen colour swatches. When you've decided on a colour (or colours!) come on back and we'll go over the basics. TOOLS Paint Brushes - Small and medium should be enough. Paint Roller - makes painting large areas so much easier Paint pan - for use with the paint roller Edging tool - for precise lines in corners, without having to use masking tape Drop sheets - optional, only needed if flooring is already installed. MATERIALS Paint - you'll need primer and a top coat in the colours you chose. Masking tape - to mask off small details Step 29: Painting - Ceiling It's a good idea to start with the ceiling, because any little drips that fall onto the walls will be painted over. Before you start painting, brush off any leftover drywall dust that's still clinging to the drywall, using a broom or rag. Make sure you have plenty of light in the room you're working on. If you haven't connected the lights yet, then use a portable worklight or work during daylight hours. Clear everything off the floor, because drips may occur. There's no need to cover the floor with a drop sheet unless you've already installed flooring. PRIMER The primer serves two functions: it helps to seal the drywall, and it provides a base colour for the topcoat, so that the colour matches what's on the swatch. It also helps cover over pencil marks and other blemishes so they won't show through. You can use white primer, but usually you can get it tinted a shade or two lighter than the topcoat. With a medium brush, paint around all of the corners and anywhere that you can't reach with a paint roller - lights, vents, etc.
You can work directly from the paint can if you like. Once the small sections and corners are done, you can start using the roller. Set the roller tray in the middle of the room, and fill the bottom of the tray with paint. Dip the roller in the paint and use it to draw a bit of paint up onto the ridged section of the tray. Then, roll the entire surface of the roller in the paint. Roll the paint roller along the ceiling, painting in 4 foot sections. When the paint on the roller is exhausted, repeat and continue. INSPECTION Once the primer is dry, check for visible blemishes in the surface of the drywall. Often an even layer of paint on the surface will help reveal bumps, dips and other imperfections that weren't visible before. If you notice any defects in your skimming, simply break out the drywall compound and slap it right on top of the primer. Once it has dried, paint over the section you repaired with primer. TOPCOAT The method here is the same as the primer, only with a different paint. This time, be especially careful with your workmanship - make sure the entire surface is covered with no missed spots and no lighter areas with less paint. It's easier to correct problems now, while the paint is still wet, than going over them with a small brush when your wife points it out a month later. Step 30: Painting - the Walls The process here is the same as the ceiling, but with one catch - how do you avoid accidentally getting paint on the ceiling? CUTTING IN Professional painters will do this without any extra tools - with a high quality brush, simply paint along the inside corners in a precisely straight line, with the edge of the brush right in the corner but never touching the ceiling. It sounds simple enough, but when you're working above your head, your arms get tired really fast. On taller ceilings, it's very difficult to reach that high AND get the precision you need. Luckily, there is a tool to help. 
It's basically a long, flat piece of metal or plastic that you can push into the corner and paint against. It's like mobile masking tape. MASKING TAPE You probably won't need much masking tape while painting. But, it can be useful in tight areas where a larger tool won't fit. It's great if you're planning to paint stripes and other patterns on the wall, though. When applying masking tape, make sure the surface is clean and dry. Press firmly, especially on the edges, so that paint can't wick underneath. Also remember that masking tape has a limited "working life," often printed on the package. Painter's tape usually has a work life of 7-14 days depending on how much you want to spend - so make sure you finish any work soon after sticking it down. AVOIDING OUTLETS AND SWITCHES Try not to paint the outlets, switches, and light fixtures. That is all. Step 31: Installing a Door - Intro ADDITIONAL TOOLS Nail Punch - for driving finishing nails below the surface of the door frame Screwdriver - for mounting door hardware MATERIALS "Pre-hung" Door - basically a door already mounted in a frame 8D finishing nails - for nailing the door in place Shims - for shimming the door in place Door hardware - door handle, latch and catch, usually sold together in a package. - NOTE: If the ceiling in your basement is lower than 7 feet, double-check to make sure the door you selected will have proper clearance. If a standard-height door (80 inches high) is too tall, you can get a shorter door (78", or a custom height). - NOTE 2: Pre-hung doors come pre-assembled to open on either the right or the left. Make sure you select the right one based on how you want it to open. Step 32: Installing a Door - Framing A door frame needs to be strong, to support the weight of the door and the force of opening and closing (opening and slamming?). We'll be adding extra studs on either side of the door and across the top. 
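The manufacturer's stated rough opening is the number you should actually frame to, but the arithmetic behind it is worth sanity-checking before you cut. A minimal sketch of that arithmetic, assuming typical 3/4" jamb stock and a 1/4" shim gap per side (both values are hypothetical placeholders - check the paperwork that comes with your door):

```python
# Hypothetical rough-opening estimator for a pre-hung interior door.
# The manufacturer's spec sheet always wins; the jamb thickness, shim
# gap, and flooring allowance below are assumed typical values only.

def rough_opening(door_w, door_h, jamb=0.75, shim_gap=0.25, flooring=0.25):
    """Return (width, height) of the rough opening in inches."""
    # Width: jamb stock plus shim space on BOTH sides of the door slab.
    width = door_w + 2 * jamb + 2 * shim_gap
    # Height: head jamb and shim space on top only, plus a gap under
    # the frame for flooring (pre-hung doors leave little room for this).
    height = door_h + jamb + shim_gap + flooring
    return width, height

w, h = rough_opening(32, 80)   # standard 32" x 80" door
print(w, h)                    # 34.0 81.25
```

If the numbers your door maker publishes disagree with this estimate, trust the published numbers - the point of the sketch is only to show where the extra inches come from.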
MEASURING THE ROUGH OPENING Doors are available in a number of different widths, the standard being 32". When you buy a pre-hung door, measurements are provided for the "rough opening" that is required for the door to fit into. The rough opening is slightly larger than the door frame on all sides, so that you can insert shims to precisely align the frame for smooth opening and closing. With the aid of a tape measure, mark out the width of the rough opening on the bottom plate where the door will be. Then, transfer those measurements to the top plate using a plumb bob or a laser line. Measure and cut one stud for each side of the door, and nail them in place on the outsides of the marks. Make absolutely sure that these two studs are perpendicular to the floor, and perfectly parallel to each other. Now, measure the height of the rough opening and mark it on each of the studs you just installed. Make sure you're measuring from the floor and not the top of the bottom plate. If the flooring you're installing is thicker than 1/4", make sure you compensate for this by placing the door frame a bit higher. There is a small gap for flooring built into a pre-hung door, but not a lot. Measure and cut a piece of 2x4 that will span the width of the rough opening and nail it into place. Nail another block between this piece and the top plate, if you have room. Finally, cut two more studs and attach them directly beside the ones that are already in place. Make sure you secure these extra studs to the top and bottom plates, AND onto the adjacent studs. Double check everything one more time for proper level and squareness. Once you are satisfied with the rough opening, you may cut the section of bottom plate that spans the door opening. Use a hand saw to cut flush to the edge of the studs on either side of the frame. Step 33: Installing a Door - Mounting the Door This can get a little fiddly and perhaps a bit annoying. Unpackage the door and lift it into place into the rough opening.
Make sure you have it opening in the right direction! The door frame will extend past the edge of the studs about half an inch (the width of standard drywall!). When you're lining up the door, make sure that the frame remains centered on the studs so that it lines up properly when it comes time to put on the drywall. USING SHIMS Shims are usually used in pairs. They are about 1/4" at the thickest end, and taper to a point. When sandwiched together with the thick end of one against the thin end of the other, they form a "flat" spacer of adjustable thickness. Moving the thick ends closer or further apart will change the overall thickness, while remaining flat. SHIMMING IN A DOOR With the door frame roughly centered in the rough opening, you can begin inserting shims between the gaps. Use the shims in pairs as described above, adjusting for thickness to fit the gap. The goal is to use the shims to get the door frame parallel and perfectly square, so that the door opens and closes without resistance. As you work, continually measure the frame with a tape measure, checking the distance across the door (these measurements should all be identical) and from corner to corner (these two measurements should be the same). Insert shims at regular intervals, with at least 4 per side. Use more around the latch. It's okay if the shims stick out like crazy, they will be trimmed later. Once you're satisfied that the door is square, you can begin nailing it in place. Here's where it gets frustrating: many of the shims will move as you're hammering. Be patient, and be prepared to readjust things a lot. Using 8D finishing nails, drive a nail straight through the frame and shim, and into the stud. Use two nails per shim, on either side of the door. Don't hammer the nail in all the way just yet, in case you need to pull it out to make a correction. As you drive in the nails, continue to measure the door to make sure it remains square.
Once all the nails are in and the frame is square, you can go ahead and drive each of the nails all the way in. Use a nail punch to drive the nail 1/16" below the surface of the frame. Later, prior to painting the door, fill the holes with wood putty. TRIM OFF THE SHIMS With a utility knife, score each shim flush with the edge of the stud. Then, snap off the shim. It should break off cleanly, but if it doesn't then trim it down with a utility knife or a handsaw. ADDING DOOR HARDWARE Later, when the door is painted, you can mount the door hardware. The pre-hung door should have a pre-drilled hole for the latch, handle and catch (in the frame), that is compatible with most of the door hardware sets you'll find. Pick one that suits your decor, and install it according to the supplied instructions. Step 34: Installing Windows - Intro No, don't recoil in fear - we're not installing that type of Windows! The small hopper-style windows in my basement were installed in 1953. Needless to say, while they were in OK condition, they were about as energy efficient as a screen door. I was very wary going into this part of the renovation, and to be honest I'm still not 100% sure I did it right. I read as much about it as I could, but you'd be surprised at how little useful information there is out there. Don't even bother asking for help at a hardware store, they were completely useless! So here goes - if you're a pro and wish to correct any mistaken advice I've given, please feel free. TOOLS Demolition Saw (Reciprocating Saw) - for cutting out the old window frames 12" recip. saw blade - it needs to be long so you can reach all the way across the frame from the edge of the concrete foundation. Pry Bar - for prying out the frame Hammer - for use with the pry bar and chisel Jigsaw, recip. 
saw, table saw, or the like - for cutting new wood frames Angle Grinder with masonry disc - for cutting away old concrete Cold Chisel - for chipping away old concrete Large caulking gun - for use with construction adhesive Hammer Drill and bit - for driving concrete screws through the new frame Level - for proper alignment of the window Vacuum - for cleaning up sawdust and concrete dust Ventilation fan - for blowing concrete dust out the window. Eye, ear, breathing and hand protection - 'cause this is gonna get messy Measuring Tape Marking Utensil MATERIALS Windows - get nice energy efficient ones made for basement windows (no nailing flange) Pressure treated 2x6 and 1x6 lumber - for making the new frame Construction adhesive - for gluing in the frame and window Silicone caulking - for sealing small gaps Foam backer rod - for filling large gaps Tapcon concrete screws - 3.5" with countersunk head for securing the frame to the foundation Step 35: Installing Windows - Removing the Old Windows My old windows were made entirely of wood, with the frame embedded in the concrete when it was poured. I suppose it would have been possible to pop the old window out, do a bit of trimming on the frame, then put in the new window. But, I wasn't sure what condition the wood was in (side note: perfectly fine) and so I decided to yank everything out. First, I unscrewed the old window panes and set them aside. I haven't decided yet what to do with them. Make sure that you do all this when the weather is warm and clear, since you'll be working inside as well as out. Now for the fun part! With the demolition saw, cut straight through the vertical parts of the frame, close to where they meet the top and bottom sill. You must be patient here, being careful not to cut into the concrete. This is more to prevent damage to the blade than the concrete! Work in the blade until it is flush with the edge of the concrete at both sides of the frame.
To get a better angle and to prevent fatigue, you may have to work from either inside or outside the window. Once you've cut each of the vertical parts of the frame, jam a pry bar in there and try to get them out. This will be difficult. You may even have to cut the frame a third time in the middle so it pries out easier. Eventually, it should come out and leave a relatively smooth concrete surface behind. To remove the top and bottom frame pieces, cut each of them through the middle as above. Get the pry bar in there and slowly wiggle them loose until they pop out. A few pieces of concrete broke loose while I was doing this. Don't worry, you can fix this later if you like. With a vacuum, clean out any debris that's left behind. If your windows were like mine, you'll probably see some ridges left behind where the window frame used to be. They correspond to grooves cut into the frames - working together, I assume, for a better seal between the concrete and wood. These ridges will have to be removed as well. Chip away as much as you can with the chisel and hammer to start - this is much cleaner and faster than using the grinder. Use the grinder only for smoothing out what's left - I found this out the hard way. Hit the area again with the vacuum to clean out all the dust and debris. Step 36: Installing Windows - Dry-fit and Measure the New Frame I advise against installing the new windows directly against the concrete. Instead, we're going to build a new frame out of pressure treated lumber, and set the window into that. The windows I bought were customer-returned units that just happened to fit perfectly. I was lucky. I suggest getting windows made to fit, rather than fitting pre-made windows into the space you've got. The frame will be made of 2x6 pressure treated lumber. In my case, it was just about the right width to fit inside the wells left behind by the old frame.
The lumber will be cut to size, glued in place, then screwed in place with Tapcon screws. I have pity for the next person who tries to replace these windows! Start with the bottom sill. In my case, the width of the 2x6 had to be trimmed by about half an inch to fit. I cut it to the proper length first, then trimmed off the edge with the reciprocating saw (a table saw would be easier to use, if you've got one). Pop it in place to ensure a good fit. Leave it unglued for now, until the rest of the frame is finished. Next do the top of the frame in a similar manner. Cut and trim to fit, and dry-fit the piece. Finally, finish with the side pieces, making sure they are a snug fit against the top and bottom parts of the frame. With all the frame members dry-fit and holding together with friction, measure the new opening. Add about 1/4 inch to 3/8 inch to each dimension (1/8" to 3/16" on all sides) and order a window in that size. Chances are, the only window commonly available will be a double swing-out slider. Be sure to get a nice double glazed energy efficient window if you can. Step 37: Installing Windows - Installing the Window The first step is to permanently attach the frame to the concrete. Remove all the frame members, taking note of where they go. Again, start with the bottom first. Lay a thick bead of construction adhesive around the perimeter of the wood on each of the sides facing concrete. Paste it down. Do the top next - hold it in place with one of the side pieces. Finally, glue down the side pieces, putting glue on the sides facing concrete and the top and bottom frame pieces. As soon as everything is glued in place it should be screwed down. I used two screws for the side pieces and three for the bottom. I didn't put any screws in the top because there was very little to screw onto. Drill straight through the wood and into the concrete using a hammer drill, then drive in the Tapcon screws with a power drill. 
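The clearance guidance in Step 36 above (a total of 1/4" to 3/8" per dimension, or 1/8" to 3/16" on each side, between the frame opening and the window) can be sanity-checked with a quick calculation. This helper and the example sizes are hypothetical; only the clearance range comes from the text.

```python
# Sketch of the clearance check implied in Step 36: the gap between the
# dry-fit frame opening and the window should total roughly 1/4" to 3/8"
# per dimension (1/8" to 3/16" per side). Sizes below are made-up examples.

def clearance_ok(opening_in, window_in, lo=0.25, hi=0.375):
    """True if the total gap in one dimension falls in the suggested range."""
    gap = opening_in - window_in
    return lo <= gap <= hi

# A 30.25" wide opening with a 30" wide window leaves a 1/4" total gap:
print(clearance_ok(30.25, 30.0))   # True
print(clearance_ok(30.25, 29.0))   # False - far too loose to shim neatly
```

Run the check on both the width and the height of the opening before placing the window order.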
Once the glue is dry you can install the windows. I used the same construction adhesive to glue the window in place as I used before. First, dry-fit the window to work out spacing. The fit should be close, but not tight. If the fit is too loose you can insert shims. If it's really loose then you can glue in a 1x6 to fill the gap. The glue will be applied to the frame, and the window will be slid into place. Apply a double bead of glue all around the inside of the frame where the window will be. Hoist the window into place, ensuring that the inside of the window faces into the room (obvious, I know). Then, wiggle it around until it's positioned properly. Immediately wipe off any excess glue that got squished out with a damp rag. The fit with my windows was so tight that I didn't need any shims. However, yours might. Using the shims in the same manner as you would for shimming in a door, center the window in the frame so that it is level and perfectly perpendicular. Use a small 12" level to help get it perfect. Once the glue is dry, look for air gaps. Try to spot light shining through or feel for air movement. Any small gaps can be filled with adhesive, and larger gaps can be filled using a piece of foam backer rod. The last step is to seal up everything with a bead of caulking. Apply the bead where the wood meets concrete, where wood meets wood, and where the window meets the frame. Do both inside the window and out. Use a damp finger to smooth out the caulking for a nice finish. When the caulking is dry, paint the wood frame with weatherproof paint. Step 38: Installing Windows - Build a Window Box Once the framing is complete and the drywall is installed, you will want to install a window box to hide all the insulation, concrete and bare frame that are exposed. In my case, the box ended up being nearly 12 inches deep, thanks to the thickness of the concrete wall, the few inches of foam insulation, and the stud walls on top of that.
The window box should be made of something that won't be damaged by a little bit of water. Stay away from MDF and plywood - use solid wood like pine instead. At my local Home Depot, they had 12" wide, 3/4" thick jointed boards for a very good price. I decided to install each piece on its own, though you may be able to build the window box on the floor or bench and slide it into place. In my case, the top of the window box actually slants up to meet the window frame, making it impossible to slide the box in. Start with the bottom piece. Measure it to fit flush against the window frame, and flush with the edge of the finished (drywalled) wall. It will span the entire width of the window frame. Chances are, you'll be putting stuff in the window box, so use wedges under the wood to make sure the bottom is level front-to-back and side-to-side. When you're satisfied with the fit, nail it in place with finishing nails. If the box will be stained, you may want to glue it instead so there are no nail holes to fill. Do the top next, in a manner similar to the bottom. Measure to fit, so the board is flush with the window frame and with the wall. In my case, with the angle on the top section, I had to precisely measure and cut the edge facing into the room so that trim applied to the window would sit flat. If you're nailing the top piece by hand, it's a good idea to start the nails on the ground. This will make it much easier to attach the board later on. The sides go in last. Measure carefully, since the distance between the top and bottom may be different at the front and back of the window box (this is especially true if the top is on an angle like mine is!) Cut to fit, and dry-fit the sides before driving in any nails. As with the top, I recommend starting all the nails on the ground, then finishing them off with the board held in place. When all four sides are finished, fill the nail holes and any gaps or knots with wood filler. 
Paint the window box with a paint that will withstand moisture - you never know when you'll accidentally leave the window open in a thunder storm! Step 39: Install a Built-In Shelving Unit - Intro This is a mini project which you may or may not want to do. You can modify a wall that's already installed, or you can do it as part of a larger renovation. It was inspired by a project I saw at ReadMade.com. Some walls are a waste of space. Sure, they provide privacy and a place to hang a picture. But they could do more. In this case, I took a 32" wide space and turned it into a multi-level shelving unit that holds hundreds of CDs and DVDs, all the while protruding just an inch into the room. That's right - the shelf is built into the wall! All the tools you need to do this will be scattered around the room if you're in the middle of a larger renovation. If you're modifying an existing wall, here's what you'll need: TOOLS Miter Saw - for cutting the shelf pieces Jigsaw - for cutting the backer panel Power drill - for driving in screws Sanding block or hand held power sander - to give the shelves a nice finish when they're installed Small Carpenter's Square - for precision marking Short Level - for perfectly level shelves Measuring Tape Meter stick (yard stick) CDs and DVDs - for reference measurements MATERIALS 1x6x8 Pine shelf boards - as straight as possible! Be picky when choosing. 4x8 sheet of cheap shelf backing board - flat on one side, rough on the other Construction Screws - 2.5 or 3" long Construction adhesive - for gluing the shelf backing onto the wall studs Assorted dry walling stuff - left over from dry walling the rest of the room. Paint Step 40: Install a Built-In Shelving Unit - Installation The shelf can occupy the space between one or two studs, or a custom width. During construction, I specifically left out a stud, in order to install my shelves. These shelves will be installed after the stud walls are in, but before the drywall is up. 
First you will need to decide on shelf heights. Using your CDs, DVDs and other knick knacks as a reference, work out the heights of the shelves you'll be installing. I decided on four CD-height shelves, and two DVD-height shelves. When measuring, make sure you include the width of the wood and leave a gap for easier removal of the CDs and other media. To make it easier, I cut two short pieces of the shelf board to use as spacers, equivalent to the height of a CD plus the gap. Start at the bottom of the shelf. Place a 2x4 horizontally across the bottom, equivalent to the height of three 2x4s. Make sure it's level. Screw it in place, using construction screws, through the stud and into the end of the horizontal piece. Do the same thing at the top of the shelf. The wall behind my shelf is rough concrete, which needed to be covered with something smooth and even. If the wall behind your shelf is just the back of a piece of drywall, then you can leave it as-is. Otherwise, read on. Measure and cut out a piece from the shelf backing board, to fit from ceiling to floor and across the span of the studs supporting the shelf. Slide it behind the stud wall, between the studs and concrete. Glue it into place against the studs, using wedges to hold the backing sheet against the studs. Now, cut a piece of shelf board to fit between the studs, and fit it in place on top of the bottom horizontal stud. Using the carpenter's square, scribe a line from the center of the shelf board around to the other side of the stud. Drive two screws through the stud along the line and into the shelf, on both ends. Using the spacers, work your way up the shelf, cutting shelf boards to fit and screwing them into place. Make sure the shelf is level both side to side and front to back, using the short bubble level. Be careful when turning in the screws with the power drill - the shelf boards are thin. 
The screws should be driven perfectly perpendicular to the stud, and at slightly lower speed than normal. Step 41: Install a Built-In Shelving Unit - Finishing The next step in installing the built-in shelf is completed during the drywalling stage. Drywall will be wrapped around the studs supporting the shelf, but not against the back. Drywall right up to the edges of the shelf boards normally. Try to get the finished edge of the drywall to end here. Using scraps of drywall left over from drywalling the rest of the room, cut pieces to fit between the shelf boards, with a tight fit from top to bottom and reaching from the back board to the front edge of the drywall mounted on the walls. Screw each small piece of drywall into place with drywall screws, or glue them in place with construction glue. There will probably be a gap between the pieces of drywall where they meet, which will have to be filled in. Start by taping around the shelves with masking tape. The tape will prevent any drywall compound from getting on the wood. If the gaps are small, simply cover them with drywall tape, and drywall over the corner as you would any other corner. If the gap is large use some Durabond 90 first. Note that no metal corner brace is needed here, as there is little chance of the corner being damaged by falling furniture. Keep the masking tape in place until the walls have been painted. Once the walls are painted you can finish the shelves. Start by sanding the shelves with a power sander or a sanding block. Then, paint the shelves with stain or latex paint. Step 42: Flooring - Intro Well! If you've made it this far, then hopefully your spouse is breathing a little easier. ;) Your new room is practically ready to be used, but one major thing remains - flooring. As attractive as a concrete floor may be, it's a good idea to cover it with something a bit more attractive. I chose to use glueless snap-together bamboo laminate flooring.
It's durable, easy to install, and looks cool, too! Laminate is pretty easy to install. Here's what you will need: TOOLS Jigsaw - for cutting laminate planks Meter stick - for measuring Carpenter's square - for accurate 90 degree cuts Mallet - for driving planks together Scissors - for cutting underlayment Utility knife - also for cutting underlayment Marking implement - a pencil works well MATERIALS Laminate Flooring - Estimate the square footage of the room, plus 10% extra Underlayment - I found a nifty 3-in-1 product that's perfect for this task Step 43: Flooring - Install the Vapor Barrier As a minimum, you will need to install a layer of vapor barrier between the concrete and the flooring. In fact, it's even specified in the warranty for the flooring I bought! Without vapor barrier, moisture can seep through the concrete and into the flooring, causing it to swell and grow mold. If you're using regular vapor barrier, the process is similar to what I'm about to describe, except you'll need to use vapor barrier tape to seal the seams. Step 44: Flooring - Installing Laminate Flooring I chose Pergo Presto laminate flooring for one of the rooms in my renovation. They have instructions posted online, so I'll just summarize what I did and let you read the manufacturer's instructions. The flooring that you buy might require a different process, but hopefully the pictures and tips will be helpful. Once you've bought your hardwood or laminate flooring it's important to let it acclimatize to the environment in your house. That means it'll basically sit around for a week before you can do anything with it. This laminate is the standard click-together glueless floating floor. That's right - it doesn't get attached to the floor or anything else in any way! It's simply held down at the edges by the baseboard (and in the middle by gravity, I suppose). I guess that means you can take it with you if you move out...
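The materials list above says to estimate the room's square footage plus 10% extra. A minimal sketch of that estimate, rounded up to whole boxes - note the 20 sq ft per box coverage figure is a made-up example, not from the article:

```python
import math

# Rough laminate-order estimate per the materials list: room area plus ~10%
# for cuts and waste, rounded up to whole boxes. The default coverage of
# 20 sq ft per box is a hypothetical figure; check your product's label.

def boxes_to_buy(room_sqft, sqft_per_box=20.0, waste=0.10):
    return math.ceil(room_sqft * (1 + waste) / sqft_per_box)

# e.g. a 12' x 15' room: 180 sq ft + 10% = 198 sq ft -> 10 boxes
print(boxes_to_buy(12 * 15))
```

Buying the extra 10% up front saves a second trip when a plank chips during cutting.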
Use a carpenter's square to draw cut lines on the laminate. Precision is important here, and it will minimize the number of scrapped boards. I found the best way to cut the laminate is with a jigsaw - it cuts like a hot knife through butter. You can also use a table saw, if you have one. Use a blade with a high tooth count for a cleaner cut. Also be sure to support the board on both sides when you're cutting. Don't let it break off or fall on the floor, because pieces may chip off, rendering the plank useless (or at least smaller, since you'll have to cut the chipped part off). If you're cutting in the same room that you're installing the floor in, make sure that the sawdust doesn't sneak in under the vapor barrier - that's food for mold. Plan ahead and think carefully about how to do doors and other tricky bits. The doorway was probably the hardest part for me, since the flooring had to go under the door frame a bit. Step 45: Baseboards and Trim - Intro Welcome to the final step of your basement renovation! With the light at the end of the tunnel gleaming a brilliant yellow-white and with fresh air giving life to your dust-clogged nostrils, surely every hammer blow from this point on will be a drum beat of victory. Step 46: Baseboards and Trim - Door and Window Trim Installation Here is one task where you really have to take your time to get it right. It also helps tremendously to properly calibrate your saw for accurate cuts. When joining edges of trim or baseboards, you have to take a lot into account, including the angle at which the boards will meet, whether the walls are exactly 90 degrees to each other, and how baseboards may meet up with trim (say, at the door). Knowing those angles, you can set your saw to cut the trim perfectly - reducing repair work later and boosting your own pride! In my basement I used two types of trim, baseboards and window trim (used on the door and the windows). The two are very alike, though the baseboard is slightly wider.
The baseboards also come in 14 foot lengths compared to the window trim at 8 feet - make sure you have a way to transport it, and a place to store it! It's a good idea to start with the door trim, since it's usually the easiest to install, and the baseboards butt up against it. Measure the opening with a tape measure and transfer the measurement to the trim. In my case there is no room for trim along the top of the door frame so I just did a 90 degree cut on both ends. If you have enough room for trim along the top of the frame, you'll need to cut the trim at 45 degrees at the mark. When cutting, you may want to cut the trim a little long (say, a few millimeters) and trim it down later if you need to. It's better to cut too long, than too short! Fit the trim in place over the door frame, overlapping all but about a millimeter. Check to make sure that the corner cut at 45 degrees matches perfectly with the corner in the door frame. When you're satisfied with the fit, nail the trim in place with a nail gun. Make sure the nail penetrates the trim and a sufficient amount of drywall or wood - the trim should not pull off easily. If you're worried, you can glue the trim on first, but this shouldn't be necessary. Window trim goes on in a similar way. Start at the bottom, cutting the trim at 45 degree angles on both sides so that the trim fits together like a frame. Again, measure very carefully, ensuring that the cut edges of the trim line up perfectly with the corners of the window box. When everything fits, nail it in place with a nail gun. As with the door frame, make sure the trim is firmly fastened to the wall. When the trim is installed, you can cover the nail holes and any gaps with wood filler. When the wood filler dries, sand it smooth with sandpaper. Then, paint the trim in whatever colour you choose. Step 47: Baseboards and Trim - Baseboard Installation Wow, nearly finished! 
The baseboards are pretty easy to install - the trickiest part is cutting them accurately. With a compound miter saw, there are two ways to cut the baseboard, horizontally or vertically. To cut vertically, the saw must have a large enough blade to cut the entire height of the baseboard. If it does, just set the saw to cut at 45 degrees (the same setting you used for the window trim). The baseboard is laid flat against the fence. To cut horizontally, angle the blade on its side using the adjustment screw on the back of the saw. The baseboard is then laid flat against the table of the saw. If your saw is like mine, it only adjusts to 45 degrees in one direction, so you'll have to flip around the saw or the workpiece for some cuts. With very long spans of baseboard, it's easier to mis-measure or lose accuracy. Measure the length of the wall with a tape measure and transfer that measurement to the baseboard, but add a few millimeters to the length. Cut as accurately as possible - when cutting horizontally, it's easy to mess up the cut by making it too short. Experiment with a piece of scrap board if you need to. Hopefully, the few millimeters of length that you added will result in the baseboard piece bowing outwards when you try to fit it in. Just shave a few millimeters off the end of the baseboard until it slides in place perfectly. I recommend starting with the longest piece first, just in case you screw up - that way, you can still re-use it on a shorter length of wall. When you're satisfied with the fit of the baseboard, nail it in place with a nail gun. I put a few globs of glue along the length of the baseboard for added strength. When the baseboards are in place, you can go back along each piece and fill the nail holes with wood putty. Any gaps between boards can also be filled with putty. Large gaps may need more than one application of putty. When the putty is dry, sand it smooth with fine grit sandpaper. And now, for the last big step! 
As with the window trim, paint the baseboards with a layer or two of interior latex in your desired colour. Be careful not to get any paint on the floor or walls - wipe off any specks of paint immediately with a rag. Step 48: Finishing Up Well I guess that's just about it! There are a few more small things here and there to take care of, but you should be in move-in condition by this point. Here are a few of them: Reinstall Doors: If you removed the door to install flooring and trim, put it back on again. Fix damage to painted walls: You may have taken a few nicks out of the paint while moving long pieces of baseboard. Grab a small brush and fix them using leftover paint. Install shelves: If you're installing wall-mounted shelves, do that now. Mount pictures and other wall ornaments: This may create a small amount of dust, so do it now before vacuuming everything. A final clean-up: Remove any tools that are lying around. Clean them off if they are covered in sawdust/paint/drywall dust/wood filler/glue/etc. Vacuum and dust: Vacuum the floors, dust the shelves and walls, clean the windows. Step 49: Conclusion and Thanks! Well this concludes the longest Instructable I've ever written. I hope it will be of value to you. I certainly learned a lot during this half-year process! I highly encourage you to try a renovation like this yourself - just take your time, read a lot, watch a lot of DIY TV shows, and attend seminars at your local Big Box home improvement stores. If you screw up, remember that pretty much anything can be fixed. I'd like to thank all of the friends and family who helped me along the way: John, for helping me move materials, for showing me how to drywall, for help with framing and with pretty much everything else! I don't think I could have done it without you. Gunther, for helping move materials, for doing most of the painting, and for other help along the way. Oh, and for cash and gift cards to help pay for materials.
Joel, for smashing down the concrete wall and helping remove all the debris. It was a dirty job, and I am thankful for your help! Joanna, for your constant encouragement and help with framing, for watching the baby while I toiled away, and for approving many tool purchases! To the rest of my family, thanks for your constant encouragement, praise, and advice. It was all helpful and is appreciated. And finally to God, for (hopefully) being cool with me staying home from church to work away on this. I'll try to make it up to you someday. :) 75 Discussions 3 years ago Hi..what laser level were you using? I've been looking for one that wont cost a fortune. 4 years ago Extremely helpful! Hat off to you! 6 years ago on Introduction Looks nice! I was wondering how your floor is holding up after a few years. I've been doing some research on waterproofing my basement and I have not seen anyone else use that type of vapor barrier...just other (and much more expensive) options that I am hoping to avoid. Have you had any problems with moisture seeping through or causing problems underneath the flooring? Reply 6 years ago on Introduction No moisture or mold issues as far as I can tell. But, we have a relatively dry basement. Before starting on a basement flooring project I'd suggest doing a moisture test - tape a 1 square meter piece of plastic to the floor and let it sit for a week or two. If no water condenses under the plastic in that time, then you're good to go! Reply 6 years ago on Introduction Good to know, thanks! I just found out during the inspection that the house I'm buying has already been waterproofed, so at least that part is already taken care of :) Will still do a moisture test regardless. 8 years ago on Step 5 Most likely, those are spaced like that (the staggered pieces between the studs) as a fire block. This is code in some areas. It prevents, or at the very least slows the spread of fire up the interior of the wall. 
Homes built without these in an exterior wall may find their attic on fire before they even know there is a fire in the wall. 8 years ago on Introduction Think I'd be wearing steel toe cap boots during the demolition. Nails through the instep are not nice.... Reply 8 years ago on Introduction Agreed. And for most of the renovation, I did! Many of the pictures taken here were, *ahem*, posed a little... ;) Reply 8 years ago on Introduction Beautiful work, and a stunningly complete 'ible Well done. Steve Reply 8 years ago on Introduction Thanks. I have plans to do a bathroom this summer. Hopefully I'll find the time to do it! 8 years ago on Step 9 Did you have to use anything to hold the foamboards in place while the PL300 was curing or are they light enough not to require that? (The user manual that comes with it says you need to use some type of a fastener to keep things under pressure until cured.) Also, how long did it take in your case to dry? Reply 8 years ago on Step 9 Nothing in particular. I stuck the boards down, pulled them off for a minute or two, then stuck them back on as per instructions. After that, they stuck all by themselves quite nicely. The foam boards aren't heavy at all, and PL300 is very, very sticky. I'm not sure how long they took to dry. Unused glue squeezed from the tube and left in the open air took about a day to get rock hard. 8 years ago on Step 8 What would happen if/when the adhesive eventually fails? Would you rely on the studs holding the foam and things remaining airtight? Reply 8 years ago on Step 8 Presumably the adhesive is intended to be permanent. But, in the unlikely event that it does fail, the second layer of insulation pasted on top, plus the tape, plus the studs on top should hold it in place. 8 years ago on Step 10 I love Roxul too, I'm planning to use it for my basement reno as well. But dude, Roxul isn't fiberglass - it's mineral fiber, so it doesn't make you as itchy as fiberglass.
You still need the breathing / eye protection like you describe. :D Reply 8 years ago on Step 10 Yeah, someone informed me of my mistake on a different step. Whoops! But, it doesn't really matter which you use; either fiberglass or mineral fiber (aka rock wool) will work here. 8 years ago on Introduction Have you noticed a significant change in your energy bill from the fiberglass insulation? I'm debating whether it's necessary with the foam panels... I am on a very limited budget... Reply 8 years ago on Introduction Hard to say how much more of a difference the fiberglass makes, since it was installed at the same time as the foam. The top of the wall feels *slightly* warmer than the bottom, but that may be because heat rises. 9 years ago on Introduction amazing instructable!! Reply 9 years ago on Introduction Thanks!
https://www.instructables.com/id/Epic-Basement-Renovation/
NAME mimedefang-filter - Configuration file for MIMEDefang mail filter. DESCRIPTION mimedefang-filter is a Perl fragment that controls how mimedefang.pl disposes of various parts of a MIME message. In addition, it contains some global variable settings that affect the operation of mimedefang.pl. CALLING SEQUENCE Incoming messages are scanned as follows: 1) A temporary working directory is created. It is made the current working directory and the e-mail message is split into parts in this directory. Each part is represented internally as an instance of MIME::Entity. 2) If the file /etc/mail/mimedefang-filter.pl defines a Perl function called filter_begin, it is called with a single argument consisting of a MIME::Entity representing the parsed e-mail message. Any return value is ignored. 3) For each leaf part of the mail message, filter is called with four arguments: entity, a MIME::Entity object; fname, the suggested filename taken from the MIME Content-Disposition header; ext, the file extension, and type, the MIME Content-Type value. For each non-leaf part of the mail message, filter_multipart is called with the same four arguments as filter. A non-leaf part of a message is a part that contains nested parts. Such a part has no useful body, but you should still perform filename checks to check for viruses that use malformed MIME to masquerade as non-leaf parts (like message/rfc822). In general, any action you perform in filter_multipart applies to the part itself and any contained parts. Note that both filter and filter_multipart are optional. If you do not define them, a default function that simply accepts each part is used. 4) After all parts have been processed, the function filter_end is called if it has been defined. It is passed a single argument consisting of the (possibly modified) MIME::Entity object representing the message about to be delivered. DISPOSITION mimedefang.pl examines each part of the MIME message and chooses a disposition for that part. 
(A disposition is selected by calling one of the following functions from filter and then immediately returning.) Available dispositions are:

action_accept - The part is passed through unchanged. If no disposition function is returned, this is the default.

action_accept_with_warning - The part is passed through unchanged, but a warning is added to the mail message.

action_drop - The part is deleted without any notification to the recipients.

action_drop_with_warning - The part is deleted and a warning is added to the mail message.

action_replace_with_warning - The part is deleted and instead replaced with a text message.

action_quarantine - The part is deleted and a warning is added to the mail message. In addition, a copy of the part is saved on the mail server in the directory /var/spool/MIMEDefang and a notification is sent to the MIMEDefang administrator.

action_bounce - The entire e-mail message is rejected and an error returned to the sender. The intended recipients are not notified. Note that in spite of the name, MIMEDefang does not generate and e-mail a failure notification. Rather, it causes the SMTP server to return a 5XX SMTP failure code.

action_discard - The entire e-mail message is discarded silently. Neither the sender nor the intended recipients are notified.

CONTROLLING RELAYING

You can define a function called filter_relay in your filter. This lets you reject SMTP connection attempts early on in the SMTP dialog, rather than waiting until the whole message has been sent. Note that for this check to take place, you must use the -r flag with mimedefang. filter_relay is passed two arguments: $hostip is the IP address of the relay host (for example, "127.0.0.1"), and $hostname is the host name if known (for example, "localhost.localdomain"). If the host name could not be determined, $hostname is $hostip enclosed in square brackets. (That is, ("$hostname" eq "[$hostip]") will be true.) filter_relay must return a two-element list: ($code, $msg).
$msg specifies the text message to use for the SMTP reply, but because of limitations in the Milter API, this message is for documentation purposes only---you cannot set the text of the SMTP message returned to the SMTP client from filter_relay. $code is a literal string, and can have one of the following values:

'REJECT' if the connection should be rejected.
'CONTINUE' if the connection should be accepted.
'TEMPFAIL' if a temporary failure code should be returned.
'DISCARD' if the message should be accepted and silently discarded.
'ACCEPT_AND_NO_MORE_FILTERING' if the connection should be accepted and no further filtering done.

Earlier versions of MIMEDefang used -1 for TEMPFAIL, 0 for REJECT and 1 for CONTINUE. These values still work, but are deprecated.

In the case of REJECT or TEMPFAIL, $msg specifies the text part of the SMTP reply. $msg must not contain newlines. For example, if you wish to reject connection attempts from any machine in the spammer.com domain, you could use this function:

    sub filter_relay {
        my ($ip, $name) = @_;
        if ($name =~ /spammer\.com$/) {
            return ('REJECT', "Sorry; spammer.com is blacklisted");
        }
        return ('CONTINUE', "ok");
    }

FILTERING BY HELO

You can define a function called filter_helo in your filter. This lets you reject connections after the HELO/EHLO SMTP command. Note that for this function to be called, you must use the -H flag with mimedefang.

filter_helo is passed three arguments: $ip and $name are the IP address and name of the sending relay, as in filter_relay. The third argument, $helo, is the argument supplied in the HELO/EHLO command.

filter_helo must return a two-to-five element list: ($code, $msg, $smtp_code, $smtp_dsn, $delay). $code is a return code with the same meaning as the $code returned from filter_relay. $msg specifies the text message to use for the SMTP reply. If $smtp_code and $smtp_dsn are supplied, they become the SMTP numerical reply code and the enhanced status delivery code (DSN code).
If they are not supplied, sensible defaults are used. $delay specifies a delay in seconds; the C milter code will sleep for $delay seconds before returning the reply to Sendmail. $delay defaults to zero. (Note that the delay is implemented in the Milter C code; if you specify a delay of 30 seconds, that doesn't mean a Perl slave is tied up for the duration of the delay. The delay only costs one Milter thread.)

FILTERING BY SENDER

You can define a function called filter_sender in your filter. This lets you reject messages from certain senders, rather than waiting until the whole message has been sent. Note that for this check to take place, you must use the -s flag with mimedefang.

filter_sender is passed four arguments: $sender is the envelope e-mail address of the sender (for example, "<dfs@roaringpenguin.com>"). The address may or may not be surrounded by angle brackets. $ip and $name are the IP address and host name of the SMTP relay. Finally, $helo is the argument to the SMTP "HELO" command.

Inside filter_sender, you can access any ESMTP arguments (such as "SIZE=12345") in the array @ESMTPArgs. Each ESMTP argument occupies one array element.

filter_sender must return a two-to-five element list, with the same meaning as the return value from filter_helo. For example, if you wish to reject messages from spammer@badguy.com, you could use this function:

    sub filter_sender {
        my ($sender, $ip, $hostname, $helo) = @_;
        if ($sender =~ /^<?spammer\@badguy\.com>?$/i) {
            return ('REJECT', 'Sorry; spammer@badguy.com is blacklisted.');
        }
        return ('CONTINUE', "ok");
    }

As another example, some spammers identify their own machine as your machine in the SMTP "HELO" command.
This function rejects a machine claiming to be in the "roaringpenguin.com" domain unless it really is a Roaring Penguin machine:

    sub filter_sender {
        my($sender, $ip, $hostname, $helo) = @_;
        if ($helo =~ /roaringpenguin\.com/i) {
            if ($ip ne "127.0.0.1" and
                $ip ne "216.191.236.23" and
                $ip ne "216.191.236.30") {
                return('REJECT', "Go away... $ip is not in roaringpenguin.com");
            }
        }
        return ('CONTINUE', "ok");
    }

As a third example, you may wish to prevent spoofs by requiring SMTP authentication when e-mail is sent from certain addresses. This function rejects mail from "king@example.com" unless the connecting user properly authenticated as "elvisp". Note that this needs access to the %SendmailMacros global, which is not available in filter_sender until after a call to read_commands_file:

    sub filter_sender {
        my($sender, $ip, $hostname, $helo) = @_;
        read_commands_file();
        ### Notice: this assumes The King uses authentication without a realm!
        if ($sender =~ /^<?king\@example\.com>?$/i and
            $SendmailMacros{auth_authen} ne "elvisp") {
            return('REJECT', "Faking mail from the king is not allowed.");
        }
        return ('CONTINUE', "ok");
    }

FILTERING BY RECIPIENT

You can define a function called filter_recipient in your filter. This lets you reject messages to certain recipients, rather than waiting until the whole message has been sent. Note that for this check to take place, you must use the -t flag with mimedefang.

filter_recipient is passed nine arguments: $recipient is the envelope address of the recipient and $sender is the envelope e-mail address of the sender (for example, "<dfs@roaringpenguin.com>"). The addresses may or may not be surrounded by angle brackets. $ip and $name are the IP address and host name of the SMTP relay. $first is the envelope address of the first recipient for this message, and $helo is the argument to the SMTP "HELO" command.
The last three arguments, $rcpt_mailer, $rcpt_host and $rcpt_addr, are the Sendmail mailer, host and address triple for the recipient address. For example, for local recipients, $rcpt_mailer is likely to be "local", while for remote recipients, it is likely to be "esmtp".

Inside filter_recipient, you can access any ESMTP arguments (such as "NOTIFY=never") in the array @ESMTPArgs. Each ESMTP argument occupies one array element.

filter_recipient must return a two-to-five element list whose interpretation is the same as for filter_sender. Note, however, that if filter_recipient returns 'DISCARD', then the entire message for all recipients is discarded. (It doesn't really make sense, but that's how Milter works.)

For example, if you wish to reject messages from spammer@badguy.com, unless they are to postmaster@mydomain.com, you could use this function:

    sub filter_recipient {
        my ($recipient, $sender, $ip, $hostname, $first, $helo,
            $rcpt_mailer, $rcpt_host, $rcpt_addr) = @_;
        if ($sender =~ /^<?spammer\@badguy\.com>?$/i) {
            if ($recipient =~ /^<?postmaster\@mydomain\.com>?$/i) {
                return ('CONTINUE', "ok");
            }
            return ('REJECT', 'Sorry; spammer@badguy.com is blacklisted.');
        }
        return ('CONTINUE', "ok");
    }

INITIALIZATION AND CLEANUP

Just before a slave begins processing messages, mimedefang.pl calls the function filter_initialize (if it is defined) with no arguments. By the time filter_initialize is called, all the other initialization (such as setting up the syslog facility and priority) has been done.

If you are not using an embedded Perl interpreter, then performing an action inside filter_initialize is practically the same as performing it directly in the filter file, outside any function definition. However, if you are using an embedded Perl interpreter, then anything you call directly from outside a function definition is executed once only, in the parent process. Anything in filter_initialize is executed once per slave.
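The initialization and cleanup hooks described above can be sketched as follows. This is a hedged illustration, not part of MIMEDefang itself: the log-file path is hypothetical, and the point is simply that per-slave resources belong in filter_initialize and are released in filter_cleanup:

```perl
# Hypothetical example: one log file handle per slave.
my $log_fh;

sub filter_initialize {
    # Open per-slave resources here, not at the top level of the filter,
    # so each slave gets its own descriptor.
    open($log_fh, '>>', '/tmp/mimedefang-custom.log');
}

sub filter_cleanup {
    # Release the resource; the return value becomes the slave's exit status.
    close($log_fh) if $log_fh;
    return 0;
}
```

A database handle opened via DBI would follow the same pattern: connect in filter_initialize, disconnect in filter_cleanup.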
If you use any code that opens a descriptor (for example, a connection to a database server), you must run that code inside filter_initialize and not directly from the filter, because the multiplexor closes all open descriptors when it activates a new slave.

When a slave is about to exit, mimedefang.pl calls the function filter_cleanup (if it is defined) with no arguments. This function can do whatever cleanup you like, such as closing file descriptors and cleaning up long-lived slave resources. The return value from filter_cleanup becomes the slave's exit status. If filter_cleanup takes longer than 10 seconds to run, the slave is sent a SIGTERM signal. If that doesn't kill it (because you're catching signals, perhaps), then a further 10 seconds later, the slave is sent a SIGKILL signal.

CONTROLLING PARSING

If you define a function called filter_create_parser taking no arguments, then mimedefang.pl will call it to create a MIME::Parser object for parsing mail messages. filter_create_parser is expected to return a MIME::Parser object (or an instance of a class derived from MIME::Parser). You can use filter_create_parser to change the behavior of the MIME::Parser used by mimedefang.pl. If you do not define a filter_create_parser function, then a built-in version equivalent to this is used:

    sub filter_create_parser () {
        my $parser = MIME::Parser->new();
        $parser->extract_nested_messages(1);
        $parser->extract_uuencode(1);
        $parser->output_to_core(0);
        $parser->tmp_to_core(0);
        return $parser;
    }

EXTENDING MIMEDEFANG

The man page for mimedefang-protocol(7) lists commands that are passed to slaves in server mode (see "SERVER COMMANDS".) You can define a function called filter_unknown_cmd to extend the set of commands your filter can handle. If you define filter_unknown_cmd, it is passed the unknown command as a single argument. It should return a list of values as follows: The first element of the list must be either "ok" or "error:" (with the colon.)
The remaining arguments are percent-encoded. All the resulting pieces are joined together with a single space between them, and the resulting string is passed back as the reply to the multiplexor. For example, the following function will make your filter reply to a "PING" command with "PONG":

    sub filter_unknown_cmd ($) {
        my($cmd) = @_;
        if ($cmd eq "PING") {
            return("ok", "PONG");
        }
        return("error:", "Unknown command");
    }

You can test this filter by typing the following as root:

    md-mx-ctrl PING

The response should be:

    ok PONG

If you extend the set of commands using filter_unknown_cmd, you should make all your commands start with an upper-case letter to avoid clashes with future built-in commands.

REJECTING UNKNOWN USERS EARLY

A very common mail setup is to have a MIMEDefang machine act as an SMTP proxy, accepting and scanning mail and then relaying it to the real mail server. Unfortunately, this means that the MIMEDefang machine cannot know if a local address is valid or not, and will forward all mail for the appropriate domains. If a mail comes in for an unknown user, the MIMEDefang machine will be forced to generate a bounce message when it tries to relay the mail. It's often desirable to have the MIMEDefang host reply with a "User unknown" SMTP response directly. While this can be done by copying the list of local users to the MIMEDefang machine, MIMEDefang has a built-in function called md_check_against_smtp_server for querying another relay host:

    md_check_against_smtp_server($sender, $recip, $helo, $server, $port)

This function connects to the SMTP server $server and pretends to send mail from $sender to $recip. The return value is always a two-element array. If the RCPT TO: command succeeds, the return value is ("CONTINUE", "OK"). If the RCPT fails with a permanent failure, the return value is ("REJECT", $msg), where $msg is the message from the SMTP server. Any temporary failures, connection errors, etc. result in a return value of ("TEMPFAIL", $msg).
The optional argument $port specifies the TCP port to connect to. If it is not supplied, then the default SMTP port of 25 is used.

Suppose the machine filter.domain.tld is filtering mail destined for the real mail server mail.domain.tld. You could have a filter_recipient function like this:

    sub filter_recipient {
        my($recip, $sender, $ip, $host, $first, $helo,
           $rcpt_mailer, $rcpt_host, $rcpt_addr) = @_;
        return md_check_against_smtp_server($sender, $recip,
                                            "filter.domain.tld",
                                            "mail.domain.tld");
    }

For each RCPT TO: command, MIMEDefang opens an SMTP connection to mail.domain.tld and checks if the command would succeed. Please note that you should only use md_check_against_smtp_server if your mail server responds with a failure code for nonexistent users at the RCPT TO: level. Also, this function may impose too much overhead if you receive a lot of e-mail, and it will generate lots of useless log entries on the real mail server (because of all the RCPT TO: probes.) It may also significantly increase the load on the real mail server.

GLOBAL VARIABLES YOU CAN SET

The following Perl global variables should be set in mimedefang-filter:

$AdminAddress
The e-mail address of the MIMEDefang administrator.

$DaemonAddress
The e-mail address from which MIMEDefang-originated notifications come.

$AddWarningsInline
If this variable is set to 0, then all MIMEDefang warnings (such as those created by action_quarantine or action_drop_with_warning) are collected together and added in a separate MIME part called WARNING.TXT. If the variable is set to 1, then the warnings are added directly in the first text/plain and text/html parts of the message. If the message does not contain any text/plain or text/html parts, then a WARNING.TXT MIME part is added as before.

$MaxMIMEParts
A message containing many MIME parts can cause MIME::Tools to consume large amounts of memory and bring your system to its knees.
If you set $MaxMIMEParts to a positive number, then MIME parsing is terminated for messages with more than that many parts, and the message is bounced. In this case, none of your filter functions is called. By default, $MaxMIMEParts is set to -1, meaning there is no limit on the number of parts in a message. Note that in order to use this variable, you must install the Roaring Penguin patched version of MIME::Tools, version 5.411a-RP-Patched-02 or newer.

$Stupidity{"NoMultipleInlines"}
Set this to 1 if your e-mail client is too stupid to display multiple MIME parts in-line. In this case, a nasty hack causes the first part of the original message to appear as an attachment if warnings are issued. Mail clients that are not this stupid are Netscape Communicator and Pine. On the other hand, Microsoft Exchange and Microsoft Outlook are indeed this stupid. Perhaps users of those clients should switch.

The following global variables may optionally be set. If they are not set, sensible defaults are used:

$AddApparentlyToForSpamAssassin
By default, MIMEDefang tries to pass SpamAssassin a message that looks exactly like one it would receive via procmail. This means adding a Received: header, adding a Message-ID header if necessary, and adding a Return-Path: header. If you set $AddApparentlyToForSpamAssassin to 1, then MIMEDefang also adds an Apparently-To: header with all the envelope recipients before passing the message to SpamAssassin. This lets SpamAssassin detect possibly whitelisted recipient addresses. The default value for $AddApparentlyToForSpamAssassin is 0.

$SyslogFacility
This specifies the logging facility used by mimedefang.pl. By default, it is set to "mail", but you can set it to other possibilities. See the openlog(3) man page for details. You should name facilities as all-lowercase without the leading "LOG_". That is, use "local3", not "LOG_LOCAL3".

$WarningLocation (default 0)
If set to 0 (the default), non-inline warnings are placed first.
If you want the warning at the end of the e-mail, set $WarningLocation to -1.

$DaemonName (default "MIMEDefang")
The full name used when MIMEDefang sends out notifications.

$AdminName (default "MIMEDefang Administrator")
The full name of the MIMEDefang administrator.

$SALocalTestsOnly (default 1)
If set to 1, SpamAssassin calls will use only local tests. This is the default and recommended setting. This disables Received, RBL and Razor tests in an all-or-nothing fashion. To use Razor, this MUST be set to 0. You can add 'skip_rbl_checks 1' to your SpamAssassin config file if you need to.

$NotifySenderSubject (default "MIMEDefang Notification")
The subject used when e-mail is sent out by action_notify_sender(). If you set this, you should set it each time you call action_notify_sender() to ensure consistency.

$NotifyAdministratorSubject (default "MIMEDefang Notification")
The subject used when e-mail is sent out by action_notify_administrator(). If you set this, you should set it each time you call action_notify_administrator() to ensure consistency.

$QuarantineSubject (default "MIMEDefang Quarantine Report")
The subject used when a quarantine notice is sent to the administrator. If you set this, you should set it each time you call action_quarantine() or action_quarantine_entire_message().

$NotifyNoPreamble (default 0)
Normally, notifications sent by action_notify_sender() have a preamble warning about message modifications. If you do not want this, set $NotifyNoPreamble to 1.

$CSSHost (default 127.0.0.1:7777:local)
Host and port for the Symantec CarrierScan Server virus scanner. This takes the form ip_addr:port:local_or_nonlocal. The ip_addr and port are the host and port on which CarrierScan Server is listening. If you want to scan local files, append :local to force the use of the AVSCANLOCAL command. If the CarrierScan Server is on another host, append :nonlocal to force the file contents to be sent to the scanner over the socket.
$SophieSock (default /var/spool/MIMEDefang/sophie)
Socket used for Sophie daemon calls within message_contains_virus_sophie and entity_contains_virus_sophie, unless a socket is provided by the calling routine.

$ClamdSock (default /var/spool/MIMEDefang/clamd.sock)
Socket used for clamd daemon calls within message_contains_virus_clamd and entity_contains_virus_clamd, unless a socket is provided by the calling routine.

$TrophieSock (default /var/spool/MIMEDefang/trophie)
Socket used for Trophie daemon calls within message_contains_virus_trophie and entity_contains_virus_trophie, unless a socket is provided by the calling routine.

FILTER

The heart of mimedefang-filter is the filter procedure. See the examples that came with MIMEDefang to learn to write a filter. The filter is called with the following arguments:

$entity
The MIME::Entity object. (See the MIME::Tools Perl module documentation.)

$fname
The suggested attachment filename, or "" if none was supplied.

$ext
The file extension (all characters from the rightmost period to the end of the filename.)

$type
The MIME type (for example, "text/plain".)

The filename is derived as follows:

o First, if the Content-Disposition header has a "filename" field, it is used.

o Otherwise, if the Content-Type header has a "name" field, it is used.

o Otherwise, the Content-Description header value is used.

Note that the truly paranoid will check all three fields for matches. The functions re_match and re_match_ext perform regular expression matches on all three of the fields named above, and return 1 if any field matches. See the sample filters for details. The calling sequence is:

    re_match($entity, "regexp")
    re_match_ext($entity, "regexp")

re_match returns true if any of the fields matches the regexp without regard to case. re_match_ext returns true if the extension in any field matches. An extension is defined as the last dot in a name and all remaining characters.
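As a standalone illustration of the extension rule just stated, the following snippet (extension_of is a hypothetical helper for this example, not a MIMEDefang function) extracts an extension the same way: the last dot and everything after it:

```perl
# Hypothetical helper mirroring re_match_ext's notion of an extension.
sub extension_of {
    my ($fname) = @_;
    # The extension is the last dot and all remaining characters, or "" if none.
    return ($fname =~ /(\.[^.]*)$/) ? $1 : "";
}

print extension_of("invoice.pdf.exe"), "\n";   # prints ".exe"
print extension_of("archive.tar.gz"), "\n";    # prints ".gz"
```

Note that "invoice.pdf.exe" yields ".exe", which is why matching on the extension catches double-extension tricks used to disguise executables.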
A third function called re_match_in_zip_directory will look inside zip files and return true if any of the file names inside the zip archive match the regular expression. Call it like this:

    my $bh = $entity->bodyhandle();
    my $path = (defined($bh)) ? $bh->path() : undef;
    if (defined($path) and re_match_in_zip_directory($path, "regexp")) {
        # Take action...
    }

You should not call re_match_in_zip_directory unless you know that the entity is a zip file attachment.

GLOBAL VARIABLES SET BY MIMEDEFANG.PL

The following global variables are set by mimedefang.pl and are available for use in your filter. All of these variables are always available to filter_begin, filter, filter_multipart and filter_end. In addition, some of them are available in filter_relay, filter_sender or filter_recipient. If this is the case, it will be noted below.

%Features
This hash lets you determine at run-time whether certain functionality is available. This hash is available at all times, assuming the detect_and_load_perl_modules() function has been called. The defined features are:

$Features{"SpamAssassin"} is 1 if SpamAssassin 1.6 or better is installed; 0 otherwise.

$Features{"HTML::Parser"} is 1 if HTML::Parser is installed; 0 otherwise.

$Features{"Virus:FPROTD"} is currently always 0. Set it to 1 in your filter file if you have F-Risk's FPROTD scanner earlier than version 6.

$Features{"Virus:FPROTD6"} is currently always 0. Set it to 1 in your filter file if you have version 6 of F-Risk's FPROTD scanner.

$Features{"Virus:SymantecCSS"} is currently always 0. Set it to 1 in your filter file if you have the Symantec CarrierScan Server virus scanner.

$Features{"Virus:NAI"} is the full path to NAI uvscan if it is installed; 0 if it is not.

$Features{"Virus:BDC"} is the full path to Bitdefender bdc if it is installed; 0 if it is not.

$Features{"Virus:NVCC"} is the full path to Norman Virus Control nvcc if it is installed; 0 if it is not.
$Features{"Virus:HBEDV"} is the full path to H+BEDV AntiVir if it is installed; 0 if it is not.

$Features{"Virus:VEXIRA"} is the full path to Central Command Vexira if it is installed; 0 if it is not.

$Features{"Virus:SOPHOS"} is the full path to Sophos sweep if it is installed; 0 if it is not.

$Features{"Virus:SAVSCAN"} is the full path to Sophos savscan if it is installed; 0 if it is not.

$Features{"Virus:CLAMAV"} is the full path to Clam AV clamscan if it is installed; 0 if it is not.

$Features{"Virus:AVP"} is the full path to AVP AvpLinux if it is installed; 0 if it is not.

$Features{"Virus:AVP5"} is the full path to Kaspersky "aveclient" if it is installed; 0 if it is not.

$Features{"Virus:CSAV"} is the full path to Command csav if it is installed; 0 if it is not.

$Features{"Virus:FSAV"} is the full path to F-Secure fsav if it is installed; 0 if it is not.

$Features{"Virus:FPROT"} is the full path to F-Risk f-prot if it is installed; 0 if it is not.

$Features{"Virus:FPSCAN"} is the full path to F-Risk fpscan if it is installed; 0 if it is not.

$Features{"Virus:SOPHIE"} is the full path to Sophie if it is installed; 0 if it is not.

$Features{"Virus:CLAMD"} is the full path to clamd if it is installed; 0 if it is not.

$Features{"Virus:TROPHIE"} is the full path to Trophie if it is installed; 0 if it is not.

$Features{"Virus:NOD32"} is the full path to ESET NOD32 nod32cli if it is installed; 0 if it is not.

NOTE: Perl-module based features such as SpamAssassin are determined at runtime and may change as these modules are added and removed. Most Virus features are predetermined at the time of configuration and do not adapt to runtime availability unless changed by the filter rules.

$CWD
This variable holds the working directory for the current message. During filter processing, mimedefang.pl chdir's into this directory before calling any of the filter_ functions. Note that this variable is set correctly in filter_sender and filter_recipient, but not in filter_relay.
$SuspiciousCharsInHeaders
If this variable is true, then mimedefang has discovered suspicious characters in message headers. This might be an exploit for bugs in the MIME-parsing routines of some badly-written mail user agents (e.g. Microsoft Outlook.) You should always drop such messages.

$SuspiciousCharsInBody
If this variable is true, then mimedefang has discovered suspicious characters in the message body. This might be an exploit for bugs in the MIME-parsing routines of some badly-written mail user agents (e.g. Microsoft Outlook.) You should always drop such messages.

$RelayHostname
The host name of the relay. This is the name of the host that is attempting to send e-mail to your host. May be "undef" if the host name could not be determined. This variable is available in filter_relay, filter_sender and filter_recipient in addition to the body filtering functions.

$RelayAddr
The IP address of the sending relay (as a string consisting of four dot-separated decimal numbers.) One potential use of $RelayAddr is to limit mailing to certain lists to people within your organization. This variable is available in filter_relay, filter_sender and filter_recipient in addition to the body filtering functions.

$Helo
The argument given to the SMTP "HELO" command. This variable is available in filter_sender and filter_recipient, but not in filter_relay.

$Subject
The contents of the "Subject:" header.

$Sender
The sender of the e-mail. This variable is set in filter_sender and filter_recipient.

@Recipients
A list of the recipients. In filter_recipient, it is set to the single recipient currently under consideration. Or, after calling read_commands_file within filter_recipient, the current recipient under consideration is in the final position of the array, at $Recipients[-1], while any previous (and accepted) recipients are at the beginning of the array, that is, in @Recipients[0 .. $#Recipients-1].

$MessageID
The contents of the "Message-ID:" header if one is present.
Otherwise, it contains the string "NOQUEUE".

$QueueID
The Sendmail queue identifier, if it could be determined. Otherwise, it contains the string "NOQUEUE". This variable is set correctly in filter_sender and filter_recipient, but it is not available in filter_relay.

$MsgID
Set to $QueueID if the queue ID could be determined; otherwise, set to $MessageID. This identifier should be used in logging, because it matches the identifier used by Sendmail to log messages. Note that this variable is set correctly in filter_sender and filter_recipient, but it is not available in filter_relay.

$VirusScannerMessages
Each time a virus-scanning function is called, messages (if any) from the virus scanner are accumulated in this variable. You can use it in filter_end to formulate a notification (if you wish.)

$VirusName
If a virus-scanning function found a virus, this variable will hold the virus name (if it could be determined.)

$SASpamTester
If defined, this is the configured Mail::SpamAssassin object used for mail tests. It may be initialized with a call to spam_assassin_init, which also returns it.

%SendmailMacros
This hash contains the values of some Sendmail macros. The hash elements exist only for macros defined by Sendmail. See the Sendmail documentation for the meanings of the macros. By default, mimedefang passes the values of the following macros: ${daemon_name}, ${if_name}, ${if_addr}, $j, $_, $i, ${tls_version}, ${cipher}, ${cipher_bits}, ${cert_subject}, ${cert_issuer}, ${auth_type}, ${auth_authen}, ${auth_ssf}, ${auth_author}, ${mail_mailer}, ${mail_host} and ${mail_addr}. If any macro is not set or not passed to milter, it will be unavailable.

To access the value of a macro, use:

    $SendmailMacros{"macro_name"}

Do not place curly brackets around the macro name. This variable is available in filter_sender and filter_recipient after a call to read_commands_file.

@SenderESMTPArgs
This array contains all the ESMTP arguments supplied in the MAIL FROM: command.
For example:

    sub print_sender_esmtp_args {
        foreach (@SenderESMTPArgs) {
            print STDERR "Sender ESMTP arg: $_\n";
        }
    }

%RecipientESMTPArgs
This hash contains all the ESMTP arguments supplied in each RCPT TO: command. For example:

    sub print_recip_esmtp_args {
        foreach my $recip (@Recipients) {
            foreach (@{$RecipientESMTPArgs{$recip}}) {
                print STDERR "Recip ESMTP arg for $recip: $_\n";
            }
        }
    }

%RecipientMailers
This hash contains the Sendmail "mailer-host-address" triple for each recipient. Here's an example of how to use it:

    sub print_mailer_info {
        my($recip, $mailer, $host, $addr);
        foreach $recip (@Recipients) {
            $mailer = ${$RecipientMailers{$recip}}[0];
            $host   = ${$RecipientMailers{$recip}}[1];
            $addr   = ${$RecipientMailers{$recip}}[2];
            print STDERR "$recip: mailer=$mailer, host=$host, addr=$addr\n";
        }
    }

In filter_recipient, this variable by default only contains information on the recipient currently under investigation. Information on all recipients is available after calling read_commands_file.

ACTIONS

When the filter procedure decides how to dispose of a part, it should call one or more action_ subroutines. The action subroutines are:

action_accept()
Accept the part.

action_rebuild()
Rebuild the mail body, even if mimedefang thinks no changes were made. Normally, mimedefang does not alter a message if no changes were made. action_rebuild may be used if you make changes to entities directly (by manipulating the MIME::Head, for example.) Unless you call action_rebuild, mimedefang will be unaware of the changes. Note that all the built-in action_ routines that change a message implicitly call action_rebuild.

action_add_header($hdr, $val)
Add a header to the message. This can be used in filter_begin or filter_end. The $hdr parameter is the header name without the colon, and $val is the header value.
For example, to add the header:

    X-MyHeader: A nice piece of text

use:

    action_add_header("X-MyHeader", "A nice piece of text");

action_change_header($hdr, $val, $index)
Changes an existing header in the message. This can be used in filter_begin or filter_end. The $hdr parameter is the header name without the colon, and $val is the header value. If the header does not exist, then a header with the given name and value is added. The $index parameter is optional; it defaults to 1. If you supply it, then the $index'th occurrence of the header is changed, if there is more than one header with the same name. (This is common with the Received: header, for example.)

action_insert_header($hdr, $val, $index)
Adds a header to the message at the specified position $index. A position of 0 specifies that the header should be prepended before existing headers. This can be used in filter_begin or filter_end. The $hdr parameter is the header name without the colon, and $val is the header value.

action_delete_header($hdr, $index)
Deletes an existing header in the message. This can be used in filter_begin or filter_end. The $hdr parameter is the header name without the colon. The $index parameter is optional; it defaults to 1. If you supply it, then the $index'th occurrence of the header is deleted, if there is more than one header with the same name.

action_delete_all_headers($hdr)
Deletes all headers with the specified name. This can be used in filter_begin or filter_end. The $hdr parameter is the header name without the colon.

action_drop()
Drop the part. If called from filter_multipart, drops all contained parts also.

action_drop_with_warning($msg)
Drop the part, but add the warning $msg to the e-mail message. If called from filter_multipart, drops all contained parts also.

action_accept_with_warning($msg)
Accept the part, but add the warning $msg to the e-mail message.

action_replace_with_warning($msg)
Drop the part and replace it with a text part $msg.
If called from filter_multipart, drops all contained parts also.

action_replace_with_url($entity, $doc_root, $base_url, $msg, [$cd_data, $salt])
Drop the part, but save it in a unique location under $doc_root. The part is replaced with the text message $msg. The string "_URL_" in $msg is replaced with $base_url/something, which can be used to retrieve the message. You should not use this function in filter_multipart. This action is intended for stripping large parts out of the message and replacing them with a link on a Web server. Here's how you would use it in filter():

    $size = (stat($entity->bodyhandle->path))[7];
    if ($size > 1000000) {
        return action_replace_with_url($entity,
            "/home/httpd/html/mail_parts", "",
            "The attachment was larger than 1,000,000 bytes.\n" .
            "It was removed, but may be accessed at this URL:\n\n" .
            "\t_URL_\n");
    }

This example moves attachments greater than 1,000,000 bytes into /home/httpd/html/mail_parts and replaces them with a link. The directory should be accessible via a Web server. The generated name is created by performing a SHA1 hash of the part and adding the extension to the ASCII-HEX representation of the hash. If many different e-mails are sent containing an identical large part, only one copy of the part is stored, regardless of the number of senders or recipients.

For privacy reasons, you must turn off Web server indexing in the directory in which you place mail parts, or anyone will be able to read them. If indexing is disabled, an attacker would have to guess the SHA1 hash of a part in order to read it.

Optionally, a fifth argument can supply data to be saved into a hidden dot filename based on the generated name. This data can then be read in on the fly by a CGI script or mod_perl module before serving the file to a web client, and used to add information to the response, such as Content-Disposition data. A sixth optional argument, $salt, is mixed in to the SHA1 hash.
This salt can be any string and should be kept confidential. The salt is designed to prevent people from guessing whether or not a particular attachment has been received on your server by altering the SHA1 hash calculation.

action_defang($entity, $name, $fname, $type)

Accept the part, but change its name to $name, its suggested filename to $fname and its MIME type to $type. If $name or $fname are "", then mimedefang.pl generates generic names. Do not use this action in filter_multipart.

If you use action_defang, you must define a subroutine called defang_warning in your filter. This routine takes two arguments: $oldfname (the original name of an attachment) and $fname (the defanged version.) It should return a message telling the user what happened. For example:

    sub defang_warning {
        my($oldfname, $fname) = @_;
        return "The attachment '$oldfname' was renamed to '$fname'\n";
    }

action_external_filter($entity, $cmd)

Run an external UNIX command $cmd. This command must read the part from the file ./FILTERINPUT and leave the result in ./FILTEROUTPUT. If the command executes successfully, returns 1, otherwise 0. You can test the return value and call another action if the filter failed. Do not use this action in filter_multipart.

action_quarantine($entity, $msg)

Drop and quarantine the part, but add the warning $msg to the e-mail message.

action_quarantine_entire_message($msg)

Quarantines the entire message in a quarantine directory on the mail server, but does not otherwise affect disposition of the message. If $msg is non-empty, it is included in any administrator notification.

action_sm_quarantine($reason)

Quarantines a message in the Sendmail mail queue using the QUARANTINE facility of Sendmail 8.13. Consult the Sendmail documentation for details about this facility. If you use action_sm_quarantine with a version of Sendmail that lacks the QUARANTINE facility, mimedefang will log an error message and not quarantine the message.
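As a sketch of the fallback pattern described under action_external_filter above, the following filter() fragment runs a part through an external command and drops the part if the command fails. The command path and warning text are hypothetical, and the fragment assumes the standard mimedefang.pl environment:

```perl
# Hypothetical example: pass each HTML part through an external
# sanitizer command; if it fails, drop the part with a warning.
# /usr/local/bin/sanitize-html is an assumed command that reads
# ./FILTERINPUT and writes ./FILTEROUTPUT.
sub filter {
    my($entity, $fname, $ext, $type) = @_;

    if ($type eq "text/html") {
        unless (action_external_filter($entity, "/usr/local/bin/sanitize-html")) {
            # The external filter failed -- fall back to another action
            action_drop_with_warning(
                "An HTML part could not be sanitized and was removed.\n");
        }
    }
}
```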
action_bounce($reply, $code, $dsn)

Reject the entire e-mail message with an SMTP failure code, and the one-line error message $reply. If the optional $code and $dsn arguments are supplied, they specify the numerical SMTP reply code and the extended status code (DSN code). If the codes you supply do not make sense for a bounce, they are replaced with "554" and "5.7.1" respectively.

action_bounce merely makes a note that the message is to be bounced; remaining parts are still processed. If action_bounce is called for more than one part, the mail is bounced with the message in the final call to action_bounce. You can profitably call action_quarantine followed by action_bounce if you want to keep a copy of the offending part. Note that the message is not bounced immediately; rather, remaining parts are processed and the message is bounced after all parts have been processed.

Note that despite its name, action_bounce does not generate a "bounce message". It merely rejects the message with an SMTP failure code.

WARNING: action_bounce() may cause the sending relay to generate spurious bounce messages if the sender address is faked. This is a particular problem with viruses. However, we believe that on balance, it's better to bounce a virus than to silently discard it. It's almost never a good idea to hide a problem.

action_tempfail($msg, $code, $dsn)

Cause an SMTP "temporary failure" code to be returned, so the sending mail relay requeues the message and tries again later. The message $msg is included with the temporary failure code. If the optional $code and $dsn arguments are supplied, they specify the numerical SMTP reply code and the extended status code (DSN code). If the codes you supply do not make sense for a temporary failure, they are replaced with "450" and "4.7.1" respectively.

action_discard()

Silently discard the message, notifying nobody. You can profitably call action_quarantine followed by action_discard if you want to keep a copy of the offending part.
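A minimal sketch of the quarantine-then-reject pattern mentioned above. The extension test is a hypothetical policy, not part of MIMEDefang; the fragment assumes the standard mimedefang.pl environment:

```perl
# Hypothetical policy: quarantine and reject executable attachments.
sub filter {
    my($entity, $fname, $ext, $type) = @_;

    if ($ext =~ /^\.(exe|scr|pif)$/i) {
        # Keep a copy of the offending part for later inspection...
        action_quarantine($entity,
            "An executable attachment was quarantined.\n");
        # ...then reject the whole message with a 554/5.7.1 failure.
        action_bounce("Executable attachments are not accepted here",
                      554, "5.7.1");
    }
}
```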
Note that the message is not discarded immediately; rather, remaining parts are processed and the message is discarded after all parts have been processed.

action_notify_sender($message)

This action sends an e-mail back to the original sender with the indicated message. You may call another action after this one.

NOTE: Viruses often fake the sender address. For that reason, if a virus-scanner has detected a virus, action_notify_sender is disabled and will simply log an error message if you try to use it.

action_notify_administrator($message)

This action e-mails the MIMEDefang administrator the supplied message. You may call another action after this one; action_notify_administrator does not affect mail processing.

append_text_boilerplate($entity, $boilerplate, $all)

This action should only be called from filter_end. It appends the text "\n$boilerplate\n" to the first text/plain part (if $all is 0) or to all text/plain parts (if $all is 1).

append_html_boilerplate($entity, $boilerplate, $all)

This action should only be called from filter_end. It adds the text "\n$boilerplate\n" to the first text/html part (if $all is 0) or to all text/html parts (if $all is 1). This function tries to be smart about inserting the boilerplate; it uses HTML::Parser to detect closing tags and inserts the boilerplate before the </body> tag if there is one, or before the </html> tag if there is no </body>. If there is no </body> or </html> tag, it appends the boilerplate to the end of the part. Do not use append_html_boilerplate unless you have installed the HTML::Parser Perl module.
Here is an example illustrating how to use the boilerplate functions:

    sub filter_end {
        my($entity) = @_;
        append_text_boilerplate($entity, "Lame text disclaimer", 0);
        append_html_boilerplate($entity, "<em>Lame</em> HTML disclaimer", 0);
    }

action_add_part($entity, $type, $encoding, $data, $fname, $disposition [, $offset])

This action should only be called from the filter_end routine. It adds a new part to the message, converting the original message to multipart if necessary. The function returns the part so that additional MIME attributes may be set on it. Here's an example:

    sub filter_end {
        my($entity) = @_;
        action_add_part($entity, "text/plain", "-suggest",
                        "This e-mail does not represent " .
                        "the official policy of FuBar, Inc.\n",
                        "disclaimer.txt", "inline");
    }

The $entity parameter must be the argument passed in to filter_end. The $offset parameter is optional; if omitted, it defaults to -1, which adds the new part at the end. See the MIME::Entity man page and the add_part member function for the meaning of $offset.

Note that action_add_part tries to be more intelligent than simply calling $entity->add_part. The decision process is as follows:

o If the top-level entity is multipart/mixed, then the part is simply added.

o Otherwise, a new top-level multipart/mixed container is generated, and the original top-level entity is made the first part of the multipart/mixed container. The new part is then added to the multipart/mixed container.

USEFUL ROUTINES

mimedefang.pl includes some useful functions you can call from your filter:

detect_and_load_perl_modules()

Unless you really know what you're doing, this function must be called first thing in your filter file. It causes mimedefang.pl to detect and load Perl modules such as Mail::SpamAssassin, Net::DNS, etc., and to populate the %Features hash.

send_quarantine_notifications()

This function should be called from filter_end.
If any parts were quarantined, a quarantine notification is sent to the MIMEDefang administrator. Please note that if you do not call send_quarantine_notifications, then no quarantine notifications are sent.

get_quarantine_dir()

This function returns the full path name of the quarantine directory. If you have not yet quarantined any parts of the message, a quarantine directory is created and its pathname returned.

change_sender($sender)

This function changes the envelope sender to $sender. It can only be called from filter_begin or any later function. Please note that this function is only supported with Sendmail/Milter 8.14.0 or newer. It has no effect if you're running older versions.

add_recipient($recip)

This function adds $recip to the list of envelope recipients. A copy of the message (after any modifications by MIMEDefang) will be sent to $recip in addition to the original recipients. Note that add_recipient does not modify the @Recipients array; it just makes a note to Sendmail to add the recipient.

delete_recipient($recip)

This function deletes $recip from the list of recipients. That person will not receive a copy of the mail. $recip should exactly match an entry in the @Recipients array for delete_recipient() to work. Note that delete_recipient does not modify the @Recipients array; it just makes a note to Sendmail to delete the recipient.

resend_message($recip1, $recip2, ...) or resend_message(@recips)

This function immediately resends the original, unmodified mail message to each of the named recipients. The sender's address is preserved. Be very careful when using this function, because it resends the original message, which may contain undesired attachments. Also, you should not call this function from filter(), because it resends the message each time it is called. This may result in multiple copies being sent if you are not careful. Call it from filter_begin() or filter_end() to be safe. The function returns true on success, or false if it fails.
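A hedged sketch of redirecting one mailbox using the recipient functions above. The addresses are hypothetical, and the angle-bracket form of the @Recipients entries is an assumption about how the addresses arrive from Sendmail:

```perl
# Hypothetical example: divert mail addressed to a departed
# employee to a shared archive mailbox instead.
sub filter_begin {
    foreach my $recip (@Recipients) {
        if ($recip eq '<olduser@example.com>') {
            # Stop delivery to the original mailbox
            # (must exactly match the @Recipients entry)...
            delete_recipient($recip);
            # ...and deliver the (possibly modified) message here instead.
            add_recipient('<archive@example.com>');
        }
    }
}
```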
Note that the resend_message function delivers the mail in deferred mode (using Sendmail's "-odd" flag.) You must run a client-submission queue processor if you use Sendmail 8.12. We recommend executing this command as part of the Sendmail startup sequence:

    sendmail -Ac -q5m

remove_redundant_html_parts($entity)

This function should only be called from filter_end. It removes redundant HTML parts from the message. It works by deleting any part of type text/html from the message if (1) it is a sub-part of a multipart/alternative part, and (2) there is another part of type text/plain under the multipart/alternative part.

replace_entire_message($entity)

This function can only be called from filter_end. It replaces the entire message with $entity, a MIME::Entity object that you have constructed. You can use any of the MIME::Tools functions to construct the entity.

read_commands_file()

This function should only be called from filter_sender and filter_recipient. This will read the COMMANDS file (as described in mimedefang-protocol(7)), and will fill or update the following global variables: $Sender, @Recipients, %RecipientMailers, $RelayAddr, $RealRelayAddr, $RelayHostname, $RealRelayHostname, $QueueID, $Helo, %SendmailMacros. If you do not call read_commands_file, then the only information available in filter_sender and filter_recipient is that which is passed as an argument to the function.

stream_by_domain()

Do not use this function unless you have Sendmail 8.12 and locally-submitted e-mail is submitted using SMTP. This function should only be called at the very beginning of filter_begin(), like this:

    sub filter_begin {
        if (stream_by_domain()) { return; }
        # Rest of filter_begin
    }

stream_by_domain() looks at all the recipients of the message, and if they belong to the same domain (e.g., joe@domain.com, jane@domain.com and sue@domain.com), it returns 0 and sets the global variable $Domain to the domain (domain.com in this example.)
If users are in different domains, stream_by_domain() resends the message (once to each domain) and returns 1. For example, if the original recipients are joe@abc.net, jane@xyz.net and sue@abc.net, the original message is resent twice: one copy to joe@abc.net and sue@abc.net, and another copy to jane@xyz.net. Also, any subsequent scanning is canceled (filter() and filter_end() will not be called for the original message) and the message is silently discarded. Note that the resent messages are delivered in deferred mode, as described under resend_message; if you do not run a client-submission queue processor, stream_by_domain will not work.

Using stream_by_domain allows you to customize your filter rules for each domain. If you use the function as described above, you can do this in your filter routine:

    sub filter {
        my($entity, $fname, $ext, $type) = @_;
        if ($Domain eq "abc.com") {
            # Filter actions for abc.com
        } elsif ($Domain eq "xyz.com") {
            # Filter actions for xyz.com
        } else {
            # Default filter actions
        }
    }

You cannot rely on $Domain being set unless you have called stream_by_domain().

stream_by_recipient()

Do not use this function unless you have Sendmail 8.12 and locally-submitted e-mail is submitted using SMTP. This function should only be called at the very beginning of filter_begin(), like this:

    sub filter_begin {
        if (stream_by_recipient()) { return; }
        # Rest of filter_begin
    }

If there is more than one recipient, stream_by_recipient() resends the message once to each recipient. That way, you can customize your filter rules on a per-recipient basis. This may increase the load on your mail server considerably. Also, a "recipient" is determined before alias expansion. So "all@mydomain.com" is considered a single recipient, even if Sendmail delivers to a list. As with stream_by_domain, the resent messages are delivered in deferred mode; if you do not run a client-submission queue processor, stream_by_recipient() will not work.

stream_by_recipient() allows you to customize your filter rules for each recipient in a manner similar to stream_by_domain().

LOGGING

md_graphdefang_log_enable($facility, $enum_recips)

Enables the md_graphdefang_log function (described next). The function logs to syslog using the specified facility.
If you omit $facility, it defaults to 'mail'. If you do not call md_graphdefang_log_enable in your filter, then any calls to md_graphdefang_log simply do nothing. If you supply $enum_recips as 1, then a line of logging is output for each recipient of a mail message. If it is zero, then only a single line is output for each message. If you omit $enum_recips, it defaults to 1.

md_graphdefang_log($event, $v1, $v2)

Logs an event with up to two optional additional parameters. The log message has a specific format useful for graphing tools; the message looks like this:

    MDLOG,msgid,event,v1,v2,sender,recipient,subj

"MDLOG" is literal text. "msgid" is the Sendmail queue identifier. "event" is the event name, and "v1" and "v2" are the additional parameters. "sender" is the sender's e-mail address. "recipient" is the recipient's e-mail address, and "subj" is the message subject. If a message has more than one recipient, md_graphdefang_log may log an event message for each recipient, depending on how you called md_graphdefang_log_enable.

Note that md_graphdefang_log should not be used in filter_relay, filter_sender or filter_recipient. The global variables it relies on are not valid in that context. If you want to log general text strings, do not use md_graphdefang_log. Instead, use md_syslog (described next).

md_syslog($level, $msg)

Logs the message $msg to syslog, using level $level. The level is a literal string, and should be one of 'err', 'debug', 'warning', 'emerg', 'crit', 'notice' or 'info'. (See syslog(3) for details.) Note that md_syslog does not perform %-substitutions like syslog(3) does. Depending on your Perl installation, md_syslog boils down to a call to Unix::Syslog::syslog or Sys::Syslog::syslog. See the Unix::Syslog or Sys::Syslog man pages for more details.

md_openlog($tag, $facility)

Sets the tag used in syslog messages to $tag, and sends the logs to the $facility facility.
If you do not call md_openlog before you call md_syslog, then it is called implicitly with $tag set to mimedefang.pl and $facility set to mail.

RBL LOOKUP FUNCTIONS

mimedefang.pl includes the following functions for looking up IP addresses in DNS-based real-time blacklists. Note that the "relay_is_blacklisted" functions are deprecated and may be removed in a future release. Instead, you should use the module Net::DNSBL::Client from CPAN.

relay_is_blacklisted($relay, $domain)

This checks a DNS-based real-time spam blacklist, and returns true if the relay host is blacklisted, or false otherwise. (In fact, the return value is whatever the blacklist returns as a resolved hostname, such as "127.0.0.4") Note that relay_is_blacklisted uses the built-in gethostbyname function; this is usually quite inefficient and does not permit you to set a timeout on the lookup. Instead, we recommend using one of the other DNS lookup functions described in this section. (Note, though, that the other functions require the Perl Net::DNS module, whereas relay_is_blacklisted does not.) Here's an example of how to use relay_is_blacklisted:

    if (relay_is_blacklisted($RelayAddr, "rbl.spamhaus.org")) {
        action_add_header("X-Blacklist-Warning",
                          "Relay $RelayAddr is blacklisted by Spamhaus");
    }

relay_is_blacklisted_multi($relay, $timeout, $answers_wanted, [$domain1, $domain2, ...], $res)

This function is similar to relay_is_blacklisted, except that it takes a timeout argument (specified in seconds) and an array of domains to check. The function checks all domains in parallel, and is guaranteed to return in $timeout seconds. (Actually, it may take up to one second longer.) The parameters are:

$relay -- the IP address you want to look up.

$timeout -- a timeout in seconds after which the function should return.

$answers_wanted -- the maximum number of positive answers you care about.
For example, if you're looking up an address in 10 different RBLs, but are going to bounce it if it is on four or more, you can set $answers_wanted to 4, and the function returns as soon as four "hits" are discovered. If you set $answers_wanted to zero, then the function does not return early.

[$domain1, $domain2, ...] -- a reference to an array of strings, where each string is an RBL domain.

$res -- a Net::DNS::Resolver object. This argument is optional; if you do not supply it, then relay_is_blacklisted_multi constructs its own resolver.

The return value is a reference to a hash; the keys of the hash are the original domains, and the corresponding values are either SERVFAIL, NXDOMAIN, or a list of IP addresses in dotted-quad notation. Here's an example:

    $ans = relay_is_blacklisted_multi($RelayAddr, 8, 0,
                                      ["sbl.spamhaus.org", "relays.ordb.org"]);
    foreach $domain (keys(%$ans)) {
        $r = $ans->{$domain};
        if (ref($r) eq "ARRAY") {
            # It's an array -- it IS listed in RBL
            print STDERR "Lookup in $domain yields [ ";
            foreach $addr (@$r) {
                print STDERR $addr . " ";
            }
            print STDERR "]\n";
        } else {
            # It is NOT listed in RBL
            print STDERR "Lookup in $domain yields " . $ans->{$domain} . "\n";
        }
    }

You should compare each of $ans->{$domain} to "SERVFAIL" and "NXDOMAIN" to see if the relay is not listed. Any other return value will be an array of IP addresses indicating that the relay is listed. Any lookup that does not succeed within $timeout seconds has the corresponding return value set to SERVFAIL.

relay_is_blacklisted_multi_list($relay, $timeout, $answers_wanted, [$domain1, $domain2, ...], $res)

This function is similar to relay_is_blacklisted_multi except that the return value is simply an array of RBL domains in which the relay was listed.
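As a short sketch of relay_is_blacklisted_multi_list, the following fragment rejects a message when the relay appears on two or more lists. The RBL domain names are illustrative only, and the fragment assumes the Net::DNS module and the standard mimedefang.pl environment:

```perl
# Sketch: bounce if the relay is on at least two blacklists.
# Timeout of 8 seconds; stop early after 2 positive answers.
my @listed = relay_is_blacklisted_multi_list(
    $RelayAddr, 8, 2,
    ["zen.spamhaus.org", "bl.example.net"]);
if (scalar(@listed) >= 2) {
    action_bounce("Relay $RelayAddr is blacklisted by " .
                  join(", ", @listed));
}
```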
relay_is_blacklisted_multi_count($relay, $timeout, $answers_wanted, [$domain1, $domain2, ...], $res)

This function is similar to relay_is_blacklisted_multi except that the return value is an integer specifying the number of domains on which the relay was blacklisted.

md_get_bogus_mx_hosts($domain)

This function is not really an RBL lookup. What it does is look up all the MX records for the specified domain, and return a list of "bogus" IP addresses found amongst the MX records. A "bogus" IP address is an IP address in a private network (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16), the loopback network (127.0.0.0/8), link-local for auto-DHCP (169.254.0.0/16), IPv4 multicast (224.0.0.0/4) or reserved (240.0.0.0/4). Here's how you might use the function in filter_sender:

    sub filter_sender {
        my ($sender, $ip, $hostname, $helo) = @_;
        if ($sender =~ /@([^>]+)/) {
            my $domain = $1;
            my @bogushosts = md_get_bogus_mx_hosts($domain);
            if (scalar(@bogushosts)) {
                return('REJECT',
                       "Domain $domain contains bogus MX record(s) " .
                       join(', ', @bogushosts));
            }
        }
        return ('CONTINUE', 'ok');
    }

TEST FUNCTIONS

mimedefang.pl includes some "test" functions:

md_version()

Returns the version of MIMEDefang as a string (for example, "2.71").

message_rejected()

Returns true if any of action_tempfail, action_bounce or action_discard have been called for this message; returns false otherwise.

If you have the Mail::SpamAssassin Perl module installed, you may call any of the spam_assassin_* functions. They should only be called from filter_begin or filter_end because they operate on the entire message at once. Most functions use an optionally provided config file. If no config file is provided, mimedefang.pl will look for one of four default SpamAssassin preference files. The first of the following found will be used:

o /etc/sa-mimedefang.cf
o /etc/mail/sa-mimedefang.cf
o /etc/spamassassin/local.cf
o /etc/spamassassin.cf

Important Note: MIMEDefang does not permit SpamAssassin to modify messages.
If you want to tag spam messages with special headers or alter the subject line, you must use MIMEDefang functions to do it. Setting SpamAssassin configuration options to alter messages will not work.

spam_assassin_is_spam([ $config_file ])

Determine if the current message is SPAM/UCE as determined by SpamAssassin. Compares the score of the message against the threshold score (see below) and returns true if it is. Uses spam_assassin_check below.

spam_assassin_check([ $config_file ])

This function returns a four-element list of the form ($hits, $required, $tests, $report). $hits is the "score" given to the message by SpamAssassin (a higher score means more likely SPAM). $required is the number of hits required before SpamAssassin concludes that the message is SPAM. $tests is a comma-separated list of SpamAssassin test names, and $report is text detailing which tests triggered and their point score. This gives you insight into why SpamAssassin concluded that the message is SPAM. Uses spam_assassin_status below.

spam_assassin_status([ $config_file ])

This function returns a Mail::SpamAssassin::PerMsgStatus object. Read the SpamAssassin documentation for details about this object. You are responsible for calling the finish method when you are done with it. Uses spam_assassin_init and spam_assassin_mail below.

spam_assassin_init([ $config_file ])

This function returns the new global Mail::SpamAssassin object with the specified or default config (outlined above). If the global object is already defined, it is returned as-is -- this function does not change config files! The object can be used to perform other SpamAssassin-related functions.

spam_assassin_mail()

This function returns a Mail::SpamAssassin::NoMailAudit object with the current email message contained in it. It may be used to perform other SpamAssassin-related functions.

md_copy_orig_msg_to_work_dir()

Normally, virus-scanners are passed only the unpacked, decoded parts of a MIME message.
If you want to pass the original, undecoded message in as well, call md_copy_orig_msg_to_work_dir prior to calling message_contains_virus.

md_copy_orig_msg_to_work_dir_as_mbox_file()

Normally, virus-scanners are passed only the unpacked, decoded parts of a MIME message. If you want to pass the original, undecoded message in as a UNIX-style "mbox" file, call md_copy_orig_msg_to_work_dir_as_mbox_file prior to calling message_contains_virus. The only difference between this function and md_copy_orig_msg_to_work_dir is that this function prepends a "From_" line to make the message look like a UNIX-style mbox file. This is required for some virus scanners (such as Clam AntiVirus) to recognize the file as an e-mail message.

message_contains_virus()

This function runs every installed virus-scanner and returns the scanner results. The function should be called in list context; the return value is a three-element list ($code, $category, $action). $code is the actual return code from the virus scanner. $category is a string categorizing the return code:

"ok" - no viruses detected.
"not-installed" - indicated virus scanner is not installed.
"cannot-execute" - for some reason, the scanner could not be executed.
"virus" - a virus was found.
"suspicious" - a "suspicious" file was found.
"interrupted" - scanning was interrupted.
"swerr" - an internal scanner software error occurred.

$action is a string containing the recommended action:

"ok" - allow the message through unmolested.
"quarantine" - a virus was detected; quarantine it.
"tempfail" - something went wrong; tempfail the message.
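A hedged sketch of acting on the three-element return value described above. The notification wording is illustrative, and the fragment assumes the standard mimedefang.pl environment with at least one supported virus scanner installed:

```perl
# Sketch: scan the whole message once in filter_begin and follow
# the recommended $action.
sub filter_begin {
    my($code, $category, $action) = message_contains_virus();

    if ($action eq "quarantine") {
        # A virus was found: keep a copy, then silently discard.
        action_quarantine_entire_message(
            "Virus found (scanner code $code, category $category)");
        action_discard();
    } elsif ($action eq "tempfail") {
        # Scanner trouble: ask the sending relay to retry later.
        action_tempfail("Virus scanner temporarily unavailable");
    }
    # $action eq "ok": let the message through.
}
```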
message_contains_virus_trend()
message_contains_virus_nai()
message_contains_virus_bdc()
message_contains_virus_nvcc()
message_contains_virus_csav()
message_contains_virus_fsav()
message_contains_virus_hbedv()
message_contains_virus_vexira()
message_contains_virus_sophos()
message_contains_virus_clamav()
message_contains_virus_avp()
message_contains_virus_avp5()
message_contains_virus_fprot()
message_contains_virus_fpscan()
message_contains_virus_fprotd()
message_contains_virus_fprotd_v6()
message_contains_virus_nod32()

These functions should be called in list context. They use the indicated anti-virus software to scan the message for viruses. These functions are intended for use in filter_begin() to make an initial scan of the e-mail message. The supported virus scanners are:

nai - NAI "uvscan"
bdc - Bitdefender
csav - Command Anti-Virus
fsav - F-Secure Anti-Virus
hbedv - H+BEDV "AntiVir"
vexira - Vexira
sophos - Sophos AntiVirus
avp - Kaspersky AVP and aveclient (AVP5)
clamav - Clam AntiVirus
f-prot - F-RISK F-PROT
nod32cli - ESET NOD32

message_contains_virus_carrier_scan([$host])

Connects to the specified host:port:local_or_nonlocal (default $CSSHost), where the Symantec CarrierScan Server daemon is expected to be listening. Return values are the same as the other message_contains_virus functions.

message_contains_virus_sophie([$sophie_sock])

Connects to the specified socket (default $SophieSock), where the Sophie daemon is expected to be listening. Return values are the same as the other message_contains_virus functions.

message_contains_virus_clamd([$clamd_sock])

Connects to the specified socket (default $ClamdSock), where the clamd daemon is expected to be listening. Return values are the same as the other message_contains_virus functions.

message_contains_virus_trophie([$trophie_sock])

Connects to the specified socket (default $TrophieSock), where the Trophie daemon is expected to be listening.
Return values are the same as the other message_contains_virus functions.

entity_contains_virus($entity)

This function runs the specified MIME::Entity through every installed virus-scanner and returns the scanner results. The return values are the same as for message_contains_virus().

entity_contains_virus_trend($entity)
entity_contains_virus_nai($entity)
entity_contains_virus_bdc($entity)
entity_contains_virus_nvcc($entity)
entity_contains_virus_csav($entity)
entity_contains_virus_fsav($entity)
entity_contains_virus_hbedv($entity)
entity_contains_virus_sophos($entity)
entity_contains_virus_clamav($entity)
entity_contains_virus_avp($entity)
entity_contains_virus_avp5($entity)
entity_contains_virus_fprot($entity)
entity_contains_virus_fpscan($entity)
entity_contains_virus_fprotd($entity)
entity_contains_virus_fprotd_v6($entity)
entity_contains_virus_nod32($entity)

These functions, meant to be called from filter(), are similar to the message_contains_virus functions except they scan only the current part. They should be called in list context, and their return values are as described for the message_contains_virus functions.

entity_contains_virus_carrier_scan($entity[, $host])

Connects to the specified host:port:local_or_nonlocal (default $CSSHost), where the Symantec CarrierScan Server daemon is expected to be listening. Return values are the same as the other entity_contains_virus functions.

entity_contains_virus_sophie($entity[, $sophie_sock])

Connects to the specified socket (default $SophieSock), where the Sophie daemon is expected to be listening. Return values are the same as the other entity_contains_virus functions.

entity_contains_virus_trophie($entity[, $trophie_sock])

Connects to the specified socket (default $TrophieSock), where the Trophie daemon is expected to be listening. Return values are the same as the other entity_contains_virus functions.
entity_contains_virus_clamd($entity[, $clamd_sock])

Connects to the specified socket (default $ClamdSock), where the clamd daemon is expected to be listening. Return values are the same as the other entity_contains_virus functions.

SMTP FLOW

This section illustrates the flow of messages through MIMEDefang.

1. INITIAL CONNECTION: If you invoked mimedefang with the -r option and have defined a filter_relay routine, it is called.

2. SMTP HELO COMMAND: The HELO string is stored internally, but no filter functions are called.

3. SMTP MAIL FROM: COMMAND: If you invoked mimedefang with the -s option and have defined a filter_sender routine, it is called.

4. SMTP RCPT TO: COMMAND: If you invoked mimedefang with the -t option and have defined a filter_recipient routine, it is called.

5. END OF SMTP DATA: filter_begin is called. For each MIME part, filter is called. Then filter_end is called.

PRESERVING RELAY INFORMATION

Most organizations have more than one machine handling internet e-mail. If the primary machine is down, mail is routed to a secondary (or tertiary, etc.) MX server, which stores the mail until the primary MX host comes back up. Mail is then relayed to the primary MX host. Relaying from a secondary to a primary MX host has the unfortunate side effect of losing the original relay's IP address information. MIMEDefang allows you to preserve this information.

One way around the problem is to run MIMEDefang on all the secondary MX hosts and use the same filter. However, you may not have control over the secondary MX hosts. If you can persuade the owners of the secondary MX hosts to run MIMEDefang with a simple filter that only preserves relay information and does no other scanning, your primary MX host can obtain relay information and make decisions using $RelayAddr and $RelayHostname.

When you configure MIMEDefang, supply the "--with-ipheader" argument to the ./configure script.
When you install MIMEDefang, a file called /etc/mimedefang-ip-key will be created which contains a randomly-generated header name. Copy this file to all of your mail relays. It is important that all of your MX hosts have the same key. The key should be kept confidential, but it's not disastrous if it leaks out.

On your secondary MX hosts, add this line to filter_end:

    add_ip_validation_header();

Note: You should only add the validation header to mail destined for one of your other MX hosts! Otherwise, the validation header will leak out.

When the secondary MX hosts relay to the primary MX host, $RelayAddr and $RelayHostname will be set based on the IP validation header. If MIMEDefang notices this header, it sets the global variable $WasResent to 1. Since you don't want to trust the header unless it was set by one of your secondary MX hosts, you should put this code in filter_begin:

    if ($WasResent) {
        if ($RealRelayAddr ne "ip.of.secondary.mx" and
            $RealRelayAddr ne "ip.of.tertiary.mx") {
            $RelayAddr = $RealRelayAddr;
            $RelayHostname = $RealRelayHostname;
        }
    }

This resets the relay address and hostname to the actual relay address and hostname, unless the message is coming from one of your other MX hosts.

On the primary MX host, you should add this in filter_begin:

    delete_ip_validation_header();

This prevents the validation header from leaking out to recipients.

Note: The IP validation header works only in message-oriented functions. It (obviously) has no effect on filter_relay, filter_sender and filter_recipient, because no header information is available yet. You must take this into account when writing your filter; you must defer relay-based decisions to the message filter for mail arriving from your other MX hosts.

GLOBAL VARIABLE LIFETIME

The following list describes the lifetime of global variables (thanks to Tony Nugent for providing this documentation.)
If you set a global variable:

Outside a subroutine in your filter file
    It is available to all functions, all the time.

In filter_relay, filter_sender or filter_recipient
    Not guaranteed to be available to any other function, not even from one filter_recipient call to the next, when receiving a multi-recipient email message.

In filter_begin
    Available to filter_begin, filter and filter_end

In filter
    Available to filter and filter_end

In filter_end
    Available within filter_end

The "built-in" globals like $Subject, $Sender, etc. are always available to filter_begin, filter and filter_end. Some are available to filter_relay, filter_sender or filter_recipient, but you should check the documentation of the variable above for details.

MAINTAINING STATE

There are four basic groups of filtering functions:

1 filter_relay
2 filter_sender
3 filter_recipient
4 filter_begin, filter, filter_multipart, filter_end

In general, for a given mail message, these groups of functions may be called in completely different Perl processes. Thus, there is no way to maintain state inside Perl between groups of functions. That is, you cannot set a variable in filter_relay and expect it to be available in filter_sender, because the filter_sender invocation might take place in a completely different process. However, for a given mail message, the $CWD global variable holds the message spool directory, and the current working directory is set to $CWD. Therefore, you can store state in files inside $CWD. If filter_sender stores data in a file inside $CWD, then filter_recipient can retrieve that data.

Since filter_relay is called directly after a mail connection is established, there is no message context yet, no per-message mimedefang spool directory, and the $CWD global is not set. Therefore, it is not possible to share information from filter_relay to one of the other filter functions.
The only thing that filter_relay has in common with the other functions are the values in the globals $RelayAddr and $RelayHostname. These could be used to access per-remote-host information in some database.

Inside $CWD, we reserve filenames beginning with upper-case letters for internal MIMEDefang use. If you want to create files to store state, name them beginning with a lower-case letter to avoid clashes with future releases of MIMEDefang.

SOCKET MAPS

If you have Sendmail 8.13 or later, and have compiled it with the SOCKETMAP option, then you can use a special map type that communicates over a socket with another program (rather than looking up a key in a Berkeley database, for example.) mimedefang-multiplexor implements the Sendmail SOCKETMAP protocol if you supply the -N option. In that case, you can define a function called filter_map to implement map lookups.

filter_map takes two arguments: $mapname is the name of the Sendmail map (as given in the K sendmail configuration directive), and $key is the key to be looked up. filter_map must return a two-element list:

    ($code, $val)

$code can be one of:

OK
    The lookup was successful. In this case, $val must be the result of the lookup.

NOTFOUND
    The lookup was unsuccessful -- the key was not found. In this case, $val should be the empty string.

TEMP
    There was a temporary failure of some kind. $val can be an explanatory error message.

TIMEOUT
    There was a timeout of some kind. $val can be an explanatory error message.

PERM
    There was a permanent failure. This is not the same as an unsuccessful lookup; it should be used only to indicate a serious misconfiguration. As before, $val can be an explanatory error message.

Consider this small example.
Here is a minimal Sendmail configuration file:

    V10/Berkeley
    Kmysock socket unix:/var/spool/MIMEDefang/map.sock
    Kothersock socket unix:/var/spool/MIMEDefang/map.sock

If mimedefang-multiplexor is invoked with the arguments -N unix:/var/spool/MIMEDefang/map.sock, and the filter defines filter_map as follows:

    sub filter_map ($$) {
        my($mapname, $key) = @_;
        my $ans;
        if ($mapname ne "mysock") {
            return("PERM", "Unknown map $mapname");
        }
        $ans = reverse($key);
        return ("OK", $ans);
    }

Then in Sendmail's testing mode, we see the following:

    > /map mysock testing123
    map_lookup: mysock (testing123) returns 321gnitset (0)
    > /map othersock foo
    map_lookup: othersock (foo) no match (69)

(The return code of 69 means EX_UNAVAILABLE or Service Unavailable.)

A real-world example could do map lookups in an LDAP directory or SQL database, or perform other kinds of processing. You can even implement standard Sendmail maps like virtusertable, mailertable, access_db, etc. using SOCKETMAP.

TICK REQUESTS

If you supply the -X option to mimedefang-multiplexor, then every so often, a "tick" request is sent to a free slave. If your filter defines a function called filter_tick, then this function is called with a single argument: the tick type. If you run multiple parallel ticks, then each tick has a type ranging from 0 to n-1, where n is the number of parallel ticks. If you're only running one tick request, then the argument to filter_tick is always 0.

You can use this facility to run periodic tasks from within MIMEDefang. Note, however, that you have no control over which slave is picked to run filter_tick. Also, at most one filter_tick call with a particular "type" argument will be active at any time, and if there are no free slaves when a tick would occur, the tick is skipped.
SUPPORTED VIRUS SCANNERS

The following virus scanners are supported by MIMEDefang:

o Symantec CarrierScan Server ()
o Trend Micro vscan ()
o Sophos Sweep ()
o H+BEDV AntiVir ()
o Central Command Vexira ()
o NAI uvscan ()
o Bitdefender bdc ()
o Norman Virus Control (NVCC) ()
o Command csav ()
o F-Secure fsav ()
o The clamscan command-line scanner and the clamd daemon from Clam AntiVirus ()
o Kaspersky Anti-Virus (AVP) ()
o F-Risk F-Prot ()
o F-Risk F-Prot v6 ()
o F-Risk FPROTD (daemonized version of F-Prot)
o Symantec CarrierScan Server ()
o Sophie (), which uses the libsavi library from Sophos, is supported in daemon-scanning mode.
o Trophie (), which uses the libvsapi library from Trend Micro, is supported in daemon-scanning mode.
o ESET NOD32 ()

AUTHORS

mimedefang was written by David F. Skoll <dfs@roaringpenguin.com>. The mimedefang home page is.

SEE ALSO

mimedefang(8), mimedefang.pl(8)
http://manpages.ubuntu.com/manpages/oneiric/man5/mimedefang-filter.5.html
I have a program where the user inputs a line of numbers, and the two highest ones are displayed. It works fine, until negative values are entered, at which point it shows 0 as the result. Can someone help?

Code:
#include <iostream>
using namespace std;

int main()
{
    int num = 0;
    int highest = 0;
    int secondhighest = 0;
    int input = 0;

    cout << "How many numbers? ";
    cin >> num;
    cin >> input;
    int input2 = input;
    // highest = input;
    // secondhighest = input;
    for (int track = 0; track < num; track++) {
        if (track == 1) {
            input = input2;
        } else {
            cin >> input;
        }
        if (input > highest) {
            secondhighest = highest;
            highest = input;
        } else if (input > secondhighest) {
            secondhighest = input;
        }
    }
    cout << endl;
    cout << "Highest: " << highest << endl;
    cout << "Second: " << secondhighest;
}
https://cboard.cprogramming.com/cplusplus-programming/165684-negative-numbers-not-working.html?s=31afdd2faf219f76a44463f1139bbbe1
Software Engineer

To improve readability, instead of displaying full numbers there is very often a need to display shortened numbers. Here is an example of how to create your own customisable short number pipe in Angular 8.

A pipe in Angular is a data processing element used in the presentation layer (in a template). All pipes must implement the PipeTransform interface. The PipeTransform interface contains a method called "transform()" which is invoked by Angular with a value passed as the first argument and optional parameters as the second argument. Pipes can be chained together; in that case the output of the previous pipe becomes the input for the next one.

{{item.count | shortNumber}} is an example of how a pipe may be used in your templates to change the representation of item.count.

Task: Display the text "1.5M" instead of 1500000 in the view.

Solution: generate a custom pipe and implement the short number functionality.

A command to generate a new shortNumber pipe in the pipes/ directory:

    ng generate pipe pipes/shortNumber

The command above creates the pipes/short-number.pipe.ts file and registers ShortNumberPipe in the module's declarations.

    // short-number.pipe.ts
    import { Pipe, PipeTransform } from '@angular/core';

    @Pipe({
      name: 'shortNumber'
    })
    export class ShortNumberPipe implements PipeTransform {
      transform(value: any, args?: any): any {
        return null;
      }
    }

The ShortNumberPipe class must implement the PipeTransform interface, which is the common interface for all Angular pipes. The PipeTransform interface requires a single transform() method to be implemented. See the short number functionality implementation below.
    transform(value: any, args?: any): any {
      if (value === null) return null;
      if (value === 0) return "0";
      var fractionSize = 1;
      var abs = Math.abs(value);
      var rounder = Math.pow(10, fractionSize);
      var isNegative = value < 0;
      var key = '';
      var powers = [
        { key: "Q", value: Math.pow(10, 15) },
        { key: "T", value: Math.pow(10, 12) },
        { key: "B", value: Math.pow(10, 9) },
        { key: "M", value: Math.pow(10, 6) },
        { key: "k", value: 1000 }
      ];
      for (var i = 0; i < powers.length; i++) {
        var reduced = abs / powers[i].value;
        reduced = Math.round(reduced * rounder) / rounder;
        if (reduced >= 1) {
          abs = reduced;
          key = powers[i].key;
          break;
        }
      }
      return (isNegative ? '-' : '') + abs + key;
    }

In the example above "Q" stands for quadrillion, "T" for trillion, "B" for billion, "M" for million, and "k" for kilo. The hardcoded fractionSize defines the precision of the rounding. You are welcome to adjust it to your needs.

Let's check the result with this piece of html:

    <span class="cnt-wrap">{{1500000 | shortNumber}}</span>

After interpolation it displays 1.5M. And if you substitute 1500000 with 15300, the result of the number transformation will be 15.3k. Done!

As your homework, make the shortNumber pipe parametrised. For instance, you can do this with a fractionSize passed as an argument. Take a look in the docs for help. After the changes, {{1520000 | shortNumber:2}} must produce 1.52M (fractionSize is set to 2).

Tip: before starting a new pipe, check for an existing one among the built-in pipes! However, the list of available pipes isn't big yet.

Hopefully, this article was useful! Thanks for reading!
https://hackernoon.com/creating-a-short-number-format-pipe-using-angular8-9h9u32rg
base32 0.2.0

RFC 4648 base32 encoding / decoding.

To use this package, run the following command in your project's root directory:

Manual usage

Put the following dependency into your project's dependencies section:

Base32 in D

This library provides a module for encoding and decoding the Base32 format, which is defined in RFC 4648. The implementation is heavily influenced by Phobos' std.base64.

Descriptions

To be simple

    import base32;

    ubyte[] data = [0xde, 0xad, 0xbe, 0xef, 0x01, 0x23];

    const(char)[] encoded = Base32.encode(data);
    assert(encoded == "32W353YBEM======");

    ubyte[] decoded = Base32.decode("32W353YBEM======");
    assert(decoded == [0xde, 0xad, 0xbe, 0xef, 0x01, 0x23]);

encode() takes a ubyte array to encode and returns a char array which contains the Base32 encoded result. If a char array which represents a Base32 encoded string is passed to decode(), the characters will be decoded and the resulting ubyte array will be returned.

More generally

Actually, these functions take two parameters, input and output. You can use several types of arguments: an array or an InputRange as input, and an array or an OutputRange as output. For example:

    import std.stdio : writeln;

    auto n = Base32.encode(data, &writeln!char);
    auto array = Base32.decode(SomeInputRangeWithLength(encoded), new ubyte[LARGE]);

Note that the OutputRange versions return not the data but the output length. On the other hand, if you use an array as an output buffer, it must be large enough to contain the result. There are convenient functions to calculate the length, encodeLength() and decodeLength(). Both of them take an input length and return the length which the output buffer must have.

    auto buffer = new dchar[Base32.encodeLength(data.length)];
    Base32.encode(data, buffer);

Varieties of the Base32 encodings

Although the above examples describe the most standard Base32 encoding, this module can handle some kinds of Base32 encodings.
That is, the standard one containing A-Z, 2-7 and =, and "base32hex" containing 0-9, A-V, and =. Both are found in the RFC. They are aliased as Base32 and Base32Hex respectively, by default. Moreover, no-padding "=" versions, which are not in the RFC, are also available. These, however, have to be brought in manually. To activate these varieties, two template parameters UseHex and UsePad are available. For example:

    alias Base32NoPad = Base32Impl!(UseHex.no, UsePad.no);
    alias Base32HexNoPad = Base32Impl!(UseHex.yes, UsePad.no);

The NoPad versions neither output the paddings when encoding nor accept them when decoding.

CTFEability

Of course all of the functions are CTFEable.

- Registered by Kazuya Takahashi
- 0.2.0 released 9 months ago
- e10s/d-base32
- BSL-1.0
https://code.dlang.org/packages/base32
[ ] Jing Zhao updated HDFS-6000: ---------------------------- Attachment: HDFS-6000.000.patch Initial patch. Still need to fix/update unit tests. With the changes in this patch, the SBN will name the first checkpoint after starting rolling upgrade process as "fsimage_rollback". The current implementation requires that the NN directory does not contain a rollback image before we start the rolling upgrade process. Thus the patch also purges rollback image for "downgrade" (we already do this for "finalize"). > Avoid saving namespace when starting rolling upgrade > ---------------------------------------------------- > > Key: HDFS-6000 > URL: > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, ha, hdfs-client, namenode > Reporter: Jing Zhao > Assignee: Jing Zhao > Attachments: HDFS-6000.000.patch > > > Currently when administrator sends the "rollingUpgrade start" command to the active NN, the NN will trigger a checkpoint (the rollback fsimage). This will cause NN not able to serve for a period of time. > An alternative way is just to let the SBN do checkpoint, and rename the first checkpoint after starting the rolling upgrade to rollback image. After the rollback image is on both the ANN and the SBN, administrator can start upgrading the software. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-issues/201402.mbox/%3CJIRA.12696766.1393118760580.90236.1393205305726@arcas%3E
amesh Kunwar, Courses Plus Student, 4,066 Points

function as parameter

I am not understanding anything about function as parameter

1 Answer

Cooper Runstein, 11,836 Points

The explanation given in the video is pretty hard to understand, I struggled to understand what was going on when I first saw this one too. Here's an example of functions passed as parameters:

    const double = function(number){
        return (number * 2)
    }

    const addTwoThenDo = function(number, func){
        let newNum = number + 2;
        return func(newNum);
    }

    addTwoThenDo(5, double) //this will return 14
    addTwoThenDo(3, double) //this will return 10

Above is an example of how passing a function to another function works: the function double is pretty self explanatory, but addTwoThenDo is a function that takes a callback. addTwoThenDo accepts a number and a function. addTwoThenDo will add 2 to the number it is given and set that equal to a variable newNum; then, when that's done, it calls back the function passed to it, and runs the new value calculated from the initial number through the function that was passed to addTwoThenDo. Notice how the function is passed in without the parentheses; that is because the function isn't going to be called until addTwoThenDo decides to call it, which happens after its work is done. This allows for a lot of really cool things that you'll likely pick up later, like the map, filter and reduce functions in ES6, but in the case of the video, it allows for the window.setTimeout function.

    const myFunction = function(){
        // ...do cool things
    }

    window.setTimeout(myFunction, 3000) // myFunction is the callback; it's set to run after setTimeout does its work, which in this case is to wait for 3000 ms

Hopefully this helps some, you're going to deal with this concept a lot in the coming videos, so if you stick with it I'm sure it'll make sense.

Daniel LeaMon, 5,127 Points

Thanks this helped a lot!
https://teamtreehouse.com/community/function-as-parameter
Announcing TypeScript 4.5 Beta

Daniel

Today we are excited to announce the beta release of TypeScript 4.5! To get started using the beta, you can get it through NuGet, or use npm with the following command:

    npm install typescript@beta

You can also get editor support by

- Downloading for Visual Studio 2019/2017
- Following directions for Visual Studio Code and Sublime Text 3.

Some major highlights of TypeScript 4.5 are:

- ECMAScript Module Support in Node.js
- Supporting lib from node_modules
- Template String Types as Discriminants
- --module es2022
- Tail-Recursion Elimination on Conditional Types
- Disabling Import Elision
- type Modifiers on Import Names
- Private Field Presence Checks
- Import Assertions
- Faster Load Time with realPathSync.native
- Snippet Completions for JSX Attributes
- Better Editor Support for Unresolved Types
- Breaking Changes

however, support for ESM in Node.js is now largely implemented in Node.js 12 and later, and the dust has begun to settle. That's why TypeScript 4.5 brings

syntax is left alone in the .js output; when it's compiled as a CommonJS module, it will produce the same output you get today under --module commonjs.

What this also means is that resolving paths works differently in .ts files that are ES modules than they do in ES

    {
        "exports": {
            ".": {
                // Entry-point for `import "my-package"` in ESM
                "import": "./esm/index.js",
                // Entry-point for `require("my-package")` in CJS
                "require": "./commonjs/index.cjs",
                // Entry-point for TypeScript resolution
                "types": "./types/index.d.ts"
            }
        },
        // CJS fall-back for older versions of Node.js
        "main": "./commonjs/index.cjs",
        // Fall-back for older versions of TypeScript
        "types": "./types/index.d.ts"
    }

Faster Load Time with realPathSync.native

TypeScript 4.5 now uses a system-native implementation of the Node.js realPathSync function on all operating systems. Previously this function was only used on Linux, but in TypeScript 4.5 it has been adopted for operating systems that are typically case-insensitive, like Windows and MacOS.
On certain codebases, this change sped up project loading by 5-13% (depending on the host operating system). For more information, see the original change here, along with the 4.5-specific changes here.

Snippet Completions for JSX Attributes

TypeScript 4.5 brings snippet completions for JSX attributes. When writing out an attribute in a JSX tag, TypeScript will already provide suggestions for those attributes; but with snippet completions, they can remove.

What's Next?

After this beta release, the team will be focusing on fixing issues and adding polish for our release candidate (RC). The sooner you can try out our beta and provide feedback, the sooner we can ensure that new features and existing issues are addressed, and once those changes land, they'll be available in our nightly releases, which tend to be very stable. To plan accordingly, you can take a look at our release schedule on TypeScript 4.5's iteration plan.

So try out TypeScript 4.5 Beta today, and let us know what you think!

Happy Hacking!

– Daniel Rosenwasser and the TypeScript Team

Hi, small typo in the code example of Awaited

Will `moduleResolution` also support `node12`/`nodenext`? e.g. In a create-react-app one might want `"module": "esnext"` with `"moduleResolution": "nodenext"`. Nice work!

Sorry if this is a dumb question, but in what situation would a type like `TrimLeft` ever be useful?

I believe this should be a code sample for the named c[tj]s import (there is no foo namespace):
https://devblogs.microsoft.com/typescript/announcing-typescript-4-5-beta/#esm-nodejs
Injecting Scripts

You can inject .js files into webpages (a .js file is a text file with the .js extension, containing JavaScript functions and commands). The scripts in these files have access to the DOM of the webpages they are injected into. Injected scripts have the same access privileges as scripts executed from the webpage's host.

About Injected Scripts

Injected scripts are loaded each time an extension-accessible webpage is loaded, so you should keep them lightweight. If your script requires large blocks of code or data, you should move them to the global HTML page. For details, see Example: Calling a Function from an Injected Script.

An injected script is injected into every webpage whose URL meets the access limitations for your extension. For details, see Access and Permissions. Scripts are injected into the top-level page and any children with HTML sources, such as iframes. Do not assume that there is only one instance of your script per browser tab. If you want your injected script not to execute inside of iframes, preface your high-level functions with a test, such as this:

Injected scripts have an implied namespace—you don't have to worry about your variable or function names conflicting with those of the website author, nor can a website author call functions in your extension. In other words, injected scripts and scripts included in the webpage run in isolated worlds, with no access to each other's functions or data.

Injected scripts do not have access to the safari.application object. Nor can you call functions defined in an extension bar or global HTML page directly from an injected script. If your script needs to access the Safari app or operate on the extension—to insert a tab or add a contextual menu item, for example—you can send a message to the global HTML page or an extension bar. For details, see Messages and Proxies.

To add content to a webpage, use DOM insertion functions, as illustrated in Listing 10-1.
Listing 10-1 Modifying a webpage using DOM insertion

Adding a Script

To add an injected script, follow these steps:

Create an extension folder—open Extension Builder, click +, choose New Extension, give it a name and location.

Drag your script file into the extension folder.

Click New Script under Injected Extension Content in Extension Builder, as illustrated in Figure 10-1.

Figure 10-1 Specifying injected content

You can choose to inject your script as a Start Script or an End Script. An End Script executes when the DOM is fully loaded—at the time the onload attribute of a body element would normally fire. Most scripts should be injected as End Scripts. A Start Script executes when the document has been created but before the webpage has been parsed. If your script blocks unwanted content it should be a Start Script, so it executes before the page displays.

Choose your script file from the pop-up menu.

You can have both Start and End Scripts. You can have more than one script of each type. In order for your scripts to be injected, you must specify either Some or All website access for your extension. You can have your script apply to a single webpage, all webpages, or only certain webpages—pages from certain domains, for example. For details, see the description of whitelists and blacklists in Access and Permissions.

Copyright © 2015 Apple Inc. All Rights Reserved. Terms of Use | Privacy Policy | Updated: 2015-09-16
https://developer.apple.com/library/safari/documentation/Tools/Conceptual/SafariExtensionGuide/InjectingScripts/InjectingScripts.html
This is basically my assignment. I have most of it finished, but I am new to programming so I'm not really sure what some of the error messages that are coming out mean. Any help would be greatly appreciated.

The simple algorithm for the program is as follows:

1. Ask the user to input numbers; the numbers must be saved in an array.
2. Display the numbers.
3. Calculate the sum, mean, and standard deviation of the numbers.
4. Display them.
5. Display the differences between the numbers and the mean.

Define a function to display the differences between the numbers and the mean.

- Finish the basic requirements in main.
- Define a function to display numbers in the array.
- Define a function to calculate the sum of the numbers.
- Define a function to calculate the sum of the squared numbers.
- Define a function to calculate the mean of the numbers.
- Define a function to calculate the standard deviation.

Code:
#include <math.h>
#include <iostream>
#include <array>

#define MAX_ITEM 5

using namespace std;

//Display numbers in the array
void display_array(double ar[])
{
    for (int i = 0; i < MAX_ITEM; i++) {
        cout << ar[i] << " ";
    }
}

//Calculate the sum of the numbers
double cal_sum(double ar[])
{
    double sum = 0;
    for (int i = 0; i < MAX_ITEM; i++) {
        sum += ar[i];
    }
    return sum;
}

//Calculate the sum of the squared numbers
double cal_sum_sqrt(double ar[])
{
    double sum_sqrt = 0;
    for (int i = 0; i < MAX_ITEM; i++) {
        sum_sqrt += ar[i] * ar[i];
    }
    return sum_sqrt;
}

//Calculate the mean of the numbers
double cal_mean(double sum)
{
    double mean = 0;
    mean = sum / MAX_ITEM;
    return mean;
}

//Calculate the standard deviation of the numbers
double cal_st_dev(double mean, double sum_sqrt)
{
    double st_dev = 0;
    st_dev = sqrt(sum_sqrt / MAX_ITEM - mean * mean);
    return st_dev;
}

main()
{
    double x[MAX_ITEM], sum, sum_sqrt, mean, standard_deviation, square;
    for (int i = 0; i < MAX_ITEM; i++) {
        cout << "Enter a number " << endl;
        cin >> x[i];
    }
    cout << "The numbers are ";
    for (int i = 0; i < MAX_ITEM; i++) {
        cout << x[i] << " ";
    }
    square = sum_sqrt - mean;
    sum = cal_sum(i);
    sum_sqrt = cal_sum_sqrt(i);
    mean = cal_mean(sum);
    standard_deviation = cal_st_dev(square, mean);
    cout << endl << "Sum of numbers " << sum << endl;
    cout << endl << "sum sqared " << sum_sqrt << endl;
    cout << endl << "Mean of numbers " << mean << endl;
    cout << endl << "Standard deviation of numbers " << standard_deviation << endl;
    system("pause");
}
https://cboard.cprogramming.com/cplusplus-programming/146157-array-problems-unfamiliar-error-messages.html
One of the new features in Game Studio 3.1 is automatic serialization for .xnb files. This is a feature I have wanted ever since we first designed the Content Pipeline, so it makes me very happy that we finally found time to implement it!

Short Version

You know those ContentTypeWriter and ContentTypeReader thingamyhickies? You don't need them any more! Delete them, and the Content Pipeline will automatically serialize your data using reflection. Note: you can still write a ContentTypeWriter and ContentTypeReader by hand if you want. You just don't have to any more.

The trick to successfully using the Content Pipeline is to understand which code runs at build time versus runtime. Even though the serialization is now automatic, you can still get in a muddle if you have things like cyclic references where your game tries to use a type that is defined inside itself while building itself!

Here's an example of using the new serializer:

Create a new Windows Game project. Let's call this MyGame. Right-click on the Solution node, Add / New Project, and choose the Windows Game Library template (not Content Pipeline Extension Library, because we want to use this at runtime as well as build time). Call it MyDataTypes. Add this class to the MyDataTypes project:

    public class CatData
    {
        public string Name;
        public float Weight;
        public int Lives;
    }

We're going to use this type to build some custom content, so we need to reference it during the Content Pipeline build process. Right-click on the Content project that is nested inside MyGame, choose Add Reference, and select MyDataTypes from the Projects tab. Now we can add this file (let's call it cats.xml) to our Content project:

    <?xml version="1.0" encoding="utf-8" ?>
    <XnaContent>
      <Asset Type="MyDataTypes.CatData[]">
        <Item>
          <Name>Rhys</Name>
          <Weight>17</Weight>
          <Lives>9</Lives>
        </Item>
        <Item>
          <Name>Boo</Name>
          <Weight>11</Weight>
          <Lives>5</Lives>
        </Item>
      </Asset>
    </XnaContent>

Hit F5, and the content will build.
Look in the bin directory, and you will see that it created a cats.xnb file. If you build in Release configuration, this file will be compressed. With the above example, my 326 byte XML file becomes 270 bytes in .xnb format, but the compression ratio will improve as your files get bigger.

Before we can load this data into our game, we must reference the MyDataTypes project directly from MyGame as well as from its Content sub-project, in order to use our custom type at runtime as well as build time. Once we've done that, we can load the custom content:

    CatData[] cats = Content.Load<CatData[]>("cats");

If you create a copy of this project for Xbox 360, you will notice that although the Xbox version of MyGame references the Xbox version of MyDataTypes, its Content project still uses the Windows version of MyDataTypes, even though it is building content for Xbox. This is an important point to understand. Because our custom type is used both at build time (on Windows) and at runtime (on Xbox), we must provide both Windows and Xbox versions of this type.

Type Translations

Remember how some types are not the same at build time versus runtime? For instance a processor might output a Texture2DContent, but when you load this into your game it becomes a Texture2D. The .xnb serializer understands such situations, as long as you give it a little help. For instance if I used this type in MyGame:

    public class Cat
    {
        public string Name;
        public Texture2D Texture;
    }

I could declare a corresponding build time type in a Content Pipeline extension project:

    [ContentSerializerRuntimeType("MyGame.Cat, MyGame")]
    public class CatContent
    {
        public string Name;
        public Texture2DContent Texture;
    }

Note how both types have basically the same fields, and in the same order, but the runtime Cat class uses Texture2D where the build time version has Texture2DContent. Also note how the build time CatContent class is decorated with a ContentSerializerRuntimeType attribute.
This allows me to use CatContent objects in my Content Pipeline code, but load the resulting .xnb file as type Cat when I call ContentManager.Load.

Performance

Every silver lining has a cloud, right? The automatic .xnb serialization mechanism uses reflection. Reflection is slow, and causes a lot of boxing, which can result in a lot of garbage collections. Fortunately, this is not as bad in practice as it sounds on paper. Many custom data types are small, or at least the custom part of them tends to contain just a few larger objects of types that have built-in ContentTypeWriter implementations (textures, vertex buffers, arrays or dictionaries of primitive types, XNA Framework math types, etc). In such cases the performance overhead will be low. When I converted a bunch of existing Content Pipeline samples to use the new serializer, it did not significantly affect their load times. But if you have a collection holding many thousands of custom types, you may see poor load performance. In such cases, you can provide a ContentTypeWriter and ContentTypeReader for just the specific types that are slowing you down. The new system is easier to use, but it can be more efficient to do things the old manual way.
https://blogs.msdn.microsoft.com/shawnhar/2009/03/25/automatic-xnb-serialization-in-xna-game-studio-3-1/
This is your resource to discuss support topics with your peers, and learn from each other.

01-14-2011 08:47 AM

Hi, as far as I could find out, the current emulator doesn't support sound. Is that correct? I've developed a music app. The sounds seem to load correctly but I can't hear them. Is it just an emulator problem? Will it play on the live device? Thanks, Kay

01-14-2011 09:03 AM

What platform are you using (Mac, etc)? What APIs are you using to play the sound? I can confirm that "raw" sound generated using flash.media.Sound and the SampleDataEvent does work. I don't believe I've yet seen a code snippet showing how to get any other sound to work.

01-14-2011 09:45 AM

I can verify that mp3's loaded from an HTTP server, using Sound.load(), do work. But the entire simulator locked right afterwards (the song kept playing, though).

01-14-2011 10:09 AM

Thanks, after the simulator update everything works fine, almost (but that requires a separate thread). Kay

01-14-2011 10:47 AM

Here's a sample that works fairly consistently on mine. I have had freezes (requiring a VM reset), but so far it looks like those may be associated with installing an updated app while the current one is still running. At least, it looks like if I cleanly exit the app before starting to install/launch a changed version it doesn't seem to crash (much?).

```actionscript
package {
    import flash.display.Sprite;
    import flash.events.Event;
    import flash.media.Sound;
    import flash.media.SoundChannel;
    import flash.net.URLRequest;

    [SWF(backgroundColor="#dddddd", frameRate="20")]
    public class SoundTest extends Sprite {
        public function SoundTest() {
            var request:URLRequest = new URLRequest(' the Brave.mp3');
            var sound:Sound = new Sound();
            sound.load(request);
            var channel:SoundChannel = sound.play();
            channel.addEventListener(Event.SOUND_COMPLETE, onComplete);
        }

        private function onComplete(e:Event):void {
            stage.nativeWindow.close();
        }
    }
}
```

Note that the volume seems to be about half what it is with programs on my host machine. I have no idea if that's VMware itself or something in the simulator.

03-27-2011 04:17 AM

Thanks for sharing your code, Peter. One additional question: I already figured out that I can stop playback with channel.stop(). But once stopped, I won't be able to restart it. I tried making sound and channel global variables, but calling channel = sound.play() again, and even making a new URLRequest and loading the MP3 again, did not work. Any ideas?

03-27-2011 09:42 AM

@biggerCC: I had a similar issue... I can play the mp3, but after a while sound stops working and I need to reboot to have it working again.

03-27-2011 09:46 AM

Even with my app, which generates raw sound using SampleDataEvent, I find that the sound just stops coming out after a longish period of time (sometimes a few hours), for no visible reason. There are no exceptions, no related bugs in my app, no memory leaks, no change in the status of the sound buffers in the simulator's /tmp folder, or any other sign of a problem other than it goes silent. (I have not yet checked whether there's a change in the threads running by checking the output of "pidin".) I've just been assuming this will work on the real hardware. It's a pretty safe bet, I think, given how many issues the simulator has, that both this and any issues with MP3 playing are just sim bugs.
https://supportforums.blackberry.com/t5/Adobe-AIR-Development/Sound-support-on-emulator/m-p/735167
Python is a dynamic language in more ways than one: Not only is it not a static language like C or C++, but it's also constantly evolving. If you want to get up to speed on what happened in the world of Python in March 2021, then you've come to the right place to get your news!

March 2021 marks a notable change to the core of the Python language with the addition of structural pattern matching, which is available for testing now in the latest alpha release of Python 3.10.0. Beyond changes to the language itself, March was a month full of exciting and historical moments for Python. The language celebrated its 30th birthday and became one of the first open-source technologies to land on another planet.

Let's dive into the biggest Python news from the past month!

Free Bonus: 5 Thoughts On Python Mastery, a free course for Python developers that shows you the roadmap and the mindset you'll need to take your Python skills to the next level.

Python Turns 30 Years Old

Although Python's actual birth date is February 20, 1991, which is when version 0.9.0 was released, March is a good month to celebrate. This March is the 20th anniversary of the Python Software Foundation, which was founded on March 6, 2001.

In its thirty years, Python has changed—a lot—both as a language and as an organization. The transition from Python 2 to Python 3 took a decade to complete. The organizational model for decision-making changed too: The creator of the language, Guido van Rossum, used to be at the helm, but a five-person steering council was created in 2018 to plan the future of Python.

Happy birthday, Python! Here's to many more years 🥂

Structural Pattern Matching Comes to Python 3.10.0

Python 3.10.0 is the next minor version of Python and is expected to drop on October 4, 2021. This update will bring a big addition to the core syntax: structural pattern matching, which was proposed in PEP 634.
You could say that structural pattern matching adds a sort of switch statement to Python, but that isn't entirely accurate. Pattern matching does much more.

For instance, take an example from PEP 635. Suppose you need to check if an object x is a tuple containing host and port information for a socket connection and, optionally, a mode such as HTTP or HTTPS. You could write something like this using an if… elif… else block:

```python
if isinstance(x, tuple) and len(x) == 2:
    host, port = x
    mode = "http"
elif isinstance(x, tuple) and len(x) == 3:
    host, port, mode = x
else:
    ...  # Etc.
```

Python's new structural pattern matching allows you to write this more cleanly using a match statement:

```python
match x:
    case host, port:
        mode = "http"
    case host, port, mode:
        pass
    # Etc.
```

match statements check that the shape of the object matches one of the cases and bind data from the object to variable names in the case expression.

Not everyone is thrilled about pattern matching, and the feature has received criticism from both within the core development team and the wider community. In the acceptance announcement, the steering council acknowledged these concerns while also expressing their support for the proposal:

We acknowledge that Pattern Matching is an extensive change to Python and that reaching consensus across the entire community is nearly impossible. Different people have reservations or concerns around different aspects of the semantics and the syntax (as does the Steering Council). In spite of this, after much deliberation, … we are confident that Pattern Matching as specified in PEP 634, et al, will be a great addition to the Python Language. (Source)

Although opinions are divided, pattern matching is coming to the next Python release. You can learn more about how pattern matching works by reading the tutorial in PEP 636.

Python Lands on Mars

On February 18, the Perseverance Mars rover landed on Mars after a seven-month journey.
(Technically, this is a February news item, but it's so cool that we have to include it this month!) Perseverance brought a wide range of new instruments and scientific experiments that will give scientists their best look at Mars yet.

Perseverance relies on a host of open-source software and off-the-shelf hardware, making it the most accessible Mars rover project to date. Python is one of the open-source technologies living on Perseverance. It was used on-board the rover to process images and video taken during landing.

One of the most exciting experiments carried by Perseverance is the Ingenuity Mars helicopter, which is a small drone being used to test flight in the thin Martian atmosphere. Python is one of the development requirements for the flight control software, which is called F'.

2020 Python Developers Survey Results Are In

The results from the 2020 Python Developers Survey conducted by JetBrains and the Python Software Foundation are in, and they show some interesting changes compared to last year's survey.

In 2020, 94% of respondents report primarily using Python 3, which is up from 90% in 2019 and 75% in 2017. Interestingly, Python 2 still sees use among a majority of respondents in the computer graphics and game development segments.

Flask and Django continue to dominate web frameworks with 46% and 43% adoption, respectively. The newcomer FastAPI was the third most popular web framework at 12% adoption—an incredible feat considering 2020 was the first year the framework appeared in the list of options.

Visual Studio Code gained 5% more of the share of responses to the question "What is the main editor you use for your current Python development?" That puts Microsoft's IDE at 29% of the share and further closes the gap between Visual Studio Code and PyCharm, which still tops the list at 33%.

Check out the survey results to see even more stats about Python and its ecosystem.
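Before moving on: the pre-3.10 shape-checking code from the pattern matching section above can be wrapped in a small helper so its behavior is easy to check. This is just an illustrative sketch, and parse_endpoint is a made-up name, not anything from PEP 635 itself:

```python
def parse_endpoint(x):
    """Return a (host, port, mode) triple, defaulting mode to "http"."""
    if isinstance(x, tuple) and len(x) == 2:
        host, port = x
        mode = "http"
    elif isinstance(x, tuple) and len(x) == 3:
        host, port, mode = x
    else:
        raise ValueError(f"unrecognized endpoint: {x!r}")
    return host, port, mode

print(parse_endpoint(("localhost", 8080)))            # → ('localhost', 8080, 'http')
print(parse_endpoint(("example.com", 443, "https")))  # → ('example.com', 443, 'https')
```

On Python 3.10+, the function body could be replaced by the equivalent match statement with identical results.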
New Features Coming to Django 3.2

Django 3.2 will be released sometime in April 2021, and with it comes an impressive list of new features.

One major update adds support for functional indexes, which allow you to index expressions and database functions, such as indexing lowercased text or a mathematical formula involving one or more database columns. Functional indexes are created in the Meta.indexes option in a Model class. Here's an example adapted from the official release notes:

```python
from django.db import models
from django.db.models import F, Index, Value

class MyModel(models.Model):
    height = models.IntegerField()
    weight = models.IntegerField()

    class Meta:
        indexes = [
            Index(
                F("height") / (F("weight") + Value(5)),
                name="calc_idx",
            ),
        ]
```

This creates a functional index called calc_idx that indexes an expression dividing the height field by the weight field plus 5.

Support for PostgreSQL covering indexes is another index-related change coming in Django 3.2. A covering index lets you store multiple columns in a single index. This allows queries containing only the index fields to be satisfied without an additional table lookup. In other words, your queries can be much faster!

Another notable change is the addition of Admin site decorators that streamline the creation of custom display and action functions.

For a complete list of new features coming in Django 3.2, check out the official release notes. Real Python contributor Haki Benita also has a helpful overview article that walks you through some of the upcoming features with more context and several examples.

PEP 621 Reaches Final Status

Way back in 2016, PEP 518 introduced the pyproject.toml file as a standardized place to specify a project's build requirements. Previously, you could specify metadata only in a setup.py file. This caused some problems because executing setup.py and reading the build dependencies requires installing some of the build dependencies.
pyproject.toml has gained popularity in the last few years and is now being used for much more than just storing build requirements. Projects like the black autoformatter use pyproject.toml to store package configuration.

PEP 621, which was provisionally accepted in November 2020 and marked final on March 1, 2021, specifies how to write a project's core metadata in a pyproject.toml file. On the surface, this might seem like a less significant PEP, but it represents a continued movement away from the setup.py file and points to improvements in Python's packaging ecosystem.

PyPI Is Now a GitHub Secret Scanning Integrator

The Python Package Index, or PyPI, is the place to download all of the packages that make up Python's rich ecosystem. Between the pypi.org website and files.pythonhosted.org, PyPI generates over twenty petabytes of traffic a month. That's over 20,000 terabytes! With so many people and organizations relying on PyPI, keeping the index secure is paramount.

This month, PyPI became an official GitHub secret scanning integrator. GitHub will now check every commit to public repositories for leaked PyPI API tokens and will disable repositories and notify their owners if any are found.

What's Next for Python?

Python continues to grow with increasing momentum. As more users turn to the language for more and more tasks, it's only natural that Python and its ecosystem will continue to evolve. At Real Python, we're excited about Python's future and can't wait to see what new things are in store for us in April.

What's your favorite piece of Python news from March? Did we miss anything notable? Let us know in the comments, and we might feature you in next month's Python news roundup.

Happy Pythoning!
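As an appendix to the PEP 621 item above: a minimal [project] table in pyproject.toml might look like the following sketch. The package name, version, and dependencies here are made up for illustration and are not taken from any real project:

```toml
# Hypothetical PEP 621-style metadata, replacing what used to live in setup.py
[project]
name = "example-package"
version = "0.1.0"
description = "A small example of PEP 621 core metadata"
requires-python = ">=3.6"
dependencies = [
    "requests",
]
```

The build backend itself is still declared separately in the [build-system] table defined by PEP 518.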
https://realpython.com/python-news-march-2021/
Private project namespace revealed in email notification when issue is moved

HackerOne report #452827 by ashish_r_padelkar on 2018-11-29:

Summary:

Hello,

When an issue is moved, an email is sent to the users subscribed to the issue. The email notification received by the issue subscriber reveals the project path of the issue it was moved into, which may be a private project under a private group.

Description:

The email which is received when an issue is moved looks like below. When you hover over the issue title link, you will see the new project path where this issue was moved. Look at the browser status bar! Or just click on the link!

Steps To Reproduce:

- Subscribe to any issue from a public project by clicking the notification button on the right side
- Now, as a reporter/admin, move this issue into a private project under a private group
- The subscriber should receive an email like above with a link on the issue title
- Hover over the link of the issue title and you should see the full path of the issue, which includes the private project path! Or else click on the link!

Regards,
Ashish

Impact

Attachments

Warning: Attachments received through HackerOne, please exercise caution!
https://gitlab.com/gitlab-org/gitlab-foss/issues/54783
- Expanding wildcards in <exec> arguments
- <fileset>'s strange behaviour
- Changing default Locale
- Using your own classes inside <script>
- Writing a "Task" for getting the dependency list for a target
- Using Ant to download files and check their integrity
- Windows XP exec task : use os="Windows XP" instead of "os="Windows NT" despite docs
- Implementing a PreProcessor
- Compounding a property name from the instantiations of multiple previously instanced properties
- Why does <javac> require you to set source="1.4" for Java1.4 features, "1.5" for Java1.5?

Expanding wildcards in <exec> arguments

On Unix-like systems, wildcards are understood by shell interpreters, not by individual binary executables such as /usr/bin/ls or shell scripts.

    <target name="list">
        <exec executable="sh">
            <arg value="-c"/>
            <arg value="ls /path/to/some/xml/files/*.xml"/>
        </exec>
    </target>

Under Windows, instead, you can expect wildcards to be understood even without the need of invoking the cmd interpreter.

    <target name="compile-groovy-scripts">
        <exec executable="groovy.bat">
            <arg value="C:/path/to/scripts/*.groovy"/>
            <arg line="-d C:/path/to/classes/destination"/>
        </exec>
    </target>

Giulio Piancastelli

<fileset>'s strange behaviour

Here is an oddity whose solution was discovered by Jan Matèrne. In Ant, what is the simplest way to get a <fileset> that contains only files that do not have an extension in a directory tree? You might think you could just specify that the included files end in a period, like so:

    <fileset includes="**/*." />

but that will only select files which literally end in a period. The actual answer is a bit counterintuitive. You select all files (implicit) and then exclude those that have an extension:

    <fileset excludes="*.*" />

This probably only looks odd to people who think of file extensions as being something special. For Unix people, Ant's behavior isn't counterintuitive at all, as the dot in the first includes pattern is a literal dot and nothing else.
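The same exclude-everything-with-a-dot idea can be illustrated outside Ant. As a rough Python sketch (not part of the original wiki entry), using glob-style matching from the standard library:

```python
import fnmatch

names = ["README", "build.xml", "Makefile", "notes.txt"]

# Keep only names that do NOT match "*.*", i.e. names without an extension,
# mirroring Ant's <fileset excludes="*.*"/> trick.
no_extension = [n for n in names if not fnmatch.fnmatch(n, "*.*")]
print(no_extension)  # → ['README', 'Makefile']
```

As in Ant, the trick is to select everything and exclude names containing a dot, rather than trying to write a pattern that positively matches "no extension".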
Changing default Locale

When I worked with CheckStyle I realized that there are localized messages. So far so fine. But now I want to generate an international (English) site on my (German) machine. But how to realize that?

CheckStyle - like many other programs - uses the java.util.ResourceBundle.getBundle() method, which returns the appropriate bundle for the default Locale. So I will set the default Locale to the US value. Before that I store the actual one (or the 'key') as a property and restore that after invoking CheckStyle. Because I need (simple) access to the Java API, I write that inside <script> tasks:

    <script language="javascript"> <![CDATA[
        importClass(java.util.Locale);
        actualDefault = Locale.getDefault();
        project.setProperty("---actual-default-locale---", actualDefault);
        Locale.setDefault(Locale.US);
    ]]></script>

    <ant .../>

    <script language="javascript"> <![CDATA[
        importClass(java.util.Locale);
        actualDefault = project.getProperty("---actual-default-locale---");
        Locale.setDefault(new Locale(actualDefault));
    ]]></script>

Jan Matèrne

P.S. For another thing I needed to change the Locale by setting parameters on VM startup. I found a solution in the Eclipse bug database. Kevin Barnes offered the possibility to set two VM args: -Duser.country=EN -Duser.language=US (haven't found them in the JDK docs). Haven't tested that for this context - so just for your info.

Using your own classes inside <script>

When you use <script language="javascript"> you are using the Java API. Ok so far. It's simple to use java.io.File for getting information about a particular file or to create complex strings with java.util.StringBuffer. But there are two problems:

- How to import other (non java.*) classes, e.g. Ant's own classes?
- How to import classes which are not on Ant's classpath?

How to import other (non java.*) classes, e.g. Ant's own classes?

The answer to this is described on the homepage of the javascript interpreter, but very hidden (I think).
Following the links "Documentation" and "Scripting Java" you'll get some examples. Inside them it is written: "If you wish to load classes from JavaScript that aren't in the java package, you'll need to prefix the package name with "Packages". For example ...". Transferred to your script task, that would be:

    importClass(Packages.org.apache.tools.ant.types.Path);

I'm confused. I was told that Java (static typing, compiled to bytecodes, etc.) is a totally different language than JavaScript (dynamic typing, interpreted, etc.). Are we using Java for scripting here? Or are we actually using JavaScript? Or both?

AFAICT, the scripting language is EcmaScript (aka JavaScript), but a Java binding layer was added so that the script can create and manipulate Java objects.

How to import classes which are not on Ant's classpath?

Ok, now we can use Ant's classes and the whole javax.* stuff. But what if I need some custom classes? For example, I have to create a reverse sorted list of files. Getting the list of files is no problem. Sorting can be done by java.util.TreeSet. But I need a custom Comparator which does the specific comparison. So I implement one and store it in ${basedir}/ext. And now the trick: Ant's Project class provides methods for getting classloaders. And there is one which includes specified paths. So we:

- get the ext directory
- create a <path> object
- get the classloader
- load the class
- instantiate that

    <script language="javascript"> <![CDATA[
        importClass(java.util.TreeSet);
        importClass(Packages.org.apache.tools.ant.types.Path);
        loaderpath = new Path(project, project.getProperty("ext.dir"));
        classloader = project.createClassLoader(loaderpath);
        comparatorClass = classloader.loadClass("ReportDateComparator");
        comparator = comparatorClass.newInstance();
        list = new TreeSet(comparator);
    ]]></script>

Jan Matèrne

Writing a "Task" for getting the dependency list for a target

That was the question I was asked recently on jGuru.
That was a nice question. The final result is:

    <macrodef name="dep">
        <attribute name="root"/>
        <attribute name="file" default="@{root}.dep"/>
        <sequential>
            <script language="javascript"> <![CDATA[
            // attribute expansion from macrodef (script can not reach the values)
            <dep root="build"/>
            <dep root="dist-lite" file="dist_lite.txt"/>
    </target>

Jan Matèrne

Using Ant to download files and check their integrity

While downloading Milestone 4 of Eclipse I got an idea: why should I download all the files without knowing whether they are corrupt? Ok, the following scenario:

- define a list with all names of the files to be downloaded
- download the file and its MD5 check file
- compute the MD5 checksum for the downloaded file
- compare that value with the one stored in the MD5 file

And for faster handling:

- don't download files if they are already downloaded and valid
- don't check files multiple times if they are valid

So I defined some properties in check-downloads.properties:

- download.zip.dir: remote directory containing the zip files ()
- download.md5.dir: remote directory containing the MD5 files ()
- dest.dir: local directory for storing the files (.)
- file.list: comma separated list of zip files to download (eclipse-Automated-Tests-3.0M4.zip,eclipse-examples-3.0M4-win32.zip,...)
- proxy.host: proxy settings
- proxy.port: proxy settings

And the final buildfile is:

    <project default="main">
        <taskdef resource="net/sf/antcontrib/antcontrib.properties"/>
        <property name="result.file" value="check-downloads-results.properties"/>
        <property file="check-downloads.properties"/>
        <property file="${result.file}"/>

        <target name="main">
            <setproxy proxyHost="${proxy.host}" proxyPort="${proxy.port}"/>
            <foreach list="${file.list}" param="file" target="checkFile"/>
        </target>

        <target name="checkFile" depends="check.download,check.md5-1,check.md5-2" if="file"/>

        <target name="check.init">
            <property name="zip.file" value="${file}"/>
            <property name="md5.file" value="${file}.md5"/>
            <condition property="md5-ok"><isset property="${zip.file}.isValid"/></condition>
            <condition property="download-ok">
                <and>
                    <available file="${dest.dir}/${zip.file}"/>
                    <available file="${dest.dir}/${md5.file}"/>
                </and>
            </condition>
        </target>

        <target name="check.download" unless="download-ok" depends="check.init">
            <echo>Download ${md5.file}</echo>
            <get src="${download.md5.dir}/${md5.file}" dest="${dest.dir}/${md5.file}"/>
            <echo>Download ${zip.file}</echo>
            <get src="${download.zip.dir}/${zip.file}" dest="${dest.dir}/${zip.file}"/>
        </target>

        <target name="check.md5-1" if="md5-ok" depends="check.init">
            <echo>${zip.file}: just processed</echo>
        </target>

        <target name="check.md5-2" unless="md5-ok" depends="check.init">
            <trycatch><try>
                <!-- what is the valid md5 value specified in the md5 file -->
                <loadfile srcFile="${md5.file}" property="md5.valid">
                    <filterchain>
                        <striplinebreaks/>
                        <tokenfilter>
                            <stringtokenizer/>
                            <replaceregex pattern="${zip.file}" replace=""/>
                        </tokenfilter>
                        <tokenfilter>
                            <trim/>
                        </tokenfilter>
                    </filterchain>
                </loadfile>
                <!-- what is the actual md5 value -->
                <checksum file="${zip.file}" property="md5.actual"/>
                <!-- compare them -->
                <condition property="md5.isValid">
                    <equals arg1="${md5.valid}" arg2="${md5.actual}"/>
                </condition>
                <property name="md5.isValid" value="false"/>
                <!-- print the result -->
                <if>
                    <istrue value="${md5.isValid}"/>
                    <then>
                        <echo>${zip.file}: ok</echo>
                        <echo file="${result.file}" append="true" message="${zip.file}.isValid=true${line.separator}"/>
                    </then>
                    <else>
                        <echo>${zip.file}: Wrong MD5 checksum !!!</echo>
                        <echo>- expected: ${md5.valid}</echo>
                        <echo>- actual : ${md5.actual}</echo>
                        <move file="${zip.file}" tofile="${zip.file}.wrong-checksum"/>
                    </else>
                </if>
            </try><catch/></trycatch>
        </target>
    </project>

I got very nice results when starting with -quiet mode.

Jan Matèrne

Windows XP exec task : use os="Windows XP" instead of os="Windows NT" despite docs

Using Ant 1.6.1 with j2sdk1.4.2. The doc incorrectly says you should use os="Windows NT" for the exec task to launch a batch file. If you try to launch a batch file with the 'exec' task, you should try to use instead:

    <exec dir="bat" executable="cmd" os="Windows XP" failonerror="true">
        <arg line="/c somebatch.bat"/>
    </exec>

If you encounter such a problem, try a verbose execution:

    ant -verbose build.xml

This should give you these results (I've just tailored an adapted build.xml):

    XP_as_NT:
    [echo] batch.bat creation with content : '/c dir ..\ >dirNT.txt'
    [echo] exec executable='cmd' os='Windows NT' failonerror='true'
    [exec] Current OS is Windows XP
    [exec] This OS, Windows XP was not found in the specified list of valid OSes: Windows NT
    [available] Unable to find dirNT.txt to set property Success.XP_as_NT
    Property ${Success.XP_as_NT} has not been set
    [echo] Success.XP_as_NT : ${Success.XP_as_NT}

    BUILD SUCCESSFUL
    Total time: 3 seconds

Compared to this one:

    XP_as_XP:
    [echo] Batch file creation with content : '/c dir ..\ >dirXP.txt'
    [echo] exec executable='cmd' os='Windows XP' failonerror='true'
    [exec] Current OS is Windows XP
    [exec] Executing 'cmd' with arguments:
    [exec] '/c'
    [exec] 'batchXP.bat'
    [exec]
    [exec] The ' characters around the executable and arguments are
    [exec] not part of the command.
    [exec] F:\trucking\ant\bat>dir ..\ 1>dirXP.txt
    [available] Found: dirXP.txt in F:\trucking\ant\bat
    [echo] Success.XP_as_XP : true

    BUILD SUCCESSFUL
    Total time: 3 seconds

Sadly enough, the env variable OS on Windows XP says Windows_NT, and doesn't reflect what you see in these verbose executions. The only simple way to determine the OS then seems to be the windir env variable, which happens to be <somedrive>\WINNT on Windows NT and <somedrive>\Windows on Windows XP. You may test it that way:

    <project name="YourProject">
        <property environment="env"/>
        <target name="init">
            <condition property="osIsXP">
                <equals arg1="${env.HOMEDRIVE}\WINDOWS" arg2="${env.windir}"/>
            </condition>
        </target>
    </project>

Marc Persuy

A potential problem with this solution arises when Windows XP is not installed in the folder "WINDOWS". While it usually resides there, it can be installed in a folder with any name. Also, upgrading from Windows 2000 keeps the default installation directory that 2000 used, "WINNT". Thus, the folder name alone can't guarantee that you're working with XP.

Ryan Stinnett

Implementing a PreProcessor

On Bug 28738 there is a question about a preprocessor. You can do that without any external tasks (but I think the external tasks are more comfortable than this way). The example java file is the commonly known HelloWorld:

    public class HelloWorld {
        public static void main(String arg[]){
            //@debug start
            log("debug:let say this is a debug code or logging");
            //@debug end
            System.out.println("HELLO WORLD");
        }
        //@debug start
        private static void log(String msg) {
            System.out.println("LOG: " + msg);
        }
        //@debug end
    }

We can use <replaceregexp> for deleting the code between debug-start and -end statements:

    <replaceregexp match="//@debug start.*?//@debug end" replace="" flags="gs"/>

Important is the question mark in the match clause (.*?), so we've got minimal pattern matching. Otherwise everything between the first debug-start mark and the very last debug-end will be selected.
And therefore we would lose important code segments. The flag "s" is responsible that we get the whole file at once, and the "g" that we catch all debug statements. After running that command we'll get this:

    public class HelloWorld {
        public static void main(String arg[]){
            System.out.println("HELLO WORLD");
        }
    }

Maybe you will run a code beautifier on that so you'll delete the empty lines ... But you can see that the expected code is generated. And the class file is beautified.

Here is the buildfile for the example. Place it in a project root directory and the original java source in the "src" subdirectory. <> The output would be:

Compounding a property name from the instantiations of multiple previously instanced properties

The Problem

Although plain Ant syntax does not allow one to do so, a simple macrodef can derive property names by pasting together the instantiations of multiple previously instanced properties. Given a set of resource-like properties such as:

    driver.bsd="SomeBSDDriver"
    driver.os2="A.Real.Old.Driver"
    driver.windows="GPFGalore"
    booter.bsd="boot"
    booter.os2="boot.sys"
    booter.windows="ntldr"

You might wish to pass to some target or task properties/parameters such as ${component} and ${targetOS} in the form

    <do-something-with object="${${component}.${targetOS}}"/>

so that if, for example, ${component} were valued as "driver" and ${targetOS} were valued as "os2", the value of ${${component}.${targetOS}} would be the expansion of ${driver.os2}, i.e., "A.Real.Old.Driver". However, Ant expansions of property instantiations are not recursive. So in this instance the expansion of ${${component}.${targetOS}} is not "A.Real.Old.Driver" but instead undefined and possibly varying by Ant version.
(Ant 1.6.1 would yield a literal value of ${${component}.${targetOS}} while Ant 1.7 alpha currently yields ${${component}.os2})

Solution: Macrodef

Define a macro allowing us to express the problem case as:

<macro.compose-property name="object" stem="${component}" selector="${targetOS}"/>
<do-something-with object="${object}"/>

Here is the macro (along lines suggested by Peter Reilly):

<!-- Allows you to define a new property with a value of ${${a}.${b}} which can't be done by the Property task alone. -->
<macrodef name="macro.compose-property">
<attribute name="name"/>
<attribute name="stem"/>
<attribute name="selector"/>
<sequential>
<property name="@{name}" value="${@{stem}.@{selector}}"/>
</sequential>
</macrodef>

Why does <javac> require you to set source="1.4" for Java 1.4 features, "1.5" for Java 1.5?

The command line javac tool automatically selects the latest version of Java for the platform you build on. So when you build on Java 1.4, it is as if -source 1.4 was passed in, while on Java 1.5 you get the effect of -source 1.5. Yet in Ant, you have to explicitly say what version of the Java language you want to build against. If you have the assert keyword in source and omit the source="1.4" attribute, the compile will break. If you use fancy Java 1.5 stuff and forget source="1.5", you get errors when you hit enums, annotations, generics or the new for-each loop.

Q. Why is this the case? Why doesn't Ant "do the right thing"?

A. Because of a deliberate policy decision by the project. Ant was written, first and foremost, to build open source projects. In such a project, you don't know who builds your project; you just give out your source and rely on it compiling everywhere, regardless of which platform or JVM the person at the far end used. We also assume that a source tarball will still compile, years into the future. If <javac> automatically selected the latest version of Java, then any Java 1.2 code that used "enum" or "assert" as a variable name would no longer compile.
To build the old projects, you would have to edit every build file that didn't want to use the latest Java version and force in a source="1.3" attribute. Except if they were other people's projects, you would have to struggle to get that change committed, and things like Apache Gump would never work, because old-JVM projects would never compile straight from the SCM repository. That is why, in Ant, if you want the latest and greatest features of the Java language, you have to ask for them. By doing so you are placing a declaration in your build file of which language version you want to use, both now and into the future. An interesting follow-on question is then "why does javac on the command line risk breaking every makefile-driven Java project by automatically updating the Java language version unless told not to?" That is for Sun to answer, not the Ant team.
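To make the policy above concrete, a <javac> task that pins the language level explicitly might look like the fragment below (a minimal sketch; the "src" and "build" directory names are illustrative, not from the original):

```xml
<!-- Pinning source/target makes the build behave the same on any JDK,
     now and in the future. Directory names are placeholders. -->
<javac srcdir="src" destdir="build" source="1.4" target="1.4"/>
```

With this declaration in place, the build keeps compiling even on newer JDKs where "enum" has become a keyword.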
http://wiki.apache.org/ant/AntOddities
This C# Program Calculates the Distance Travelled by Reading Speed and Time. Here distance is calculated by multiplying speed and time. Here is the source code of the C# Program to Calculate the Distance Travelled by Reading Speed and Time. The C# program is successfully compiled and executed with Microsoft Visual Studio. The program output is also shown below.

/*
 * C# Program to Calculate the Distance Travelled by Reading Speed and Time
 */
using System;

class Program
{
    public static void Main()
    {
        int speed, distance, time;
        Console.WriteLine("Enter the Speed(km/hr) : ");
        speed = Convert.ToInt32(Console.ReadLine());
        Console.WriteLine("Enter the Time(hrs) : ");
        time = Convert.ToInt32(Console.ReadLine());
        distance = speed * time;
        Console.WriteLine("Distance Travelled (kms) : " + distance);
        Console.ReadLine();
    }
}

Here is the output of the C# Program:

Enter the Speed(km/hr) : 5
Enter the Time(hrs) : 4
Distance Travelled (kms) : 20

Sanfoundry Global Education & Learning Series – 1000 C# Programs. If you wish to look at all C# Programming examples, go to 1000 C# Programs.
https://www.sanfoundry.com/csharp-program-distance-speed-time/
" For the first time in a few years, virtualization was not on the agenda at the 2007 kernel summit. The related field of containers, however, was deemed worth talking about. The virtualization problem has been mostly solved, at least at the kernel level, but there is still a lot of work to do in the containers area. Paul Menage talked about the process containers patch, which has recently been rebranded "control groups." The control groups API is currently being used by the CFS scheduler, cpusets, and the memory controller code. Work in progress includes rlimits and an interface to the process freezer used by the suspend/resume code. Controlling the freezer via control groups allows user space to freeze specific groups of processes, which, in turn, is very useful when implementing checkpointing and live migration. In particular, with control groups, it will be possible to freeze an entire group of processes in an atomic way. Control groups have very little overhead when not in use. There is an approximately 1% hit on the fork() and exec() calls when control groups are being used. The control groups code is managed by way of a virtual filesystem. This filesystem is a user-space API which must be managed carefully; there needs to be consistency across the various controllers which can work with control groups. To that end, parts of this interface are being pushed into generic code when possible. One other issue is the use of control groups within containers. It would be nice if a containerized system could manage control groups for processes within the container, but that is not yet implemented. Eric Biederman talked about the container situation in general. Implementing containers requires the creation of container-specific namespaces for all of the global resources found on the system. Namespaces for time, SYSV interprocess communication primitives, and users are in the mainline now. There is a process ID namespace patch in -mm which is getting close. 
Network namespaces are in development now. Resources which still need to have namespaces created for them include system time (important to keep time from moving backward when containers are migrated from one system to another) and devices. Each namespace which is created requires an option to the clone() system call to say whether it should be shared or not. It seems that there may not be enough clone bits to go around; how that problem will be solved is not clear. So, how close are we to having a working container solution? It is still somewhat distant, says Eric. But, when it's done, the support for containers in Linux will be more general and more capable than the options which are available now. It is, he says, a more general solution than OpenVZ, and, unlike Solaris Zones, it will have network namespaces. An important milestone will be the incorporation of PID namespaces, which will make it possible to start actually playing with Linux containers. That code should, with luck, be merged before too long, though it is proving to be a bit of a challenge: kernel code has process IDs hidden away in a number of unexpected places. Stay tuned; perhaps, by the next kernel summit, containers will be considered to be a solved problem as well. KS2007: Containers Posted Sep 10, 2007 22:59 UTC (Mon) by kolyshkin (subscriber, #34342) [Link] By the way, slides used for this session are available here. An important milestone will be the incorporation of PID namespaces, which will make it possible to start actually playing with Linux containers. That code should, with luck, be merged before too long (Most of) PID namespaces code are already in -mm tree. It is, he says, a more general solution than OpenVZ So, how close are we to having a working container solution? A big part here is resource management. 
Memory controller that is now in -mm is just the very beginning -- there is a whole lot more than RSS and page cache (from the other side, Pavel Emelyanov already sent kernel memory controller patchset as an RFC). Group-based CFQ scheduling is not yet merged AFAIK. Group I/O scheduling (based on Jens Axboe's CFQ) will probably be sent for review soon; but scheduling delayed writes requires some dirty page tracking mechanism that only exists in OpenVZ for now (described in Pavel's paper), a discussion of how to implement that for mainstream is not even started. At the end -- there are a lot of issues to be solved, but given the latest progress, most of the functionality could be there in a year or so, so I more or less agree with your optimistic forecast. :) When containers are ready, we can start work on checkpointing.
http://lwn.net/Articles/249080/
Web Services Development: Starting Points for JAX-WS This document describes the following starting points of developing JAX-WS-enabled Web services: 1. Generating a Web Service From a WSDL File To generate a Web service from a WSDL, follow this procedure: This opens the New Web Service from WSDL dialog that lets you specify many parameters for the generated Web service, including the port, Java package, and so on, as Figure 1 shows. Figure 1. New Web Service from WSDL Dialog Selecting Keep generated Ant Script saves an Ant file for modification and reuse of the generation process. In addition to the Web service implementation class, this will create a JAR file that contains a Web service interface class, as well as types referenced in the original WSDL file. The default location for the JAR file is the project's WEB-INF/lib directory. If you select a different location that is not on the class path, your Web service is unlikely to function properly. WEB-INF/lib You can test your Web service using the Test Client. For more information, see Testing Web Services. 2. Generating a Web Service From Java 2.1 Starting From Java Class To create a Web service from a Java class, do the following: Figure 2. New Java Class Dialog javax.jws.* @WebService @WebMethod package pkg; import javax.jws.*; @WebService public class MyClass { @WebMethod public void myMethod() { } } Having followed the preceding procedure, you created a Web service from a Java class. You can run it by right-clicking the new class, and then selecting Run As > Run on Server from the drop-down menu. 2.2. Starting From Scratch To create a Web service from scratch using Java, do the following: Figure 4. New Web Service Dialog MyWebService.java Figure 5. Web Service Class 3. Starting From a Java Class and Generating a WSDL File 4. Related Information 4.1 Contents of a WSDL File Files with the WSDL extension contain Web service interfaces expressed in the Web Service Description Language (WSDL). 
WSDL is a standard XML document type specified by the World Wide Web Consortium (W3C). WSDL files communicate interface information between Web service producers and consumers. A WSDL description allows a client to utilize a Web service's capabilities without knowledge of the implementation details of the Web service. A WSDL file contains the following information necessary for a client to invoke the methods of a Web service: 4.2 Imported WSDL Files WSDL files can also be found through both public and private UDDI registries. Once you have the WSDL file, you may use Eclipse to create a Web service. Some Web service tools produce WSDL files that do not work with Eclipse. Note that the encoding attribute is not required. If an encoding attribute is not present, the default encoding is UTF-8. 4.3 Creating a New WSDL File Figure 6. New WSDL File Dialog 4.4 Testing Web Services Using the test client, you can do the following: Alternatively, you can launch the test client without using the Eclipse IDE by launching the client through a Web browser, as follows:
http://docs.oracle.com/cd/E14545_01/help/oracle.eclipse.tools.weblogic.doc/html/webservices/start.html
01 May 2009 16:47 [Source: ICIS news] WASHINGTON (ICIS news)--The US manufacturing sector continued to contract in April, the Institute for Supply Management (ISM) said on Friday, but the decline is easing and shows “significant improvement” and a good start for the second quarter. The institute said that its closely watched purchasing managers index (PMI) rose to 40.1% in April from 36.3% in March. A reading of 40.1% is still low, but it marks the first reading above 40% in six months. A PMI of 50% or higher indicates that manufacturing industries - key downstream consuming sectors for chemicals and resins - are experiencing growth. A reading below 50% means that the broad manufacturing sector is in contraction. The PMI has been below 40% since October last year and was last in the 40% range in September 2008 when it registered 43.4%. So while the April reading of 40.1% shows that manufacturing industries, including chemicals, are still in decline, it is a better showing than at any time in the previous half-year, the institute noted. “The decline in the manufacturing sector continues to moderate,” said Norbert Ore, chairman of the institute’s survey committee. “After six consecutive months below the 40% mark, the PMI ... shows a significant improvement.” He said the new orders index - one of ten separate indexes that combine to generate the overall PMI - shot up to 47.2% in April from a March reading of 41.2%. In addition, he said, “This is definitely a good start for the second quarter.” He cautioned, however, that “While this is a big step forward, there is still a large gap that must be closed before manufacturing begins to grow once again”. To compile the monthly PMI, the institute surveys 19 key industries on ten business performance measures. Chemicals make up one of those surveyed industries and plastics and rubber products constitute another.
Among comments from manufacturers in the April survey was one from an executive at a chemical products firm who said that “We are optimistic that things will change for the better in the third quarter”.
http://www.icis.com/Articles/2009/05/01/9213039/us-manufacturing-still-weak-but-shows-improvement-ism.html
#include <wx/secretstore.h> A collection of secrets, sometimes called a key chain. This class provides access to the secrets stored in the OS-provided facility, e.g. credentials manager under MSW, keychain under OS X or Freedesktop-compliant password storage mechanism such as GNOME keyring under Unix systems. Currently only the access to the default keychain/ring is provided using GetDefault() method, support for other ones could be added in the future. After calling this method just call Save() to store a password entered by user and then call Load() to retrieve it during next program execution. See Secret Store Sample for an example of using this class. The service parameter of the methods in this class should describe the purpose of the password and be unique to your program, e.g. it could be "MyCompany/MyProgram/SomeServer". Note that the server name must be included in the string to allow storing passwords for more than one server. Notice that this class is always available under MSW (except when using MinGW32 which doesn't provide the required wincred.h header) and OS X but requires libsecret (see) under Unix and may not be compiled in if it wasn't found. You can check wxUSE_SECRETSTORE to test for this. Moreover, retrieving the default secret store may also fail under Unix during run-time if the desktop environment doesn't provide one, so don't forget to call IsOk() to check for this too. Example of storing credentials using this class: And to load it back: Delete a previously stored username/password combination. If anything was deleted, returns true. Otherwise returns false and logs an error if any error other than not finding any matches occurred. Check if this object is valid. Look up the username/password for the given service. If no username/password is found for the given service, false is returned. Otherwise the function returns true and updates the provided username and password arguments. Store a username/password combination. 
The service name should be user readable and unique. If a secret with the same service name already exists, it will be overwritten with the new value. In particular, notice that it is not currently allowed to store passwords for different usernames for the same service, even if the underlying platform API supports this (as is the case for macOS but not MSW). Returns false after logging an error message if an error occurs, otherwise returns true indicating that the secret has been stored and can be retrieved by calling Load() later.
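The storing and loading examples referenced above did not survive extraction. The following is a minimal hedged sketch of what they might look like, assuming the documented wxSecretStore API (GetDefault(), IsOk(), Save(), Load()); the service string, credentials and function name are illustrative, not from the original:

```cpp
#include <wx/secretstore.h>

// Sketch only: error logging and UI integration omitted.
void StoreAndLoadPassword()
{
    wxSecretStore store = wxSecretStore::GetDefault();
    if (!store.IsOk())
        return; // No secret store available (e.g. bare Unix desktop).

    const wxString service = "MyCompany/MyProgram/SomeServer"; // illustrative

    // Store a username/password combination.
    if (!store.Save(service, "username", wxSecretValue("password")))
        return;

    // During the next program execution: load it back.
    wxString username;
    wxSecretValue password;
    if (store.Load(service, username, password))
    {
        // Use the secret; avoid keeping plain-text copies around.
    }
}
```

Note the IsOk() check before any Save()/Load() call, matching the run-time caveat for Unix desktop environments described above.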
http://docs.wxwidgets.org/trunk/classwx_secret_store.html
A tutorial on Burrows-Wheeler indexing methods (3)

We first define an occ_t type that will be the basic building block of the Occ table. The purpose of this definition will be explained at the relevant section. Next, we define a couple of macros ( L4, L16 and L32) that will come in handy to manipulate the down-sampled arrays. Then come the declarations of the variables of interest. Declaring all the variables as global is bad programming practice, but here it allows me to simplify the code and to avoid the discussion of memory management in C. Compared to the second part, BWT and OCC now have fewer entries because they will be down-sampled. We also declare two new variables: CSA and po$. The array CSA will hold the compressed suffix array, and the integer po$ will hold the position of the terminator $ in the Burrows-Wheeler transformed text, which we need to record for reasons that will be explained below. Other than that, the only change to the main is the call to compress_SA(), which will compress the suffix array. This is the last step of index construction because we need an intact suffix array to construct the Burrows-Wheeler transformed text, the Occ table and the C array.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct occ_t {
   uint32_t bits;
   uint32_t smpl;
};

typedef struct occ_t occ_t;

#define L 14 // Length of the text.
#define L4 ((L+3) / 4 ) // Length down-sampled 4 times.
#define L16 ((L+15) / 16) // Length down-sampled 16 times.
#define L32 ((L+31) / 32) // Length down-sampled 32 times.

// Global variables.
char TXT[L] = "GATGCGAGAGATG$";
int SA[L] = {0,1,2,3,4,5,6,7,8,9,10,11,12,13};
int CSA[L16] = {0};
char BWT[L4] = {0};
int C[4] = {0};
occ_t OCC[4][L32] = {0};
int po$; // Position of the terminator in BWT.

int main(void) {
   qsort(SA, L, sizeof(int), compare_suffixes);
   construct_BWT();
   construct_C_and_OCC();
   compress_SA(); // Can delete 'SA' from here.
backward_search("GAGA"); } The construction of the suffix array is carried out exactly as in the previous part. Not a single character was changed, but I reproduce the function below for completeness. int compare_suffixes (const void * a, const void * b) { const char * suff_a = &TXT[*(int *)a]; const char * suff_b = &TXT[*(int *)b]; return strcmp(suff_a, suff_b); } Constructing the Burrows-Wheeler transformed text The first way to compress the index is to realize that DNA has only four letters and that we need only two bits of storage per character. Since a byte is 8 bits, we can store 4 DNA letters per byte. The NUM array gives a numeric code between 0 (binary 0b00) and 3 (binary 0b11) to each DNA letter. Instead of storing the original characters, we will store the numeric codes. The unit of memory is the byte, so we need to do some bit twiddling. The operator << shifts bits to the left within a given byte or word. For instance, the result of 0b10 << 2 is 0b1000. To “add” a character to a byte, we use the operator |=, which performs an OR and stores the result in the initial variable. For instance, the result of 0b11 |= 0b10 << 2 is 0b1011. The last bit of syntax that we need to clarify is the symbol % that means “rest of the division by”. For instance 6 % 4 is equal to 2. In the code below NUM[TXT[SA[i]-1]] is the numeric code of the character of the Burrows-Wheeler transformed text at position i. The operation << (2*(i%4)) shifts this code by 0, 2, 4 or 6 bits to the left, depending on i. The values are stored in tmp and every 4 characters ( i % 4 == 3) the value of tmp is stored in BWT, and tmp is reset. The remaining characters in tmp are stored in BWT after the for-loop. The encoding leaves us with a dilemma: what should we do with the terminator $? All the two-bit symbols are already taken. Because this character appears only once, the easiest option is to store this information in a separate variable ( po$) and use it ad hoc when needed. 
If SA[i] is 0, the corresponding character of the Burrows-Wheeler transformed text is $ so we set po$ to i and do not update tmp. The corresponding value of BWT will be 0, so we must be careful not to confuse it with A. int NUM[256] = { ['A'] = 0, ['C'] = 1, ['G'] = 2, ['T'] = 3, ['a'] = 0, ['c'] = 1, ['g'] = 2, ['t'] = 3, }; void construct_BWT (void) { char tmp = 0; for (int i = 0 ; i < L ; i++) { if (SA[i] > 0) tmp |= (NUM[TXT[SA[i]-1]] << (2*(i%4))); else po$ = i; if (i % 4 == 3) { BWT[i/4] = tmp; // Write 4 characters per byte. tmp = 0; } } if (L % 4 != 3) BWT[L4] = tmp; } Retrieving the characters of the Burrows-Wheeler transformed text on demand is now more challenging than in the second part. We write a function get_BWT for this purpose. The argument pos is the position we want to query. If pos is po$, we have to return the code for the terminator $, but since there is none we return -1. In reality, this value is not important because the position of $ is never queried in this implementation. For the other positions, we first access the byte where the character is stored ( BWT[pos/4]), shift the bits to the right ( >> (2 * (pos % 4))), and “erase” the higher bits belonging to other characters ( & 0b11). The last operation sets the 6 higher bits to 0 and keeps the lower two as they are. For instance, the sequence of characters CAGT is encoded as 0b11100001 (from right to left). We retrieve the G by shifting the bits 4 positions to the right ( 0b11100001 >> 4 is 0b00001110) and by keeping only the lower two bits ( 0b00001110 & 0b11 is 0b00000010). int get_BWT (int pos) { if (pos == po$) return -1; return (BWT[pos/4] >> (2 * (pos % 4))) & 0b11; } Constructing C and Occ In this version of the code, the Occ table OCC has been declared as a double array of occ_t (see above). Each variable of type occ_t consists of two consecutive unsigned integers of 32 bits, the first called bits, the second called smpl. 
The second line only declares a shortcut allowing us to write occ_t for struct occ_t. The first member bits is a bit field of size 32 where a bit is set to 1 if and only if the Burrows-Wheeler transformed text contains the given character at the given position (refer to the first part of the tutorial for detail). The second member smpl contains the value of the Occ table for the given character at the given position, but it is computed only every 32 characters of BWT. To construct OCC, we define entry, an array of type occ_t to accumulate the counts of 32 consecutive positions of the Burrows-Wheeler transformed text. Each element of the array corresponds to a nucleotide and encodes the local bit field and sampled value for this nucleotide. The characters of the Burrows-Wheeler transformed text are retrieved one at a time ( get_BWT(i)) and the corresponding item of entry is updated. The statement entry[get_BWT(i)].smpl++ increments the count of the decoded nucleotide (stored in the member smpl). The statement entry[get_BWT(i)].bits |= (1 << (31 - (i % 32))) shifts 0b1 up to 31 positions to the left (depending on the value of i) and updates the member bits with the OR operator described in the previous section. If the character of the Burrows-Wheeler transformed text is $, these operations are not performed. Every 32 positions ( i % 32 == 31), the variable entry is stored in OCC and the bit field is reset (but the member smpl is not because it holds the total cumulative occurrences of the characters). The remaining characters of entry are stored in OCC after the for-loop. The last part of the function fills the C array. As in the previous part of the tutorial, tmp is an accumulator. Note that after scanning the whole Burrows-Wheeler transformed text, entry[j].smpl contains the total count of nucleotide j (numerical encoding), thus tmp contains the cumulative occurrences of each letter (plus 1). 
void construct_C_and_OCC (void) { occ_t entry[4] = {0}; for (int i = 0 ; i < L ; i++) { if (po$ != i) { entry[get_BWT(i)].smpl++; entry[get_BWT(i)].bits |= (1 << (31 - (i % 32))); } if (i % 32 == 31) { for (int j = 0 ; j < 4 ; j++) { OCC[j][i/32] = entry[j]; // Write entry to 'OCC'. entry[j].bits = 0; } } } if (L % 32 != 31) { // Write the last entry if needed. for (int j = 0 ; j < 4 ; j++) OCC[j][L32-1] = entry[j]; } int tmp = 1; for (int j = 0 ; j < 4 ; j++) { tmp += entry[j].smpl; C[j] = tmp; } } In the rest of the code, all the queries to OCC are preceded by a query to C, so we lump them together in a single query function get_rank. The values of bits contain all the necessary information to query the Occ table, but smpl allows us to speed it up. The query consists of a look ahead at the next sampled value ( OCC[c][pos/32].smpl), followed by a short backtrack of at most 31 positions. The backtrack is performed by OCC[c][pos/32].bits << (1+(pos%32)), which shifts the bit field preceding the sampled value up to 32 positions to the left (erasing the upper bits). The number of 1s remaining is computed by the popcount function, here implemented as __builtin_popcount. Finally, the number of 1s is subtracted from the sampled value and the total is returned. int get_rank (int c, int pos) { return C[c] + OCC[c][pos/32].smpl - __builtin_popcount(OCC[c][pos/32].bits << (1+(pos%32))); } Compressing the suffix array The last step of the index construction is to compress the suffix array. In this implementation, I just down-sample it by a factor 16. We could do better by giving each entry the exact number of bits it requires, but this would mean working with unaligned memory (i.e. variables split between consecutive words). To maintain the difficulty at an appropriate level, I prefer to leave out this optimization. The variable CSA contains every entry of SA out of 16. We simply jump over SA and update CSA on the fly. 
In a real implementation, we would free the memory allocated to SA after constructing CSA (in fact we would use a single variable SA and resize it after down-sampling). void compress_SA (void) { for (int i = 0 ; i <= L ; i += 16) { CSA[i/16] = SA[i]; } } To query CSA, we need to reconstruct the missing values on demand. The first part of the tutorial explains how this is done using the Burrows-Wheeler transformed text recursively. If the position we need to access is a multiple of 16, we can directly return the value. If the query is the position of $ in the Burrows-Wheeler transformed text, we know that the value of the suffix array is 0. In other cases, we need to find the position of the preceding suffix in the text, which is get_rank(get_BWT(pos),pos-1). With that information, we perform another query and continue the process until we hit a position that is a multiple of 16 (or the position of $). Each time, we increment the return value by 1 to compensate for the fact that the next suffix is one position on the left of the query. int query_CSA (int pos) { if (pos % 16 == 0) return CSA[pos/16]; if (pos == po$) return 0; return 1 + query_CSA(get_rank(get_BWT(pos),pos-1)); } Implementing the backward search Once all these modifications of the index are in place, the backward search is surprisingly similar to the basic implementation showed in the previous part of the tutorial. The only changes are the calls to get_rank and get_CSA that take care of querying the compressed Occ table and the compressed suffix array, respectively. void backward_search (char * query) { int bot = 1; int top = L-1; for (int pos = strlen(query)-1 ; pos > -1 ; pos--) { int c = NUM[query[pos]]; bot = get_rank(c, bot-1); top = get_rank(c, top)-1; if (top < bot) break; } for (int i = bot ; i <= top ; i++) { printf("%s:%d\n", query, query_CSA(i)); } } Epilogue Compression makes the code substantially more complex than the vanilla implementation. 
The main reason is that we have to care about the representation of the data. Such optimizations are not possible in high-level languages. They are hard to understand and master at first, but they constitute the essence of this indexing method and are worth delving into.
http://blog.thegrandlocus.com/2017/05/a-tutorial-on-burrows-wheeler-indexing-methods-3
In this blog post I want to show you how to take the first steps of databinding in WPF. I often hear or see that people try to start with WPF, but they do not start with MVVM, because databinding frightens them. But why? Databinding is one of the biggest advantages you can have to decouple your logic from your view. With this post I want to give you a short introduction to databinding and how to get set up.

First things first: Why MVVM?

Lately the MV* patterns have been pushed hard and have become established because they give you an easy way to divide your view from the logic which works underneath. On the web, for example, AngularJS gives you a lightweight MVVM pattern, and ASP.NET works with the MVC pattern, which also brings a separation between UI and logic. Advantages are:

Changing the UI without changing the logic: The UI changes more often than the logic. What if green is more "stylish" than the good old "blue"? It has to be changed, but all the things you show stay the same. Just because something looks different, you are not showing different information.

Testability of the logic: Because the logic gets more modular, it can be tested well. Your tests do not need to know about your view or how it looks. The only things your tests are interested in are the output information.

Better overview: You can not only separate the UI and the logic, you can also see it in the code. You have no UI code in your logic and no logic code in your UI.

Different teams: In Scrum or whatever process you use, you can easily divide the work into several parts. UI designers can focus on their work, while programmers code (and test) something completely different. The only touching points are created by the databinding.

Theoretically: What are we doing?

In WPF and C#, the UI files are written as *.xaml files. The viewmodels are normal classes in *.cs files. You can connect them via the DataContext property. This shall point to the ViewModel we are creating for it.
(Also described here.) The code-behind of a window stays empty, no matter what. There are cases where you really do some work there, but these are very rare! Let's see some code. If you just add a XAML file or open a new WPF project in Visual Studio, you can add a normal TextBlock to your XAML like this:

<Window x: <Grid> <TextBlock></TextBlock> </Grid> </Window>

Now add a binding to it. What we want to do is bind the Text property of the TextBlock to a value from the viewmodel. Let's prepare our XAML:

<Window x: <Grid> <TextBlock Text="{Binding NameToDisplay}"></TextBlock> </Grid> </Window>

Now let's do the viewmodel. This is the base for our databinding. It's a normal class. Remember to name it like the view, so other developers can associate it more easily.

public class MainViewModel
{
    public string NameToDisplay { get; set; }

    public MainViewModel()
    {
        NameToDisplay = "Hello World";
    }
}

Remember: This is an external class. It has "nothing" to do (yet) with the UI. There is no connection so far. In a project this could look like this: The viewmodel offers all the data it wants to show to the view (and perhaps some more ;) ). These data are offered as properties. Note: You can create an interface for the viewmodel to see exactly what is on the view and what is not, to get a better overview. But internally WPF will take the real implementation of the viewmodel as the DataContext. Still, for larger views/viewmodels adding an interface can make sense, also for testing/mocking etc. You see the MainWindow.xaml which we edited above and the viewmodel. We have no connection yet. In the last part you have to let the view know about its DataContext. This property can be set to nearly any viewmodel, but it is the source from which the view gets its data. So where does the "Text" property in XAML get its value from…? You can set the DataContext in XAML, but I think it's easier to set it in the code-behind. This is the only thing you should set there!
public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();
        DataContext = new MainViewModel();
    }
}

And there you go. Now the view knows about its DataContext, which is completely separated. It offers its information through properties, and if you press F5 to run the solution you should see something like this: What we did is a normal Hello-World label. Depending on the UI container (ItemsControls, ComboBoxes, ...) you can bind whatever you want to the UI. That is it for the first shot at databinding. This is only the very basics ;), but I wanted you to get the point. Regards, Fabian

[UPDATE] I decided to go on and show you how to bind a list of any objects you want. In my example these are hard-coded. In your case they can (and should ;) ) come from a service/repository or whatever. First let's expand the viewmodel with a Person class which has two properties: Name and Age.

public class MainViewModel
{
    public string NameToDisplay { get; set; }
    public List<Person> ListOfPersons { get; set; }

    public MainViewModel()
    {
        NameToDisplay = "Hello World";
        ListOfPersons = GetListOfPersons();
    }

    private List<Person> GetListOfPersons()
    {
        Person fabianPerson = GetPerson("Fabian", 29);
        Person evePerson = GetPerson("Eve", 100);
        return new List<Person> { fabianPerson, evePerson };
    }

    private Person GetPerson(string name, int age)
    {
        return new Person(name, age);
    }
}

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }

    public Person(string name, int age)
    {
        Name = name;
        Age = age;
    }
}

So just like the plain name, we offer a list of persons on the viewmodel. Now that the viewmodel is our DataContext, the view can access every property on it. So let's access this in XAML:

<Grid>
    <StackPanel>
        <TextBlock Text="{Binding NameToDisplay}"></TextBlock>
        <ItemsControl ItemsSource="{Binding ListOfPersons}">
        </ItemsControl>
    </StackPanel>
</Grid>

But if you run this you only see the namespace and the name of the classes. Why is that?
Because the only thing you give to the ItemsControl is the list of persons. How should it know what to do with them? It calls ToString() (inherited from object) and gets the namespace and the name of the class. So let's tell the UI how to treat the objects. This can be done with an ItemTemplate:

<Window.Resources>
    <DataTemplate x:Key="MyItemTemplate">
        <StackPanel Orientation="Horizontal">
            <Label Content="{Binding Name}"></Label>
            <Label Content="{Binding Age}"></Label>
        </StackPanel>
    </DataTemplate>
</Window.Resources>

<Grid>
    <StackPanel>
        <TextBlock Text="{Binding NameToDisplay}"></TextBlock>
        <ItemsControl ItemsSource="{Binding ListOfPersons}"
                      ItemTemplate="{StaticResource MyItemTemplate}">
        </ItemsControl>
    </StackPanel>
</Grid>

The ItemTemplate tells each object how to appear. In my case these are two labels showing the two properties Name and Age. I don't know why, but this is something every beginner stumbles upon: the DataContext of your view is what we set it to, the MainViewModel. Now you give the collection to the ItemsControl and create an ItemTemplate for each object in the list. So inside the ItemTemplate the "DataContext" is the Person object itself and NOT the MainViewModel anymore! This is why you can access Name and Age in the DataTemplate directly: every item (which the ItemTemplate is for) is a Person, and a Person has the mentioned properties. Great. Now let this thing run and see the result. Have fun, Fabian
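One caveat this walkthrough skips: with plain auto-properties the view reads each value only once, when the binding is first evaluated. If a property changes after the window is shown, the viewmodel needs to implement INotifyPropertyChanged so the binding refreshes. A minimal sketch, assuming the setup from the examples above (the OnPropertyChanged helper name is my own choice, not from the post):

```csharp
using System.ComponentModel;

public class MainViewModel : INotifyPropertyChanged
{
    private string _nameToDisplay;

    public event PropertyChangedEventHandler PropertyChanged;

    public string NameToDisplay
    {
        get { return _nameToDisplay; }
        set
        {
            _nameToDisplay = value;
            // Tell the binding engine this property changed,
            // so the bound TextBlock re-reads its value.
            OnPropertyChanged("NameToDisplay");
        }
    }

    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

For collections the analogous choice is ObservableCollection<T> instead of List<T>, which raises change notifications when items are added or removed.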
https://offering.solutions/blog/articles/2014/09/02/how-to-make-first-steps-of-databinding-in-wpf/
This post is an extract from my presentation at the recent GoCon spring conference in Tokyo, Japan. Errors are just values I’ve spent a lot of time thinking about the best way to handle errors in Go programs. I really wanted there to be a single way to do error handling, something that we could teach all Go programmers by rote, just as we might teach mathematics, or the alphabet. However, I have concluded that there is no single way to handle errors. Instead, I believe Go’s error handling can be classified into the three core strategies. Sentinel errors The first category of error handling is what I call sentinel errors. if err == ErrSomething { … } The name descends from the practice in computer programming of using a specific value to signify that no further processing is possible. So too with Go, we use specific values to signify an error. Examples include values like io.EOF or low level errors like the constants in the syscall package, like syscall.ENOENT. There are even sentinel errors that signify that an error did not occur, like go/build.NoGoError, and path/filepath.SkipDir from path/filepath.Walk. Using sentinel values is the least flexible error handling strategy, as the caller must compare the result to a predeclared value using the equality operator. This presents a problem when you want to provide more context, as returning a different error will break the equality check. Even something as well-meaning as using fmt.Errorf to add some context to the error will defeat the caller’s equality test. Instead the caller will be forced to look at the output of the error's Error method to see if it matches a specific string. Never inspect the output of error.Error As an aside, I believe you should never inspect the output of the error.Error method. The Error method on the error interface exists for humans, not code. The contents of that string belong in a log file, or displayed on screen.
You shouldn’t try to change the behaviour of your program by inspecting it. I know that sometimes this isn’t possible, and as someone pointed out on twitter, this advice doesn’t apply to writing tests. Never the less, comparing the string form of an error is, in my opinion, a code smell, and you should try to avoid it. Sentinel errors become part of your public API If your public function or method returns an error of a particular value then that value must be public, and of course documented. This adds to the surface area of your API. If your API defines an interface which returns a specific error, all implementations of that interface will be restricted to returning only that error, even if they could provide a more descriptive error. We see this with io.Reader. Functions like io.Copy require a reader implementation to return exactly io.EOF to signal to the caller no more data, but that isn’t an error. Sentinel errors create a dependency between two packages By far the worst problem with sentinel error values is they create a source code dependency between two packages. As an example, to check if an error is equal to io.EOF, your code must import the io package. This specific example does not sound so bad, because it is quite common, but imagine the coupling that exists when many packages in your project export error values, which other packages in your project must import to check for specific error conditions. Having worked in a large project that toyed with this pattern, I can tell you that the spectre of bad design–in the form of an import loop–was never far from our minds. Conclusion: avoid sentinel errors So, my advice is to avoid using sentinel error values in the code you write. There are a few cases where they are used in the standard library, but this is not a pattern that you should emulate. 
If someone asks you to export an error value from your package, you should politely decline and instead suggest an alternative method, such as the ones I will discuss next. Error types Error types are the second form of Go error handling I want to discuss. if err, ok := err.(SomeType); ok { … } An error type is a type that you create that implements the error interface. In this example, the MyError type tracks the file and line, as well as a message explaining what happened. type MyError struct { Msg string File string Line int } func (e *MyError) Error() string { return fmt.Sprintf("%s:%d: %s", e.File, e.Line, e.Msg) } return &MyError{"Something happened", "server.go", 42} Because MyError is a type, callers can use a type assertion to extract the extra context from the error. err := something() switch err := err.(type) { case nil: // call succeeded, nothing to do case *MyError: fmt.Println("error occurred on line:", err.Line) default: // unknown error } A big improvement of error types over error values is their ability to wrap an underlying error to provide more context. An excellent example of this is the os.PathError type which annotates the underlying error with the operation it was trying to perform, and the file it was trying to use. // PathError records an error and the operation // and file path that caused it. type PathError struct { Op string Path string Err error // the cause } func (e *PathError) Error() string Problems with error types Because the caller can use a type assertion or type switch, error types must be made public. If your code implements an interface whose contract requires a specific error type, all implementors of that interface need to depend on the package that defines the error type. This intimate knowledge of a package’s types creates a strong coupling with the caller, making for a brittle API.
Conclusion: avoid error types While error types are better than sentinel error values, because they can capture more context about what went wrong, error types share many of the problems of error values. So again my advice is to avoid error types, or at least, avoid making them part of your public API. Opaque errors Now we come to the third category of error handling. In my opinion this is the most flexible error handling strategy as it requires the least coupling between your code and caller. I call this style opaque error handling, because while you know an error occurred, you don’t have the ability to see inside the error. As the caller, all you know about the result of the operation is that it worked, or it didn’t. This is all there is to opaque error handling–just return the error without assuming anything about its contents. If you adopt this position, then error handling can become significantly more useful as a debugging aid. import “github.com/quux/bar” func fn() error { x, err := bar.Foo() if err != nil { return err } // use x } For example, Foo‘s contract makes no guarantees about what it will return in the context of an error. The author of Foo is now free to annotate errors that pass through it with additional context without breaking its contract with the caller. Assert errors for behaviour, not type In a small number of cases, this binary approach to error handling is not sufficient. For example, interactions with the world outside your process, like network activity, require that the caller investigate the nature of the error to decide if it is reasonable to retry the operation. In this case rather than asserting the error is a specific type or value, we can assert that the error implements a particular behaviour. Consider this example: type temporary interface { Temporary() bool } // IsTemporary returns true if err is temporary. 
func IsTemporary(err error) bool { te, ok := err.(temporary) return ok && te.Temporary() } We can pass any error to IsTemporary to determine if the error could be retried. If the error does not implement the temporary interface; that is, it does not have a Temporary method, then the error is not temporary. If the error does implement Temporary, then perhaps the caller can retry the operation if Temporary returns true. The key here is this logic can be implemented without importing the package that defines the error or indeed knowing anything about err's underlying type–we’re simply interested in its behaviour. Don’t just check errors, handle them gracefully This brings me to a second Go proverb that I want to talk about; don’t just check errors, handle them gracefully. Can you suggest some problems with the following piece of code? func AuthenticateRequest(r *Request) error { err := authenticate(r.User) if err != nil { return err } return nil } An obvious suggestion is that the five lines of the function could be replaced with return authenticate(r.User) But this is the simple stuff that everyone should be catching in code review. More fundamentally the problem with this code is I cannot tell where the original error came from. If authenticate returns an error, then AuthenticateRequest will return the error to its caller, who will probably do the same, and so on. At the top of the program the main body of the program will print the error to the screen or a log file, and all that will be printed is: No such file or directory. There is no information about the file and line where the error was generated. There is no stack trace of the call stack leading up to the error. The author of this code will be forced into a long session of bisecting their code to discover which code path triggered the file not found error.
Donovan and Kernighan’s The Go Programming Language recommends that you add context to the error path using fmt.Errorf func AuthenticateRequest(r *Request) error { err := authenticate(r.User) if err != nil { return fmt.Errorf("authenticate failed: %v", err) } return nil } But as we saw earlier, this pattern is incompatible with the use of sentinel error values or type assertions, because converting the error value to a string, merging it with another string, then converting it back to an error with fmt.Errorf breaks equality and destroys any context in the original error. Annotating errors I’d like to suggest a method to add context to errors, and to do that I’m going to introduce a simple package. The code is online at github.com/pkg/errors. The errors package has two main functions: // Wrap annotates cause with a message. func Wrap(cause error, message string) error The first function is Wrap, which takes an error, and a message and produces a new error. // Cause unwraps an annotated error. func Cause(err error) error The second function is Cause, which takes an error that has possibly been wrapped, and unwraps it to recover the original error. Using these two functions, we can now annotate any error, and recover the underlying error if we need to inspect it. Consider this example of a function that reads the content of a file into memory. func ReadFile(path string) ([]byte, error) { f, err := os.Open(path) if err != nil { return nil, errors.Wrap(err, "open failed") } defer f.Close() buf, err := ioutil.ReadAll(f) if err != nil { return nil, errors.Wrap(err, "read failed") } return buf, nil } We’ll use this function to write a function to read a config file, then call that from main. 
func ReadConfig() ([]byte, error) { home := os.Getenv("HOME") config, err := ReadFile(filepath.Join(home, ".settings.xml")) return config, errors.Wrap(err, "could not read config") } func main() { _, err := ReadConfig() if err != nil { fmt.Println(err) os.Exit(1) } } If the ReadConfig code path fails, because we used errors.Wrap, we get a nicely annotated error in the K&D style. could not read config: open failed: open /Users/dfc/.settings.xml: no such file or directory Because errors.Wrap produces a stack of errors, we can inspect that stack for additional debugging information. This is the same example again, but this time we replace fmt.Println with errors.Print func main() { _, err := ReadConfig() if err != nil { errors.Print(err) os.Exit(1) } } We’ll get something like this: readfile.go:27: could not read config readfile.go:14: open failed open /Users/dfc/.settings.xml: no such file or directory The first line comes from ReadConfig, the second comes from the os.Open part of ReadFile, and the remainder comes from the os package itself, which does not carry location information. Now we’ve introduced the concept of wrapping errors to produce a stack, we need to talk about the reverse, unwrapping them. This is the domain of the errors.Cause function. // IsTemporary returns true if err is temporary. func IsTemporary(err error) bool { te, ok := errors.Cause(err).(temporary) return ok && te.Temporary() } In operation, whenever you need to check an error matches a specific value or type, you should first recover the original error using the errors.Cause function. Only handle errors once Lastly, I want to mention that you should only handle errors once. Handling an error means inspecting the error value, and making a decision. func Write(w io.Writer, buf []byte) { w.Write(buf) } If you make less than one decision, you’re ignoring the error. As we see here, the error from w.Write is being discarded. 
But making more than one decision in response to a single error is also problematic. func Write(w io.Writer, buf []byte) error { _, err := w.Write(buf) if err != nil { // annotated error goes to log file log.Println("unable to write:", err) // unannotated error returned to caller return err } return nil } In this example if an error occurs during Write, a line will be written to a log file, noting the file and line where the error occurred, and the error is also returned to the caller, who possibly will log it, and return it, all the way back up to the top of the program. So you get a stack of duplicate lines in your log file, but at the top of the program you get the original error without any context. Java anyone? func Write(w io.Writer, buf []byte) error { _, err := w.Write(buf) return errors.Wrap(err, "write failed") } Using the errors package gives you the ability to add context to error values, in a way that is inspectable by both a human and a machine. Conclusion In conclusion, errors are part of your package’s public API, treat them with as much care as you would any other part of your public API. For maximum flexibility I recommend that you try to treat all errors as opaque. In the situations where you cannot do that, assert errors for behaviour, not type or value. Minimise the number of sentinel error values in your program and convert errors to opaque errors by wrapping them with errors.Wrap as soon as they occur. Finally, use errors.Cause to recover the underlying error if you need to inspect it.
https://dave.cheney.net/2016/04/27/dont-just-check-errors-handle-them-gracefully
09 February 2011 04:07 [Source: ICIS news] SINGAPORE (ICIS)-- Its petrochemical segment, which accounted for about a third of total revenue, had a 4% year-on-year decline in sales to NT$22.7bn, based on data posted on the company’s website. FPCC was operating its three crackers with a total capacity of 2.93m tonnes/year at full capacity in late January. Based on data posted on the company’s website, FPCC’s annualised monthly sales in 2010 weakened starting July, when its Mailiao petrochemical complex had two fire incidents. The first fire in early July shut the company’s 700,000 tonne/year No 1 cracker, and the second incident late that month prolonged the shutdown of the cracker for more than three months. This caused sales to contract for three consecutive months, before posting a 6% growth in October. Sales fell again in November but reversed back to growth in December, according to the data. Notwithstanding the weakness in the second half of the year, FPCC’s revenue grew 17.8% in 2010 to NT$747.3bn, after slumping by 27.6% in 2009, the data showed. Petrochemical sales surged 29.7% to NT$233.3bn in 2010, FPCC said. ($1 = NT$28.93)
http://www.icis.com/Articles/2011/02/09/9433476/taiwans-fpcc-jan-revenue-dips-1-on-weak-petrochemical-sales.html
CC-MAIN-2015-14
refinedweb
211
68.16
I'm using Capistrano and git to deploy a RoR app. I have a folder under which each user has their own folder. When a user uploads or saves a file, it is saved in their own folder. When I deploy new versions of the code to the server, the user files and folders are overwritten with what's on my dev machine. Is there a way to ignore some folders in Capistrano, like we do in git? This post suggests using symlinks and storing the user files in a shared folder. But it's an old post, so I'm wondering if there is a better way to do it now. Also, does anyone know of any good screencasts/tutorials to recommend for using RoR+git+capistrano? Thanks. You should move the user's folders outside of Capistrano's releases directory. The usual approach is to have Capistrano create symbolic links to the directories that should be preserved across deployments. Here's an example from my Rails blog application's config/deploy.rb whereby files for download within blog posts and images used within posts are stored in a shared directory:

after :deploy, 'deploy:link_dependencies'

namespace :deploy do
  desc <<-DESC
    Creates symbolic links to configuration files and other dependencies after deployment.
  DESC
  task :link_dependencies, :roles => :app do
    run "ln -nfs #{shared_path}/public/files #{release_path}/public/files"
    run "ln -nfs #{shared_path}/public/images/posts #{release_path}/public/images/posts"
  end
end
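On newer Capistrano versions (3.x), this pattern is built in: directories listed under :linked_dirs live in shared/ and are symlinked into every release automatically, so their contents survive deployments. A sketch for config/deploy.rb, assuming your user folders live under public/uploads (adjust the path to your actual users directory):

```ruby
# config/deploy.rb (Capistrano 3.x)
# Each listed directory is created under shared/ on the server and
# symlinked into every release, so its contents survive deploys.
set :linked_dirs, fetch(:linked_dirs, []).push("public/uploads", "log", "tmp/pids")
```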
https://codedump.io/share/7GqELckpur4o/1/how-do-i-prevent-capistrano-from-overwriting-files-uploaded-by-users-in-their-own-folders
ASB Quarterly Investor Confidence Report Investors Confident But Increasingly Cautious New Zealand investors have ended what proved to be another profitable year in a positive mood, according to the latest ASB Investor Confidence report. When asked: Do you expect your net return from investments this year to be better or worse than last year? (Chart 1) A net 16% of those surveyed in the December quarter (up 3% from the September quarter) expect the return from their investments to be better this year than last year. While this is below the net 24% reported twelve months earlier, this level of expectation still points to a group of generally optimistic investors. “Investors seem to be facing 2006 in a similar frame of mind to last year, albeit a bit more cautious,” says Anthony Byett, Chief Economist ASB. “This time last year the warnings were that the high returns of 2004 were unlikely to be repeated. As it was there were double digit benchmark returns for key asset classes such as property and equities.” The latest dwelling sales figures from REINZ show the median dwelling sale price to be up 13.5% between December 2004 and December 2005. The NZX report their NZX50 benchmark to be up 10.0% for the year. A number of offshore equity markets did even better. These good returns come after reservations widely expressed at the start of the year. When asked: What type of investment gives the best return? (Chart 2) Residential rental property remains the asset class that is most widely expected to provide the best return (over an indeterminate horizon), up one percent to 24%. Term deposits were ranked second on 13% (up 1%). Thanks to the 7% plus rates on offer term deposits had a strong end to the year and look to be closing the gap on residential rental property. Shares were rated third on 10%, followed by managed investments on 9%. Those in the top of the North Island and South Island continue to prefer residential rental property ahead of managed funds. 
Those in the lower North Island, who had previously held a more neutral position between the two asset classes, had a major reversal in the fourth quarter, with an 18% change leading to a 20% preference for residential rental property (from 2%). When asked: How confident are you in your current main investment? One of the big changes in the latest ASB Investor Confidence report was amongst those commenting on their main investment. Confidence amongst those with residential rental property as their current main investment decreased 2% to 61%. Conversely, confidence amongst those with equities as their main investment leapt 23 points to 67%. “A word of caution is still appropriate. This latest report has thrown up some more volatile results than we normally see, perhaps as a result of the confusion between forecasts for future results and current asset performance,” says Mr Byett. “There was also some slippage in confidence over the quarter, a development we will watch closely over 2006. “With the Reserve Bank’s increases to the cash rate last year and subsequent increases across all lending institutions for mortgages the dominance of residential rental property as the standout preference amongst New Zealand investors may be coming to an end. “Whether the pundits or the public are correct in 2006 it is clear that in an environment of a slowing economic growth rate and of property and equity markets facing more selling pressure a prudent and cautious approach – and balanced approach – is recommended.” Ends The ASB Quarterly Investor Confidence Survey is a nationwide survey, which has been undertaken every quarter since May 1998 interviewing a sample of up to 1000 respondents. A sample of this size has a maximum margin of error of ±3.65 at 95% confidence.
http://www.scoop.co.nz/stories/BU0601/S00129.htm
Bucket sort is a sorting algorithm that works by inserting the elements of the array into buckets; each bucket is then sorted individually. The idea behind bucket sort is that if we know the range of our elements to be sorted, we can set up buckets for each possible element, and just toss elements into their corresponding buckets. We then empty the buckets in order, and the result is a sorted list. It is similar to radix sort. How it works: Initially we set up an array of empty buckets, then put each object into its bucket. After this, we sort each bucket of elements, and then we pass through each bucket in order and gather all the elements back into the original array. The number of buckets can be chosen by the programmer. Step by step example: Given the following list, let's use bucket sort to arrange the numbers from lowest to greatest. Unsorted list: For this example, let us use 3 buckets; the first is 1-3, the second is 4-6, and the last is 7-9. On the first pass, the algorithm goes through the array and moves each element into its proper bucket, and at the end of the pass it will look like this: Now, we sort each bucket individually, using either bucket sort again or a different algorithm. Finally, we take the elements from the buckets in order and insert them into the original array, and the elements end up in the proper order.
Sample code:

#include <iostream>
#include <cstdlib>
using namespace std;
#define m 10

void bucketsort(int *a, int n)
{
    int buckets[m];
    for (int j = 0; j < m; ++j)
        buckets[j] = 0;
    for (int i = 0; i < n; ++i)
        ++buckets[a[i]];        // count each value into its bucket
    for (int i = 0, j = 0; j < m; ++j)
        for (int k = buckets[j]; k > 0; --k)
            a[i++] = j;         // rebuild the array from the counts
}

int main()
{
    int n;
    int *a;
    cout << "Please insert the number of elements to be sorted: ";
    cin >> n; // the total number of elements
    a = (int *)calloc(n, sizeof(int));
    cout << "\nThe elements must be lower than the value of m = " << m << endl;
    for (int i = 0; i < n; i++)
    {
        cout << "Input " << i << " element: ";
        cin >> a[i]; // adding the elements to the array
    }
    cout << "Unsorted list:" << endl; // displaying the unsorted array
    for (int i = 0; i < n; i++)
        cout << a[i] << " ";
    bucketsort(a, n);
    cout << "\nSorted list:" << endl; // displaying the sorted array
    for (int i = 0; i < n; i++)
        cout << a[i] << " ";
    free(a);
    return 0;
}

Output: Code explanation: First, we define the value of m, which means that all the elements we introduce into the array must be lower than m. Next, we make buckets of size m, initialize them to zero, and then count each element into its proper bucket. No other sorting algorithm is required here, because we use one bucket per possible value; this counting variant may remind you of radix sort. Complexity: The time complexity of bucket sort is O(n + m), where m is the range of input values and n is the total number of values in the array. In this setting bucket sort beats comparison-based sorting routines in time complexity. It should only be used when the range of input values is small compared with the number of values, in other words, when there are a lot of repeated values in the input. This variant works by counting the number of instances of each input value throughout the array. It then reconstructs the array from this auxiliary data.
This implementation has a configurable input range, and will use the least amount of memory possible. Advantages: the user knows the range of the elements; the time complexity is good compared to other algorithms. Disadvantages: you must know the greatest element in advance; extra memory is required. Conclusion: If you know the range of the elements, bucket sort is quite good, with a time complexity of only O(n + m). At the same time, this is its major drawback, since you must know the greatest element. It is a distribution sort and a cousin of radix sort.
http://www.exforsys.com/tutorials/c-algorithms/bucket-sort.html