text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
lp:~reviczky/luatex/texlive-base-debian Created by Adam Reviczky and last modified - Get this branch: - bzr branch lp:~reviczky/luatex/texlive-base-debian Only Adam Reviczky can upload to this branch. If you are Adam Reviczky please log in for upload directions. Branch merges Related bugs Related blueprints Branch information - Owner: - Adam Reviczky - Status: - Development Recent revisions - 7. By Adam Reviczky Can't locate TeXLive/TLPDB.pm in @INC - 6. By Adam Reviczky update texmf.cnf-debian patch - Hunk #1 FAILED at 54. - 5. By Adam Reviczky adjust patches to trunk - 4. By Adam Reviczky adjust packaging - 3. By Adam Reviczky adjust changelog for daily packaging - 2. By Adam Reviczky delete Master for reducing source tarball - 1. By Adam Reviczky import texlive-base 2017.20180110-1
https://code.launchpad.net/~reviczky/luatex/texlive-base-debian
CC-MAIN-2022-33
refinedweb
127
53.78
Difference between revisions of "Janitorial tasks" Revision as of 23:32, 11 August 2007 This is a quick list of tasks needing to be done, that no one has gotten around to yet. They're not in any order yet, and simply have a note at the end with who added them to the list, so if you have questions, you can ask them. (Or, ask the devel mailing list.) See also DeveloperManual.. Class Design Consistency - Adding. (JonCruz) class Foo { ... private: Foo(const Foo& other); // no copy void operator=(const Foo& other); // no assign }; Add Assertions - Look for functions that are using pointer variables without checking against NULL first. Add a call to g_assert( ), g_return_if_fail( ), etc.: void inkscape_dispose(GObject * obj) { Inkscape::Application *inkscape = (Inkscape::Application *) object; g_assert(inkscape != NULL); // <-- add this assertion while (inkscape->documents) { /* ... */ } } (BPF) This could be contentious, and g_assert( ) will nearly always be the wrong thing here. I would not recommend adding such assertions to apparently good code, but it is certainly fair to assert before dereferencing a pointer in new code, with a view to removing the assertion once the code is de-bugged. I may have become confused because if the function in question is known to (or better, documented to) require a non-NULL pointer input then there should be a PRECONDITION macro at the head of the function which must provide correct run-time and release time behaviour. This would be the case even if the first use of the pointer is hundreds of lines into the function. See Bug [ 1210100 ] "Selecting corrupted embedded images crashes". to it (as stated): //#include "foo.h" /* <-- kill the header include! */ class Foo; /* <-- replace with a forward declaration */ class Bar { Foo* _foo; }; (BPF) The C++ FAQ Sheet explains how to code forward declarations (classes that both need to know about each other); To fully understand why, you should study the Pimpl idiom. - Move class documentation from the .h to the .cpp. We have an automated code documentation generator called Doxygen, that generates HTML docs from comments in the .h or .cpp files. Sometimes people put the code docs in the .h, but this means that whenever you update the docs, everything including that .h has to be recompiled, even if there are no _real_ changes to the code. Instead, move all these comments into the corresponding .cpp file. It's okay to have a short paragraph comment at the top of the .h file explaining what it is. Cleanup: Whitespace - Tabs in the source just lead to many troubles, so they aren't supposed to be used. One problem is that just converting them to spaces introduces extra diffs in the CVS history, so someone wanting to remove tabs should first figure out where they are and then talk over an approach for cleansing them. (JonCruz) - Trailing whitespace is also a non-visible but diff-confusing issue, and should also eventually tracked down and removed. (JonCruz) - See also which has a script for removing trailing whitespace. The thread talks about cleaning these when the area is touched anyway, but is against doing a full cleanup. Cleanup: Syntactical - Make sure all files include the standard copyright/license info -- be careful though! Check with ALL the listed copyright holders before replacing a license header. If there is no header, track down the authors and get permission before. Note that a copyright notice to "The Inkscape Organization" is not valid; you will need to track down the original authors in that case too. Cleanup: Modelines - Make sure all files include the emacs Local Variable block and a vim modeline at the end of the file. There is an example modeline on the Coding_Style page, but there are several versions in the codebase. Should they be made to agree? Documentation - Go through .h files and put a sentence or two comment at the top explaining what the class is. Don't be too detailed; details belong in the comments in the corresponding .cpp file. - Add comments to each function in .cpp files. Pick a .cpp file and read through it. Before each function, add comments describing what the function does. See the files in the inkscape/extensions/ directory as examples. We want ALL of the Inkscape sourcecode documented like that. config.h (Done by GigaClon, patch posted 05-09-05) Replace instances of #include <config.h> with #ifdef HAVE_CONFIG_H # include "config.h" #endif Note the "" instead of <> More details needed (BPF) See -Wold-style-cast (BPF) There could well be quite a lot of these. In some cases, this is because we are including C variables and concepts in C++ files, but there are many cases which I could classify as a True Bill. Do we in fact want to remove all C-Style casts ... - Fix all the gcc 3.4.2 warnings (inkblotter) (BPF) As reported in [Patch 1223928 ] "Convert call(s) to gtk_widget_set_usize ...", all the warnings cam be easily fixed, save for one in canvas-arena.cpp where we apply a C macro to a C++ struct. It is to be hoped that all new code will be warning-free, but I am not sure whether we should be in a hurry to commit patches to fix old code as the benefit in doing so might be small.
http://wiki.inkscape.org/wiki/index.php?title=Janitorial_tasks&diff=16046
CC-MAIN-2019-43
refinedweb
879
73.37
How can we store the output of 'type' function in a variable ? next = raw_input('> ') type = type(next) if type == 'int': val = int(next) else: print "Type a number!" There are several ways of doing what you want. Note that defining a type variable to mask the type function is not good practice! (and next either BTW :)) n = raw_input('> ') # (or input in python 3) try: val = int(n) except ValueError: print("Type a number!") or n = raw_input('> ') # (or input in python 3) if n.isdigit(): val = int(n) else: print("Type a number!") Note: as some comment indicated, that in python 2, it was possible to get what you wanted by just using n = input("> ") but very ill adviced since you have to control what n is really, not python 3 portable, and has huge security issues: Ex: in python 2 on windows, try that: import os n = input("> ") and type os.system("notepad") you'll get a nice notepad windows !! You see that it is really not recommended to use input (imagine I type os.system("del <root of your system>")) ...
https://codedump.io/share/W7Rz6wi9ZwhT/1/how-to-store-the-output-of-type-function-in-python-and-use-it-in-39if39-condition
CC-MAIN-2016-50
refinedweb
181
72.76
My son recently celebrated his 9th birthday, and like many kids his age, he was looking forward to his birthday party for months. In the midst of the Covid-19 pandemic, we knew that we needed to do something different this year, so I built him a video watch party app using the Vonage Video API! You, too, can build your own video watch party app with the Vonage Video API and Ruby on Rails, and I'll show you how. This two-part series will walk you through the steps to build your full-stack Rails application. The first part will focus on the backend of the app and the second part will focus on the frontend. tl;dr If you would like to skip ahead and get right to deploying it, you can find all the code for the app on GitHub or click this button to deploy it straight to Heroku. Table of Contents - What Will the App Do - Prerequisites - API Credentials - Installation - Creating the Model and Controller Methods - Providing Custom Site Configuration - Creating the Views - Next Steps What Will the App Do Before we begin building the application, let's take a moment to discuss what it will do. The app will have three distinct views: 1) A Landing Page 2) Party Video Chat 3) Video Watch Party The entry to the app will be through the landing page. At the landing page, participants will be asked to provide their name and the password for the party. The name will be used to identify them in the text chat. The password will provide a small layer of security for the app. After participants enter their name and the correct party password, they will be redirected to the Party Video Chat view. In this view, each participant will see and hear each other in a grid format. There will also be a place to chat by text as well. Everyone will see a real-time count of the participants in the navigation bar. The moderator of the party will also see a link to turn the Watch Mode On/Off. Once the moderator turns the Watch Mode on, all the participants will be directed to the third and final view, which is the Video Watch Party. In this view, the moderator will share their screen in the center of the page. The moderator's audio is also published in a hidden <div> so the participants can hear the audio from the shared video. The text chat will be the means of communication in this view. The audio and video feeds of all the participants will be disabled. The moderator can move people between the Party Video Chat and Video Watch Party modes whenever they would like by pressing the Watch Mode On/Off button in their navigation bar. Now that we have an idea of what we will be building, let's start building it! Prerequisites This app requires the following: It is free to create a Vonage Video API account. You need to do so in order to obtain your API key and secret, which are essential to making the app functional. Vonage Video API Credentials After you have created an account with the Vonage Video API, you will see a dashboard interface. The first step in obtaining API credentials is to create a new project. - Select the Create New Project link from the left sidebar. - Select API when asked what kind of project to create - Provide any name for the project - Choose the VP8 codec option. (Details on the difference between VP8 and H.264 are detailed here) You now have access to your project's API key and secret. Keep them somewhere safe, we will be using them soon. Installation From your terminal initiate a new Rails application by executing the following: $ rails new video-watch-party --database=postgresql Once that is done, change into the project directory and open up the project with your preferred code editor. Gem Dependencies We will add the Vonage Video API (formerly TokBox OpenTok) Ruby SDK to the Gemfile, along with the dotenv-rails gem to manage environment variables: gem 'opentok' gem 'dotenv-rails' Once that is done, we can run bundle install from the command line to install our dependencies. Model Generation Next, we will generate a model to hold and manipulate the video session information. From the command line execute the following: $ rails g model Session session_id:string expired:boolean This command will create a model file inside /app/models/ and a database migration file inside /db/migrate/. Let's open up the database migration file in our code editor. We need to add default values to the columns before we migrate it. You can find the migration file inside the /db/migrate/ folder. It will be the only file inside the folder, and will look similar to this: class CreateSessions < ActiveRecord::Migration[6.0] def change create_table :sessions do |t| t.string :session_id, null: false t.boolean :expired, default: false t.timestamps end end end We want to ensure that the session_id is never null and we also want to make sure that the expired boolean defaults to false. In order to do that, modify your migration file by adding , null: false and , default: false to the :session_id and :expired lines, respectively. You can now commit this database migration to the schema by running rake db:create from the command line. This command will create the PostgreSQL database and the sessions table with the session_id and expired columns. Routes Definition The application needs the HTTP routes that it will be accessed defined and pointing to the correct controller methods. Open up the /config/routes.rb file and add the following: Rails.application.routes.draw do get '/', to: 'video#landing' get '/party', to: 'video#index' get '/screenshare', to: 'video#screenshare' post '/name', to: 'video#name' post '/chat/send', to: 'video#chat' end All the routes point to methods inside the VideoController. We will create the controller in the next step. - The GETroot route directs to the #landingaction. This is the route for the landing page. - The GET /screenshareroute points to the #screenshareaction. This is the route for the watch party view. - The GET /partyroute points to the #indexaction. This is the route for the video chat view. - The POST /nameroute points to the #nameaction. This is where the landing page form will send its data. - The POST /chat/sendroute points to the #chataction. This is where the text chat messages will be sent. Lastly in this step, we will create the VideoController. Controller Generation In the next step, we will create methods for the controller. In this last Installation step, we will generate the controller file. From the command line execute the following: $ rails generate controller Video landing index screenshare name chat This will create a video_controller.rb file inside the /app/controllers/ folder with empty methods for each of the actions we specified in the command. It will also create the basic view structure for the app inside /app/views/video. Creating the Model and Controller Methods Now that all the necessary file structure and database schema has been created, it's time to create the methods for the application. We will need to create methods in both the Video Controller and in the Session model. Let's start with the Session model first. Defining the Model Methods Each Vonage Video session has its own unique session ID. This session ID is what enables different participants to join the same video chat. Additionally, each participant in the video chat is granted a token that enables them to participate. A token can be given special permissions, like moderation capabilities. In the Session model we are going to create three class methods that will be used to either create a new session ID or load the previous one, and generate tokens for each participant. The Session#create_or_load_session_id method will check to see if there already is a session ID. If there is an ID, it will use that ID. If not, it will generate a new one. Session IDs can expire, but for the purposes of this tutorial, we are going to work only with active session IDs: The above method also references an additional method we need to create called Session#create_new_session that does the work of creating a new session if one does not exist: def self.create_new_session session = @opentok.create_session record = Session.new record.session_id = session.session_id record.save @session_id = session.session_id @session_id end Lastly, we will create a method that will assign the right token for each participant: def self.create_token(user_name, moderator_name, session_id) @token = user_name == moderator_name ? @opentok.generate_token(session_id, { role: :moderator }) : @opentok.generate_token(session_id) end At the top of the model definition, we also need to instantiate an instance of the Vonage Video API (formerly known as TokBox OpenTok) SDK and assign it to an instance variable to use it throughout the model. All together the file will look like the following: require 'opentok' class Session < ApplicationRecord @opentok = OpenTok::OpenTok.new ENV['OPENTOK_API_KEY'], ENV['OPENTOK_API_SECRET'] def self.create_new_session session = @opentok.create_session record = Session.new record.session_id = session.session_id record.save @session_id = session.session_id @session_id end def self.create_token(user_name, moderator_name, session_id) @token = user_name == moderator_name ? @opentok.generate_token(session_id, { role: :moderator }) : @opentok.generate_token(session_id) end end We are now ready to move on to build our controller methods that will manage the routes of the app. Defining the Controller Methods The video controller will have a method for each route, and a few helper methods to build out the site. The first method we are going to build will provide all the subsequent methods access to the Video API credentials information. Open up the video_controller.rb file in /app/controllers and after the class definition add the following method: def set_opentok_vars @api_key = ENV['OPENTOK_API_KEY'] @api_secret = ENV['OPENTOK_API_SECRET'] @session_id = Session.create_or_load_session_id @moderator_name = ENV['MODERATOR_NAME'] @name ||= params[:name] @token = Session.create_token(@name, @moderator_name, @session_id) end As you will see in Part 2 of this series when we build the frontend of the app, these instance variables will also be critical in passing the data from the backend to the frontend of the site. Next, we will create a method for each of the routes in our application: def landing; end def name @name = name_params[:name] if name_params[:password] == ENV['PARTY_PASSWORD'] redirect_to party_url(name: @name) else redirect_to('/', flash: { error: 'Incorrect password' }) end end def index; end def chat; end def screenshare @darkmode = 'dark' end As you can see above, the #name method assigns the value of the @name variable taken from the landing page welcome form. It also provides the small layer of gatekeeping for the application, by only redirecting the participant to the video chat page if the password they provided matches the one set in the environment variable. If the password does not match they are redirected to the landing page and asked to try again. The rest of the methods are empty definitions, just the minimum to provide Rails with the information to seek out the view template corresponding to the name of the action. The only other exception is the #screenshare method that sets a @darkmode instance variable, which will be used to put the site into a dark mode visual setting during the screenshare view. The #name method also references name_params, which leverages Rails Strong Parameters. We need to build a private method called name_params that defines precisely which parameters the form on the landing page should include. Let's do that now: private def name_params params.permit(:name, :password) end With that private method, we have built out our controller. All together it will look like the following: require 'opentok' class VideoController < ApplicationController before_action :set_opentok_vars def set_opentok_vars @api_key = ENV['OPENTOK_API_KEY'] @api_secret = ENV['OPENTOK_API_SECRET'] @session_id = Session.create_or_load_session_id @moderator_name = ENV['MODERATOR_NAME'] @name ||= params[:name] @token = Session.create_token(@name, @moderator_name, @session_id) end def landing; end def name @name = name_params[:name] if name_params[:password] == ENV['PARTY_PASSWORD'] redirect_to party_url(name: @name) else redirect_to('/', flash: { error: 'Incorrect password' }) end end def index; end def chat; end def screenshare @darkmode = 'dark' end private def name_params params.permit(:name, :password, :authenticity_token, :commit) end end Before we go on and create our ERB files for our views, we can take a moment now and define a custom YAML file that will serve as the source of truth for information about the site. This information will be used to populate data on the site like the name of the party, the welcome message, the language and language direction of the site, and more. Putting this information into a single place will allow us to easily change it in the future without needing to modify multiple files. Providing Custom Site Configuration The place in Rails to place custom configuration files is inside the /config folder, so let's add a site_info.yml file inside there. We will read the data from this file to create the context for our site, things like the name of the party and language of the site: language: en lang_direction: ltr landing_page: welcome_message: text: 'Welcome to the Vonage Video Watch Party!' name_form: text: 'What is your name and the password for the party?' name_placeholder_text: Your name here password_placeholder_text: Password here submit_button_text: Submit navbar: title: text: Vonage Video Watch Party text_chat: submit_button_text: Submit placeholder_text: 'Enter text here' There are default values provided in the example above. Feel free to edit and change those for the needs of your application. In order to use this information, we need to load and read it somewhere. We will add several :before_action settings to the ApplicationController that will take in all of this information and make it available throughout the app. Open up the application_controller.rb file inside the /app/controllers directory and add the following: class ApplicationController < ActionController::Base before_action :set_site_lang_options before_action :set_site_welcome_options before_action :set_welcome_form_options before_action :set_site_navbar_options before_action :set_site_chat_options CONFIG = YAML.load_file("#{Rails.root}/config/site_info.yml") def set_site_lang_options @lang = CONFIG['language'] @lang_dir = CONFIG['lang_direction'] end def set_site_welcome_options @welcome_message = CONFIG['landing_page']['welcome_message']['text'] end def set_welcome_form_options @name_form_text = CONFIG['landing_page']['name_form']['text'] @name_placeholder_text = CONFIG['landing_page']['name_form']['name_placeholder_text'] @password_placeholder_text = CONFIG['landing_page']['name_form']['password_placeholder_text'] @name_form_submit_button_text = CONFIG['landing_page']['name_form']['submit_button_text'] end def set_site_navbar_options @navbar_title = CONFIG['navbar']['title']['text'] end def set_site_chat_options @submit_button_text = CONFIG['text_chat']['submit_button_text'] @chat_placeholder_text = CONFIG['text_chat']['placeholder_text'] end end Now those instance variables holding the data from the site_info.yml are available to be used inside the view files, which we will create now. Creating the Views Defining the Application Layout The first view we will work with is the default layout for the application. This file can be found at /app/views/layouts/application.html.erb. Inside the view we are goin to add the information about the language of our site, whether to go to dark mode or not, and also load the Video API JS script: <!DOCTYPE html> <html lang="<%= @lang %>" dir="<%= @lang_dir %>"> <head> <title>Video Watch Party</title> <meta charset="utf-8" /> <meta http- <meta name="viewport" content="width=device-width, initial-scale=1" /> <%= csrf_meta_tags %> <%= csp_meta_tag %> <script src=""></script> <script type ="text/javascript"> var api_key = '<%= @api_key %>'; var api_secret = '<%= @api_secret %>'; var session_id = '<%= @session_id %>'; </script> <%= stylesheet_pack_tag 'application', media: 'all', 'data-turbolinks-track': 'reload' %> <%= javascript_pack_tag 'application', 'data-turbolinks-track': 'reload' %> </head> <body class="<%= @darkmode if @darkmode %>"> <%= yield %> </body> </html> An interesting point you will have noticed in the example above is we also created three JavaScript variables inside <script> tag. We passed the corresponding Ruby instance variable data to those newly instantiated JavaScript variables. In this way, we have started having our backend speak to our frontend. The rest of the view files we will work with will be the particular views of the app; the landing page, video chat, and video watch views. Before we build those though, let's create some partials that we will use throughout the rest of the views. Defining the Partials Partials are a great way to reuse ERB components throughout the view structure in a Rails application. Instead of defining the same content multiple times, we can put it in one file and simply invoke that file whenever we want to use that content. This application will have three partials; a partial for the header, a partial for the text chat, and a partial for the text chat button icon. Create a file called _header.html.erb inside /app/viws/video/ and add the following to it: <h1><%= @navbar_title %></h1> <p id="participant-count"></p> <button id="watch-mode">Watch Mode On/Off</button> The header partial reads the data from the @navbar_title instance variable to provide the name of the application. You will also notice an empty <p> tag with an id of #participant-count. That will be populated with data from the JavaScript we will create in Part 2 of this blog post series. Lastly, the header has a <button> tag that will only be visible to the moderator and allows them to switch between chat and screenshare views for all the participants. Now, create another file called _button-chat.html.erb in the same folder and add the following: <button class="btn-chat" id="showChat"><svg viewBox="0 0 512 512"><svg xmlns="" viewBox="0 0 496 496"><path fill="white" d="M392 279.499v-172c0-26.467-21.533-48-48-48H48c-26.467 0-48 21.533-48 48v172c0 26.467 21.533 48 48 48h43.085l.919 43.339c.275 13.021 15.227 20.281 25.628 12.438l73.983-55.776H344c26.467-.001 48-21.534 48-48.001zm-205.74 16a16.003 16.003 0 00-9.632 3.224l-53.294 40.179-.588-27.741c-.185-8.702-7.292-15.661-15.996-15.661H48c-8.822 0-16-7.178-16-16v-172c0-8.822 7.178-16 16-16h296c8.822 0 16 7.178 16 16v172c0 8.822-7.178 16-16 16H186.26zm309.74-88v132c0 26.468-21.532 48-48 48h-43.153l-.852 33.408c-.222 8.694-7.347 15.592-15.994 15.592-6.385 0-2.83 1.107-82.856-49H232c-8.837 0-16-7.163-16-16s7.163-16 16-16c84.866 0 80.901-.898 86.231 2.438l54.489 34.117.534-20.964c.222-8.675 7.317-15.592 15.995-15.592H448c8.822 0 16-7.178 16-16v-132c0-8.822-7.178-16-16-16-8.837 0-16-7.163-16-16s7.163-16 16-16c26.468.001 48 21.533 48 48.001zm-200-43c0 8.837-7.163 16-16 16H112c-8.837 0-16-7.163-16-16s7.163-16 16-16h168c8.837 0 16 7.163 16 16zm-29 70c0 8.837-7.163 16-16 16H141c-8.837 0-16-7.163-16-16s7.163-16 16-16h110c8.837 0 16 7.163 16 16z"/></svg></button> The HTML above generates a text chat icon that participants can click on to reveal or hide the text chatbox. The last partial is a file that will hold the text chat box area, including the form to submit new chat messages. Create a file called _chat.html.erb in the same directory and its contents will look like the following: <header class="chat-header"> <h2>Chat</h> </header> <div id="history" class="messages"></div> <%= form_with(url: "/chat/send", method: "post") do %> <%= text_field_tag :message, nil, placeholder: @chat_placeholder_text %> <%= submit_tag(@submit_button_text) %> <% end %> In the _chat.html.erb partial you will also see another empty tag, this time a <div> with the id of #history. All the text messages will go into that area automatically using the Vonage Video API text message functionality within the Signal API. We will discuss that in Part 2. Defining the Landing Page The landing page will be the place that the participants will first encounter when they come to the application. Its purpose is to ask the participants for their name, and the party password to enter the site. Create a new file inside /app/views/video called landing_html.erb and add the following: <main> <div class="landing"> <h1><%= @welcome_message %></h1> <p><%= @name_form_text %></p> <%= form_with(url: "/name", method: "post") do %> <%= text_field_tag 'name', nil, :placeholder => @name_placeholder_text %> <%= password_field_tag 'password', nil, :placeholder => @password_placeholder_text %> <%= submit_tag @name_form_submit_button_text %> <% flash.each do |type, msg| %> <p class="error"><%= msg %></p> <% end %> <% end %> </div> </main> Similar to the partials, the landing page view leverages the instance variables created in the ApplicationController to generate the welcome message and the text for the form. Defining the Video Chat View The video chat view will be the place that the participants will chat with each other with their video cameras and microphones. This view, in addition to the screenshare view, is the two essential parts of the application. To make this view, create another new file in the same directory called index.html.erb with the following inside of it: <script type ="text/javascript"> var token = '<%= @token %>'; var name = '<%= @name %>'; var moderator_env_name = '<%= @moderator_name %>'; // reload page to render with variables (function() { if(window.localStorage) { if(!localStorage.getItem('firstLoad')) { localStorage['firstLoad'] = true; window.location.reload(); } else localStorage.removeItem('firstLoad'); } })(); </script> <header> <%= render partial: 'header' %> </header> <main class="app"> <div class="videos"> <div class="publisher" id="publisher"></div> <div class="subscriber" id="subscribers"></div> </div> <aside class="chat"> <%= render partial: 'chat' %> </aside> <%= render partial: 'button-chat' %> </main> This view has several components that are worth mentioning. The first is what is happening inside the <script></script> tags. Similar to the application layout, we continue to pass more data to the frontend of the site in the form of new JavaScript variables. Separately, in order to take advantage of these variables inside the site after the JavaScript is loaded, we also add a small function to reload the page if it is the first time it is being loaded in the browser. The other area we will mention is you will notice that most of the view consists of empty <div> tags. The reason is that those will be populated by the videos from the Video API dynamically. The frontend JavaScript will seek out those tags by their ID names and add the videos of all the participants inside the #subscribers element and add your video to the #publisher element. Defining the Screenshare View The final view we need to create for the application is the one for the video screenshare. In this view, the participants can continue chatting via the text chat box, while all watching the same screen together. This view will only need to provide the <div> elements for the API to provide one publisher, namely the screenshare video, and one audio feed. A screenshare by itself does not include audio, which would make it difficult to watch a video together. That is why we will also publish an audio feed from the moderator's computer to accompany the screenshare. Add a file called screenshare.html.erb inside the same folder with the following: <script type ="text/javascript"> var token = '<%= @token %>'; var name = '<%= @name %>'; var moderator_env_name = '<%= @moderator_name %>'; // reload page to render with variables (function() { if(window.localStorage) { if(!localStorage.getItem('screenshareFirstLoad')) { localStorage['screenshareFirstLoad'] = true; window.location.reload(); } else localStorage.removeItem('screenshareFirstLoad'); } })(); </script> <header> <%= render partial: 'header' %> </header> <main class="app"> <div class="videos"> <div class="screenshare" id="screenshare"></div> <div class="audio" id="audio"></div> </div> <aside class="chat"> <%= render partial: 'chat' %> </aside> <%= render partial: 'button-chat' %> </main> At this point, the backend of our app is ready! Congratulations, you've finished Part 1 of getting the video watch party created. Next Steps In Part 2 of this blog post series, we will build the frontend of the application. While the backend of the app was mainly written in Ruby and leveraged the Vonage Video API Ruby SDK, the frontend will be written in JavaScript and utilize the JavaScript SDK. The work of providing the data that the JavaScript SDK will need has already transpired in the backend we created. Now we need to build the JavaScript classes and functions that will work with that information. Thanks to advances in Rails and its incorporation of Webpack, there is a clear process for incorporating JavaScript into a Rails application, and we will follow those steps. Continue on to Part 2 of this blog post series to finish up building the application. Discussion (4) Ben, this is a great tutorial. I am a Rails developer and I've been planning to build a project using the Twilio video API. I had no idea that there was competition in the space. I'm a little bit fuzzy on whether you're just a Vonage keener or if this is sponsored content. Luckily, the quality of the material is good enough that I'm not fazed either way, I'm just curious as to how this came about. For example, if you're not affiliated with Vonage, how did you make the decision to use the Vonage API over Twilio, which would seem to be the established choice? For the Vonage folks who I have a small hunch are reading this, I can say that this looks like an excellent product but I had some gripes getting rolling. I felt like the pricing structure was a bit hidden, and once I found it, well, frankly you should just cut to the chase and make it simple to compare your fee structure to Twilio's. For a lot of devs, that is one of the primary dimensions upon which the CTO will make a decision so make it easier for us to cheerlead for you. Money aside, the best thing you could do to get us in fighting shape to evangelize for a newcomer when there's an incumbent is present the competitive graph and be really transparent where you're at. Is Twilio able to do something slightly better? Give them the point. Do you have features they don't have? You get the point. Is something coming soon but not quite ready for prime time? Give us some dates so we can plan. It seems, after basic examination, that you're similar... but there's still limits to what Twilio can do. For example, Twilio breaks their video offerings into: 1:1, small groups of 2-4 and large groups up to 50. They don't appear to have a broadcast option beyond 50 people. They also give by-the-minute price breakdowns for each of these tiers, on a per-person basis. This gives us the ability to make realistic estimates... 10 people for an hour is $0.06 or $0.30 or whatever it works out to be. I think I read somewhere that you folks can broadcast up to 5000 people. That's amazing. Brag about it, and tell us how much it'll cost our clients to use it. Finally: when does the next part come out? Please keep it super vanilla in terms of JS. I'm a Stimulus user myself; seeing more and more tutorial written for React is a huge drag and it will seem unfortunate when we look back in a few short years. Hi @leastbad , Nice to hear from you! I work for Vonage and I am lucky enough to get paid to build fun projects, especially ones that make me look like a cool dad and give my son a somewhat decent birthday party during the pandemic! 😂 If you are new to DEV, one of the easiest ways to figure out if a piece of content is coming from a company is to look to see if it is under a company organization. Right above the title at the top of the page, and in the right-hand navigation menu there is identifier information for what organization this content is coming from. The second part of this series was published just now, and happy to say that it is all vanilla JS. I appreciate the kind words. If you end up using the Vonage Video API for the product you are building please let us know how it goes. We have dedicated a channel for the Video API in our community Slack. All the best! Ben Thanks, and I appreciate the quick reply! Let me be the first complete stranger to wish your son a very happy birthday. :) Right now I am in the realm of intrigued! Again, I hope that you can send some of my points/questions/suggestions regarding comparisons up the ladder. My DMs are open, as they say. I just don't see much value in pretending like there isn't healthy competition. There's extraordinary value for Vonage, Twilio and the developers in that competition, especially in the long term. Any obfuscation or unintended failure to make it mindlessly easy to make an apples to apples comparison will likely favour the known quantity incumbent, after all. Well, thank you for the happy birthday message! He had a great time, which given the world we are living in, I was happy to make that happen! The rest of your comments have been shared with others, and I appreciate your candor and the time you took to share those thoughts.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/vonagedev/create-a-video-party-app-with-rails-part-1-building-the-backend-2p4k
CC-MAIN-2021-49
refinedweb
4,849
62.68
05 August 2008 05:04 [Source: ICIS news] SINGAPORE (ICIS news)--China-based FibreChem Technologies reported late on Monday a 5% year-on-year rise in second quarter net profit to Hongkong dollars (HKD) 151.4m ($19.4m) from HKD 144.3m. “The group had done reasonably well in the first half of 2008, amidst signs of a possibly slowing fibre industry in ?xml:namespace> The company recorded a 10% growth in revenue to HKD 505.8m from HKD 459.6m driven by its new 10,000 tonne/year bi-component long fibre production line that commenced operations in June 2007. FibreChem’s net profit was boosted by exchange gains driven by foreign currency loans in light of the appreciating Chinese Yuan that also cushioned rising operational and other expenses, Zhang said. Selling and distribution expenses grew a significant 84% due to increases in advertising budget and additional costs that was set aside for the establishment of new sales offices in Looking forward, FibreChem intends to explore new fibre products whilst actively growing its microfibre leather business in anticipation of the weakening of the Chinese fibre industry. “We are confident that the steady investment in marketing and the strategic development of our various business segments will generate sustainable growth for the group,” Zhang said. ($1 = HKD 7
http://www.icis.com/Articles/2008/08/05/9145243/fibrechem-reports-5-growth-in-net-profit.html
CC-MAIN-2013-48
refinedweb
217
50.97
A timing facility for C++. More... #include <vcl_iosfwd.h> Go to the source code of this file. A timing facility for C++. The vul_timer class provides an interface to system timing. It allows a C++ program to record the time between a reference point (mark) and now. This class uses the system time(2) interface to provide time resolution at either millisecond or microsecond granularity, depending upon operating system support and features. Since the time duration is stored in a 32-bit word, the maximum time period before rollover occurs is about 71 minutes. Due to operating system dependencies, the accuracy of all member function results may not be as documented. For example, some operating systems do not support timers with microsecond resolution. In those cases, the values returned are provided to the nearest millisecond or other unit of time as appropriate. See the Timer header file for system specific notes. The Timer class provides timing code for performance evaluation. Modifications Created: BMK 07/14/89 Initial design and implementation. Updated: LGO 09/23/89 Conform to COOL coding style. Updated: AFM 12/31/89 OS/2 port. Updated: DLS 03/22/91 New lite version. Updated: VDN 10/14/93 ANSI C does not have user/system time. Peter Vanroose 27/05/2001: Corrected the documentation Definition in file vul_timer.h.
http://public.kitware.com/vxl/doc/release/core/vul/html/vul__timer_8h.html
crawl-003
refinedweb
222
51.55
The current method of spawning players used by the NetworkManager. //Attach this script to a GameObject //This script switches the Player spawn method between Round Robin spawning and Random spawning when you press the space key in Play Mode. using UnityEngine; using UnityEngine.Networking; public class Example : NetworkManager { void Start() { //Change the Player Spawn Method to be Round Robin (spawn at the spawn points in order) playerSpawnMethod = PlayerSpawnMethod.RoundRobin; } void Update() { //Press the space key to switch the spawn method if (Input.GetKeyDown(KeyCode.Space)) { //Press the space key to switch from RoundRobin method to Random method (spawn at the spawn points in a random order) if (playerSpawnMethod == PlayerSpawnMethod.RoundRobin) playerSpawnMethod = PlayerSpawnMethod.Random; //Otherwise switch back to RoundRobin at the press of the space key else playerSpawnMethod = PlayerSpawnMethod.RoundRobin; } } }
https://docs.unity3d.com/ru/2018.2/ScriptReference/Networking.NetworkManager-playerSpawnMethod.html
CC-MAIN-2021-31
refinedweb
129
64.61
Agenda See also: IRC log <scribe> Agenda: no changes or other business RESOLUTION: Minutes approved as sent 2005-07-25: Paul Downey to generate list of features, their requirement level and applicability for discussion <scribe> : DROPPED Arun's material appears sufficient. DHull review: DONE MarcG review: DONE Katy: PENDING Marsh AI: DONE scribe: All done but Katy's... Mark: Hugo suggested it would be good to write down a policy Hugo: I wrote a doc. <hugo> Draft I did: Hugo: For each draft we have set a new namespace, with the form w3.org/year/month/identifier ... we have been updating the namespace each publication. ... We haven't agreed on how to change this from now on. ... Before we reach CR, we replace the namespace each time. When we reach CR, we only update if there is a significant change is made. ... WHat is a significant change? ... Up to the WG in each case. Chorus: Seems sensible. Mark: Philippe said it might be good to document this at the end of the NS URI. Hugo: Only problem is that the policy is only visible when dereferencing the namespace URI. ... We should also put it in the status section of the doc. ... I'm very flexible. ... Good to see when people read the draft. Mark: Can we get a paragraph to paste into the spec and the RDDL? Hugo: Do you want to include the appropriate section from my doc? Mark: Yes, not too heavyweight. Nilo: CR says the URI only changes with significant changes. Hugo's proposal says the opposite. ... Seems we need to remove the "NOT"s <mnot> From Section 2.2: After a document has been published as a Candidate Recommendation, the namespace IRIs will be updated only if changes are made to the document are not significant and do not impact the implementation of the specification. <uyalcina> +1 to Nilo Hugo: Ah, I see! To many negatives... ... needs to be fixed. "are significant and impact ..." Mark: After we get to CR, we will attempt to keep the URI the same. ... OK? <scribe> ACTION: Editors to incorporate this into the document and the RDDL. [recorded in] <Nilo> After a document has been published as a Candidate Recommendation, the namespace IRIs will be updated only if changes made to the document are significant and impact the implementation of the specification. Mark: Only 2.2 applies to the CR docs, section 2.1 applies to the WSDL doc. [general agreement] Mark: We have 3 of 4 AIs done. Let's walk through them. <gpilz> anyone know what the address of this server is? <mnot> Mark: Tony's review on the Primer <gpilz> I meant the direct IRC address Tony: saw a number of typos. <marc> ACTION 1=Editors to incorporate ns update policy into drafts Tony: We aren't included in the references. ... Section 5.3 discusses endpoint references to describe the URL where the service lives. ... Could be "endpoint URL" or an "endpointer" ... A different term than "endpoint reference" would reduce confusion. Umit: THere is a definition of endpoint reference in 5.3. DaveH: Confusing. <anish> WSDL has an endpoint and ws-addr has an endpoint and they are quite different Umit: If there is a different term, would that solve the problem? Glen: Just the term, I believe. DaveH: Partly the term, partly two parallel universes. ... One where you refer with a URI, one with an EPR. Took a couple of readings to figure out what was going on. Glen: the third universe is the syntactical constructs, they may contain policy etc. ... the "endpoint" construct in WSDL. Tony: 5.3 comes right after 5.2 which makes reference to WS-A, already given people a defn of endpoint reference there (implicitly). ... Different term would avoid confusion. Umit: Definitions in Core and 5.3, and 5.2 introducing concepts in wrong order (possibly). Mark: Happy to send Tony's comments to WSDL? [apparently so] Mark: Dave's comments <mnot> David Hull's comments on the Core: <scribe> [postponed temporarily] Mark: Marc Goodner's ... typos scribe: points out some relevant sections of the spec, but don't appear to be issues. <mnot> WSDL Adjuncts Section 2.2: <mnot> atte . Mark: That seems fine. <mnot> Mark: SOAP Action Feature doesn't mention WSA Action - oversight? ... thoughts? Marsh: What would WSDL say? Mark: Might say "there are other specs out there..." ... Let's defer these till next week. ... when Katy's got her review done. ... And forward the typos. DaveH: Except for 3.3, most of the COre is too generic to be of concern to us. ... In 3.3, wsdlx:interface and wsdlx:binding are defined. ... Need to clarify endpoint references as applied to URIs. ... By tagging it the semantics of "this is a reference to an endpoint" are applied. ... It's nice, e.g. WSN, to say that an EPR is a particular type. Marsh: Interesting it wasn't clear to you that you can use these for wsa:EPRs. DaveH: Wasn't clear - should make it more explicit. ... Nice to hang it off an element declaration instead of just a type. ... Seems lighter weight than deriving a new type. ... Don't think anything would preclude that design. The examples in the primer tend towards extending a type nor an element. ... A use case was what is in the data vs. a schema. ... Could include wsdlx:interface in an EPR. ... Is that what WSDL wanted? Umit: Is there a problem in the Core (3.3), it says that that are AIIs in wsdlx, and a comment about how these can be used together, and how they are applied to xs:anyURI. ... No restriction against using them with wsa:EPRs. Are you saying there should be used on wsa:EPRs? and an example? DaveH: Says these annotate anyURIs or restrictions thereof, which sounds exclusive. Umit: That wasn't the intent. Add more text making it clear it can be used on EPRs too? Dave: "xs:anyURI" -> "xs:anyURI and wsa:EPR" or just drop xs:anyURI and talk about elements. ... even adding "such as" would have helped. Marsh: Intention was to provide description level constructs to complement wsa:Metadata/wsaw:ServiceName, etc. Dave: Was the intention to allow on element decls as well as types? Marsh/Umit: Don't remember. Dave: Other than this, core looks good. Summary: 1) Clarify wsdlx: can apply to EPRs, not just xs:anyURI. 2) Can wsdlx: be applied to element decls, not just types? <scribe> ACTION: DaveH to revise his comments on WSDL 2.0 3.3, by 9/19 [recorded in] <mnot> <GlenD> +1 Marsh: Goes through his posting. <Zakim> marc, you wanted to ask a question on wsdl:required=false case Arun: We should add some of this to the spec. Marc: wsdl:required="false" allows a service to send headers without receiving them from the client. ... That seems bad. ... You could send me a message with a messageID, expecting a relatesTo, but you won't get it, which is bad. Marsh: We could define that behavior - separate issue. Umit: Bottom line: wsas:Action can't force behavior without wsa:UsingAddressing. Anish: you say a client MAY engage wsaw:Action when wsdl:required="false", but not pick and choose the rest of the WS-A features. ... WHen the client uses WS-A it must engage it fully as we spec. Marsh: Useful to clarify that. <Gil> get the video! Summary:] Paco: "informational/advisory" should say "no normative intent". Marsh: you can't ignore it in all cases (out of band may make use of it). Mark: Everyone comfortable with this? [seem so] scribe: Can we send this off to the editors? Marc: Would like some text. <scribe> ACTION: Paco to come up with wording to implement i061 based on the above discussion. [recorded in] <anish> Mark: Marc's link is the proposal on the table. ... everyone comfortable at this point? Umit: looks reasonable, except for one thing. ... "if there is no wsa:EndpointReference..." what does it mean that there is an anonymous URI for the destination? Marc: IIRC, the anonymous means "do what the transport says". ... we define HTTP response. Umit: For destination what does that mean? ... for SOAP/HTTP. <vikas> Every binding has its own "understanding" of what is anonymous URI Marc: Conclude it's bad WSDL at that point. Umit: Maybe the fourth rule shouldn't apply then. Mark: Are there other bindings where that would have utility? Umit: Unless there were a catalog or config file to map anonymous. ... Not very interoperable. Marsh: What's the alternative? Anish: If you wanted to support a way to have this defined elsewhere, we should call it "undefined". ... IF a binding could define what anonymous means it would be useful, but I can't think of such a binding. ... Maybe a separate issue: we require a destination to be anonymous, which is delegated to the binding. In the SOAP binding we define what it means for replyTO and faultTo, but not destination. Mark: We could change the fourth bullet to say "it's undefined", or define what anonymous means for destination. Umit: We need to define anonymous for [destination] in any case. Anish: If wsa:To is missing destination is anonymous already. Is #4 adding anything? Marc: Mixing up the value of wsa:To and the value of the property. Anish: Proposal is if no values are specified you use anonymous. Similar with wsa:To. Marc: Doesn't interfere with it, just conveys it. Umit: In the end, what is the address on the wire? Marc: If you're sending a message to anonymous, you can omit the wsa:To header or put in the anonymous value. Umit: The relation of the HTTP address and the anonymous URI needs to be clarified somewhere. Marc: We only define one case where anonymous has a meaning - HTTP back-channel. Mark: What are you recommending? ... that people avoid anonymous for destination? Umit: Not sure what the solution is. ... Maybe clarify where anonymous doesn't make sense. Mark: So should we change or remove the 4th bullet? Marc: Don't think we fix anything by removing the default. Paco: This is a hole in WSDL we're trying to fix. ... If you have a WSDL with no address, we're providing an interpretation for that case. ... Can we derive the value of [destination] from WSDL? When the WSDL author doesn't provide one, we'll provide one. ... Is that the wisest thing to do? Anish: In the proposal, the EPR is included as a child of wsdl:endpoint or wsdl11:port. DOes that mean the EPR applies only to the in messages for that endpoint/port? ... What happens when it's specified for a port with only an out message? ... Need to be more specific on which messages the embedded EPR applies to. ... requires more thought. ... Specifically a problem with the first "in" message, but might be others. Umit: The reason you don't have an address in teh port (WSDL perspective) is that the address is added at deployment. ... Might not be safe to assume there is an anonymous destination. ... Addresses might be provided at a later time. Marc: Why would you have an EPR at a later time. ... OK, maybe so. <scribe> ACTION: Anish to explore the issue of application to messages other than the first "in". [recorded in] <scribe> ACTION: Umit explore the issues surrounding bullet 4 for next week. [recorded in] Mark: i056 to email. Mark: Delegated to the Async TF. ... At this point it seems the TF is running out of steam a bit. Good time to bring it back into the WG. ... Need to wrap this up to get the doc into CR. ... Glen will write up a summary, esp. of the decisions to be made, different proposals for solving them. ... Please look for it. ... Will reserve a chunk of the FTF to talk about this issue. If you have a proposal get it in by then. ... Number of ways to address the issue, but we need to see which fit within our deliverables and our charter requirements. Mark: Revised proposal from Anish. <stevewinkler> glen, I'll send my regrets for wed now, I'm on the road... <mnot> Anish: Logical/physical address ... There was the potential for a physical and logical address. ... The proposal had three parts, part 2 and 3 defined the relation of physical and logical. ... those have gone away. Part 1 uses updated terminology. ... Proposal: When the EPR minter includes a [selected port type], and/or [service-port] then the EPR is considered to be specific to the [selected port type] and/or [service-port] Marsh: The spec doesn't say this yet? Anish: No, not that I could see. Umit: No we don't say that anywhere. Although it seems obvious, it's not explicitly stated anywhere. Paco: Makes sense. <uyalcina> +1 for the proposal Mark: Any objections to the proposal? Vikas: Doesn't resolve the logical and physical. Mark: We resolved a while ago we wouldn't talk about that. <mnot> t <mnot> vice QName is present in the EPR. I.e., should our spec say that if the service QName is present then the physical address is what is specified by the wsdl port. Anish: WSDL never says the port address is any more physical than the WS-A address. THey are all IRIs and that's that. Vikas: Maybe we should rewrite the issue to get rid of the logical/physical confusion. Umit: Does Anish's proposal make sense on it's own right or not. Vikas: If the issue is cloudiness about physical/logical, the proposal doesn't address it. Mark: Should we delete everything but the second sentence of the issue? Anish: We can point to i052, say the questions about logical/physical are addressed there. Mark: Proposed resolution: Anish's proposal, plus a ref to the resolution of 052. ... Everyone comfortable? Vikas: i052 resolution: remove "logical" when talking about addresses. [silent assent] RESOLUTION: i020 closed with Anish's proposal. Bob: Customers have co-opted our meeting room, we've moved to a nearby building. ... Logistics coming. ... Negotiated rates available through email reservation. ... Unless you are fluent in Japanese, have an international drivers license, etc., it's maddness in the metro area to try to drive. ... Will be escorting people from the hotel to the meeting room starting on the 7th ... Trying to gauge interest in some sightseeing or entertainment. ... Just after the peak of the autumn color season, we might be able to get some rooms there Nov 5,6. ... Other details like how to buy tickets, get from the airport, etc. coming. DaveH: How long is the Narita express? Bob: About an hour. 12 min from hotel to meeting room; fare: 210. Narita Express fare information forthcoming. ... $125 hotel + $10.70 internet access. ... I'm translating maps from Japanese, takes a while. Mark: Talk next week, might be absent, in which case Hugo will chair. ... Do your AIs. ... Get your potential issues in. [adjourned]
http://www.w3.org/2002/ws/addr/5/09/12-ws-addr-minutes.html
CC-MAIN-2016-40
refinedweb
2,505
69.38
Alan Cox <alan@lxorguk.ukuu.org.uk> writes:> > I don't believe it. Solaris, Tru64 and BSD all disallow attaches after> > IPC_RMID. (I just tested them, in fact.) The *segment* is still alive until> > nattch==0, of course, but nobody can attach to it.> > Just how are you testing. My test set works.> > > Apps full of linuxisms are a pain to port. > > Indeed. I've also got a test patch to fix the API breakage in the shmfs code> and it seems very easy to fix, with the added bonus that I mended chrooted> sys5 ipc (wrongly in the draft patch 8))test prog attached.Greetings Christoph-- #include <stdlib.h>#include <stdio.h>#include <errno.h>#include <sys/types.h>#include <sys/ipc.h>#include <sys/shm.h>int main (int ac, char **av) { int seg, stat; if ((seg = shmget (IPC_PRIVATE, 4000, IPC_CREAT| 0600)) == -1) { perror("shmget"); exit (1); } if (shmat (seg, 0, 0) == (void*) -1) { perror ("shmat"); exit (2); } if (shmctl (seg, IPC_RMID, NULL) == -1) { perror ("shmctl"); exit (2); } switch (fork()) { case -1: perror ("fork"); return (3); case 0: if (shmat (seg, 0, 0) != (void*) -1) printf ("shmat in client ok\n"); else perror ("shmat in client failed"); return (0); default: wait(&stat); } exit (0);}
http://lkml.org/lkml/2000/3/13/157
CC-MAIN-2017-43
refinedweb
206
81.73
One of the first style guidelines that most programmers learn is to use symbols, rather than raw numbers, to represent arbitrary constant values. For example, rather than write: char buffer[256]; ... fgets(buffer, 256, stdin); you should define a symbol, say buffer_size, representing the number of characters in the buffer, and use the symbol instead of the literal, as in: char buffer[buffer_size]; ... fgets(buffer, buffer_size, stdin); C and C++ offer a number of different ways to define such symbols. This month, I'll show you what your choices are. Macros C programmers typically define symbolic constants as macros. For example, the code: #define buffer_size 256 defines buffer_size as a macro whose value is 256. The macro preprocessor is a distinct compilation phase. The preprocessor substitutes macros before the compiler does any other symbol processing. For example, given the macro definition just above, the preprocessor transforms: into: char buffer[256]; ... fgets(buffer, 256, stdin); Later compilation phases never see macro symbols such as buffer_size; they see only the source text after macro substitution. Therein lies the source of a minor irritation that comes with using macros: many compilers don't preserve macro names among the symbols they pass on to their debuggers. Macros have an even more serious problem: macro names don't observe the scope rules that apply to other names. For example, you can't restrict a macro to a local scope: void foo() { #define max 16 // non-local int a[max]; ... } Here, max is not local to function foo. It's effectively global. You can't declare a macro as a member of a C++ class or namespace. In a sense, macro names are more pervasive (read "worse") than global names. Global names can be hidden by names in inner scopes. Macros don't even respect inner scopes. Consequently, macros might substitute in places you don't want them to. For example, after macro substitution: #define max 16 ... void sort(int a[], size_t max); becomes: void sort(int a[], size_t 16); which is a syntax error. Unfortunately, such inadvertent macro substitution doesn't always produce a compiler diagnostic; and even when it does, the message may be puzzling. Since macro names don't behave like other names, most C and C++ programmers adopt a naming convention for macros to distinguish them from all other kinds of names. The most common convention is to spell macro names entirely in uppercase, as in: #define BUFFER_SIZE 256 char buffer[BUFFER_SIZE]; ... fgets(buffer, BUFFER_SIZE, stdin); Enumeration constants Both C and C++ offer alternatives that avoid the ill effects of macros. One of these alternative is the use of enumeration constants. An enumerated type definition can define a type along with associated constant values of that type. For example: enum color { red, green, blue }; defines an enumeration type color and constants red, green, and blue of type color. By default, red has the value 0, green has the value 1, and blue has the value 2. However, you can define the constants with values other than their defaults, as in: enum color { red = 1, green = 2, blue = 4 }; Most parts of an enumeration definition are optional, including the type name. For example: enum { blue = 4 }; omits the type name and all but one enumeration constant. It simply defines a constant named blue whose value is 4. You can use this simplified form of enumeration definition to define any integer-valued constant, such as: enum { buffer_size = 256 }; This defines buffer_size as the integer constant 256. An enumeration constant is a compile-time constant, so you can use it as an array dimension, as in: char buffer[buffer_size]; Unlike macros, enumeration constants do obey the usual scope rules. This means that you can declare enumeration constants local to functions, or, in C++, as members of classes or namespaces. Unfortunately, enumeration constants must have integer values, so you can't use them for floating constants, as in: // truncates to 3 enum { pi = 3.14159 }; Such truncations typically produce a warning from the compiler. const objects Both C and C++ offer yet another way to define a symbolic constant-as a const object, such as: int const buffer_size = 256; The order in which you write int and const doesn't matter to the compiler. You can just as well write the declaration as: const int buffer_size = 256; For reasons I've explained in the past, I prefer writing const to the right of the type, as in int const. (See "const T vs. T const," February 1999, p. 13.) Unfortunately, the above definition (whether you write it one way or the other) has a different meaning in C than it does in C++. In C++, the name of a const object is a compile-time constant expression. In C, it is not. Thus, a C++ program can use buffer_size as an array dimension, while a C program cannot. For instance, the following definition: compiles equally well in either C or C++, the subsequent definition: char buffer[buffer_size]; // ?? compiles only in C++. It produces a compile error in C. It's my impression that most C++ programmers prefer defining symbolic constants as const objects rather than as enumeration constants. If nothing else, a const object definition looks more like what it is. For example, when you write: the definition says fairly explicitly that buffer_size is an "integer constant" whose value is 256. It's not nearly so clear that: is essentially the same thing. Another, more substantive, advantage is that a constant object definition lets you specify the exact type of the constant. For example: unsigned int const buffer_size = 256; defines buffer_size as a constant whose type is unsigned int rather than plain int (which is signed by default). In contrast: defines buffer_size as a plain int. It's a plain int even if you specify the constant's value using an unsigned literal, as in: enum { buffer_size = 256u }; As I explained last year, a numeric literal with the suffix u or U has an unsigned integer type. (See "Numeric Literals," September 2000, p. 113.) In this case, 256u has type unsigned int. However, regardless of the exact type used to specify the enumeration constant's value, the enumeration constant has type int if the value can be represented as an int. Most of the time, the exact type of a symbolic integer constant doesn't matter. For example, whether you define buffer_size as: or as: an array declared in C++ as: has 256 elements. The only time it matters is on those rare occasions when you pass the constant as an argument to one member of a family of overloaded functions. For example, given: int f(int i); unsigned int f(unsigned int ui); the way in which you define buffer_size affects which function f(buffer_size) calls. If buffer_size is an enumeration constant, f(buffer_size) calls f(int). If buffer_size is an unsigned int constant, it calls f(unsigned int). But I like enumeration constants Despite the disadvantages that I just mentioned, I generally prefer defining symbolic constants as enumeration constants rather than as const objects. The problem with const objects is that they may incur a performance penalty, which enumeration constants avoid. That should keep you on the edge of your seat until next time. See you then. Dan Saks is the president of Saks & Associates, a C/C++ training and consulting company. He is also a consulting editor for the C/C++ Users Journal. You can write to him at dsaks@wittenberg.edu.
http://www.embedded.com/story/OEG20011016S0116
crawl-002
refinedweb
1,249
55.03
This chapter describes the functions in the libxmlrpc_client function library, which is part of XML-RPC For C/C++ (Xmlrpc-c). Also see. Everything you need to know about XML-RPC is here. The libxmlrpc_client library provides functions for use in an program that is an XML-RPC client. These functions take care of all the protocol related things so the calling program can be very simple. When using libxmlrpc_client, you must also use the libxmlrpc library. It contains additional facilities that an XML-RPC client needs but are general to XML-RPC and not specific to XML-RPC clients. Besides, the libxmlrpc_client library routines depend on it. The xmlrpc_client.h header file declares the interface to libxmlrpc_client. You'll have to figure out where on your system this file lives and how to make your compiler look there for it. Or use xmlrpc-c-config. Because the libxmlrpc library is a prerequisite, you'll also need its header file (xmlrpc.h). The classic Unix name for the file containing the libxmlrpc_client library is libxmlrpc_client.a or libxmlrpc_client.so. The classic linker option to cause the library to be linked into your program is -l xmlrpc_client._client, so you'll need to link them in too: A complete example of an XML-RPC client program that uses libxmlrpc_client is here. Here is an example of the main part of the same program using the slightly more complex but preferred private client method: #include <xmlrpc-c/base.h> #include <xmlrpc-c/client.h> #include "config.h" /* information about this build environment */ #define NAME "XML-RPC C Test Client" #define VERSION "1.0" int main(int const argc, const char ** const argv) { xmlrpc_env env; xmlrpc_client * clientP; xmlrpc_value * resultP; int sum; char * const url = ""; char * const methodName = "sample.add"; /* Initialize our error-handling environment. */ xmlrpc_env_init(&env); xmlrpc_client_setup_global_const(&env); xmlrpc_client_create(&env, XMLRPC_CLIENT_NO_FLAGS, NAME, VERSION, NULL, 0, &clientP); die_if_fault_occurred(&env); /* Make the remote procedure call */ xmlrpc_client_call2f(&env, clientP, url, methodName, &resultP, "); xmlrpc_client_destroy(clientP); xmlrpc_client_teardown_global_const(); return 0; } As you build an XML-RPC client using libxmlrpc_client, it's good to be able to try it out by talking to a server. One way to do this is to use the one of the server programs in the examples directory of the Xmlrpc-c source tree. For example, xmlrpc_sample_add_server runs a server on a local TCP port of your choosing. If you choose Port 8080, for example, you can direct your client program to URL and execute a system.listMethods method. Or you can use a server that already exists on the Internet. The Xmlrpc-c project operates one at. This is Apache with a CGI program that uses Xmlrpc-c's libxmlrpc_server_cgi library. (You can find the code for that program as examples/xmlrpc_sample_add_server_cgi.c in the Xmlrpc-c source tree. An advantage of running your own server is that you can do tracing on the server side to help you understand why your client isn't doing what you expect. Also see Debugging. libxmlrpc_client has global constants that you must set up. The global initialization function is xmlrpc_client_setup_global_const(). The global termination function is xmlrpc_client_teardown_global_const(). See Global Constants for an explanation of why you need these and how to use them. If you use the global client object (i.e. xmlrpc_client_init2()), creating the global client and setting up the global constants are merged into one operation, so you need not call xmlrpc_client_setup_global_const(). You perform XML-RPC client functions through an xmlrpc-c client object. You create this object with xmlrpc_client_create() and destroy it with xmlrpc_client_destroy(). A handle for this object is an argument to the functions that perform RPCs. A slightly simpler method is available in which the client is implied by a static global variable. You can't use it in a modular program because all users of libxmlrpc_client in the program would be sharing the same global variable and would conflict with each other. But before Xmlrpc-c 1.05 (March 2006), this is the only interface available. The private client method is requires a few more lines of code (a global constant setup and teardown, and an extra argument on many functions), but is the cleaner option by far. Code that uses a private client can be modular and it is obvious to a reader of the code where the state is being kept, as opposed to the global client method where it is in a hidden global variable. An example of using the global client object is here. An example of using a private client object is here. A client object is represented by a data structure of type xmlrpc_client. A pointer to the object is a handle that you use to identify it. Prototype: void xmlrpc_client_create(xmlrpc_env * const envP, int const flags, char * const appname, char * const appversion, struct xmlrpc_clientparms * const clientparmsP, unsigned int const parmSize, xmlrpc_client ** const clientPP); This creates an Xmlrpc-c client object and returns a handle for it. You use this handle with various other library functions. You can undo this — destroy the object — with xmlrpc_destroy_client(). envP is an error environment variable pointer. appname and appversion are meaningful only if the client XML transport is libwww, and are in the parameter list (instead of the libwww transport-specific parameters) only for historical reasons. These values control the User-Agent HTTP header in the XML-RPC call. (The User-Agent HTTP header normally tells through what program the user made the HTTP request; for classic web browsing, it identifies the web browser program, such as Internet Explorer). The value in the User-Agent header is appname/appversion, plus the name and version number of the libwww library. For the other HTTP transports, see the documentation of the individual transport for information on controlling User-Agent. The name and version number of your program would be appropriate values for these. These are asciiz strings. clientparmsP is a pointer to a structure that contains other parameters of the client. parmSize is the size in bytes of that structure. More precisely, it is the amount of the structure that you have filled with meaningful information. Details are below. This structure may contain more information in future versions of libxmlrpc_client. If you want defaults for everything in this structure, you may specify a null pointer as clientparmsP.This structure may contain more information in future versions of libxmlrpc_client. If you want defaults for everything in this structure, you may specify a null pointer as clientparmsP. struct xmlrpc_clientparms { const char * transport; struct xmlrpc_xportparms * transportparmsP; size_t transportparmSize; const struct xmlrpc_client_transport_ops * transportOpsP; xmlrpc_client_transport * transportP; xmlrpc_dialect dialect; }; For parmSize, use the XMLRPC_CPSIZE macro. This macro gives the size of the xmlrpc_clientparms structure up through the member you name. Name the last member in the structure that you set. You must set every member up through that one, but for every member after it, xmlrpc_client_create() will assume a default. The reason it's important to use XMLRPC_CPSIZE instead of just setting all the members and using sizeof(struct xmlrpc_clientparms) is forward compatibility. Future versions of libxmlrpc_client might add new members, and you want the client program you write today and compile with that future library to work. xmlrpc_client_create() was new in Xmlrpc-c 1.05 (March 2006). Before that, you must use the global client. The transport, transportparmsP, transportparm_size, transportOpsP, and transportP parameters are all for you specify what to use for the XML transport. The XML transport is the thing that delivers XML to the XML-RPC server and gets XML back. See Client XML Transports. There are two ways to specify the XML transport. In the first, you name a built-in transport type and xmlrpc_client_create() creates one for you. In the other, you supply your own XML transport. Select this option by making transportP null or not present. transport is the name of the XML transport that you want the client to use, as an asciiz string. The available transports (by name) are: transportparmsP and transportparmSize are the address and size, respectively, of a structure that describes parameters whose meanings are specific to the particular transport you are using. struct xmlrpc_xportparms is not "complete" (in a C sense) you always cast between that and a type specific to a transport, named like "xmlrpc_TRANSPORT_xportparms". The transport parameters are described in Client XML Transports. If your clientparms structure is too small to include transportparmsP or transportparmsP is a null pointer, that selects defaults for all transport-specific parameters. The whole transport-specific parameters interface was new in Xmlrpc-c 1.02 (March 2005). Before that, all the parameters that are now transport-specific parameters were always defaults. Select this option by making transport a null pointer and transportP non-null. Details on how to create a transport class are not in this manual. Instead, just look at the interface header files and the source code for the built-in transport classes. transportOpsP points to an operation vector that serves to define the class of XML transport. transportP is a handle for the transport object. Its meaning is entirely determined by the functions identified by transportOpsP. The ability to supply your own transport was introduced in Xmlrpc-c 1.09 (December 2006). dialect selects the dialect that the client will use when it generates the XML for a method call parameter. Note that this has no effect on the dialect the client is able to interpret in responses from a server. The client understands all the dialects. The default is i8. This parameter was new in Xmlrpc-c 1.11 (June 2007). Before that, the dialect is always i8. Prototype: void xmlrpc_client_destroy(xmlrpc_client * const clientP); This function destroys a client object. It releases resources used by the object. clientP is the handle of the client to be destroyed. You got it from a prior xmlrpc_client_create() call. You must not use the client handle for anything after you execute xmlrpc_client_destroy(). You must not call this on a client that has RPCs in progress via the asynchronous RPC facility. To ensure there are none, call xmlrpc_client_event_loop_finish(). xmlrpc_client_destroy() was new in Xmlrpc-c 1.05 (March 2006). Before that, you must use the global client. There can be zero or one global client per program (A "program" is all the code and threads that share one memory space). To create the global client, call xmlrpc_client_init2(). This additionally sets up library global constants, i.e. it has the effect of calling xmlrpc_client_setup_global_const(). You call this once at the beginning of your program, before calling anything else in libxmlrpc_client, and while your program is still one thread. When you're done using the object, call xmlrpc_client_cleanup(). After that, you can start over with a new object by calling xmlrpc_client_init2() again. xmlrpc_client_cleanup() destroys the global client and abandons global constants (i.e. has the effect of calling xmlrpc_client_teardown_global_const(). In truth, this call is unnecessary. All it does is free resources. If your program is sloppy enough to use the global client (as opposed to creating a private client of its own), it might as well be more sloppy and let the operating system clean up the global client automatically as the program exits. Prototype: void xmlrpc_client_init2(xmlrpc_env * const envP, int const flags, char * const appname, char * const appversion, struct xmlrpc_clientparms * const clientparmsP, unsigned int const parmSize); This creates the global Xmlrpc-c client object. You must not call this while the global Xmlrpc-c client object already exists. This function is identical to xmlrpc_create_client() except that it creates the global client rather than a private one, and returns no handle. (With the global client, you use library functions designed for the global client, so they know implicitly to use this client). This function is not thread-safe. Results are undefined not only if you call it and another libxmlrpc_client function at the same time from separate threads, but if you call it while any other thread is running at all. The reason for this restriction is that the function internally calls non-threadsafe functions in other libraries and you don't even know what libraries those are, so the only way you know another thread isn't calling that other library simultaneously is if there is no other thread running. So you typically call xmlrpc_client_init2() at the beginning of a threaded program, while it is still just one thread and let the relevant threads inherit the global client. This is an older, less functional version of xmlrpc_client_init2(). It exists for backward compatibility. Don't use that in new code, but if you're maintaining old code, you can easily guess what it does based on the documentation of xmlrpc_client_init2(). Prototype: void xmlrpc_client_cleanup(void); This destroys the global Xmlrpc-c client object. You call this before exit from the program to release resources. You may also call it in order to create a new global Xmlrpc-c client object, since you can't have more than one existing at once. You must not call this if the global client has RPCs in progress via the asynchronous RPC facility. To ensure there are none, call xmlrpc_client_event_loop_finish_asynch(). This function is thread-unsafe in a way analogous to xmlrpc_client_init2(). The purpose of using libxmlrpc_client is to perform RPCs. The functions in this section do that. All the other functions in the library are just overhead to support these functions. When we say "make an XML-RPC call," we refer only to delivering the XML for it to the server. The server's implementation of it, and the response side of the transaction are not included. "Perform an RPC" means to conduct the entire transaction: make the call, have the server do its thing, and receive the response. The easiest function to use to perform an RPC call is xmlrpc_client_call2f(). As arguments, you supply the URL of your server, the name of the XML-RPC method you are invoking, and the XML-RPC parameters. You supply those parameters as a format string followed by a variable number of arguments as required by the format string. You get the response back as an XML-RPC value. Example: Prototype:Prototype: xmlrpc_value * resultP; xmlrpc_client_call2f(&env, clientP, url, methodName, &resultP, "(ii)", (xmlrpc_int32) 5, (xmlrpc_int32) 7); void xmlrpc_client_call2f(xmlrpc_env * const envP, xmlrpc_client * const clientP, const char * const serverUrl, const char * const methodName, xmlrpc_value ** const resultPP, const char * const format, ...); clientP is the handle of the client to use. You got it from a prior xmlrpc_client_create() call. server_url is the same as the argument to xmlrpc_server_info_new(). (But it's just what you'd expect it to be, so don't feel you have to go read that). method_name is the name of the XML-RPC method you are invoking (it's a name defined by the particular XML-RPC server). This is an ASCIIZ string. resultPP is a pointer to the variable in which the function returns the handle for the XML-RPC result. format is a format string that describes the XML-RPC parameters. The variable arguments (...) are the values for the XML-RPC parameters. Their number, type, and meaning are determined by format. format and the variable arguments together describe an XML-RPC value. That value must be of the array type. Each element of the array is an XML-RPC parameter of the RPC, in array index order. This odd use of an XML-RPC value is a historical mistake (at one time, the xmlrpc_value type was meant to be a general purpose data structure -- an extension to the C language, rather than just an entity for XML-RPC use). Do not let it fool you into thinking that you're specifying an array as the single parameter of the RPC. In XML-RPC, an RPC takes any number of parameters, and the elements of this array are those parameters. If the RPC fails at the server (i.e. the server's response is an XML-RPC fault), xmlrpc_client_call2f() fails. The error code and description in *envP in that case are what the server said in its fault response. xmlrpc_client_call2f() was new in Xmlrpc-c 1.05 (March 2006). Before that, you must use the global client. This is like xmlrpc_client_call2f(), but is more flexible in your ability to specify the XML-RPC parameters and the server information. Prototype: void xmlrpc_client_call2(xmlrpc_env * const envP, struct xmlrpc_client * const clientP, const xmlrpc_server_info * const serverInfoP, const char * const methodName, xmlrpc_value * const paramArrayP, xmlrpc_value ** const resultPP); Example: xmlrpc_env env; xmlrpc_client * clientP; xmlrpc_value * resultP; xmlrpc_server_info * serverInfoP; xmlrpc_value * paramArrayP; xmlrpc_value * addend1P; xmlrpc_value * addend2P; serverInfoP = xmlrpc_server_info_new( &env, "http::/localhost:8080/RPC2"); paramArrayP = xmlrpc_array_new(&env); addend1P = xmlrpc_int_new(&env, 5); addend2P = xmlrpc_int_new(&env, 7); xmlrpc_array_append_item(&env, myArrayP, addend1P); xmlrpc_array_append_item(&env, myArrayP, addend2P); xmlrpc_DECREF(addend1P); xmlrpc_DECREF(addend2P); xmlrpc_client_call2(&env, clientP, serverInfoP, "sample.add", paramArrayP, &resultP); xmlrpc_DECREF(paramArrayP); xmlrpc_server_info_free(serverInfoP); For the XML-RPC parameters, you supply an xmlrpc_value of array type paramArrayP, in which each element of the array is an XML-RPC parameter. This value has the same meaning as the one you specify via format string with xmlrpc_client_call2f(). Note that xmlrpc_client_call2f() is useless when you don't know at compile time what kinds of parameters the method requires. But xmlrpc_client_call2() lets you build up the parameter list using runtime program intelligence. For the server information, you supply serverInfoP. While xmlrpc_client_call2f() lets you specify only the server's URL, serverInfoP can specify more information necessary to work with the server. For example, you may need to authenticate yourself to the server, so you may have to supply some credentials. serverInfoP can do that. See the description of a xmlrpc_server_info object. xmlrpc_client_call2() was new in Xmlrpc-c 1.05 (March 2006). Before that, you must use the global client. This is identical to xmlrpc_client_call2f() except that it uses the global client. Ergo, there is no clientP argument. Also, it uses the more traditional and compact, but harder to read, form in which the return value of the function is used to return information. Prototype: xmlrpc_value * xmlrpc_client_call(xmlrpc_env * const envP, const char * const serverUrl, const char * const methodName, const char * const format, ...); This is like xmlrpc_client_call() except that you specify the XML-RPC method parameters like you do for xmlrpc_client_call2(), which is more flexible. Prototype: xmlrpc_value * xmlrpc_client_call_params(xmlrpc_env * const envP, const char * const serverUrl, const char * const methodName, xmlrpc_value * const paramArrayP); This is like xmlrpc_client_call() except that you specify the server information like you do for xmlrpc_client_call2(), which is more expressive. Prototype: xmlrpc_value * xmlrpc_client_call_server(xmlrpc_env * const envP, xmlrpc_server_info * const serverInfoP, const char * const methodName, const char * const format, ...); This is like xmlrpc_client_call() except that you specify the XML-RPC parameters and server information like you do for xmlrpc_client_call2(). Prototype: xmlrpc_value * xmlrpc_client_call_server_params( xmlrpc_env * const envP, xmlrpc_server_info * const serverInfoP, const char * const methodName, xmlrpc_value * const paramArrayP); All the preceding functions for performing an RPC do the entire thing while the caller waits. But some programs want to use a type of explicit threading where the function returns immediately while the RPC is still in progress so the caller can proceed to different work (perhaps starting more RPCs) and the caller synchs up with the RPC later. libxmlrpc_client provides an asynchronous RPC facility for that. By the way, there's no such thing as "an asynchronous call." "asynchronous" describes the overall relationship of the RPCs to the execution of the caller. The relationship is asynchronous because the two are not in lock step. Saying that an individual call is asynchronous is like saying that an individual note of a song is in 4/4 time. The proper appellation of a call that returns before all the work is done is a "no-wait" call. There is a no-wait version of each of the RPC functions mentioned above. For private clients, the functions are xmlrpc_client_start_rpc() and xmlrpc_client_start_rpcf() (analogous to xmlrpc_client_call2() and xmlrpc_client_call_2f()). For the global client, the functions are xmlrpc_client_call_asynch(), xmlrpc_client_call_server_async(), and xmlrpc_client_call_asynch_params() (note the inconsistency in these names -- it was a mistake). For these no-wait versions, we need to define another entity -- the "RPC request." An RPC request is a request through libxmlrpc_client for an RPC. The main thing an RPC request does is the RPC, but in pathological conditions an RPC request might not do an RPC at all. The RPC request exists before and after the RPC does. The no-wait version of the RPC calls makes an RPC request, as opposed to performing an RPC. Performing of the RPC typically comes later. So there is an additional argument with which you identify a function to be called to handle the response to the XML-RPC call. This is called a response handler. There is another argument that supplies an argument to be passed to the response handler. The response handler gets that argument in addition to information from the XML-RPC response. The response handler is slightly misnamed, because it handles all completions of RPC requests. For example, if a problem prevents libxmlrpc_client from even starting the RPC, which means there is no XML-RPC response, the response handler still gets called. It's natural to believe that the response handler is a completion function that gets called the moment the RPC request completes, like an I/O interrupt. But that's not what it is. When you start an RPC (e.g. by calling xmlrpc_client_start_rpc()), you must eventually call an RPC finishing function such as xmlrpc_client_event_loop_finish(). What that does is complete all RPC requests that have been started. This includes waiting for RPCs to complete if they haven't already. For each of these RPC requests (when the RPC has completed), the RPC finishing function calls the response handler. It also does other things that are necessary to complete the RPC request, so you must call it eventually even if you don't care about the completions. An RPC finishing function doesn't specify a particular RPC to finish. One call finishes all RPCs a particular client has started. For a private client, the RPC finishing functions are xmlrpc_client_event_loop_finish() and xmlrpc_client_event_loop_finish_timeout(). The former waits as long as it takes for all the client's RPCs to complete; the latter returns after a timeout you specify so you can do other stuff and come back to it. For the global client, the equivalent finishing functions are xmlrpc_client_event_loop_finish_asynch() and xmlrpc_client_event_loop_finish_asynch_timeout(). If you find yourself needing timeouts, you should consider dumping the whole aynchronous RPC facility and using general purpose threading as recommended above. But it does allow you to intersperse XML-RPC transactions with other work in simple ways. It does not, however, give you a way to abandon a long-running RPC. It gives you a way to temporarily suspend waiting for an RPC to complete, but you must eventually wait for every RPC to complete. There is no such thing as a cancelled RPC in this facility. The timeout RPC finishing functions give you no way to know whether the finishing function timed out or the RPCs completed. If you need to know all the RPCs are completed, call xmlrpc_client_event_loop_finish_async(). An RPC finishing function will return early if the process receives a signal (assuming the signal does not terminate the process). The interface does not give you any good way to know whether the function returned because a wait was interrupted by a signal, because all the RPCs completed, or because the wait timed out. But as with the timeout, interrupting the wait for the RPC to complete does not excuse you from eventually completing the RPC. So you have to use your imagination to find a way to make sure you eventually give an RPC finishing function a chance to complete every RPC in spite of interruptions. Some day, we will fix this interface. Don't confuse interrupting the finishing function with interrupting the RPCs it's trying to finish. When you set a client's interrupt flag, that causes all the client's RPCs to abort. You still need a subsequent call to a finishing function to cause the RPCs to finish failing and free resources and such, but that call will be quick, as the RPCs are no longer trying to do anything but clean up. The exact nature of the asynchronicity depends highly on the client XML transport involved. The no-wait call may in actuality wait for the RPC to finish. Or it might not even start it, and the RPC finishing function might do all the work. See the sections on the transports (e.g. curl) for details. By the way, people sometimes like to refer to the response handler as a callback. It isn't. A callback is something that gets called within the context of the call with which it is associated. I.e. A calls into B, specifying a C callback. Before returning to A, B calls C. An example of using a callback is a sort routine. The sort routine gets 2 arguments: a set of values and a collating function that compares two such values. The sort routine calls the collating function on various pairs of values as part of putting them all in order. The calling of the collating function is a callback. Whenever you make an RPC request via one of the functions that does not wait for it to complete, you supply a response handler function and a parameter for it. If the function succeeds, the response handler eventually gets called. If the function fails, the response handler never gets called. Note that I'm talking about the library function itself failing. The RPC or RPC request might fail even though the function that made the request succeeded, and in that case the response handler definitely gets called. The primary purpose of the response handler is to process the XML-RPC response to the XML-RPC call that was requested. But in pathological cases, the request does not result in any XML-RPC call being made; or any XML-RPC response being received; or the response being capable of being processed. In those cases, the response handler handles the failure. The xmlrpc_response_handler type is a function pointer to a response handler. The prototype of response handler is as follows: void (*xmlrpc_response_handler) (const char * server_url, const char * method_name, xmlrpc_value * param_array, void * user_data, xmlrpc_env * faultP, xmlrpc_value * resultP); server_url, method_name, and param_array are the information you provided to describe the RPC when you requested it. user_data is the response handler argument you specified when you requested the RPC. faultP is a pointer to an error environment variable that describes either the error response to the XML-RPC call or the client-side failure of libxmlrpc_client to make the XML-RPC call or process its response or indicates that the RPC and RPC request were successful. resultP is a pointer to an XML-RPC value which is the result of the RPC. This is undefined unless faultP indicates success. None of the objects passed to the response handler have references to them that belong to the response handler, so there is no reference for the response handler to release. The caller naturally maintains its own reference on the objects for the duration of the call, so you know they aren't going to go away. A call to any client function may wait for a response handler to run for any RPC of that same client, whether the RPC is related to the call or not. Keep that in mind in ordering your resources to avoid deadlock. In particular, you cannot call any client function against the same client within your response handler. You can do an RPC at the XML level if you want: build your own call XML and parse the response XML. xmlrpc_client_transport_call() merely transports XML to the server and collects the XML the server sends back. It does not look at the XML at all; in fact, it need not even be XML. Example: xmlrpc_mem_block * respXmlP; xmlrpc_mem_block * callXmlP; xmlrpc_value * paramP; xmlrpc_value * sumP; paramP = xmlrpc_build_value(&env, "(ii)", 5, 7); XMLRPC_MEMBLOCK_NEW(char, callXmlP, 0); xmlrpc_serialize_call(&env, callXmlP, "sample.add", paramP); xmlrpc_client_transport_call(&env, serverP, callXml, &respXml); sumP = xmlrpc_parse_response(&env, XMLRPC_MEMBLOCK_CONTENTS(char, respXmlP), XMLRPC_MEMBLOCK_SIZE(char, respXmlP)); XMLRPC_MEMBLOCK_FREE(responseXmlP); XMLRPC_MEMBLOCK_FREE(callXmlP); An xmlrpc_server_info structure is an object that describes an XML-RPC server. It identifies the server and tells how to talk to it. You can use an xmlrpc_server_info object as input to a function that makes an XML-RPC call to the indicated server. This is not an object that represents the server itself -- just information about it. The key distinction is that the object contains no information about the state of the server or of the client's use of the server. There is no expectation that all access to that server will be via this object. An xmlrpc_server_info object contains the following information about a server: To create an xmlrpc_server_info object, call xmlrpc_server_info_new(). To destroy one, call xmlrpc_server_info_free(). xmlrpc_server_info_new() takes the server's URL as an argument and creates an object that says no identification is required. An example URL argument is: The URL must be an absolute URL (you can recognize an absolute URL by the fact that has a double slash after the "http:") As the URL of an XML-RPC server, it must be an HTTP URL (that means, among other things, that the URL specifies a scheme of "http"). You can create an xmlrpc_server_info object containing the same information as an existing one with xmlrpc_server_info_copy(). In an xmlrpc_server_info object, you indicate what kind of authentication (and identification) you want to do with the server. A freshly created xmlrpc_server_info specifies no authentication at all. Functions in this section declare that you're willing to authenticate various other ways. It is up the the client XML transport whether actually to do it or not; not all transports know how to do all of them. Furthermore, there is negotiation with the server involved. Both the server and client have to be willing to use a particular method. All of the XML transports can do HTTP basic authentication. Only the Curl transport can do the others. And depending on the version of Curl library with which you link your program, it may not be able to do some of those. Any Curl library built after 2005 should at least be able to do digest authentication. void void xmlrpc_server_info_set_user(xmlrpc_env * const envP, xmlrpc_server_info * const serverInfoP, const char * const username, const char * const password); This sets the username and password to be used in identifying and authenticating the client, for those authentication methods that involve usernames and passwords. This function by itself does not enable any authentication. You must separately call a function such as xmlrpc_server_info_allow_auth_basic as well.. This function was new in Xmlrpc-c 1.13 (December 2007). void void xmlrpc_server_info_allow_auth_basic(xmlrpc_env * const envP, xmlrpc_server_info * const serverInfoP); This sets the xmlrpc_server_info object to indicate that HTTP basic authentication is allowed with the server. You must set a username and password with xmlrpc_server_info_set_user() before calling this, or it will fail. Use xmlprc_server_info_disallow_auth_basic() to undo this. envP is an error environment variable pointer. serverP identifies the xmlrpc_server_info object in which the information is to be changed. This function was new in Xmlrpc-c 1.13 (December 2007). void void xmlrpc_server_info_disallow_auth_basic(xmlrpc_env * const envP, xmlrpc_server_info * const serverInfoP); This sets the xmlrpc_server_info object to indicate that HTTP basic authentication is not allowed with the server. This undoes what xmlprc_server_info_allows_auth_basic() does. This function was new in Xmlrpc-c 1.13 (December 2007). This is analogous to xmlrpc_server_info_allow_auth_basic(), except for HTTP digest authentication. This is analogous to xmlrpc_server_info_disallow_auth_basic(), except for HTTP digest authentication. This is analogous to xmlrpc_server_info_allow_auth_basic(), except for HTTP GSS-Negotiate authentication. This is analogous to xmlrpc_server_info_disallow_auth_basic(), except for HTTP GSS-Negotiate authentication. This is analogous to xmlrpc_server_info_allow_auth_basic(), except for HTTP NTLM authentication. This is analogous to xmlrpc_server_info_disallow_auth_basic(), except for HTTP NTLM authentication. This function is obsolete. In new code, use xmlrpc_server_info_allow_auth_basic and xmlrpc_server_info_set_user instead. void xmlrpc_server_info_set_basic_auth( xmlrpc_env * const envP, xmlrpc_server_info * const serverP, const char * const username, const char * const password); This has the same effect as and xmlrpc_server_info_set_user followed by xmlrpc_server_info_allow_auth_basic.. There is a facility for interrupting long-running client functions with a signal. Two examples of how this is useful: Control-C typically causes the system to send a signal of class SIGINT to the process, so you need that signal to interrupt your xmlrpc_client_call2(). Another way to put a time limit on an RPC is use the timeout parameter of the Curl transport. The program interrupted_client in the examples directory of the Xmlrpc-c source tree is a complete example of a client program whose long-running RPCs can be interrupted. It was new in Xmlrpc-c 1.13 (December 2007). The way it works is that you set up a C variable somewhere that tells whether you want an interruption or not. It's called an interrupt flag. Set it to 0 to mean "carry on" and 1 to mean "interrupt." Tell Xmlrpc-c in advance where your flag is, with xmlrpc_client_set_interrupt(). Before you call an Xmlrpc-c function, set the interrupt flag to 0. Set up a signal handler that sets the interrupt flag to 1 and returns. Xmlrpc-c library functions check that flag at certain times and, seeing it set, abort what they are doing and return a failure. So the only thing left to consider is just when the library function checks the interrupt flag. Ideally, it checks it before doing anything that could take a while, and shortly after every signal is handled. But it doesn't always meet that ideal; see Limitations. There is no way to interrupt a function of the global client. Remember that if you don't set up a signal handler, a signal typically terminates the process. And if you set up to block a signal, the process never receives it, so your signal handler does not run and a long-running system call keeps running. When you set the interrupt flag to 1, all RPCs then in progress via the client terminate soon, aborting and failing if necessary. This means in addition to a libxmlrpc_client function returning soon, the server also may see an unfinished RPC. XML-RPC has no concept of aborting an RPC, so what the server sees may just be an abruptly truncated conversation. This is different from the timeout in the asynchronous RPC interface. The timeout simply makes xmlrpc_client_event_loop_finish_timeout() return before the RPCs are finished. The RPCs still exist and you can (and must) eventually finish them with another call. Example: static int interrupt; xmlrpc_client * clientP; xmlrpc_client_create(&env, XMLRPC_CLIENT_NO_FLAGS, NAME, VERSION, NULL, 0, &clientP); xmlrpc_client_set_interrupt(clientP, &interrupt); interrupt = 0; Prototype: void xmlrpc_client_set_interrupt(xmlrpc_client * const clientP, int * const interruptP); This function declares an interrupt flag for a client. Henceforth, you can interrupt various client operations by setting that flag to a nonzero value. To clear the interrupt flag, specify NULL for interruptP. If you call this while the client is in the middle of something, results are undefined. Normally, you call this only as part of setting up a client, shortly after you create it. This function was new in Xmlrpc-c 1.10 (March 2007). The system is imperfect. Only some things are eligible for interruption. Other long-running things may just ignore signals and keep you waiting. One particular thing that is interruptible is, with the curl XML transport, the wait for the XML-RPC server to respond after the client has sent the XML-RPC call. When a library function runs for a long time, it's usually because it is executing a system call that won't complete until something external happens, such as the system receives an HTTP response over the network. The nature of Unix signals is such that a signal will usually interrupt any system call taking place at the time of the signal. So after your signal handler returns, the system call returns (fails). The libxmlrpc_client function typically checks the interrupt flag soon after such a system call completes. If the flag is set, the function fails immediately. Otherwise, it just repeats the failed system call and the wait goes on. So you should see the Xmlrpc-c library function return soon after the signal. In many cases that an Xmlrpc-c function ignores or delays response to a signal, it's because Xmlrpc-c uses a subordinate library that does not respect signals. If a library function that Xmlrpc-c calls does not return when the process receives a signal, there is nothing Xmlrpc-c can do to respond to the signal. The library of greatest significance, with the Curl transport, is the Curl library ("libcurl"), which performs HTTP functions. Before March 2007, that library has a limitation in that it may take up to a second after a signal arrives for it to abort its wait (e.g. for a response from the server). (In case you're curious, this limitation is due to the way the interruptibility function evolved from another library function -- the ability to make periodic progress report callbacks, which itself evolved from a library function that actually prints progress reports on the terminal, once per second). Another case where the interruptibility is not what you would expect is where the underlying system is older Linux. Older Linux kernels do not have a pselect() system call. Consequently, the GNU C Library on a system with one of these kernels cannot implement POSIX pselect(). libxmlrpc_client depends upon POSIX pselect() for its interruptibility. On these systems, the GNU C Library implements its pselect() function with select() and changes the signal mask before and after calling select(). This differs from what POSIX requires in that if a signal arrives before or just after the select() begins, it will not stop the select() from waiting. Consequently, if a signal arrives within a very narrow window of time, and your signal handler signals that libxmlrpc_client should abandon whatever it's doing, the libxmlrpc_client function that calls the GNU C Library's pselect() function will wait anyway. Thus, on such a system you cannot depend upon a signal interrupting a libxmrpc_client call. It is still useful in many cases, though. For example, if you're trying to respond to control-C, this just means that on extremely rare occasions, the user will have to hit Control-C again. For timeouts based on alarm signals, you may want to have your signal handler reschedule an alarm signal for a short while later just in case the one it's handling falls into one of those blind windows. A pselect() system call showed up in kernel.org Linux in an early 2007 release. I don't know what if any GNU C libraries and Linux operating systems take advantage of it. Before Release 1.11 (June 2007), libxmlrpc_client may delay responding to a signal by up to a second when it arrives while a synchronous call function (e.g. xmlrpc_client_call2()) is executing, even with a current Curl library. (That's because the older libxmlrpc_client uses libcurl's curl_easy_perform() synchronous interface for that, and curl_easy_perform() is not properly interruptible. All it does is poll for interruptions once per second. Current libxmlrpc_client uses the curl "multi" interface instead). Before Xmlrpc-c 1.10 (March 2007), there is no way to interrupt a libxmlrpc_client call (and not have the OS terminate your program). The layer of libxmlrpc_client that delivers the XML for an RPC to the server and receives the XML for the response back is the client XML transport, and you can choose among several of them. In order for the RPC to be true XML-RPC, this transport must use HTTP to transport the XML, but in theory it could be something else entirely. This section describes the individual client XML transports. A normal libxmlrpc_client can use any of these; you choose one when you create the client with xmlrpc_client_init2(). But people often create a variation on libxmlrpc_client that omits transports they don't want or for which they don't have the prerequisites. If you don't care (and if you're not doing anything fancy, there's really no reason to care), you can let libxmlrpc_client choose a transport for you by specifying NULL in place of the transport pointer on your xmlrpc_client_init2() call. This transport uses the widely used Curl library to transport the XML over HTTP. People usually render the name of this library as "cURL". We use standard typography instead in this manual, because it is easier to read. The Curl library has a concept of a session (it's represented in the API by a CURL handle). The Xmlrpc-c Curl transport uses sessions like this: The transport uses a single Curl session for the life of the transport (which is normally the life of the Xmlrpc-c client) for all RPCs you perform through the synchronous interface (the xmlrpc_client_call2() function). But every RPC you perform via the asynchronous interface gets its own Curl session. This latter situation is not desirable; it exists because of limitations of the Curl API -- A session cannot be used simultaneously by multiple threads. Curl sessions matter for these reasons: stores cookies the server sends and includes them in future requests, as defined by HTTP cookie standards. But it does so only within the scope of a Curl session. A cookie session is a Curl session and the Curl transport treats persistent cookies as session cookies -- they do not outlive the Curl session. The transport works with SSL servers, i.e. https: URLs. By default, the Curl transport will refuse to talk to the server (i.e. abort and fail your RPC) unless the server proves it is who you expect it to be. There are two parts to establishing identity: identification and authentication (of identity). Identification is claiming to be someone. Authentication is proving the claim. You control these two things independently with the Curl transport. In SSL, a server identifies itself by presenting a certificate. The certificate contains a Common Name and optionally Subject Alternate Names, which are normally host names -- the same names you put in a URL to identify the server. An SSL server authenticates its certificate by providing a digital signature. It may also provide a signature from someone else authenticating the server's signature, and so on up to somene whose signature you recognize. The Curl library comes with a few high level signatures, so as long as you trust whoever gave you the Curl library, the chain of trust will normally end with a signature you recognize. You can make the Curl transport bypass the authentication of the server's identity (i.e. bypass making sure the server is who its certificate says it is) with the ssl_no_verifypeer option. You can make the Curl transport bypass identification (i.e. bypass making sure the host name that the server claims via its certificate matches your URL) with the no_ssl_verifyhost option. Note that there isn't much point to authenticating the server's certificate if you aren't going to use the authenticated host name. The Curl transport doesn't give you any way to use it except to abort the RPC if it doesn't match the URL. Similarly, there isn't a whole lot of reason to verify an unauthenticated host name, because any crook who would accept network traffic addressed to someone else would also forge a certificate saying he is that someone else. It is common to disable authentication and identification to work around a technical problem wherein you're unable to confirm the server's identity, but don't really think there's any risk that the server is an impostor. A common technical problem that requires you to use no_ssl_verifypeer in order to do any RPCs is that you don't have certificate authority (CA) certificates properly installed on your client system. The details of the verification of server identity, including what files you need on your system to make it work, are all handled by the Curl library. See Curl documentation for details. The Curl transport has a bunch of transport parameters to control the details of the SSL verification. In HTTP, the "user agent" is the program through which the user makes an HTTP request. For class web browsing, it is the web browser program, such as Internet Explorer 6.0. An optional header in the HTTP request, the User-Agent header, identifies the user agent. It conventionally does this with a value that looks like this: "prog1/1.0 prog2/2.1 prog3/0.9beta". It is a sequence of name/version pairs, identifying components at successively lower layers. The Xmlrpc-c Curl transport by default does not include a User-Agent header in its requests. You can cause it to include via its user_agent parameter. When user_agent is non-null, the transport includes a User-Agent header, and its value is the string identified by user_agent, concatenated with a name/version pair for Xmlrpc-c and a name/version pair for Curl, all separated by spaces. In HTTP 1.1 (but not 1.0), the client can send the header "Expect: 100-continue", which tells the server that the client isn't going to send the body of the HTTP request until the server tells it to by sending a "continue" response. The server is obligated to send that response. The point of this is that the client doesn't want to spend a lot of resources generating and sending a body that the server is just going to reject based on the headers (for example because the body, according to the headers, is too big for the server to handle). But some servers that are otherwise HTTP 1.1 don't hold up their end of the deal: they just ignore the Expect header and leave the client hanging. (This is presumably just a matter of implementation mistake or expedience). Consequently, the Curl transport never sends Expect (and, more importantly, never expects a continue response). BUT: Before Xmlrpc-c 1.19 (June 2009), with a recent libcurl library and a large XML-RPC call, the transport does send the Expect and waits up to 3 seconds for the continue response. If the server doesn't send the response, the transport goes ahead and the only problem with the transaction is that it takes 3 seconds longer than it should. (I'm being coy about what version of libcurl and what size of body because I don't know). struct xmlrpc_curl_xportparms { const char * network_interface; xmlrpc_bool no_ssl_verifypeer; xmlrpc_bool no_ssl_verifyhost; const char * user_agent const char * ssl_cert; const char * sslcerttype; const char * sslcertpasswd; const char * sslkey; const char * sslkeytype; const char * sslkeypasswd; const char * sslengine; xmlrpc_bool sslengine_default; enum xmlrpc_sslversion sslversion; const char * cainfo; const char * capath; const char * randomfile; const char * egdsocket; const char * ssl_cipher_list; unsigned int timeout; }; Example: struct xmlrpc_clientparms clientParms; struct xmlrpc_curl_xportparms curlParms; xmlrpc_client * clientP; curlParms.network_interface = "eth1"; curlParms.no_ssl_verifypeer = TRUE; curlParms.no_ssl_verifyhost = TRUE; curlParms.user_agent = "myprog/1.0"; clientParms.transport = "curl"; clientParms.transportparmsP = &curlParms; clientParms.transportparm_size = XMLRPC_CXPSIZE(user_agent); xmlrpc_client_create(&env, 0, "myprog", "1", &clientParms, XMLRPC_CPSIZE(transportparm_size), &clientP); network_interface is the Curl library's "interface" option. See documentation of the Curl API for details (the best documentation of Curl API options, by the way, is the manual for the 'curl' program that comes with it and has command line options to control all the API options). But essentially, it chooses the local network interface through which to send RPCs to the server. It causes the Curl library to perform a "bind" operation on the socket it uses for the communication. It can be the name of a network interface (e.g. on Linux, "eth1") or an IP address of the interface or a host name that resolves to the IP address of the interface. Unfortunately, you can't explicitly state which form you're specifying, so there's some ambiguity. Examples: eth1, 64.171.19.66, giraffe.giraffe-data.com. The value is an ASCIIZ string. You can free its storage as soon as xmlrpc_client_init2() returns. If this parameter is NULL or not present, the Curl transport uses whatever is the default fo the Curl library. This parameter was new in Xmlrpc-c 1.02. no_ssl_verifypeer and no_ssl_verifyhost are meaningful only for an SSL connection (a connection to a server with a "https:" URL). They control how secure the connection is. no_ssl_verifypeer = true means the Curl transport just believes that the server is who its certificate says it is. False means the Curl transport will refuse to connect to the server if it can't prove that it is who it says it is. It also means it doesn't care if you have proper CA certificates installed on the client system. no_ssl_verifyhost = true means the Curl transport doesn't care if the server is the one it was trying to reach. False means the Curl transport will refuse to connect to the server if its certificate does not match its URL. Note that all combinations of these two options are meaningful (though not necessarily useful). The default value is false. These parameters were new in Xmlrpc-c 1.03 (June 2005). Before that, the Curl transport did whatever is the default for the Curl library involved. In reasonably modern Curl, that's the same as the parameters being false, but in really old Curl, the default was different. user_agent is a string that identifies the user of the Xmlrpc-c client object, for purposes of a User-Agent HTTP header. If the parameter is NULL or not present, the HTTP request has no User-Agent header. See the discussion of user agents above. The user_agent parameter was new in Xmlrpc-c 1.03 (June 2005). Before that, the Curl transport never used a User-Agent header. These are options that control the details of SSL verification of the server identity. You can usually do without any of these; the defaults are the Curl library defaults, which are typically all you need. All of these options correlate directly to Curl options of the same name, so rather than document them here, we refer you to the Curl documentation. All of these parameters were new in Curl 1.04 (November 2005). timeout is the maximum time in milliseconds that the transport will allow for network communications. If it takes longer than that, the transaction fails without waiting any more. This is just a limit on certain aspects of network communication; it does not include time that the server takes between receiving a call and sending the response. Exactly which parts of network communication (name server lookup, ARP, TCP connection, data transfer, etc.) are subject to this time limit varies from one system to another and I can't say any more specifically what is covered. The actual timeout is what you requested rounded up to the next second. In future Xmlrpc-c, it may have better resolution. timeout=0 means no timeout (wait as long as it takes). If timeout is 0 or not present, there is no timeout -- The transport waits as long as it takes. The Curl library must be Release 7.10 or newer. If it is not and you specify timeout, creation of the transport fails. This parameter was new in Xmlrpc-c 1.13 (December 2007). Note that as a matter of good design, it is often better to use an alarm signal (SIGALRM) to interrupt a transport operation instead of the timeout parameter. An alarm signal sets a master timeout on a whole sequence of operations without all the layers having to be aware of it. I.e. you don't have to have "timeout" arguments on all your functions and have them all watch the clock. If you are writing code that doesn't own the whole environment (e.g. a general purpose library), you can't generally set up an alarm, but in that case you probably don't want to establish an arbitrary time limit either, because the appropriate limit depends upon context. It's often better to have the top level code, which does own the whole environment, set up an alarm and just have your code be interruptible by signals (as libxmlrpc_client is). On the other hand, a hardcoded timeout value is by far the easiest to code solution to the annoying problem of unresponsive servers. The Curl transport before Xmlrpc-c 1.04 serializes RPCs. This means your program as a whole will never start an RPC before the previous one has completed. In 1.04 and later, the Curl transport performs RPCs concurrently (via the Curl library's "multi" facility). But this applies only to the asynchronous interface. If you use multiple operating system threads to request multiple RPCs simultaneously via the synchronous interface, the Xmlrpc-c Curl transport will serialize them -- A Curl transport will never start an RPC before the previous one has completed. But starting in 1.05, you can have multiple Curl transports in a program (one per client). So your program as a whole might have multiple concurrent RPCs going, via the synchronous interface, by using multiple operating system threads, each using its own client). The Curl transport does one DNS server host name lookup at a time, and the timeouts in xmlrpc_client_event_loop_finish_asynch_timeout() and xmlrpc_client_event_loop_finish_timeout() are ineffective against long-running name lookups. This is due to a weakness in the Curl library (at least as late as version 7.16.1 -- January 2007). Before Xmlrpc-c 1.04 (November 2005), the timeouts don't work at all; the two finish_async functions are identical. Your use of libxmlrpc_client with the Curl transport may interfere with other uses in your program of the Curl library. This is primarily because of a weakness in current Curl (at least as late as Curl 7.12.2, November 2005) in which a call to its global constant teardown routine, curl_global_cleanup() tears down the global constants for every user of the library in the program. xmlrpc_client_teardown_global_const() makes such a call to curl_global_cleanup(). Therefore, you must make sure that you do not call xmlrpc_client_teardown_global_const() while anything else in your program is using the Curl library. As of December 2005, a request is being considered by the maintainer of the Curl library to change it such that it remembers how many modules are referring to its constants and tears them down only when the last referrer goes away. That would eliminate this interference. Also note the thread-unsafety of xmlrpc_client_setup_global_const() and xmlrpc_client_teardown_global_const() (explained in their respective sections), which is particularly relevant when you have other threads using the Curl library. xmlrpc_client_init2() and xmlrpc_client_cleanup() do the same things as xmlrpc_client_setup_global_const() and xmlrpc_client_teardown_global_const() in this respect, so the same caution applies to them. This transport uses the classic Libwww (W3C Libwww to be exact) libraries to transport the XML over HTTP. This is less convenient, less documented, and less functional than Curl, so I don't know any reason to use it unless you have easier access to the Libwww libraries than to the Curl libraries, and therefore built a special version of libxmlrpc_client that doesn't have Curl capability. One way in particular that using Curl is easier is that when something prevents Curl from communicating with the server, it reports a fairly specific indication of why (which libxmlrpc_client then forwards to you. In contrast, Libwww often just tells you "something failed" and you have to guess or trace the Libwww code. ignores cookies the server sends and sends no cookies to the server. In Xmlrpc-c 1.00 (October 2004) through 1.02 (April 2005), the transport recognizes a single cookie named "auth". This is a cookie shared by all servers; the transport ignores the domain that the server tries to associate with the cookie. The cookie lives for the life of the transport (i.e. the life of the client). The function was created for some special purpose lost in history, during Xmlrpc-c's pre-1.00 dark age. The Xmlrpc-c maintainer eventually studied it enough to determine that it was more detrimental than beneficial. The asynchronous facility, in at least one experiment Bryan did, was rather disappointing used with the libwww transport. The no-wait call had no visible effect, and the RPC finishing function did each RPC serially (waits for one to complete before starting the next one). Because libwww is undocumented and its code too complex to read easily, Bryan did not determine if there are circumstances under which it behaves better. This is available on Windows only. This function tells you the name of the default XML transport. It's always "libwww", but in future versions of libxmlrpc_client or other libraries that are variations on libxmlrpc_client, it might be something else. The default XML transport is the one that gets used if you don't specify a particular XML transport when you create the Xmlrpc-c client object. const char * xmlrpc_client_get_default_transport(xmlrpc_env * const envP); The library is mostly thread-safe. See Thread Safety for general information. But the objects provided by the library are thread-safe only if you use the Curl XML transport. With Curl, you can call a function that operates on a client object or RPC object while another thread is also calling a fuction that operates on the same client or RPC. But with the other transports, you can't. If you are using a thread-unsafe XML transport, you can still do RPCs from multiple threads; you just have to give each thread its own client so that a particular client is accessed by only one thread. And you can't share RPCs between threads either (though it probably would never occur to you to do that anyway). You can get multiple RPCs running concurrently without using multiple operating system threads. libxmlrpc_client's Asynchronous RPC facility allows a single operating system thread, using a single client, to have multiple RPCs in progress at once. (It is, in fact, a threading facility). Just how concurrent two RPCs are depends upon the Xmlrpc-c XML transport you use and on the servers and network. Multiple operating system threads and the asynchronous interface make it possible for multiple RPCs to be in progress from your program's point of view (you can start one without waiting for the previous one to complete). But the XML transport may very well serialize the RPCs -- keep them in a list and perform them one at a time. And if the XML transport doesn't do that, the server may do it -- complete one RPC before accepting the HTTP connection for the next. See the Curl transport documentation for details on the concurrency characteristics of that transport type. It is a basic design goal of Xmlrpc-c that no matter how broken a server is, a client of the server will tolerate it. The server cannot make the client crash. When a client operation fails because of a bad server, failure information tells you the server is the problem. However, it is of course possible for your own client code that uses Xmlrpc-c libraries to be written poorly enough that a bad server would bring it down. For example, if an RPC produces a negative integer result and Xmlrpc-c client facilities dutifully pass that up to your code, but your code is expecting a positive integer, you could have big trouble. One common form of server defect that is really hard for a client to tolerate is where the server simply doesn't respond. The server could be hung in an infinite loop or deadlock and to the client it looks just like an RPC that takes a long time to execute. A very common form of this is where the server is actually non-existent. You call xmlrpc_client_call2() to execute an RPC on this hypothetical server and while you'd expect the call to fail immediately with a fault string telling you there's no such server, it just hangs. In truth, it normally doesn't really hang. It does in fact fail eventually, but it can take as long as several minutes to do it. What this is is a weakness of TCP. The Xmlrpc-c code is simply trying to make an HTTP connection to the server and the operating system TCP connection-making service takes all this time to admit that there's no server there. It's probably doing a lot of retrying in the meantime, thinking the lack of response may be due to packets getting lost in the network. IP generally doesn't have a means to indicate to a sender that a packet can't be delivered because the "to" address doesn't exist; rather, the network merely discards the packet and the sender is supposed to give up if it doesn't get a response after a while. In some situations, it is actually possible for the network to tell the client that the server isn't there, but for security reasons, it doesn't (information is power; the network wants to keep the hackers guessing). Now, this several minutes of waiting is probably annoying because you know that if there hasn't been a response to a couple of tries within a few seconds, there isn't going to be one. But it's a function that's buried deep in your operating system, so there is no way Xmlrpc-c can spare you the wait. What you can do about this is the general solution to all of the client hangs caused by bad servers: Use an alarm signal. On Unix, a process can ask the OS to send an alarm signal to it at a specified time in the future (see the standard C library function alarm()). You can make that signal interrupt your call to libxmlrpc_client When you interrupt a libxmlrpc_client function that was waiting on a server, it breaks whatever TCP connection there was, so if the server really is functioning, it will find out you aborted the RPC and you don't have to worry about orphan results coming in later. By the way, another apparent solution that you may think of is to use the asynchronous RPC interface. That doesn't help with the nonexistent server problem, though, because an RPC start function doesn't return until it has at least delivered the call to the server. A slightly more sophisticated approach to the venerable but primitive alarm signal is to use OS threads. Create a thread to do the RPC. Then wait, with timeout, for the thread to complete. If the wait times out, send a signal to the RPC thread you created to interrupt it. Finally, with the Curl XML transport, you can simply specify a timeout value for all XML transportation functions, using the timeout transport-specific client parameter. This is easy, but not as flexible as using a signal. Per the XML-RPC spec, an HTTP response which is an XML-RPC response must have a Content-Length header and its value must be the size of the XML-RPC response message. libxmlrpc_client facilities may fail the XML-RPC call if the response does not have the required Content-Length header or does not contain the specified amount of contents. With the Curl or Libwww XML transport, libxmlrpc_client does not require a Content-Length header, but with the Wininet XML transport, it does. An HTTP chunked response does not have a Content-Length header, so cannot be an XML-RPC response. But there are some pseudo-XML-RPC servers that send an HTTP chunked response without a Content-Length header or a pseudo-HTTP chunked response with a Content-Length header with a liberal (greater than actual) value. libxmlrpc_client facilities accept and process naturally an HTTP chunked response, with the Curl and Libwww XML transports. With the Wininet transport, libxmlrpc_client accepts a pseudo-HTTP response which is chunked and also has a Content-Length header. That header must specify the actual content length (i.e. the number of characters after decoding according to the Transfer-Encoding header). Even though an HTTP server never sends a chunked response with a Content-Length header, a true HTTP client ignores the Content-Length header in a chunked response -- i.e. it tolerates the server error. An XML-RPC client need not. This is a rare case where an XML-RPC client is excused from also being an HTTP client. As stated above, the Wininet XML transport takes advantage of that excuse, so a libxmlrpc_client-based XML-RPC client with a Wininet transport is not an HTTP client. This section describes some facilities and techniques for debugging programs that use libxmlrpc_client. The trace facilities described here write messages to the Standard Error file descriptor via the Standard Error stream of the standard C library (stderr). So make sure you have one. Many server processes don't (they explicitly close the one that the system setup code provides). If you set the XMLRPC_TRACE_XML environment variable to 1, the libxmlrpc_client transports will print to Standard Error the XML of the call and of the response, in addition to their normal processing. To be exact, what the transport prints is the bytes that are presented as XML -- it doesn't know or care at this point whether it is valid XML. The transport writes to Standard Error each byte that it recognizes as a printable ASCII character (except backslash). For every other byte, it writes a traditional backslash escape sequence (for example, newline is "\b"). For backslash ("\"), the transport prints a double backslash ("\\") so you know it is not part of an escape sequence for a nonprintable character. The transport prints each line of the XML on a separate line (i.e. after it prints "\n" for a newline in the XML, it prints an actual newline). Note that this same environment variable does the same thing for Xmlrpc-c servers. To tell you what version (release, level) of libxmlrpc_client is linked to your program, the library has 3 external integer variables. They are declared in <xmlrpc-c/client.h> as follows: extern unsigned int const xmlrpc_client_version_major; extern unsigned int const xmlrpc_client_version_minor; extern unsigned int const xmlrpc_client_version_point; These symbols were new in Xmlrpc-c 1.13 (December 2007).
http://xmlrpc-c.sourceforge.net/doc/libxmlrpc_client.html
crawl-002
refinedweb
11,012
55.03
This is a Monthly Rapid Update release of the MySQL Enterprise Server 5.0. This section documents all changes and bug fixes that have been applied since the last MySQL Enterprise Server release (5.0.38). Functionality Added or Changed MySQL Cluster: The behavior of the ndb_restore utility has been changed as follows: It is now possible to restore selected databases or tables using ndb_restore. Several options have been added for use with ndb_restore --print_datato facilitate the creation of structured data dump files. These options can be used to make dumps made using ndb_restore more like those produced by mysqldump. For details of these changes, see ndb_restore — Restore a MySQL Cluster Backup. (Bug #26899, Bug #26900) If a set function Swith an outer reference S( outer_ref) S( outer_ref) S( const) ANSISQL mode is enabled. (Bug #27348) Added the --service-startup-timeoutoption for mysql.server to specify how long to wait for the server to start. If the server does not start within the timeout period, mysql.server exits with an error. (Bug #26952) Prefix lengths for columns in SPATIALindexes are no longer displayed in SHOW CREATE TABLEoutput. mysqldump uses that statement, so if a table with SPATIALindexes containing prefixed columns is dumped and reloaded, the index is created with no prefixes. (The full column width of each column is indexed.) (Bug #26794) The output of mysql --xmland mysqldump --xmlnow includes a valid XML namespace. (Bug #25946) If you use SSL for a client connection, you can tell the client not to authenticate the server certificate by specifying neither --ssl-canor --ssl-capath. The server still verifies the client according to any applicable requirements established using GRANTstatements for the client, and it still uses any --ssl-ca/ --ssl-capathvalues that were passed to server at startup time. (Bug #25309) The syntax for index hints has been extended to enable explicit specification that the hint applies only to join processing. See Index Hints. This is a new fix for this issue, and replaces the fix made in MySQL 5.0.25 and reverted in 5.0.26. (Bug #21174) The mysql_create_system_tables script was removed because mysql_install_db no longer uses it. Important Note: The parser accepted invalid code in SQL condition handlers, leading to server crashes or unexpected execution behavior in stored programs. Specifically, the parser permitted a condition handler to refer to labels for blocks that enclose the handler declaration. This was incorrect because block label scope does not include the code for handlers declared within the labeled block. The parser now rejects this invalid construct, but if you perform a binary upgrade (without dumping and reloading your databases), existing handlers that contain the construct are still DECLARE ... HANDLER Syntax. (Bug #26503) MySQL Cluster: NDBtables having MEDIUMINT AUTO_INCREMENTcolumns were not restored correctly by ndb_restore, causing spurious duplicate key errors. This issue did not affect TINYINT, INT, or BIGINTcolumns with AUTO_INCREMENT. (Bug #27775) MySQL Cluster: NDBtables with indexes whose names contained space characters were not restored correctly by ndb_restore (the index names were truncated). (Bug #27758) MySQL Cluster: Under certain rare circumstances performing a DROP TABLEor TRUNCATE TABLEon an NDBtable could cause a node failure or forced cluster shutdown. (Bug #27581) MySQL Cluster: Memory usage of a mysqld process grew even while idle. (Bug #27560) MySQL Cluster: It was not possible to set LockPagesInMainMemoryequal to 0. (Bug #27291) MySQL Cluster: A race condition could sometimes occur if the node acting as master failed while node IDs were still being allocated during startup. (Bug #27286) MySQL Cluster: When a data node was taking over as the master node, a race condition could sometimes occur as the node was assuming responsibility for handling of global checkpoints. (Bug #27283) MySQL Cluster: Error messages displayed when running in single user mode were inconsistent. (Bug #27021) MySQL Cluster: The failure of a data node while restarting could cause other data nodes to hang or crash. (Bug #27003) MySQL Cluster: On Solaris, the value of an NDBtable column declared as BIT(33)was always displayed as 0. (Bug #26986) MySQL Cluster: mysqld processes would sometimes crash under high load. (Bug #26825) MySQL Cluster: The output from ndb_restore --print_datawas incorrect for a backup made of a database containing tables with TINYINTor SMALLINTcolumns. (Bug #26740) MySQL Cluster: In some cases, AFTER UPDATEand AFTER DELETEtriggers on NDBtables that referenced subject table did not see the results of operation which caused invocation of the trigger, but rather saw the row as it was prior to the update or delete operation. This was most noticeable when an update operation used a subquery to obtain the rows to be updated. An example would be UPDATE tbl1 SET col2 = val1 WHERE tbl1.col1 IN (SELECT col3 FROM tbl2 WHERE c4 = val2)where there was an AFTER UPDATEtrigger on table tbl1. In such cases, the trigger failed to execute. The problem occurred because the actual update or delete operations were deferred to be able to perform them later as one batch. The fix for this bug solves the problem by disabling this optimization for a given update or delete if the table has an AFTERtrigger defined for this operation. (Bug #26242) MySQL Cluster: Condition pushdown did not work with prepared statements. (Bug #26225) MySQL Cluster: Joins on multiple tables containing BLOBcolumns could cause data nodes run out of memory, and to crash with the error NdbObjectIdMap::expand unable to expand. (Bug #26176) MySQL Cluster: After entering single user mode it was not possible to alter non- NDBtables on any SQL nodes other than the one having sole access to the cluster. (Bug #25275) MySQL Cluster:. (Bug #24793) MySQL Cluster: The management client command node_idSTATUS Nodewhen node_id: not connected node_idwas not the node ID of a data node.Note The ALL STATUScommand in the cluster management client still displays status information for data nodes only. This is by design. See Commands in the MySQL Cluster Management Client, for more information. (Bug #21715) MySQL Cluster: Some values of MaxNoOfTablescaused the error Job buffer congestion to occur. (Bug #19378) MySQL Cluster: When trying to create tables on an SQL node not connected to the cluster, a misleading error message Table ' tbl_name' already exists was generated. The error now generated is Could not connect to storage engine. (Bug #11217, Bug #18676) Replication: Out-of-memory errors were not reported. Now they are written to the error log. (Bug #26844) Replication: Improved out-of-memory detection when sending logs from a master server to slaves, and log a message when allocation fails. (Bug #26837) Replication: When RAND()was called multiple times inside a stored procedure, the server did not write the correct random seed values to the binary log, resulting in incorrect replication. (Bug #25543) Replication: GRANTstatements were not replicated if the server was started with the --replicate-ignore-tableor --replicate-wild-ignore-tableoption. (Bug #25482) Replication: Replication between master and slave would infinitely retry binary log transmission where the max_allowed_packeton the master was larger than that on the slave if the size of the transfer was between these two values. (Bug #23775) Cluster Replication: Some queries that updated multiple tables were not backed up correctly. (Bug #27748) Cluster API: Using NdbBlob::writeData()to write data in the middle of an existing blob value (that is, updating the value) could overwrite some data past the end of the data to be changed. (Bug #27018). Some equi-joins containing a WHEREclause that included a NOT INsubquery caused a server crash. (Bug #27870) SELECT DISTINCTcould return incorrect results if the select list contained duplicated columns. (Bug #27659) With NO_AUTO_VALUE_ON_ZEROSQL mode enabled, LOAD DATAoperations could assign incorrect AUTO_INCREMENTvalues. (Bug #27586) Incorrect results could be returned for some queries that contained a select list expression with INor BETWEENtogether with an ORDER BYor GROUP BYon the same expression using NOT INor NOT BETWEEN. (Bug #27532) Evaluation of an IN()predicate containing a decimal-valued argument caused a server crash. (Bug #27513, Bug #27362, CVE-2007-2583) In out-of-memory conditions, the server might crash or otherwise not report an error to the Windows event log. (Bug #27490) Passing nested row expressions with different structures to an INpredicate caused a server crash. (Bug #27484) The decimal.hheader file was incorrectly omitted from binary distributions. (Bug #27456) With innodb_file_per_tableenabled, attempting to rename an InnoDBtable to a nonexistent database caused the server to exit. (Bug #27381) A subquery could get incorrect values for references to outer query columns when it contained aggregate functions that were aggregated in outer context. (Bug #27321) The server did not shut down cleanly. (Bug #27310) In a view, a column that was defined using a GEOMETRYfunction was treated as having the LONGBLOBdata type rather than the GEOMETRYtype. (Bug #27300) Queries containing subqueries with COUNT(*)aggregated in an outer context returned incorrect results. This happened only if the subquery did not contain any references to outer columns. (Bug #27257) Use of an aggregate function from an outer context as an argument to GROUP_CONCAT()caused a server crash. (Bug #27229) String truncation upon insertion into an integer or year column did not generate a warning (or an error in strict mode). (Bug #27176, Bug #26359) Storing NULLvalues in spatial fields caused excessive memory allocation and crashes on some systems. (Bug #27164) Row equalities in WHEREclauses could cause memory corruption. (Bug #27154) GROUP BYon a ucs2column caused a server crash when there was at least one empty string in the column. (Bug #27079) Duplicate members in SETor ENUMdefinitions were not detected. Now they result in a warning; if strict SQL mode is enabled, an error occurs instead. (Bug #27069) For INSERT ... ON DUPLICATE KEY UPDATEstatements on tables containing AUTO_INCREMENTcolumns, LAST_INSERT_ID()was reset to 0 if no rows were successfully inserted or changed. “Not changed” includes the case where a row was updated to its current values, but in that case, LAST_INSERT_ID()should not be reset to 0. Now LAST_INSERT_ID()is reset to 0 only if no rows were successfully inserted or touched, whether or not touched rows were changed. (Bug #27033) References: See also Bug #27210, Bug #27006. This bug was introduced by Bug #19978. mysql_install_db could terminate with an error after failing to determine that a system table already existed. (Bug #27022) In a MEMORYtable, using a BTREEindex to scan for updatable rows could lead to an infinite loop. (Bug #26996) Invalid optimization of pushdown conditions for queries where an outer join was guaranteed to read only one row from the outer table led to results with too few rows. (Bug #26963) Windows binaries contained no debug symbol file. Now .mapand .pdbfiles are included in 32-bit builds for mysqld-nt.exe, mysqld-debug.exe, and mysqlmanager.exe. (Bug #26893) For InnoDBtables having a clustered index that began with a CHARor VARCHARcolumn, deleting a record and then inserting another before the deleted record was purged could result in table corruption. (Bug #26835) Duplicates were not properly identified among (potentially) long strings used as arguments for GROUP_CONCAT(DISTINCT). (Bug #26815) ALTER VIEWrequires the CREATE VIEWand DROPprivileges for the view. However, if the view was created by another user, the server erroneously required the SUPERprivilege. (Bug #26813) A result set column formed by concatenation of string literals was incomplete when the column was produced by a subquery in the FROMclause. (Bug #26738) When using the result of SEC_TO_TIME()for time value greater than 24 hours in an ORDER BYclause, either directly or through a column alias, the rows were sorted incorrectly as strings. (Bug #26672) The range optimizer could cause the server to run out of memory. (Bug #26625) The range optimizer could consume a combinatorial amount of memory for certain classes of WHEREclauses. (Bug #26624) mysqldumpcould crash or exhibit incorrect behavior when some options were given very long values, such as --fields-terminated-by=". The code has been cleaned up to remove a number of fixed-sized buffers and to be more careful about error conditions in memory allocation. (Bug #26346) some very long string" If the server was started with --skip-grant-tables, selecting from INFORMATION_SCHEMAtables caused a server crash. (Bug #26285) For an INSERTstatement that should fail due to a column with no default value not being assigned a value, the statement succeeded with no error if the column was assigned a value in an ON DUPLICATE KEY UPDATEclause, even if that clause was not used. (Bug #26261) The temporary file-creation code was cleaned up on Windows to improve server stability. (Bug #26233) For MyISAMtables, COUNT(*)could return an incorrect value if the WHEREclause compared an indexed TEXTcolumn to the empty string ( ''). This happened if the column contained empty strings and also strings starting with control characters such as tab or newline. (Bug #26231) For INSERT INTO ... SELECTwhere index searches used column prefixes, insert errors could occur when key value type conversion was done. (Bug #26207) For DELETE FROM(with no tbl_nameORDER BY col_name WHEREor LIMITclause), the server did not check whether col_namewas a valid column in the table. (Bug #26186) REPAIR TABLE ... USE_FRMwith an ARCHIVEtable deleted all records from the table. (Bug #26138) mysqldump crashed for MERGEtables if the --complete-insert( -c) option was given. (Bug #25993) Setting a column to NOT NULLwith an ON DELETE SET NULLclause foreign key crashes the server. (Bug #25927) On Windows, debug builds of mysqld could fail with heap assertions. (Bug #25765) In certain situations, MATCH ... AGAINSTreturned false hits for NULLvalues produced by LEFT JOINwhen no full-text index was available. (Bug #25729) OPTIMIZE TABLEmight fail on Windows when it attempts to rename a temporary file to the original name if the original file had been opened, resulting in loss of the .MYDfile. (Bug #25521) For SHOW ENGINE INNODB STATUS, the LATEST DEADLOCK INFORMATIONwas not always cleared properly. (Bug #25494) mysql_stmt_fetch()did an invalid memory deallocation when used with the embedded server. (Bug #25492) Difficult repair or optimization operations could cause an assertion failure, resulting in a server crash. (Bug #25289) Duplicate entries were not assessed correctly in a MEMORYtable with a BTREEprimary key on a utf8 ENUMcolumn. (Bug #24985) Selecting the result of AVG()within a UNIONcould produce incorrect values. (Bug #24791) MBROverlaps()returned incorrect values in some cases. (Bug #24563) Increasing the width of a DECIMALcolumn could cause column values to be changed. (Bug #24558) A problem in handling of aggregate functions in subqueries caused predicates containing aggregate functions to be ignored during query execution. (Bug #24484) The test for the MYSQL_OPT_SSL_VERIFY_SERVER_CERToption for mysql_options()was performed incorrectly. Also changed as a result of this bug fix: The argoption for the mysql_options()C API function was changed from char *to void *. (Bug #24121) On Windows, debug builds of mysqlbinlog could fail with a memory error. (Bug #23736) The values displayed for the Innodb_row_lock_time, Innodb_row_lock_time_avg, and Innodb_row_lock_time_maxstatus variables were incorrect. (Bug #23666) SHOW CREATE VIEWqualified references to stored functions in the view definition with the function's database name, even when the database was the default database. This affected mysqldump (which uses SHOW CREATE VIEWto dump views) because the resulting dump file could not be used to reload the database into a different database. SHOW CREATE VIEWnow suppresses the database name for references to stored functions in the default database. (Bug #23491) An INTO OUTFILEclause is permitted only for the final SELECTof a UNION, but this restriction was not being enforced correctly. (Bug #23345) With the NO_AUTO_VALUE_ON_ZEROSQL mode enabled, LAST_INSERT_ID()could return 0 after INSERT ... ON DUPLICATE KEY UPDATE. Additionally, the next rows inserted (by the same INSERT, or the following INSERTwith or without ON DUPLICATE KEY UPDATE), would insert 0 for the auto-generated value if the value for the AUTO_INCREMENTcolumn was NULLor missing. (Bug #23233) SOUNDEX()returned an invalid string for international characters in multibyte character sets. (Bug #22638) COUNT(sometimes generated a spurious truncation warning. (Bug #21976) decimal_expr) InnoDB: The first read statement, if served from the query cache, was not consistent with the READ COMMITTEDisolation level. (Bug #21409) For a stored procedure containing a SELECTstatement that used a complicated join with an ONexpression, the expression could be ignored during re-execution of the procedure, yielding an incorrect result. (Bug #20492) In some cases, the optimizer preferred a range or full index scan access method over lookup access methods when the latter were much cheaper. (Bug #19372) Conversion of DATETIMEvalues in numeric contexts sometimes did not produce a double ( YYYYMMDDHHMMSS.uuuuuu) value. (Bug #16546)
http://dev.mysql.com/doc/relnotes/mysql/5.0/en/news-5-0-40.html
CC-MAIN-2016-07
refinedweb
2,720
53.61
Telemetry Schemas Status: Experimental - Motivation - How Schemas Work - What is Out of Scope - Use Cases - Schema URL - Schema Version Number - OTLP Support - API Support - OpenTelemetry Schema Motivation Telemetry sources such as instrumented applications and consumers of telemetry such as observability backends sometimes make implicit assumptions about the emitted telemetry. They assume that the telemetry will contain certain attributes or otherwise have a certain shape and composition of data (this is referred to as “telemetry schema” throughout this document). This makes it difficult or impossible to change the composition of the emitted telemetry data without breaking the consumers. For example changing the name of an attribute of a span created by an instrumentation library can break the backend if the backend expects to find that attribute by its name. Semantic conventions are an important part of this problem. These conventions define what names and values to use for span attributes, metric names and other fields. If semantic conventions are changed the existing implementations (telemetry source or consumers) need to be also changed correspondingly. Furthermore, to make things worse, the implementations of telemetry sources and implementations of telemetry consumers that work together and that depend on the changed semantic convention need to be changed simultaneously, otherwise such implementations will no longer work correctly together. Essentially there is a coupling between 3 parties: 1) OpenTelemetry semantic conventions, 2) telemetry sources and 3) telemetry consumers. The coupling complicates the independent evolution of these 3 parties. We recognize the following needs: OpenTelemetry semantic conventions need to evolve over time. When conventions are first defined, mistakes are possible and we may want to fix the mistakes over time. We may also want to change conventions to re-group the attributes into different namespaces as our understanding of the attribute taxonomy improves. Telemetry sources over time may want to change the schema of the telemetry they emit. This may be because for example the semantic conventions evolved and we want to make our telemetry match the newly introduced conventions. In an observability system there may simultaneously exist telemetry sources that produce data that conforms to different telemetry schemas because different sources evolve at a different pace and are implemented and controlled by different entities. Telemetry consumers have a need to understand what schema a particular piece of received telemetry confirms to. The consumers also need a way to be able to interpret the telemetry data that uses different telemetry schemas. Telemetry Schemas that were proposed and accepted in OTEP0152 address these needs. How Schemas Work We believe that the 3 parties described above should be able to evolve independently over time, while continuously retaining the ability to correctly work together. Telemetry Schemas are central to how we make this possible. Here is a summary of how the schemas work: OpenTelemetry defines a file format for defining telemetry schemas. Telemetry schemas are versioned. Over time the schema may evolve and telemetry sources may emit data conforming to newer versions of the schema. Telemetry schemas explicitly define transformations that are necessary to convert telemetry data between different versions of the schema, provided that such conversions are possible. When conversions are not possible it constitutes a breaking change between versions. Telemetry schemas are identified by Schema URLs, that are unique for each schema version. Telemetry sources (e.g. instrumentation libraries) should include a schema URL in the emitted telemetry. Telemetry consumers should pay attention to the schema of the received telemetry. If necessary, telemetry consumers may transform the telemetry data from the received schema version to the target schema version as expected at the point of use (e.g. a dashboard may define which schema version it expects). OpenTelemetry publishes a telemetry schema as part of the specification. The schema contains the list of transformations that semantic conventions undergo. The schema is to be available, to be referred and downloaded at a well known URL: /schemas/<version>(where <version>matches the specification version number). OpenTelemetry instrumentation libraries include the OpenTelemetry Schema URL in all emitted telemetry. This is currently work-in-progress, here is an example of how it is done in Go SDK’s Resource detectors. OTLP allows inclusion of a schema URL in the emitted telemetry. Third-party libraries, instrumentation or applications are advised to define and publish their own telemetry schema if it is completely different from OpenTelemetry schema (or use OpenTelemetry schema) and include the schema URL in the emitted telemetry. What is Out of Scope The concept of schema does not attempt to fully describe the shape of telemetry. The schema for example does not define all possible valid values for attributes or expected data types for metrics, etc. It is not a goal. Our goal is narrowly defined to solve the following problem only: to allow OpenTelemetry Semantic Conventions to evolve over time. For that reason this document is concerned with changes to the schema as opposed to the full state of the schema. We do not preclude this though: the schema file format is extensible and in the future may allow defining the full state of the schema. We intentionally limit the types of transformations of schemas to the bare minimum that is necessary to handle the most common changes that we believe OpenTelemetry Semantic Conventions will require in the near future. More types of transformations may be proposed in the future. This proposal does not attempt to support a comprehensive set of possible transformation types that can handle all possible changes to schemas that we can imagine. That would be too complicated and very likely superfluous. Any new transformation types should be proposed and added in the future to the schema file format when there is an evidence that they are necessary for the evolution of OpenTelemetry. Use Cases This section shows a couple interesting use-cases for the telemetry schemas (other uses-cases are also possible, this is not an exhaustive list). Full Schema-Aware Here is an example on a schema-aware observability system: Let’s have a closer look at what happens with the Telemetry Source and Backend pair as the telemetry data is emitted, delivered and stored: In this example the telemetry source produces spans that comply with version 1.2.0 of OpenTelemetry schema, where the “deployment.environment” attribute is used to record that the span is coming from production. The telemetry consumer desires to store the telemetry in version 1.1.0 of OpenTelemetry schema. The schema translator compares the schema_url in the received span with the desired schema and sees that a version conversion is needed. It then applies the change that is described in the schema file and renames the attribute from “deployment.environment” to “environment” before storing the span. And here is for example how the schemas can be used to query stored data: Collector-Assisted Schema Transformation Here is a somewhat different use case, where the backend is not aware of schemas and we rely on OpenTelemetry Collector to translate the telemetry to a schema that the backend expects to receive. The “Schema Translate Processor” is configured, the target schema_url is specified and all telemetry data that passes through the Collector is converted to that target schema: Schema URL Schema URL is an identifier of a Schema. The URL specifies a location of a Schema File that can be retrieved (so it is a URL and not just a URI) using HTTP or HTTPS protocol. Fetching the specified URL may return an HTTP redirect status code. The fetcher MUST follow the HTTP standard and honour the redirect response and fetch the file from the redirected URL. The last part of the URL path is the version number of the schema. The part of the URL preceding the <version> is called Schema Family identifier. All schemas in one Schema Family have identical Schema Family identifiers. To create a new version of the schema copy the schema file for the last version in the schema family and add the definition of the new version. The schema file that corresponds to the new version must be retrievable at a new URL. Important: schema files are immutable once they are published. Once the schema file is retrieved it is recommended to be cached permanently. Schema files may be also packaged at build time with the software that anticipates it may need the schema (e.g. the latest OpenTelemetry schema file can be packaged at build time with OpenTelemetry Collector’s schema translation processor). Schema Version Number Version number follows the MAJOR.MINOR.PATCH format, similar to semver 2.0. Version numbers use the ordering rules defined by semver 2.0 specification. See how ordering is used in the Order of Transformations. Other than the ordering rules the schema version numbers do not carry any other semantic meaning. OpenTelemetry schema version numbers match OpenTelemetry specification version numbers, see more details here. OTLP Support To allow carrying the Schema URL in emitted telemetry OTLP includes a schema_url field in the messages: The schema_url field in the ResourceSpans, ResourceMetrics, ResourceLogs messages applies to the contained Resource, Span, SpanEvent, Metric, LogRecord messages. The schema_url field in the InstrumentationLibrarySpans message applies to the contained Span and SpanEvent messages. The schema_url field in the InstrumentationLibraryMetrics message applies to the contained Metric messages. The schema_url field in the InstrumentationLibraryLogs message applies to the contained LogRecord messages. If schema_url field is non-empty both in Resource* message and in the contained InstrumentationLibrary* message then the value in InstrumentationLibrary* message takes the precedence. API Support The OpenTelemetry API allows getting a Tracer/Meter that is associated with a Schema URL. OpenTelemetry Schema OpenTelemetry publishes it own schema at /schemas/<version>. The version number of the schema is the same as the specification version number which publishes the schema. Every time a new specification version is released a corresponding new schema MUST be released simultaneously. If the specification release did not introduce any change the “changes” section of the corresponding version in the schema file will be empty.
https://opentelemetry.io/docs/reference/specification/schemas/overview/
CC-MAIN-2022-21
refinedweb
1,662
52.9
ratatouille alternatives and similar packages Based on the "Command Line Applications" category. Alternatively, view ratatouille alternatives based on common mentions on social networks and blogs. getopt8.5 1.1 ratatouille VS getoptCommand-line options parser for Erlang. progress_bar8.1 2.2 ratatouille VS progress_barCommand-line progress bars and spinners. ExCLI7.5 0.0 ratatouille VS ExCLIUser friendly CLI apps for Elixir table_rex7.4 1.9 ratatouille VS table_rexGenerate configurable ASCII style tables for display. anubis7.0 0.0 ratatouille VS anubisCommand-Line application framework for Elixir. optimus6.6 4.5 ratatouille VS optimusCommand-line option parser for Elixir inspired by clap.rs. loki5.8 0.0 ratatouille VS lokiLibrary for creating interactive command-line application. tabula5.8 0.0 ratatouille VS tabulaPretty print list of Ecto query results / maps in ascii tables (GitHub Markdown/OrgMode). firex3.8 0.0 ratatouille VS firexFirex is a library for automatically generating command line interfaces (CLIs) from an elixir module. meld3.2 0.1 ratatouille VS meldCreate global binaries from mix tasks. ex_prompt3.1 0.0 ratatouille VS ex_promptHelper package to add interactivity to your command line applications as easy as possible. phoenix-cliCommand-line interface for Phoenix Framework like Rails commands. ex_cliUser friendly CLI apps for Elixir. Get performance insights in less than 4 minutes Do you think we are missing an alternative of ratatouille or a related project? Popular Comparisons README Ratatouille Ratatouille is a declarative terminal UI kit for Elixir for building rich text-based terminal applications similar to how you write HTML. It builds on top of the termbox API (using the Elixir bindings from ex_termbox). For the API Reference, see:. Toby, a terminal-based Erlang observer built with Ratatouille Table of Contents - Ratatouille - Getting Started - Building an Application - Views - The DSL - Adding Logic - Styling - Views are Strict - Example Applications - Under the Hood - Packaging and Distributing - Defining an OTP Application - Executable Releases with Distillery - Projects using Ratatouille - Installation - From Hex - From Source - Roadmap - Contributing - Running the Tests Getting Started Ratatouille implements The Elm Architecture (TEA) as a way to structure application logic. This fits quite naturally in Elixir and is part of what makes Ratatouille declarative. If you've already used TEA on the web, this should feel very familiar. As with a GenServer definition, Ratatouille apps only implement a behaviour by defining callbacks and don't know how to start or run themselves. It's the application runtime that handles all of those (sometimes tricky) details. Building an Application Let's build a simple application that displays an integer counter which can be incremented when the user presses + and decremented when the user presses -. First a quick clarification, since we're using the word "application" a lot. For our purposes, an application is a terminal application, and not necessarily an OTP application, but your terminal application could also be an OTP application. We'll cover that in Packaging and Distributing Applications below. Back to the counter app. First we'll look at the entire example, then we'll go through it line by line to see what each line does. You can also find this example in the repo and run it with mix run. # examples/counter.exs defmodule Counter do @behaviour Ratatouille.App import Ratatouille.View def init(_context), do: 0 def update(model, msg) do case msg do {:event, %{ch: ?+}} -> model + 1 {:event, %{ch: ?-}} -> model - 1 _ -> model end end def render(model) do view do label(content: "Counter is #{model} (+/-)") end end end Ratatouille.run(Counter) At the top, we define a new module ( Counter) for the app and we inform Elixir that it will implement the Ratatouille.App behaviour. This just ensures we're warned if we forget to implement a callback and serves as documentation that this is a Ratatouille app. defmodule Counter do @behaviour Ratatouille.App # ... end Next, we import the View DSL from Ratatouille.View: import Ratatouille.View The View DSL provides element builder functions like view, row, table, label that you can use to define views. Think of them like HTML tags. init/1 The init/1 callback defines the initial model. "Model" is the Elm architecture's term for what we often call "state" in Elixir/Erlang. As with a GenServer, the state (our model) will later be passed to callbacks when things happen in order to allow the app to update it. The model can be any Erlang term. For larger apps, it's helpful to use maps or structs to organize different pieces of the state. Here, we just have an integer counter, so we return 0 as our initial model: defmodule Counter do # ... def init(_context), do: 0 # ... end update/2 The update/2 callback defines how to transform the model when a particular message is received. Ratatouille's runtime will automatically call update/2 when terminal events occur (pressing a key, resizing the window, clicking the mouse, etc.). We can also send ourselves messages via subscriptions and commands. Here, we'd like to increment the counter when we get a ?+ key press and decrement it when get a ?-. Event messages are based on the underlying termbox events and characters are given as code points (e.g., ?a is 97). defmodule Counter do # ... def update(model, msg) do case msg do {:event, %{ch: ?+}} -> model + 1 {:event, %{ch: ?-}} -> model - 1 _ -> model end end # ... end It's a good idea to provide a fallback clause in case we don't know how to handle a message. This way the app won't crash if the user presses a key that the app doesn't handle. But if things stop working as you expect, try removing the fallback to see if important messages are going unmatched. render/1 The render/1 callback defines a view to display the model. The runtime will call it as needed when it needs to update the terminal window. Like an HTML document, a view is defined as a tree of elements (nodes). Elements have attributes (e.g., text: bold) and children (nested content). While helper functions can return arbitrary element trees, the render/1 callback must return a view tree starting with a root view element---it's sort of like the <body> tag in HTML. defmodule Counter do # ... def render(model) do view do label(content: "Counter is #{model} (+/-)") end end # ... end Running it There's a final and very important line at the bottom: Ratatouille.run(Counter) This starts the application runtime with our app definition. Options can be passed as a second argument. This is an easy way to run simple apps. For more complicated ones, it's recommended to define an OTP application. That's it---now you can run the program with mix run <file>. To run the bundled example: $ mix run examples/counter.exs You should see the counter we defined, be able to make changes to it with + and -, and be able to quit using q. Views Ratatouille's views are trees of elements similar to HTML in structure. For example, here's how to define a two-column layout: view do row do column size: 6 do panel title: "Left Column" do label(content: "Text on the left") end end column size: 6 do panel title: "Right Column" do label(content: "Text on the right") end end end end The DSL As you might have noticed, Ratatouille provides a small DSL on top of Elixir for defining views. These are functions and macros which accept attributes and/or child elements in different formats. For example, a column element can be defined in all of the following ways: column() column(size: 12) column do # ... child elements ... end column size: 12 do # ... child elements ... end All of these evaluate to a %Ratatouille.Renderer.Element{tag: :column} struct. The macros provide syntactic sugar, but under the hood it's all structs. Here's a list of all the elements provided by Ratatouille.View: Adding Logic Because it's just Elixir code, you can freely mix in Elixir syntax and abstract views using functions: label(content: a_variable) view do case current_tab do :one -> render_tab_one() :two -> render_tab_two() end end if window.width > 80 do row do column(size: 6) column(size: 6) end else row do column(size: 12) end end Styling Attributes are used to style text and other content: # Labels are block-level, so this makes text within the whole block red. label(content: "Red text", color: :red) # Nested inline text elements can be used to style differently within a label. label do text(content: "R", color: :red) text(content: "G", color: :green) text(content: "B", color: :blue) end # `color` sets the foreground, while `background` sets the background. label(content: "Black on white", color: :black, background: :white) # `attributes` accepts a list of text attributes, here `:bold` and `:underline`. label(content: "Bold and underlined text", attributes: [:bold, :underline]) Styling is still being developed, so it's not currently possible to style every aspect of every element, but this will improve with time. Views are Strict Most web browsers will happily try to make sense of any HTML you give them. For example, you can put a td directly under a div and the content will likely still be rendered. Ratatouille takes a different, more strict approach and first validates that the view tree is well-structured. If it's not valid, an error is raised explaining the problem. This is intended to provide quick feedback when something's wrong. Restricting the set of valid views also helps to simplify the rendering implementation. It's helpful to keep the following things in mind when defining views: - Each tag has a list of allowed child tags. For example, a rowmay only have elements with the columntag as direct descendants. - Each tag has a list of attributes. Some attributes are required, and these must be set. Optional attributes have some default behavior when unset. It's not allowed to set an attribute that's not in the list. - A viewelement must be the root element of any view tree you'd like to render. See the list of elements above for documentation on each element. Example Applications The following examples show off different aspects of the framework: With the repository cloned locally, run an example with mix run examples/<example>.exs. Examples can be quit with q or CTRL-c (unless indicated otherwise). Under the Hood The application runtime abstracts away many of the details concerning how the terminal window is updated and how events are received. If you're interested in how these things actually work, or if the runtime doesn't support your use case, see this guide: Packaging and Distributing Warning: This part is still rough around the edges. While it's easy to run apps while developing with mix run, packaging them for others to easily run is a bit more complicated. Depending on the type of app you're building, it might not be reasonable to assume that users have any Elixir or Erlang tools installed. Terminal apps are usually distributed as binary executables so that they can just be run as such without additional dependencies. Fortunately, this is possible using OTP releases that bundle ERTS. Defining an OTP Application In order to create an OTP release, we first need to define an OTP application that runs the terminal application. Ratatouille.Runtime.Supervisor takes care of starting all the necessary runtime components, so we start this supervisor under the OTP application supervisor and pass it a Ratatouille app definition (along with any other runtime configuration). For example, the OTP application for toby looks like this: defmodule Toby do use Application def start(_type, _args) do children = [ {Ratatouille.Runtime.Supervisor, runtime: [app: Toby.App]}, # other workers... ] Supervisor.start_link( children, strategy: :one_for_one, name: Toby.Supervisor ) end end Executable Releases with Distillery We'll use Distillery to create the OTP release, as it can even create distributable, self-contained executables. Releases built on a given architecture can generally be run on machines of the same architecture. Follow the Distillery guide to generate a release configuration: In order to make a "batteries-included" release, it's important that you have include_erts set to true: environment :prod do # ... set(include_erts: true) # ... end Now it's possible to generate the release: MIX_ENV=prod mix release --executable --transient This creates a Distillery release that bundles the Erlang runtime and the application. Start it in the foreground, e.g.: _build/prod/rel/toby/bin/toby.run foreground You can also move this executable somewhere else (e.g., to a directory in your $PATH). A current caveat is that it must be able to unpack itself, as Distillery executables are self-extracting archives. Projects using Ratatouille For inspiration or ideas on how to structure your application, check out this list of projects built with Ratatouille: tefter/cli- the command-line client for Tefter toby- a terminal-based Erlang observer If you have a project you'd like to include here, just open a PR to add it to the list. Installation From Hex Add Ratatouille as a dependency in your project's mix.exs: def deps do [ {:ratatouille, "~> 0.5.0"} ] end From Source To try out the master branch, first clone the repo: git clone cd ratatouille Next, fetch the deps: mix deps.get Finally, try out one of the included examples/: mix run examples/rendering.exs If you see lots of things drawn on your terminal screen, you're good to go. Use "q" to quit in the examples (unless otherwise specified). Roadmap - Apps - [x] Application Runtime - [x] Subscriptions - [x] Commands - Views / Rendering - [x] Rendering engine with basic elements - [ ] More configurable charts (axis label, color, multiple lines, etc.) - [ ] Uniform support for text styling (incl. inheritance) - [x] Automatic translation to termbox styling constants - For example, color: :redinstead of color: Constants.color(:red). - [ ] Rendering optimizations (view diffing, more efficient updates, etc.) - Events - [ ] Translate termbox events to a cleaner format - Dealing with the integer constants is inconvenient. These could be converted to atoms by the event manager. - Terminal Backend - [x] ex_termbox NIFs - [ ] Alternative port-based termbox backend - Customization - [ ] Registering custom element renderers - This would support using custom elements (e.g. my_table()) that are defined outside of the core library. Contributing Contributions are much appreciated. They don't necessarily have to come in the form of code, I'm also very thankful for bug reports, documentation improvements, questions, and suggestions. Running the Tests Run the unit tests as usual: mix test Ratatouille also includes integration tests of the bundled examples. These aren't included in the default suite because they actually run the example apps. The integration suite can be run like so: mix test --only integration
https://elixir.libhunt.com/ratatouille-alternatives
CC-MAIN-2021-10
refinedweb
2,439
56.25
'\" ocmp.1m,v 1.34 2003/10/25 16:19:01 tom Exp $ .TH infocmp 1M "" .ds n 5 .ds d /usr/share/terminfo .SH NAME \fBinfocmp\fR - compare or print out \fIterminfo\fR descriptions .SH SYNOPSIS \fBinfocmp\fR [\fB\-\ 1\ C\ E\ F\ G\ I\ L\ T\ V\ c\ d\ e\ g\ i\ l\ n\ p\ q\ r\ t\ u\ \fR] .br [\fB\-v\fR \fIn\fR] [\fB\-s d\fR| \fBi\fR| \fBl\fR| \fBc\fR] [\fB\-R \fR\fBsubset\fR] .br [\fB\-w\fR\ \fIwidth\fR] [\fB\-A\fR\ \fIdirectory\fR] [\fB\-B\fR\ \fIdirectory\fR] .br [\fItermname\fR...] .SH DESCRIPTION \fBinfocmp\fR can be used to compare a binary \fBterminfo\fR entry with other terminfo entries, rewrite a \fBterminfo\fR description to take advantage of the \fBuse=\fR terminfo field, or print out a \fBterminfo\fR description from the binary file (\fBterm\fR) in a variety of formats. In all cases, the boolean fields will be printed first, followed by the numeric fields, followed by the string fields. .SS Default Options If no options are specified and zero or one \fItermnames\fR are specified, the \fB\-I\fR option will be assumed. If more than one \fItermname\fR is specified, the \fB\-d\fR option will be assumed. .SS Comparison Options [\-d] [\-c] [\-n] \fBinfocmp\fR compares the \fBterminfo\fR description of the first terminal \fItermname\fR with each of the descriptions given by the entries for the other terminal's \fItermnames\fR. If a capability is defined for only one of the terminals, the value returned will depend on the type of the capability: \fBF\fR for boolean variables, \fB-1\fR for integer variables, and \fBNULL\fR for string variables. The \fB\-d\fR option produces a list of each capability that is different between two entries. This option is useful to show the difference between two entries, created by different people, for the same or similar terminals. The \fB\-c\fR option produces a list of each capability that is common between two entries. Capabilities that are not set are ignored. This option can be used as a quick check to see if the \fB\-u\fR option is worth using. The \fB\-n\fR option produces a list of each capability that is in neither entry. If no \fItermnames\fR are given, the environment variable \fBTERM\fR will be used for both of the \fItermnames\fR. This can be used as a quick check to see if anything was left out of a description. .SS Source Listing Options [\-I] [\-L] [\-C] [\-r] The \fB\-I\fR, \fB\-L\fR, and \fB\-C\fR options will produce a source listing for each terminal named. .TS center tab(/) ; l l . \fB\-I\fR/use the \fBterminfo\fR names \fB\-L\fR/use the long C variable name listed in <\fBterm.h\fR> \fB\-C\fR/use the \fBtermcap\fR names \fB\-r\fR/when using \fB\-C\fR, put out all capabilities in \fBtermcap\fR form .TE If no \fItermnames\fR are given, the environment variable \fBTERM\fR will be used for the terminal name. The source produced by the \fB\-C\fR option may be used directly as a \fBtermcap\fR entry, but not all parameterized strings can be changed to the \fBtermcap\fR format. \fBinfocmp\fR will attempt to convert most of the parameterized information, and anything not converted will be plainly marked in the output and commented out. These should be edited by hand. All padding information for strings will be collected together and placed at the beginning of the string where \fBtermcap\fR expects it. Mandatory padding (padding information with a trailing '/') will become optional. All \fBtermcap\fR variables no longer supported by \fBterminfo\fR, but which are derivable from other \fBterminfo\fR variables, will be output. Not all \fBterminfo\fR capabilities will be translated; only those variables which were part of \fBtermcap\fR will normally be output. Specifying the \fB\-r\fR option will take off this restriction, allowing all capabilities to be output in \fItermcap\fR form. Note that because padding is collected to the beginning of the capability, not all capabilities are output. Mandatory padding is not supported. Because \fBtermcap\fR strings are not as flexible, it is not always possible to convert a \fBterminfo\fR string capability into an equivalent \fBtermcap\fR format. A subsequent conversion of the \fBtermcap\fR file back into \fBterminfo\fR format will not necessarily reproduce the original \fBterminfo\fR source. Some common \fBterminfo\fR parameter sequences, their \fBtermcap\fR equivalents, and some terminal types which commonly have such sequences, are: .TS center tab(/) ; l c l l l l. \fBterminfo/termcap\fR/Representative Terminals = \fB%p1%c/%.\fR/adm \fB%p1%d/%d\fR/hp, ANSI standard, vt100 \fB%p1%'x'%+%c/%+x\fR/concept \fB%i/%i\fRq/ANSI standard, vt100 \fB%p1%?%'x'%>%t%p1%'y'%+%;/%>xy\fR/concept \fB%p2\fR is printed before \fB%p1/%r\fR/hp .TE .SS Use= Option [\-u] The \fB\-u\fR option produces a \fBterminfo\fR source description of the first terminal \fItermname\fR which is relative to the sum of the descriptions given by the entries for the other terminals \fItermnames\fR. It does this by analyzing the differences between the first \fItermname\fR and the other \fItermnames\fR and producing a description with \fBuse=\fR fields for the other terminals. In this manner, it is possible to retrofit generic terminfo entries into a terminal's description. Or, if two similar terminals exist, but were coded at different times or by different people so that each description is a full description, using \fBinfocmp\fR will show what can be done to change one description to be relative to the other. A capability will get printed with an at-sign (@) if it no longer exists in the first \fItermname\fR, but one of the other \fItermname\fR entries contains a value for it. A capability's value gets printed if the value in the first \fItermname\fR is not found in any of the other \fItermname\fR entries, or if the first of the other \fItermname\fR entries that has this capability gives a different value for the capability than that in the first \fItermname\fR. The order of the other \fItermname\fR entries is significant. Since the terminfo compiler \fBtic\fR does a left-to-right scan of the capabilities, specifying two \fBuse=\fR entries that contain differing entries for the same capabilities will produce different results depending on the order that the entries are given in. \fBinfocmp\fR will flag any such inconsistencies between the other \fItermname\fR entries as they are found. Alternatively, specifying a capability \fIafter\fR a \fBuse=\fR entry that contains that capability will cause the second specification to be ignored. Using \fBinfocmp\fR to recreate a description can be a useful check to make sure that everything was specified correctly in the original source description. Another error that does not cause incorrect compiled files, but will slow down the compilation time, is specifying extra \fBuse=\fR fields that are superfluous. \fBinfocmp\fR will flag any other \fItermname use=\fR fields that were not needed. .SS Changing Databases [\-A \fIdirectory\fR] [\-B \fIdirectory\fR] The location of the compiled \fBterminfo\fR database is taken from the environment variable \fBTERMINFO\fR . If the variable is not defined, or the terminal is not found in that location, the system \fBterminfo\fR database, in \fB/usr/share/terminfo\fR, will be used. The options \fB\-A\fR and \fB\-B\fR may be used to override this location. The \fB\-A\fR option will set \fBTERMINFO\fR for the first \fItermname\fR and the \fB\-B\fR option will set \fBTERMINFO\fR for the other \fItermnames\fR. With this, it is possible to compare descriptions for a terminal with the same name located in two different databases. This is useful for comparing descriptions for the same terminal created by different people. .SS Other Options .TP 5 \fB\-1\fR causes the fields to be printed out one to a line. Otherwise, the fields will be printed several to a line to a maximum width of 60 characters. .TP \fB\-a\fR tells \fBinfocmp\fP to retain commented-out capabilities rather than discarding them. Capabilities are commented by prefixing them with a period. .TP 5 \fB\-E\fR Dump the capabilities of the given terminal as tables, needed in the C initializer for a TERMTYPE structure (the terminal capability structure in the \fB \fR). This option is useful for preparing versions of the curses library hardwired for a given terminal type. The tables are all declared static, and are named according to the type and the name of the corresponding terminal entry. .sp Before ncurses 5.0, the split between the \fB\-e\fP and \fB\-E\fP options was not needed; but support for extended names required making the arrays of terminal capabilities separate from the TERMTYPE structure. .TP 5 \fB\-e\fR Dump the capabilities of the given terminal as a C initializer for a TERMTYPE structure (the terminal capability structure in the \fB \fR). This option is useful for preparing versions of the curses library hardwired for a given terminal type. .TP 5 \fB\-F\fR \fB\-r\fR. .TP 5 \fB\-f\fR Display complex terminfo strings which contain if/then/else/endif expressions indented for readability. .TP 5 \fB\-G\fR Display constant literals in decimal form rather than their character equivalents. .TP 5 \fB\-g\fR Display constant character literals in quoted form rather than their decimal equivalents. .TP 5 \fB\-i\fR Analyze the initialization (\fBis1\fR, \fBis2\fR, \fBis3\fR), and reset (\fBrs1\fR, \fBrs2\fR, \fBrs3\fR),: .TS center tab(/) ; l l l l. Action/Meaning = RIS/full reset SC/save cursor RC/restore cursor LL/home-down RSR/reset scroll region .TE .sp}). .TP 5 \fB\-l\fR Set output format to terminfo. .TP 5 \fB\-p\fR Ignore padding specifications when comparing strings. .TP 5 \fB\-q\fR Make the comparison listing shorter by omitting subheadings, and using "-" for absent capabilities, "@" for canceled rather than "NULL". .TP 5 \fB\-R\fR\fIsubset\fR \fBterminfo\fR(\*n) for details. You can also choose the subset "BSD" which selects only capabilities with termcap equivalents recognized by 4.4BSD. .TP \fB\-s \fR\fI[d|i|l|c]\fR The \fB\-s\fR option sorts the fields within each type according to the argument below: .br .RS 5 .TP 5 \fBd\fR leave fields in the order that they are stored in the \fIterminfo\fR database. .TP 5 \fBi\fR sort by \fIterminfo\fR name. .TP 5 \fBl\fR sort by the long C variable name. .TP 5 \fBc\fR sort by the \fItermcap\fR name. .RE .IP If the \fB\-s\fR option is not given, the fields printed out will be sorted alphabetically by the \fBterminfo\fR name within each type, except in the case of the \fB\-C\fR or the \fB\-L\fR options, which cause the sorting to be done by the \fBtermcap\fR name or the long C variable name, respectively. .TP 5 \-V\fR reports the version of ncurses which was used in this program, and exits. .TP 5 \fB\-v\fR \fIn\fR prints out tracing information on standard error as the program runs. Higher values of n induce greater verbosity. .TP 5 \fB\-w\fR \fIwidth\fR changes the output to \fIwidth\fR characters. .SH FILES .TP 20 \*d Compiled terminal description database. .SH EXTENSIONS The \fB\-E\fR, \fB\-F\fR, \fB\-G\fR, \fB\-R\fR, \fB\-T\fR, \fB\-V\fR, \fB\-a\fR, \fB\-e\fR, \fB\-f\fR, \fB\-g\fR, \fB\-i\fR, \fB\-l\fR, \fB\-p\fR, \fB\-q\fR and \fB\-t\fR options are not supported in SVr4 curses. The \fB\-r\fR option's notion of `termcap' capabilities is System V Release 4's. Actual BSD curses versions will have a more restricted set. To see only the 4.4BSD set, use \fB\-r\fR \fB\-RBSD\fR. .SH BUGS The \fB\-F\fR option of \fBinfocmp\fR(1M) should be a \fBtoe\fR(1M) mode. .SH SEE ALSO \fBinfocmp\fR(1M), \fBcaptoinfo\fR(1M), \fBinfotocap\fR(1M), \fBtic\fR(1M), \fBtoe\fR(1M), \fBcurses\fR(3X), \fBterminfo\fR(\*n). .SH AUTHOR Eric S. Raymond and .br Thomas E. Dickey .\"# .\"# The following sets edit modes for GNU EMACS .\"# Local Variables: .\"# mode:nroff .\"# fill-column:79 .\"# End:
http://www.fiveanddime.net/ss/man-unformatted/man1/infocmp.1m
crawl-003
refinedweb
2,095
55.13
. Let’s add a Dog class: public class Dog { public string Name { get; } public int Age { get; } public Dog(string name, int age) { Name = name; Age = age; } } Back in the Person class we’ll add a method called “WalkDog”: public void WalkDog(Dog dog) => Console.WriteLine("I'm taking {0} out for a walk", dog.Name); If for whatever reason you’d like to calculate the combined age of a Person and a Dog you can have a function with a return value like this: public int GetCombinedAge(Dog dog) => Age + dog.Age; These are “normal” methods so you can call them accordingly: Person p = new Person("John", "Smith", 28); p.WalkDog(new Dog("Caesar", 3)); int combined = p.GetCombinedAge(new Dog("Caesar", 3)); View all various C# language feature related posts here. Pingback: Expression bodied members in constructors and get-set properties in C# 7.0 | Fitness Promotions
https://dotnetcodr.com/2016/02/15/how-to-assign-an-expression-to-a-method-in-c6/
CC-MAIN-2021-49
refinedweb
149
63.59
Spark NLP by John Snow Labs What is Spark NLP? Spark NLP is a text processing library built on top of Apache Spark and its Spark ML library. It provides simple, performant and accurate NLP annotations for machine learning pipelines, that scale easily in a distributed environment. There are some eye-catching phrases that got my attention the first time I read an article on Databricks introducing Spark NLP about a year ago. I love Apache Spark and I learned Scala (and still learning) just for that purpose. Back then I wrote my own Stanford CoreNLP wrapper for Apache Spark. I wanted to stay in the Scala ecosystem so I avoided Python libraries such as spaCy, NLTK, etc. However, I faced many issues since I was dealing with large-scale datasets. Also, I couldn’t seamlessly integrate my NLP codes into Spark ML pipelines. I can sum up my problems by quoting some parts from the same blog post: Any integration between the two frameworks (Spark and another library) means that every object has to be serialized, go through inter-process communication in both ways, and copied at least twice in memory. I was really excited when I saw there was an NLP library built on top of Apache Spark and it natively extends the Spark ML Pipeline. I could finally build NLP pipelines in Apache Spark! Spark NLP is open source and has been released under the Apache 2.0 license. It is written in Scala but it supports Java and Python as well. It has no dependencies on other NLP or ML libraries. Spark NLP’s annotators provide rule-based algorithms, machine learning, and deep learning by using TensorFlow. For a more detailed comparison between Spark NLP and other open source NLP libraries, you can read this blog post. As a native extension of the Spark ML API, the library offers the capability to train, customize and save models so they can run on a cluster, other machines or saved for later. It is also easy to extend and customize models and pipelines, as we’ll do here. The library covers many NLP tasks, such as: For the full list of annotators, models, and pipelines you can read their online documentation. Full disclosure: I am one of the Installing Spark NLP My Environments: - Spark NLP 2.0.3 release - Apache Spark 2.4.1 - Apache Zeppelin release 0.8.2 - Local setup with MacBook Pro/macOS - Cluster setup by Cloudera/CDH 6.2 with 40 servers - Programming language: Scala (but no worries, the Python APIs in Spark and Spark NLP are very similar to the Scala language) I will explain how to set up Spark NLP for my environment. Nevertheless, if you wish to try something different you can always find out more about how to use Spark NLP either by visiting the main public repository or have look at their showcase repository with lots of examples: Main public repository: Showcase repository: Let’s get started! To use Spark NLP in Apache Zeppelin you have two options. Either use Spark Packages or you can build a Fat JAR yourself and just load it as an external JAR inside Spark session. Why don’t I show you both? First, with Spark Package: Either add this to your conf/zeppelin-env.sh # set options to pass spark-submit commandcom.johnsnowlabs.nlp:spark-nlp_2.11:2.0.3 export SPARK_SUBMIT_OPTIONS="--packages " 2. Or, add it to Generic Inline ConfInterpreter (at the beginning of your notebook before starting your Spark Session): %spark.conf # spark.jars.packages can be used for adding packages into spark interpreter spark.jars.packages com.johnsnowlabs.nlp:spark-nlp_2.11:2.0.3 Second, loading an external JAR: To build a Fat JAR all you need to do is: $ git clone $ cd spark-nlp $ sbt assembly Then you can follow one of the two ways I mentioned to add this external JAR. You just need to change “ — packages” to “ — jars” in the first option. Or for the second solution, just have “spark.jars”. Start Spark with Spark NLP Now we can start using Spark NLP 2.0.3 with Zeppelin 0.8.2 and Spark 2.4.1 by importing Spark NLP annotators: import com.johnsnowlabs.nlp.base._ import com.johnsnowlabs.nlp.annotator._ import org.apache.spark.ml.Pipeline Apache Zeppelin is going to start a new Spark session that comes with Spark NLP regardless of whether you used Spark Package or an external JAR. Read the Mueller Report PDF file Remember the issue about the PDF file not being a real PDF? Well, we have 3 options here: - You can either use any OCR tools/libraries you prefer to generate a PDF or a Text file. - Or you can use already searchable and selectable PDF files created by the community. - Or you can just use Spark NLP! Spark NLP comes with an OCR package that can read both PDF files and scanned images. However, I mixed option 2 with option 3. (I needed to install Tesseract 4.x+ for image-based OCR on my entire cluster so I got a bit lazy) You can download these two PDF files from Scribd: Of course, you can just download the Text version and read it by Spark. However, I would like to show you how to use the OCR that comes with Spark NLP. Spark NLP OCR: Let’s create a helper function for everything related to OCR: import com.johnsnowlabs.nlp.util.io.OcrHelper val ocrHelper = new OcrHelper() Now we need to read the PDF and create a Dataset from its content. The OCR in Spark NLP creates one row per page: //If you do this locally you can use or hdfs:/// if the files are hosted in Hadoop val muellerFirstVol = ocrHelper.createDataset(spark, "/tmp/Mueller/Mueller-Report-Redacted-Vol-I-Released-04.18.2019-Word-Searchable.-Reduced-Size.pdf") As you can see I’m loading the “Volume I” of this report in the format of PDF into a Dataset. I do this locally just to show you don’t always need a cluster to use Apache Spark and Spark NLP! TIP 1: If the PDF was actually a scanned image, we could have used these settings (but not in our use case, we found a selectable PDF): ocrHelper.setPreferredMethod("image") ocrHelper.setFallbackMethod(false) ocrHelper.setMinSizeBeforeFallback(0) TIP 2: You can simply convert Spark Dataset into DataFrame if needed by: muellerFirstVol.toDF() Spark NLP Pipelines and Models NLP by Machine Learning and Deep Learning Now it’s time to do some NLP tasks. As I mentioned at the beginning, we would like to use already pre-trained pipelines and models provided by Spark NLP in Part I. These are some of the pipelines and models that are available: However, I would like to use a pipeline called “explain_document_dl” first. Let’s see how we can download this pipeline, use it to annotate some inputs, and what exactly does it offer: import com.johnsnowlabs.nlp.pretrained.PretrainedPipeline val pipeline = PretrainedPipeline("explain_document_dl", "en") // This DataFrame has one sentence for testing val testData = Seq( "Donald Trump is the 45th President of the United States" ).toDS.toDF("text") // Let's use our pre-trained pipeline to predict the test dataset pipeline.transform(testData).show Here is the result of .show(): I know! It’s a lot going on in this pipeline. Let’s start with NLP annotators we have in “explain_document_dl” pipeline: - DocumentAssembler - SentenceDetector - Tokenizer - LemmatizerModel - Stemmer - PerceptronModel - ContextSpellCheckerModel - WordEmbeddings (GloVe 6B 100) - NerDLModel - NerConverter (chunking) To my knowledge, there are some annotators inside this pipeline which are using Deep Learning powered by TensorFlow for their supervised learning. For instance, you will notice these lines when you are loading this pipeline: pipeline: com.johnsnowlabs.nlp.pretrained.PretrainedPipeline = PretrainedPipeline(explain_document_dl,en,public/models) adding (ner-dl/mac/_sparse_feature_cross_op.so,ner-dl/mac/_lstm_ops.so) For simplicity, I’ll select a bunch of columns separately so we can actually see some results: So this is a very complete NLP pipeline. It has lots of NLP tasks like other NLP libraries and even more like Spell checking. But, this might be a bit heavy if you are just looking for one or two NLP tasks such as POS or NER. Let’s try another pre-trained pipeline called “entity_recognizer_dl”: import com.johnsnowlabs.nlp.pretrained.PretrainedPipeline val pipeline = PretrainedPipeline("entity_recognizer_dl", "en") val testData = Seq( "Donald Trump is the 45th President of the United States" ).toDS.toDF("text") // Let's use our pre-trained pipeline to predict the test dataset pipeline.transform(testData).show As you can see, using pre-trained pipeline is very easy. You just need to change its name and it will download and cache it locally. What is inside this pipeline? - Document - Sentence - Tokens - Embeddings - NER - NER chunk Let’s walk through what is happening with the NER model in both of these pipelines. The Named Entity Recognition (NER) uses Word Embeddings (GloVe or BERT) for training. I can quote one of the main maintainers of the project about what it is: NerDLModel is the result of a training process, originated by NerDLApproach SparkML estimator. This estimator is a TensorFlow DLmodel. It follows a Bi-LSTM with Convolutional Neural Networks scheme, utilizing word embeddings for token and sub-token analysis. You can read this full article about the use of TensorFlow graphs and how Spark NLP uses it to train its NER models: Back to our pipeline, NER chunk will extract chunks of Named Entities. For instance, if you have Donald -> I-PER and Trump -> I-PER, it will result in Donal Trump. Take a look at this example: Custom Pipelines Personally, I would prefer to build my own NLP pipelines when I am dealing with pre-trained models. This way, I have full control over what types of annotators I want to use, whether I want ML or DL models, use my own trained models in the mix, customize the inputs/outputs of each annotator, integrate Spark ML functions, and so much more! Is it possible to create your own NLP pipeline but still take advantage of pre-trained models? The answer is yes! Let’s look at one example: val document = new DocumentAssembler() .setInputCol("text") .setOutputCol("document") val sentence = new SentenceDetector() .setInputCols(Array("document")) .setOutputCol("sentence") .setExplodeSentences(true) val token = new Tokenizer() .setInputCols(Array("document")) .setOutputCol("token") val normalized = new Normalizer() .setInputCols(Array("token")) .setOutputCol("normalized") val pos = PerceptronModel.pretrained() .setInputCols("sentence", "normalized") .setOutputCol("pos") val chunker = new Chunker() .setInputCols(Array("document", "pos")) .setOutputCol("pos_chunked") .setRegexParsers(Array( " )) val embeddings = WordEmbeddingsModel.pretrained() .setOutputCol("embeddings") val ner = NerDLModel.pretrained() .setInputCols("document", "normalized", "embeddings") .setOutputCol("ner") val nerConverter = new NerConverter() .setInputCols("document", "token", "ner") .setOutputCol("ner_chunked") val pipeline = new Pipeline().setStages(Array( document, sentence, token, normalized, pos, chunker, embeddings, ner, nerConverter )) That’s it! Pretty easy and Sparky. The important part is that you can set which inputs you want for each annotator. For instance, for POS tagging, I can either use tokens, stemmed tokens, lemmatized tokens, or normalized tokens. This can change the results of annotators. Same for NerDLModel. I chose normalized tokens for both POS and Ner models since I am guessing my dataset is a bit messy and requires some cleaning. Let’s use our customized pipeline. If you know anything about Spark ML pipeline, it has two stages. One is fitting which is where you train the models inside your pipeline. The second is predicting your new data by transforming it into a new DataFrame. val nlpPipelineModel = pipeline.fit(muellerFirstVol) val nlpPipelinePrediction = nlpPipelineModel.transform(muellerFirstVol) The .fit() is for decoration here as everything already comes pre-trained. We don’t have to train anything so the .transform() is where we use the models inside our pipeline to create a new DataFrame with all the predictions. But if we did have our own models or Spark ML functions which required training then the .fit() would take some time to train the models. On a local machine, this took about 3 seconds to run. My laptop has a Core i9, 32G Memory, and Vega 20 (if this matters at all) so it is a pretty good machine. This example is nowhere near a Big Data scenario where you are dealing with millions of records, sentences, or words. In fact, it’s not even small data. However, we are using Apache Spark for a reason! Let’s run this in a cluster where we can distribute our tasks. read original article here
https://coinerblog.com/mueller-report-for-nerds-spark-meets-nlp-with-tensorflow-and-bert-part-1-32490a8f8f12/
CC-MAIN-2019-39
refinedweb
2,082
65.12
Opened 9 years ago Closed 9 years ago Last modified 9 years ago #5296 closed (wontfix) [patch] Additions to TestCase for easy view testing Description At work we frequently have testing idioms likes this: def test_x(self): ...do database stuff... response = self.client.get('/some/url/') check the response status code check the response template check the response content check the response content_type I'm attaching a patch, with tests, that adds a "check_view" method to TestCase that wraps up the common idiom of checking the status code, the template, the content and the content_type. In my mind it's somewhat analogous to render_to_response -- it's a shortcut for a common idiom, and it doesn't prevent you from doing more complex/interesting testing with the other assertX methods. Attachments (1) Change History (4) Changed 9 years ago by comment:1 Changed 9 years ago by I see the point you're making regarding render_to_response, but I'm not entirely convinced. Your own test example demonstrates the problem that I have with this idea. Even in your test case, you can't express the test as a single statement - for clarity, you break the test into predefined constants and then the function call. render_to_response works because every page render requires a template name and a data dictionary, with a base context being provided as an optional extra. There is really only 1 option (strictly, the data dictionary is optional too), and the argument order makes a good deal of sense, so there isn't any confusion about where the optional argument goes. In a unified test method such as yours, all the arguments (except the response) are optional - this means that every time you call the method, you need to either rely upon an argument order (and it isn't obvious to me what the 'correct' argument order should be), or you need to specify kwarg names, which detracts from your 'simple' argument. Ultimately, all you end up with is one call that wraps calls to all the other assert statements, with optional arguments that allow each test to be ignored. I'm not sure I see the value in such a wrapper. For me, testing is all about the clear statement of atomic tests. Every view is slightly different, so every view will require a slightly different set of atomic tests. The assert syntax isn't _that_ onerous as is, and each assert is atomic. It's very clear what is being tested, when, and in what order. comment:2 Changed 9 years ago by I'm marking this as wontfix. Chris H. - please re-open this if you disagree/respond to Russell's comments above. comment:3 Changed 9 years ago by Simon, I think it's fine to leave it as wontfix... Russell's comments make sense... Additions to TestCase and tests to exercise those additions.
https://code.djangoproject.com/ticket/5296
CC-MAIN-2017-09
refinedweb
479
58.11
please I've been working on this for hours and I'm about to explode, I'm beulding a LinkedSeq<E> and I'm trying to create an addAll method, here's my class: // File: LinkedSeq.java based on the... please I've been working on this for hours and I'm about to explode, I'm beulding a LinkedSeq<E> and I'm trying to create an addAll method, here's my class: // File: LinkedSeq.java based on the... sorry, I didn't realized I only posted the first half of the code, here's what I have so far: public class DoubleArraySeq implements Cloneable { // Invariant of the DoubleArraySeq class: //... Hello guys, I'm trying to build this DoubleArraySeq program and the addAfter and addBefore methods are not working correctly and I can't figure out why please help public class DoubleArraySeq... Thank you very very much :) ok so I fixed some things and I'm passing most of my tests except the union tests public class Statistician implements Cloneable { /**************************** * class invariant: * -... well the failure in union is at assertTrue("Union not empty", w.length( ) == 6); I'm very sorry, the tests that are failing are union, allNegative, compareTwoSmall, compareTwoMedium, unionSmall, addExtremes, and cloneEquals, I hope you can help me Hi everyone, I was doing this assignment and I though I had everything right until I had to run the Test program, here's my code: public class Statistician implements Cloneable { ... that's one place I'm having trouble with, I'm not sure where I need to use them the ranges: birthLow, birthHigh, liveLow, and liveHigh 4 in total my mistake For a birth to occur, the cell must be empty and the neighborhood density be in the birth range. The birth range can be... ok but where should I initialize the other 5 variables? that's one of my question, do I have to do the console.nextInt() in the same readInput method, or do I need to create a new one? so now my program needs to read the parameters from standard input. The new inputs for this part are the values for the ranges, birthLow, birthHigh, liveLow, and liveHigh. I need to read the matrix... Great idea, I'll try it out, thanks much :) Its not really a problem, I was just trying to create an if loop trying to stop the program after just 2 inputs, but when I run the program and I write just 2 numbers, it just continues running... I tried it and the program just waits for a third input, after a third input its when the program stops, I guess making it so it stops at 2 inputs would not matter Hello again guys, I was cleaning up my code and doing tests and I wanted to ask if you recommend any way to stop the program if only 2 inputs are typed, like throwing an exception if the user only... ok so I changed the printMatrix method to look like this private static void printMatrix(boolean[][] matrix) { for(int r = 0; r < matrix.length; r++) { for(int c = 0; c <... Wow thank you very much, also on my printMatrix method the output should be a '-' for false and a '#' for true, any ideas how I can do this? Again thank you a lot Hello guys, I'm new here, I was trying my homework for computer science which consists of making a basic Game of Life program with a 2D boolean array and a Random number with a seed, this is what I...
http://www.javaprogrammingforums.com/search.php?s=ee6f8a7df9128a26aaa3b51ef16b0d17&searchid=1133219
CC-MAIN-2014-42
refinedweb
595
63.63
Create CLIs with classes and type hints. Project description Cliar Cliar is a Python package to help you create commandline interfaces. It focuses on simplicity and extensibility: - Creating a CLI is as simple as subclassing from cliar.Cliar. - Extending a CLI is as simple as subclassing from a cliar.Cliarsubclass. Cliar's mission is to let you focus on the business logic instead of building an interface for it. At the same time, Cliar doesn't want to stand in your way, so it provides the means to customize the generated CLI. Installation $ pip install cliar Cliar requires Python 3.6+ and is tested on Windows, Linux, and macOS. There are no dependencies outside Python's standard library. Basic Usage Let's create a commandline calculator that adds two floats: from cliar import Cliar class Calculator(Cliar): '''Calculator app.''' def add(self, x: float, y: float): '''Add two numbers.''' print(f'The sum of {x} and {y} is {x+y}.') if __name__ == '__main__': Calculator().parse() Save this code to calc.py and run it. Try different inputs: Valid input: $ python calc.py add 12 34 The sum of 12.0 and 34.0 is 46.0. Invalid input: $ python calc.py add foo bar usage: calc.py add [-h] x y calc.py add: error: argument x: invalid float value: 'foo' $ python calc.py -h usage: calc.py [-h] {add} ... Calculator app. optional arguments: -h, --help show this help message and exit commands: {add} Available commands: add Add two numbers. Help for addcommand: $ python calc.py add -h usage: calc.py add [-h] x y Add two numbers. positional arguments: x y optional arguments: -h, --help show this help message and exit A few things to note: It's a regular Python class with a regular Python method. You don't need to learn any new syntax to use Cliar. addmethod is converted to addcommand, its positional params are converted to positional commandline args. There is no explicit conversion to float for xor yor error handling in the addmethod body. Instead, xand yare just treated as floats. Cliar converts the types using add's type hints. Invalid input doesn't even reach your code. --helpand -hflags are added automatically and the help messages are generated from the docstrings. Read Next Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/cliar/
CC-MAIN-2020-10
refinedweb
407
70.09
Globalize FilePath On 01/05/2014 at 20:29, xxxxxxxx wrote: Hi, I am writing a python script in c4d and i need some help. The script i have written requires the global(absolute) filepath of the textures. As, i require it to copy the texture file into a new directory. Currently, i have resorted to using call command, to get the filepath: c4d.CallCommand(1029486) #Open Texture Manager c4d.CallCommand(1029813) #Select All texture c4d.CallCommand(1029820) #Globalize filenames But i know that this is method may lead to complications in the future. I have tried googling every possible term i can think of, and i still have not come up with a solution. So i would like to know if anyone can help me, with how i would go along writing a python alternative on how to get the global(absolute) filepath. -Harry On 02/05/2014 at 06:16, xxxxxxxx wrote: You need to check in what folder the texture exists, if it not already is an absolute filename. Use os.path.join(directory, filename) to join to paths into one. The directories Cinema will use to search for textures are 1. Document's tex/ folder 2. Document's parent folder 3. User texture folder (use the c4d.storage.GeGetC4DPath() function) 4. Global texture folders (use the c4d.GetGlobalTexturePath() function) os.path.isfile() will tell you whether a path is pointing to a file. It'll return False if it's a directory or if the path does not exist. -Niklas On 04/05/2014 at 20:03, xxxxxxxx wrote: Thanks for the reply Niklas. This is the script that i came up with: #Get Textures def getTextures(self) : global texturePathArray global Textures texturePathArray = [] Textures = doc.GetAllTextures() #Get All textures for (i,texture) in Textures: if os.path.isfile(texture) == False: if os.path.isfile(projectDir + "\" + texture) == True: texturePathArray.append(projectDir + "\" + texture) elif os.path.isfile(projectDir + "\ ex" + texture) == True: texturePathArray.append(projectDir + "\ ex" + texture) for i in range(8) : if os.path.isfile(c4d.storage.GeGetC4DPath(i) + "\" + texture) == True: texturePathArray.append(c4d.storage.GeGetC4DPath(i) + "\" + texture) for i in range(9) : if os.path.isfile(c4d.GetGlobalTexturePath(i) + "\" + texture) == True: texturePathArray.append(c4d.GetGlobalTexturePath(i) + "\" + texture) else: texturePathArray.append(texture) On 05/05/2014 at 17:28, xxxxxxxx wrote: @harryseow94: Small hint: use os.path.join(...) instead of '+ \", otherwise this does not work on OSX. Cheers, s_rath On 05/05/2014 at 20:04, xxxxxxxx wrote: @s_rath: Cool. i didnt know that till you mentioned it. Thanks. -Harry On 10/02/2015 at 01:58, xxxxxxxx wrote: Hi, I am doing something similar in my script where I want to extract the full path of the textures. I came across this thread which precisely gives me an idea how to go about it. But what I am confused about, is the order in which I should search these paths. E.g. I have a texture water.jpg in tex folder and another water.jpg in one of my search paths. The images are different. My texture contains /searchpath1/water.jpg. But if I start the search in /tex and I get /tex/water.jpg, I stop my search, thus getting the incorrect image path. So how to determine when should I stop my search? Hope I could explain the problem.. Thanks.
https://plugincafe.maxon.net/topic/7850/10164_globalize-filepath/3
CC-MAIN-2019-30
refinedweb
555
69.99
The C library function double asin(double x) returns the arc sine of x in radians. Following is the declaration for asin() function. double asin(double x) x − This is the floating point value in the interval [-1,+1]. This function returns the arc sine of x, in the interval [-pi/2,+pi/2] radians. The following example shows the usage of asin() function. #include <stdio.h> #include <math.h> #define PI 3.14159265 int main () { double x, ret, val; x = 0.9; val = 180.0 / PI; ret = asin(x) * val; printf("The arc sine of %lf is %lf degrees", x, ret); return(0); } Let us compile and run the above program that will produce the following result − The arc sine of 0.900000 is 64.158067 degrees
http://www.tutorialspoint.com/c_standard_library/c_function_asin.htm
CC-MAIN-2020-10
refinedweb
128
78.04
UpFront - LJ Index, September 2008 - Linux on the Desktop? Who Cares? - New LinuxJournal.com Mobile - Eclipse Ganymede - New Top-Level Domains on the Way - What They're Using: Christian Einfeldt, Producer, the Digital Tipping Point - Adios Windows 9x - diff -u: What's New in Kernel Development - They Said It LJ Index, September 2008 1. Number of directories in kernel 2.26: 1,417 2. Number of files in kernel 2.26: 23,810 3. Number of lines in kernel 2.26: 9,257,383 4. Number of directories in gcc 4.4: 3,563 5. Number of files in gcc 4.4: 58,264 6. Number of lines in gcc 4.4: 10,187,740 7. Number of directories in KDE 4.0: 7,515 8. Number of files in KDE 4.0: 100,688 9. Number of lines in KDE 4.0: 25,325,252 10. Number of directories in GNOME 2.23: 573 11. Number of files in GNOME 2.23: 8,278 12. Number of lines in GNOME 2.23: 4,780,168 13. Number of directories in X Window System 7.3: 1,023 14. Number of files in X Window System 7.3: 14,976 15. Number of lines in X Window System 7.3: 21,674,310 16. Number of directories in Eclipse 3.4: 297,500 17. Number of files in Eclipse 3.4: 912,309 18. Number of lines in Eclipse 3.4: 94,187,895 19. Number of dollars in the US National Debt: 9,388,297,685,583 20. Dollars earned per line by open-source developers if the US Debt had been used to fund these projects: 56,756 1–18: wc -l 19: 20: math the average Joe can use them. I think, however, that it's fair to say most of our previous conceptions of “ready for the desktop” are moot points. The only folks who are still up in arms over whether Linux ever will be ready are the same folks is mentioned, but not insisted upon). Microsoft Office. And, that's it. The last point bummed me out a bit, so I asked more probing questions. It turns out that Microsoft Office has become the common name for an office suite—much like Kleenex became the name for facial tissue. For almost everyone I asked, OpenOffice.org or even Google Docs (in a pinch) is the same thing. In fact, some weren't really sure why I'd ask such a thing, because “aren't they all the same?” Some people want a specific type of computer for tasks like video production or gaming, but they aren't the overwhelming majority anymore. Everyone wants or needs a computer now, and the general population doesn't seem to care much about what operating system it's running. My suspicion is that Web 2.0 and mobile (smartphone) technology is doing more to help Linux than anything else in history. It's not because Linux is better at such things; it's because the world is moving to the Web. The vehicle to get there is becoming less and less important. The good news is that now Linux finally can take over the world, and most people won't even notice! New LinuxJournal.com Mobile We are all very excited to let you know that LinuxJournal.com is now optimized for mobile viewing. You can enjoy all of our news, blogs and articles from anywhere you can find a data connection on your phone or mobile device. We know you find it difficult to be separated from your Linux Journal, so now you can take LinuxJournal.com everywhere. Need to read that latest shell script trick right now? You got it. Go to m.linuxjournal.com to enjoy this new experience, and be sure to let us know how it works for you. Eclipse Ganymede The latest version of Eclipse, version 3.4, aka Ganymede, should be available by the time you read this.). Most IDEs come with built-in “support” for lots of programming languages. Although for a lot of them, support means it colorizes your code. Eclipse is a bit different. It doesn't come with built-in support for many languages, or any, depending on the version you download. Support is provided via Eclipse Plugins. And normally, “support” means more than just colorizing your code. You usually get something that understands your language. It can show you an outline of the functions high. New Top-Level Domains on the Way In late June 2008, ICANN accepted a proposal to relax restrictions on the top-level domain namespace and, in the process, opened up the possibility for thousands of new domains. Currently, there are only 21 top-level domains, such as .com, .org or .info, and around 240 active country-code domains, such as .us, .de and .uk. The proposed plan would allow any organization or person to apply for a customized top-level domain. For example, New York City could operate the .nyc domain for addresses, such as brooklyn.nyc, penn-station.nyc or. “It's a massive increase in the 'real estate' of the Internet”, said Dr Paul Twomey, President and CEO of ICANN. The .com registry is by far the most crowded at this point, with 71 million registered domains. For comparison, the second (.de) and third (.net) most popular registries have only 11.2 million and 10.6 million domains, respectively. Before you rush to register your new top-level domain, you may want to check your bank account first. ICANN is expected to charge a minimum of $100,000 for the right to operate your own top-level domain, provided you qualify. Applicants must prove that they have a “business plan and technical capacity”. There is hope that this measure will help keep domain squatters out of the top-level namespace. ICANN also has a process in place to deal with controversial submissions, as stated on icann.org: “Offensive names will be subject to an objection-based process based on public morality and order. This process will be conducted by an international arbitration body utilizing criteria drawing on provisions in a number of international treaties. ICANN will not be the decision maker on these objections.” Applications for new names will be available in the second quarter of 2009. Yes, it is true, ICANN HAZ MORE DOMAINS. (linux.slashdot.org/article.pl?sid=07/10/11/1446254). (ACCRC.org) (lwn.net/Articles/273770) at the school and four other locations in the San Francisco Bay Area (untangle.com/index.php?option=com_content&task=view&id=393&Itemid=139) (untangle.com/index.php?option=com_content&task=view&id=351&Itemid=139) (news.cnet.com/Tenderloin-Tech-Day/1606-2_3-6223419.html?part=rss&tag=2547-1_3-0-20&subj=news). Ad. diff -u: What's New in Kernel Development There's an interesting new project, the Kernel Library Project, that aims to port the Linux OS features, such as the Virtual Filesystem, into a generic library that would work on any other operating system. Octavian Purdila, Stefania Costache and Lucian Adrian Grijincu have been working on this, and it could make it a lot easier to run Linux software anywhere else a user might want to run it. If you find this interesting, they're looking for volunteers to help out. Mark Lord, Tejun Heo and a variety of others have been keeping Serial ATA good and solid. At the moment, they are focusing on fixing, or at least working around, all stability issues. In some cases, they've been making very small speed sacrifices in order to make sure that certain rare problems don't come up at all. At some point, they plan to revamp some of the code, in order to solve the problems and improve speed, but that will require more invasive changes. For the moment, they simply want to make sure that absolutely nothing can go wrong for users. Kudos to them for keeping up that discipline. As everyone knows, it's much more fun to throw caution to the wind and just build lots of new features. Believe it or not, there still are plenty of people using 2.4 in the world. I'm sure they all wish they could upgrade to 2.6, and the kernel developers wish that too, but undoubtedly, there are reasons why their entire corporate infrastructure and all their products would break if they upgraded to 2.6. And for those users, Willy Tarreau has just come out with 2.4.36.4, which includes a small number of key security fixes. Willy encourages all 2.4 users to upgrade to 2.4. David Woodhouse and Paul Gortmaker now are officially in charge of embedded systems. The idea of having a maintainer for a general kernel concept like embedded systems is fairly new, and it creates some ambiguity for people submitting patches. Do they submit patches to the maintainer of the specific hardware driver or to the embedded system maintainers? In practice, it's likely that this won't be a real concern, and folks will get used to cc-ing whomever they should on their e-mail messages. Another potential problem with having an overarching embedded system maintainer is that such a person might become hypnotized by the idea of reducing size at any cost, as Andi Kleen has pointed out. But, David has reassured him and everyone else, that size reduction is only one part of supporting embedded devices, and that the new maintainers plan to keep a broad outlook, making sure their changes are good for everyone (or at least not harmful to larger systems or to the kernel sources themselves). One of David and Paul's main hopes, and Andrew Morton's as well as the whole thing was his idea to begin with, is that companies designing embedded devices will work with David and Paul to create a better dialogue between that class of companies and the kernel developers. Adrian Bunk has submitted a patch to remove the final PCI OSS driver from the kernel. The Trident 4DWave/SIS 7018 PCI Audio Core has been on Adrian's hit list for a very long time, but Muli Ben-Yehuda always has resisted. Now that Muli has moved on to other projects, and an ALSA driver exists that works for the exact same hardware, Adrian's patience has paid off. OSS finally is fully out of the kernel. UBIFS seems to be on a relatively fast track into the main kernel tree. The new Flash filesystem is likely to go into Linux-Next for a while, and from there, it should feed relatively automatically into Linus Torvalds' tree at the next merge window. Artem Bityutskiy set the wheels in motion with a formal request to Stephen Rothwell. Christoph Hellwig had a lot of feedback on the code for Artem, and it came out that NFS would be very difficult for UBIFS to support without significant code revisions. Artem was surprised to learn about that, and admitted that yes, probably the initial version of UBIFS in Linus' tree would not support NFS. This doesn't seem to bother anyone, and in any case, Artem already is working on some ideas to fix the problems around NFS support. It does seem as though UBIFS will soon be part of the official kernel releases. Recently, there was a fairly significant effort to eliminate the BKL (Big Kernel Lock) by replacing it with semaphores. This is an excellent goal, with all kinds of speed implications for regular users, but unfortunately, the particular implementation had some speed problems of its own that led Linus Torvalds eventually to undo the change entirely. This fairly severe step was prompted partly by the speed issues of the semaphore solution and partly by the sense that there must be a better solution out there. Everyone, including Linus, wants to get rid of the BKL. But, doing this is very hard. The BKL has various qualities that are difficult to implement in any of the available alternative locking methods, and it also has some subtleties that make it hard to determine whether a given alternate implementation is doing the right thing or not. Ingo Molnar, therefore, has decided to cut through the morass, with a partial solution that will make the full solution much more manageable. He plans first of all to extract all the BKL code out of the core kernel and into an isolated part of the source tree, where it can one day be replaced entirely, without requiring any subtle changes to core code. Eventually, he hopes to push each occurrence of the BKL into the relevant subsystem code, where it could be replaced with cleaner subsystem locks, which in turn could be eliminated in a more normal and familiar way. With Ingo on the job, and Linus taking an active part, a lot of other big-time hackers have piled on, and there is no doubt that very significant locking changes are in store for the kernel. What does this mean for regular users? Probably a snappier, speedier kernel in the relatively near future. They Said It Not everything worth doing is worth doing well. —Tom West, from The Soul of a New Machine by Tracy Kidder, 1981 Technology has the shelf life of a banana. —Scott McNealy Never trust a computer you can't throw out a window. —Steve Wozniak Computers are useless. They can only give you answers. —Pablo Picasso In the long run, paying for Wi-Fi in your hotel will be like paying to use the toilet or the heater. You won't. Meanwhile, it would be nice if it were easy, cheap, good, or at least two out of those three. First, it [Microsoft] “embraces” the wonderfulness of open source; then it “extends” open source through deals like the one it signed with Novell, effectively adding software patents to the free software mix; and then, one day, it “extinguishes” it by changing the terms of the licences it grants. —Glyn Moody on Microsoft's old embrace, extend and extinguish cha-cha, Like the Presidential campaign, it's not who is most experienced or most viral or any of that. Rather, it's who's left after the least are gone. All the religious arguments—closed versus open in particular—are left in the dust by our desire to live as much in the future as we can. —Steve Gillmor on the iPhone, gesturelab.com/?p=111 How much marketing fakery do you willingly accept, and how much do you want to know about? Does the vegetarian really want to know that they didn't wash the pot at the restaurant and a few molecules of chicken broth are in that soup? As long as you have one person to talk to, you have a community. And I think way too many people are looking at how many Twitter followers they have, or how many RSS people they're having following them and that's a mistake. You need to embrace your community no matter how big or small—I mean, everyone started off real small. —Gary Vaynerchuk, garyvaynerchuk.com/2008/06/05/when-do-you-know-you-have-a 21 sec ago 2 hours 6 min ago 4 hours 3 min ago 4 hours 21 min ago 4 hours 50 min ago 4 hours 51 min ago 4 hours 52 min ago 7 hours 52 min ago 16 hours 19 min ago 16 hours 24 min ago
http://www.linuxjournal.com/article/10153
CC-MAIN-2013-20
refinedweb
2,600
73.58
This article shared the top best answers to the problem mentioned above. Why is “using namespace std;” considered bad practice? Answer #1: This is not related to performance at all. But consider this: you are using two libraries called Foo and Bar: using namespace foo; using namespace bar; Everything works fine, and. Why is “using namespace std;” considered bad practice? Answer #2: I agree with everything explained above, but I’d like to add: It can even get worse than that! Library Foo 2.0 could introduce a function, Quux(), that is an unambiguously better match for some of your calls to Quux() than the bar::Quux() your code called for years. Then your code still compiles, but it silently calls the wrong function and does god-knows-what. That’s about as bad as things can get. Keep in mind that the std namespace has tons of identifiers, many of which are very common ones (think list, sort, string, iterator, etc.) which are very likely to appear in other code, too. If you consider this unlikely: There was a question asked here on Stack Overflow where pretty much exactly this happened (wrong function called due to omitted std:: prefix) about half a year after I gave this answer. Here is another, more recent example of such a question. So this is a real problem. Here’s one more data point: Many, many years ago, I also used to find it annoying having to prefix everything from the standard library with std::. Then I worked in a project where it was decided at the start that both using directives and declarations are banned except for function scopes. Guess what? It took most of us very few weeks to get used to writing the prefix, and after a few more weeks most of us even agreed that it actually made the code more readable. There’s a reason for that: Whether you like shorter or longer prose is subjective, but the prefixes objectively add clarity to the code. Not only the compiler, but you, too, find it easier to see which identifier is referred to. In a decade, that project grew to have several million lines of code. Since these discussions come up again and again, I once was curious how often the (allowed) function-scope using actually was used in the project. I grep’d the sources for it and only found one or two dozen places where it was used. To me this indicates that, once tried, developers don’t find std:: painful enough to employ using directives even once every 100 kLoC even where it was allowed to be used. Bottom line: Explicitly prefixing everything doesn’t do any harm, takes very little getting used to, and has objective advantages. In particular, it makes the code easier to interpret by the compiler and by human readers — and that should probably be the main goal when writing code. Why is “using namespace std;” considered bad practice? Answer #3: The problem with putting using namespace in the header files of your classes is that it forces anyone who wants to use your classes (by including your header files) to also be ‘using’ (i.e. seeing everything in) those other namespaces. However, you may feel free to put a using statement in your (private) *.cpp files. Beware that some people disagree with my saying “feel free” like this — because although a using statement in a cpp file is better than in a header (because it doesn’t affect people who include your header file), they think it’s still not good (because depending on the code it could make the implementation of the class more difficult to maintain). This C++ Super-FAQ entry says, The using-directive exists for legacy C++ code and to ease the transition to namespaces, but you probably shouldn’t use it on a regular basis, at least not in your new C++ code. The FAQ suggests two alternatives: - A using-declaration: using std::cout; // a using-declaration lets you use cout without qualification cout << "Values:"; - Just typing std:: std::cout << "Values:"; Why is “using namespace std;” considered bad practice? Answer #4: Do not use it globally It is considered “bad” only when used globally. Because: - You clutter the namespace you are programming in. - Readers will have difficulty seeing where a particular identifier comes from, when you use many using namespace xyz;. - Whatever is true for other readers of your source code is even more true for the most frequent reader of it: yourself. Come back in a year or two and take a look… - If you only talk about using namespace std;you might not be aware of all the stuff you grab — and when you add another #includeor move to a new C++ revision you might get name conflicts you were not aware of. You may use it locally Go ahead and use it locally (almost) freely. This, of course, prevents you from repetition of std:: — and repetition is also bad. An idiom for using it locally In C++03 there was an idiom — boilerplate code — for implementing a swap function for your classes. It was suggested that you actually use a local using namespace std; — or at least using std::swap;: class Thing { int value_; Child child_; public: // ... friend void swap(Thing &a, Thing &b); }; void swap(Thing &a, Thing &b) { using namespace std; // make `std::swap` available // swap all members swap(a.value_, b.value_); // `std::stwap(int, int)` swap(a.child_, b.child_); // `swap(Child&,Child&)` or `std::swap(...)` } This does the following magic: - The compiler will choose the std::swapfor value_, i.e. void std::swap(int, int). - If you have an overload void swap(Child&, Child&)implemented the compiler will choose it. - If you do not have that overload the compiler will use void std::swap(Child&,Child&)and try its best swapping these. With C++11 there is no reason to use this pattern any more. The implementation of std::swap was changed to find a potential overload and choose it. Follow Programming Articles for more!
https://programming-articles.com/why-is-using-namespace-std-considered-bad-practice-top-answers/
CC-MAIN-2022-40
refinedweb
1,017
69.52
These lab exercises are intended to show you how to run a C program on the EECS instructional computers, to introduce you to the gdb debugger, and to get you thinking about the internal representations of numbers. Copy the contents of ~cs61c/labs/01 to a suitable location in your home directory. $ mkdir ~/lab $ gcp -R ~cs61c/labs/01/ ~/lab Fill in the blank in the following C program, also in output0.c, so that its output is a line containing 0. Don't change anything else in the program. #include <stdio.h> int main ( ) { int n; n = _____; printf ("%c\n", n); return 0; } To verify your solution, compile it and run the resulting binary: $ gcc -c output0.c $ gcc output0.o -o output0 $ ./output0 0 Compile your solution to exercise 1 with the "-g" option. This causes gcc to store information in the executable program for gdb to make sense of it. Then single-step through the whole program by: Type help from within gdb to find out the commands to do these things. The program mysteryout apparently produces a blank line as output when it is executed. Find out what it really prints using the od (octal dump) command. Running "man od" will give you information on how it works. Print the output as hexadecimal numbers (hex). You will learn more about hex numbers in next lecture. Hint: The output is a sequence of 5 bytes. Therefore, it is best to look at the output byte by byte using -t switch of od. In Monday and Tuesday's class, we discussed number representation. In particular, we looked at unsigned integers and two's complement, the almost ubiquitous format for signed integers. Look at biggestInt.c. You may wish to read through the comments but at this point it is not critical that you understand exactly how the program works. Basically, it is a C program that will tell you some useful information about certain C data types. It does this by exploiting the fact that C does not check for overflow and wrap around conditions. Compile and run the program and answer the following questions:
http://www-inst.eecs.berkeley.edu/~cs61c/su08/labs/01/
CC-MAIN-2017-17
refinedweb
358
65.93
. ManagementTime allowed Reading and planning: 15 minutesWriting: 3 hours.Fundamentals Pilct Paper Skills mcdulePaper F9The Association of Chartered Certified Accountants2ALL FOUR questions are compulsory and MUST be attempted 1 Droxfol Co is a listed company that plans to spend $10m on expanding its existing business. It has been suggested that the money could be raised by issuing 9% loan notes redeemable in ten years time. Current financial information on Droxfol Co is as follows. Incomestatementinformationforthelastyear $000 Profit before interest and tax 7,000 Interest (500) Profit before tax 6,500 Tax (1,950) Profit for the period 4,550 Balancesheetforthelastyear $000 $000 Non-current assets 20,000 Current assets 20,000 Total assets 40,000 Equityandliabilities Ordinary shares, par value $1 5,000 Retained earnings 22,500 Total equity 27,500 10% loan notes 5,000 9% preference shares, par value $1 2,500 Total non-current liabilities 7,500 Current liabilities 5,000 Total equity and liabilities 40,000 The current ex div ordinary share price is $4.50 per share. An ordinary dividend of 35 cents per share has just been paid and dividends are expected to increase by 4% per year for the foreseeable future. The current ex div preference share price is 76.2 cents. The loan notes are secured on the existing non-current assets of Droxfol Co and are redeemable at par in eight years time. They have a current ex interest market price of $105 per $100 loan note. Droxfol Co pays tax on profits at an annual rate of 30%. The expansion of business is expected to increase profit before interest and tax by 12% in the first year. Droxfol Co has no overdraft. Averagesectorratios: Financial gearing: 45% (prior charge capital divided by equity capital on a book value basis) Interest coverage ratio: 12 timesRequired:(a) Calculate the current weighted average cost of capital of Droxfol Co. (9 marks)(b) Discuss whether financial management theory suggests that Droxfol Co can reduce its weighted average cost of capital to a minimum level. (8 marks)(c) Evaluate and comment on the effects, after one year, of the loan note issue and the expansion of business on the following ratios: (i) interest coverage ratio; (ii) financial gearing; (iii) earnings per share. Assume that the dividend growth rate of 4% is unchanged. (8 marks) (25 marks) 32 Nedwen Co is a UK-based company which has the following expected transactions.. One month: Expected receipt of $240,000 One month: Expected payment of $140,000 Three months: Expected receipts of $300,000 The finance manager has collected the following information: Spot rate ($ per ): 1.7820 0.0002 One month forward rate ($ per ): 1.7829 0.0003 Three months forward rate ($ per ): 1.7846 0.0004 Money market rates for Nedwen Co: Borrowing Deposit One year sterling interest rate: 4.9% 4.6 One year dollar interest rate: 5.4% 5.1 Assume that it is now 1 April.Required:(a) Discuss the differences between transaction risk, translation risk and economic risk. (6 marks)(b) Explain how inflation rates can be used to forecast exchange rates. (6 marks)(c) Calculate the expected sterling receipts in one month and in three months using the forward market. (3 marks)(d) Calculate the expected sterling receipts in three months using a money-market hedge and recommend whether a forward market hedge or a money market hedge should be used. (5 marks)(e) Discuss how sterling currency futures contracts could be used to hedge the three-month dollar receipt. (5 marks) (25 marks) 3 Ulnad Co has annual sales revenue of $6 million and all sales are on 30 days credit, although customers on average take ten days more than this to pay. Contribution represents 60% of sales and the company currently has no bad debts. Accounts receivable are financed by an overdraft at an annual interest rate of 7%. Ulnad Co plans to offer an early settlement discount of 1.5% for payment within 15 days and to extend the maximum credit offered to 60 days. The company expects that these changes will increase annual credit sales by 5%, while also leading to additional incremental costs equal to 0.5% of turnover. The discount is expected to be taken by 30% of customers, with the remaining customers taking an average of 60 days to pay.Required:(a) Evaluate whether the proposed changes in credit policy will increase the profitability of Ulnad Co. (6 marks)(b) Renpec Co, a subsidiary of Ulnad Co, has set a minimum cash account balance of $7,500. The average cost to the company of making deposits or selling investments is $18 per transaction and the standard deviation of its cash flows was $1,000 per day during the last year. The average interest rate on investments is 5.11%. Determine the spread, the upper limit and the return point for the cash account of Renpec Co using the Miller-Orr model and explain the relevance of these values for the cash management of the company. (6 marks)(c) Identify and explain the key areas of accounts receivable management. (6 marks) (d) Discuss the key factors to be considered when formulating a working capital funding policy. (7 marks) (25 marks) 44 Trecor Co plans to buy a new machine to meet expected demand for a new product, Product T. This machine will cost $250,000 and last for four years, at the end of which time it will be sold for $5,000. Trecor Co expects demand for Product T to be as follows: Year 1 2 3 4 Demand (units) 35,000 40,000 50,000 25,000 The selling price for Product T is expected to be $12.00 per unit and the variable cost of production is expected to be $7.80 per unit. Incremental annual fixed production overheads of $25,000 per year will be incurred. Selling price and costs are all in current price terms. Selling price and costs are expected to increase as follows: Increase Selling price of Product T: 3% per year Variable cost of production: 4% per year Fixed production overheads: 6% per year Other information Trecor Co has a real cost of capital of 5.7% and pays tax at an annual rate of 30% one year in arrears. It can claim capital allowances on a 25% reducing balance basis. General inflation is expected to be 5% per year. Trecor Co has a target return on capital employed of 20%. Depreciation is charged on a straight-line basis over the life of an asset.Required:(a) Calculate the net present value of buying the new machine and comment on your findings (work to the nearest $1,000). (13 marks)(b) Calculate the before-tax return on capital employed (accounting rate of return) based on the average investment and comment on your findings. (5 marks)(c) Discuss the strengths and weaknesses of internal rate of return in appraising capital investments. (7 marks) (25 marks)5Formulae SheetEconomicorderquantityMillerOrrModelTheCapitalAssetPricingModelTheassetbetaformulaTheGrowthModelGordonsgrowthapproximationTheweightedaveragecostofcapitalTheFisherformulaPurchasingpowerparityandinterestrateparity133683UHVHQW 9DOXH 7DEOHPresent value cf 1 i.e. (1 + U)QWhere r ~ cisccunt raten ~ number cf periccs until payment'LVFRXQW UDWH U3HULRGV(n) 1 2 3 4 5 6 7 8 9 101 0990 0980 0971 0962 0952 0943 0935 0926 0917 0909 12 0980 0961 0943 0925 0907 0890 0873 0857 0842 0826 23 0971 0942 0915 0889 0864 0840 0816 0794 0772 0751 34 0961 0924 0888 0855 0823 0792 0763 0735 0708 0683 45 0951 0906 0863 0822 0784 0747 0713 0681 0650 0621 56 0942 0888 0837 0790 0746 0705 0666 0630 0596 0564 67 0933 0871 0813 0760 0711 0665 0623 0583 0547 0513 78 0923 0853 0789 0731 0677 0627 0582 0540 0502 0467 89 0914 0837 0766 0703 0645 0592 0544 0500 0460 0424 910 0905 0820 0744 0676 0614 0558 0508 0463 0422 0386 1011 0896 0804 0722 0650 0585 0527 0475 0429 0388 0350 1112 0887 0788 0701 0625 0557 0497 0444 0397 0356 0319 1213 0879 0773 0681 0601 0530 0469 0415 0368 0326 0290 1314 0870 0758 0661 0577 0505 0442 0388 0340 0299 0263 1415 0861 0743 0642 0555 0481 0417 0362 0315 0275 0239 15(n) 11 12 13 14 15 16 17 18 19 201 0901 0893 0885 0877 0870 0862 0855 0847 0840 0833 12 0812 0797 0783 0769 0756 0743 0731 0718 0706 0694 23 0731 0712 0693 0675 0658 0641 0624 0609 0593 0579 34 0659 0636 0613 0592 0572 0552 0534 0516 0499 0482 45 0593 0567 0543 0519 0497 0476 0456 0437 0419 0402 56 0535 0507 0480 0456 0432 0410 0390 0370 0352 0335 67 0482 0452 0425 0400 0376 0354 0333 0314 0296 0279 78 0434 0404 0376 0351 0327 0305 0285 0266 0249 0233 89 0391 0361 0333 0308 0284 0263 0243 0225 0209 0194 910 0352 0322 0295 0270 0247 0227 0208 0191 0176 0162 1011 0317 0287 0261 0237 0215 0195 0178 0162 0148 0135 1112 0286 0257 0231 0208 0187 0168 0152 0137 0124 0112 1213 0258 0229 0204 0182 0163 0145 0130 0116 0104 0093 1314 0232 0205 0181 0160 0141 0125 0111 0099 0088 0078 1415 0209 0183 0160 0140 0123 0108 0095 0084 0074 0065 1579$QQXLW\ 7DEOH3UHVHQW YDOXH RI DQ DQQXLW\ RI LHWhere r ~ cisccunt raten ~ number cf periccs'LVFRXQW UDWH U3HULRGV(n) 1 2 3 4 5 6 7 8 9 101 0990 0980 0971 0962 0952 0943 0935 0926 0917 0909 12 1970 1942 1913 1886 1859 1833 1808 1783 1759 1736 23 2941 2884 2829 2775 2723 2673 2624 2577 2531 2487 34 3902 3808 3717 3630 3546 3465 3387 3312 3240 3170 45 4853 4713 4580 4452 4329 4212 4100 3993 3890 3791 56 5795 5601 5417 5242 5076 4917 4767 4623 4486 4355 67 6728 6472 6230 6002 5786 5582 5389 5206 5033 4868 78 7652 7325 7020 6733 6463 6210 5971 5747 5535 5335 89 8566 8162 7786 7435 7108 6802 6515 6247 5995 5759 910 9471 8983 8530 8111 7722 7360 7024 6710 6418 6145 1011 1037 9787 9253 8760 8306 7887 7499 7139 6805 6495 1112 1126 1058 9954 9385 8863 8384 7943 7536 7161 6814 1213 1213 1135 1063 9986 9394 8853 8358 7904 7487 7103 1314 1300 1211 1130 1056 9899 9295 8745 8244 7786 7367 1415 1387 1285 1194 1112 1038 9712 9108 8559 8061 7606 15(n) 11 12 13 14 15 16 17 18 19 201 0901 0893 0885 0877 0870 0862 0855 0847 0840 0833 12 1713 1690 1668 1647 1626 1605 1585 1566 1547 1528 23 2444 2402 2361 2322 2283 2246 2210 2174 2140 2106 34 3102 3037 2974 2914 2855 2798 2743 2690 2639 2589 45 3696 3605 3517 3433 3352 3274 3199 3127 3058 2991 56 4231 4111 3998 3889 3784 3685 3589 3498 3410 3326 67 4712 4564 4423 4288 4160 4039 3922 3812 3706 3605 78 5146 4968 4799 4639 4487 4344 4207 4078 3954 3837 89 5537 5328 5132 4946 4772 4607 4451 4303 4163 4031 910 5889 5650 5426 5216 5019 4833 4659 4494 4339 4192 1011 6207 5938 5687 5453 5234 5029 4836 4656 4486 4327 1112 6492 6194 5918 5660 5421 5197 4988 4793 4611 4439 1213 6750 6424 6122 5842 5583 5342 5118 4910 4715 4533 1314 6982 6628 6302 6002 5724 5468 5229 5008 4802 4611 1415 7191 6811 6462 6142 5847 5575 5324 5092 4876 4675 151 (1 U)Q UEnd of Question Paper89Answers10Pilot Paper F9 AnswersFinancial Management1 (a) Calculation of weighted average cost of capital (WACC) Market values Market value of equity = 5m x 4.50 = $22.5 million Market value of preference shares = 2.5m x .0762 = $1.905 million Market value of 10% loan notes = 5m x (105/ 100) = $5.25 million Total market value = 22.5m + 1.905m + 5.25m = $29.655 million Cost of equity using dividend growth model = [(35 x 1.04)/ 450] + 0.04 = 12.08% Cost of preference shares = 100 x 9/ 76.2 = 11.81% Annual after-tax interest payment = 10 x 0.7 = $7 Year Cashflow $ 10%DF PV($)5%DF PV($) 0 market value (105) 1.000 (105) 1.000 (105) 18 interest 7 5.335 37.34 6.463 45.24 8 redemption 100 0.467 46.70 0.677 67.70 (20.96) 7.94 Using interpolation, after-tax cost of loan notes = 5 + [(5 x 7.94)/ (7.94 + 20.96)] = 6.37% WACC = [(12.08 x 22.5) + (11.81 x 1.905) + (6.37 x 5.25)]/ 29.655 = 11.05%(b) Droxfol Co has long-term finance provided by ordinary shares, preference shares and loan notes. The rate of return required by each source of finance depends on its risk from an investor point of view, with equity (ordinary shares) being seen as the most risky and debt (in this case loan notes) seen as the least risky. Ignoring taxation, the weighted average cost of capital (WACC) would therefore be expected to decrease as equity is replaced by debt, since debt is cheaper than equity, i.e. the cost of debt is less than the cost of equity. However, financial risk increases as equity is replaced by debt and so the cost of equity will increase as a company gears up, offsetting the effect of cheaper debt. At low and moderate levels of gearing, the before-tax cost of debt will be constant, but it will increase at high levels of gearing due to the possibility of bankruptcy. At high levels of gearing, the cost of equity will increase to reflect bankruptcy risk in addition to financial. On this traditional view, therefore, Droxfol Co can gear up using debt and reduce its WACC to a minimum, at which point its market value (the present value of future corporate cash flows) will be maximised. In contrast to the traditional view, continuing to ignore taxation but assuming a perfect capital market, Miller and Modigliani demonstrated that the WACC remained constant as a company geared up, with the increase in the cost of equity due to financial risk exactly balancing the decrease in the WACC caused by the lower before-tax cost of debt. Since in a prefect capital market the possibility of bankruptcy risk does not arise, the WACC is constant at all gearing levels and the market value of the company is also constant. Miller and Modigliani showed, therefore, that the market value of a company depends on its business risk alone, and not on its financial risk. On this view, therefore, Droxfol Co cannot reduce its WACC to a minimum. When corporate tax was admitted into the analysis of Miller and Modigliani, a different picture emerged. The interest payments on debt reduced tax liability, which meant that the WACC fell as gearing increased, due to the tax shield given to profits. On this view, Droxfol Co could reduce its WACC to a minimum by taking on as much debt as possible. However, a perfect capital market is not available in the real world and at high levels of gearing the tax shield offered by interest payments is more than offset by the effects of bankruptcy risk and other costs associated with the need to service large amounts of debt. Droxfol Co should therefore be able to reduce its WACC by gearing up, although it may be difficult to determine whether it has reached a capital structure giving a minimum WACC. (c) (i) Interest coverage ratio Current interest coverage ratio = 7,000/ 500 = 14 times Increased profit before interest and tax = 7,000 x 1.12 = $7.84m Increased interest payment = (10m x 0.09) + 0.5m = $1.4m Interest coverage ratio after one year = 7.84/ 1.4 = 5.6 times The current interest coverage of Droxfol Co is higher than the sector average and can be regarded as quiet safe. Following the new loan note issue, however, interest coverage is less than half of the sector average, perhaps indicating that Droxfol Co may not find it easy to meet its interest payments.11(ii) Financial gearing This ratio is defined here as prior charge capital/equity share capital on a book value basis Current financial gearing = 100 x (5,000 + 2,500)/ (5,000 + 22,500) = 27% Ordinary dividend after one year = 0.35 x 5m x 1.04 = $1.82 million Total preference dividend = 2,500 x 0.09 = $225,000 Incomestatementafteroneyear $000 $000 Profit before interest and tax 7,840 Interest (1,400) Profit before tax 6,440 Income tax expense (1,932) Profit for the period 4,508 Preference dividends 225 Ordinary dividends 1,820 (2,045) Retained earnings 2,463 Financial gearing after one year = 100 x (15,000 + 2,500)/ (5,000 + 22,500 + 2,463) = 58% The current financial gearing of Droxfol Co is 40% less (in relative terms) than the sector average and after the new loan note issue it is 29% more (in relative terms). This level of financial gearing may be a cause of concern for investors and the stock market. Continued annual growth of 12%, however, will reduce financial gearing over time.(iii) Earnings per share Current earnings per share = 100 x (4,550 225)/ 5,000 = 86.5 cents Earnings per share after one year = 100 x (4,508 - 225)/ 5,000 = 85.7 cents Earnings per share is seen as a key accounting ratio by investors and the stock market, and the decrease will not be welcomed. However, the decrease is quiet small and future growth in earnings should quickly eliminate it. The analysis indicates that an issue of new debt has a negative effect on the companys financial position, at least initially. There are further difficulties in considering a new issue of debt. The existing non-current assets are security for the existing 10% loan notes and may not available for securing new debt, which would then need to be secured on any new non-current assets purchased. These are likely to be lower in value than the new debt and so there may be insufficient security for a new loan note issue. Redemption or refinancing would also pose a problem, with Droxfol Co needing to redeem or refinance $10 million of debt after both eight years and ten years. Ten years may therefore be too short a maturity for the new debt issue. An equity issue should be considered and compared to an issue of debt. This could be in the form of a rights issue or an issue to new equity investors.2 (a) Transaction risk This is the risk arising on short-term foreign currency transactions that the actual income or cost may be different from the income or cost expected when the transaction was agreed. For example, a sale worth $10,000 when the exchange rate is $1.79 per has an expected sterling value is $5,587. If the dollar has depreciated against sterling to $1.84 per when the transaction is settled, the sterling receipt will have fallen to $5,435. Transaction risk therefore affects cash flows and for this reason most companies choose to hedge or protect themselves against transaction risk. Translationrisk This risk arises on consolidation of financial statements prior to reporting financial results and for this reason is also known as accounting exposure. Consider an asset worth 14 million, acquired when the exchange rate was 1.4 per $. One year later, when financial statements are being prepared, the exchange rate has moved to 1.5 per $ and the balance sheet value of the asset has changed from $10 million to $9.3 million, resulting an unrealised (paper) loss of $0.7 million. Translation risk does not involve cash flows and so does not directly affect shareholder wealth. However, investor perception may be affected by the changing values of assets and liabilities, and so a company may choose to hedge translation risk through, for example, matching the currency of assets and liabilities (eg a euro-denominated asset financed by a euro-denominated loan). Economicrisk Transaction risk is seen as the short-term manifestation of economic risk, which could be defined as the risk of the present value of a companys expected future cash flows being affected by exchange rate movements over time. It is difficult to measure economic risk, although its effects can be described, and it is also difficult to hedge against it. (b) The law of one price suggests that identical goods selling in different countries should sell at the same price, and that exchange rates relate these identical values. This leads on to purchasing power parity theory, which suggests that changes in exchange rates over time must reflect relative changes in inflation between two countries. If purchasing power parity holds true, the expected spot rate (Sf) can be forecast from the current spot rate (S0) by multiplying by the ratio of expected inflation rates ((1 + if)/ (1 + iUK)) in the two counties being considered. In formula form: Sf = S0 (1 + if)/ (1 + iUK).12 This relationship has been found to hold in the longer-term rather than the shorter-term and so tends to be used for forecasting exchange rates several years in the future, rather than for periods of less than one year. For shorter periods, forward rates can be calculated using interest rate parity theory, which suggests that changes in exchange rates reflect differences between interest rates between countries. (c) Forward market evaluation Net receipt in 1 month = 240,000 140,000 = $100,000 Nedwen Co needs to sell dollars at an exchange rate of 1.7829 + 0.003 = $1.7832 per Sterling value of net receipt = 100,000/ 1.7832 = $56,079 Receipt in 3 months = $300,000 Nedwen Co needs to sell dollars at an exchange rate of 1.7846 + 0.004 = $1.7850 per Sterling value of receipt in 3 months = 300,000/ 1.7850 = $168,067 (d) Evaluation of money-market hedge Expected receipt after 3 months = $300,000 Dollar interest rate over three months = 5.4/ 4 = 1.35% Dollars to borrow now to have $300,000 liability after 3 months = 300,000/ 1.0135 = $296,004 Spot rate for selling dollars = 1.7820 + 0.0002 = $1.7822 per Sterling deposit from borrowed dollars at spot = 296,004/ 1.7822 = $166,089 Sterling interest rate over three months = 4.6/ 4 = 1.15% Value in 3 months of sterling deposit = 166,089 x 1.0115 = $167,999 The forward market is marginally preferable to the money market hedge for the dollar receipt expected after 3 months. (e) A currency futures contract is a standardised contract for the buying or selling of a specified quantity of foreign currency. It is traded on a futures exchange and settlement takes place in three-monthly cycles ending in March, June, September and December, ie a company can buy or sell September futures, December futures and so on. The price of a currency futures contract is the exchange rate for the currencies specified in the contract. marked. Nedwen Co expects to receive $300,000 in three months time and so is concerned that sterling may appreciate (strengthen) against the dollar, since this would result in a lower sterling receipt. The company can hedge the receipt by buying sterling currency futures contracts in the US and since it is 1 April, would buy June futures contracts. In June, Nedwen Co could sell the same number of US sterling currency futures it bought in April and sell the $300,000 it received on the currency market.3 (a) Evaluation of change in credit policy Current average collection period = 30 + 10 = 40 days Current accounts receivable = 6m x 40/ 365 = $657,534 Average collection period under new policy = (0.3 x 15) + (0.7 x 60) = 46.5 days New level of credit sales = $6.3 million Accounts receivable after policy change = 6.3 x 46.5/ 365 = $802,603 Increase in financing cost = (802,603 657,534) x 0.07 = $10,155 $ Increase in financing cost 10,155 Incremental costs = 6.3m x 0.005 = 31,500 Cost of discount = 6.3m x 0.015 x 0.3 = 28,350 Increase in costs 70,005 Contribution from increased sales = 6m x 0.05 x 0.6 = 180,000 Net benefit of policy change 109,995 The proposed policy change will increase the profitability of Ulnad Co (b) Determination of spread: Daily interest rate = 5.11/ 365 = 0.014% per day Variance of cash flows = 1,000 x 1,000 = $1,000,000 per day Transaction cost = $18 per transaction Spread = 3 x ((0.75 x transaction cost x variance)/interest rate)1/3 = 3 x ((0.75 x 18 x 1,000,000)/ 0.00014)1/3 = 3 x 4,585.7 = $13,75713 Lower limit (set by Renpec Co) = $7,500 Upper limit = 7,500 + 13,757 =$21,257 Return point = 7,500 + (13,757/ 3) = $12,086 The Miller-Orr model takes account of uncertainty in relation to receipts and payment. The cash balance of Renpec Co is allowed to vary between the lower and upper limits calculated by the model. If the lower limit is reached, an amount of cash equal to the difference between the return point and the lower limit is raised by selling short-term investments. If the upper limit is reached an amount of cash equal to the difference between the upper limit and the return point is used to buy short-term investments. The model therefore helps Renpec Co to decrease the risk of running out of cash, while avoiding the loss of profit caused by having unnecessarily high cash balances.(c) There are four key areas of accounts receivable management: policy formulation, credit analysis, credit control and collection of amounts due. Policyformulation This is concerned with establishing the framework within which management of accounts receivable in an individual company takes place. The elements to be considered include establishing terms of trade, such as period of credit offered and early settlement discounts: deciding whether to charge interest on overdue accounts; determining procedures to be followed when granting credit to new customers; establishing procedures to be followed when accounts become overdue, and so on. Creditanalysis Assessment of creditworthiness depends on the analysis of information relating to the new customer. This information is often generated by a third party and includes bank references, trade references and credit reference agency reports. The depth of credit analysis depends on the amount of credit being granted, as well as the possibility of repeat business. Creditcontrol Once credit has been granted, it is important to review outstanding accounts on a regular basis so overdue accounts can be identified. This can be done, for example, by an aged receivables analysis. It is also important to ensure that administrative procedures are timely and robust, for example sending out invoices and statements of account, communicating with customers by telephone or e-mail, and maintaining account records. Collectionofamountsdue Ideally, all customers will settle within the agreed terms of trade. If this does not happen, a company needs to have in place agreed procedures for dealing with overdue accounts. These could cover logged telephone calls, personal visits, charging interest on outstanding amounts, refusing to grant further credit and, as a last resort, legal action. With any action, potential benefit should always exceed expected cost.(d) When considering how working capital is financed, it is useful to divide assets into non-current assets, permanent current assets and fluctuating current assets. Permanent current assets represent the core level of working capital investment needed to support a given level of sales. As sales increase, this core level of working capital also increases. Fluctuating current assets represent the changes in working capital that arise in the normal course of business operations, for example when some accounts receivable are settled later than expected, or when inventory moves more slowly than planned. The matching principle suggests that long-term finance should be used for long-term assets. Under a matching working capital funding policy, therefore, long-term finance is used for both permanent current assets and non-current assets. Short-term finance is used to cover the short-term changes in current assets represented by fluctuating current assets. Long-term debt has a higher cost than short-term debt in normal circumstances, for example because lenders require higher compensation for lending for longer periods, or because the risk of default increases with longer lending periods. However, long-term debt is more secure from a company point of view than short-term debt since, provided interest payments are made when due and the requirements of restrictive covenants are met, terms are fixed to maturity. Short-term debt is riskier than long-term debt because, for example, an overdraft is repayable on demand and short-term debt may be renewed on less favourable terms. A conservative working capital funding policy will use a higher proportion of long-term finance than a matching policy, thereby financing some of the fluctuating current assets from a long-term source. This will be less risky and less profitable than a matching policy, and will give rise to occasional short-term cash surpluses. An aggressive working capital funding policy will use a lower proportion of long-term finance than a matching policy, financing some of the permanent current assets from a short-term source such as an overdraft. This will be more risky and more profitable than a matching policy. Other factors that influence a working capital funding policy include management attitudes to risk, previous funding decisions, and organisation size. Management attitudes to risk will determine whether there is a preference for a conservative, an aggressive or a matching approach. Previous funding decisions will determine the current position being considered in policy formulation. The size of the organisation will influence its ability to access different sources of finance. A small company, for example, may be forced to adopt an aggressive working capital funding policy because it is unable to raise additional long-term finance, whether equity of debt.144 (a) Calculation of NPV Nominal discount rate using Fisher effect: 1.057 x 1.05 = 1.1098 ie 11% Year 1 2 3 4 5 $000 $000 $000 $000 $000 Sales (W1) 433 509 656 338 Variable cost (W2) 284 338 439 228 Contribution 149 171 217 110 Fixed production overheads 27 28 30 32 Net cash flow 122 143 187 78 Tax (37) (43) (56) (23) CA tax benefits (W3) 19 14 11 30 After-tax cash flow 122 125 158 33 7 Disposal 5 After-tax cash flow 122 125 158 38 7 Discount factors 0.901 0.812 0.731 0.659 0.593 Present values 110 102 115 25 4 $ PV of benefits 356,000 Investment 250,000 NPV 106,000 Since the NPV is positive, the purchase of the machine is acceptable on financial grounds. Workings (W1)Year 1 2 3 4 Demand (units) 35,000 40,000 50,000 25,000 Selling price ($/unit) 12.36 12.73 13.11 13.51 Sales ($/year) 432,600 509,200 655,500 337,750 (W2)Year 1 2 3 4 Demand (units) 35,000 40,000 50,000 25,000 Variable cost ($/unit) 8.11 8.44 8.77 9.12 Variable cost ($/year) 283,850 337,600 438,500 228,000 (W3)Year Capitalallowances Taxbenefits 1 250,000 x 0.25 = 62,500 62,500 x 0.3 = 18,750 2 62,500 x 0.75 = 46,875 46,875 x 0.3 = 14,063 3 46,875 x 0.75 = 35,156 25,156 x 0.3 = 10,547 4 By difference 100,469 100,469 x 0.3 = 30,141 250,000 5.000 = 245,000 73,501 (b) Calculation of before-tax return on capital employed Total net before-tax cash flow = 122 + 143 + 187 + 78 = $530,000 Total depreciation = 250,000 5,000 = $245,000 Average annual accounting profit = (530 245)/ 4 = $71,250 Average investment = (250,000 + 5,000)/ 2 = $127,500 Return on capital employed = 100 x 71,250/ 127,500 = 56% Given the target return on capital employed of Trecor Co is 20% and the ROCE of the investment is 56%, the purchase of the machine is recommended.(c) One of the strengths of internal rate of return (IRR) as a method of appraising capital investments is that it is a discounted cash flow (DCF) method and so takes account of the time value of money. It also considers cash flows over the whole of the project life and is sensitive to both the amount and the timing of cash flows. It is preferred by some as it offers a relative measure of the value of a proposed investment, ie the method calculates a percentage that can be compared with the companys cost of capital, and with economic variables such as inflation rates and interest rates. IRR has several weaknesses as a method of appraising capital investments. Since it is a relative measurement of investment worth, it does not measure the absolute increase in company value (and therefore shareholder wealth), which can be found using the net present value (NPV) method. A further problem arises when evaluating non-conventional projects (where cash 15flows change from positive to negative during the life of the project). IRR may offer as many IRR values as there are changes in the value of cash flows, giving rise to evaluation difficulties. There is a potential conflict between IRR and NPV in the evaluation of mutually exclusive projects, where the two methods can offer conflicting advice as which of two projects is preferable. Where there is conflict, NPV always offers the correct investment advice: IRR does not, although the advice offered can be amended by considering the IRR of the incremental project. There are therefore a number of reasons why IRR can be seen as an inferior investment appraisal method compared to its DCF alternative, NPV. 16Pilot Paper F9 Marking SchemeFinancial Management Marks Marks1 (a) Calculation of market values 2 Calculation of cost of equity 2 Calculation of cost of preference shares 1 Calculation of cost of debt 2 Calculation of WACC 2 9 (b) Relative costs of equity and debt 1 Discussion of theories of capital structure 78 Conclusion 1 Maximum 8 (c) Analysis of interest coverage ratio 23 Analysis of financial gearing 23 Analysis of earnings per share 23 Comment 23 Maximum 8 252 (a) Transaction risk 2 Translation risk 2 Economic risk 2 6 (b) Discussion of purchasing power parity 45 Discussion of interest rate parity 12 Maximum 6 (c) Netting 1 Sterling value of 3-month receipt 1 Sterling value of 1-year receipt 1 3 (d) Evaluation of money market hedge 4 Comment 1 5 (e) Definition of currency futures contract 12 Initial margin and variation margin 12 Buying and selling of contracts 12 Hedging the three-month receipt 12 Maximum 5 2517 Marks Marks3 (a) Increase in financing cost 2 Incremental costs 1 Cost of discount 1 Contribution from increased sales 1 Conclusion 1 6 (b) Calculation of spread 2 Calculation of upper limit 1 Calculation of return point 1 Explanation of findings 2 6 (c) Policy formulation 12 Credit analysis 12 Credit control 12 Collection of amounts due 12 Maximum 6 (d) Analysis of assets 12 Short-term and long-term debt 23 Discussion of policies 23 Other factors 12 Maximum 7 254 (a) Discount rate 1 Inflated sales revenue 2 Inflated variable cost 1 Inflated fixed production overheads 1 Taxation 2 Capital allowance tax benefits 3 Discount factors 1 Net present value 1 Comment 1 13 (b) Calculation of average annual accounting profit 2 Calculation of average investment 2 Calculation of return on capital employed 1 5 (c) Strengths of IRR 23 Weaknesses of IRR 56 Maximum 7.
https://id.scribd.com/document/82798206/f9-2006-dec-ppq
CC-MAIN-2019-51
refinedweb
5,979
56.39
This time value appears in my database: (2005, 4, 3, 3, 47, 43, 6, 93, -1) Try this: import time time.mktime((2005, 4, 3, 2, 47, 43, 6, 93, -1)) time.mktime((2005, 4, 3, 1, 47, 43, 6, 93, -1)) time.mktime((2005, 4, 3, 3, 47, 43, 6, 93, -1)) The first mktime call fails with an overflow error. The other two pass. That's because the hour between 2 and 3 AM did not exist this past night. This datetime made it into the database through some addition (it wasn't a timestamp). col.py's DateTimeValidator (line 745 in 0.6.1) choked on this value. Anyone have thoughts on the best way to handle this? I could try to be stricter in my timezones, or the validator could keep an eye out for this, since this is specifically a problem where the daily savings time status is unknown (the -1 at the end). Kevin
http://sourceforge.net/p/sqlobject/mailman/attachment/3f085ecd05040312201bb91a17%40mail.gmail.com/1/
CC-MAIN-2014-52
refinedweb
160
83.15
Testing deployment of pre-built .war to various web serversJonathan Fuerth Apr 18, 2012 10:53 AM Hi testing enthusiasts, I'm itching to automate deployment testing of several Errai quickstart projects. Here's what we're doing by hand: * Launch in Dev Mode and poke at the app (this can be handled already by the tooling we have) * Build a WAR and deploy it to Jetty 7, Jetty 8, Tomcat 7, JBoss AS 6, and JBoss AS 7. (Each server has its own Maven profile, so we need to do a clean build for each) * Test that "mvn clean" properly deletes all generated files, bringing the project back to its pristine state HtmlUnit is powerful enough to verify the app deployed correctly. Real browser testing is not important because the quickstarts are not intricately styled. I feel like Arquillian has probably solved this problem already. Can I use Arquillian to deploy the target/${myapp}.war to an Arquillian Managed container, then load the page and poke at the DOM (fill in a form field, press a button, check for response) with HtmlUnit? It's a different use case than the docs and tutorials focus on: I explicitly don't want to use ShrinkWrap in this case, because the thing I'm testing is that the .war was assembled correctly. I also don't want to inject anything into my test case. I just want to load the page into HtmlUnit and poke at it. I greatly appreciate any and all ideas about how to automate away this tedious job. -Jonathan 1. Re: Testing deployment of pre-built .war to various web serversMarek Schmidt Apr 18, 2012 11:20 AM (in response to Jonathan Fuerth) It is easy to deploy an existing war with ShrinkWrap: ShrinkWrap.create(ZipImporter.class, "foo.war").importFrom(new File("target/foo.war")) (you just have to make sure your test runs in the integration-test maven phase, that is after "package", .. you will probably run the test itself in a completely different maven project anyway) You can use HtmlUnitDriver with Arquillian Drone: (just replace the "WebDriver" in the example with ) 2. Re: Testing deployment of pre-built .war to various web serversJonathan Fuerth Apr 23, 2012 1:42 PM (in response to Marek Schmidt) Thanks, Marek. I tried that and it worked great. I'm just now grappling with whether or not a Maven build is the appropriate vehicle for this type of deployment testing. It's a lot of baggage to add to the main pom generated by the archetype. Putting it in a second "deployment testing" pom alongside the main pom might be an option. It could be useful to show how to approach deployment testing, or it could be a big distraction from the quickstart itself. Maybe just a script in the parent project that creates the archetype? How have others approached testing of quickstarts that must be deployable to various containers? -Jonathan 3. Re: Testing deployment of pre-built .war to various web serversKarel Piwko Apr 24, 2012 3:37 AM (in response to Jonathan Fuerth) ShrinkWrap Maven Resolver contains a MavenImporter. Not really so much useful at this point of view, as it basically automagically picks up the result of "mvn package", however with possibility to select profiles and spawning a completely different Maven execution from Surefire (in upcoming versions), it should allow you to construct JAR/WAR/EAR easily. See following for further details: 4. Re: Testing deployment of pre-built .war to various web serversDan Allen Apr 24, 2012 3:57 AM (in response to Karel Piwko) This is very nice: WebArchive archive = ShrinkWrap.create(MavenImporter.class, "test.war") .loadEffectivePom("pom.xml").importBuildOutput() .as(WebArchive.class); Since this is a case which can come up quite often, and in some test suites be used over and over again, I'd like to see an annotation for this scenario (which activates this build chain under the covers). Something like: @RunWith(Arquillian.class) @DeployBuildOutput public class MyFunctionalTest { @ArquillianResource private URL url; @Test public void shouldBehaveSomeWay() { // make a request to the url } } Of course, in this case, a @Deployment method would not be required (which is possible through an extension). Thoughts? 5. Re: Testing deployment of pre-built .war to various web serversDan Allen Apr 24, 2012 4:03 AM (in response to Dan Allen) I had hacked up an extension prototype a while back that implements this idea, though it uses the older ShrinkWrap Resolver...which would be replaced w/ Karel's snippet. It reminded me I had a better name for the deploy annotation: @RunWith(Arquillian.class) @DeployProjectArtifact public class MyFunctionalTest { ... } If we pursue this, where do you think this belongs? In Drone? In a module by itself? 6. Re: Testing deployment of pre-built .war to various web serversKarel Piwko Apr 24, 2012 4:06 AM (in response to Dan Allen) Such requirement exists for a pretty long time here: . Now we have a ShrinkWrap Resolver Maven Plugin, @DeployBuildOutput might work without specifying path to the pom, active profiles, etc. However, IDE support for the plugin is still an open question here. This is an actual show-stopper for the moment . If implemented, creating an Arquillian extension with @DeployBuildOuput annotation would be an easy task. 7. Re: Testing deployment of pre-built .war to various web serversKarel Piwko Apr 24, 2012 4:15 AM (in response to Dan Allen) I think it should be a part of ShrinkWrap Maven Resolver. Once on classpath, you can do Maven magic for deployments. 8. Re: Testing deployment of pre-built .war to various web serversSamuel Santos Apr 24, 2012 9:48 AM (in response to Karel Piwko) +1 to add this to ShrinkWrap Maven Resolver. Can you create a Jira entry to make it easier to follow? 9. Re: Testing deployment of pre-built .war to various web serversDan Allen May 1, 2012 10:40 PM (in response to Samuel Santos) It turns out, there was already a JIRA...it just was before its time - Base test deployment on project in which test is run
https://developer.jboss.org/thread/198551?tstart=0
CC-MAIN-2018-17
refinedweb
1,014
62.98
User Name: Published: 14 May 2007 By: Brian Mains. Health monitoring is a new way to evaluate errors in or collect general information about your applications. It uses event classes that raise events to a provider, which logs the event in a data store of the provider’s choosing. The beauty of health monitoring is that it is completely configurable through the configuration file, meaning it logs the events you want to log. By default, the .NET framework has providers to log events to the ASP.NET standard database, the event log, the tracing mechanism, an email message, and any other source you desire through creating custom event providers. All of this we shall see in an upcoming example. The health monitoring terms that are used are summarized below: Raise() WebEventProvider BufferedWebEventProvider The healthMonitoring element has five sub-elements (bufferModes, profiles, providers, eventMappings, and rules) that define the events and providers the framework uses. If you take a look at your machine's web.config file (not machine.config) in Windows\Microsoft.NET\Framework\\Config folder, you will see the default setup for health monitoring. By default, health monitoring comes with several providers and events already registered; however, only the event log is setup to capture errors and auditing failures. Some of the events already defined are events for Request Processing, Auditing, and general error information. In addition, there are some default buffer mode/profile settings already available as well. Lastly, there is a heartbeat event that is raised in accordance to the setting of the heartbeatInterval property. healthMonitoring bufferModes profiles providers eventMappings rules web.config machine.config heartbeatInterval So what does the configuration file setup look like? First, the healthMonitoring element must be defined. The element has two primary attributes: enabled and heartbeatInterval. Enabled is obvious, and heartbeatInterval specifies the interval that the heartbeat will run in seconds. The following example will raise a heartbeat event every 10 minutes (60 sec. * 10). enabled Enabled heartbeat Next, I created a profile and buffer mode as an example. I normally use the default settings if I use any, but in this case I wanted to show how these two are setup: Next, below are event mappings defintions, mapping the appropriate event class to a friendly event mapping name. This name will be used to map the events to the appropriate provider in the rules section. As you see, events defined below define an event code. These event codes will be sent to the provider, meaning that only Web Error Events with a code of 100100 or 100101 will be provided; everything else is ignored. Web Error Events 100100 100101 Providers map a friendly name to the provider that will receive calls. The friendly name will be used to reference the provider later on. If the event provider is a buffered event provider, add the buffer and bufferMode attributes to the declaration. The bufferMode attribute must be a valid name in the bufferModes section. Notice some providers have additional properties, like xmlDataFile for the XmlProvider provider. These values are collected through the Initialize method of the provider, like any other provider. See more information about creating custom providers for more information. bufferMode xmlDataFile XmlProvider Initialize Lastly, rules map event mappings to providers, so that only certain events are raised to the providers; all others are ignored. By default, the event log receives errors and audit failures. Since this article features custom events (shown later), we will register those in the section below. This web site now has additional error/event detection for custom events raised in the application, as well as logging for pre-defined events. The configuration above is what is needed to use health monitoring. These events will register the source to the XML file, event log, and tracing mechanism wherever those providers are specified. Let’s take a look at what was rendered to the XML provider for the heartbeat event: I created a test page that raises custom events in my Nucleo framework, which I was capturing. The following events were captured in the XML: It did log the errors; however, I raised multiple events at one time, but only one was logged at a time. I assume this was an issue with the profile, and to test I changed the profile of Critical (which does not throttle events), and the following results occurred whenever I clicked through the events. Critical It definitely appears that the profile setting highly affects the amount of event log entries being logged, so be aware of that. You’ve seen a couple of event objects that aren’t defined in the framework, which you may be interested in. It’s not hard at all to create your own custom events. Below is a base class that I defined for all of my custom events. Custom web events inherit from WebBaseEvent, which you can do so yourself. However, there is good reason to create your own custom base event class. Let's take a look at the definition below first: WebBaseEvent Because all of our custom events inherit from this custom base class, it makes it easier to register your events in the configuration file. To include events for monitoring, the configuration file references them by object type. Because all of the events inherit from WebMonitoringEventBase class, all events can be referenced through the WebMonitoringEventBase type in bulk, instead of registering each one separately. If you look back to the eventMappings section, you will see that WebMonitoringEventBase has been defined. WebMonitoringEventBase Below is a custom event that is used for raising error events in a web application. This is a custom application with the event code of 100100, well over the event codes for user-defined events (custom events start at 100000). Instead of exposing the message directly, a prefabricated message defined as a constant is used instead. More pertinent information is passed in through the constructor, like the page name and the Exception that occurred. For your information, there is a WebEventCodes object that contains an enumerated list of code values; however, I used a constant so I could provide the value in the constructor, and so the value was static. Instead of providing an event code manually, I wanted to provide it in the class so the user of the event didn’t need to worry about it. WebExtendedBase is a value of 100000, so I start my event numbering from there. It's simple to create custom events because the parameters are passed through the constructor, so the definition of the event is relatively simple to implement. You don’t need to define any other properties, methods, or events to do anything with them. In addition to custom events, we can create custom providers by inheriting from the WebEventProvider class in the System.Web.Management namespace. This class defines three methods: Flush(), ProcessEvent(), and Shutdown(). Shutdown is called whenever the web application's application domain is terminated, and therefore any resources can be saved that are not yet flushed. Some providers have the ability to buffer the information, and flush it all at once. This method performs that action. Lastly, ProcessEvent handles the processing of the event, which usually means writing it to the underlying data store. System.Web.Management Flush() ProcessEvent() Shutdown() Shutdown ProcessEvent If you inherit from BufferedWebEventProvider, this class takes care of those three methods for you, but forces you to override the ProcessEventFlush method, which when buffering, flushes all of the events at a single time. That is what this article will focus on, by creating an XmlEventProvider that flushes events to an XML file. ProcessEventFlush XmlEventProvider The following code definition is the XmlEventProvider class. This class uses an XmlDocument collection to modify and save the changes to the XML file. The Initialize method gets the path of the XML data file and stores it in a variable. XmlDocument As you see in ProcessEventFlush, it’s easy to flush a series of events because the flushInfo object contains an events collection. There are several helper methods used to assist with the rendering of the XML and referencing the path; because the file checking code is in the Initialize method, I was having an error when compiling. flushInfo Some of the code is featured in the Nucleo framework on the CodePlex web site. You can download it and check it out for yourself. The code used above is in the Nucleo.Web project, under the Monitoring and Providers folders. I tried to include the code directly in the example project attached; however, there was a problem and it didn’t work correctly. Nucleo.Web However, the project attached with this article comes with a test web site and the Nucleo assemblies a referenced in the bin folder, so you could use a tool like Reflector to check out the source. Nucleo Using the web application, select the type to raise and click the button. The event will eventually be added to the XML file when flushed. HealthMonitoringExample The following references may be of use to you to understand health monitoring and more of the specifics:
http://dotnetslackers.com/articles/sql/HealthMonitoring.aspx
crawl-003
refinedweb
1,518
62.17
Introducing the Factory Now we're down to the really fun part. We've got our interface (Ferret) defined, and we've got at least two classes (GoogleFinder and GoogleSpecificFinder) that implement it. We've got an application that can use one or the other to instantiate a Ferret object and use it to perform an analysis. Now it's time for the factory. The idea behind a factory is that rather than create a specific class, we can create a factory and then let the factory decide what class to instantiate. In some ways, Java factories are much like real-world factories; we might have a factory that produces television sets. Each television set has specific features about it (such as a screen and a way to change the channel), but a particularly flexible factory might turn out hand-held models with a radio-like dial and projection setswith a three-pound remote control that also makes coffee. It all depends on what management wants at any given time. In our case, we will create a factory that produces Ferrets. In Listing 6, we'll start simple, with a factory (FerretFactory) that always produces a GoogleFinder-variety Ferret. Listing 6 Simple Factory package org.chase.research; public class FerretFactory { public Ferret newFerret() { return new org.chase.ferrets.GoogleFinder(); } } This class has only one method, and all it does is return an instance of GoogleFinder as a Ferret object. To use the factory, we need to make a change to our ResearchProject application, as shown in Listing 7. Listing 7 Using the Factory); ... Notice, first of all, that we removed the import statement for GoogleFinder and GoogleSpecificFinder, and that neither is mentioned anywhere in the application. Instead, we're instantiating the FerretFactory and then using the newFerret() method to return a Ferret object. The overall application has no idea what class is implementing the Ferret, and that's as it should be. If we run the ResearchProject application, however, we can see pretty quickly from the results that it's GoogleFinder, as noted within the FerretFactory.
http://www.informit.com/articles/article.aspx?p=28283&seqNum=7
CC-MAIN-2019-04
refinedweb
346
54.02
0 I am attempting to take names and weights from strings and just simply output them. For some reason it can't get out of the while loop, it just sets there waiting for the users to input names and weights. Here's my code #include <iostream> #include <string> #include <vector> using namespace std; int main (void) { // creating variables for input string name; int weight; // creating vectors vector<string> names; vector<int> weights; cout << "Enter names and weight" << endl; while(cin >> name, weight) { names.push_back(name); weights.push_back(weight); } // outputs names and weight for( unsigned int n = 0; n < weights.size(); n++ ) { cout << "[" << n << "] " << weights[n] << " " << names[n] << endl; } cout << "done." << endl; return 0; } I am self teaching myself C++ with someone helping me as well, my issue is. I am currently learning from Accelerated C++ book. I am on chapter three. The problem is it doesn't come out of the while loop. It doesn't know when the user will stop input. How do I do it this way, but make it so the computer knows when the while(cin >> ) stops from the user input.
https://www.daniweb.com/programming/software-development/threads/201057/while-loop-user-input
CC-MAIN-2017-09
refinedweb
187
81.02
0.6.3-release From OpenSim r8505 | lbsa71 | 2009-02-19 07:54:21 -0700 (Thu, 19 Feb 2009) | 1 line - reverted the revert of the revert. What can I say? I'm calling this a day, and will get back up on the horse tomorrow. r8504 | lbsa71 | 2009-02-19 07:51:33 -0700 (Thu, 19 Feb 2009) | 1 line - Changed all AssemblyInfo to explicit version 1.0.0.0 to not confuse poor poor Nant. We probably should take the opportunity to let the non-module bins reside in their /bin/Debug dirs later. r8503 | lbsa71 | 2009-02-19 07:35:11 -0700 (Thu, 19 Feb 2009) | 1 line - Reverted the revert, as it seems the problem was the 1.0.* in the separate projects. r8502 | lbsa71 | 2009-02-19 07:16:22 -0700 (Thu, 19 Feb 2009) | 1 line - Reverted Prebuild commit due to strange run-time errors. r8501 | lbsa71 | 2009-02-19 07:10:46 -0700 (Thu, 19 Feb 2009) | 1 line - Ignored some bins r8500 | lbsa71 | 2009-02-19 06:36:25 -0700 (Thu, 19 Feb 2009) | 2 lines - ... okay, so the Prebuild.exe changed again when building from VS... trying to get to the bottom of this. r8499 | melanie | 2009-02-19 06:02:11 -0700 (Thu, 19 Feb 2009) | 3 lines Make the implementation of the message transfer module protected virtual throughout r8498 | lbsa71 | 2009-02-19 06:01:01 -0700 (Thu, 19 Feb 2009) | 3 lines - Hm. Something odd here, the Prebuild.exe wasn't supposed to change from last commit. Re-trying. - Ignoring some gens r8497 | lbsa71 | 2009-02-19 05:48:38 -0700 (Thu, 19 Feb 2009) | 21 lines PREBUILD UPSTREAMS UPDATE : POTENTIAL BREAKAGE - Applied upstreams changes to allow for auditing and debugging in our various environments. - This should, in theory, bring back 'multiple ref dirs'. - Temporarily Removed xmlns because prebuild-1.7 schema does not allow for multiple solutions per prebuild node (This will be a moot issue once the Prebuild node is moved out of prebuild.xml) - Autotools target: Various minor fixes - MonoDevelop Target : No changes. - Nant Target: Various minor fixes, support for net-3.5 and mono-2.0/3.5 targets - Sharpdevelop targets: No changes. - VS Targets: Refactored into using VSGenericTarget, and supports 2.0-3.5 - XCode Target: No changes. --- Regressions and outstanding issues --- - The Solution is assigned a random Guid - will lead to unnecessary reloads and loss of user settings. --- New features of Prebuild 2.0.4 --- - (Better) support for Web, WinForms and Database Projects and build actions - Conditional Framework Version compilation support (1.1, 2.0-3.5) - ArrayList -> List<>, ICollection -> IList (this means Prebuild can generate 1.1 solutions, but can't itself be built under 1.1 - how very meta) - Added <?include file="sub_prebuild.xml" ?> preprocessor directive. r8496 | mw | 2009-02-19 05:38:17 -0700 (Thu, 19 Feb 2009) | 1 line reverted last revision, until we decide how to handle capturing IM's r8495 | mw | 2009-02-19 04:54:53 -0700 (Thu, 19 Feb 2009) | 1 line Added a event to IMessageTransferModule (and MessageTransferModule) so that other modules can capture IM messages and do custom handling of them. As just attaching to Client IM events doesn't really support this, as they would still get routed through the normal process and could give back errors. r8494 | melanie | 2009-02-18 22:31:17 -0700 (Wed, 18 Feb 2009) | 2 lines Force plugin state update when region crossing r8493 | melanie | 2009-02-18 22:24:19 -0700 (Wed, 18 Feb 2009) | 3 lines Try this, then :) remove just one line from script serialization, hunting the bug r8492 | melanie | 2009-02-18 22:18:23 -0700 (Wed, 18 Feb 2009) | 2 lines Refix the fix, adding a forgotten line r8491 | melanie | 2009-02-18 22:16:25 -0700 (Wed, 18 Feb 2009) | 2 lines Attempt to fix a Windows only race in thread termination r8490 | melanie | 2009-02-18 20:09:56 -0700 (Wed, 18 Feb 2009) | 5 lines Thank you, Snowdrop, for a patch that makes the callback ID parameter usable. Applied with formatting changes, please don't introduce K&R style indentations into OpenSimulator Fixes Mantis #3190 r8489 | ckrinke | 2009-02-18 19:51:32 -0700 (Wed, 18 Feb 2009) | 2 lines Mantis#3188. Thank you kindly, BlueWall, for a patch that: Adding the ability to set the background color for osSetDynamicTextureData in the extra data: bgcolour:value (see [^] for color names) r8488 | melanie | 2009-02-18 18:14:26 -0700 (Wed, 18 Feb 2009) | 2 lines Fix region crossing for unscripted prims, avoid costly SEH r8487 | melanie | 2009-02-18 16:28:04 -0700 (Wed, 18 Feb 2009) | 5 lines Make in-code provisions for the tests. Tests would fail because the required file system objects are not present in the test harness. This makes the main code ignore the failure, therefore the test succeeds. Not elegant and maybe a unit test guru has a better way. Marked as a TODO r8486 | melanie | 2009-02-18 15:57:36 -0700 (Wed, 18 Feb 2009) | 2 lines Fix standalone / simulator local script crossings. r8485 | melanie | 2009-02-18 15:32:25 -0700 (Wed, 18 Feb 2009) | 2 lines Fix the windows sharing violations on script crossings r8484 | diva | 2009-02-18 14:28:54 -0700 (Wed, 18 Feb 2009) | 1 line Stops animations on Teleports, to conform with what the viewer does. r8483 | justincc | 2009-02-18 14:02:43 -0700 (Wed, 18 Feb 2009) | 2 lines - Change AssetGatherer method access so that only methods which are worth calling from the outside are public r8482 | diva | 2009-02-18 13:10:40 -0700 (Wed, 18 Feb 2009) | 1 line Fixes height on Basic Physics in local teleports. Plus some small refactoring. r8481 | justincc | 2009-02-18 13:04:14 -0700 (Wed, 18 Feb 2009) | 2 lines - minor: comment out a few more [de]serialization sog timing messages r8480 | justincc | 2009-02-18 13:00:21 -0700 (Wed, 18 Feb 2009) | 3 lines - Move asset gathering code from oar module to OpenSim.Region.Framework since this is useful in a variety of situations - Comment out one oar test since I think somehow the two save tests are causing the occasional test failures r8479 | justincc | 2009-02-18 12:26:10 -0700 (Wed, 18 Feb 2009) | 3 lines - Make save iar behave properly if the nominated inventory path does not exist - load iar probably still fails for this r8478 | melanie | 2009-02-18 11:48:59 -0700 (Wed, 18 Feb 2009) | 2 lines Fix estate ban list persistence in MySQL and reenable tests r8477 | diva | 2009-02-18 09:11:34 -0700 (Wed, 18 Feb 2009) | 1 line Restoring method 2 of linking regions in HG, which was commented out for some bizarre reason. Fixes mantis #3141. Thanks Vinc for providing an alternative patch, which wasn't used but served to expose the mix-up. r8476 | sdague | 2009-02-18 06:15:07 -0700 (Wed, 18 Feb 2009) | 5 lines From: Alan Webb <awebb@linux.vnet.ibm.com> I've changed the extension point name, and the internal references that used the same string. I also fixed up the messaging around the asset loader so that it is more explicit. r8475 | sdague | 2009-02-18 05:56:36 -0700 (Wed, 18 Feb 2009) | 12 lines From: Christopher Yeoh <yeohc@au1.ibm.com> The attached patch implements osGetDrawStringSize that looks like: vector osGetDrawStringSize(string contentType, string text, string fontName, int fontSize) in LSL. It is meant to be used in conjunction with the osDraw* functions. It returns accurate information on the size that a given string will be rendered given the specified font and font size. This allows for nicely formatted and positioned text on the generated image. r8474 | sdague | 2009-02-18 05:56:28 -0700 (Wed, 18 Feb 2009) | 2 lines remove legacy pre-migration code for mysql grid adapter, who knew this was still in there. r8473 | diva | 2009-02-17 20:50:09 -0700 (Tue, 17 Feb 2009) | 1 line Improved log message. r8472 | diva | 2009-02-17 18:49:18 -0700 (Tue, 17 Feb 2009) | 4 lines Adds support for preserving animations on region crossings and TPs. Known issue: after TP, the self client doesn't see the animations going, but others can see them. So there's a bug there (TPs only, crossings seem to be all fine). Untested: did not test animation overriders; only tested playing animations from the viewer. r8471 | diva | 2009-02-17 16:46:19 -0700 (Tue, 17 Feb 2009) | 1 line Makes SP.CopyFrom a bit more robust with respect to sims in older versions which still don't have the new appearance management code. r8470 | melanie | 2009-02-17 13:08:35 -0700 (Tue, 17 Feb 2009) | 2 lines Fix a typo. i + i is not 2 times me r8469 | melanie | 2009-02-17 12:33:25 -0700 (Tue, 17 Feb 2009) | 2 lines Re-fixing the fix :/ r8468 | melanie | 2009-02-17 12:30:35 -0700 (Tue, 17 Feb 2009) | 2 lines One-liner to fix an omission r8467 | sdague | 2009-02-17 12:06:23 -0700 (Tue, 17 Feb 2009) | 2 lines remove all the very old create and upgrade sql files, these were outdated by migrations 6 months ago. r8466 | justincc | 2009-02-17 11:46:42 -0700 (Tue, 17 Feb 2009) | 4 lines - Allow inventory archives to be saved from the 'root' inventory directory - Reload doesn't currently obey structure information - Not yet ready for use r8465 | drscofield | 2009-02-17 11:27:01 -0700 (Tue, 17 Feb 2009) | 4 lines - additional code to get ConciergeModule to do truly async broker updates - adding watchdog timer async web request - making broker update timeout configurable r8464 | justincc | 2009-02-17 11:19:24 -0700 (Tue, 17 Feb 2009) | 3 lines - Assign incoming items with a random UUID so that archives can be loaded more than once - Also remove a duplicate write archive call in the unit test which might be causing test failures for people using mono 2.2 (though not 1.9.1, it would seem) r8463 | justincc | 2009-02-17 10:40:48 -0700 (Tue, 17 Feb 2009) | 2 lines - extend inventory archive save test to check for the presence of the item file in the saved archive r8462 | diva | 2009-02-17 10:38:11 -0700 (Tue, 17 Feb 2009) | 1 line Addresses mantis #3181. Waiting for confirmation from the reporter. r8461 | justincc | 2009-02-17 10:12:10 -0700 (Tue, 17 Feb 2009) | 4 lines - Apply - This enables parsing of xml files and files obtained via http for the -inimaster option as well as -inifile - Thanks StrawberryFride! r8460 | justincc | 2009-02-17 09:51:09 -0700 (Tue, 17 Feb 2009) | 2 lines - switch to pulsing monitors to perform test sync instead of events, since this doesn't allow one to accidentally forget to reset the event r8459 | justincc | 2009-02-17 09:25:59 -0700 (Tue, 17 Feb 2009) | 3 lines - Get rid of a unit test race condition based on my misreading of the AutoResetEvent docs - Hopefully this will reduce the spike in build failures seen in the past few days (since I introduced an addition oar test) r8458 | lbsa71 | 2009-02-17 09:19:17 -0700 (Tue, 17 Feb 2009) | 1 line - Ignored even more gens r8457 | lbsa71 | 2009-02-17 09:15:29 -0700 (Tue, 17 Feb 2009) | 1 line - fixed 'path' reference attribute for Nant and VS2008 targets. r8456 | justincc | 2009-02-17 09:04:43 -0700 (Tue, 17 Feb 2009) | 5 lines - Apply - Moves llEmail() delay to after e-mail send rather than before, in line with SL - Thanks DoranZemlja - Last build failure looks like a glitch, but one that has already happened twice recently which I need to look at r8455 | justincc | 2009-02-17 08:55:56 -0700 (Tue, 17 Feb 2009) | 5 lines - Apply - This slightly extends a lock in WorldCommModule so that it covers the GetNewHandle method which states in it's doc that it assumes locking has happened before the method is called - Thanks DoranZemlja r8454 | justincc | 2009-02-17 08:47:53 -0700 (Tue, 17 Feb 2009) | 4 lines - Apply - Clamps textured map rgb values to 0-255 - Thanks DoranZemlja r8453 | justincc | 2009-02-17 08:39:18 -0700 (Tue, 17 Feb 2009) | 3 lines - Establish InventoryArchiveSaved event for unit tests - This is done on the inventory archiver module directly rather than Scene.EventManager - the module seems the more appropriate location r8452 | lbsa71 | 2009-02-17 07:13:55 -0700 (Tue, 17 Feb 2009) | 1 line - Ignored a bunch of genned files r8451 | lbsa71 | 2009-02-17 07:12:57 -0700 (Tue, 17 Feb 2009) | 1 line - Moved the nifty MySQLEstateData connectionstring password-stripper out into the Util project r8450 | melanie | 2009-02-16 21:16:42 -0700 (Mon, 16 Feb 2009) | 5 lines Re-add the objectID field to the anim pack, that was deemed unneccessary and dropped nonths ago, because it is required to get smooth region crossings with AO running. Without it, in some corner cases, anims will continue to run in an unstoppable state. r8449 | diva | 2009-02-16 20:14:08 -0700 (Mon, 16 Feb 2009) | 1 line Small change on dealing with ODE physics, so that this warning doesn't happen: "[PHYSICS]: trying to change capsule size, but the following ODE data is missing - Shell Body Amotor". That warning occurred in MakeRoot, because of the call to SetSize, immediately after making the avie physical. r8448 | mikem | 2009-02-16 18:36:44 -0700 (Mon, 16 Feb 2009) | 6 lines - remove the Metadata property from AssetBase and return all previous properties as before - prefix private variables with m_ in AssetBase.cs - related to Mantis #3122, as mentioned in - all services will likely need to be upgraded after this commit r8447 | diva | 2009-02-16 17:35:52 -0700 (Mon, 16 Feb 2009) | 2 lines Major change to how appearance is managed, including changes in login and user service/server. Appearance is now sent by the user service/server along with all other loginparams. Regions don't query the user service for appearance anymore. The appearance is passed along from region to region as the avie moves around. And, as before, it's stored back with the user service as the client changes the avie's appearance. Child agents have default appearances that are set to the actual appearance when the avie moves to that region. (as before, child agents are invisible and non-physical). r8446 | drscofield | 2009-02-16 13:13:59 -0700 (Mon, 16 Feb 2009) | 2 lines cleanup r8445 | drscofield | 2009-02-16 13:01:54 -0700 (Mon, 16 Feb 2009) | 5 lines From: alan webb <alan_webb@us.ibm.com> & dr scofield <drscofield@xyzzyxyzzy.net> This changeset fixes a rather nasty script compile bug that manifests itself under heavy load. r8444 | justincc | 2009-02-16 12:33:11 -0700 (Mon, 16 Feb 2009) | 4 lines - Apply - Adds estate access list supports to NHibernate data module - Thanks Tommil r8443 | sdague | 2009-02-16 12:23:53 -0700 (Mon, 16 Feb 2009) | 2 lines line ending fixes and set native eol property r8442 | justincc | 2009-02-16 12:15:16 -0700 (Mon, 16 Feb 2009) | 3 lines - refactor: remove AssetCache field hanging off Scene - This is always available at Scene.CommsManager.AssetCache r8441 | justincc | 2009-02-16 11:33:05 -0700 (Mon, 16 Feb 2009) | 2 lines - Iniital inventory archive test code. Doesn't actually do any testing yet r8440 | justincc | 2009-02-16 09:53:43 -0700 (Mon, 16 Feb 2009) | 2 lines - remove duplicate OpenSim.Region.CoreModules assembly entry r8439 | justincc | 2009-02-16 09:31:07 -0700 (Mon, 16 Feb 2009) | 4 lines - Apply - Corrects behaviour of llListSort() - Thanks DoranZemlja! r8438 | justincc | 2009-02-16 09:22:52 -0700 (Mon, 16 Feb 2009) | 2 lines - minor: print out status messages at start and end of inventory archive loading and saving r8437 | sdague | 2009-02-16 05:20:31 -0700 (Mon, 16 Feb 2009) | 29 lines From: Alan Webb <awebb@linux.vnet.ibm.com> The change makes two principal implementation changes: [1] It removes the hard coded set of possible asset server client implementations, allowing any arbitrary implementation that has been identified to the PluginLoader as an appropriate extension. The extension point for asset server client extension is /OpenSim/AssetServerClient. All of the old configuration rules have been preserved, and any of the legacy configuration values will still work as they did before, except the implementation is now loaded as a plug-in, rather than as a hard-coded instantiation of a specific class. The re-hashing of IAssetServer as an extension of IPlugin made upgrading of the implementation classes a necessity. Caveat: I have not been able to meaningfully test the crypto-grid clients. I believe they should work correctly, but the refactoring necessary to handle plug-in based initialization (vs constructor-based initialisation) admits the possibility of a problem. [2] The asset cache implementation, previously introduce as a hard-code class instantiation is now implemented as an IPlugin. Once again the previous (configurationless) behavior has been preserved. But now it is possible for those interested in experimenting with cache technologies to do so simply by introducing a new extension for the asset cache extension point (/OpenSim/AssetCache). I've tested all of the configuration settings, after applying the patch to a newly extracted tree, and they seem to work OK. r8436 | drscofield | 2009-02-16 02:17:55 -0700 (Mon, 16 Feb 2009) | 2 lines cosmetic: adding region name to logging statement r8435 | mikem | 2009-02-15 19:29:00 -0700 (Sun, 15 Feb 2009) | 4 lines - replace existing license header in each source file in AssetInventoryServer with the standard OpenSimulator license header - add note about Cable Beach to CONTRIBUTORS.txt - clean up AssetInventoryServer.ini.example r8434 | mikem | 2009-02-15 19:28:51 -0700 (Sun, 15 Feb 2009) | 5 lines - add restrictions and error handling to plugin loading in AssetInventoryServer - assign shorter names to each AssetInventory plugin - modify AssetInventoryServer.ini.example file so it works out of the box r8433 | mikem | 2009-02-15 19:28:43 -0700 (Sun, 15 Feb 2009) | 1 line Standardize logging messages. r8432 | mikem | 2009-02-15 19:28:34 -0700 (Sun, 15 Feb 2009) | 11 lines - removed OpenSim.Grid.AssetInventoryServer.Metadata class in favor of OpenSim.Framework.AssetMetadata and related updates in AssetInventory server - removed dependency on MySql.Data.MySqlClient - commented out the bulk of OpenSimInventoryStorage due to missing MySql.Data dependency - refactor asset creation in OpenSimAssetFrontend - commented out ForEach implementation, which also depended on MySql.Data, until it's supported by OpenSimulator backends - commented out some handlers in BrowseFrontend and ReferenceFrontend as they relied on either ForEach or the removed Metadata class r8431 | mikem | 2009-02-15 19:28:24 -0700 (Sun, 15 Feb 2009) | 1 line We need to return a zero-length byte array from the Handle() routine. r8430 | mikem | 2009-02-15 19:28:16 -0700 (Sun, 15 Feb 2009) | 3 lines - clean up using references as well as References in prebuild.xml - comment out a bunch of stuff in OpenSimInventoryFrontendPlugin.cs to get rid of warnings r8429 | mikem | 2009-02-15 19:28:08 -0700 (Sun, 15 Feb 2009) | 1 line Name extension points a little clearer. r8428 | mikem | 2009-02-15 19:28:00 -0700 (Sun, 15 Feb 2009) | 5 lines - remove dependency on OpenSim.Grid.AssetServer.Plugins.Opensim in OpenSim.Data.*.addin.xml, this is cruft left over from previous testing - fix example SQLite connection string in AssetInventoryServer.ini.example r8427 | mikem | 2009-02-15 19:27:51 -0700 (Sun, 15 Feb 2009) | 1 line Fix dependency on non-OpenSimulator version of OpenMetaverse.StructuredData.dll. r8426 | mikem | 2009-02-15 19:27:43 -0700 (Sun, 15 Feb 2009) | 3 lines - change AssetInventoryServer config from XML to INI - convert AssetInventoryServer logging to OpenSim's log4net - updated AssetInventoryServer.ini.example file r8425 | mikem | 2009-02-15 19:27:34 -0700 (Sun, 15 Feb 2009) | 4 lines - remove dependency on ExtensionLoader.dll (DBConnString.cs can go) - bring config system in line with other servers - add new plugin filter class which filters on ID - update AssetInventoryServer.ini file r8424 | mikem | 2009-02-15 19:27:25 -0700 (Sun, 15 Feb 2009) | 2 lines - asset server functionality works with OpenSim's HttpServer - start of removal of AssetInventoryServer.Metadata class r8423 | mikem | 2009-02-15 19:27:17 -0700 (Sun, 15 Feb 2009) | 2 lines AssetInventoryServer now compiles while using the standard OpenSimulator console and HttpServer. It doesn't work though. r8422 | mikem | 2009-02-15 19:27:09 -0700 (Sun, 15 Feb 2009) | 1 line Update to new generic DataPluginFactory calls. r8421 | mikem | 2009-02-15 19:27:01 -0700 (Sun, 15 Feb 2009) | 4 lines - add list for backend plugins and Dispose() all plugins on shutdown - fix some plugin names - remove most references to ExtensionLoader - remove commented out AssetInventoryServer blobs from prebuild.xml r8420 | mikem | 2009-02-15 19:26:52 -0700 (Sun, 15 Feb 2009) | 1 line Move NullAuthentication and AuthorizeAll extensions to plugins. r8419 | mikem | 2009-02-15 19:26:44 -0700 (Sun, 15 Feb 2009) | 2 lines Move BrowseFrontend and ReferenceFrontend to OpenSim/Grid/AssetInventoryServer/Plugins. r8418 | mikem | 2009-02-15 19:26:36 -0700 (Sun, 15 Feb 2009) | 2 lines Migrate OpenSimulator inventory frontend to load with Mono.Addins. Everything should compile and it seems even creating users works somehow. r8417 | mikem | 2009-02-15 19:26:27 -0700 (Sun, 15 Feb 2009) | 1 line Add OpenSimulator & Simple inventory storage plugins and Null metrics plugin. r8416 | mikem | 2009-02-15 19:26:18 -0700 (Sun, 15 Feb 2009) | 4 lines - added Simple AssetInventoryServer plugin (asset storage only) - removed OpenSimulator storage and frontend classes in Extensions dir - put OpenSimulator plugins in OpenSim.Grid.AssetInventoryServer.Plugins.OpenSim namespace r8415 | mikem | 2009-02-15 19:26:09 -0700 (Sun, 15 Feb 2009) | 2 lines - implement and load NullMetrics module in AssetInventoryServer - update AssetBase de/serialization in AssetInventoryServer r8414 | mikem | 2009-02-15 19:26:01 -0700 (Sun, 15 Feb 2009) | 2 lines - IAssetProviderPlugin was changed to IAssetDataPlugin - Use OpenSim.Data.DataPluginFactory to load data plugins r8413 | mikem | 2009-02-15 19:25:53 -0700 (Sun, 15 Feb 2009) | 1 line AssetInventoryServer plugins can't be a dependency for the OpenSim.Data.MySQL addin. r8412 | mikem | 2009-02-15 19:25:44 -0700 (Sun, 15 Feb 2009) | 1 line Converted to Linux newlines. r8411 | mikem | 2009-02-15 19:25:36 -0700 (Sun, 15 Feb 2009) | 1 line Added OpenSimulator asset frontend plugin. r8410 | mikem | 2009-02-15 19:25:25 -0700 (Sun, 15 Feb 2009) | 2 lines Rename NewAssetServer AssetInventoryServer and fully qualify with OpenSim.Grid.AssetInventoryServer. r8409 | mikem | 2009-02-15 19:25:15 -0700 (Sun, 15 Feb 2009) | 3 lines - add OpenSim.Grid.AssetServer.Plugins.OpenSim as a dependency for OpenSim.Data.*.addin.xml - remove OpenSim.Grid.NewAssetServer.exe from bin/OpenSim.Data.addin.xml - add prebuild.xml section for OpenSim.Grid.AssetServer.Plugins.OpenSim.dll r8408 | mikem | 2009-02-15 19:25:06 -0700 (Sun, 15 Feb 2009) | 2 lines - add section to prebuild.xml for building OpenSim.Grid.NewAssetServer.exe r8407 | mikem | 2009-02-15 19:24:57 -0700 (Sun, 15 Feb 2009) | 4 lines Adding - NewAssetServer code - NewAssetServer addin manifest - example AssetServer.ini file r8406 | melanie | 2009-02-15 18:58:26 -0700 (Sun, 15 Feb 2009) | 4 lines Thank you, cmickeyb, for a patch to ass two string functions to OSSL. Fixes Mantis #3173 r8405 | melanie | 2009-02-15 18:22:37 -0700 (Sun, 15 Feb 2009) | 5 lines Thank you, patnad, for a patch that adds 3 new discovery functions to OSSL. Applied with changes. Fixes Mantis #3172 r8404 | diva | 2009-02-15 13:02:13 -0700 (Sun, 15 Feb 2009) | 1 line More guards around SetHeight. r8403 | idb | 2009-02-15 09:12:58 -0700 (Sun, 15 Feb 2009) | 2 lines Fix exception when standing up. Fixes Mantis #3170 r8402 | melanie | 2009-02-15 06:54:34 -0700 (Sun, 15 Feb 2009) | 4 lines Thank you, Vytek, for a patch that streamlines the delay in the email module and changes SMTP authentication (applied with changes) Fixes Mantis #3168 r8401 | diva | 2009-02-14 23:12:11 -0700 (Sat, 14 Feb 2009) | 1 line Guarding the new call to SetHeight in MakeRoot, so that ODE doesn't complain when it's 0. r8400 | diva | 2009-02-14 22:50:07 -0700 (Sat, 14 Feb 2009) | 1 line Moving SendInitialData sort of back to where it was before, so that it doesn't interfere with the unit tests. r8399 | diva | 2009-02-14 22:00:58 -0700 (Sat, 14 Feb 2009) | 1 line This started as way to correct Mantis #3158, which I believe should be fixed now. The flying status was temporarily being ignored, which caused the avie to drop sometimes -- there was a race condition. In the process it also fixes that annoying bug in basic physics where the avie would drop half-way to the ground upon region crossings (SetAppearance was missing). Additionally, a lot of child-agent-related code has been cleaned up; namely child agents are now consistently not added to physical scenes, and they also don't have appearances. All of that happens in MakeRoot, consistently. r8398 | dahlia | 2009-02-14 21:00:00 -0700 (Sat, 14 Feb 2009) | 1 line Set sculpt map alpha to 255 prior to scaling and meshing. Addresses Mantis #3150 r8397 | melanie | 2009-02-14 18:06:03 -0700 (Sat, 14 Feb 2009) | 4 lines Thank you, DoranZemlja, for a patch that addresses some moe llGetNextEmail issues. Fixes Mantis #3145. r8396 | ckrinke | 2009-02-14 15:31:39 -0700 (Sat, 14 Feb 2009) | 6 lines Mantis 3164. Thank you kindly, TLaukkan (Tommil) for a patch that: - Added tests for manager, user and group lists. - Added test for ban list. The test had to be left as ignored as native MySQL throws exception when ban is saved. - Added utility class to support parametrized unit tests for range checking. r8395 | diva | 2009-02-14 14:26:20 -0700 (Sat, 14 Feb 2009) | 1 line Restores the HGWorldMap functionality that has been reduced since a recent refactoring of the WorldMapModule. r8394 | melanie | 2009-02-14 14:25:22 -0700 (Sat, 14 Feb 2009) | 4 lines Thank you, DoranZemlja, for a patch that implements local inter-object email delivery. Leaving Mantis #3145 open so that more code can be added. r8393 | ckrinke | 2009-02-14 13:03:16 -0700 (Sat, 14 Feb 2009) | 2 lines Remove the "?" that I inadvertently got into the first line of EstateRegionLink.cs r8392 | ckrinke | 2009-02-14 12:47:02 -0700 (Sat, 14 Feb 2009) | 16 lines Thank you kindly, TLaukkan (Tommil) for a patch that: - Created value object for EstateRegionLink for storing the estate region relationship. - Refactored slightly NHibernateManager and NHibernateXXXXData implementations for accesing nhibernate generated ID on insert. - Changed NHibernateManager.Save method name to Insert as it does Insert. - Changed NHibernateManager.Save return value object as ID can be both UUID and uint currently. - Changed NHibernateManager.Load method Id parameter to object as it can be both UUID and uint. - Created NHibernateEstateData implementation. This is the actual estate storage. - Created NHibernate mapping files for both EstateSettings and EstateRegionLink - Created MigrationSyntaxDifferences.txt files to write notes about differences in migration scripts between different databases. - Created estate storage migration scripts for all four databases. - Created estate unit test classes for all four databases. - Updated one missing field to BasicEstateTest.cs - Tested NHibernate unit tests with NUnit GUI. Asset databases fail but that is not related to this patch. - Tested build with both Visual Studio and nant. - Executed build tests with nant succesfully. r8391 | idb | 2009-02-14 11:09:08 -0700 (Sat, 14 Feb 2009) | 2 lines Add an override for the % operator. Fixes Mantis #3157 r8390 | diva | 2009-02-14 10:17:48 -0700 (Sat, 14 Feb 2009) | 1 line This hopefully fixes a long-standing annoying behavior related to neighbour regions going up & down while avies are logged in (mantis #2701, perhaps? maybe not). This is the bug mentioned 2 commits ago. If this proves to work well in OSGrid, there's a lot of old code cleaning to do. r8389 | diva | 2009-02-14 09:56:37 -0700 (Sat, 14 Feb 2009) | 1 line Making initialized an instance variable again. My last commit wrote over justin's r8383, for some strange reason. r8388 | diva | 2009-02-14 09:37:55 -0700 (Sat, 14 Feb 2009) | 1 line Moved RegionUp to REST/LocalComms. The original functionality has been entirely maintained, although it will have to be revisited soon, because it's buggy. r8387 | melanie | 2009-02-14 05:24:26 -0700 (Sat, 14 Feb 2009) | 4 lines Thank you, patnad, for a patch that removes the "Subdivision of" text when dividing land. Fixes Mantis #3154 r8386 | idb | 2009-02-13 14:56:50 -0700 (Fri, 13 Feb 2009) | 1 line Correct llGetNumberOfPrims to include sitting avatars in the count. r8385 | justincc | 2009-02-13 13:51:22 -0700 (Fri, 13 Feb 2009) | 2 lines - minor: remove mono compiler warnings r8384 | melanie | 2009-02-13 13:49:23 -0700 (Fri, 13 Feb 2009) | 2 lines Guard the values used to set the cursor position in the real time console r8383 | justincc | 2009-02-13 13:12:11 -0700 (Fri, 13 Feb 2009) | 3 lines - Change static field "initialized" in RestInterregionComms to an instance field - This was the cause of teleport tests interfering with each other r8382 | justincc | 2009-02-13 12:03:18 -0700 (Fri, 13 Feb 2009) | 2 lines - refactor: move alert commands from Scene to DialogModule r8381 | justincc | 2009-02-13 11:02:24 -0700 (Fri, 13 Feb 2009) | 2 lines - Quieten down log messages from the Friends module r8380 | justincc | 2009-02-13 10:41:48 -0700 (Fri, 13 Feb 2009) | 2 lines - add file missing from last commit r8379 | justincc | 2009-02-13 10:40:52 -0700 (Fri, 13 Feb 2009) | 2 lines - refactor: Move LazySaveGeneratedMapTile from scene to WorldMapModule r8378 | justincc | 2009-02-13 10:15:49 -0700 (Fri, 13 Feb 2009) | 2 lines - Remove old Scene.CreateTerrainTexture code that is now handled by the world map module r8377 | justincc | 2009-02-13 10:02:26 -0700 (Fri, 13 Feb 2009) | 4 lines - Apply - If the texture does not contain any discard levels the last image packet was not sent - Thanks Snowdrop r8376 | justincc | 2009-02-13 09:43:20 -0700 (Fri, 13 Feb 2009) | 2 lines - refactor: Move export map function to world map module from scene r8375 | drscofield | 2009-02-13 09:11:52 -0700 (Fri, 13 Feb 2009) | 2 lines fixing crash due to make-child and make-root stepping on each other's toes r8374 | diva | 2009-02-12 21:08:28 -0700 (Thu, 12 Feb 2009) | 2 lines Commented the tests for region crossings for now -- they need to be substantially changed because of the callback from region B triggered by the client. r8373 | diva | 2009-02-12 20:45:08 -0700 (Thu, 12 Feb 2009) | 1 line And finally... region crossings entirely over RESTComms/LocalComms. No more remoting for agent movements. WARNING: This breaks region crossing compatibility with previous versions. r8372 | chi11ken | 2009-02-12 19:52:08 -0700 (Thu, 12 Feb 2009) | 1 line Fix some compiler warnings. Minor formatting cleanup. r8371 | chi11ken | 2009-02-12 19:06:28 -0700 (Thu, 12 Feb 2009) | 1 line Add copyright headers. Minor formatting cleanup. Fix some compiler warnings. Fix some m_log declarations. r8370 | chi11ken | 2009-02-12 18:57:06 -0700 (Thu, 12 Feb 2009) | 1 line Update svn properties. r8369 | diva | 2009-02-12 17:49:58 -0700 (Thu, 12 Feb 2009) | 1 line Bug fix in prim crossing: making it clear when the local object needs to be cloned (regions on the same instance) and when it doesn't (regions on different instances). r8368 | mikem | 2009-02-12 17:02:26 -0700 (Thu, 12 Feb 2009) | 1 line Remove extra ID field from asset DB mapping. Mantis #3122, fixes Mantis #3080. r8367 | diva | 2009-02-12 16:38:41 -0700 (Thu, 12 Feb 2009) | 2 lines Fixes a bug in the ScenePresence test itself. r8366 | diva | 2009-02-12 16:23:44 -0700 (Thu, 12 Feb 2009) | 3 lines Makes region crossings asynchronous. Moved the bulk of the original code out of ScenePresence and into SceneCommunicationService, where it should be (next to RequestTeleportToLocation). No changes in the crossing mechanism itself, yet. But this change opens the way to doing crossings as slowly as it needs to be, outside the simulator Update loop. Note: weirdnesses may occur! r8365 | justincc | 2009-02-12 12:54:19 -0700 (Thu, 12 Feb 2009) | 3 lines - Make it possible to load and save inventory archives while a user is not logged in on standalone mode but not on grid mode - No user functionality yet r8364 | sdague | 2009-02-12 11:59:45 -0700 (Thu, 12 Feb 2009) | 2 lines large scale fix for svn props after "the great refactor" r8363 | justincc | 2009-02-12 11:54:48 -0700 (Thu, 12 Feb 2009) | 2 lines - Lock remaining m_rpcHandlers use since these accesses are not guaranteed to be thread safe r8362 | diva | 2009-02-12 11:43:49 -0700 (Thu, 12 Feb 2009) | 1 line Commented a couple of not very useful log messages that are cluttering the log in sims that have objects belonging to foreign users. r8361 | justincc | 2009-02-12 11:37:27 -0700 (Thu, 12 Feb 2009) | 3 lines - Remove a change which shouldn't have made it into the last commit - Rogue change affected grid only r8360 | justincc | 2009-02-12 11:31:56 -0700 (Thu, 12 Feb 2009) | 2 lines - Add missing OpenSIm.Framework.Communications ref for Wdinwos builds r8359 | sdague | 2009-02-12 11:23:05 -0700 (Thu, 12 Feb 2009) | 3 lines - Forgot to fix bamboo.build for the new ScriptEngine Tests From: Arthur Rodrigo S Valadares <arthursv@linux.vnet.ibm.com> r8358 | justincc | 2009-02-12 11:01:29 -0700 (Thu, 12 Feb 2009) | 4 lines - Apply - Adds a GetXmlRPCHandler() to the BaseHttpServer - Thanks mpallari r8357 | justincc | 2009-02-12 10:41:09 -0700 (Thu, 12 Feb 2009) | 2 lines - move userinfo for inventory archiving up to module class so that it only has to be done once r8356 | justincc | 2009-02-12 10:17:04 -0700 (Thu, 12 Feb 2009) | 2 lines - Remove some pointless CachedUserInfo != null tests since these are already made in earlier code r8355 | justincc | 2009-02-12 10:07:44 -0700 (Thu, 12 Feb 2009) | 3 lines - refactor: Move RequestInventoryForUser() from service to CachedUserInfo - This simplifies callers in most cases - CachedUserInfo is already handling the rest of the fetch inventory work anyway r8354 | sdague | 2009-02-12 10:02:54 -0700 (Thu, 12 Feb 2009) | 4 lines - Added XEngine tests and gathered other ScriptEngine Tests together From: Arthur Rodrigo S Valadares <arthursv@linux.vnet.ibm.com> r8353 | lbsa71 | 2009-02-12 03:30:53 -0700 (Thu, 12 Feb 2009) | 1 line - Some more CCC r8352 | lbsa71 | 2009-02-12 03:21:21 -0700 (Thu, 12 Feb 2009) | 2 lines - Renamed RegionProfileService to RegionProfileServiceProxy to better reflect actual use. - Added IRegionProfileService r8351 | lbsa71 | 2009-02-12 03:16:11 -0700 (Thu, 12 Feb 2009) | 1 line - Turned RegionProfileService non-static r8350 | lbsa71 | 2009-02-12 03:08:47 -0700 (Thu, 12 Feb 2009) | 1 line - Applied some CCC (Code Convention Conformance) r8349 | lbsa71 | 2009-02-12 03:05:15 -0700 (Thu, 12 Feb 2009) | 1 line - Added RegionProfileService and moved RequestSimData to it. r8348 | lbsa71 | 2009-02-12 02:53:12 -0700 (Thu, 12 Feb 2009) | 1 line - optimized usings. r8347 | dahlia | 2009-02-12 00:58:10 -0700 (Thu, 12 Feb 2009) | 1 line Thanks Kitto Flora for a patch that adds automatic min fly height to ODE - Mantis #3134 r8346 | diva | 2009-02-11 21:26:13 -0700 (Wed, 11 Feb 2009) | 1 line Sending this to Justin, so that he can see what's wrong with the StandaloneTeleportTests when we add RESTInterregionComms module to the ScenePresenceTests. r8345 | diva | 2009-02-11 19:03:36 -0700 (Wed, 11 Feb 2009) | 1 line Makes ban of HG users exactly the same as ban of local users, that is upon AddClient and not before. r8344 | diva | 2009-02-11 18:09:51 -0700 (Wed, 11 Feb 2009) | 1 line Fixes mantis #3121. r8343 | diva | 2009-02-11 14:07:41 -0700 (Wed, 11 Feb 2009) | 1 line Enforce estate bans on Teleports. r8342 | justincc | 2009-02-11 13:36:17 -0700 (Wed, 11 Feb 2009) | 2 lines - minor: remove some mono compiler warnings r8341 | justincc | 2009-02-11 13:24:41 -0700 (Wed, 11 Feb 2009) | 3 lines - When an inventory archive is loaded, immediately update the client's inventory if that client is online at that region server - Not useable yet r8340 | justincc | 2009-02-11 12:57:45 -0700 (Wed, 11 Feb 2009) | 2 lines - Change SendBulkUpdateInventory from two methods to one which accepts an InventoryNode r8339 | justincc | 2009-02-11 12:29:59 -0700 (Wed, 11 Feb 2009) | 2 lines - Establish a common InventoryNodeBase class from InventoryItemBase and InventoryFolderBase r8338 | justincc | 2009-02-11 11:46:51 -0700 (Wed, 11 Feb 2009) | 3 lines - Refactor inventory archive code to allow direct invocation in order to support future unit tests - Add a file I missed out from the last commit (the build was probably fine without it) r8337 | justincc | 2009-02-11 10:34:12 -0700 (Wed, 11 Feb 2009) | 3 lines - More inventory archive invocation to a proper region module - Not ready for use yet r8336 | ckrinke | 2009-02-11 09:01:56 -0700 (Wed, 11 Feb 2009) | 4 lines Thank you kindly, FrankNichols for a patch that: The following patch fixes [^] by changing call from setRot to llSetRot, the later handles child prim being rotated relative to root prim in linked set. r8335 | drscofield | 2009-02-11 07:35:07 -0700 (Wed, 11 Feb 2009) | 13 lines From: Christopher Yeoh <yeohc@au1.ibm.com> This changeset add the RegionReady module code. The module sends a message on a configurable channel when an oar file has finished loading or if the script engine has emptied its queue for the first time (eg server startup). Config is something like this: [RegionReady] enabled = true channel_notify = -800 The module also knows if there was an error with startup. r8334 | melanie | 2009-02-10 16:40:22 -0700 (Tue, 10 Feb 2009) | 2 lines If an instance contains only one region, select it in the console by default r8333 | sdague | 2009-02-10 16:31:49 -0700 (Tue, 10 Feb 2009) | 2 lines fix a typo where the High Southwest height was getting set to the Low Southwest height r8332 | melanie | 2009-02-10 16:15:48 -0700 (Tue, 10 Feb 2009) | 5 lines Add proper handling for shared vs. unshared modules to the command interface. Shared modules will now only get added once, so the command handler is called once per module, not once per scene. Removal of scenes has no adverse effects. Nonshared modules will be called for each scene. r8331 | diva | 2009-02-10 15:54:05 -0700 (Tue, 10 Feb 2009) | 1 line Fixes the problem of attachment offset after crossings/TPs. Hopefully it fixes mantis #3126, as well as other random displacements. The problem was that the new object at the receiving region was being marked as attachment before AttachObject was called. That made its AbsolutePosition be the position of the avie, and that was what was being given to AttachObject. r8330 | justincc | 2009-02-10 12:33:09 -0700 (Tue, 10 Feb 2009) | 3 lines - Remove load and save inventory commands from the console since these are actually experimental and the storage format may soon undergo incompatible changes - If you were using these please uncomment the code before rebuilding, but be aware that old files may become incompatible soon r8329 | justincc | 2009-02-10 12:00:10 -0700 (Tue, 10 Feb 2009) | 2 lines - minor: Remove SOG XML2 serialization log messages for now r8328 | justincc | 2009-02-10 11:50:25 -0700 (Tue, 10 Feb 2009) | 2 lines - Stop OpenSimulator crashing if an exception from a command makes it right up to the top of the stack r8327 | justincc | 2009-02-10 11:43:36 -0700 (Tue, 10 Feb 2009) | 3 lines - Implement merging of oars in code - Not fully tested yet and not yet available as an option from the user console r8326 | justincc | 2009-02-10 09:56:35 -0700 (Tue, 10 Feb 2009) | 2 lines - extend load oar test to check that an object was actually loaded r8325 | lbsa71 | 2009-02-10 08:59:12 -0700 (Tue, 10 Feb 2009) | 1 line - Ignored some gens r8324 | justincc | 2009-02-10 08:46:38 -0700 (Tue, 10 Feb 2009) | 2 lines - Fix build break, parentheses in the wrong place r8323 | justincc | 2009-02-10 08:35:41 -0700 (Tue, 10 Feb 2009) | 2 lines - Overwrite the old saved OpenSim.ini file saved in response to a crash if one already exists r8322 | melanie | 2009-02-10 07:39:04 -0700 (Tue, 10 Feb 2009) | 4 lines Change the command parser and resolver to be able to disambiguate commands that are a prefix of another command. Fixes "terrain load" Fixes Mantis #3123 r8321 | drscofield | 2009-02-10 07:38:57 -0700 (Tue, 10 Feb 2009) | 2 lines dropping obsolete XIRC section from OpenSim.ini.example r8320 | drscofield | 2009-02-10 07:32:35 -0700 (Tue, 10 Feb 2009) | 2 lines fix region_limit example in OpenSim.ini.example r8319 | drscofield | 2009-02-10 07:32:23 -0700 (Tue, 10 Feb 2009) | 2 lines fixing ConciergeModule to follow coding conventions r8318 | justincc | 2009-02-10 07:03:51 -0700 (Tue, 10 Feb 2009) | 2 lines - Reinstate texture tests, eliminating duplicate OpenSim.Region.CoreModules.Tests r8317 | sdague | 2009-02-10 06:36:42 -0700 (Tue, 10 Feb 2009) | 10 lines From Rob Smart <SMARTROB@uk.ibm.com> In SL if llAbs() is called with the minimum integer value of -2147483648 it will return that value untouched without error. this patch replicates the SL functionality. OpenSimulator currently throws an overflow exception: number too small under mono or a "System.OverflowException: Negating the minimum value of a twos complement number is invalid. " under .NET r8316 | drscofield | 2009-02-10 06:10:57 -0700 (Tue, 10 Feb 2009) | 79 lines this is step 2 of 2 of the OpenSim.Region.Environment refactor. NOTHING has been deleted or moved off to forge at this point. what has happened is that OpenSim.Region.Environment.Modules has been split in two: - OpenSim.Region.CoreModules: all those modules that are either directly or indirectly referenced from other OpenSimulator packages, or that provide functionality that the OpenSimulator developer community considers core functionality: CoreModules/Agent/AssetTransaction CoreModules/Agent/Capabilities CoreModules/Agent/TextureDownload CoreModules/Agent/TextureSender CoreModules/Agent/TextureSender/Tests CoreModules/Agent/Xfer CoreModules/Avatar/AvatarFactory CoreModules/Avatar/Chat/ChatModule CoreModules/Avatar/Combat CoreModules/Avatar/Currency/SampleMoney CoreModules/Avatar/Dialog CoreModules/Avatar/Friends CoreModules/Avatar/Gestures CoreModules/Avatar/Groups CoreModules/Avatar/InstantMessage CoreModules/Avatar/Inventory CoreModules/Avatar/Inventory/Archiver CoreModules/Avatar/Inventory/Transfer CoreModules/Avatar/Lure CoreModules/Avatar/ObjectCaps CoreModules/Avatar/Profiles CoreModules/Communications/Local CoreModules/Communications/REST CoreModules/Framework/EventQueue CoreModules/Framework/InterfaceCommander CoreModules/Hypergrid CoreModules/InterGrid CoreModules/Scripting/DynamicTexture CoreModules/Scripting/EMailModules CoreModules/Scripting/HttpRequest CoreModules/Scripting/LoadImageURL CoreModules/Scripting/VectorRender CoreModules/Scripting/WorldComm CoreModules/Scripting/XMLRPC CoreModules/World/Archiver CoreModules/World/Archiver/Tests CoreModules/World/Estate CoreModules/World/Land CoreModules/World/Permissions CoreModules/World/Serialiser CoreModules/World/Sound CoreModules/World/Sun CoreModules/World/Terrain CoreModules/World/Terrain/DefaultEffects CoreModules/World/Terrain/DefaultEffects/bin CoreModules/World/Terrain/DefaultEffects/bin/Debug CoreModules/World/Terrain/Effects CoreModules/World/Terrain/FileLoaders CoreModules/World/Terrain/FloodBrushes CoreModules/World/Terrain/PaintBrushes CoreModules/World/Terrain/Tests CoreModules/World/Vegetation CoreModules/World/Wind CoreModules/World/WorldMap - OpenSim.Region.OptionalModules: all those modules that are not core modules: OptionalModules/Avatar/Chat/IRC-stuff OptionalModules/Avatar/Concierge OptionalModules/Avatar/Voice/AsterixVoice OptionalModules/Avatar/Voice/SIPVoice OptionalModules/ContentManagementSystem OptionalModules/Grid/Interregion OptionalModules/Python OptionalModules/SvnSerialiser OptionalModules/World/NPC OptionalModules/World/TreePopulator r8315 | melanie | 2009-02-10 05:25:29 -0700 (Tue, 10 Feb 2009) | 3 lines Stopgap measure: To use gridlaunch, or GUI, start opensim with OpenSim.exe -gui=true r8314 | diva | 2009-02-09 17:15:30 -0700 (Mon, 09 Feb 2009) | 1 line Commented out a problematic test that needs more careful revision. r8313 | diva | 2009-02-09 16:12:49 -0700 (Mon, 09 Feb 2009) | 1 line Fixes a failed unit test on ScenePresences tests. That test unit needs some fixing too. r8312 | chi11ken | 2009-02-09 15:49:05 -0700 (Mon, 09 Feb 2009) | 1 line Update svn properties, minor formatting cleanup. r8311 | diva | 2009-02-09 15:27:27 -0700 (Mon, 09 Feb 2009) | 5 lines Moved prim crossing out of OGS1 and into RESTComms and LocalInterregionComms. This breaks interregion comms with older versions in what concerns prim crossing. In the process of moving the comms, a few things seem to be working better, namely this may address mantis #3011, mantis #1698. Hopefully, this doesn't break anything else. But I'm still seeing weirdnesses with attchments jumping out of place after a cross/TP. The two most notable changes in the crossing process were: - Object gets passed in only one message, not two as done before. - Local object crossings do not get serialized, as done before. r8310 | ckrinke | 2009-02-09 15:07:27 -0700 (Mon, 09 Feb 2009) | 4 lines Thank you kindly, TLaukkan (Timmil) for a patch that: - Fixed and added athursv's BasicEstateTest - Added MySQLEstateTest - Added SQLiteEstateTest r8309 | sdague | 2009-02-09 15:04:43 -0700 (Mon, 09 Feb 2009) | 1 line oops, missing file from last patch set r8308 | sdague | 2009-02-09 14:47:55 -0700 (Mon, 09 Feb 2009) | 9 lines From Alan Webb <awebb@linux.vnet.ibm.com> These changes replace all direct references to the AssetCache with IAssetCache. There is no change to functionality. Everything works as before. This is laying the groundwork for making it possible to register alternative asset caching mechanisms without disrupting other parts of OpenSimulator or their dependencies upon AssetCache functionality. r8307 | ckrinke | 2009-02-09 14:44:39 -0700 (Mon, 09 Feb 2009) | 3 lines Thank you kindly, TLaukkan (Tommil) for a patch that: - Updated migration scripts and hbm.xml so that nhibernate tests work. r8306 | justincc | 2009-02-09 13:52:04 -0700 (Mon, 09 Feb 2009) | 2 lines - Add the ability to type help <command> for more detailed help about a specific command if any is available r8305 | sdague | 2009-02-09 13:06:06 -0700 (Mon, 09 Feb 2009) | 1 line a last set of files that seem to have embedded ^M in them r8304 | sdague | 2009-02-09 12:59:08 -0700 (Mon, 09 Feb 2009) | 2 lines add a script for fixing line endings (at least from linux) r8303 | sdague | 2009-02-09 12:58:37 -0700 (Mon, 09 Feb 2009) | 2 lines Add a bunch of missing svn line ending properties r8302 | justincc | 2009-02-09 11:11:09 -0700 (Mon, 09 Feb 2009) | 2 lines - Restore show information for the OpenSimulator region server (version, info, threads, etc.) r8301 | justincc | 2009-02-09 10:31:03 -0700 (Mon, 09 Feb 2009) | 4 lines - Apply - Changes the NHibernate asset mapping and expose FullID on AssetBase for NHibernate - mikem has seen this patch :) r8300 | justincc | 2009-02-09 10:02:10 -0700 (Mon, 09 Feb 2009) | 4 lines - Apply - Add NHibernate PostgreSQL database tests. - Tests not yet being run as the PostgreSQL module is not yet fully functional r8299 | melanie | 2009-02-09 09:34:21 -0700 (Mon, 09 Feb 2009) | 3 lines Reinstate the KickUserCommand handler, which was commented out by another dev whiel I was putting the reference to it back in r8298 | melanie | 2009-02-09 09:21:13 -0700 (Mon, 09 Feb 2009) | 3 lines Correct a delegate in OpenSim.cs Fixes Mantis #3117 r8297 | justincc | 2009-02-09 08:57:53 -0700 (Mon, 09 Feb 2009) | 2 lines - Reinstate tests that are now in CoreModules r8296 | sdague | 2009-02-09 08:40:31 -0700 (Mon, 09 Feb 2009) | 3 lines - Fixing refactoring +1 (Fixes Mantis #3113) From: Arthur Rodrigo S Valadares <arthursv@linux.vnet.ibm.com> r8295 | drscofield | 2009-02-09 03:40:12 -0700 (Mon, 09 Feb 2009) | 2 lines fixing warnings. r8294 | drscofield | 2009-02-09 03:04:54 -0700 (Mon, 09 Feb 2009) | 6 lines This patch implements llLookAt to the extent that an object will correctly rotate and point to the target, however it doesnt yet use the damping or strength parameters. From: Robert Smart <smartrob@uk.ibm.com> r8293 | drscofield | 2009-02-09 02:45:09 -0700 (Mon, 09 Feb 2009) | 2 lines fixing refactoring artefact. (fixes mantis #3113) r8292 | drscofield | 2009-02-09 02:16:15 -0700 (Mon, 09 Feb 2009) | 3 lines starting phase 2 of the OpenSim.Region.Environment commit: relocating OpenSim.Region.Environment.Modules.Agent en bloc to OpenSim.Region.CoreModules r8291 | drscofield | 2009-02-09 02:02:49 -0700 (Mon, 09 Feb 2009) | 3 lines adding bin/ScriptEngines/*/*.{dll,state}, bin/j2kDecodeCache, bin/UserAssets to .gitignore r8290 | mikem | 2009-02-08 17:59:02 -0700 (Sun, 08 Feb 2009) | 3 lines Thanks Tommi Laukkanen for a patch that allows the CSCodeGeneratorTest.TestStringsWithEscapedQuotesAndComments unit test to pass on Windows. Fixes Mantis #3104. r8289 | mikem | 2009-02-08 17:34:01 -0700 (Sun, 08 Feb 2009) | 1 line Remove unused OpenSim/Data/{DataStore,InventoryData}Base.cs. r8288 | mikem | 2009-02-08 17:33:44 -0700 (Sun, 08 Feb 2009) | 2 lines The DataPluginFactory is now a set of generic methods instead of multiple duplicates of the same code. r8287 | teravus | 2009-02-08 11:05:12 -0700 (Sun, 08 Feb 2009) | 2 lines - Once again, fixing linked prim collisions by putting AbsolutePosition = AbsolutePosition; back in the linking routine. Why was it removed? It's critical to the physics scene. - Fixes mantis #3108 r8286 | teravus | 2009-02-08 10:41:15 -0700 (Sun, 08 Feb 2009) | 2 lines - Some minor cleanup - sealed OdeScene r8285 | teravus | 2009-02-08 10:25:02 -0700 (Sun, 08 Feb 2009) | 1 line Reverts patch from tuco/mikkopa/sempuki mantis #3072 r8284 | dahlia | 2009-02-08 03:50:22 -0700 (Sun, 08 Feb 2009) | 1 line send group name in binary bucket in chatterbox invitation eventqueue message r8283 | teravus | 2009-02-07 20:02:43 -0700 (Sat, 07 Feb 2009) | 3 lines - Limit the total number of joints created per frame to the maximum possible without causing a stack collision. - This fixes crashing on large sets of physical prims because of stack collisions (assuming you follow the directions on linux for starting ode with ulimit). After the maximum joints are created, objects will start to fall through the ground and be disabled. Not the best solution, but it's better then a crash caused by a stack collision with the process exceeding the maximum available memory/recursions per thread. - Make a clean region, make a stack of 5000 prim, 20 layers high. Make them physical, *SLOW*, but no crash. r8282 | teravus | 2009-02-07 18:05:09 -0700 (Sat, 07 Feb 2009) | 3 lines - Fixes colliding with the terrain lower then 0 and higher then 256m - The actual AABB of the heightfield on the Z is now determined by the minimum and maximum heightfield value in the terrain array (assuming it's a reasonable number). This might optimize collisions in simulators that have a small difference between minimum and maximum heightfield values. r8281 | diva | 2009-02-07 17:54:56 -0700 (Sat, 07 Feb 2009) | 3 lines - Removed the duplicate AddCapsHandler that existed in ScenePresence.MakeRootAgent; CAPs are already in place when this runs. - Moved MoveAgentIntoRegion further down in the CompleteMovement method. - changed a couple of methods from protected to public in SceneCommunicationService r8280 | diva | 2009-02-07 16:51:30 -0700 (Sat, 07 Feb 2009) | 1 line Bug fix related to the filling out the remoting port in RegionInfo. It still must be there because of attachments. r8279 | lbsa71 | 2009-02-07 13:16:58 -0700 (Sat, 07 Feb 2009) | 1 line - Refactored UserLoginService.CustomiseResponse to be (almost) text-wide identical to LocalLoginService.CustomiseResponse in order to be able to pull them up. r8278 | ckrinke | 2009-02-07 11:11:04 -0700 (Sat, 07 Feb 2009) | 4 lines Thank you kindly, TLaukkan (Tommil) for a patch that: Created nunit test for LSL API and example test for llAngleBetween which was marked untested in wiki. Run new test succesfully with NUnitGUI and nant build. r8277 | ckrinke | 2009-02-07 10:56:36 -0700 (Sat, 07 Feb 2009) | 1 line Add TLaukkan (Tommil) to CONTRIBUTORS.txt for OpenSimulator r8276 | diva | 2009-02-07 09:10:23 -0700 (Sat, 07 Feb 2009) | 1 line Adds support for HG linking to specific regions within an instance. The format is Host:Port:Region. Refactored the linking code from MapSearchModule to HGHyperlink, so that it can be used both by the MapSearchModule and the Console command. r8275 | melanie | 2009-02-07 08:51:00 -0700 (Sat, 07 Feb 2009) | 2 lines Fix a .NET issue where changing a locked reference would cause a crash r8274 | sdague | 2009-02-07 06:16:27 -0700 (Sat, 07 Feb 2009) | 3 lines the parameters for llAtan2 were inverted before passing to Math. Thanks to Rob Smart for pointing this out. r8272 | melanie | 2009-02-07 05:25:39 -0700 (Sat, 07 Feb 2009) | 6 lines Replace the console for all OpenSimulator apps with a new console featuring command line editing, context sensitive help (press ? at any time), command line history, a new plugin command system and new appender features thet let you type while the console is scrolling. Seamlessly integrates the ICommander interfaces. r8271 | lbsa71 | 2009-02-07 02:45:56 -0700 (Sat, 07 Feb 2009) | 2 lines Thank you dslake for a patch that: - fixes mantis #3092: User Server sets agent starting position to passed x/y/x instead of x/y/z r8270 | sdague | 2009-02-06 20:21:34 -0700 (Fri, 06 Feb 2009) | 1 line oops spaces where tabs should be in the makefile r8269 | sdague | 2009-02-06 20:18:58 -0700 (Fri, 06 Feb 2009) | 3 lines create a "make release" target which does the release instead of debug build. Just a convenience for people on the Linux side of the house r8268 | justincc | 2009-02-06 14:56:50 -0700 (Fri, 06 Feb 2009) | 5 lines - minor: Apply second patch from - This adds more explanation for the new proxy settings in OpenSim.ini.example - Also does some formatting correction - I did some additional reformatting on top of that r8267 | lbsa71 | 2009-02-06 14:50:05 -0700 (Fri, 06 Feb 2009) | 1 line - Removed superflous (and circular) ref to OpenSim.Region.Framework r8266 | drscofield | 2009-02-06 14:47:44 -0700 (Fri, 06 Feb 2009) | 2 lines killing OpenSim.Data.Base ref once more r8265 | drscofield | 2009-02-06 14:41:52 -0700 (Fri, 06 Feb 2009) | 2 lines dropping ref to OpenSim.Grid (another survivor thx to the refactoring by-pass) r8264 | justincc | 2009-02-06 14:37:10 -0700 (Fri, 06 Feb 2009) | 3 lines - reinstate OpenSim/Region/Framework/Scenes/Tests - should bring us back up to 240 tests r8263 | drscofield | 2009-02-06 14:28:30 -0700 (Fri, 06 Feb 2009) | 3 lines removing bad references to OpenSim.Data stuff that got deleted earlier this week but seems to have survive on the refactoring by-pass. r8262 | justincc | 2009-02-06 13:05:12 -0700 (Fri, 06 Feb 2009) | 2 lines - Add missing Region.Framework reference in ApplicationPlugins.LoadRegions r8261 | justincc | 2009-02-06 12:12:04 -0700 (Fri, 06 Feb 2009) | 4 lines - Make the module loader display which module failed if there was a loading problem - Such failures are now fatal to grab the user's attention. - However, they could be made non-fatal (just with a loud error warning) if this proves too inconvenient r8260 | justincc | 2009-02-06 11:18:01 -0700 (Fri, 06 Feb 2009) | 4 lines - Implement help <command> from the region console - So at the moment once can type 'help terrain fill' as well as 'terrain fill help' - Current implementation is a transient hack that should be tidied up soon r8259 | drscofield | 2009-02-06 09:55:34 -0700 (Fri, 06 Feb 2009) | 17 lines This changeset is the step 1 of 2 in refactoring OpenSim.Region.Environment into a "framework" part and a modules only part. This first changeset refactors OpenSim.Region.Environment.Scenes, OpenSim.Region.Environment.Interfaces, and OpenSim.Region.Interfaces into OpenSim.Region.Framework.{Interfaces,Scenes} leaving only region modules in OpenSim.Region.Environment. The next step will be to move region modules up from OpenSim.Region.Environment.Modules to OpenSim.Region.CoreModules and then sort out which modules are really core modules and which should move out to forge. I've been very careful to NOT BREAK anything. i hope i've succeeded. as this is the work of a whole week i hope i managed to keep track with the applied patches of the last week --- could any of you that did check in stuff have a look at whether it survived? thx! r8258 | lbsa71 | 2009-02-06 08:01:20 -0700 (Fri, 06 Feb 2009) | 1 line - removed superfluous constants class r8257 | dahlia | 2009-02-06 02:58:23 -0700 (Fri, 06 Feb 2009) | 1 line more eventqueue endian madness r8256 | dahlia | 2009-02-06 01:53:30 -0700 (Fri, 06 Feb 2009) | 1 line move RegionDenyAgeUnverified parameter to AgeVerificationBlock in parcel properties event queue message. Addresses Mantis#3090 r8255 | dahlia | 2009-02-05 18:25:59 -0700 (Thu, 05 Feb 2009) | 1 line Thanks cmickyb for a patch (Mantis#3089) that adds support for proxy in http requests r8254 | justincc | 2009-02-05 15:03:23 -0700 (Thu, 05 Feb 2009) | 2 lines - minor: remove mono compiler warning r8253 | justincc | 2009-02-05 14:54:22 -0700 (Thu, 05 Feb 2009) | 4 lines - Apply - Clamps negative values to zero when a terrain is exported in LLRAW format, since LLRAW doesn't support negative values. - Thanks jonc! r8252 | justincc | 2009-02-05 14:46:57 -0700 (Thu, 05 Feb 2009) | 2 lines - Remove CommanderTestModule as there are several normal modules which effectively fulfil this function r8251 | justincc | 2009-02-05 14:46:04 -0700 (Thu, 05 Feb 2009) | 3 lines - cheap hack to make module help information more accurately reflect what command text needs to be typed - Should disappear soon r8250 | justincc | 2009-02-05 14:35:59 -0700 (Thu, 05 Feb 2009) | 4 lines - Make existing module commanders register as help topics - Typing help will now give a list of these topics at the top (as well as the rest of the current help stuff) - Typing help <topic> will give information about commands specific to that topic r8249 | justincc | 2009-02-05 12:54:22 -0700 (Thu, 05 Feb 2009) | 2 lines - Use the commander name to register module commanders instead of providing the information twice r8248 | justincc | 2009-02-05 12:34:23 -0700 (Thu, 05 Feb 2009) | 2 lines - refactor: Split out module Command class into a separate file r8247 | justincc | 2009-02-05 11:47:39 -0700 (Thu, 05 Feb 2009) | 3 lines - Remove unused region info list from OpenSimBase. - The same information is available via SceneManager r8246 | justincc | 2009-02-05 11:36:53 -0700 (Thu, 05 Feb 2009) | 2 lines - refactor: Move module handling code up into SceneBase from Scene, reducing the large number of different things that Scene does r8245 | sdague | 2009-02-05 09:12:51 -0700 (Thu, 05 Feb 2009) | 7 lines From: Christopher Yeoh <yeohc@au1.ibm.com> This patch fixes the problem where if an object containing a script is deleted at the same time as an object containing the same script is rezzed, it can result in the assembly file being deleted after the second object script initialisation has found it but not started using it yet, resulting in the script not starting up. r8244 | teravus | 2009-02-05 06:43:36 -0700 (Thu, 05 Feb 2009) | 1 line - Add the second version of the experimental ObjectAdd Cap. It will handle both versions currently. r8243 | teravus | 2009-02-04 23:44:46 -0700 (Wed, 04 Feb 2009) | 3 lines - Committing an experimental ObjectAdd module. Intended to work with . - Catherine contacted us and gave us a LLSD dump to study for implementation. - Still needs to be tested. May not produce expected results. r8242 | justincc | 2009-02-04 13:37:20 -0700 (Wed, 04 Feb 2009) | 3 lines - minor: remove deprecated and unused terrain method from SceneManager - other minor tidy up r8241 | justincc | 2009-02-04 11:56:12 -0700 (Wed, 04 Feb 2009) | 5 lines - Introduce a new "default" option for asset_database in the [STORAGE] section - This option makes OpenSimulator use the usual db based asset service in standalone, and the grid based one in grid mode - The other options can (local, grid, etc) can still be used explicitly as before - Also change OpenSim.ini.example and the surrounding explanative text r8240 | chi11ken | 2009-02-04 11:49:06 -0700 (Wed, 04 Feb 2009) | 1 line Update svn properties. r8239 | diva | 2009-02-04 09:31:48 -0700 (Wed, 04 Feb 2009) | 1 line Addresses a race condition that happened between the viewer and the departing region wrt the creation of the child agent in the receiving region, and that resulted in failed TPs. r8238 | mw | 2009-02-04 09:00:39 -0700 (Wed, 04 Feb 2009) | 5 lines Added a ForceSceneObjectBackup method to Scene, which as it says forces a database backup/update on the SceneObjectGroup. This is now called at the beginning of DeRezObject, so we know the database is upto date before we attempt to delete a object. Fix Mantis #1004 Which happened because Database backups don't happen if a object is still selected, so when you select a part in a link-set and then unlink it and then delete it, all without unselecting the prim at all. The unlink changes never get updated to the database. So then when the call to delete the prim from the database happens, which is called with the SceneObjectId. That SceneObjectId is never found, as the database still has that prim as part of another link set. It is possible that these changes might have to be reverted and for us to find a different method of fixing the problem. If the performance overhead is too high or it causes any other problems. r8237 | diva | 2009-02-04 06:13:47 -0700 (Wed, 04 Feb 2009) | 1 line Closing the requestStream and setting a 10 sec timeout for getting it. r8236 | mikem | 2009-02-03 17:01:36 -0700 (Tue, 03 Feb 2009) | 2 lines - add OpenSim.Framework.AssetMetadata class. AssetBase is now composed of it - trim trailing whitespace r8235 | justincc | 2009-02-03 13:45:18 -0700 (Tue, 03 Feb 2009) | 2 lines - Add another object to the existing save oar test r8234 | justincc | 2009-02-03 13:16:15 -0700 (Tue, 03 Feb 2009) | 3 lines - Address by actually eliminating the redundant enable = true commented example - Comment out some startup verbosity from the module if we haven't enabled it r8233 | justincc | 2009-02-03 13:13:34 -0700 (Tue, 03 Feb 2009) | 2 lines - Fission SceneObjectTests into basic and linking sets r8232 | justincc | 2009-02-03 12:36:57 -0700 (Tue, 03 Feb 2009) | 3 lines - Lock the parts for the old group while we're clearing it as well - not much point doing one without the other - Shouldn't result in any deadlocks as I don't think there are any locks in the calling code r8231 | justincc | 2009-02-03 12:13:17 -0700 (Tue, 03 Feb 2009) | 3 lines - Mark the old group after linking as deleted - Add unit test assertions to check this r8230 | diva | 2009-02-03 12:03:01 -0700 (Tue, 03 Feb 2009) | 1 line OK, commenting the return again :-/ r8229 | justincc | 2009-02-03 11:48:04 -0700 (Tue, 03 Feb 2009) | 3 lines - Now clearing parts list in the old group after a link has occurred - Adjusted existing link tests to reflect this and added some new assertions r8228 | justincc | 2009-02-03 11:06:24 -0700 (Tue, 03 Feb 2009) | 3 lines - Lock parts while they're being duplicated to prevent possible race conditions with other parts changers - This shouldn't provoke any deadlocks since the callers aren't taking any other locks beforehand r8227 | justincc | 2009-02-03 10:50:25 -0700 (Tue, 03 Feb 2009) | 2 lines - minor: remove some pointless assignments in SOG.Copy() that had already been done by MemberwiseClone() r8226 | teravus | 2009-02-03 07:11:52 -0700 (Tue, 03 Feb 2009) | 2 lines - Fixes mantis #3070 r8225 | mikem | 2009-02-03 01:31:08 -0700 (Tue, 03 Feb 2009) | 3 lines Change access levels from private to protected to facilitate subclassing; also add new method signatures. Thanks tuco and mikkopa. Fix Mantis #3072. r8224 | mikem | 2009-02-02 22:20:52 -0700 (Mon, 02 Feb 2009) | 1 line Embed OpenSim.Data.addin.xml as a resource into OpenSim.Data.dll. r8223 | mikem | 2009-02-02 22:20:44 -0700 (Mon, 02 Feb 2009) | 5 lines - moved data plugin loading code from various places to OpenSim/Data/DataPluginFactory.cs - removed dependencies on a few executable assemblies in bin/OpenSim.Data.addin.xml - trim trailing whitespace r8222 | mikem | 2009-02-02 22:20:35 -0700 (Mon, 02 Feb 2009) | 2 lines - move OpenSim/Framework/IUserData.cs to OpenSim/Data/IUserData.cs - trim trailing whitespace r8221 | mikem | 2009-02-02 22:20:26 -0700 (Mon, 02 Feb 2009) | 3 lines - move OpenSim/Framework/IInventoryData.cs to OpenSim/Data/IInventoryData.cs - trim trailing whitespace r8220 | mikem | 2009-02-02 22:20:16 -0700 (Mon, 02 Feb 2009) | 3 lines - move IAssetDataPlugin from OpenSim/Framework/IAssetProvider.cs to OpenSim/Data/IAssetData.cs - remove some trailing whitespace r8219 | mikem | 2009-02-02 22:20:03 -0700 (Mon, 02 Feb 2009) | 1 line Rename IAssetProviderPlugin to IAssetDataPlugin aligning with the other data plugins. r8218 | justincc | 2009-02-02 13:59:12 -0700 (Mon, 02 Feb 2009) | 4 lines - Establish OnOarFileSaved EventManager event and subscribe to that instead of passing in a waithandle to the archiver - This matches the existing OnOarFileLoaded event - This brings up the question of how these things can be made generic so that they don't have to be tied into EventManager, but that's a topic for another day r8217 | justincc | 2009-02-02 13:01:50 -0700 (Mon, 02 Feb 2009) | 4 lines - As per - Copy OpenSim.ini to _OpenSim.ini on crash instead of opensim.ini - This makes it work on Linux/Mac(?) as well as Windows r8216 | justincc | 2009-02-02 12:29:43 -0700 (Mon, 02 Feb 2009) | 2 lines - Add a few more contributing projects that were not yet listed r8215 | idb | 2009-02-02 12:20:12 -0700 (Mon, 02 Feb 2009) | 2 lines Restore llGetSunPosition to its former self. Fixes Mantis #2195 r8214 | justincc | 2009-02-02 10:33:47 -0700 (Mon, 02 Feb 2009) | 2 lines - Make the fact that there is a setting to control which instant message module is used explicit in OpenSim.ini.example r8213 | justincc | 2009-02-02 10:27:23 -0700 (Mon, 02 Feb 2009) | 3 lines - Make it more obvious that there is an enabled switch for chat in OpenSim.ini.example. - Add default information for other chat settings r8212 | justincc | 2009-02-02 10:22:20 -0700 (Mon, 02 Feb 2009) | 2 lines - Stop the instant message module from trying to register for the message transfer module in PostInitialise() if it hasn't actually been enabled r8211 | justincc | 2009-02-02 10:19:57 -0700 (Mon, 02 Feb 2009) | 2 lines - Small tweak to move name replacement in friendship offer since server side requests don't want the lookup r8210 | drscofield | 2009-02-02 07:57:20 -0700 (Mon, 02 Feb 2009) | 5 lines [previous VectorRender patch was from: Robert Smart <SMARTROB@uk.ibm.com>] clean up. r8209 | lbsa71 | 2009-02-02 07:57:01 -0700 (Mon, 02 Feb 2009) | 2 lines - Minor refactoring and comments updates - Ignored some gens r8208 | drscofield | 2009-02-02 06:58:01 -0700 (Mon, 02 Feb 2009) | 19 lines [patching previous patch and also taking the chance of fixing the previous commit message] This patch reimplements the Draw method in the VectorRenderModule which is used to create dynamic textures. The previous version was limited to creating square dynamic textures, it also didnt allow for dynamically loading an image containing transparency except at 256x256. The extraParams string in such functions as osSetDynamicTextureData can now be passed a comma seperated string of name value pairs which set the width,height and alpha value of dynamic textures. e.g. "height:512,width:2048,alpha:255" Backward compatibility is still preserved so passing the old params of either a string integer "256" "512" will still work in the same fashion as will passing "setAlpha" on its own r8207 | teravus | 2009-02-02 06:57:54 -0700 (Mon, 02 Feb 2009) | 1 line - Changing the ode collision filter to 'off by default' instead of 'on by default'. It needs to be improved more. r8206 | drscofield | 2009-02-02 04:40:34 -0700 (Mon, 02 Feb 2009) | 2 lines Merge branch 'vector' into OpenSimulator.org r8205 | lbsa71 | 2009-02-02 04:27:58 -0700 (Mon, 02 Feb 2009) | 1 line - Removed erroneous reference to the Data.Base Framework r8204 | lbsa71 | 2009-02-02 04:16:41 -0700 (Mon, 02 Feb 2009) | 1 line - Removed the unused Data.Base Framework r8203 | chi11ken | 2009-02-02 02:01:00 -0700 (Mon, 02 Feb 2009) | 1 line Minor formatting cleanup. r8202 | teravus | 2009-02-01 23:04:03 -0700 (Sun, 01 Feb 2009) | 4 lines - Adding the Tree module configuration options to OpenSim.ini.example - Adding an option to use the tree module to manage the trees in the simulator (grow/reproduce/die) - Setting it to off by default in an effort to reduce the number of threads in use by default - You can also turn it on in a 'one off' way with 'tree active true' on the console. To 'one off' turn it off, it's 'tree active false'. The permanent way to do that, however is in the opensim.ini. r8201 | diva | 2009-02-01 13:36:10 -0700 (Sun, 01 Feb 2009) | 1 line Putting the return back in AddCapsHandler upon attempt at adding CAPs twice. The return seems to have been commented in 8038, as an attempt at fixing multiple TP problems later identified to be deadlocks. CAPs should never be overwritten, or the viewer can get confused. Right now this method is erroneously being called twice because of legacy code. I'll fix that later, after further testing. r8200 | ckrinke | 2009-02-01 10:41:33 -0700 (Sun, 01 Feb 2009) | 3 lines Thank you kindly, TLaukkan (Tommil) for a patch that: Added osTeleportAgent with region coordinates to support hyper grid scripted teleports. r8199 | teravus | 2009-02-01 10:16:36 -0700 (Sun, 01 Feb 2009) | 1 line - Adding a few fields to the Land data responder that the client is complaining about (and older clients are crashing on) r8198 | idb | 2009-02-01 08:12:32 -0700 (Sun, 01 Feb 2009) | 1 line Correct the method signature on llMakeFountain. r8197 | diva | 2009-01-31 19:20:57 -0700 (Sat, 31 Jan 2009) | 1 line More on dynamic hyperlinks. Making the 4096 check (deregistration of region) work in grid mode. r8196 | diva | 2009-01-31 17:59:42 -0700 (Sat, 31 Jan 2009) | 1 line Check for the 4096 limitation in dynamic region hyperlinks. r8195 | idb | 2009-01-31 12:02:09 -0700 (Sat, 31 Jan 2009) | 1 line Speed improvement mostly when sensing objects especially noticeable in a sim with many objects. r8194 | ckrinke | 2009-01-31 11:27:44 -0700 (Sat, 31 Jan 2009) | 2 lines Flesh out llGetAgentLanguage to return "en-us" until we have an I18N committee for internationalization. r8193 | diva | 2009-01-31 11:13:22 -0700 (Sat, 31 Jan 2009) | 1 line Initial support for dynamic HG hyperlinks. With this commit, remote sims can be linked (and TPed to) simply by searching on the map for things like this ucigrid03.nacs.uci.edu:9003 or by clicking on things like this in the chat history secondlife://ucigrid03.nacs.uci.edu:9003/ or by clicking on links like that on the embedded browser. r8192 | teravus | 2009-01-31 09:49:32 -0700 (Sat, 31 Jan 2009) | 2 lines - Tweaks some locks when modifying an ODECharacter. This actually allows a user to log-in while the physics scene and the scripts are starting up. This also seems to smooth out the jerks on teleport/connect/disconnect a little bit. - If you log-in while the simulator is starting up, you won't be able to move and the sim stats will say 0 FPS, and 0 Physics Frames and you may see only terrain. Once the sim finishes starting up, it'll all resume as normal. r8191 | diva | 2009-01-30 18:59:05 -0700 (Fri, 30 Jan 2009) | 1 line Oops. Forgot a try-catch on the last commit. r8190 | diva | 2009-01-30 17:28:51 -0700 (Fri, 30 Jan 2009) | 1 line Fixes mantis #3061. Thanks Hallow Palmer for diagnosing the issue so well. I bet this inconsistency happens a lot out there. r8189 | diva | 2009-01-30 17:15:13 -0700 (Fri, 30 Jan 2009) | 1 line Hopefully fixes mantis #3063. r8188 | diva | 2009-01-30 16:53:41 -0700 (Fri, 30 Jan 2009) | 1 line Bug fix on posting assets onto foreign users inventory. Check that the key is already in the local asset map before adding it. r8187 | diva | 2009-01-30 16:23:02 -0700 (Fri, 30 Jan 2009) | 1 line Added a new method SendGroupRootUpdate to start addressing mantis #3019. ll functions have not been changed. r8186 | justincc | 2009-01-30 14:39:54 -0700 (Fri, 30 Jan 2009) | 2 lines - Put a wait timeout on the archive test, just in case the archiver never returns r8185 | justincc | 2009-01-30 14:26:38 -0700 (Fri, 30 Jan 2009) | 2 lines - minor: remove some mono compiler warnings r8184 | justincc | 2009-01-30 14:04:23 -0700 (Fri, 30 Jan 2009) | 2 lines - In OpenSim.ini.example, list defaults for AllowOSFunctions and OSFunctionThreatLevel and change existing OpenSim.ini.example settings r8183 | justincc | 2009-01-30 13:54:38 -0700 (Fri, 30 Jan 2009) | 3 lines - Extend archive save test to check for the presence of the file for the object that was in the scene - Can now pass in a wait handle to ArchiveRegion() if you want same thread signalling that the save has completed r8182 | justincc | 2009-01-30 11:38:32 -0700 (Fri, 30 Jan 2009) | 2 lines - furhter simplify test setups for objects r8181 | justincc | 2009-01-30 11:28:05 -0700 (Fri, 30 Jan 2009) | 2 lines - minor: stop bothering to set parts to phantom within test setups - tests now seem to pass without having to do this r8180 | drscofield | 2009-01-30 07:45:39 -0700 (Fri, 30 Jan 2009) | 3 lines reporting original request URI if HttpWebRequest failed, adding try-catch around GetRequestStream (this time for sure) r8179 | chi11ken | 2009-01-30 02:03:23 -0700 (Fri, 30 Jan 2009) | 1 line Update svn properties, minor formatting cleanup. r8178 | dahlia | 2009-01-30 01:52:45 -0700 (Fri, 30 Jan 2009) | 1 line remove dummy parcel media settings from event queue message r8177 | drscofield | 2009-01-30 01:49:00 -0700 (Fri, 30 Jan 2009) | 1 line r8176 | drscofield | 2009-01-30 01:48:41 -0700 (Fri, 30 Jan 2009) | 4 lines fixing: client gets logged out when concierge's broker returns 500 response. adding: more verbose error logging r8175 | justincc | 2009-01-29 13:08:04 -0700 (Thu, 29 Jan 2009) | 7 lines - If an orphaned group is found in the mysql or mssql databases (i.e. there is no prim where UUID = SceneGroupID), then force one prim to have UUID = SceneGroupID. - A warning is posted about this on startup giving the location of the object - This should allow one class of persistently undeletable prims to be removed - This change should not cause any issues, but I still suggest that you backup your database beforehand - If this doesn't work for previously linked objects, then you could also try the workaround in - This change has been made to mysql and mssql, but sqlite appears to work in a different way r8174 | idb | 2009-01-29 12:47:55 -0700 (Thu, 29 Jan 2009) | 1 line Complete the implementation of llSHA1String. r8173 | justincc | 2009-01-29 11:39:33 -0700 (Thu, 29 Jan 2009) | 2 lines - minor: just a few formatting changes and log quietening r8172 | sdague | 2009-01-28 12:23:20 -0700 (Wed, 28 Jan 2009) | 3 lines - Enhanced ScenePresenceTests. Now tests for region and prim crossing. From: Arthur Rodrigo S Valadares <arthursv@linux.vnet.ibm.com> r8171 | drscofield | 2009-01-28 11:58:49 -0700 (Wed, 28 Jan 2009) | 3 lines fix: client gets logged out when concierge's broker returns 500 response. r8170 | drscofield | 2009-01-28 02:52:09 -0700 (Wed, 28 Jan 2009) | 17 lines From: Christopher Yeoh <yeohc@au1.ibm.com> This changeset fixes a race condition where a script (XEngine run) can startup before a reference is added to it in all of the required places in the XEngine class. The effect of this is that a script can sometimes on startup miss script events. For example a script which starts up and initialises itself from a notecard may never receive the dataserver event containing the notecard information. The patch isn't as clean as I'd like - I've split the constructor of ScriptInstance up so it does everything it did before except call Startup and post events like state_entry and on_rez. An Init function has been added which is called after the ScriptInstance object has been added to the necessary data structures in XEngine. Happy to rework it if someone suggests a better way of doing it. r8169 | drscofield | 2009-01-28 02:22:12 -0700 (Wed, 28 Jan 2009) | 7 lines From: Christopher Yeoh <yeohc@au1.ibm.com> Adding Oarfileloaded and EmptyScriptCompileQueue event support which allows (with a module) for programmatic notification of when a region objects and scripts are up and running after a server start or load-oar. r8168 | ckrinke | 2009-01-27 21:50:25 -0700 (Tue, 27 Jan 2009) | 2 lines Add in a stub for llSHA1String. I believe it is the only one new function we were missing. r8167 | mikem | 2009-01-27 18:59:07 -0700 (Tue, 27 Jan 2009) | 1 line Removing ThirdParty/3Di from prebuild.xml. r8166 | mikem | 2009-01-27 18:56:04 -0700 (Tue, 27 Jan 2009) | 4 lines Removing contents of ThirdParty/3Di. The load balancer can now be found at config key: svn.rmdir r8165 | mikem | 2009-01-27 18:55:45 -0700 (Tue, 27 Jan 2009) | 1 line Slight cleanup of docs, removing trailing whitespace. r8164 | dahlia | 2009-01-26 23:20:03 -0700 (Mon, 26 Jan 2009) | 1 line delete some commented out junk code notes r8163 | dahlia | 2009-01-26 23:14:11 -0700 (Mon, 26 Jan 2009) | 1 line Send groups list via event queue r8162 | dahlia | 2009-01-26 18:31:06 -0700 (Mon, 26 Jan 2009) | 1 line correct formatting if parcel description field in event queue message r8161 | dahlia | 2009-01-26 17:51:14 -0700 (Mon, 26 Jan 2009) | 1 line Correct a typo in the parcel properties event queue message which was preventing the display of authorized buyer r8160 | drscofield | 2009-01-26 14:35:54 -0700 (Mon, 26 Jan 2009) | 1 line r8159 | drscofield | 2009-01-26 14:35:38 -0700 (Mon, 26 Jan 2009) | 1 line r8158 | drscofield | 2009-01-26 14:35:16 -0700 (Mon, 26 Jan 2009) | 3 lines ~ fixing bugs in ConciergeServer.py test code ~ fix bug in ConciergeModule: wrong closing tag for avatars list r8157 | drscofield | 2009-01-26 14:34:59 -0700 (Mon, 26 Jan 2009) | 2 lines adding XML parsing to make sure POST content is welformed r8156 | drscofield | 2009-01-26 14:34:44 -0700 (Mon, 26 Jan 2009) | 2 lines + adding URI substitution for concierges broker URI r8155 | drscofield | 2009-01-26 14:34:27 -0700 (Mon, 26 Jan 2009) | 2 lines ~ moving test server script on level up r8154 | drscofield | 2009-01-26 14:33:53 -0700 (Mon, 26 Jan 2009) | 2 lines ~ moving test server script on level up r8153 | drscofield | 2009-01-26 14:33:36 -0700 (Mon, 26 Jan 2009) | 2 lines ~ turning synchronous broker update into asynchronous one r8152 | drscofield | 2009-01-26 14:33:20 -0700 (Mon, 26 Jan 2009) | 3 lines ~ fix: Concierge reports avatar leaving region twice ~ cleaning up log statements r8151 | drscofield | 2009-01-26 14:32:59 -0700 (Mon, 26 Jan 2009) | 2 lines + completed python test server r8150 | drscofield | 2009-01-26 14:32:43 -0700 (Mon, 26 Jan 2009) | 2 lines + adding test server for debugging purposes r8149 | drscofield | 2009-01-26 14:32:24 -0700 (Mon, 26 Jan 2009) | 4 lines ~ extending attendee list to include agent name + code to generate full XML avatar list + code to POST XML snipplet r8148 | drscofield | 2009-01-26 14:31:41 -0700 (Mon, 26 Jan 2009) | 2 lines adding timestamp as ISO 8601 r8147 | drscofield | 2009-01-26 14:31:21 -0700 (Mon, 26 Jan 2009) | 2 lines adding XML sniplet generation (start of) r8146 | drscofield | 2009-01-26 14:31:02 -0700 (Mon, 26 Jan 2009) | 2 lines starting draft attendee list notification support. r8145 | dahlia | 2009-01-26 13:06:31 -0700 (Mon, 26 Jan 2009) | 1 line swap endianness of parcel flags in event queue message r8144 | teravus | 2009-01-26 13:05:13 -0700 (Mon, 26 Jan 2009) | 1 line - Providing a way for the rest of the simulator to get at the economy settings through the IMoneyModule interface. r8143 | sdague | 2009-01-26 08:42:21 -0700 (Mon, 26 Jan 2009) | 2 lines in the spirit of cleanup, remove the old sql directory, as this stuff is all done in the drivers now r8142 | dahlia | 2009-01-26 03:42:24 -0700 (Mon, 26 Jan 2009) | 2 lines add a definition for a parcel properties CAP send parcel properties via eventqueue rather than UDP to facilitate libomv clients - see Mantis #3040 r8141 | drscofield | 2009-01-26 03:11:20 -0700 (Mon, 26 Jan 2009) | 2 lines ~ cleaning up code base: dropping share/python r8140 | dahlia | 2009-01-26 01:04:12 -0700 (Mon, 26 Jan 2009) | 1 line more eventqueue IM nonsense r8139 | chi11ken | 2009-01-25 18:52:06 -0700 (Sun, 25 Jan 2009) | 1 line Move file contents into file. r8138 | chi11ken | 2009-01-25 18:43:48 -0700 (Sun, 25 Jan 2009) | 1 line Remove empty share/ruby directory. r8137 | idb | 2009-01-25 14:13:42 -0700 (Sun, 25 Jan 2009) | 2 lines Remove the addition of the region coordinates to obtain the absolute position of a prim/person on the grid. I believe it is superfluous and removes needed decimal places for short range sensors. Fixes Manitis #3046 r8136 | homerh | 2009-01-25 09:12:55 -0700 (Sun, 25 Jan 2009) | 3 lines - Fixed a small logical error in error handling of console commands. - Console command help should be output to the console, not to the log (as "help" does it already). That allows getting help/answers even if you only log into a file. Fixes Mantis#2916. r8135 | idb | 2009-01-25 03:17:26 -0700 (Sun, 25 Jan 2009) | 2 lines Add an override of the ! operator to lsl integer. Fixes Mantis #3041 r8134 | adjohn | 2009-01-25 01:31:08 -0700 (Sun, 25 Jan 2009) | 1 line Applied patch from #3012 Fixing a minor bug where nhibernate mappings from outside OpenSim.Data.NHibernate assembly were not included in sessionFactory. Thanks mpallari! r8133 | teravus | 2009-01-24 21:34:00 -0700 (Sat, 24 Jan 2009) | 2 lines - Adds console command, 'predecode-j2k <number of threads>' to load all of the texture assets from the scene and decode the j2k layer data to cache. The work is split between the number of threads you specify. A good number of threads value is the number of cores on your machine minus 1. - Increases the number of ImageDataPackets we send per PriorityQueue pop and tweak it so that the number of packets is ( (2 * decode level) + 1 ) * 2, and (((2 * (5-decode level)) + 1) * 2). The first one sends more data for low quality textures, the second one sends more data for high quality textures. r8132 | chi11ken | 2009-01-24 01:18:41 -0700 (Sat, 24 Jan 2009) | 1 line Update svn properties. r8131 | justincc | 2009-01-23 13:44:35 -0700 (Fri, 23 Jan 2009) | 2 lines - minor: remove mono compiler warning r8130 | justincc | 2009-01-23 13:38:44 -0700 (Fri, 23 Jan 2009) | 2 lines - Write a simple archive loading test which doesn't actually do any testing yet apart from not blow up r8129 | ckrinke | 2009-01-23 13:21:43 -0700 (Fri, 23 Jan 2009) | 10 lines Thank you kindly, TLaukkan (Tommil) for a patch that: * Added Npgsql.dll and Mono.Security.dll which are NpgsqlDriver dlls. - Added missing field to schema creation scripts: PathTaperY. - Added schema creation scripts for PostgreSQL. - Added unit test classes for PostgreSQL. - Added schema creation script folder to NHibernate project in prebuild.xml - Added Npgsql.dll to NHibernate test project dependencies in prebuild.xml - Ensured that build works with both nant and Visual Studio. - Executed build unit tests with nant and NHibernate unit tests with NUnitGUI - Couple of region tests fail due to double precission float rounding errors need to sort out how these are handles in unit tests and if higher precission numeric field needs to be used in Postgresql. r8128 | justincc | 2009-01-23 12:24:36 -0700 (Fri, 23 Jan 2009) | 2 lines - Extend archive test to check for the presence of a control file in a saved archive r8127 | idb | 2009-01-23 11:10:31 -0700 (Fri, 23 Jan 2009) | 2 lines Fix for llGetRot when the script is in a child prim. Also fixed llGetPrimitiveParams for PRIM_ROTATION. Fixes Mantis #3023 r8126 | justincc | 2009-01-23 10:55:29 -0700 (Fri, 23 Jan 2009) | 2 lines refactor: move test modules set up code to common function r8125 | justincc | 2009-01-23 10:32:38 -0700 (Fri, 23 Jan 2009) | 2 lines - refactor: move scene setup code into common test code assembly r8124 | justincc | 2009-01-23 10:17:46 -0700 (Fri, 23 Jan 2009) | 2 lines - minor: remove serialization and deserializationg sog log messages for now r8123 | justincc | 2009-01-23 10:12:15 -0700 (Fri, 23 Jan 2009) | 2 lines - minor: small tweak to archive save completion log message r8122 | justincc | 2009-01-23 10:07:37 -0700 (Fri, 23 Jan 2009) | 3 lines - Add direct stream loading and saving methods to the archive module. - The async stream method does not yet signal completion to interested calling code r8121 | teravus | 2009-01-23 04:00:36 -0700 (Fri, 23 Jan 2009) | 2 lines - Adds a synchronous jpeg decode for pre-caching purposes - When the DynamicTextureModule creates a j2k image, pre-cache the decode so that it doesn't stall any client threads. r8120 | dahlia | 2009-01-22 18:49:32 -0700 (Thu, 22 Jan 2009) | 1 line add event queue code for sending group IM for future group support r8119 | teravus | 2009-01-22 17:08:35 -0700 (Thu, 22 Jan 2009) | 1 line - Fixing a group title r8118 | idb | 2009-01-22 16:58:46 -0700 (Thu, 22 Jan 2009) | 2 lines Implement missing LSL TEXTURE_xxx constants including two new textures. Fixes Mantis #3030 r8117 | justincc | 2009-01-22 12:46:31 -0700 (Thu, 22 Jan 2009) | 2 lines - Add some caps seed capability path checking to the simple non neighbours standalone region teleport test r8116 | teravus | 2009-01-22 11:28:32 -0700 (Thu, 22 Jan 2009) | 1 line - Remove a few unnecessary locks to try and prevent lock contention in LLImageManager r8115 | justincc | 2009-01-22 10:51:47 -0700 (Thu, 22 Jan 2009) | 5 lines - Change the currently misleading log message when capabilities are added twice, and provide some more information - No functional change - It strikes me that there may be caps problems if double registration is presented if cleanup failed for a previous agent (so a caps handler will remain in memory for that agent but with a different seed). This needs investigation r8114 | drscofield | 2009-01-22 09:43:28 -0700 (Thu, 22 Jan 2009) | 2 lines white space & formatting cleanup r8113 | drscofield | 2009-01-22 09:43:09 -0700 (Thu, 22 Jan 2009) | 8 lines From: Christopher Yeoh <yeohc@au1.ibm.com> this patch makes load-oar a bit more tolerant to irrelevant differences in the oar file format. Directory entries are now ignored rather than trying to interpret them as files they hold which results in the load-oar failing. This change makes it easier to manually modify oar files. r8112 | chi11ken | 2009-01-22 09:16:34 -0700 (Thu, 22 Jan 2009) | 1 line Update svn properties, minor formatting cleanup. r8111 | sdague | 2009-01-22 09:06:26 -0700 (Thu, 22 Jan 2009) | 3 lines - minox fix related to last commit From: Arthur Rodrigo S Valadares <arthursv@linux.vnet.ibm.com> r8110 | drscofield | 2009-01-22 09:04:31 -0700 (Thu, 22 Jan 2009) | 2 lines adding TAGS to .gitignore r8109 | ckrinke | 2009-01-22 08:57:47 -0700 (Thu, 22 Jan 2009) | 1 line Fixes Mantis #3032. The VersionInfo.cs file was not updated from 0.6.1 to 0.6.2 with the last minor release and shows incorrectly on the console. This will resolve itself on the next svn update. r8108 | sdague | 2009-01-22 06:59:54 -0700 (Thu, 22 Jan 2009) | 3 lines - Caught 2 dictionary exceptions that were unhandled From: Arthur Rodrigo S Valadares <arthursv@linux.vnet.ibm.com> r8107 | teravus | 2009-01-22 03:04:15 -0700 (Thu, 22 Jan 2009) | 1 line - discovered! darn! Removing I <3 OpenSimulator from firstname r8106 | teravus | 2009-01-22 02:31:01 -0700 (Thu, 22 Jan 2009) | 3 lines - Added some more comments - Fixed up an outgoing packet - I <3 OpenSimulator r8105 | teravus | 2009-01-21 18:50:00 -0700 (Wed, 21 Jan 2009) | 1 line - Adds a Scene Getter property called SceneContents for the Scene's m_sceneGraph. This will allow it to be exposed to modules without resorting to referring to m_sceneGraph r8104 | teravus | 2009-01-21 18:33:46 -0700 (Wed, 21 Jan 2009) | 1 line - Add File cache for j2k layer decodes. This will make it so that the server will decode the j2k stream once and cache it to disk so that the cache is saved across sim restarts. r8103 | justincc | 2009-01-21 14:14:17 -0700 (Wed, 21 Jan 2009) | 3 lines - refactor: Extract caps related code from scene and put into a region module - No functional changes in this revision r8102 | justincc | 2009-01-21 11:57:05 -0700 (Wed, 21 Jan 2009) | 3 lines - Restore commented out isdone assertions in TextureSendTests.T010_SendPkg() - These still appear to suceed with the current code! r8101 | justincc | 2009-01-21 11:46:44 -0700 (Wed, 21 Jan 2009) | 2 lines - minor: move connection success log message so that it doesn't get printed again if a duplicate use circuit code packet comes in r8100 | justincc | 2009-01-21 10:56:25 -0700 (Wed, 21 Jan 2009) | 2 lines minor: find in existing senderUUID field for chat messages originating from a client r8099 | teravus | 2009-01-21 04:16:33 -0700 (Wed, 21 Jan 2009) | 3 lines - More friendly OpenJpeg error handling. - Often times now the only reason OpenJpeg doesn't work is because it requires Glibc 2.4 The error messages reflect that. - In J2kDecoder module, It stops trying to decode modules if it encounters a dllnotfound exception and instead sends a full resolution layer that causes the texture sender to only send the full resolution image. (big decrease in texture download speed, but it's better then nasty repeating error messages) r8098 | mikem | 2009-01-21 03:20:32 -0700 (Wed, 21 Jan 2009) | 3 lines - remove extra "; in http_loginform.html.example; fix issue 3025 - sync up default HTML generated in LoginService.cs with that in http_loginform.html.example r8097 | dahlia | 2009-01-20 19:40:09 -0700 (Tue, 20 Jan 2009) | 1 line Fix an error in sculpt LOD calculation r8096 | mikem | 2009-01-20 19:29:56 -0700 (Tue, 20 Jan 2009) | 1 line Set request method for REST requests with no input. r8095 | melanie | 2009-01-20 14:59:11 -0700 (Tue, 20 Jan 2009) | 2 lines And another method added r8094 | melanie | 2009-01-20 14:45:44 -0700 (Tue, 20 Jan 2009) | 2 lines Small interface addition r8093 | justincc | 2009-01-20 11:49:16 -0700 (Tue, 20 Jan 2009) | 4 lines - Apply - Adds MSSQL 2005 unit tests - Thanks Tommil! r8092 | justincc | 2009-01-20 11:38:51 -0700 (Tue, 20 Jan 2009) | 3 lines - Apply - Adds a grid db implementation and unit tests to the NHibernate module r8091 | justincc | 2009-01-20 11:27:30 -0700 (Tue, 20 Jan 2009) | 4 lines - Apply - Allows different assemblies to be used in NHibernateManager, which makes it possible to use mapping and migration files in different assemblies. - Thanks mpallari! r8089 | dahlia | 2009-01-20 03:09:16 -0700 (Tue, 20 Jan 2009) | 1 line Removed some of the darker colors from console messages as they were not visible in some terminal emulators (like putty) r8088 | teravus | 2009-01-19 23:07:36 -0700 (Mon, 19 Jan 2009) | 1 line - minor: A few comments. A bit of cleanup. r8087 | diva | 2009-01-19 18:50:20 -0700 (Mon, 19 Jan 2009) | 1 line Very minor: added a missing {0} in a couple of Error messages. r8086 | chi11ken | 2009-01-19 17:26:29 -0700 (Mon, 19 Jan 2009) | 1 line Update svn properties. r8085 | idb | 2009-01-19 17:10:39 -0700 (Mon, 19 Jan 2009) | 2 lines Added overrides for == and != for list. Fixes Mantis #3002 r8084 | sdague | 2009-01-19 14:38:31 -0700 (Mon, 19 Jan 2009) | 3 lines oops hash codes can be negative, account for that From: Sean Dague <sdague@gmail.com> r8083 | sdague | 2009-01-19 14:38:25 -0700 (Mon, 19 Jan 2009) | 3 lines added display of exception From: Sean Dague <sdague@gmail.com> r8082 | sdague | 2009-01-19 14:38:16 -0700 (Mon, 19 Jan 2009) | 3 lines change the appender to have a few more colors, none of which are red From: Sean Dague <sdague@gmail.com> r8081 | teravus | 2009-01-19 14:29:44 -0700 (Mon, 19 Jan 2009) | 1 line - Another image packet edge case. Thanks nebadon for printing a log of it r8080 | idb | 2009-01-19 12:15:55 -0700 (Mon, 19 Jan 2009) | 2 lines Correct energy calculation to include the mass of the object. Fixes Mantis #3006 r8079 | teravus | 2009-01-19 11:33:25 -0700 (Mon, 19 Jan 2009) | 1 line - Set SVN Properties r8078 | justincc | 2009-01-19 10:15:27 -0700 (Mon, 19 Jan 2009) | 2 lines - minor: Just some minor log elaboration to reveal in the logs where a teleport is being directed rather than just its position r8077 | teravus | 2009-01-19 10:11:57 -0700 (Mon, 19 Jan 2009) | 2 lines - Progressive texture patch + PriorityQueue put into the LLClient namespace. - Updates LibOMV to r2362 r8076 | justincc | 2009-01-19 08:16:17 -0700 (Mon, 19 Jan 2009) | 4 lines - Remove unused prims.ParentID field from SQLite and MySQL - Since this is a db change, as always I strongly recommend that you backup your database before updating to this revision - Haven't touched MSSQL in case I get it wrong - looking for some kind soul to take care of this. r8075 | mikem | 2009-01-18 19:30:51 -0700 (Sun, 18 Jan 2009) | 3 lines No longer append a "texture" parameter on texture asset requests. The asset server doesn't check for the existence of this parameter since r2744. r8074 | chi11ken | 2009-01-18 18:29:21 -0700 (Sun, 18 Jan 2009) | 1 line Update svn properties. r8073 | melanie | 2009-01-18 16:31:13 -0700 (Sun, 18 Jan 2009) | 2 lines Avoid an invalid cast on legacy data r8072 | idb | 2009-01-18 07:46:43 -0700 (Sun, 18 Jan 2009) | 1 line Moved applying an impulse to a newly rezzed object to minimise the delay getting the object moving. r8071 | idb | 2009-01-18 04:25:12 -0700 (Sun, 18 Jan 2009) | 2 lines Subscribe to collision events if needed when turning an object to non-phantom from phantom. Fixes Mantis #1883 r8070 | dahlia | 2009-01-18 03:50:53 -0700 (Sun, 18 Jan 2009) | 1 line Added an optional password for the IRC module r8069 | diva | 2009-01-17 18:45:22 -0700 (Sat, 17 Jan 2009) | 1 line Getting rid of the CheckRegion call during TPs. This seems to be not just useless, but sometimes problematic (mantis #2999). Initial tests indicate that this call is not necessary. Let's see if this stands in the wild.
http://opensimulator.org/wiki/0.6.3-release
CC-MAIN-2015-32
refinedweb
16,347
63.12
//Deck Class #include<iostream> #include<stdlib.h> using namespace std; class Deck{ int Cards[51]; public: Deck(); void Display(); void Shuffle(); }; Deck::Deck(){ for(int n = 0; n < 52; n++){ Cards[n]=n; } } void Deck::Display(){ for(int n = 0; n < 52; n++){ cout << n+1 << ". " << Cards[rand()%51] <<endl; }} void Deck::Shuffle(){ for(int n = 0;n < 52; n++){ int r=n+(rand()%(52-n)); int temp=Cards[n]; Cards[n]=Cards[r];Cards[r]=temp; }} This is what I have so far for card class, I want to categorize them into the suits then to the number they are. So like a two of clubs would be assigned to number 22 or something. How would I get it so it can detect what is a club and what isn't, same with numbers/face cards... I was told by a friend to use Enums but I wanted to know if there was any more efficient way
https://www.daniweb.com/programming/software-development/threads/163480/how-to-catergorize-cards
CC-MAIN-2017-26
refinedweb
158
75.24
A Guide To Migrating From ASP.NET MVC to Razor Pages The Model-View-Controller (MVC) pattern is inarguably a successful approach to building web applications. The design gained popularity in many tech communities, implemented in frameworks like Ruby on Rails, Django, and Spring. Since April 2, 2009, Microsoft has offered developers the ability to create MVC pattern web applications with the release of ASP.NET MVC. The approach leans heavily on the ideas of convention over configuration, and with conventions come ceremony. The ceremony can come with an overhead not necessary for less complicated apps. With the release of .NET Core 2.0, ASP.NET developers were introduced to Razor Pages, a new approach to building web applications. While it can be reminiscent of WebForms, the framework learns from the decade of experience building web frameworks. The thoughtfulness put into Razor Pages shows with its ability to leverage many of the same features found in ASP.NET MVC. In this post, we’ll explore an existing ASP.NET MVC application, begin to migrate it to Razor Pages, and see where Razor Pages may not be a good fit. Why Razor Pages Razor Pages embraces the traditional web, accepting the constraints of an HTML form and the everyday use cases that come with building line-of-business applications. That’s not to say that those constraints limit what we can do with Razor Pages. On the contrary, in most cases, Razor Pages can do all the things an ASP.NET MVC application can. Razor Pages works on top of ASP.NET Core and has many of the same features found in ASP.NET MVC: Routing, Model Binding, ModelState, Validation, Razor views, and ActionResult return types. We see the most significant differences in the supported HTTP semantics of Razor Pages. While ASP.NET MVC supports the full array of HTTP methods (GET, POST, DELETE, etc.), Razor Pages only supports a limited set: GET, POST, and PUT. Traditionally, ASP.NET MVC and WebAPI support HTTP-based APIs, and ASP.NET Core’s iteration on MVC has emphasized the pattern as a way to build APIs for single-page applications. The constraint is no accident, as these are the same methods supported in HTML’s form tag. As we’ll see later in this post, ASP.NET MVC separates its three main components: Model, View, and Controller. Razor Pages takes a different approach entirely, collapsing all three elements into what is effectively one project element. Razor Pages uses a PageModel to describe the behaviors and state of each endpoint. Fewer project assets can reduce the cognitive overhead and context-switching between model, view, and controller folders. Less code for the same amount of value is always a benefit. To follow along, the solution used in this blog post can be found on GitHub. We’ll be comparing the two structures of ASP.NET MVC and Razor Pages. In doing so, we’ll see the differences and similarities between the MVC and Page approach. ASP.NET MVC Structure As stated previously, the MVC pattern has three main parts. Let’s looks at our sample project and take note of the MVC elements in the solution explorer. We have the following elements: - Controllers - Models/ViewModels - Views As a good practice, we want to use ViewModels for our mutation-based endpoints. Let’s break down one action in our WidgetsController to see all the elements come together. [HttpPost, Route("create")] public IActionResult Create([FromForm] EditModel request) { if (ModelState.IsValid) { // widget service service.Add(request.Name); return RedirectToAction("Index"); } return View("New", request); } The first things we should notice are the attributes of HttpPostand Route. The attributes help ASP.NET MVC route an HTTP request to our controller action. We also utilize a request model to bind the values from the request’s form to our C# instance. Next, we determine the validity of our HTTP request using ModelState. From there, we either save the value or return the New view. Within this one action, we checked all the major components of the MVC pattern. To create the sample used in this post, we need to create an additional five actions, for a total of six endpoints. All endpoints have similar approaches. The resulting implementation can be seen in the solution explorer, or by running the project found on GitHub. Razor Pages Structure The main benefit of Razor Pages is the collapsed programming model. We can see that by looking at the Razor Pages folders in the solution explorer. Excluding our domain models, which contain our services, we are down to one folder. Let’s migrate the Create MVC action from the previous section to the Razor Pages philosophy. public class Create : PageModel { [BindProperty, Required] public string Name { get; set; } // public void OnGet() { } public IActionResult OnPost([FromServices]WidgetService service) { if (ModelState.IsValid) { var widget = service.Add(Name); return RedirectToPage("Index"); } return Page(); } } Let’s walk through the most significant changes in Razor Pages, as it may not be immediately clear what has happened. - The routing of our page is conventional. It uses the content path of our page to build the route. - The GET method is implicit since the Razor Page handles both GET and POST requests. We do not need it in our PageModel, because we have no logic on GET requests. - The Createclass is the ViewModel. We bind the Nameproperty on each POST request. We don’t need any other objects. - We are using ModelStatefor validation, just like MVC. - We are using IActionResultto route our client, just like MVC. We should also take notice that our views and page models are linked together within the same folder. Looking at our view, we can see how we reference the “Create” PageModel. @page @model RazorPagesMigration.Pages.Widgets.Create <h2>Create</h2> <form method="post" asp- <label asp-</label> <input asp- <span asp-</span> <button type="submit">Save Widget</button> </form> If we were to look at our MVC view implementation, we’d notice the two are almost identical except for the references to the asp-pageattributes on the HTML form. We can see a more advanced example of a Razor Page implementation in our Edit page. public class Edit : PageModel { private readonly WidgetService service; public Edit(WidgetService service) { this.service = service; } [BindProperty(SupportsGet = true)] public int Id { get; set; } [BindProperty, Required] public string Name { get; set; } public IActionResult OnGet() { var widget = service.Get(Id); if (widget == null) return NotFound(); Name = widget.Name; return Page(); } public IActionResult OnPost() { if (ModelState.IsValid) { service.Update(Id, Name); return RedirectToPage("Index"); } return Page(); } } As we can see, it is very similar to the Create page, but we now retrieve the requested widget on GET requests. Looking at the view, we can see additional metadata describing the expected route values using the @pagedirective. On the Edit page, we need our client to provide an identifier in the URI path. @page "{id:int}" @model RazorPagesMigration.Pages.Widgets.Edit <h2>Edit</h2> <form method="post" asp- <label asp-</label> <input asp- <span asp-</span> <button type="submit">Save Widget</button> </form> For Rider and ReSharper users, we can navigate between the Razor views and our page model utilizing the Navigate to action (check docs for shortcuts). When switching between the contexts of UI and the backend, using the Related Files action makes it even faster to switch between parts of our Razor page. Like the IDE features for ASP.NET MVC, we have the ability to refactor names for properties found on our page models. We can also navigate from Razor directly to our C# implementations by Cmd/Ctrl+Click our model properties. Sharing Is Caring Looking through the example project, it is clear that both Razor Pages and MVC share the same foundation. The request pipeline for Razor Pages is almost identical to MVC, utilizing constructs like validation, action results, razor views, and more. In our sample, we also use the same layout our MVC views use. The realization brings us to an important point: Razor Pages and MVC are not mutually exclusive. They complement each other very nicely. The shared codebase allows us to migrate parts of our applications gradually, and with calculated precision. When Not To Use Razor Pages As mentioned previously, a Razor Page’s ability to handle HTTP methods is minimal. The lack of comprehensive HTTP method support makes it a difficult platform to build APIs. If our frontend exclusively works with JavaScript and frontend model binding, then we would see more benefit sticking with ASP.NET MVC. That’s not to say it is impossible, but it would be painful. The complexity of our UI can play a role in choosing MVC over Razor Pages. Our choice to use Razor Pages depends on the standard building block of our UI. The Razor Pages build block is the PageModel, but with MVC, we can create smaller components. For example, a newsletter sign-up form might be visible across an entire web application. An MVC endpoint might be better suited to handle requests for newsletter signup. The default conventional routing system that Razor Pages uses is also very limiting. If we want deeply nested route paths, we could see our solution structure explode with complexity. There are ways to mitigate this problem using Razor Pages conventions, but most folks should steer clear of changing the standard behaviors. Conclusion Razor Pages and ASP.NET MVC share a foundation that makes the use of both technologies in one project highly synergetic. Most developers can and should use both in their applications. We should also consider some existing MVC infrastructure and whether certain parts of our solutions would make sense to migrate to Razor Pages. HTML focused pages are ideal for a Razor Pages refactor, and as shown in this post, we can reuse many of the same elements from MVC. Folks building JavaScript-heavy frontends or API backends should continue to use the MVC pattern, as it affords them the most flexibility in terms of HTTP methods, routing, and response handling. Ultimately, the choice between Razor Pages and MVC is personal, and as shown in this post, both share much of the same infrastructure. Looking at the example project provided, we can see we can achieve feature parity no matter what path we take. The Razor Pages approach reduces much of the ceremony around using the MVC pattern, and it is worth considering for any current ASP.NET developers. For those who are interested in learning more about Razor Pages, I highly recommend LearnRazorPages.com as a high-quality reference for beginners and experienced developers. 1 Responses to A Guide To Migrating From ASP.NET MVC to Razor Pages Yusuf says:June 23, 2020 Change it to ASP.NET Core
https://blog.jetbrains.com/dotnet/2020/06/22/guide-to-migrating-aspnet-mvc-to-razor-pages-rider/
CC-MAIN-2020-29
refinedweb
1,788
56.86
Turb import tw2.core as twc import tw2.forms as twf class StudentForm(twf.Form): class child(twf.TableLayout): name = twf.TextField(size = 20) city = twf.TextField() address = twf.TextArea("",rows = 5, cols = 30) pincode = twf.NumberField() action = '/save_record' submit = twf.SubmitButton(value = 'Submit') In the> <html xmlns = "" xmlns: <head> <link rel = "stylesheet" type = "text/css" media = "screen" href = ${tg.url('/css/style.css')}" /> <title>Welcome to TurboGears</title> </head> <body> <h1>Welcome to TurboGears</h1> <py:with <div py: </py:with> <h2>Current Entries</h2> <table border = '1'> <thead> <tr> <th>Name</th> <th>City</th> <th>Address</th> <th>Pincode</th> </tr> </thead> <tbody> <py:for <tr> <td>${entry.name}</td> <td>${entry.city}</td> <td>${entry.address}</td> <td>${entry.pincode}</td> </tr> </py:for> </tbody> </table> </body> </html> Restart the server and enter in the browser − Each time the data is added and submit button is pressed, the list of current entries will be displayed.
https://www.tutorialspoint.com/turbogears/turbogears_using_mongodb.htm
CC-MAIN-2019-47
refinedweb
159
53.68
Tcl8.6.7/Tk8.6.7 Documentation > Tcl C API, version 8.6.7 > CrtSlave - DESCRIPTION - SEE ALSO - KEYWORDS N value arguments to pass to the aliased command. - Tcl_Obj **objv (in) - Vector of Tcl_Obj structures, the additional value arguments to pass to the aliased value arguments to be passed to the alias. The location is in storage owned by the caller. - Tcl_Obj ***objvPtr (out) - Pointer to location to store a vector of Tcl_Obj structures, the additional arguments to pass to anThese. a command named value except that it takes a vector of values value value result of interp. Currently both cmdName and hiddenCmdName must not contain namespace qualifiers, or the operation will return TCL_ERROR and leave an error message in the value. For a description of the Tcl interface to multiple interpreters, see interp(n).
http://docs.activestate.com/activetcl/8.6/tcl/TclLib/CrtSlave.html
CC-MAIN-2018-09
refinedweb
135
55.34
You Really Shouldn't Be Here jQuery, My Wife Might Begin To Suspect Something As developers, you must understand that it is hard to relinquish control over all aspects of the programming process. I know that I do. I like being involved in all parts of the machinery and I pride myself on the fact that I can code it all myself. Now, I fancy myself a fairly high-level Javascript developer; Javascript makes a lot of sense to me and I have been coding it for more than a decade. But, at the same time, I know there's stuff I just can't do very well, like create DHTML modal windows or determine the innerWidth of a window across all browsers. My old boss, Glen Lipka, has been trying very hard to get me to use jQuery for months and months now and I have been very resistant for the reasons I talked about above; I just don't want to give up control to some black-boxed jQuery object. Then, David Ries of Edit.com came to our offices and gave a presentation on jQuery. It looks very cook, always has, but still, I didn't budge. Then finally, my friend David Leventi, an up-and-coming photography star needed help rebuilding his web site. Ok, a photographer, a very "Artsy" site design.... maybe it's time I looked at jQuery? Three days ago, I sat down to start coding Leventi's new site. My first page, my "Hello World", if you will, was a photo gallery page with a main photo and a sliding film-strip of photo thumbnails below it. I started to code it all by hand just to get an idea of what it was supposed to do. Then, I started to rewrite parts of it using jQuery. WHAT?!?!? I have to say, after about three seconds, the power of this jQuery object smacked me across the face with its power. The second I got the main image to fade to white, swap sources, then fade back to view.... not quite sold yet, but very freakin' impressed. After about five minutes, I WAS sold and here's the line that sold me: - $( objSource.parentNode ).children( ".on" ).attr( "class", "off" ); objSource was the thumbnail that was clicked. What the line above is doing is getting the parent node (DOM) of the source (passed to the function as "this"), which is the thumb nail strip container. Then, it selects all children of the strip whose class is "on". Then it takes all of those children (should only be one) and sets their class name to "off". I have to say I almost got giddy at the thought of this! How freakin' cool is that? I can even feel my heart beating a little faster just thinking about it this morning. What keeps getting me over and over again is the simple yet SUPER POWERFUL selector expressions that are available. ".on" iterates through the child nodes and finds all the appropriate elements?!? NUTS! Rey Bango was correct, it IS like crack and I am hooked. I just want to start selecting random crap out of the DOM using crazy selector power. I am just learning, so I am sure there is SOOO much that I am doing manually that could be simplified using jQuery. Time to print out the API and get reading :) If anyone is interested in seeing my Hello World jQuery page, here it is: This is the alpha of my friend's new site. I used jQuery to fade in and out the main photo. I also used jQuery with an Easing plugin to move the thumbnail strip back and forth on the bottom. Currently, I am manually figuring out what the pixel left of the thumb strip should be for each movement, but I am sure this can be cleaned up with jQuery some how. I see possibilities now. I have to change the way I think. I have to start moving forward. Honestly, I have to start listening to Glen Lipka. He seems to have his finger on the pulse of user interface development. He told me to study CSS before it was popular. He told me to get on the jQuery band wagon before most people knew about it. Most advices he has given me have been 100% on the mark. Time to learn from the past. Looking For A New Job? Ooops, there are no jobs. Post one now for only $29 and own this real estate! Reader Comments Very cool!!! Been looking at this too and it seems to be great (especially for a JS n00b like myself). So, what *else* has Glenn been suggesting you look at? Always like to keep ahead of the curve . . . ! Thank Pete! I hope the Frameworks conference went well. My work load is JUST now starting to lighten up and I look forward to digging through everyone's notes and presentation code. Glen's latest advice is about jQuery. But in the past he has pushed my to really get into Flash and Open Lazlo way before I had even heard of FLEX and now you see where we are. I will share any advice that I get from this guy in the future. Ben, JQuery is pretty fun for sure. I am amazed at what folks have accomplished with it... For some interesting bits, have a go at . Most especially, this demo showcasing a good number of JQuery plugins and effects in a sensible manner. While it is no replacement for Flex, it is darned interesting to say the least. DW Dan, That's some pretty sweet stuff. Thanks for the link. @Ben: Welcome aboard bud. I knew you'd get hooked and I'm really glad that Glen turned you on to this. :o) @Peter: Glad to see another CF notable joing the fray!! Let me know if you have any questions and I'll be glad to help (rey {at} reybango {dot} com). Here are some links for you to checkout: Query: The compressed JavaScript file: The uncompressed JavaScript file: The full release notes: Plugins & UI Widgets/Controls: Documentation: Magazine: Mailing List: Project Blog: Learning Resources: Sites Using jQuery: Dan and Rey - many thanks for the links!!! Ben, interesting title for your post. I've been meaning to try jQuery but just got too many things I want to try and not enough time in the day. I've been using prototype and scriptaculous however. It's similar framework but probably does not have as many plug-ins. Anyway, for api docs check out, it's pretty neat. mootools is much better! ;) There are so many JavaScript frameworks out there though so you got a lot of other choices besides jQuery. I would have done prototype but I've found next to nothing on documentation. I've preferred mootools for that and also because its modular. I never have to worry about what files depend on one another, the download takes care of that for me. I've never really felt comfortable with JavaScript but since I read DOM Scripting by Jeremy Keith I've found it to be very easy to work with. And with better debugging tools it only gets easier. Just nice to have a good framework to help keep the code light. What has really wowed me is how easy it is to do unobtrusive JavaScript. Before I always said I knew JavaScript but never actually did much coding in it. Now I'm much more comfortable. Javier, I have that exact same Dom Scripting book. It was very good and I liked how the author went layer by layer into a proper DOM scripting solution. Very clear and easy to follow... I highly recommend it. dw Thanks for the backup Dan! I agree as its the best book out there on standard and unobtrusive JavaScript. Jeremy Keith is a great developer in the web standards community. His next book which comes out this month is Bulletproof Ajax. If you have Bulletproof Web Design (which you should otherwise I will be furious!) you'll know what to expect. Now that does look cool... Yet again, reading Ben's blog means I'll lose several hours of productive time while I play with a new toy. (But as usual, it will save me many hours in the long run!) welcome ben... we've been waiting for you. @Javier: Que pasa Xavi? Had to ask you in Spanish since I'm Cuban. :o) MooTools is definitely a great project and those guys are really top notch developers. I can't really agree that the library is better than jQuery because I do feel that jQuery has top notch code, incredible documentation, the best user community and one of the most comprehensive suites of plugins/UI widgets of any project. I also ran the whole gamut of tools including moo.fx (predecessor to MooTools), Prototype/Scriptaculous, Dojo, MochiKit & YUI. I fell in love with jQuery from the moment I used it because of its simplistic syntax, chainability (which they started), as well as all of the goodness I mentioned above. I became such a fan that I was actually asked to joing the jQuery project team. Since then, I've turned on ColdFusion notables such as Rob Gonda (Mr. AjaxCFC) & Joe Danziger (ajaxcf.com) to jQuery and Rob even refactored AjaxCFC to use jQuery instead of DWR. It'd be really cool is you would take a look at jQuery yourself and see the power behind it. With your experience in MooTools, you should pick it up VERY easily and I think you'd be pleasantly surprised at its power. Ping me if you have any questions about it. AIM: gotcfm @Dan: Yep, Dom Scripting is an awesome book and I'm about to pick up his new book, Bulletproof Ajax. As Javier mentioned, Dom Scripting is a must read and I'm sure his second book is right on par. Two other stand outs are Ajax In Action (Manning) and Beginning JavaScript with DOM Scripting and Ajax (Apress). The second is an incredible book to get you up to speed on JS & DOM and throws in some Ajax for good measure. @Peter: My pleasure on the links. If you have any questions don't hesitate to contact me. I have the whole jQuery team at my disposal. @Boyan: GotApi.com is awesome. We have the jQuery API on there as well. We also have these two resources for our API: @Seb: If you need some help, just buzz me man. I'm here to help @Tony: Always glad to see you supporting the project man! We still have to get together for lunch. :o) On a parting note, the fact that there's a growing number of CF experts using jQuery, in my mind, makes it a GREAT choice to work with. I know that there are many options but not as many with this much support for the ColdFusion community. @Boyan, Yeah, I hear you. I have been swamped for a while now and only just started looking at this after months and months of poking. @Javi, I have seen MooTools before and thought the animations were pretty awesome. To be completely honest, the animation is not what finally wooed me about jQuery; it was the DOM selectes. It can handle IDs, Class names, XPath, pseudo-selectors... it's pretty bad ass. I don't know much about MooTools, so I don't know if it has that (can't say one way or the other). @Javi / Dan, I am not familiar with the books by Jeremy Keith (gulp!). I will definitely take a look. Thanks for the heads up. @Seb, Thanks :) @Tony, Glad to be on board. I hear the jQuery community is super ultra fantastic and I look foward to getting on the mailing list. I will go through the comments and hook up all those links.... sorry about that. I have an old link-linking algorithm in place. Ben, doh! I was just writing a comment about the auto-links. You are always on top of it! Great job! Hey Rey que pasa chei? Nice to meet you! Ben yes mootools of course has all the DOM selectors. In fact all the JavaScript frameworks do. They all do the same stuff to be honest I just love saying mootools. :) I got to admit though the download setup they have is by far the best I've ever seen. Very handy indeed. I have my axe in hand Ben! You have little time to get those books! You are quite busy I'm sure but those are great books to read. The ones I always recommend are: Bulletproof Web Design, CSS Mastery, and DOM Scripting. There are several others but those 3 are the best. The first one I refer to it as my web bible. Gotta love the term bulletproof. I've certainly caught on to it. Javi, You will not need to use the Axe :) I will get me those books. Ah man I was looking forward to that too but I will refrain myself. :) Do a lot of your applications and sites you build rely on JavaScript to function Ben? I'm really glad to see that you've tried out jQuery. The way I see it, JavaScript libraries are roughly the equivalent of the C StdLib, or libraries for Perl or Ruby. Sure, you could code everything by hand, but why? Especially with JavaScript, where coding by hand frequently entails spending hours tracking down obscure browser bugs, and writing strange fork code to target various different browsers. jQuery code is short, concise, and readable. While it is something of a "black box," it's a black box with a ton of problems already solved. You're free to try and write your own, but be ready to dedicate 100s of hours (literally) to tracking down obscure bugs. It's not fun, but that's what frameworks are for. And besides, with jQuery, you can finally realize the dream of unobtrusive JavaScript. You can literally define app-wide behaviors in one file and be done with it. Think of what CSS does for you with styles. jQuery does the same with behavior. Welcome aboard! And if you have any questions, feel free to contact me directly (aim: outlookeic; gtalk: wycats@gmail.com). And check out my pretty version of the API at. See you around! Thanks for coming on over Yehuda. BTW Ben, Yehuda is also part of the jQuery project team, creatore of the Visual jQuery website, and publisher of the Visual jQuery online magazine. As you can see, we got your back with jQuery. ;o) Javi, you gotta join in on the fun man. C'mon! Just take a look at jQuery and let the addiction take over. :o) Thank you Ben for the kind words! :) I actually tried out scriptaculous, prototype, dojo, mooTools, YUI and YUI-ext as well as a half dozen others. The main things about jQuery for me are: 1) Its smaller than the others, and 2) It was much easier for me to learn. Re: your code. It might be interesting to show the different ways of achieving the same result. like: $(this).parent("div").children( ".on" ).attr( "class", "off" ); or $("#thumbstrip a.on").attr( "class", "off" ); or $("../a.on").toggleClass("on") //my syntax is messed up I think But ALSO, I just looked at your code. One of the great benefits is to NOT put onclick="...". all you need to do is: $("#thumbstrip a").click( function() { ShowPhoto() } ); Then your A looks like: <a href="#" rel="17" class="off"> <img src="images/people/People_17_thumb.jpg" height="62" alt="" /> </a> And your showPhoto can get the proper number with $(this).attr("rel"). The real benefit is to make truly semantic markup without a trace of interactivity mixed up with the code. Ooops, Ethan needs me to walk him to school. Thanks again. PS. Get on the jQuery mailing list. It is as much fun as jQuery itself. discussion list! Regards Jörn Zaefferer, lead jQuery developer @Javi, No, my applications generally do not hinge on Javascript, which I think part of why I have not gotten super into web2.0 style javascript just yet. But, it would be nice to make stuff, especially proprietary apps like Admins more dynamic and responsive. @Yehuda, Thanks, and Visual jQuery is awesome insane, but you already know that :) I agree, why re-invent the wheel when it comes to silly things like searching for a node... who wants to waste time on that. jQuery does such an awesome job of it anyway and the API as I read, is just too cool. @Glen, Thank you for pushing me! Right, I forget that if I have an ID on an element, I can access it directly. However, I wonder, if I can get it from the parentNode, is it going to be faster (nanoseconds) to go up on in the DOM rather than search for the node with a given ID. I don't know how this stuff happens behind the scenese. I really like what you are saying about the rel="" in the A tags and then getting it that way. Very slick. I really like this hyper-structured HTML. Better signal to noise ratio (as Peter Bell might say). But where do I set the onclick? Is that in the "Ready" function for the document? @Jörn, Thanks for creating jQuery :) I am sure I will find more and more way. Step 1: learn the API inside and out so that I can see the possibilities. Step 2: Write some seriously sweet-ass code. Thanks all! Check this page: Read the part about Hello jQuery. It shows a perfect example of how the click function is bound. re: Faster: Finding by ID is pretty fast actually, but I have been following a "NO MORE IDs" rule in my own code. So it's not about speed. Actually it is sometimes slower. But the signal-to-noise ratio improves tremendously. Also it's much more scalable. Example: Let's say you wanted to add another row of image-links. If you avoid the ID and just use a class like "image-selector" for the P then you can bind both and work with both just as easily. Relative pathing is more scalable because it makes you keep patterns in your markup and seperates the specific-ness of the jscript into something more generic. So your markup can become REALLY small and very interactive and your code can be very abstract to handle lots of situations. Does this make sense? Glen, That makes a lot of sense. If I force myself to worry about structure and not names of specific objects, the XHTML will naturally become more readable I am sure. How timely. My boss just took a peek at my current application and now wants me to give a brief presentation tomorrow on Jquery :) That should be fun considering I'm just getting started myself! Hey Jim. Just use the links that I posted here for info on the library. Also, remember that its jQuery with a lowercase "j". :o) If you need some help, ping me. Rey... See! I can't even spell jQuery correctly and they expect me to give a presentation on it!!! I'm doomed! :) I will certainly grab your links - thanks Rey! As a testament to the simplicity of jQuery - my code is all very simple - mostly show/hide/toggle stuff that manipulates my display. At some point I'll need to look at doing some Ajax stuff - at which point I'll take a look at AjaxCFC :) Just FYI: You could have used siblings() instead of parentNode and children. $( objSource.parentNode ).children( ".on" ).attr( "class", "off" ); would become $( objSource ).siblings(".on").attr( "class", "off" ); Depending on what you're doing, you might have done: $( objSource ).siblings().toggleClass("on").toggleClass("off") Hey Ben, Welcome to the wonderful world of jQuery! I'm also on the jQuery Project Team, and I run the Learning jQuery blog: Like you, I was totally sold on jQuery's selector power. Speaking of which, here is another way you could do the image swap (untested): $(document).ready(function() { $('#thumbstrip a').click(function() { $(this).attr('class', 'on').siblings('a').attr('class', 'off'); var newSrc = $(this).attr('href'); $('#photoframe img').fadeOut('slow').attr('src', newSrc).fadeIn('slow'); }); }); It's nice sometimes to be able to select siblings rather than go up to the parent and back down to the children. Also with this code, you can take the onclick handlers out of the markup. I hope you're not feeling overwhelmed by all the jQuery love raining down on you. :) One more note: even though a lot of things about the jQuery library itself sold me on it from the start, the main thing that has kept me using it and even contributing to it has been the fantastic community of smart, passionate, and kind web developers. If you get really stuck on something, you can always post a question on the discussion list:. Don't be surprised if you get four or five helpful answers within 30 minutes. cheers, Karl When using $(document).ready(), make sure that your custom javascript file (in your case, scripts.js) comes after jquery.js in the head. D'oh! Yehuda beat me to the .siblings() section! By the way, if you're sure that every sibling of the clicked element will be an <a>, you don't need to filter by "a" (or by ".on") in the .siblings() method. Just doing $(this).siblings().whatever will work fine. Here is one way to rewrite the line... $(objSource.parentNode).children(".on").attr("class", "off"); ...using Mootools (my personal preference in JS libraries): $(objSource.parentNode).getElements(".on").each(function(el){ el.className = "on"; }); While it took a little bit more code, that approach makes more sense to me anyway. And I'm sure I could give plenty of examples where Mootools' snytax would win out on brevity. (Also note that I could of course use one of Mootools' functions to set the class, but that would be unnecessary here.) As for which library has the best support in class for working with the DOM, I'd have to hand that to yui with yui-ext. See, e.g., Oops. el.className = "on" should instead be using "off", obviously. :) And if you only wanted to remove "on" and add "off", leaving other classes alone, you could use el.replaceClass("on", "off"); I'm sure jQuery has something like that, too. @Steve:. jQuery, as acknowledged by Alex Russel of Dojo, has the richest API out. @Karl, If you want to do image swapping, check out the swapit plugin. It can do backgrounds as well. @Steve: Bare with me on this but I had a look at the code you posted: jQuery: $(objSource.parentNode).children(".on").attr("class", "off"); Mootools (my personal preference in JS libraries): $(objSource.parentNode).getElements(".on").each(function(el){ el.className = "on"; }); and then was perplexed by this comment you made: "examples where Mootools' snytax would win out on brevity.". @Yehuda, Karl, Ahhh, thanks. See, its just a matter of knowing the API inside and out to be able fully leverage that stuff. All this talk is only gonig to fuel my excitement for this. Also, I wanted to say that using the document Ready function is really cool because it forces you to create HTML that will display properly without Javascript (hopefully). I think that is the right mentality from the get-go. @Ben: I would definitely read through some of the sections of the API (you can get a good look at it at visualjquery.com) top to bottom. Most important, imho: Core (of course), especially gt, lt for traversing DOM Attributes->attr, text, html, and val DOM Manipulation top-to-bottom DOM Traversing->filter (look at its various forms), siblings, children, parents, prev, next, and (importantly) end Javascript-> $.grep and $.map Events-> Read the docs for bind, unbind, trigger, one, hover, and toggle Effects-> Read animate carefully Ajax-> load is really handy, but read through $.ajax, which is the root for all jQuery Ajax. Read through the functions starting with ajax (i.e. ajaxStart) for some handy callbacks. getJSON and getScript are handy as well. Plugins-> Check out Interface, for sure, as well as forms (extremely important), dimensions, and metadata. Hope that helps! .] --Rey Bango That's why I wrote: "While it took a little bit more code, that approach makes more sense to me anyway." I mentioned there being instances of the reverse simply as an afterthought. If you only read parts of what I wrote, it just confuses people. ;-) ----- .] --Rey Bango I was talking about breath of support for working with the DOM in general, not just querying against it. I could certainly be wrong about that, but it is the impression I get while reading through the YUI and yui-ext docs. Also, speed is a big deal. However, the link to Jack's blog post about DomQuery's speed wasn't really an attempt to prove my claim... perhaps I should've linked to all the docs for YUI and yui-ext's DOM-related functionality instead. While I've mentioned that Mootools is my personal favorite JS library, I think that YUI easily has the best thought out and most smartly implemented modular, object-oriented design. Many others seem to agree with this, and e.g., Ajaxian predicts that this year it will become "the standard weapon of choice among mainstream developers seeking a pure Javascript framework." ( ) Many developers will certainly still prefer the easy of use of other libraries like jQuery and Mootools, though (and that's no sleight against either library's power, which they both have in spades). Yehuda, Thanks for the research tips. I will most definitely be taking this into account. "I mentioned there being instances of the reverse simply as an afterthought. If you only read parts of what I wrote, it just confuses people. ;-)" You sir are correct. I missed that one part which is why *I* was confused. So does that mean I'm confusing myself? :oP "Also, speed is a big deal. However, the link to Jack's blog post about DomQuery's speed wasn't really an attempt to prove my claim... perhaps I should've linked to all the docs for YUI and yui-ext's DOM-related functionality instead." Yep, that probably would've been a better idea. As for speed, remember that its Jack's extension, and not YUI's own built-in DOM methods, that have the speed improvements. DomQuery has actually turned out as a great motivator to align all of the major projects to produce a standard set of testing tools. We spoke with Alex Russell, Andrew Dupont, Jack & Dean Edward's and we're all collaborating on designing these tests to help improve the projects and get better test results. All of the tests to date (jQuery's, Jack's & Dojo's) leave something to be desired and John Resig kind of took the ball to get the standardization going. "Many others seem to agree with this, and e.g., Ajaxian predicts that this year it will become "the standard weapon of choice among mainstream developers seeking a pure Javascript framework." ( )" They also seem to like Dojo's new dojo.query() functionality as well (), so you need to take that with a grain of salt. "Many developers will certainly still prefer the easy of use of other libraries like jQuery and Mootools, though (and that's no sleight against either library's power, which they both have in spades)." We definitely agree on this. YUI certainly has some positives but files between file size and convoluted namespaces, it seems to push away as many developers as it attracts. Its not for lack of features or documenation. It has by far the best documentation I've seen and their pattern library is awesome. BTW, when will I see you doing some jQuery code? Don't you know this is all an elaborate ploy at getting you to come on over? :o) I have been using YUI with YUI-ext for the last couple of months. I have to say, it's alot harder for me to learn. Keep in mind, I am not a programmer. I barely can claim to be a hack. But I am having trouble doing basic things. I wish Jack's API had examples of the syntax so I could cut/paste. Right now, most "comparisons" are focused on speed. I don't think speed is the key factor. I don't think how "short" a line of code should be the test either. I love jQuery because it was so easy to learn for non-programmers. It's the same reason I fell in love with HTML, Tables and CSS. (In that order). One last thought: I think a comparison of file sizes is a big deal. YUI-ext with all the bells and whistles is 400k. "I don't think speed is the key factor. I don't think how 'short' a line of code should be the test either. I love jQuery because it was so easy to learn for non-programmers." --Glen Lipka Everyone will have differences of opinion about what's most important for them. That's one of the great things about having lots of options.... between jQuery, YUI, Mootools, GWT, Dojo, ASP.NET AJAX, Prototype, MochiKit, Rico, Qooxdoo, etc., there's something for almost everyone. "I think a comparison of file sizes is a big deal. YUI-ext with all the bells and whistles is 400k." --Glen Lipka That's misleading. Actually, the 50 or so JS files which comprise the latest version of YUI-ext probably total more than that, even in minified form (unless you were to run them through Dean Edwards packer, or similar). However, since they're modular, there's no reason you should ever be loading all of that on any given page, unless you really needed every last included feature / widget. However, I do generally agree that JS file size is a very big deal, though the weight of most any library out there becomes very manageable through minification/gzipping. Here's a blog post from the YUI team about the page weights of their library: "BTW, when will I see you doing some jQuery code? Don't you know this is all an elaborate ploy at getting you to come on over? :o)" --Rey Bango Heh. I'd certainly be interested in gaining more first hand experience with several JavaScript libraries, jQuery among them. So few hours in a day, though... Earlier, I wrote: ---------- Here is one way to rewrite the line... $(objSource.parentNode).children(".on").attr("class", "off"); ...using Mootools (my personal preference in JS libraries): $(objSource.parentNode).getElements(".on").each(function(el){ el.className = "on"; }); ---------- .... Steve said: "...." The best jQuery syntax is what I suggested above: $( objSource ).siblings(".on").attr( "class", "off" ); Additionally, your syntax will actually find all *descendants* of objSource.parentNode with class "on", while he only wants to find direct children. You might be able to do: $ES("> .on", objSource.parentNode).setProperty("class", "off"); which is still longer than: $( objSource ).siblings(".on").attr( "class", "off" ); Ah, sorry, I'd missed your previous post. Okay, so jQuery wins this round in code brevity. :) Doh! Karl Swedberg had also mentioned siblings(). I guess I just read the first few comments before posting. The siblings method seems quite convenient, and doesn't have an exact counterpart in MT. For the record, while MT can certainly use the direct child ">" CSS operator when querying against the DOM, it can't (as of v1.0, anyway) use it as the first character in the string within the $ES() function. @Steve: "it can't (as of v1.0, anyway) use it as the first character in the string within the $ES() function." Sounds like you need to look at jQuery! doh! Couldn't resist. I finally went through this post/comments and cleaned up all the links. @Steve: This is why I would never use MooTools: I'm am floored that Valerio, the lead developer for MooTools, would reply to someone in this fashion. @Rey: If you'd called Valerio (or the entire Moo dev team, excepting possibly Aaron from cnet) an elitist prick, I'd agree with with you. Of course, that doesn't change the qualities or benefits of either library (aside from the jQuery forums/community being more helpful to JS beginners). @Steve: He doesn't start off by calling him "elitist". It just flows in that direction after the replies that he receives. I know I was shocked by some of the replies from the team but I guess each group has their own personality. I just know that if I received this reply while evaluating a library: 'If you dislike the website, or how mootools builds himself with dependancies, or find it confusional, well, its your right to go and use an easier framework." it'd be an instant turnoff. It might just be me but basically telling someone to bug off just doesn't sit well with me. I'm just more in tune with helping users than being difficult.. "Of course, that doesn't change the qualities or benefits of either library" Unless of course you need MooTools support at which point you should put on a flak jacket and venture into the their forum. ;o) @Rey: I called Valerio (and most of the Moo dev team) elitist pricks. That doesn't usually bother me personally too much, though (to each his own, I suppose). Now I just wonder why you're going on about this. I'm not here to evangelize Mootools, so I'll respectfully bow out from this thread. I recognize that jQuery is a very good or even the best choice for many people. @Steve: I reread your post after one of my buds pointed out that I may have misunderstood what you meant. It sounds like you're in agreement about the MooTeam not being very nice at times. If that's what you were trying to say, the just disregard the whole section where I wrote: ." Unfortunately, the internet leaves something to be desired when it comes to conveying ideas so I may have misunderstood what you were saying. @Steve: Looks like you replied already and I misunderstood what you were trying to convey. "Now I just wonder why you're going on about this. I'm not here to evangelize Mootools, so I'll respectfully bow out from this thread." I thought we were having a nice exchange and in much the same way you *were* evangelizing MooTools earlier, I'm just doing the same. I felt that you actually might be interested in that type of info, especially since you are recommending MooTools to others. Its the type of info that I think is important for new users to know about. If you'd like to bow out, I understand and I'll do the same. "Unfortunately, the internet leaves something to be desired when it comes to conveying ideas" Yeah, I know what you mean... rereading it feels like I didn't convey some things very well. "I thought we were having a nice exchange and in much the same way you *were* evangelizing MooTools earlier, I'm just doing the same." Fair enough. ;-) Obviously, I've withdrawn my "bowing out," since there's no real reason to do so. Rock on... "rereading it feels like I didn't convey some things very well." - Me Even that probably did not come out very clearly (I should proofread my comments). :) This might make a little more sense: "After rereading my comments, it feels like I didn't convey some things very well." Oh well.... I was talking to someone the other day on IM about jQuery and this it what it came down to... the guy I was talking to is not a programmer. He looked at YUI, MooTools, Dojo, Prototype, and all the other stuff out there and the one that clicked with him is jQuery. What does this mean? It means that people that do not have a formal education in programming or any heavy programming experience can still create very robust and functional Javascript applications using jQuery. I think that is very cool and an excellent testament to the easy of use and intuitive design the jQuery provides. I think its almost a moot point for us programmers to argue about what is best as, to be honest, we could probably rock it hard core with any library. But when someone comes along who is not a programmer and can rock it hard core .... that is something special. Absolutely right comments means suggestion of some words but some people write a article bigger than the orignal God knows if you've updated this, but I was finaggling an image swap of my own that shows an initially hidden div: $(document).ready(function() { $('div.formHelp > div').hide(); $('div.formHelp > img').click(function() { $(this).next('div').toggle('normal'); $("img").attr("src","/images/css/formHintButtonSelected.gif"); return false; }); }); Only problem I have left to fix is to make the image SWAP instead of just changing once... damn my weak JS skills! @Bax, To get things to swap back and forth, you might want to try binding the OnMouseOver / OnMouseOut ... or OnMouseDown / OnMouseUp event rather than the Click event. The click event happens only once - there is not reverse function for it. @Bax and Ben, jQuery does have an "every other" method that triggers on click: .toggle(fn1, fn2). This one sometimes gets lost in the mix because jQuery also has an effect method called .toggle('optionalSpeed'), which I see Bax is already using. That said, it might make more sense in this situation to change the src of the image through if/else statements. Here I'm using the conditional operator, which should achieve the same effect (untested): <pre><code>$(document).ready(function() { $('div.formHelp > div').hide(); $('div.formHelp > img').click(function() { var $this = $(this); var thisImage = $this.attr('src'); $this.attr({ 'src': thisImage == "/images/css/formHintButtonSelected.gif" ? "/images/css/formHintButtonUnSelected.gif" : "/images/css/formHintButtonSelected.gif"}); $this.next('div').toggle('normal'); return false; }); });</code></pre> One other thing I should probably point out: On the first inside the click handler, I set a variable for $(this). That's just a little optimization technique to avoid having to repeatedly create a jQuery object. oops. I guess the comment filter didn't like my pre and code tags. sorry about that. Thanks for sharing! Here was my simple jQuery image Swap: $('img#nav1').hover(function() { $(this).attr("src","/images/headerNav/1-over.gif"); }, function() { $(this).attr("src","/images/headerNav/1.gif"); }); I can create and never have to worry about again. Of course Jquery is endless! But I wouldn't suggest giving up the language now that there's an easier way to do it. That's like not knowing how to hand code because Dreamweaver exist. There are also a bunch of other API's and libraries out there. Spry is another really big one you may want to look into. At the 2.0 expo I assumed Jon Resig was just going to give us a bunch of JQuery tutorials, but in fact he taught us advanced JS techniques, and rarely spoke about JQuery at all. @Justin, Just out of curiosity, when you say you have 80 other actions, what kinds of stuff are you referring to? jQuery is not meant to replace all Javascript - rather, it's meant to augment your ability to write more, better Javascript. So, I just wasn't sure if you are referring to "holes" you are seeing in the jQuery library, or if you are simply referring to scripts that run on top of jQuery? Great script! I am trying to incorporate it into a site setup with a static height that scrolls horizontally. I have a main container DIV for the site, and then module DIVs (class="module") that contain a header and the columnized text. These module DIVs are floated left to line up horizontally, but? Thank you for another great article. Where else could anyone get that kind of information in such a perfect way of writing? I have a presentation next week
https://www.bennadel.com/blog/513-you-really-shouldn-t-be-here-jquery-my-wife-might-begin-to-suspect-something.htm
CC-MAIN-2018-39
refinedweb
6,780
74.29
Auto-Tagging Jekyll posts with Zemanta More for the purposes of associating posts and building my custom search engine, but also for SEO, I’ve been adding semantic keywords to my Jekyll posts. The result is similar to my old AutoTag bundle for the TextMate blogging bundle. It creates a keyword block for my post in addition to my curated tags which contains top-level topics and can be used in Open Graph keywords, keyword meta and for search and related post association during site generation. I keep my keyword YAML separate from “tags” because I use it in different ways under different circumstances. In your templates you can easily choose to combine them or use them separately, so there’s no harm in having the extra header. This post details the process of adding keywords to new posts. I also used the same technique to back-catalog all of my previous posts. I use a service called Zemanta to analyze my content and determine the appropriate tags. It’s very good, but sometimes still requires a bit of manual editing after I run it. It’s still faster than doing it by hand. To get started you’ll need an API key. Don’t worry, for your purposes this is entirely free. Create an account at Zemanta, then register an application to get the API key. Next you just need to install the “zemanta” gem ( gem install zemanta). Add it to your Rakefile with (at the top after the hashbang): require 'rubygems' require 'zemanta' Now you can easily pass your post content to Zemanta and get back an easy-to-parse array. I run this as part of my “publish” task, which moves a post from source/_draft into source/_posts and adds this kind of meta to the YAML. The script below illustrates this section. It extracts the YAML headers from the post, adds the keywords and sticks the headers back in. Insert your Zemanta API key at line 7 where the Zemanta.new object is created. require 'yaml' require 'rubygems' require 'zemanta' # gem install zemanta def get_zemanta_terms(content) $stderr.puts "Querying Zemanta..." zemanta = Zemanta.new "xxxxxxxxxxxxxxxxxxxxxxxx" suggests = zemanta.suggest(content) res = [] suggests['keywords'].each {|k| res << k['name'].downcase.gsub(/\s*\(.*?\)/,'').strip if k['confidence'] > 0.02 } res end desc "Add Zemanta keywords to post YAML" task :add_keywords, :post do |t, args| file = args.post if File.exists?(file) # Split the post by --- to extract YAML headers contents = IO.read(file).split(/^---\s*$/) headers = YAML::load("---\n"+contents[1]) content = contents[2].strip # skip adding keywords if it's already been done unless headers['keywords'] && headers['keywords'] != [] begin $stderr.puts "getting terms for #{file}" # retrieve the suggested keywords keywords = get_zemanta_terms(content) # insert them in the YAML array headers['keywords'] = keywords # Dump the headers and contents back to the post File.open(file,'w+') {|file| file.puts YAML::dump(headers) + "---\n" + content + "\n"} rescue $stderr.puts "ERROR: #{file}" end else puts "Skipped: post already has keywords header" end else puts "No such file." end end To test, you can point Rake at a post and add keywords by running rake add_keywords[path_to_post]. Now you can utilize the “Keywords” payload in whatever way you like. I use them, for example, in my Open Graph headers. In head.html I have a line: {% if page.keywords %}<meta name="keywords" content="{{ page.keywords | keyword_string }}">{% endif %} So, if the page has keywords on it, it runs this from my plugins folder: module Jekyll module Filters def keyword_string(keywords) keywords.join(" ") end end end I also include them in the Open Graph tags for a post, also in head.html: {% if page.keywords %}{{ page.keywords | og_tags }}{% endif %} which calls: module Jekyll module Filters def og_tags(tags) tags.map {|tag| %Q{<meta property="article:tag" content="#{tag}">} }.join("\n") end end end I’ll be covering my Open Graph system for Jekyll soon. Lastly, I include them in the JSON file I use for my site search (still in progress). Hopefully some Jekyll users will find this useful. Note that the tags returned by Zemanta are generally 90% correct with a couple of superfluous tags that won’t hurt but could be removed.
https://brettterpstra.com/2013/03/23/auto-tagging-jekyll-posts-with-zemanta/
CC-MAIN-2021-10
refinedweb
699
66.94
User talk:80n Contents - 1 Osmarender - Neat Stuff! - 2 Streetmap for mobile phones - 3 Wiki namespace - 4 NaviGPS bicycle mount - 5 Tehran map - 6 Getting Text Right - 7 UTF-8 Support broken - 8 Turning circle in Osmarender - 9 Abstention on Wreck Proposal - 10 Woking Mapping Party invite - 11 I can probably help with Osmarender text labels - 12 Connecting Wikipedia and OSM via OSMXAPI - 13 OSMhack - 14 XAPI: gz-compressed output - 15 XAPI - 16 XAPI - bounding box with relations does not work correctly - 17 XAPI bugs - 18 XAPI Server - 19 XAPI status ? - 20 Xapi output precision - 21 XAPI setup Howto Osmarender - Neat Stuff! Osmarender is definitely Neat Stuff. I think it's great that your renderings look good enough to decorate Wikipedia articles. That kind of thing will surely attract more energy and enthusiasm from Wikipedia contributors to the OpenStreetMap project. From a quick read of your description it looks like an elegant way to implement this kind of thing as well. I shall have take that XSLT for a spin myself some time. -- Harry Wood 23:46, 22 Mar 2006 (UTC) - Harry, I'm glad you like it :) XSL is powerful stuff but can be tricky to get to grips with. The latest version of Osmarender is driven by a simple rules file so you don't have to know any XSL at all to use it. 80n 23:49, 22 Mar 2006 (UTC) Streetmap for mobile phones I've written a small Java-program for mobile phones and it is available on the MMVLWiki now. The software doesn't store meta-information, but it allows you to set a target marker, so that one knows the direction, even if the target location is not visible on the screen. It would be desirable of course, if the map would also show the street names. -- Wedesoft 16:57 BST 02.04.2006. - Jan, this looks really cool. If I can figure out how to do java on my phone, I'll download it and try it out. Street names will come, right now not many roads have been named... 80n 16:24, 2 Apr 2006 (UTC) - I've released a new J2ME Weybridge streetmap using the new multiresolution data and the street names. It only goes up to zoom-level 15 (otherwise the package would be around 10 MBytes). Let me know if someone wants a larger map. -- Wedesoft Sat Mar 24 22:34:55 GMT 2007 Wiki namespace What's the plan with occupying wiki page names such as Press and Media and Accommodation just for that weekend event in Rutland, England? You can easily keep those items within the single page. And if they really need pages of their own, make them subpages or give them unique names pertaining to this event or place. --LA2 00:48, 25 Aug 2006 (BST) - Someone created links to some non-existent pages, I just followed the links and added some detail without any thought or planning. I agree that this is not well organised and I'll see about changing it. 80n 08:28, 25 Aug 2006 (BST) - Perhaps not ideal but then I didnt see any harm in having pages with these headings as they can be used for more than one event and have content deleted once done. A lot easier than trying to jam it all onto one page. I'm easy though. If someone has a better layout approach go for it :-) Blackadder 08:45, 25 Aug 2006 (BST). - I have renamed them as sub-pages beneath the project. This makes me think that we need to start thinking about the difference between a project page and a "production" page. Compare: WikiProject Rutland England with Isle of Wight and Walton on Thames. I think the Isle of Wight page should be the finished product whereas WikiProject Isle of Wight should be the planning and progress tracking page (and Isle of Wight workshop 2006 should probably be a sub-page or merged into the WikiProject page). But then, should there be a production page for everywhere that gets mapped? I suppose they would be showcases or galleries of different renderings of places of interest. At the moment, the main map server is so slow that this is the only reasonable way that members of the public can see anything presentable or useful. 80n 11:58, 25 Aug 2006 (BST) Hi. Could you please let me know how you broke the bicycle mount for your NaviGPS, and jus how flimsy it is? I'm thinking of getting one for geotagging photos and am wondering whether to bother with the bike mount. Also I take it from the review that your experience of the NaviGPS been good overall? Abarnes 22:57, 4 September 2006 (BST) - My experience overall has been excellent. It broke because first I did a lot of cycling over very rough terran and the screws that attach the bracket to the GPS worked loose. I tried to overtighten the screws to fix this, resulting in me cracking the bracket around the screw hole. A dollop of glue has fixed the problem and its been fine ever since. 80n 09:26, 5 September 2006 (BST) Tehran map Nice work on Image:Tehran-university-persian.png. It seems that there a few rendering problems of the labels, possibly related to inkscape. I will look into it. BTW, it seems that while osmarender4 renders universities just fine, tiles@Home doesn't. Roozbeh 12:27, 2 March 2007 (UTC) Getting Text Right Hi. I'm interested in the Getting Text Right project from Things_To_Do. How is the current status of this project? I believe that the text should not only be abbreviated, but also moved to fit. Do you have any ideas of "where" this should be done? I'm not sure if postprocessing the SVG file is the best place to do it, but I can't think of another place where it could be done. You can email me in the same user login at gmail.com.--Gfonsecabr 17:32, 2 December 2007 (UTC) UTF-8 Support broken UTF-8 support is broken in osmxapi. See: Is this opensource? Maybe I can fix it. - Where is the osmosis error description? Half of the world has encoding problems. Only ascii and iso-8859-1 works. 99% of the characters are not supported at the moment. People start to name chinese cities with latin chars - that looks realy strange for asian people. I like the osmxapi and hope it comes back soon. Great work! Turning circle in Osmarender I've started using T@H, and thus Osmarender for rendering. So far everything's good, but there's one thing that would be a nice rendered feature. I've been tagging plenty of nodes with [[turning_circle]], but Osmarender doesn't recognize it yet. I tried editing osm-map-features-z17 myself with the following rule (it was exactly the same as h-u-o, except that I doubled the line stroke size) <rule e="way" k="highway" v="unclassified|residential|minor"> //existing rule ++<rule e="node" k="highway" v="turning_circle"> ++ <line class='highway-core highway-unclassified-core-turning-circle' /> ++</rule> This didn't work as intended; instead it made the whole road (not just node) larger and gray (#777777). Can you help me with the proper syntax / rule? Thanks. Alexrudd 02:59, 21 February 2008 (UTC) - Take a look at how mini-roundabouts are done. Turning circles should be very similar. 80n 10:55, 22 February 2008 (UTC) Abstention on Wreck Proposal I have an interesting situation with the wreck proposal. It has 8 votes approving and 1 abstention (from you). To be approved, it requires 15 votes total with a majority or 6 unanimous approving votes. Arguably, your abstention will keep the voting open until I get more votes because it is not unanimous. If you do not have any concerns with the proposal, could you move your abstention to being a comment? That would save me time. Otherwise I will carry on collecting votes. Regards --TimSC 09:21, 17 March 2008 (UTC) Woking Mapping Party invite Hi Etienne, Big thanks for the invite! I'd love to be there, unfortunately I'm leaving that weekend for a three month cycle tour of Norway with the intention of mapping the scenic bits above the Arctic Circle for OSM. Appreciate you are really busy but if you know anyone from the project who's been working on Norway or Sweden I'd be very grateful for an introduction via email to them. No worries if you can't. I'm kicking myself for booking this trip on a weekend when you are less that five miles away from here :-( JerryW 03:20, 17 May 2008 (BST) I can probably help with Osmarender text labels I read on Things_To_Do#Task:_Getting_Text_Right That there's a need to know how big rendered text will be. I could write a script that would generate statistics on the sizes of characters and strings is various fonts, and then work out an algorithm that could accurately guess the size of strings once rendered in inkscape. I'd probably happily go that far, but I'm not interested at this point in working directly with the Osmarender source at this time. Is this immediately useful to you? If so I have a few questions: 1) what fonts are used? (Probably best if at some point I have the actual SVG/CSS code that Osmarender sends to inkscape.) 2) What character set should I use? This could come later I suppose, I'll probably use a subset of ascii for testing more quickly anyway. 3) I'm hoping that the final output of my scripts will be a file containing a function like get_rendered_size(text, font) which you can include and call from Osmarender. What language should it be in? Hope to hear from you soon, JasonWoof 14:54, 22 May 2008 (UTC) Connecting Wikipedia and OSM via OSMXAPI Hello, I sent you an email but perhaps the mail was blocking by spam-filter or so. At german Wikipedia I'm working on projects like Wikipedia-World: So the geocoding of point objects is really successfully at Wikipedia. But we have a problem with longer objects like streets, rivers, railway and so on. Thats the point where we want to link to single OSM objects. I think both projects can pofit from this plan and it's better than to collecting the same datas in wikipedia again. User:Dschwen and I created the: which creates a query to OSMXAPI, put the answer into an OSM-to-KML XSLT-processor, zip it to KMZ, cache it, and put it into GoogleMaps or Google Earth. The script is not in a final version, but it's running for paths. As result we can see an object like river Havel clearly in the Browser: So it could be a good completion to: But it can be also goog to find gaps in the osm-datas. So now my question: It seems that we have a problem with the connection limitation per user of the OSMXAPI. We want to make many but relative small queries. So could you raise the limit for Toolserver of the wikimedia were we work: I believe the IP is 91.198.174.194 for our server Hemlock. The next big problem is that it seems that we we can't query objects with a whitespace in there name. So I try to get "Prager Straße" in Dresden but don't get it. An important point for wikipedia is that we should try to get a long-time support for the parameters we would write in many articles. This shouldn't change too often. A other way which could also reduce the traffic volume by factor 10 would be to move our scripts to our server on informationfreeway.org . But then we would like to get an account to caring this scripts in future. The source code you can find here: I hope you can help us. --Kolossos 07:17, 29 May 2008 (UTC) OSMhack Hello, I want to go online with my script. Therefore I write a little bit documentation: Query-to-map There, I also describe the additional OSMhack script which I want to integrate in Template:Place I hope your API/your server is ready for this load. Is it? To avoid this load-peak I wouldn't write to mailing list or so. In the moment the performance seems really good. At the toolserver we get the last days new hardware so the load there seems to be no problem in the moment. --Kolossos 07:09, 27 June 2008 (UTC) XAPI: gz-compressed output Hey George. Can you tell me, whether gz-compression for the XAPI service is supposed to work? I have got a ruby script called OSM-Wolf (available via Bitbucke/Mercurial) running here, which fetches data from the XAPI service for analyzing. Although the author plausibly assured me the script would support gz-compressed XAPI data, the XAPI server (hypercube) seems to send uncompressed data, even if gz-compression is explicitly requested. Best Regards, --Claas A. 20:07, 2 August 2009 (UTC) XAPI Hello George, I found out that you are the "Maintainer of XAPI" :-) The only thing about the OSM XAPI Server I found out, is "Currently serving data as at 0.5 cut-off. 0.6 service will start shortly." In the "platform status". Do you know, when this service will be available again? osmxapi.hypercube.telascience.org seems the last server and today I don't get a connetion. Thx Softeis 14:18, 5 November 2009 (UTC) XAPI - bounding box with relations does not work correctly With the following request I get data from the hole world. please try:[bbox=9.5,52,9.6,52.1][type=restriction] --Langläufer 22:51, 11 December 2009 (UTC) Now it is compleetly wrong. I mostly get none of the requested objects. () In the sample I request the [bbox=9.5,52,9.6,52.1]. The result is much smaller than before (75KB instead of 2.8MB), but mostly the longitudes are far from being between 9.5 and 9.6 and restrictions are not so large. e.g. lat='51.4277078' lon='-0.0115522' lat='56.2396279' lon='42.0909317' lat='52.0284373' lon='11.2171290' --Langläufer 11:38, 13 December 2009 (UTC) - The algorithm I was using was a bit crude. Draw a box that encloses the relation and then see if it overlaps with the requested bbox. It works fine for short ways, but doesn't work so well for relations which tend to span large areas. I've now implemented a better algorithm which looks at each node to see if any are in the bbox. This should work better for you. Let me know. 80n 19:18, 14 December 2009 (UTC) I tryed something new at. Here I also want to draw relations and ways with the piste:type=nordic tag. The result of the request: .../api/0.6/*[piste:type=nordic][bbox=...] did not contain all refered ways of the relations. Maybe the same error? Now I use two request, one for ways, and one for relations - it works fine - but with only one request I could save transfer volume. --Langläufer 10:07, 12 January 2010 (UTC) XAPI bugs Sorry that I misused the Platform status for my bug report. I hope here I'm right. I used this command: or. Both give me a file with only nodes which are used in different admin_level relations. After ca. 150 MB the download interrupts with the error message "<error>Query limit of 1000000 elements reached</error>". I expect to get a file with only relations (admin_level=2) and their members. I hope it is possible to fix this behaviour and if you need more information you can write me an email over my OSM accout, too. Thanks, Daswaldhorn 11:30, 5 December 2009 (UTC) XAPI Server Hi, As the Xapi Server's are not that realiable, I would like to setup an instence myself. Unfortunately, the source code isn not available anymore under the given links. Is there any chance to make them public again? Is there a howto: on setting up an XAPI server instance? best - They should be much more reliable now. Let me know if you have problems. 80n 07:25, 27 December 2009 (UTC) - Hi, thanks a lot. It would be great if you can provide me with the source code and a short description of how to get it run. best - How can I contact you? Where can I get XAPI sources? link on xapi.openstreetmap.org appears to be broken. thanks in advance. --Komяpa 18:45, 6 April 2010 (UTC) See platform status --Lulu-Ann 21:07, 6 April 2010 (UTC) XAPI status ? I noticed that isn't responding since ~2 days and I have the "impression" (sorry, can't be more specific) that more and more lags behind the OSM database. Is this just a subjective impression or could there be something wrong with the data replication from the OSM database? --Gubaer 22:32, 5 February 2010 (UTC) - Hypercube is down due to someone deleting the database. It'll be back in a few hours. Generally XAPI is up-to-date within 2 minutes of the main database. You can check the age of the data by looking at the xapi:planetDate attribute in the first line of the response. But I've just noticed that this only shows the date, it ought to show the time as well. 80n 13:50, 6 February 2010 (UTC) - In the past the time was there written, my last complete date was "xapi:planetDate='200911201903'". So I have a little hope, that this could be fixed in the future again. Who can I ask for this issue? Daswaldhorn 16:32, 13 February 2010 (UTC) Xapi output precision Hi, dunno if you monitor the Xapi:Talk page, if not fyi: - - XAPI setup Howto Hi, I'd like to setup a mirror for the XAPI. Is there a detailed howto for this ? If not, I'd be willing to write one. Thanks Vdb 08:57, 7 September 2010 (BST)
http://wiki.openstreetmap.org/wiki/User_talk:80n
CC-MAIN-2016-50
refinedweb
3,054
72.46
ControlAdapter.OnInit Method Namespace: System.Web.UI.AdaptersNamespace: System.Web.UI.Adapters Assembly: System.Web (in System.Web.dll) If there is an adapter attached to a Control object and the OnInit method is overridden, the override method is called instead of the Control.OnInit method. Override OnInit to perform target-specific processing in the Initialize stage of the control lifecycle. Typically, these are functions that are performed when a control is created.Notes to Inheritors When you inherit from the ControlAdapter class and the adapter overrides the OnInit method, the adapter must call the corresponding base class method, which in turn calls the Control.OnInit method. If the Control.OnInit method is not called, the Control.Init event will not be raised. The following code sample derives a custom control adapter from the ControlAdapter class. It then overrides the OnInit method to set a property on the associated control and call the base method to complete the control initialization. using System; using System.Web.UI; using System.Web.UI.Adapters; public class CustomControlAdapter : ControlAdapter { // Override the ControlAdapter default OnInit implementation. protected override void OnInit (EventArgs e) { // Make the control invisible. Control.Visible = false; // Call the base method, which calls OnInit of the control, // which raises the control Init event. base.OnInit(e); } }
https://msdn.microsoft.com/en-us/library/system.web.ui.adapters.controladapter.oninit.aspx
CC-MAIN-2015-32
refinedweb
213
60.21
Re: YACD - level 40 power level 68 - From: Twisted <twisted0n3@xxxxxxxxx> - Date: Sat, 28 Jul 2007 18:51:52 -0000 On Jul 28, 12:07 pm, camlost <joshua.middend...@xxxxxxxxxxxxx> wrote: If there is trouble while waiting to exit self knowledge, then the problem must be buried in the inkey function somewhere. Well this is suspicious: /* Hack -- get bored */ if (!Term->never_bored) { /* Process random events */ (void)Term_xtra(TERM_XTRA_BORED, 0); } It's in Term_inkey. I wonder if it b0rks if it happens while viewing a temp file instead of while at the main game UI? This section starting with "/* Hack" is definitely suggestive... :) This leads by a convoluted path including function pointers to Term_xtra_win_event(0) on Windows (the only affected port so far as I am aware), and thus to if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) { TranslateMessage(&msg); DispatchMessage(&msg); } There are only two occurrences of the substring PeekMessage in the entire source directory (recursive) -- two calls in main-win.c, the other being in Term_xtra_win_flush. So it must be a library call. And it's not one familiar to me. Indeed all three functions look to be Windoze API calls. The actual waiting loop seems to be: while (Term->key_head == Term->key_tail) { /* Process events (wait for one) */ (void)Term_xtra(TERM_XTRA_EVENT, TRUE); } in Term_inkey(). This leads to almost identical winapi calls: if (GetMessage(&msg, NULL, 0, 0)) { TranslateMessage(&msg); DispatchMessage(&msg); } Problem being there's nothing here to suggest a problem, and especially one localized to the specific case of being in a text browser instead of the game proper. Self-knowledge seems to go through something called pause_line to display the prompt in the specific case of self-knowledge. This calls inkey(ALLOW_CLICK). Leads right back to Term_inkey. The main-sdl version of Term_xtra_foo_event seems to be similar to the regular-windows version but with if (SDL_WaitEvent(&event)) { /* Handle it */ error = sdl_HandleEvent(&event); } else return (1); for the blocking version and a nearly-identical PollEvent version for nonblocking. If it's failing here it's an SDL bug. (I don't know whether the people experiencing crashes are playing the SDL version or not.) Text-file viewing and help-viewing leads eventually to prt("[Press Space to advance, direction keys to move, ESC for previous, '?' to exit.]", hgt - 1, 0); which I believe is where Phil said he saw his crashes in the "Sangband spontaneous exits" thread. That function call merely displays the prompt; it doesn't actually wait for a keypress. Right after it is this alarming code: /* Hide the cursor */ inkey_cursor_hack[TERM_MAIN] = -1; I wonder if this starts the bomb ticking? Then again it's only an assignment statement with no function calls... The actual waiting is once again in inkey(ALLOW_CLICK). Upshot of all this: I can be fairly certain the abend is happening inside either winapi GetMessage or inside SDL_WaitEvent. What causes it only in these particular places when every single wait-for-a-key goes through same remains a mystery but inkey_cursor_hack might have something to do with it as it makes inkey do funny things before calling Term_inkey. A separate tack is to look for calls to "exit". And I immediately found this in main-sdl.c: /* * Display error message and quit (see "z-util.c") */ static void hack_quit(cptr str) { /* Display nothing (because we don't have a surface yet) */ (void)str; /* Shut down the TTF library */ TTF_Quit(); /* Shut down the SDL library */ SDL_Quit(); /* Exit */ exit(0); } That look like the smoking gun to you too? But it's supposed to only be used before the main window is created. Still if it's getting called when it shouldn't... Further down is hook_quit which is also capable of causing a silent exit from hook_quit(NULL); our second strong culprit candidate. A little more diving and we're looking for quit(NULL) and somewhat interested in something called plog(). The former leaves us right to the SIGQUIT handler which seems to have a nasty hack to try to stop accidental quitting from miskeys. Problem is there's a global variable for the number of times it was hit that counts up endlessly and after some threshold it calls -- you guessed it -- quit(NULL). A helpful comment snippet: * To prevent messy accidents, we should reset this global variable * whenever the user enters a keypress, or something like that. But evidently they don't. My guess is that whenever it's at certain prompts it sees phantom ^C or some such inputs that cause SIGINT or SIGQUIT to get raised from time to time. The global variable creeps up as you spend more time in a session, particularly in the help browser or similar places, and eventually Sangband goes nuclear. However, judging by the code it should only quit(NULL) if it thinks no character is being player (e.g. at birth menu). Otherwise it should actually kill the character(!) and exit the process with the word "interrupt". I don't think this is what's happening; spontaneously dead characters aren't mentioned in the complaints, and Phil's necro doesn't seem to be dead in particular! It also tries to make warning noises which nobody reports getting spuriously. I think the SIGQUIT handler is not in fact the culprit here. SIGKILL and SIGABRT have another handler that can quit(NULL) from the birth menu and the like, but prints the infamous gruesome software bug message. Nobody has reported this in connection with the bogus exits. Other quit(NULL)s are all either during game load or character creation ... except for one in a familiar function. One named sdl_HandleEvent. It's apparently not an SDL library function after all. It's in main_sdl.c and calls quit(NULL) if an SDL_QUIT event is generated. This occurrence is however supposed to save the game to judge by immediately preceding code and comments. It's probably the behavior when the X in the game window is clicked, rather than the crash people are reporting. The other quit(NULL) in main_sdl.c is the result of a normal exit from play_game() in main(). The main-win port OTOH handles WM_QUIT with a plain quit(NULL) -- no attempt to save the game. This is our prime suspect now. None of Sangband's code actually can post a WM_QUIT (or even SDL_QUIT) message (including via PostQuitMessage). So it may be an interaction with a Windows bug generating a spurious WM_QUIT event. In fact it almost has to be, since inkey() doesn't seem to be guilty of anything and it happens during inkey(), meaning an event or signal handler is to blame. . - References: - [S] YACD - level 40 power level 68 - From: Phil Cartwright - Re: YACD - level 40 power level 68 - From: Twisted - Re: YACD - level 40 power level 68 - From: magnate - Re: YACD - level 40 power level 68 - From: Twisted - Re: YACD - level 40 power level 68 - From: magnate - Re: YACD - level 40 power level 68 - From: Phil Cartwright - Re: YACD - level 40 power level 68 - From: magnate - Re: YACD - level 40 power level 68 - From: Twisted - Re: YACD - level 40 power level 68 - From: Phil Cartwright - Re: YACD - level 40 power level 68 - From: Twisted - Re: YACD - level 40 power level 68 - From: camlost - Prev by Date: Re: [Announce] Angband 3.0.9 released - Next by Date: Re: Sangband spontaneous exits - Previous by thread: Re: YACD - level 40 power level 68 - Next by thread: {YASD, V} Arrogance - Index(es):
http://newsgroups.derkeiler.com/Archive/Rec/rec.games.roguelike.angband/2007-07/msg00784.html
crawl-002
refinedweb
1,242
62.48
Eclipse Community Forums - RDF feed Eclipse Community Forums [Acceleo] Ref: Bug 319375 - Using ant task with model referring to UML Primitive types <![CDATA[Hi, I was initially using Acceleo version 3.0.2 (3.0.2.v20110217-1127). While attempting code generation using Ant task that gets generated with UML-to-Java sample I came across Bug 319375 (Need to post more than 5 messages to use a link). Since Acceleo 3.1.0RC1 is available, I installed that over Eclipse Helios SR2. The ant task now started up fine, and worked with sample model. But if the model is using UML Primitive types (e.g. a model based upon default UML template, created using Papyrus), it does not generate code. I have the following lines added in Generate.java method: public void registerResourceFactories(ResourceSet resourceSet) { super.registerResourceFactories(resourceSet); resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap().put(UMLResource.FILE_EXTENSION, UMLResource.Factory.INSTANCE); Map uriMap = resourceSet.getURIConverter().getURIMap(); URI uri = URI.createURI("jar:file:/C:/EclipseHeliosSR2/eclipse/plugins/org.eclipse.uml2.uml.resources_3.1.1.v201008191505.jar!/"); uriMap.put(URI.createURI(UMLResource.LIBRARIES_PATHMAP), uri.appendSegment("libraries").appendSegment("")); } Running this through sample ant task gives the following message: generateJavaSample: generateJava: [java] The generation fail to generate any file because there are no model elements that matches at least the type of the first parameter of one of your main templates. [java] The problem may be caused by a problem with the registration of your metamodel, please see the method named "registerPackages" in the Java launcher of your generator. BUILD SUCCESSFUL Total time: 4 seconds The Acceleo launcher seems to work fine as usual. Its only with the ant task I am facing this issue. In an attempt to be more precise, I even tried with Acceleo 3.1.0M7, but that also gives same results. Perhaps I am missing something here... Please give some pointers. Thanks Anil]]> Anil Bhatia 2011-05-20T18:56:14-00:00 Re: [Acceleo] Ref: Bug 319375 - Using ant task with model referring to UML Primitive types <![CDATA[Hi, Did you register the UML metamodel in the registerPackages method ? Stephane Begaudeau, Obeo -- Twitter: @sbegaudeau Acceleo wiki: Blogs: & ]]> Stephane Begaudeau 2011-05-23T07:39:43-00:00 Re: [Acceleo] Ref: Bug 319375 - Using ant task with model referring to UML Primitive types <![CDATA[The generated code for registerPackages method is as follows: /** * Updates the registry used for looking up a package based namespace, in the resource set. * * @param resourceSet * is the resource set * @generated */ @Override public void registerPackages(ResourceSet resourceSet) { super.registerPackages(resourceSet); if (!isInWorkspace(org.eclipse.uml2.uml.UMLPackage.class)) { resourceSet.getPackageRegistry().put(org.eclipse.uml2.uml.UMLPackage.eINSTANCE.getNsURI(), org.eclipse.uml2.uml.UMLPackage.eINSTANCE); } /* * TODO If you need additional package registrations, you can register them here. The following line * (in comment) is an example of the package registration for UML. If you want to change the content * of this method, do NOT forget to change the "@generated" tag in the Javadoc of this method to * "@generated NOT". Without this new tag, any compilation of the Acceleo module with the main template * that has caused the creation of this class will revert your modifications. You can use the method * "isInWorkspace(Class c)" to check if the package that you are about to register is in the workspace. * To register a package properly, please follow the following conventions: * * if (!isInWorkspace(UMLPackage.class)) { * // The normal package registration if your metamodel is in a plugin. * resourceSet.getPackageRegistry().put(UMLPackage.eNS_URI, UMLPackage.eINSTANCE); * } else { * // The package registration that will be used if the metamodel is not deployed in a plugin. * // This should be used if your metamodel is in your workspace. * resourceSet.getPackageRegistry().put("/myproject/myfolder/mysubfolder/MyUMLMetamodel.ecore", UMLPackage.eINSTANCE); * } */ } Since I am not defining UML metamodel in my workspace, that should register the UML metamodel. Is there some other step required to register this as well...? Thanks Anil ]]> Anil Bhatia 2011-05-23T07:57:37-00:00 Re: [Acceleo] Ref: Bug 319375 - Using ant task with model referring to UML Primitive types <![CDATA[Hi Anil, Let's try to rule out the "model loading" issues first. Have you followed the instructions on the UMl FAQ on "how to load an UML model standalone"? If yes, are you positive that the Acceleo generator you're trying to run can indeed generate files with the model you're feeding it? Laurent Goubet Obeo]]> Laurent Goubet 2011-05-25T08:50:58-00:00 Re: [Acceleo] Ref: Bug 319375 - Using ant task with model referring to UML Primitive types <![CDATA[Hi Laurent, Thanks for asking pin-pointed question. This made me execute additional scenario. I have followed the instructions from UML FAQ. Its only with the generated sample (UML-to-Java) that I am getting the error message for a UML model referring to UML Primitive Types (created using Papyrus), when Stand Alone Launcher is used. (I was wrong earlier that its not generating code... It generates java code, including attribute type, but also gives error message). If I run UML-to-Java on the same model using Acceleo Launcher, there is no error message, and proper code gets generated. Another scenario that I tried: I wrote my own M2T transform, printing class name, and attribute name & type. This worked fine in both modes (Acceleo, and Stand Alone Launcher). In fact, this template gets executed from ant task also. I double checked, I am doing the steps from UML FAQ in both scenarios. Thanks Anil]]> Anil Bhatia 2011-05-26T15:22:22-00:00 Re: [Acceleo] Ref: Bug 319375 - Using ant task with model referring to UML Primitive types <![CDATA[Anil, That seems ... strange. Just so we can try and reproduce : you're launching the UML-to-java example with the "example.uml" model that is part of the example? If not, could you provide us with the model you're trying to generate code for? Laurent Goubet Obeo]]> Laurent Goubet 2011-05-30T12:26:33-00:00 Re: [Acceleo] Ref: Bug 319375 - Using ant task with model referring to UML Primitive types <![CDATA[Hi Laurent, "example.uml" gives proper results, as it has internally defined (UML Primitive Types' equivalent) data types. The problem occurs when the types are referred to from UML Primitive type library (as done by Papyrus). Here is the small model file I used to reproduce the problem (also attached): -----------------------model.uml-------------------------------------- <?xml version="1.0" encoding="UTF-8"?> <uml:Model xmi: <packageImport xmi: <importedPackage xmi: </packageImport> <packagedElement xmi: <packagedElement xmi: <ownedAttribute xmi: <type xmi: </ownedAttribute> </packagedElement> </packagedElement> </uml:Model> ----------------------------------------------- Class0 has a property Property0 of type String coming from UML Primitive type library. Here are the steps to create such a model using Papyrus: 1. File -> New -> Other... (or Ctrl + n) 2. Select "Papyrus Model" on "Select a Wizard" page. Click Next. 3. Specify name of diagram file (model.di). 4. Select UML as Diagram Language. 5. On the page "Initialization information", specify the following a) Name of Diagram b) Select some diagram kind (say, UML Class Diagram) c) You can load a template: Check the box that says "A UML model with basic primitive types (ModelWithBasicTypes.uml) Click Finish 6. Switch to Papyrus perspective. 7. In the Model Explorer right click on Model, and keep mouse over "New Child" -> "Create a new class". 8. On the newly created class Class0, right click, "New Child" -> "Create a new Property". 9. Expand "ownedAttribute" under Class0 in Model Explorer. 10. Select newly created Property0 in Model Explorer, and you should be able to see its details in "Properties" view. 11. In the properties view, click on "+" sign for "Type:" field. In the filter ("**") specify String, select "<Primitive Type> String" from matching items. 12. Click Ok. Save the model. This will save .di file, and create a corresponding .uml file that can be used as source of M2T transform. The contents of this .uml file are pasted above. Make sure to use this model file as MODEL in generateJavaTarget.xml file (or as source in case you use Stand alone launcher). I am using Papyrus 0.7.3 (0.7.3.v201104270854) on Eclipse Helios SR2. Thanks Anil]]> Anil Bhatia 2011-06-01T09:50:13-00:00 Re: [Acceleo] Ref: Bug 319375 - Using ant task with model referring to UML Primitive types <![CDATA[In the following message: its mentioned that there is a case where, in addition to generating files, following error message is shown "The." Does the scenario I mention happens to be part of this category..? Thanks Anil]]> Anil Bhatia 2011-06-05T18:24:42-00:00
http://www.eclipse.org/forums/feed.php?mode=m&th=209648&basic=1
CC-MAIN-2014-49
refinedweb
1,424
57.67
Hi everybody! I am pretty close to finishing this program. It is supposed to calculate a person's energy use based on their past and current bill. I am stuck on the calculations and else/if statements. Thanks in advance for any feedback. package craft_week4; //Import scanner library import java.util.Scanner; //Begin Class main public class main { //Begin Main Method public static void main(String[] args) { //New Scanner Object sc Scanner sc = new Scanner(System.in); //Declarations int prevRead; int currentRead; //Welcome message System.out.print("Welcome to the City Power Bill Calculator!\n"); //User input here System.out.print("Please enter your previous meter reading: \n"); prevRead = sc.nextInt(); //Analyze and store input System.out.print("Please enter your current meter reading: \n"); currentRead = sc.nextInt(); //Analyze and store input //Conditionals //Rate A: For 500 KwHs or less = $0.0809 / KwH //Rate B: For 501 to 900 KwHs = $0.091 / KwH //Rate C: For greater than 901 KwHs = $0.109 / KwH //Utilities Tax is 3.46% regardless of usage. calcRateA = (currentRead / .0809); calcRateB = (currentRead / .091); calcRateC = (currentRead / .109); calcTax = (Rate * .0346); int calcSubtotal = (Rate + calcTax); int calcTotal = (calcTax + Rate); int calcUsage = //Output derived value System.out.printf("Your usage was: \n", + calcUsage); if Rate < 500 then multiply by .0809 System.out.printf("Your rate is: %.4f\n", + calcRateA); else if Rate is from 501 to 900 multiply by .091 System.out.printf("Your rate is: %.4f\n", + calcRateB); else Rate > 901 multiply by .109 System.out.printf("Your rate is: %.4f\n", + calcRateC); System.out.printf("Your subtotal will be: %.2f\n", + calcSubtotal); System.out.printf("Your taxes are: %.2f\n", + calcTax); System.out.printf("Your total bill this month is: %.2f\n", + calcTotal); //While loop //Setting loop count to zero int response = 0; int loops = 0; System.out.print("Calculate a new usage? (-1 to Exit, 0 to Continue)\n"); while ( response == 0 ){ if ( loops == 0 ) { //User input System.out.print("Please enter your previous meter reading: \n"); System.out.print("Please enter your current meter reading: \n"); } else if (response == -1){ //End message System.out.print("Thank you for using this program. Goodbye!\n"); break; } } } //End Main Method } //End Class main
https://www.daniweb.com/programming/software-development/threads/410502/calculate-energy-use-almost-there
CC-MAIN-2017-09
refinedweb
365
55.4
We're having problems with s390x builds of Firefox and Thunderbird. They freeze in execution of following javascript code (from Thunderbird code: modules/gloda/log4moz.js): for (let i = 0; i < pieces.length - 1; i++) { dump("start i = "+i+"\n"); dump(pieces.length+" pieces.length\n") dump(cur+"\n"); if (cur) cur += '.' + pieces[i]; else cur = pieces[i]; if (cur in this._loggers) parent = cur; dump("end i = "+i+"\n") } [dumps added by me] I get following output: start i = 0 2 pieces.length undefined end i = 0 start i = 2.121995791e-314 2 pieces.length gloda end i = 2.121995791e-314 start i = 0 3 pieces.length undefined end i = 0 start i = 2.121995791e-314 3 pieces.length gloda end i = 2.121995791e-314 start i = 1 3 pieces.length gloda.undefined end i = 1 start i = 4.2439915824e-314 3 pieces.length gloda.undefined.ds end i = 4.2439915824e-314 start i = 1 3 pieces.length gloda.undefined.ds.undefined end i = 1 start i = 4.2439915824e-314 3 pieces.length gloda.undefined.ds.undefined.ds end i = 4.2439915824e-314 start i = 1 3 pieces.length gloda.undefined.ds.undefined.ds.undefined end i = 1 ...[etc]... It seems something wrong is going on with i property (or whole js stack?). Any debugging thoughts? Thanks in advance. 1. I assume this runs in the interpreter only on s390x? 2. Is s390x 32-bit. 3. What happens if you s/let/var ? Also, it would be useful to know a regression range. In particular, has worked since after bug 549143 landed ()? . (In reply to comment #1) > 1. I assume this runs in the interpreter only on s390x? Yes, problems occures in js::Interpret function > 2. Is s390x 32-bit or 64-bit? s390x. Hm, I didn't try to convert double to hex. Checking it now cycle starts with: 0 = 0x0000000000000000 2.121995791e-314 = 0x0000000100000000 0 = 0x0000000000000000 2.121995791e-314 = 0x0000000100000000 and followed by: 1 = 0x0000000000000001 4.2439915824e-314 = 0x0000000200000001 forever. > 3. What happens if you s/let/var ? Same result - nothing changed. Could this be related to: or ? (In reply to comment #2) > Also, it would be useful to know a regression range. In particular, has > worked since after bug 549143 landed > ()? We were fine with Firefox 3.6.19 (1.9.2) but that's rather old version. I try to revert mentioned changeset an let you know. Ah, comment 3 makes this look like an endian-ness thing. It looks like jsval.h uses #if defined(IS_LITTLE_ENDIAN) to catch little-endian and #else assumes big endian. Could you check whether IS_LITTLE_ENDIAN is correct for your build? First of all thanks for helping me out with this. S390x should be big endian and it seems that this is detected correctly. (gdb) ptype jsval_layout type = union jsval_layout { uint64 asBits; struct { JSValueTag tag : 17; uint64 payload47 : 47; } debugView; struct { union {...} payload; } s; double asDouble; void *asPtr; } which should be right part of jsval.h content of jsautocfg.h: #ifndef js_cpucfg___ #define js_cpucfg___ /* AUTOMATICALLY GENERATED - DO NOT EDIT */ #undef IS_LITTLE_ENDIAN #define IS_BIG_ENDIAN 1 #ifdef __hppa # define JS_STACK_GROWTH_DIRECTION (1) #else # define JS_STACK_GROWTH_DIRECTION (-1) #endif #endif /* js_cpucfg___ */ Created attachment 549155 [details] [diff] [review] fix Ah ha. The 64-bit jsval_layout struct is not properly ladding out the 32-bit payload and js::Value::getInt32Ref() (used for the i++) is thus referencing the wrong 32-bit word. Something tells me bug 618485 didn't actually execute code after getting it to compile :) Pre-emptive question: does s390 ensure that all your pointers to static, malloced, and mmaped memory has the high 17 bits set to 0? If you are running code at all it gives me hope that the answer is 'yes'. Thanks for the patch. Following asserts failed during compile: JS_STATIC_ASSERT(offsetof(jsval_layout, s.payload) == 0); JS_STATIC_ASSERT(offsetof(JSPropertyDescriptor, shortid) == offsetof(PropertyDescriptor, shortid)); JS_STATIC_ASSERT(sizeof(JSPropertyDescriptor) == sizeof(PropertyDescriptor)); JS_STATIC_ASSERT(sizeof(JSObject) % sizeof(js::Value) == 0); When ignoring them js engine become unusable (can't call DumpJSStack() in gdb). Created attachment 549386 [details] [diff] [review] take 2 Ah, I see someone has added a word-sized member to the payload union. That means my last patch was bloating jsval_layout to 12 bytes. (I'm surprised we don't static assert that somewhere...) What about this patch? Um, still one assert remains: thunderbird-5.0/comm-miramar/mozilla/js/src/jsvalue.h:294: error: size of array 'js_static_assert6' is negative: JS_STATIC_ASSERT(offsetof(jsval_layout, s.payload) == 0); ignoring it by commenting it out leads to the same result as with previous patch. Yeah, that static assert seems bogus. In general, though, you may be the first person executing this code on 64-bit big-endian so to get it working you may have to actually step into the code and debug it a bit. Btw, do you have an answer to my question in comment 7? Created attachment 550344 [details] [diff] [review] removed unrelevant static assert I've wrongly linked the libraries and reported this patch as non-working. Actually it is working perfectly. I'm sending a bit modified version with removed no longer necessary static assert. It would be nice to have it in trunk. Thanks. Comment on attachment 550344 [details] [diff] [review] removed unrelevant static assert Yes, we'll get this landed. I wrote the patch, so I suppose someone else to review it :) Oops, backed out because of silly jsuword/uint64 incompatibility on 64-bit osx64: Relanded: Sorry to bring bad news, but it seems that change broke sparc64 builds : with rev 5684f06138f39e6c6b95cb076cdbe449875a1c2d was ok (well, busted because of bug 676924 but that's another story) and now with rev be090ee1747a378bef88e392164ad01548d912ed including that commit, it fails with : /var/buildslave/mozilla-central-sparc64/build/js/src/jsvalue.h:298: error: size of array 'arg' is negative See for full log. Should i reopen or file another bug ? Of course, commenting out that offending JS_ASSERT makes the build go further, but i doubt its the way to go. JS_STATIC_ASSERT(offsetof(jsval_layout, s.payload) == 0); D'oh, I forgot to remove that. I think that static assert should be bogus. comment 13 seems to confirm this. Removed, and fix strict-aliasing warning:
https://bugzilla.mozilla.org/show_bug.cgi?id=674522
CC-MAIN-2017-26
refinedweb
1,029
61.33
Agenda See also: IRC log <Marcos> we look at stream and file API <Marcos> Need to look at what we do with DOM 3 API at 10am <Marcos> People who are important are Jakob and Doug, but there is time conflict <Marcos> So we will do it at 11:30 (DOM3 Events) <Marcos> Testing we also need to discuss <Marcos> We need Jonas for the File API <Marcos> Afternoon: index DB and XBL2 and component model, in the afternoon <Marcos> 1-3pm <Marcos> Bryan wanted to add an item: Event source extension for connectionless push <Marcos> If we get through stuff quickly, we can start talking about API design <Marcos> Stream and file API, we can start off with that <Marcos> Scribe: Marcosc <Marcos> EU: I want to discuss file saver <Marcos> Not all the use cases are covered by download attribute on the a element. <Marcos> ee: we had talked about looking at saving a blob VS saving a URL (the resource) <Marcos> ee: is there interest in implementing this? <Marcos> JS: yes <Marcos> CMN: nods in. <Marcos> JS: how is that different from file saver <Marcos> AB: you don't get the progress events. <Marcos> AB: going to paste in a URL <adrianba> <Marcos> If you look at the second page… replicating content disposition: which shows the save dialog <Marcos> CMN: is there any indicator when the download is done. <Marcos> AB: no. it works like the current save dialog that browsers use <Marcos> CMN: We have the File API right now. And I think that is what we want before a full filesystem API. Our use cases are "real file system access": create directories, get at files, so the user can share files with Apps. <Marcos> AB: we are not opposed to such an API. But they are not a high priority for us (MS) right now. <chaals> [berjob waltzes in already...] <Marcos> AB: this is something we did instead of file saver… the file system API is further down the road. <Marcos> EU: how is this different from the current API? . <Marcos> RB: could you not always download it? just a suggestion? <Marcos> AB: maybe :) <Marcos> EU: not sure what Chrome does right now. We might be displaying it in an iframe. But we are not sure about the origin right now and what privileges it has <Marcos> AB: for use, we have abstract protocol handler… <chaals> s/EU:/EU:/ <Marcos> EU: It sounds like we have 3 different things that overlap. <Marcos> JS: I'm very interested in supporting the use cases, but 3 different ways is not good. I would like to find a way to avoid having 3 different APIs <Marcos> JS: file saver could do everything you want <Marcos> EU: it doesnt have a clean way to allow the user to open the file <Marcos> JS: but it is fully API driven <Marcos> JS: it would be nice to find a single way. So it would be nice to figure out what the requirements are consolidate them <Marcos> AB: agree… we don't want to implement multiple API <Marcos> CMN: its clear that we all want to support the use case…. and we don't want to tell devs how to use multiple APIs <Marcos> [agreement] <Marcos> AB: Can we talk about file API first <Marcos> before moving on to stream <Marcos> In the first page of the first page: readAsBinaryString… is there a strong use case for it? is that for legacy reasons? <Marcos> JS: It is. But it's ok to drop it <Marcos> AB: We would like to see it removed <Marcos> JS: it's more legacy, so I'm ok with dropping it <Marcos> MC: Second question: do we really need the restrictions on the URL? <Marcos> JS: I have not looked at the URL part <Marcos> Arun has been working on it. But he would probably be interested in discussing it further <Marcos> AB: final question, I'm wondering if it's ever possible to see the protocol version that is dereferenced in from the blob URL <Marcos> ? <Marcos> AB: we proposed it's not necessary <Marcos> JS: agree, but Arun should have a look <Marcos> JS: another proposal is to drop BlobBuilder in favour of a contructor <adrianba> <Marcos> when we started working on the blob API, a req was to have a blob whose size was unknown (a steam). <Marcos> E.g. in a mail app, you can start viewing stuff at readystate 3, and start showing it without waiting for the end… and start processing data as it downloads … use chuck upload as well <Marcos> CMN: we have similar use cases <Marcos> JS: so can you create streams? <Marcos> AB: yes, we have a stream builder. <Marcos> JS: it should interesting . <Marcos> EU: AB's proposal it sounds interesting to me <Marcos> EU: is we have a stream object that we can convert to a blob would be good, so we can hand it to file writer <krisk> <euhrhane> [not necessarily convert to blob--possibly we'd just pass the stream to the FileWriter. <euhrhane> ] <Marcos> KK: we need a more consistent way to do tests… and we don't have an approval process <Marcos> KK: my experience has been that when people start looking at tests they start finding issues. An approval process might help. <chaals> MC: It is difficult to approve tests where we auto-generate a ton of them. You can produce lots from WebIDL, and it is time-consuming to check each one. <chaals> ... might be a good idea to look at a test generator, rather than the test. <Marcos> KK: the tests I have seen have not been autogenerated. <Marcos> KK: maybe we can create task force, somewhere more focused to discuss testing <Marcos> CMN: not sure how we would do this <Marcos> CMN: our experience is that people who make tests are usually not spec people <Marcos> Wilhelm … introduces himself . <Marcos> CMN: So, do we need a sub group? wilhelm, how should we collaborate between Webapps and the Testing and Tools group. <Eliot> <Marcos> wilhelm: please contact us. For visual things, use ref-tests from the CSS working group. We are happy to collaborate and provide guidance. <Marcos> CMN: but which group should we do it in? <heycam> <Marcos> jG: there is already a mailing list. public-webapps-test-suite ? <MikeSmith> <Marcos> wilhelm: lets figure out what tests there are already <Marcos> wilhelm: then we can see what tests are available <Marcos> KK: it think getting a good rhythm going… want to try something a little different. If we just do the list, that is ok. But we need some more active ways to do things… getting people to talk more. <Marcos> JS: some feedback we had a while ago, it was harder to write tests than necessary. Because of the infrastructure, it made tests hard to write tests. W3C tests required more boilerplate than at Moz. <Marcos> JS: at mozilla, we end up doing it our own way to because its easier and faster <Marcos> JG: yes, there is a bit more work involved with the W3C tests. <Marcos> +q <Marcos> +q marcos <Marcos> JS: the number of tests you get is affected by how easy to write the tests <Marcos> JG: I've had a different experience <chaals> CM: How does HTML test group work cmpared to not having one? <chaals> JG: Well... <chaals> KK: Yes <chaals> MC: having tests be very easily accessible with an interface is really helpful - especially when linked to the spec. <krisk> HTML started a taskforce two years ago <krisk> Before that their was no html5 tests <Marcos> CMN: my experience is the same similar to JG and JS… when you pay people, you get people making good tests. But also making them easier to write for volunteers, also helps. As KK suggested, we need review. <Marcos> CMN: it seems like it's an action on the chair <krisk> today we have a large number of tests across a number of features that are implemented in browsers today <Marcos> wilhelm: writing a good test suite is as hard writing a spec. We should have a dedicated person to write a test suite (equal to the editor). <Marcos> CMN: how many person think there should be a dedicated testing person for a spec? <Marcos> [plenty of agreement] <Marcos> MC: we could make it a requirement that no spec start without also having dedicated tester <Marcos> CM: not every org has dedicated spec people. <chaals> MC: It is fundamental to have tests, so you can't seperate without being able to get a test suite. <Marcos> JG: this person does not need to write the tests… the person would have the responsibility to source the tests. <Marcos> JG: it does not mean that only one person would write all the tests (if any) <Marcos> wilhelm: if you have 15 specs, you can break up the task amongst multiple people <Marcos> CM: does it have to be a different person than the editor? <dom> ryosuke <Marcos> RN: when do you need to involve a testing person? <Marcos> … discussion… identifying them from the start <Marcos> DS: that has traditionally been the role of the editor <chaals> RN: What's the diffrence? <chaals> MC: It can alleviate the load of the editor <chaals> ... we need to discuss what to do when you generate tests and then the spec changes - how do you avoid starting too early or too late <Marcos> RN: but we still not clear when we should have tests <Marcos> DS: for DOM 3, I've requested that people contribute tests… but didn't get much back <Marcos> DS: I would like to have a req that before a spec progresses to CR, it should have a test suite <Marcos> CMN: it seems reasonable as a first step to appoint someone for testing. <Marcos> RESOLUTION: We will insist that when work on a new spec, a person be appointed to handle testing <Marcos> KK: as DS said, we should have something in the process so specs can't move to CR without a test suite <Marcos> DS: part of LC would benefit from a test suite. <Marcos> <Marcos> CMN: problem is that it is expensive to produce tests… so, we don't want a process heavy way of making tests… <Marcos> JG: Tests really only come out when people are implementing stuff <Marcos> JG: implementers who want to have a bug free implementation are going to produce tests <Marcos> CMN: another group to get test from is non-browser vendors (e.g., content providers)/.… how do we talk to those people? <Marcos> JG and JS say there are a few examples of people who have done it… <Marcos> KK: happy help to set up guidelines <Marcos> DS: if we have a good way to contribute tests, that would help <ArtB> WebApps' Test Submission process: <Marcos> BS: one of the best ways to learn is by doing. We need really good guidelines, so test examples are good. Looking to service providers and universities to help use build tests would be good… it benefits lots the whole community. <Marcos> CMN: the public tests can vary in quality <Marcos> Israel: when is the right point to do testing? <Marcos> JS: I don't care what the tests are and what they are targeting, as long as we get lots of good tests <Marcos> JG: it's never too early <Marcos> MC: I agree <Marcos> DS: who is going to enforce this policy? <Marcos> CMN: good will :) <Marcos> CMN: there is no formal policy that we can enforce <Zakim> Josh_Soref, you wanted to say you're either implementing or using someone's implementation or planning to use it <Marcos> JS: hopefully you are implementing this feature… people have a vested interest in the spec and hence produce tests <Marcos> ACTION: Art and Charles to make a proposal about how to appoint a person to be assigned for testing for a spec. [recorded in] <trackbot> Created ACTION-637 - And Charles to make a proposal about how to appoint a person to be assigned for testing for a spec. [on Arthur Barstow - due 2011-11-08]. <Marcos> [BREAK] <jihye> ' <inserted> Scribe:. ... so we ae trying to reduce the "not invented here" thing by being able to get in APIs that match what we think of. ... so we are lookig to create general guidance (rather than formal requirements) - what WebIDL gives you, how do you describe throwing an excepetion and what does that mean, etc. ... It's a friendly list for editors to find information that is helpful. ... The ideas have all been under development, and effectively black magic in people's heads that wasn't available to others. ... Would also encourage people working on frameworks to help us work out how we can make things more easily.. [throwing exceptions, defining events, how to use dictionaries, etc] BS: Would like to have had a discussion not just about JS/DOM APIs, but also other things happening here like things on abstract resources handled by the browser. ... We see a number of patterns - trying to understand the rationales for that is important. <bryan> Here is the link to the draft presentation I had prepared for the TPAC discussion on this topic. It captures some of the questions we had and the objectives for a discussion: BS: why is video a tag, why is event-source an API, etc. AR: Trying to understand if the intent is to capture the way things are done, or what we think would be an ideal design pattern. MC: We are trying to figure it out too... RB: A large element is a cookbook. Editors do something, someone says it is a bad way, they don't wnderstand why and just want to make something that works. Goal is to make editing easier CMN: I'd find the historical explanations useful ... What's the future of this? A note, what? AR: If we write down what people do now we perpetuate it and that is bad. MC: We propose this as a note - a useful thing for the community. ... we are trying to help consistency. AR: Consistency is good. CM: Helping editors construct prose and interfaces to match what other people are doing is good. I agree also that it is good to document the rationale. ... it isn't just a matter of people agreeing, because there are real disagreements right now. MC: Yes, we don't just want to codify what people are doing now, because we don't want to describe how to do things wrong... AR: The point isn't to make a normative requirement set, right? CM: We don't have a general place to do this at the moment... RB: THere are a lot of people who are here?? Travis_MSFT: Is this less about general API design and more about particular things that you want to do - events or callback? what is a webby error? ... RB: Yep. [examples of different approaches] scribe: Not sure a document can recommend aright way, but might describe a possible set of ways to do so. MC: Ca Can show examples, and why they did it. JS: Think this is a great idea. I'd like to know e.g. how you should write a callback-based approach and why. I'd love to have more input from people who write JS. ... in particular, from more than two people who do the same thing already. Take into account beginners, who are not here. ... most important peopl to get input from are not in the room RB: E.g. JQuery standards group JS: Right. We should talk to those guys. AR: I can tell you what to do ;) Balaji: Good examples are important. We should do this across different WGs. ANd there are different groups that have very fdifferent patterns, e.g. geolocation. RB: Yes. People outside this WG don't know or care about working group boundaries. <nvbalaji> Not suresh. I am nvbalaji (nvbalaji) <nvbalaji> :-) CMN: I think the TAG has a role here - at least in the structure. I don't think we want to palm this off to the TAG, but I think they have a role as custodians of these large questions. NM: I don't think TAG has "the expertise" here, and we don't want to repeat other people's work. We don't necessarily have an opinion here, but we are intersted in how these questions are resolved in different places. ... THere are things that are deep architectural things. When you have APIs, over time, you want to evolve things - and you can't install a flag day on the web. MC: You were involved in the "architecture of the Web document" - are there relevant lessons from communicating, the experience of doing it, etc? NM: Web arch is different to architecture documents I have seen. Architecture documents in IBM answered specific questions to say "did you do this right or not?" Web Arch is more infromal - and is a retrospective document, not prescriptive. Tim wrote design notes for the web, which found their way into Web Arch (specific "thoughts") ... I think good architecture can be related to use cases. ... Invent good stuff, think about the use cases, think about architecture. but the web arch document is very backwards-focused - what was important in a running system.. BS: What I get out of this is "yeah, we need this discussion..." [kibbitzing on list choice] <ArtB> Scribe: ArtB JR: IE9 implemented 100% of the spec … think other browsers implemented about ~60% of D3E CM: so, I think the Editors are OK with making the requested changes … is that a fair characterization? Sam: other than IE, who will implement this? JR: Olli Pettay has been involved ... I don't know about Google JS: I talked to Olli … we intentionally remvoved ExceptionEvent <heycam> [There may be confusion in the minutes at some points between CM and CMN. :)] … Olli is not as concerned about edglazkove cases AvK and Ojan mentioned … We do implement a lot of the spec … Not sure if we will implement all of it … and the parts we may not implement are features that matter DS: D3E is a subset of DOM4 re the events … we changed the spec to not have conflicts with DOM4 [ scribe missed James's comments ] <gsnedders> Subset of DOM4? I thought it was a superset, containing additional things like ExceptionEvent. Ojan: re Sam's question <jgraham> Have we considered dropping the parts of D3E that overlap with DOM4? … I can't give an official Google positin <anne> gsnedders, mismatch, if you will … but there are parts we would implement and some parts we won't Sam: specifics please Ojan: there would be a long list … text input event has an input method … I don't think WK will implement it … key and char properties … are problematic … but we havent done a detailed analysis Doug: please send that to the list Jonas: re taking D3E stuff out of DOM4 … Ojan's list doesn't help with that Ojan: I expect WK to implement DOM4 Jonas: for the parts that are the same, it doesn't matter … I talked to Olli and my position is the concern is about the long time for DOM4 to ship <gsnedders> Only if they are word-for-word the same, otherwise there might be accidental differences. … it keeps adding features Anne: we are removing features … only event constructors are new Jonas: what about mutation? Anne: not there yet … but they could be Jonas: concerned about a continuously evolving spec that never finishes … we need to ship something … and D3E is done … My concern is no clear signs of DOM4 actually shipping … I think we can ship D3E sooner Marcos: I don't agree … think DOM4 is in good shape CM: as Chair, we have a responsibility to ship specs … I realize some people don't agree with that … but that belief is not aligned with the WG … by shipping I mean publishing a Recommendation … Re Jonas' comments, we need to ship a spec <gsnedders> One option for mutations events is surely to make them a module of their own? … don't want a bunch of nit picks <gsnedders> In which case DOM4 is more-or-less done … that keep coming in … think the spec is good <gsnedders> (in terms of getting to a point where LC is possible) … We could cut stuff out … by reading the tea leaves of DOM4 … and if DOM4 changes, we can rev D3E … I don't want to keep going in circles … that costs lots of time and money for everyone … for Editors and Implemeters Doug: the parts under contention are from original DOM specs … D2E is too old … If DOM4 parts are better and stable … and reconcile the 2 specs … We could drop stuff from D3E if problematic … and then go to LC … I am willing to change spec to follow DOM4 where it matches implementations … I can see AvK's approach is useful … and successful … so now we change D3E to match … I still contend a D3E REC is useful RN: is it possible to drop those parts not implemented or are controversial? DS: yes, that can happen in CR … that's kinda' expected CM: need to agree on what's controversial and what's not … and that requires drawing a line in the sand … need browser vendors and others to define what's controversial … We need to make a decision … DOM4 is trying to make the situation better … but we also have people that need to ship product now … and of course we have the users of the APIs to consider <gsnedders> One option is to proceed to CR, and see what parts meet the CR exit critera, and move from there. … How important is it to ship a REC? … Need to define the features as implemented today Jonas: I don't want to have anything in D3E that DOM4 deprecates … need to look at EventException Jacob: I agree re deprecation … I think we want to move fwd with constructors … think we need to talk to talk about specific events … and we can deprecate some events … We should make sure the two specs are synch'ed RN: can we drop the IDL interfaces? CM: we agreed yesterday that WebIDL will be used Jacob: need to work together to get a list of incompatibilites … then we fix them … then we we go back to LC … There is a lot of feedback since D2Events … If there are change requests, must open a Bug with Bugzilla CM: let's ask Anne if he can help with this? Anne: yes CM: so Jacob made a proposal? … Who supports this proposal? … 15 people supported the proposal … Does anyone object to that proposal? … there were NO objections <Ms2ger> smaug, such as? Apart from the new exceptions, we only really have legacy stuff and some things from HTML <smaug> Ms2ger: many parameters are optional <smaug> DOM range isn't backwards compatible etc <smaug> Ms2ger: I agree the changes are usually good <Ms2ger> Mm, I guess you can say that <Josh_Soref> Scribe: Josh_Soref <Ms2ger> No calling in today? <jgraham> I think it is possible to set that up if you want <smaug> what is the topic? <jgraham> Although the evidence is that you don't really exist <scribe> Scribe: Josh_Soref <anne> how many engineers does it take to dial a number? <anne> 0, you just ask the hotel staff ... we have a proposal for Component Model ... and there's a belief that there's overlap with XBL2 ... we'd like to understand the WebApp's community view on the landscape ... and we'd rather have an either-or and not an and ... I'd like to get a sense of the current implementers' view on XBL2 weinig: Sam, Apple ... we've discussed this a bunch of times ... Apple's iggest concern is the lack of a well formed declaritive model ... it's also a bit disingenous ... to say XBL2 is dead long live component model ... and then to say it's similar and has overlapping goals AlexRussel: we assume them to be exclusive ... and our view is that they are ... the lack of a declarative model that's fully specified ... is something that we've taken as something ... and we'll work on ... Parser Integration, Shadow DOM, ... what we'll do with behavioral pattern anne: We'd like Cross Origin ... for things like Like / +1 buttons ... I don't think the goals of cross-origin and bindings ... are compatible weinig: I think it's valuale to have a component technology for the web ... XBL2 and the new proposals are both two different directions <weinig> s|weinig|maciej| weinig: otoh the framing of this ... is XXX ... otoh the new proposals are fragmentary, not specified in sufficient detail ... and i'm not convinced they're in the right direction ... i need to see something that looks good, and currently neither looks totally right sicking: my view is that something between xbl2 and component model is the right approach ... i think taking xbl2 and using it and cutting things out is more in the right direction ... than the proposal i've seen from you guys ... it's hard to see too strong of a comment given the lack of a proposal for the declarative model ... even though xbl2 has a lot of complexity [ Scribe reports that smaug agrees with sicking ] weinig: i also agree with sicking dg: I disagree ... because if we do it, we'll end up with a completely different spec ... if we cut things out, we'll have to reinvent the parsing ... we'll have to deal with event forwarding sicking: i disagree, event forwarding is needed dglazkov: event forwarding/event retargeting are different things ... the general approach of the component model ... is that you subclass ... shadow DOM is something you get ... i do not think it's a good idea to treeat the component model is just a single spec ... because the different pieces can stand on their own s/..../.../ s/..../.../ scribe: we already have two different specs ... confinement is a problem outside of components ... you want to run scripts confined, instead of just in iframes s/..../.../ scribe: that said, i think it would be a useful exercise for those who believe we should keep xbl2 ... to go over it and see if it's doable <Ms2ger> s/valuale/valuable/ scribe: if they could go over it tomorrow for 30 minutes sicking: to make actual decisions which we're not at that stage ... we need more concrete proposals ... to have discussions here/now s/..../.../ s/..../.../ scribe: we'll need actual proposals to make ... decisions dglazkov: what's the right forum and what's the best format <Ms2ger> s|s/..../.../|| sicking: brainstorming session if we get the right people ... if we get the apple people, and hixie [ hixie is behind you ] sicking: and start sort of drafting some vague proposals dglazkov: +1 mjs: i like seeing proposals ... two things, about evaluating them ... often it's really hard to evaluate things independently ... without evaluating the whole system design ... whlie people doing the core design work may have thhe whole thing in their head in a vauge whay ... second thing is it's important to have proposals drilling out in a detailed way ... but when you lay out the full details, you see problems that become very complex to address ... and it's hard to give a full review of a relatively high level sketch [ bridge dialing ] <dglazkov> dglazkov: we have a proposal ... it provides a very good overview ... it tries to capture the big picture ... i have gone over a small part of it at our powwow at mozilla all hands ... but i didn't go over the whole thing ... as far as details, i agree, details are hard ... i welcome ideas ... ewe tend to work on this in person. ... it brings certain isolation as most of us are working for the same company ... even posting things in public is not enough s/ewe/we/ dglazkov: and it turns out everyone is busy dcooney: i agree with dglazkov ... there was a complaint that proposals so far don't have a detailed declarative syntax ... and we'll address that. ... i'd like to encourage people to avoid taking some simplistic view ... that declarative and XXX need to be mirrored. AlexRussel: there is, there's the form element v. xmlhttprequest <dglazkov> s/XXX/imperative dcooney: some things just won't be expressable in both weinig: i certainly can understand not jumping to conclusions about individual pieces ... when we saw the demos of what would currently exist. ... it seems that it was working around things with hacks without a declarative syntax. AlexRussel: setting this up as an either or is misleading ... our goal was to design declarative as a sugar on op of imperative ... at least a strong mirroring. ... can you define declarative with the imperative api? [ no ] sicking: this is what i disagree with ... we want to have bindings adding to css that are purely stylistic ... things with a different security model that are cross origin AlexRussel: if you don't have the plat capability ... if you can only do it declaratively ... you should do the archeology work to uncover the primitives and expose the,m <adrianba> s/the,m/them/ sicking: would you say style sheets are declarative sugar on the style attribute AlexRussel: i don't think that's the right question ... they have a different semantic in terms of inheritance ... for bindings in xbl2 ... what you're missing is a way to be tied into the application life cycle ... treating style attributes as desugaring ... there's a missing bit of infrastructure ... it's the mechanism in which you're allowed to do i travis: Travis, Microsoft ... i'd like to +1 the desire to move forward on specing on some balances of company's ideas ... there's clearly value in dspecing out ideas outside of the ocmponent model. ... i'm interested in seeing that move forward even without a declaritive model. darobin: if there were a brainstorm model,. would you be interested? mjs: in practice, the declariative/imperative model, which will be the primary interface for developers? ... for people believe in declarative, the approach to design is based on that ... define that first <adrianba> s/of company's/of Alex and company's/ mjs: for people in imparative, the approach design's that first ... and make a sugar layer for a subset of the other ... that's the underliying phiulosophical difference ... hopefully once we have specs for this, we can comment on this ... instead of hypothetical "i think this won't work"" ... you can't predict if the layering will work unles you can see oth layers dglazkov: it sounds like there will be a brainstorm tomorrow ... we have some proposals for declaraitve syntax ... if you enjoy half cooked meals ... we're tready to sreve them to you ... the problem is difficult ... what made xbl2 so difficult to spec and comprehend was the decorator concept ... the fact that you could ad and remove behaviors dynamically ... i believe this is where we'll fall into despair tomorrow ... i recommend defering that question <dglazkov> dglazkov: there is a page where i outline th edifference between the two: [ bad sequence, lag ] dglazkov: subclassing is a very common thing that happens in many languages ... you add behaviors to a thing by extending it ... decorator is clsoer to an aspect oriented language ... you can create xxx <Ms2ger> s/tready/ready/ dglazkov: component model tackles element behavior attachment ... and defers decorators darobin: i'm hearing agreement on seeing more specs and on a breakout/brainstorming tomorrow dglazkov: all day tomorrow? darobin: there's no one form w3c here ... either it's outside the structure tomorrow ... you take a table and work it out <heycam> Current schedule for the sessions tomorrow: darobin: or you go through channels tomorrow morning and propose ... we're enjoying the fact there's no team contact [ people discuss the grid ] darobin: 11:15am? heycam: i'd like to go to api design ... could we have it at 1:30pm? [ 1:30pm ] [ poll, who might show up? ] darobin: about a dozen people mjs: i can't be here tomorrow, sorry darobin: anything else? sXBL? dglazkov: are we still considering sXBL? darobin: there's a point wrt Rechartering mjs: i think everyone has agreed we want to do components ... and the disagreement about the starting point ... as long as the charter doesn't identify the name darobin: chaals we should ensure the Charter doesn't name the spec [ People leave ] <smaug> s/Index/Indexed/ <smaug> is there some kind agenda online? sicking: it's been almost finished for 6 months ... anyone from Google here to talk about this? ... the only issue i know outstanding is error handling ... i don't know if we have filed bugs ... i can look tat u[p ... those might be more editorial Israel: Israel from Microsoft sicking: there's not all editorial, but the ones i see are really small michaeln: Michael N, Google sicking: Israel and I talked a bit about it over lunch ... it seem s we might have agreement ... that error events aren't actually fired ... There are two types of errors ... one associated with a request ... one isn't Israel: and one of those kinds is basically fatal sicking: and we never arget ererors at the transaction Israel: hopefully developers understand what they can do sicking: that's actually drafted in the spec ... we should clarify that we're talking about that in this thread ... and confirm people are ok w/ that solution ... beyond that, we could go through the buglist ... it's pretty simple stuff -13 bugs darobin: anything we can close is good sicking: i suspect most require changes to the spec ... but we can coe to agreement ... bug 14199 ... just a bug in the spec ... bug 14201 ... - mention of version change request, which is renamed - trivial change darobin: that's editorial ... bug 14318 ... - that's important to mozilla ... bug 14352 ... - idl marking requirement ... editorial ... bug 14384 ... - that's an interesting quetion ... currently we throw if readystate isn't done if you try to get result ... so you can't get the transaction during upgradeneeded, which is bad ... - we should set readystate to done ... - not sure if that's the right fix ... we could do something special in this case ... it's the request from an open call Israel: there is a transaction, locking the whole database sicking: yes ... what should ready state be? <scribe> ... done even though we haven't opened? Israel: done seems fine sicking: bug 14389 ... - i wanted alex here ... we have two callbacks in the spec in the sync api ... the two way sfor creating a transaction ... currently they're [functiononly] ... so you can't pass an object with a handleevent or similar ... i have no opinion on that [ jonas explains to alex who just returned to the room ] sicking: is there value in supporting passing objects? alexrussel: the object passing protocol is strange from a design perspective ... you could have an object that handles lots of things ... the question from hj is "what's this?" s/hj/js/ darobin: that's the benefit of using an Object alexrussel: I think passing an object whose members are named by the event Josh_Soref: the idl lets you pick the function name on the object sicking: this is part of the indexed db spec ... you pass it a callback for the transaction ... we can support function, or function-or-object Marcos: looking in general how JS is used ... many people don't use the object form <smaug> =FunctionOnly should be removed from the spec mjs: to make this clear so we stop talking about handle event ... can you give us the name of the name on the callback object darobin: what is the color of the bikeshed? Marcos: it's called handleEvent sicking: let's pretend we renamed this to transactionStart ... it would be a single function name, since we only do one thing AlexRussel: if this is the beginning of having well named properties for callback objects, that's great [ scribe repeats what Smaug said ] darobin: I agree ... it should be transactionStart mjs: WebKit has usually not done the FunctionOnly bit [ Good bikeshedding, we picked a non black color ] sicking: bug 14393 ... i think i've already fixed it ... bug 14404 <smaug> FunctionOnly is always a spec bug except with onfoo event listeners Israel: this related to not knowing which version you were working on during an abort and wanted to do an upgrade ... this related to an exception/event type not? having a version or something [ No one seems to really remember tihs ] Israel: inside upgradeneeded ... with an optional parameter, how would you get the version? sicking: database.version in the upgradeneeded or the callback Israel: if you aborted it, and you're outside the upgradeneeded s/tihs/this/ Israel: I think this predates an [optional] paremeteer s/parameteer/parameter/ sicking: if you fail to open ... which is where an upgradeneed happens Israel: I think you can close the bug ... i don't think we need it anymore sicking: we need to specify something, because it's unclear in the spec ... bug 14405 ... - i fixed that ... bug 14408 ... - this is based on a usage pattern we saw ... as things stand now, if you open a cursor and in the callback and you do a bunch of things, and expect the cursor to progress ... having to call continue at the end is hard ... as soon as you call continue, getting .key/etc will trigger an exception ... we propose that once the cursor has recieved its first data, it won't throw Israel: so it's just caching data? sicking: this is because of request objects Israel: so this is different than calling continue twice? sicking: yes, that still throws michaeln: what happens when you call continue on the last cursor? sicking: either we make it start throwing, or we can leave the values as they were michaeln: this came up recently in code review ... and the response was "oh, i don't think tha'ts specified" sicking: in general, the spec tries to agressively throw michaeln: where you're changing the behavior of aggressive throwing ... it needs to be fleshed out sicking: i think i offered to fix this bug i/start throwing/... there isn't a reference in the callback (it's null), but you can have another reference to it elsewhere/ sicking: bug 14412 ... no brainer, we should do that ... bug 14441 ... - just outdated, should remove that note, editorial ... bug 14488 ... - missing annotation ... that's it! ... what do we return from delete operations? Israel: I'm ok with not returning anything. sicking: the spec says to return true if it deleted something or false if there's nothing to delete ... in some casw, that would be useful ... this is asynchronous s/casw/cases/ sicking: this could be slower to implement ... and since we don't know if someone's going to use it, we already have to dig it out ... the speed cost is totally implementation specific ... my preference is to return nothing, to be safe ... you can always get the information, although it's probably slower - by calling count first Israel: we're ok not returning anything ... as long as you end up in a success handler ... the issue was, what happens when you're deleting a range ... and you can't delete all of the range? ... and we agreed to throw two kinds of errors sicking: if you fail to delete everything, you always have to revert, since all actions are atomic Israel: one thing that would be great ... we started putting out there a test called LAteral ... we'd like to get feedback from all implementers to see how interoperable we are ... i believe the set of tests are for the old set version sicking: we already landed the change Israel: we'll try to revise the tests ... open-with-version is the new api to replace set-version [ That was answered for the Scribe ] sicking: unfortunately, all of our tests rely on the error event ... and they use generators ... JS Harmony generators Travis: you can always stick things into the submissions folder s/Travis/Travis_MSFT/ sicking: we'll need to go through our tests and rewrite them to not use generators, which are convenient to our test writers, but not portable adrianba: it'd be helpful if you submitted them so we could see coverage and avoid duplication darobin: and someone might magically do the conversion for you ... i'm hearing whispers about LC sicking: we might be able to do LC this year ... we need to fix these bugs, but they're not much work RafielW: from Google ... I'm curious to know if anyone from Apple/Microsoft has an opinion Travis_MSFT: Travis, Microsoft ... I'm reading it right now weinig: Sam, Apple ... in a similar vein, we've been working on other things, and it hasn't been a high enough priority ... it's been moving pretty quickly and doens't seem bad ... it's good if it ties in with undomanager s/doens't/doesn't/ Travis_MSFT: this is MutationObserver? <anne> wait is this about mutations already? RafielW: yes [ Group apologizes to people not present ] [ we break for 10 mins to let those 3pm people to arrive, please arrive promptly ] <Ms2ger> [ Threats of hunting down people who are late ] <Ms2ger> [ Robin, be warned ] <Ms2ger> OH: I don't believe in the internet darobin: it's 3pm, we're starting Travis_MSFT: Would you like to tell us about MutationObservers RafaelW: ok, so an overview ... the intent is to be a replacement for DOM Mutation Events ... the fundamental difference ... is mutation events try to project an abstraction ... that things are going to be dispatched synchronously ... that turned out to be problematic for a number of reasons ... MutationObservers are different ... you can register an observer to express an interest in a certain set of mutations ... and you'll get a list of things that have happened ... it's a batched list of things that have happened ... since the last time you were called ... the other interesting part is the timing of delivery of mutation records ... there was a pretty long discussion on Public-Web-Apps about this ... the people discussing this <Ms2ger> s/Public-Web-Apps/Public-Webapps/ RafaelW: arrived at what smaug coined as "the end of the microtask" ... for the delivery of mutation events ... it means mutations are delivered at the end of the outermost script execution ... if outside such a thing, at the end of the current task ... as part of the single Turn, before painting ... otherwise you see artifacts weinig: can that be defined in terms of the event loop? anne: currently painting happens just after Task completion RafaelW: currently painting has a gaurantee (ignoring Modal dialogs) ... but you may get called before the end of a task ... if a synchronous event is handled ... say for mouse down ... and mutations happen as part of those handlers ... then you'll get something delivery then as part of that outermost ... invocation sicking: my understanding of when it's defined to fire ... for example the Load event for XHR ... it fires at the end of each event handler ... let's use a click event handler ... it fires at the end of each event handler on each event target ... it happens multiple times during the call to dispatchEvent() ... so if you click on an element 3 elements deep ... you call on 2 elements in capture ... on target ... 2 on bubble ... You get it twice for each thing, potentially, but only if there are mutations ... the reason for this ... smaug was concerned that if we do it at the end of a task ... if each event handler is independent ... and doesn't know what one might do ... invluding doing a sync XHR s/invluding/including/ scribe: during one of those, we'd need to fire these there ... there's a risk of an actor anne: what if an actor calls showModalDialog sicking: yes, but it means you can only shoot yourself in the foot anne: that's acceptable RafaelW: smaug are you there? ... can you explain more? smaug: the idea was to encapsulate the mutation ... web pages cannot detect what is a task <anne> ^^ "that's acceptable?" smaug: you may dispatch several events during a single task ... it's always when a event handler returns or a timer returns weinig: does that mean that every new api we define we'll have to define microtasks ... or do we infer it? ... specification-wise? sicking: specifcation-wise, it would probably be nice if they did ... but it should be pretty obvious ... any time you call into the web page ... that isn't inside another callback mjs: in that case, it might be nice ... if this concept was codified in some more explicit way ... we do have the concept of calling into script and having it call out ... it seems we're in agreement in what it is sicking: when i spoke to Hixie , he said there was something like that in html5 ... used to figure out security for call stacks ... but yes, it needs to be codified [ Hixie is no longer behind sicking ] sicking: there's special handling around ... MutationObserver callbacks themselves ... if you have 3 observers ... and you make a mutation to the DOM ... and #1 makes a mutation [ Sicking will write this in ] rafaelw: my mental model ... is the mutation observer maintains a pending queue to be delivered to its observer ... and when it's called to deliver, it delivers what it has to its observer ... and that observer can create work to be added to all observers' queues rafaelw: and the system loops around until it empties its queues sicking: everyone will eventually be notified ... and there's no inner looping ... we'll append and create larger loops mjs: can you create an infinite loop with 2 listeners? sicking: even a single listener can create an infinite loop <dglazkov> s/mjs/weinig rafaelw: what would happen with current mutation events? ... you explode the stack ... that coding error ... here is just an infinite loop instead of exploding the stack ... we talked about a fixed limit on going around ... the advantage of exploding the stack ... is that you can see a stack trace to understand what went wrong ... hopefully developer tools will evolve to help you debug the infinite loop case here mjs: there would be a way to avoid starving the paint cycle [ Scribe summarized poorly ] mjs: it's possible to make a design ... where you don't have an arbitrary fixed limit ... but you don't starve the event loop if you have a programming mistake rafaelw: we talked about that ... there are legitimate uses for going around the horn a couple of times ... and then let things settle down ... comes from the model driven use proposals s/proposals/proposal/ scribe: we were asked to slow down and look at the use cases rafaelw: imagine you were using a JS library to do templating ... and used something like jQuery to do a UI ... and it wants to go decorate the page w/ more DOM ... and you used a constraint library to manage forms ... so the templating library might produce more jQuery stuff ... and the jQuery stuff might trigger more work for the templating mjs: that seems like a Use Case where it's easy to create something that never terminates ... i agree it enables you to do things you could not otherwise do ojan: Ojan, Google ... as long as we agree ... mutations during one of these callbacks should get delivered eventually ... this error will either result in a hang, or burning cpu indefinitely ... i'd rather the hang ... rather than burning cpu ... i'd rather a limit and an error rafaelw: i mostly agree ... i just don't want to create a situation where a developer doesn't know if he'll run before a paint occurs mjs: you have the situation where each piece of code has observers ... you need to globally analyze to determine if it will finish Travis_MSFT: they need to be interdepent ... you could get into an infinite loop ... if jQuery included things which the validation system depends on ... which depends on the third component ... but in most cases, i don't think that will happen ... you might have a queue of 3 or 4 mjs: the loop was claimed as a UC Travis_MSFT: i agree, but disagree on a hard limit ... the distributed UC is potentially difficult ryosuke: we already have this problem with the current system ... i don't see this as introducing new issues mjs: given how bad mutationevents are ... i don't support "no worse than them" as justification weinig: yes there are problems, yes this makes things better ... if we could avoid more problems, that's better darobin: the situation you've described is a corner i've painted myself into many times weinig: in the end, those risks are going to be minimized by something XBL-ish ... or component modelish [ laughter ] mjs: there's really 3 basic things for this issue ... 1. repeatedly cycle until all queues are empty ... 2. have a fixed limit ... 3. at some point, delay delivery to avoid starving the event loop ... this should be on the mailing list ACTION rafaelw to send how to handle single pass not emptying all mutation queues to the list <trackbot> Sorry, couldn't find user - rafaelw [ anne asks a question ] anne: call dispatchEvent() from code ... where does that get trigger the mutation observers? sicking: the outermost thing is always a callback ... which is a microtask ... if you call dispatchEvent() in there, ... the mutation observer calls back from the end of the outer microtask ... it's like a function call anne: tasks that are queued are special? ... yes, they are outermost, so they're special rafaelw: are you concerned, or not understanding? Travis_MSFT: i'd like the spec describe the scenarios clearly ... perhaps even so people can visually see ojan: and if sicking could recall the thing Hixie said, that'd be good smaug: i need to finish the implementation first ... to decide if it's good darobin: does this go into DOM4? ... does anyone care? <Ms2ger> I do <Ms2ger> As mentioned before <anne> you can edit it :) ryosuke: i've heard that they relate to DOM4 and should probably be there anne: i do think it should be in there ... because every other spec that intergrates should work with it darobin: we seem to have violent agreement there <Ms2ger> I'm in violent agreement with anne :) darobin: anything else to discuss? Travis_MSFT: do these observers include stylistic properties? sicking: most stylistic changes don't directly do this ... but many times you trigger a style change by setting an attribute or inserting something, which would itself be an observer notice rafaelw: there's an attribute filter darobin: perhaps there should be something specific for a specific class value rafaelw: we agreed this is probably the 80% use case ... there was an earlier proposal from microsoft called watched-selector weinig: i want to echo that point ... the extra class list on element was the favorite thing ... special casing class might be valuable ojan: i really liked the watched-selector proposal ... it's more generic, over a selector instead of just a class list s/watched/watch/ <anne> watchSelector s/watched/watch/ scribe: what i like about this is that you can implement watchSelector on top of this rafaelw: it's on my list to open source a watchSelector reference impl on top of this darobin: anything else? [ No ] [ Break until 4pm -- for server sent events ] [ darobin bryan will introduce, it's up on the screen ] bryan: I sent to the list a link ... 2 years ago ... at TPAC here ... We had a discussion at the HTML WG about connectionless push ... the text at the time was fairly generic ... the ability to use connectionless methods ... not having to maintain keepalive ... the intent in that spec, still informative ... a list of things that might occur in the process ... this spec ... I've been involved in OMA since 2000 ... involved in the push work in OMA since then ... we recently completed work within OMA ... this api is enough to form the basis of an extension to event source ... it provides a way to use SMS ... as an extension to http push ... events are passed up to the application, in this case, the OMA runtime ... when it's advantageous to save resources ... it's possible to coalesce these into a unified message ... event source didn't define these because they were out of scope to the spec ... I have a diagram here showing how apps could be deployed [ ] [ bryan describes the diagram ] [ The diagram is: ] bryan: this doesn't modify the signature of Event Source ... down the road, we might create a persistent registration ... to let events wake up applications ... you have the desire to connect two new barers through uri ... you can use a registered urn that defines OMA Push ... within the IMS framework ... events are delivered using the same model as Event Source ... although the event type is sent to SMS for SMS ... and OMA Push for OMA Push ... you don't get onMessage() since these are not message events ... with OMA Push ... the simplest way was to create a sequence of strings ... so the application can receive all of the data as a single event using the event stream concept ... in this case, i pulled out the xml document, the url, and the text message, and present it ... for sms, the sms text message gets put into the event and delivered [ ] [ bryan describes second diagram ] [ ] [ bryan describes third diagram ] [ ] [ ] [ ] [ ] [ ] [ bryan mentions Widget contexts but glosses over it ] bryan: developers need to consider filtering for security considerations ... just as in web messaging ... accepting "*" is the responsibility of the application choosing to do so [ ] [ ] jcantera: Jose Cantera, Telefonica ... how do you intend to progress this? darobin: charter wise, it's in scope to this group ... if this group is happy to do it ... do you think it would make sense ... one good thing is that it lets web apps have the same notifications as native apps ... and it shields web apps from complexity ... would it make sense to hide the distinction between OMA Push and SMS? bryan: i considered it ... but, how do you deal with different framing formats? ... in OMA Push, you can deliver any content type ... the headers are important, you need to know the mime type ... those elements are important ... for a server to provide to the app ... i couldn't figure out how to combine that weinig: what mobile OSs support this? bryan: I prototyped this in Android ... I believe almost any OS in a smartphone class ... allows a developer to attach to network sources ... and allow someone to act as an agent for this ... in mid tier devices, that tends to be more complicated lgombos: Laszlo Gombos, Nokia [ Lost, sorry ] sicking: we talked about this at Mozilla ... but we created something very different from this ... there are two unfortunate things here ... 1. I'd like to hide whether messages are from TCP/IP or SMS or OMA Push ... (I don't know anything about OMA Push) ... - it feels like the goal was to expose OMA Push <heycam> anne, no it's at 4:30 sicking: The goal at Mozilla was ... How do we expose SMS over a channel that isn't TCP/IP? s/anne, no it's at 4:30// scribe: the other part is requiring permission from the user ... that severely limits how many users allow that ... if it's a little bit sensitive, people are still rightfully worried ... people press no, which is better than just pressing no ... we were hoping to provide something simpler/safer bryan: there could be prearranged trust relationships ... but it would be better for the user to have already trusted the app and not overburden them with prompts darobin: what sicking was getting at ... is providing an *always* safe subset ... to avoid getting permission ... this is more powerful, and "easier in terms of security" bryan: "how do you make this transparent?" ... look at XHR, the agent says "i want / i'll take these mime types" ... if we could take event source and say "i can accept these mime types" ... that would let me decide if it was safe to deliver it ... because i didn't want to change event source, i couldn't do that darobin: i think that's an option on the table ... i'm hearing interest in doing something around this ... would anyone object to the group working on this? ... it's in charter already [ Chatter ] sicking: this feels different enough from what we talked about at mozilla ... it feels like a different deliverable ... if we can solve it by just adding a header ... great shepazu: would you be comfortable having a line in the charter scoping it more tightly? ... why don't we three right a deliverable line for the charter? darobin: to avoid waiting for rechartering ... we should agree on a scope ... bryan it'd be good if you could send use cases ... sicking, it'd be good if you could send something mjs: weinig asked earlier if this is implementable onn iOS ... i believe as presented, the answer is no ... iOS doesn't give applications the ability to receive SMS ... to me, that says that a design that does not force the web page to choose a transport would be better bryan: i've said "any available barer would be good" darobin: ok, based on the email you all send, we'll scope the work heycam: helo s/helo/hello/ scribe: In this session, I want to let people bring up issues ... and discuss how we might go about testing ... and third, what are the time frames for implementers (smaug asked this) ... do you mean consume the syntax? <dom> Thread on testing Web IDL heycam: I don't mean specifically that, perhaps just conforming to its behavior weinig: one thing we talked about in terms of testing WebIDL ... is to test it in terms of specs that are speced in WebIDL ... for instance Canvas ... uses ovberloading ... s/ovberloading/overloading/ ... and things like prototype chains ... testing people's implementations of generating code, i don't think it's worthwhile ... you could hand write all the bindings, and still be compliant heycam: i agree, that's the only reasonable approach ... i think someone could come up with a set of properties for testing mjs: the way WebIDL is written, it's targeted at spec writers, not browser vendors ... it creates an indirect relationship ... indirect testing through testing of other specs seems the only way of testing it ... which unfortunately creates a circular dependeny for progressing on the REC track s/dependeny/dependency/ <dom> [I don't think see why this would be circular? surely we *can* create tests for specs that aren't in CR yet] weinig: we always do that when we test XHR, we test JS mjs: yes, we do <dom> s/think/see/ jrossi2: in particularly ... when you test foo-spec, you test webidl Travis_MSFT: I agree ... and as we march to LC, we need to mark things as AtRisk darobin: we should just kill it heycam: and the only one is modules weinig: does that include namespace objects? hheyyes s/hheyyes/heycam: yes/ AdamBarth: you can look at the specs as testing it heycam: yes, but it's harder to test automatically darobin: things written with ReSpec are pretty easy AlexR: Alex from Google ... i'm not sure if this is the right forum for this ... i think the entire java language indings should be dropped s/indings/bindings/ scribe: second is there are several instances where webidl doesn't serve JS well ... 1. a TC-39 meeting ... several months ago ... interface objects which are reified ... do not act like function objects <gsnedders> One option for WebIDL testing is some sort of tests designed to be run in a browser-specific way against the interface generation scribe: do not behave normally, they aren't callable [ scribe lost thoughts ] heycam: I agree with this ... it's unlikely authors will be doing 'throw typeerror' ... things which are unnewable ... things where it doesn't make sense to be able to new them <anne> new Math() AlexR: i take the concern ... it's a risk ... the idiomatic way of doing that in js ... is mixins <anne> (gives a TypeError) AlexR: the artifact way of doing that would be still newable ... the reality is that today, webidl doesn't specify something "reasonable" that could be impleemtned yourself in JS weinig: that's not necessarily the goal of WebIDL ... the goal of WebIDL is to define how things are implemented today ... and how they should be impleemnted AlexR: thenI suggest webidl is mischartered sicking: javascript doesn't have a way to subclass things other than Object ... fortunately, almost everything is Objects ... I know you suggested something using Object.call ... but I didn't hear any inmplemetners interested in doing that ... and it seemed like something for TC-39 to do AlexR: i should put on my TC-39 hat ... and note that this discussion was something that happened @ TC-39 ... and brendan and I agree that everything you can do to an interface, should be newable ... and yes, Arrays are odd ... and you sdhould throw things back at uws ... and there are things in ES6, proxies ... which should address it sicking: what acts as normal JS is a matter of definition ... for example, the array class, and even the string classs ... has built in behavior and doesn't allow you to subclass ... and we're following those models AlexR: you're still failing ... since your objects claim to chain to Objects sicking: but Array claims to chain to Object AlexR: but everything that WebIDL defines has intrinsic behavior sicking: but that's how it works ... the fact is that TC-39 hasn't solved this problem for any of these things ... it's actually more, bz had examples AlexR: Math is an Object, not a Function heycam: In the Spec, they are all Function objects, they are defined such that when called they throw type error ... which you can do in JS AlexR: do we still have a separate constructor property in WebIDL? ... throwing by default is a bug sicking: moving beyond low level semantics ... heycam wrote an example, "new Node" doesn't make sense mjs: every DOM object that's an object is a specific Subclass of Node AlexR: but that invariant is controlled by AppendNode <heycam> The spec says "Interface objects are always function objects." sicking: but if it's several weeks of work in order to do something which no one can do anything useful with, then it's a waste of time weinig: what's the argument for making node sicking: all the intrinsic behavior of Nodes is based on which Node subclass it is AlexR: then calling it and newing it throws heycam: is it worth it to handing back a non useful thing? Travis_MSFT: the answer is no AlexR: i'm not saying that you should turn off the ability to new/call ... i'm asking you to turn off the default anne: then you'd require a lot of specs to change most of the specs AlexR: i'd argue that for html element types, it's mostly a bug jrossi2: no, there's more than one interface per element anne: because the tags all share an interface AlexR: so you can't create a tag name ... you haven't thought about it hard enough anne: we have thought about constructors a lot, especially because you brought it up mjs: there are two separate issues ... one is New <anne> wrong or not, without use cases this is not going to fly mjs: and the other is subclassabilitiy ... in js, only Object supports Subclassing [ mjs and AlexR argue ] mjs: you should fix JS first before we change AlexR: we have misfeatures in DOM based on document.createElement mjs: the goal of WebIDL is to describe the actual semantics of DOM bindings and to get browsers consistent ... it is not the goal of WebIDL to transform the philosophy of how DOM bindings are built AlexR: the issue of default, shouldn't be the way of forcing the default ... because as anne says, people will just put no constructor everywhere s/AlexR/heycam/ anne: that's makework AlexR: creating an instance .. ... in the same idiom as anything else i can in that system weinig: that's something whicih as mjs said s/whicih/which/ AlexR: will there be a WebIDL version which changes this? darobin: no [ We are at an impass ] [ Should we drop Java? ] heycam: oh, i didn't respond to that ... maybe ... if we particularly don't care about other bindings ... and i'm sure AlexR would argue we shouldn't <gsnedders> I keep on grimacing everytime subclassing is mentioned… because JS doesn't scarcely has classes. :\ heycam: should we actually alter WebIDL to reflect something closer to JS ... that is something to consider, but it would take some time to do Marcos: have you done the bindings for WebIDL in java? heycam: one project I'm involved in has a Java based DOM shepazu: i wanted to talk about process very briefly <gsnedders> Does it look as if anyone will have met CR exist criteria for the Java bindings by the time they have been met for the JS bindings? IMO that's the relevant matter. shepazu: dropping Java would mean we don't need 2 java implementations to get to REC <gsnedders> The Java bindings are fine provided they don't hold up the spec. <gsnedders> (They can always be split out into a separate spec) mjs: getting two interoperable implementations of java bindings to test all of the features of webidl ... would keep the spec from REC forever shepazu: the Staff view on process ... is that if for each feature we have 2 specs in CR heycam: the plan is to only have 1 spec consuming some of these items shepazu: we can be fine about that ... don't let the process for a normal spec drag us down ... we can come to an agreement on the exit criteria ... we're flexible on how we judge the passs criteria <gsnedders> Can someone ask what the staff view is on impls of the bindings? mjs: i think we need actual implementations of specs using this feature ... part of what we're evaluating is to ensure that all of the details of what it says happen are actually practical/possible <dom> (I think Java bindings should be split into a different document) weinig: i had a bunch of questions ... 1. should long long stay in the spec? given its wierd behavior in javaascript ... given the inability of js beingable to represent numbers consistently heycam: the issue being numbers in js over 2^53 get squished into a double ... we talked about creating a class anne: it's used in progress events mjs: the loss of precision happens in a javascript parser ... it's more of an issue if we lose that detail in a movie ... the progress events of loading a movie from xhr ... i'm more curious about your opinion <gsnedders> bigints should be readded to WebIDL after they're in ES <anne> Josh_Soref: I worry about data loss with this <anne> Josh_Soref: nobody else worries about it :( <gsnedders> (i.e., they should be removed in the short-term) <anne> (roughly what Josh_Soref said) mjs: 2. should we treat an undefined value for a key in a dictionary the same as non existing ... that would be fine with apple, especially if mozilla is ok ... what we do currently is inconsistent for our dictionaries ... etierh way sounds fine, it's usually a programmer error heycam: sicbrought up cases like that where you deliberately get something as undefined mjs: so that sounds like a use case heycam: sicking said it's consistent with missing arguments to a function <mjs> s/mjs/weinig/ s/sicbrought/sicking brought/ heycam: we're making the argument that people compare to argument instead of checking <gsnedders> I think someone needs to look through ES and see where [[HasOwnProperty]] is used and where undefined is used [ see brendan's argument on list? ] weinig: the other one discussed this week is remove FunctionOnly for callback ... implementers have been inconsistent wrt how they use that heycam: this might be a case where using interfaces resulted in ... creating an object with the property called handleEvent weinig: i was actually saying allow both in all circumstances ... it's not like we can make addEventListener handle this heycam: i did it as the default jrossi2: i found the legacy handleEvent all wierd ... and developers would like to support it everywhere weinig: in webkit, we allow both anne: it is defined as Callback FunctionOnly InterfaceObject ... i think it's removed everywere except onFoo weinig: WebKit allows it everywhere, so Travis_MSFT: I'd like to point out that in my years, i never heard of that ... i'd rather default to FunctionOnly AlexR: my preference would be that if Object style is supported ... is that we attempt to allow same name as event name in addition to handle event ... so that you can have different colors ... handleEvent is the thing that doesn't do nicely for all event handlers heycam: if that's the direction we want to go, then we need to support Object style ... so you want to remove FuinctionOnly from the spec so you can only do both weinig: i didn't realize that hixie was using it for attribute event listeners heycam: i could inroduce function to actually mean function anne: we could add eventhandelr for that heycam: i'll make the change about allowing typedefs to put some extended attributes on a type so whenever you use a typedef you get the attributes from them weinig: next... ... i ask this every time i see you ... do people/do other specs use Sequence, and Array? heycam: now there are weinig: the next thing, an implementation issue ... is iteration order in for-in of properties on interfaces defined? heycam: we were trying to defer to TC-39 weinig: TC-39 doesn't define them for host objects ... in webkit, it's a random order <gsnedders> Does ES5 not define them as undefined for host objects? <gsnedders> Like, does the definition as undefined not apply for all objects? weinig: i've not heard of any bugs regarding iteration order Travis_MSFT: yes, we've heard of bugs ... we end up breaking them every time we ship IE ... it doesn't break many sies ... more often than not, it's a testcase ... i would not want them to be defined, because it would be particularly hard ... in the spec, there's some mention of ordering ... named and indexed properties weinig: pragmatic question ... ordering/lookup ... on Window, in the browser ... ... are you comfortable with the hooks on Window ... in webkit first look at the this, and then look at the that, and ... ... there are multiple catchalls that have to be iimplemented in order ... does anyone know if that's speced anywhere? heycam: yes, between a combination of things in HTML and WebIDL, it should be completely defined jrossi2: correctly? heycam: there's a bug that lists the order, and Travis_MSFT checkked it, and it didn't seem to hit any problems weinig: it seems like we need lots of test cases for it Travis_MSFT: i'm waiting for firefox to implement that part of the spec (sic) weinig: the only problem we could hit is "var location;" s/sic/sicking/ heycam: one question for people ... the approach of having idl attributes mapped to accessor properties ... there's an issue Travis_MSFT identified <heycam> assigning to Element.prototype.onsomething heycam: there's an issue travwith an old version of prototype.js breeaking <heycam> since on* handlers are now accessor properties on the prototype that throw if their this object is wrong, this was a breaking change for some sites <heycam> where the previous implementation was to have those properties as data properties on the instances rahter than the prototype heycam: because it checks the this of something ... are people happy with that approach? Travis_MSFT: yes ... i particularly value it for overloads ... it's easy to replace functionality when you need to trackbot: yes <trackbot> Sorry, Josh_Soref, I don't understand 'trackbot: yes'. Please refer to for help s/trackbot/Travis_MSFT/ weinig: yes, no issue ... we're worried about performance AlexR: array.length has two sides ... it's a getter/setter pair ... that can be modeled as getter/setter today ... second, if you write to an index property beyond current length, there's a magical put ... shrinking can be repaired ... growing requires morework weinig: will length be moved to the prototype be moved to use getter/setter AlexR: it isn't clear how it will be resolved heycam: earlier in the discussion, we brought up the idea with a more JS focussed thing which might replace WebIDL ... not right away ... we didn't have people chime in [ what would it look like? ] heycam: something where the actual constructs in JS would sound like JS darobin: why not use JS? heycam: because it wouldn't be very concise mjs: javascript isn't very good for doing that Travis_MSFT: it's a tricky thing to contemplate ... if you contemplate things the way ECMA does it, you have to be more verbose ... on the other end of the thing, you ... it might be an interesting exercise, but i'd like to finish webidl first mjs: there's some value that webidl is somewhat decoupled from js ... js is the only langauge that's relevant for api specs ... maybe someday every browser owill have python or dart ... if it does, then we will regret it if we define things too tightly <heycam> Josh_Soref: one of the things which DAP was looking at was the ability to specify SOAP replacement <heycam> berjon: json-rpc using webidl AlexR: having designed DOM for DART, the right interface will be a new way of doing things ... we wound up doing something WebIDL <heycam> Josh_Soref: they wanted to define an API for things where the implementaiton might not be a host object, it might be JS <heycam> ... but they want to define it in Web IDL <heycam> ... and in doing that, we were toying with the idea of writing a WebIDL to JSON binding heycam: you're talking about using WebIDL to define a ReSTful interface ... i haven't seen a lot of discussion about that ... outside a bunch of people mentioning it on the DAP list darobin: it's actually feedback from webkit that brought this up iniitally ... define a mapping to json objects ... and define a mapping to json ipc ... it would be defined separately ... the way forward on that, is that i'll finish my JS prototype of it ... and see if it flies or crases AlexR: there is a value in having a base description of what the apis are ... in most implemenetations those are in C+++ ... and those will correspond fairly closely to the IDL ... but at the same time, having something that is too close to C++ doesn't serve JS very well mjs: in webkit today, we generate mu;multiple bindings from idl ... they are just used ffor portions of the api exposed ... ObjC, C++ bindings, mapping to various frameworks ... possibly Python and Gobject ... in some cases, people haffve specifically mentioned a desire to align with the relatively well known JS APs ... as a value relatively close to the JS for users of their language ... there may be value for a single interface description with mappings to languages AlexR: mappings doesn't mean design centered ... if we are designing a multilanguage thing ... then we have a responsibility to all of them darobin: i think that's a straw man ... we are designing w/ js very much ... as much as i'd like to see a v2 ... we're not going to change the course very much ... if you want a v2, bring a sketch heycam: a bunch of things are collapsing number types or renaming some keywords mjs: the number types are sueful because they define error checking at the interface between the js interface and the underlying implementation ... having a single number type would require each spec to explain what happens when one passses a non interger ... the case of i only accept integers in this range is fairly common Marcos: i'd like to see more examples in the spec heycam: i try to include one example per construct Marcos: i'm doing a review of it anne: are we doing another LC? ... and if we do, could we add String Enumerations ... as a replacement for string constants heycam: I talked to the WebPerf guys ... and they're happy with dropping that ... wrt LC, do you have to if you make normative changes? darobin: if you make changes which would invalidate a review, then you're supposed to go back to LC ... normally we would have to go to LC, especially if we made this change ... LC isn't a big deal, it's just process ... we can have a 3 week last call, and if everyone is happy, just move to CR ... and start testing ... does anyone want to be the testing chief for webidl? heycam: i thought that was only for new specs Marcos: HTML5 tests most of it, right? jrossi2: that's irrelevant, we need an example of each thing <anne> heycam, what about AllowAny? <anne> heycam, I guess you have that recorded somewhere... Josh_Soref: can't we just create a table for each feature of WebIDL and an interface in a given spec for it Travis_MSFT: i think we solve the Example requirement and Testsuite by correlating to Spec items <scribe> ACTION: Travis_MSFT to lead testing coordination for WebIDL [recorded in] <trackbot> Sorry, couldn't find user - Travis_MSFT <MikeSmith> trackbot, status? <scribe> ACTION: Travis to lead testing coordination for WebIDL [recorded in] <trackbot> Created ACTION-638 - Lead testing coordination for WebIDL [on Travis Leithead - due 2011-11-09]. heycam: AllowAny is in the list of things from the LC feedback <MikeSmith> action-638? <trackbot> ACTION-638 -- Travis Leithead to lead testing coordination for WebIDL -- due 2011-11-09 -- OPEN <trackbot> heycam: it had implications relating to override anne: I wasn't clear where it was used, apart from XHR [ heycam talks about overloads ] [ specifically String and Number versions with AllowAny ] ACTION darobin to ACTION rafaelw (or the Google AC) to send how to handle single pass not emptying all mutation queues to the list <trackbot> Sorry, couldn't find user - darobin ACTION boarlicker to ACTION rafaelw (or the Google AC) to send how to handle single pass not emptying all mutation queues to the list <trackbot> Created ACTION-639 - ACTION rafaelw (or the Google AC) to send how to handle single pass not emptying all mutation queues to the list [on Robin Berjon - due 2011-11-09]. darobin: any other issues? <Ms2ger> Agenda: This is scribe.perl Revision: 1.136 of Date: 2011/05/12 12:01:43 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/EE:/EU:/ Succeeded: s/EE:/EU:/ Succeeded: s/EE:/EU:/ FAILED: s/EE:/EU:/ Succeeded: s/EE:/EU:/ Succeeded: s/writter/writer/ Succeeded: s/that is/that it is/ Succeeded: s/ack// Succeeded: s/RRSAgent: draft minutes// Succeeded: s/RRSAgent: draft minutes// Succeeded: i/Topic:/Scribe: Chaals Succeeded: s/??:/Travis_MSFT:/ Succeeded: s/Suresh/Balaji/ Succeeded: s/Balaji/nvbalaji/ FAILED: s|weinig|maciej| Succeeded: s/../.../ Succeeded: s/dg/dglazkov/ FAILED: s/..../.../ FAILED: s/..../.../ FAILED: s/..../.../ FAILED: s/valuale/valuable/ FAILED: s/..../.../ FAILED: s/..../.../ FAILED: s|s/..../.../|| FAILED: s/ewe/we/ FAILED: s/XXX/imperative/ FAILED: s/the,m/them/ FAILED: s/of company's/of Alex and company's/ FAILED: s/tready/ready/ Succeeded: s/form/from/ FAILED: s/Index/Indexed/ FAILED: s/hj/js/ FAILED: s/tihs/this/ FAILED: s/parameteer/parameter/ FAILED: i/start throwing/... there isn't a reference in the callback (it's null), but you can have another reference to it elsewhere FAILED: s/casw/cases/ FAILED: s/Travis/Travis_MSFT/ FAILED: s/doens't/doesn't/ FAILED: s/Public-Web-Apps/Public-Webapps/ FAILED: s/invluding/including/ FAILED: s/mjs/weinig/ FAILED: s/proposals/proposal/ FAILED: s/watched/watch/ FAILED: s/watched/watch/ FAILED: s/anne, no it's at 4:30// FAILED: s/helo/hello/ FAILED: s/dependeny/dependency/ FAILED: s/think/see/ FAILED: s/hheyyes/heycam: yes/ FAILED: s/indings/bindings/ FAILED: s/AlexR/heycam/ FAILED: s/whicih/which/ FAILED: s/mjs/weinig/ FAILED: s/sicbrought/sicking brought/ FAILED: s/sic/sicking/ FAILED: s/trackbot/Travis_MSFT/ Found Scribe: Marcosc Found Scribe: Chaals Inferring ScribeNick: chaals Found Scribe: ArtB Inferring ScribeNick: ArtB Found Scribe: Josh_Soref Found Scribe: Josh_Soref Inferring ScribeNick: Josh_Soref Scribes: Marcosc, Chaals, ArtB, Josh_Soref ScribeNicks: chaals, ArtB, Josh_Soref Default Present: tpac, Olli_Pettay, Ms2ger Present: tpac Olli_Pettay Ms2ger Soonho_Lee magnus krisk spoussa Jacob Israel SungOk_You Bryan_Sullivan Wonsuk_Lee David_Yushin_Kim Kihong_Kwon Jesus_Martin hao_wang Jonathan_Jeon Josh_Soref Robin Cameron JamesG Dom Jonas Doug Chaals Kris BrianR Magnus ArtB MikeSmith EricU LaszloG Sakkari WayneCarr Dowan adrianba eliot Agenda: Got date from IRC log name: 01 Nov 2011 Guessing minutes URL: People with action items: art charles travis travis_msft[End of scribe.perl diagnostic output]
http://www.w3.org/2011/11/01-webapps-minutes.html
CC-MAIN-2018-13
refinedweb
13,633
71.95
Opened 9 years ago Closed 8 years ago #8951 closed (invalid) Better support for writing more complex template tags Change History (3) comment:1 Changed 9 years ago by comment:2 Changed 8 years ago by Exemple o required fixups on tags and filters : I try to write something a list of objects that are complex to render, and for which I will assign some specific "credentials" in a database while enumerating. I would be happy to use a tag to simplify the rendering of these complex objects. However, things are not happenning like I expect. At first try, I did : {%for o in listobject%} o1={{o.id}}, o2={%tag o.id arg%} %{endfor%} We coded (code included later) the tag using "oid=template.Variable(tok[1])" or Parser.compile_filter in the template tag, "oid.resolve(context)" in the render function. Samething that simple_tag would have done. However, the id of the first object of listobjects for o2, while o1 is correctly moving... This should be qualified as a bug since it is not the expected behaviour in a for loop... I guess this is due to the fact that "for" and "endfor" are themselves tags. The one would think about writing a filter, and to do {{o.id|filter:arg}}, the problem of this approach is that in tags, I could access "context" to retrieve objects in the context, the filter syntax does not provide access to context and supports only one argument, that is generally a string. If I want to access the my user profile (as I actually want) I am stucked... I believe generally that "request", "session", "context" objects should be easily accessible from tags and filters Currently this is not the case. I hope explanations of this kind of need can help make the next tag system better. Code of the buggy tag : def buggytag(parser,token): class Node(template.Node): def __init__(self,oid,param): self.oid=oid self.param=param def render(self, context): if (type(self.oid)!=int) and (type(self.oid)!=long): self.oid=self.oid.resolve(context) o=Objects.objects.get(id=self.oid) key=foo_create_accesskey_for_key(context['request'],o) return ('('+str(oid)+')'<span id="'+self.param+'"></span>' +'<script>load_object("'+o.url+'","'+self.param+'","'+key+'");</script>') tok=token.contents.split() assert(len(tok)==3) try: oid=int((tok[1])) except: oid=parser.compile_filter(tok[1])#template.Variable.. param=tok[2] return Node(oid,param) comment:3 Changed 8 years ago by Without a specific suggestion it's hard to know what to do here. Closing in favor of some of the other tickets 'round these parts that have specific suggestions/patches. bits[] needs to go away. Badly. Perhaps this and namespaces could be part of a ttag refactor? (and other ponies as well? :)
https://code.djangoproject.com/ticket/8951
CC-MAIN-2017-26
refinedweb
462
58.18
by Seth Mottaghinejad, Analytic Consultant for Revolution Analytics In the last article, we showed two separate R implementations of the Collatz conjecture: 'nonvec_collatz' and 'vec_collatz', with the latter being more efficient than the former because of the way it takes advantage of vectorization in R. Let's once again take a look at 'vec_collatz': Today we will learn a thrid, and far more efficient way of implementing the Collatz conjecture. It involves rewriting the function in C++ and using the 'Rcpp' package in R to compile and run the function without ever leaving the R environment. One important difference between R and C++ is that when you write a C++ function, you need to declare your variable types. The C++ code chunk shown below creates a function called 'cpp_collatz' which takes an input of type 'IntegerVector' and whose output is of type 'IntegerVector'. Unlike in R, where explicit loops can slow your code down, loops in C++ are usually very efficient, even though they are tedious to write. cpptxt <- ' + IntegerVector cpp_collatz(IntegerVector ints) { + IntegerVector iters(ints.size()); + for (int i=0; i<ints.size(); i++) { + int nn = ints(i); + while (nn != 1) { + if (nn % 2 == 0) nn /= 2; + else nn = 3 * nn + 1; + iters(i) += 1; + } + } + return iters; + }' cpp_collatz <- cppFunction(cpptxt) set.seed(20) cpp_collatz(sample(20)) [1] 20 17 8 19 4 20 1 0 2 5 3 16 9 7 17 7 12 6 14 9 Let's now redo our C++ implementation in a slightly different way. We would rather not have C++ code interspersed with R code: Not only does it make it hard to read, but we also won't be able to take advantage of syntax-hightlighting specific to C++ (among other annoyances). So let's store the C++ code in a file we call 'collatz.cpp' and use the 'sourceCpp' function in R to call it. Here is the content of 'collatz.cpp': cat(paste(readLines(file("collatz.cpp")), collapse = "\n")) #include <Rcpp.h> using namespace Rcpp; // [[Rcpp::export]] int collatz(int nn) { int ii = 0; while (nn != 1) { if (nn % 2 == 0) nn /= 2; else nn = 3 * nn + 1; ii += 1; } return ii; } // [[Rcpp::export]] IntegerVector cpp_collatz(IntegerVector ints) { IntegerVector iters(ints.size()); for (int i=0; i<ints.size(); i++) { iters(i) = collatz(ints(i)); } return iters; } // [[Rcpp::export]] IntegerVector sug_collatz(IntegerVector ints) { return sapply(ints, collatz); } There are three things worth mentioning about the above code chunk: - We broke up the function in two: one function 'collatz' that operates on a single integer and returns its stopping time, and another function 'cpp_collatz' that runs the 'collatz' function on a vector of integers and returns a vector of stopping times. This makes the code easier to read (no more nested loops) and more modular. - There are two extra lines at the top, which tell C++ what namespace to use, as well as an extra line '// [[Rcpp::export]]' before each function definition. The latter is used to let Rcpp know that theses functions should be made available to R once they have been compiled. - The third function, called 'sug_collatz', does the same thing as 'cpp_collatz' but uses the Rcpp-sugar extension, which is an attempt to take common R functions and rewrite them in C++ so the C++ code can look like its R counterpart when possible. This can sometimes come at a very small cost but it saves us a lot of hassle, as you can see. To compile the C++ code, just type the following in R: sourceCpp("collatz.cpp") Assuming that we have a C++ compiler istalled, it will take a few seconds to run. We can now type the following into the R console: cpp_collatz function (ints) .Primitive(".Call")(<pointer: 0x000000006e042180>, ints) cpp_collatz(1:10) # seems to be working fine. [1] 0 1 7 2 5 8 16 3 19 6 sug_collatz(1:10) # same as above. [1] 0 1 7 2 5 8 16 3 19 6 And now let's get to the reason we bothered with C++ in the first place: efficiency. There are four comparisons we're interested in: - the 'cpp_collatz' function - the 'sug_collatz' function, which uses 'sapply' in C++ instead of an explicit loop - the 'vec_collatz' function we wrote earlier which is our R implementation - the 'collatz' function we wrote earlier in C++, which works only on a single integer, but we can vectorize it by wrapping it in 'sapply' in R. We can think of the last case as being a hybrid approach. One reason to include it is because someone may share a complicated piece of C++ code that works but is not vectorized and vectorizing it may turn out to be a non-trivial task (programmers are supposed to be lazy after all!). collatz_benchmark <- function(nums, ...) { + require(rbenchmark) + benchmark( + cpp_collatz(nums), # runs completely in C++ + sug_collatz(nums), # runs completely in C++ + vec_collatz(nums), # runs completely in R in a vectorized fashion + sapply(nums, collatz), # runs collatz in C++ but sapply in R + columns = c("test", "replications", "elapsed", "relative"), + ... + ) + } Let's compare the two functions for all integers from 1 to 10^4: collatz_benchmark(1:10^4, replications = 20) test replications elapsed relative 1 cpp_collatz(nums) 20 0.03 1.000 4 sapply(nums, collatz) 20 0.53 17.667 2 sug_collatz(nums) 20 0.03 1.000 3 vec_collatz(nums) 20 51.36 1712.000 And for all integers from 1 to 10^5: collatz_benchmark(1:10^5, replications = 20) test replications elapsed relative 1 cpp_collatz(nums) 20 0.45 1.071 4 sapply(nums, collatz) 20 5.76 13.714 2 sug_collatz(nums) 20 0.42 1.000 3 vec_collatz(nums) 20 753.08 1793.048 As we can see 'cpp_collatz' and 'sug_collatz' are almost identical when it comes to efficiency and both are far more efficient than 'vec_collatz', and increasingly so for larger sequences of integers. Also notice the relative efficiency of the hybrid approach compared to 'vec_collatz'. Let's benchmark on a sample of 1000 integers from 1 to 10^6 for a more realistic comparison of four approaches: set.seed(20) collatz_benchmark(sample(1:10^6, 1000), replications = 100) # will take a while to run! test replications elapsed relative 1 cpp_collatz(nums) 100 0.05 1.667 4 sapply(nums, collatz) 100 0.23 7.667 2 sug_collatz(nums) 100 0.03 1.000 3 vec_collatz(nums) 100 31.15 1038.333 The results are remarkable: the C++ functions 'sug_collatz' is the winner with 'cpp_collatz' a close second. The slight advantage of 'sug_collatz' may be due to 'sapply' in C++ using a more efficient method for running the loop (such as iterators). Moreover, both functions are about 1000 times faster than 'vec_collatz'! Even though 'vec_collatz' was specifically written to take advantage of vectorization in R, it still pales in comparison to the might of C++. Even more surprising is that the hybrid approach is only about 8 times slower than C++ approach, which is not bad at all. But if your "vectorization" consists of wrapping a function in 'sapply', then it's best to do it in C++ as we did with 'sug_collatz' than to do it in R. One take-away lesson for us is that istead of spending a lot of time and effort making our R code as efficient as possible, it may be worth investing that time in learning how to code in a language like C++. Especially since packages like 'Rcpp' and the 'Rcpp-sugar' extension give us the best of both worlds. Another take away is that hybrid solutions are a good alternative in cases when our C++ code is efficient but not vector...
http://www.r-bloggers.com/r-and-the-collatz-conjecture-part-2/
CC-MAIN-2014-35
refinedweb
1,274
60.35
Ok, well Iam just about done with one of my assignments, but Iam having a problem at the end of it. Iam suppose to find a maximum income of the families income that the user inputs and I have to find the families that make less than 10% of the maximum income. Can anyone help? Heres what Iam suppose to do, "Find the maximum income of all the values entered, and print this value to the screen. Then count the families that make less than 10% of the maximum income. Display the income of each of these families, then display the count." Heres an example, Please enter the number of families: 5 Enter an income: 8500 Enter an income: 109000 Enter an income: 49000 Enter an income: 9000 Enter an income: 67000 The maximum income is: 109000 The incomes of families making less than 10% of the maximum are: 8500 9000 for a total of 2 families And Heres my code so far, import java.util.*; public class CountFamilies { public static void main(String[] args) { Scanner kbd = new Scanner(System.in); int numOfFamilies = 0, maximum = 0; System.out.println("Enter number of families:"); numOfFamilies = kbd.nextInt(); long []income = new long [numOfFamilies]; for (int count = 0; count < numOfFamilies; count++) { System.out.println("Enter an income:"); income[count]=kbd.nextLong(); } if () { System.out.println("The maximum income is:" + ); } } } I know I have to use a If statment to find the maximum, thats why I have a blank If statement in there.
https://www.daniweb.com/programming/software-development/threads/325786/help-with-getting-a-maximum-income-in-java
CC-MAIN-2017-26
refinedweb
249
54.63
What are the difference between @Inject and @Injectable? @Inject() - Angular 2 Angular 2 @Inject() is a special technique for letting Angular know that a parameter must be injected. Stayed Informed - Why @Injectable() in Angular 2? Example as, import { Inject } from '@angular/core'; import { Http } from '@angular/http'; class UserService { users:Array<any>; constructor(@Inject(Http) http:Http) { //TODO AS PER YOU. } } @Injectable() - Angular 2 @Injectable() marks a class as available to an injector for instantiation. An injector reports an error when trying to instantiate a class that is not marked as @Injectable(). How to use Dependency Injection (DI) correctly in Angular 2? The basics Steps of Dependency injection, 1. A class with @Injectable() to tell angular 2 that it’s to be injected “UserService”. 2. A class with a constructor that accepts a type to be injected. Example, UserService marked as @Injectable as, import {Injectable, bind} from 'angular2/core'; import {Http} from 'angular2/http'; @Injectable() /* This is #Step 1 */ export class UserService { constructor(http: Http/* This is #Step 2 */ ) { this.http = Http; } } I hope you are enjoying with this post! Please share with you friends. Thank you!!
https://www.code-sample.com/2017/04/angular-2-inject-vs-injectable.html
CC-MAIN-2020-16
refinedweb
187
57.77
I'm new with OpenMW and hope, someone could give me some hints to get the most of my Cam M7. My first project should be the following: I have a labeling machine. It put product labels on bottles. This works fine. The machine has an ink-jet printer which should print some character (which always start with CH-B.). Sometimes the ink-jet printer just doesn't print and I would like to test with the Cam M7 if the text get's printed. The printer prints black on a white background. I know the position and could set the camera directly to this position. Every time one label is printed (and transported) I get a signal to pin0, a buzzer is connected to pin2 and a reset-Button is on pin1. Here is my code: I've tested it on my desk, It works but the recognition rate is low (I hold the cam in my hand and the parameter and the template are not optimized). Code: Select all import sensor, image, time from pyb import Pin from pyb import LED from image import SEARCH_EX, SEARCH_DS sensor.reset() # Reset and initialize the sensor. sensor.set_contrast(1) sensor.set_gainceiling(16) sensor.set_framesize(sensor.VGA) sensor.set_windowing(((640-160)//2, (480-120)//2, 160, 120)) sensor.set_pixformat(sensor.GRAYSCALE) template = image.Image("/template.pgm") clock = time.clock() # Create a clock object to track the FPS. pinStart = Pin('P0', Pin.IN) pinReset = Pin('P1', Pin.IN) pinBeep = Pin('P2', Pin.OUT_PP, Pin.OUT_PP) pinBeep.value(0) found = 1 err = 0 while(True): clock.tick() img = sensor.snapshot() print(clock.fps()) if err==0: r = img.find_template(template, 0.50, step=4, search=SEARCH_DS) #, roi=(10, 0, 50, 40)) if r: img.draw_rectangle(r) found = 1 if pinStart.value()==1: if found==0: err = 1 pinBeep.value(1) else: found=0 if pinReset.value()==1: green_led.on() err = 0 found = 1 pinBeep.value(0) What parameter, resolution, threshold could I optimize to get the best results? Which distance from the camera to the label would you use? What's the best way to create a template (the camera image is not so sharp and has some noise errors), but creating the text in an image program is not so good because of different bold and line width. So what's the best way to create a template? Any help is welcome, best regards Mark
http://forums.openmv.io/viewtopic.php?f=5&t=1212&amp
CC-MAIN-2019-26
refinedweb
402
69.28
Rooftop solar photovoltaic (PV) systems Solar PV systems are now more affordable than ever, and across Australia, almost 200,000 households and businesses have had a solar PV system installed. In Alice Springs, there are over 400 systems installed on homes and businesses (as of January 2011). Why install a solar PV system? What is a solar PV system? A solar photovoltaic (PV) power system is a technology that converts sunlight into electrical energy. By installing a solar PV system on the roof of your home or business, you are effectively turning your roof into a minipower station. PV systems should not be confused with solar hot water systems, which are sometimes also referred to as ‘solar panels’. In a solar hot water system, panels on your roof use the sun’s energy to heat water, which is then stored in a tank. The picture on the following page shows both technologies. There are many reasons to have a PV system installed. You will be generating clean, renewable energy, reducing the need to burn fossil fuels. You will save on your power bills, as the PV system will meet some or all of your power needs. The electricity generated by your PV system can be sold to Power and Water and appears as a credit on your electricity bill. As solar power is generated during the day when the community’s power demands are typically at their highest, your PV system will help reduce the load on the local electricity network. The value of your home may increase as demand for sustainability features in homes increases. How much power can a PV system generate and is my house or business suitable? How does a PV system work? In a typical system, there are two main components - the solar panels (also known as photovoltaic modules) and a device called an inverter. The set of solar panels (also called an ‘array’) are installed on the roof of your home or business, however ground mounted systems are also possible if installation on the roof is not feasible. The solar array should face north, though other orientations can also be suitable. When the sun hits the panels, electrical current is generated (as DC) and fed to the inverter, which produces electricity at 240 volt AC (the same as the electricity grid) and feeds it into your local electricity network via the electricity meter. The meter records the electricity produced, and this information is used by Power and Water to provide a credit on your power bill. The amount of energy generated by a solar PV system is directly related to the size of the system installed – the bigger the system the more electricity will generally be produced per year. For example, a typical household system may involve the installation of 12 X 165 watt panels. In this case, the system is 1980 watts or 1.98kW. The rule of thumb in Alice Springs is that for every 1kW of solar panels installed, around 1,600kWh/year of electricity will be produced. Therefore a typical 2kW PV system, with an optimal roof, will produce around 3,200kWh/year. There are site specific factors that can reduce the amount of electricity that a solar PV system will produce, including having a less than optimum roof orientation and pitch. Similarly, shading from nearby trees or other buildings or structures can also have an impact on output. The amount of available roof space can also be a limiting factor in terms of what size system can be installed. The amount of roof space required varies between 8 – 12m 2 /per kW installed, plus necessary clearances. To get a more accurate estimate of output, solar installers can carry out a roof top inspection to calculate the expected output of a PV system, taking into account the impact of any shading. Can I run my house or business off a solar PV system? The answer to this question will depend on how much electricity you use and what size solar PV system you install. If you installed a 2kW system, it would produce around 3,200kWh per year (depending on the system installed and the site specific factors mentioned previously). The average household electricity consumption in Alice Springs is around 8,000kWh/year (an ‘energy champion’ household could use as little as 4,000kWh/year). Therefore, assuming your consumption was the same as the Alice Springs average, you would be meeting around 40% of your energy needs over the course of a year. When investing in a solar PV system, it makes sense to also look for opportunities to improve your home or business’ energy efficiency to reduce your electricity demand. It is also important to note that installing a solar PV system will not provide you with electricity during power outages or blackouts. Solar PV systems are required to shut down immediately when an interruption in the electricity supply from the grid is detected, as a safety mechanism for electrical linesman. What financial incentives are available? An Australia wide funding scheme is in place for small scale renewable energy technologies, including solar power systems. The scheme allows the owner to sell the renewable energy certificates (RECs) that their system ‘produces’ (RECs are also known as Small Technology Certificates or STCs). This funding is normally provided as a point of sale discount on the purchased cost of the system. The value of the RECs rebate varies from time to time, but represents a significant discount off the cost of the system. Solar PV installers generally quote the after RECs price as the final cost. It is not compulsory to ‘sell’ your RECs at point of sale, however the full retail cost of the system is payable if you choose not to. For a full explanation of the pro’s and cons of selling your RECs, contact Alice Solar City. Owners of solar power systems also ‘sell’ the electricity produced by the system to Power and Water, under a Power Purchase Agreement. Currently, householders sell the electricity generated to Power and Water at 19.23 cents/ kWh (this is equivalent to the standard tariff rate for consumption in 2010/11). For example, if a householder has installed a 2kW solar system, the estimated financial return (assuming a roof with good solar access) will be 3,200kWh X $0.1923, or around $615/year. Commercial customers will also typically be paid a rate equivalent to the tariff rate for consumption. For more information about connecting to the grid and applicable buy-back tariffs for homes or businesses, visit renewable_energy/solar_buyback_ program or phone Power and Water on 1800 245 092. Questions to ask your solar PV installer Purchasing a solar power system is a major investment for your home. Alice Solar City encourages householders to undertake an appropriate amount of research to ensure that they choose a quality system, at a cost effective price. The following are questions householders or business owners considering installing a solar power system should ask any proposed installer: What brand of solar panels and inverter will be used - are these guaranteed to be used, or will the supplier substitute equipment according to availability of supply? If product substitution does take place, will the householder be informed in advance or given the option to withdraw? Where does the installer intend to install the inverter? As inverters do produce a ‘humming’ noise, it should be located away from day time living areas. Depending on the inverter, this may be outside, ideally out of direct sunlight. Does the installer undertake a formal assessment of expected performance of the proposed system and provide a written quote to the householder which includes information on warranty, performance etc? What is the length and type of warranty? 5 year warranty on inverter; 10 year product warranty and a 25 year (at 80% output) warranty is recommended as a minimum for solar panels. What is the track record of use and servicing in Australia of the major components to be used? What is the track record of the company itself (ask for references) and who will they use to carry out the installation — are they local or ‘fly in’? Does the installer do a formal inspection of the site, assess expected output and provide a written quote before asking for a deposit or a commitment from the householder? Alice Solar City highly recommends site inspections prior to accepting a quote or paying any deposits. How does the installer propose to provide local support for warranty and service claims? Where package prices are advertised, are there extra costs involved? For example meter installation, grid connection, building permit costs, two storey buildings, or any upgrades to the meter panel or switchboard. Alice Solar City strongly encourages householders to arrange a site inspection and formal quote to ensure that the full cost of the system installation is known upfront, i.e. prior to accepting a quote or paying a deposit. Does the installer manage the process of connecting the solar system to the electricity grid or obtaining any applicable building permits? Does the installer require a deposit and what is the timeframe between paying a deposit and installation? Does the company provide a point of sale discount through the purchase of Renewable Energy Certificates (RECs) and if so, what price is paid for each REC? What are the steps involved in ‘going solar’? How to find an installer An accreditation system for solar installers is in place across Australia and there are a number of accredited PV installers in Alice Springs. Alice Solar City can provide contact details for the suppliers who are providing fixed price package deals for solar power systems. Visit the Smart Living Centre at 2/82 Todd Street or our website for more information. Phone: (08) 8950 4350 Part of the Australian Government’s Solar Cities Initiative 1 2 3 4 5 6 7 Speak to one or more accredited solar installers to arrange a site inspection and quote (Alice Solar City can provide names of installers). The site inspection is required to confirm that the house is suitable, provide an estimate of expected output, and to identify any additional work required to be undertaken as part of the installation, including any upgrade required to the meter panel or switchboard. As part of your decision on which installer or package to proceed with, refer to the Questions to ask your solar PV installer on the previous page. Installing a PV system may require a building permit; your installer will provide information about any building permit requirements that may apply and the process and costs involved. Once you have accepted a quote and paid any applicable deposit, your installer will schedule an installation date. If a building permit is required, this should be obtained prior to installation. You should complete the necessary Agreements with Power and Water (Network Connection Agreement and Power Purchase Agreement) and pay the applicable fee. Power and Water should approve the connection prior to the installation commencing. Installer carries out the installation (usually completed in less than 1 day) and will require full payment. The installer should provide you with a system manual, commissioning sheet and Certificate of Compliance. Installer provides relevant documentation to Power and Water. Subject to the documentation being correct and the installation being in compliance, Power and Water will install the new electricity meter and the system is ‘turned-on’. Delays in turning the system on can result if the completion advice provided to Power and Water is incomplete or the PV installation is sub-standard.
https://www.yumpu.com/en/document/view/38832567/rooftop-solar-photovoltaic-pv-systems-alice-solar-city
CC-MAIN-2020-50
refinedweb
1,929
50.97
In arm_tr_init_disas_context() we have a FIXME comment that suggests "cpu_M0 can probably be the same as cpu_V0". This isn't in fact possible: cpu_V0 is used as a temporary inside gen_iwmmxt_shift(), and that function is called in various places where cpu_M0 contains a live value (i.e. between gen_op_iwmmxt_movq_M0_wRn() and gen_op_iwmmxt_movq_wRn_M0() calls). Remove the comment. We also have a comment on the declarations of cpu_V0/V1/M0 which claims they're "for efficiency". This isn't true with modern TCG, so replace this comment with one which notes that they're only used with the iwmmxt decode. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> --- In an ideal world we would clean up the iwmmxt decode to remove cpu_V0, cpu_v1 and cpu_M0 entirely -- they're only used as temporaries, and in a modern coding style we would just create and dispose of more carefully scoped TCG temps as we needed them. --- target/arm/translate.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/target/arm/translate.c b/target/arm/translate.c index 27bf6cd8b51..87235db4640 100644 --- a/target/arm/translate.c +++ b/target/arm/translate.c @@ -59,8 +59,9 @@ #define IS_USER(s) (s->user) #endif -/* We reuse the same 64-bit temporaries for efficiency. */ +/* These are TCG temporaries used only by the legacy iwMMXt decoder */ static TCGv_i64 cpu_V0, cpu_V1, cpu_M0; +/* These are TCG globals which alias CPUARMState fields */ static TCGv_i32 cpu_R[16]; TCGv_i32 cpu_CF, cpu_NF, cpu_VF, cpu_ZF; TCGv_i64 cpu_exclusive_addr; @@ -8552,7 +8553,6 @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs) cpu_V0 = tcg_temp_new_i64(); cpu_V1 = tcg_temp_new_i64(); - /* FIXME: cpu_M0 can probably be the same as cpu_V0. */ cpu_M0 = tcg_temp_new_i64(); } -- 2.20.1
https://lists.gnu.org/archive/html/qemu-arm/2020-08/msg00019.html
CC-MAIN-2020-40
refinedweb
273
54.83
Please confirm that you want to add Selenium Mastery: Apply What You Learn Here Today By RicherU to your Wishlist. . In this course we talk about And even more tools to help you get to the next level as a tester almost immediately. Having an automated tester changed my life and made releasing my application almost instant. Enroll with confidence! Your enrollment is backed by Udemy's 30-day, no-questions-asked, money-back guarantee! Go and get hooked on this online tutorial about Selenium 2.0 and take the advantage of this tool for your expediency. Selenium 2.0 is an open-source web driver for API or server ready that can help you with automated testing and web needs. API stands for Application Program Interface; it is a set of routines or protocols for building any dynamic software applications. Hello there! These 11-pages of slide materials will help you identify and list down things needed before you start with Selenium are the following: Copy and paste the link provided on this page. This link will direct you to Selenium Web driver Resources and download them. For a conducive learning experience, click HD button. The reason why use Selenium, first its for free! It is an automated testing tool for web applications, it supports multiple languages such as Java, Csharp, Python, Ruby, Php, Perl and also JavaScript. It has useful components that you can use during your software developments and go through smoothly on web browsers. Section 1 quiz We provided you with helpful slides to help you get in your development environment set. To start, install firebug and fire path. See through these links and download. You need to set-up some tools for you to activate your Selenium. See again Lecture 6 for easier access of links. To begin, you need to install firebug and fire path. So go and check this video because it will assist you with the processes from downloading, installing and activating tools in order for you to write code for selenium. This exciting video will show you how to create your first project on Selenium. After setting up Selenium silver components, Java run time files and eclipse. Open your eclipse, set up your project and simply follow the remaining series of steps for you to officially start your first test. Cool! Now, we’re back and ready for our project. After we open up our stand alone objects, we gonna create our main method and go ahead with the rest of the automated testing. We’ll test our Firefox browser and also do some serious testing on Google to run or display a series of Selenium links for this browser. Amazing, right? Try it yourself and experience what Selenium can do for you. import org.openqa.selenium.By; import org.openqa.selenium.WebDriver; import org.openqa.selenium.WebElement; import org.openqa.selenium.firefox.FirefoxDriver; Yes. Go and copy these test samples and see what you can do. You knew already some of the basic steps from lecture 9, why not try this out. Import all the tools need and you’re half way there. These snippets will guide you through in understanding more the topic. Given steps at lecture 9, we already started our own test by trying sample codes at Lecture 10. This time, we run this and check if we have the same output. You can go down a little bit more further on these samples and try it your way. Answer a series of questions after this video and we’ll see if we are on the same page in learning Selenium 2.0. We are about to start with the creation of our login test and manipulate web elements. We gave you with 13-slides to read and learn about the next testing. Locators, XPath, Web elements such as buttons, links, text boxes, radio buttons, check boxes and drop boxes are one of the focus of this tutorial. These are the identified topics for the following videos after this. Locators are very important in Selenium driver, these are classified into 2 categories the structure- based and attributes-based locators and we provided you with lists from each classification. As we play further, we’ll share some examples, step-by-step procedures and see what we can do with these features on our browser. Just hit the replay button if you missed some parts. These are the actual set of Locator resources we have used from the previous tutorial. Try this and you’ll see more what you can do with these examples. XPath is actually defining parts of not only with XHTML but also with Xquery, Xpointer, Xlink and XSLT as defined, at the same time, it can also be used in JavaScript, Java, XML Schema, PHP, Python, C and C++, and lots of other languages. Take note that there are a lot of XPath expressions that can be used to get a specific result. This video will show you more about XPath using the sample project we got. The play the button is ready. These are a set of codes that we discussed earlier from lecture 15. If you want to try and understand more about XPath, grab your copy now and perform this example. Thanks See you on the next video. In terms of working with different web elements, there’s an array of methods, attributes to consider and functions that we can use for each type. Will show you some examples in the following: JUnit is actually a framework used to test extreme programming also a unit testing in Java Programming Language wherein you can reuse and maximize its test cases. There are sets of annotations and asserts methods that we wrote down for you to use in the process. The advantage of using JUnit will help you generate test results faster and a lot more. To install JUnit, download this link and grab your copy at. We will show you how use JUnit and at the end of this video we will run our first JUnit test. Hit play button. Grab this copy and try this on your own, this is for you to see the annotation in action. If you have any confusions or questions, go and see lecture 19 for a review..
https://www.udemy.com/selenium-webdriver-20-a-beginners-guide-to-selenium/
CC-MAIN-2017-34
refinedweb
1,051
72.56
Pre. Programs starts with code line #include <stdio.h>. Here, the directive #include is a preprocessor command. All preprocessor commands start with the sign#. The command given above directs the compiler to include the header file <stdio.h> (standard input/output header file) from C Standard Library in this program. Standard input is used to denote input from keyboard and standard output is used to indicate display on the monitor connected to the computer. A program that requires standard input and standard output must include the header file <stdio.h> as it has been done in Program. The header file <stdio. h> also contains many other functions that help in formatting the input/output of a program as well. More than one header file from C Standard Library may be included in a program if these are required for the manipulation of data of the program. For example, if you want to carry out evaluation of mathematical functions such as square-root, sine or cosine of angles, log(x), or evaluation of exponential functions, you must include the header file <math.h>.This header file has the procedural software for evaluations of many such functions. For including more than one header files in a program, these should be written in successive lines. Only one header file is written in one line. The name of each header file, if it belongs to the C Standard Library, is enclosed between the angular brackets < >. The names end with extension (.h) as illustrated below. #include <stdio.h> #include <math.h> The Preprocessor’s are Used For Two Purposes either they are used as Macros and either they are used as File inclusions like Many Times we write #include this is also a Preprocessor directive which is used for Including the Contents of a File Like iostream.h and either conio.h or any other File But the Macros are Also Called as Small Programs or The Macros are Used for Giving a Short names to Long Statements and Long Keywords and a user can Specify his own names to a keyword of c language Macros are always define by using a #define Statement and For Defining our Own Macro First we have to specify our name for a Macro then we writes the Statements those are to be Executed When a Macro is Called or When that Name is used in any where in a program For Defining a Macro We have to Write Like This #define Integer int Now we can use Integer instead of int .But Always Remember we are not Replacing a Word with our Words We are just Giving a another name to a built in Keyword In Macro we can also write any Statements and any Expression for Example #define Max 100 #define or || #define Sqr(x) x*x These are the Some Examples of Writing a Macro in the First Macro we defines a Max Word Wherever we use Max then Compiler will treat as 100 and Second Macro is or name to a or Symbol means Now Whether We can use a or Symbol and Instead of Putting a Symbol we can and either write a or word. Third Macro defines a Sqr(x) Whenever we writes Sqr with a Number then this will gives us a Square of a number etc. There are many Conditional Compilation Macros those are used for Performing operations on Macros Like Checking the Value of Macro and To Check Whether a Macro is Defined or not The Various Condition Compilation Macros are as follows:- 1. #ifdef:- This is used for Checking either a Macro with a Specified Name is Defined or not .Or it checks Whether a Macro is defined For use. 2. #undef:- This is Used for Un-defining a Macro. Or if we want to remove the definition of a Macro from our Program .then we can use #undef. 3. #ifndef:- This Condition Compilation is used for Check either a Macro is not Defined if a Macro really not defined then this will gives us true or if a Macro is defined then this will gives us false 4. #if :- #if condition Compilation Macro if used for Checking a value of Macro that is defined in a Macro definition or it checks whether an Expression evaluates to nonzero or not and always Remember Every #if Macro is must be Closed with the help of #endif 5. #else:- Used the Condition of First #if Macro is False . This is Similar to simple If else but Difference is that if –Else operates on value of Variable but #if performs or it either checks the Value of a Macro and this will be Executed When no Match of Macro Value is to be found same as else statement 6. #else if:- This Directive is used when we wants to Check one more Condition upon the Macros When #if is False then the Compiler will check another value that is defined in #else if Directive and also Every #elseif directive must be end with #endif
http://ecomputernotes.com/cpp/introduction-to-oop/what-do-you-mean-by-preprocessors
CC-MAIN-2018-30
refinedweb
842
64.54
Python FTP module Errors Clarification I set up a scheduled 'bat' file that will run five python FTP push scripts to push files to an external FTP server from my local PC. Under most normal circumstances, this process works fine. However, sometime there will be error during the process to the FTP server as reported by the traceback debug. ERROR Traceback (most recent call last): File "push2.py", lin 13, in (module)('X', 'X') File "X\Python\Python35\lib\ftplib.py", line 419, in login resp = self.sendcmd('PASS' + passwd) File "X\Python\Python35\lib\ftplib.py", line 272, in sendcmd return self.getresp() File "X\Python\Python35\lib\ftplib.py",line 235, in getresp resp = self.getmultiline() File "X\Python\Python35\lib\ftplib.py", line 221, in getmultiline line = self.getline() File "X\Python\Python35\lib\ftplib.py",, line 209, in getline raise EOFError EOFError Is these error caused by the scripts I am using or is it due to some settings in my Filezilla Server or it is completely out of my control which is at the external party FTP server? Is there a delicate way to workaround bypassing these errors especially for error 1 since it is happening the most often? My 'bat' file is below: cd X:\FTP_Push\Location\1 python push1.py TIMEOUT 5 cd X:\FTP_Push\Location\2 python push2.py TIMEOUT 5 cd X:\FTP_Push\Location\3 python push3.py TIMEOUT 5 cd X:\FTP_Push\Location\4 python push4.py TIMEOUT 5 cd X:\FTP_Push\Location\5 python push5.py if NOT ["%errorlevel%"]==["0"] pause timeout 60 I added timeout of 5s inbetween each script because I thought that I am connecting to the FTP server too fast. But it does not fix the issues. Many thanks. See also questions close to this topic - - unable to update image in tkinter using a function I am trying to create a Tkinter to make a window that shows images using a label, then update the image using an update function, but the image that I am trying to show doesn't show up in the Tkinter window, instead, a black screen appears I have two working code 1. one that shows an image on the Tkinter window 2. one what loops a GIF using an update function I tried to combine them the code I am working on that doesn't work '''Python #import GUI from tkinter import * #change dir import os os.chdir("C:/Users/user/Desktop/test image folder/") #add delay import time #import image from PIL import Image, ImageTk #set up the window window = Tk() #window.title("modify images") #list of filename filelist = [] #loop over all files in the working directory for filename in os.listdir("."): if not (filename.endswith('.png') or filename.endswith('.jpg')): continue #skip non-image files and the logo file itself filelist = filelist + [filename] #list of filename print(filelist) #show first pic imagefile = filelist[0] photo = ImageTk.PhotoImage(Image.open(imagefile)) label1 = Label(window, image = photo) label1.pack() #update image def update(ind): imagefile = filelist[ind] im = ImageTk.PhotoImage(Image.open(imagefile)) if ind < len(filelist): ind += 1 else: ind = 0 label1.configure(image=im) window.after(2000, update, ind) window.after(2000, update, 0) #run the main loop window.mainloop() ''' the other code I am trying to combine 1:the one that shows image '''Python import tkinter as tk from tkinter import * from PIL import Image, ImageTk # Place this at the end (to avoid any conflicts/errors) window = tk.Tk() imagefile = "image.jpg" img = ImageTk.PhotoImage(Image.open(imagefile)) lbl = tk.Label(window, image = img).pack() window.mainloop() print('hi') ''' 2:updates gif '''Python from tkinter import * #change dir import os os.chdir("C:/Users/user/Desktop/Learn Python") #add delay import time ##### main: window = Tk() ##### My Photo photo1 = [PhotoImage(file="anime.gif", format="gif -index %i" %(i)) for i in range(85)] #update image def update(ind): frame = photo1[ind] if ind < 84: ind += 1 else: ind = 0 label.configure(image=frame) window.after(80, update, ind) label = Label(window, bg="black") label.pack() window.after(0, update, 0) #####run the main loop window.mainloop() ''' I expect it to show all images in the file one by one it instead shows only the first image, then the window goes blank - SQLITE Query- To retrieve all versions between two versions? The sqlite table consists of attributes : |Versions (TEXT)| | "2.73.8" | | "3.6.4 " | | "3.9.11" | and so on.. I want to retrieve all the versions from the table between two versions given in the query. For instance: Between versions- 2.9.10 & 3.7.10 . I could not find any sqlite function to query this directly. I used Substring (SUBSTR) to split to get individual digits which could then be compared to the one present in the table. I was successful in doing that but I could find a way to query to retrieve all versions between two version set. create table prod(version varchar); insert into prod values('2.7.5'); insert into prod values('2.7.4'); insert into prod values('2.0.0'); insert into prod values('22.73.55'); insert into prod values('22.17.54'); insert into prod values('22.10.06'); insert into prod values('3.7.5'); insert into prod values('3.4.5'); insert into prod values('3.7.6'); Query to retrieve all versions below or equal to : "3.50.6" (using nested "case when" ): SELECT * from prod Where version IN ( SELECT CASE WHEN (CAST(substr(version,0,instr(version,'.')) as integer)=3) THEN CASE WHEN (cast(SUBSTR(SUBSTR(version, INSTR(version, '.')),1,INSTR(SUBSTR(version, INSTR(version, '.') + 1), '.') - 1) as float)< 0.50 ) THEN version ELSE CASE WHEN (cast(SUBSTR(SUBSTR(version, INSTR(version, '.')),1,INSTR(SUBSTR(version, INSTR(version, '.') + 1), '.') - 1) as float)=0.50) THEN CASE WHEN (CAST(replace(version, rtrim(version, replace(version, '.', '')), '')AS INTEGER)<=6) THEN version END END END END FROM prod); Kindly provide me a way to query to retrieve all versions in the table between two sets of versions. - to fix file_put_contents failed to open stream: Failed to set up data channel I have a PHP program which reads out and writes to a file which is on an FTP server, the first time writing goes fine, but when I try to write a second time I got 2 different errors: Warning: file_put_contents(): failed to open stream: FTP server reports 550 End in C:\Apache24\htdocs\Directory\Sources\form.php on line 42 and the second: Warning: file_put_contents(): failed to open stream: Failed to set up data channel: No connection could be made because the target machine actively refused it. in C:\Apache24\htdocs\Directory\Sources\form.php on line 42 And then it just repeats the second error. I haven't tried anything yet, because I don't know where to start. $document.write('<h1>Wrong email.</h1>'); window.setTimeout(function(){window.location.replace('../index.html');}, 3000)</script>"; fclose($text_file); $i++; }else{ //write old data along with new data file_put_contents($path, $text_data . "Text ;\r\n", FILE_APPEND, $stream_context); //echo "<script type='text/javascript'>document.write('<h1>Thank you.</h1>'); window.setTimeout(function(){window.location.replace('../index.html');}, 3000)</script>"; fclose($text_file); $i++; } } } } I expect the code to write the data which I fill in beforehand with a form which I send, however it writes only once and then it just shows the errors. - Running an ftp server on nixos I would like to run an ftp server on a nixos host. I am using vsftpd, though could use something else if that would make a difference. The ftp works fine on localhost, but the firewall is blocking me for remote usage. I have allowed TCP port 21, but that is not enough. How should I configure the firewall to allow ftp connections (including writing to the ftp server)? Here is the code that I currently have: { networking.firewall = { allowedTCPPorts = [ 20 21 ]; # connectionTrackingModules = [ "ftp" ]; }; services.vsftpd = { enable = true; # cannot chroot && write # chrootlocalUser = true; writeEnable = true; localUsers = true; userlist = [ "martyn" "cam" ]; userlistEnable = true; }; } With the above, any use of ftp from off-host fails: ftp> put dead.letter 200 PORT command successful. Consider using PASV. 425 Failed to establish connection. Use of passive mode (e.g., with ftp -p) doesn't seem to help here: ftp> put dead.letter 227 Entering Passive Mode (192,168,0,7,219,202). ftp: connect: Connection timed out Testing on a throwaway host with the firewall disabled networking.firewall.enable = false; Allows ftp -pto work; though of course turning off the firewall is not an attractive option. Thanks for any help and pointers, - Vs code ftp empty files on linux mint I use linux mint (mate), connect via standard caja ftp file manager, I can edit files. BUT when using this connection, I open the files in VS code , they are empty. When I use for example Atom, everything works, but Atom does not suit me. Help me please. The problem was partially solved using curlftpfs, mounting ftp as a local folder. But it works too slowly. Sorry for my English. - Access ftp server behind proxy I want to access the FTP server via python. It is behind our company's proxy. On company's network, I can access the ftp server using from ftplib import FTP ftp_host = "example.com" ftp_user = "my_ftp_user" ftp_password = "my_ftp_password" Outside company's network, I need to use proxy. I have the following details: ftp_host ftp_user ftp_password proxy_host proxy_user proxy_password I have tried: ftp = FTP(host=ftp_host, user=ftp_user, passwd=ftp_password, source_address=None, timeout=10000) But it doesn't work. Can someone help me for the same? - download ftp file from ftp and store as Pandas dataframe I want to know how to make a Pandas data frame from a csv file which is kept in a ftp folder that has user id and password. I am able to see the file in the ftp by the help of below lines import pandas as pd from ftplib import FTP with FTP("xxx.xx.xxx.xxx") as ftp:(user='xxxxx', passwd = 'xxxxx')("Home/DW") Also after on googling a bit, I tried below lines. with open("D1.csv", 'rb') as f:('RETR ' + "D1.csv", f.read) but still I am not getting how to read the "D1.csv" file to a dataframe. - "'NoneType' object has no attribute 'sendall'" when uploading file with Python ftplib I made an FTP client to transfer files to an FTP server, but it keeps showing me the same errors no matter how I change the storbinaryfunction from ftplib import FTP import os from pathlib import Path ftp = FTP()('127.0.0.1', 2121)('user', '12345')('LIST') def uploadfile(): filename = 'C:\\Users\\Raisa Arief\\Desktop\\Software dev\\Ftp client and server\\test.txt' localfile = open(filename, 'rb')('STOR %s' %os.path.basename(filename), localfile, 1024) localfile.close() uploadfile()('LIST') fetchfile() This is my error log Traceback (most recent call last): File "C:\Users\Raisa Arief\Desktop\Software dev\Ftp client and server\ftp-client.py", line 24, in <module> uploadfile() File "C:\Users\Raisa Arief\Desktop\Software dev\Ftp client and server\ftp-client.py", line 21, in uploadfile('STOR %s' %os.path.basename(filename), localfile, 1024) File "C:\Users\Raisa Arief\AppData\Local\Programs\Python\Python37\lib\ftplib.py", line 503, in storbinary self.voidcmd('TYPE I') File "C:\Users\Raisa Arief\AppData\Local\Programs\Python\Python37\lib\ftplib.py", line 277, in voidcmd self.putcmd(cmd) File "C:\Users\Raisa Arief\AppData\Local\Programs\Python\Python37\lib\ftplib.py", line 199, in putcmd self.putline(line) File "C:\Users\Raisa Arief\AppData\Local\Programs\Python\Python37\lib\ftplib.py", line 194, in putline self.sock.sendall(line.encode(self.encoding)) AttributeError: 'NoneType' object has no attribute 'sendall'
http://quabr.com/56745895/python-ftp-module-errors-clarification
CC-MAIN-2019-30
refinedweb
1,962
58.89
Twilio SendGrid eliminates many of the complexities of sending email. In a previous tutorial, you learned how to use SendGrid’s SMTP server to send emails to your users from a Python and Flask application. But how do you schedule your emails so that they are sent at a specific time? In this short tutorial you will learn how to use SendGrid’s email scheduling options, which will save you from having to implement your own background scheduling. Requirements To work on this tutorial you. Create a Flask project Find an appropriate location for your project and create a directory for it: mkdir flask-sendgrid-scheduled cd flask-sendgrid-scheduled Now create a Python virtual environment where the dependencies of the project are to be installed. For Mac and Unix users, the commands are: python3 -m venv venv source venv/bin/activate For Windows users, the commands are: python -m venv venv venv\Scripts\activate For this project, you are going to use Flask, the Flask-Mail extension and the python-dotenv package. Install them all in your virtual environment: pip install flask flask-mail python-dotenv For the purposes of this tutorial, the following Flask application will suffice. Create and open a file named app.py and enter the following code in it using your favorite text editor or IDE: import os from flask import Flask from flask_mail import Mail, Message app = Flask(__name__) app.config['MAIL_SERVER'] = 'smtp.sendgrid.net' app.config['MAIL_PORT'] = 587 app.config['MAIL_USE_TLS'] = True app.config['MAIL_USERNAME'] = 'apikey' app.config['MAIL_PASSWORD'] = os.environ.get('SENDGRID_API_KEY') mail = Mail(app) This short application configures all the email related settings so that you can send emails through your SendGrid account. Note how the email password is sourced from an environment variable. You will define this variable in the next section. SendGrid configuration To send emails with SendGrid you need to authenticate with an API key. If you don’t have one yet, log in to your SendGrid account, then click on the left sidebar, select Settings and then API Keys. Click the “Create API Key” button and provide the requested information to create your key. For detailed step-by-step instructions, follow the basic tutorial first. Once you have your key, create and open a .env file in your Flask project directory, and paste the key as follows: SENDGRID_API_KEY=<your-sendgrid-key-here> Send a test email You are now ready to send a test email. Open a Flask shell with the following command: flask shell Now you can configure and send a test email from the Python prompt. First, create a message object and set the subject, sender and recipients: from app import Message msg = Message(subject='Test Email', sender='youremail@example.com', recipients=['youremail@example.com']) Here you should replace youremail@example.com with a valid email address you have access to. If you prefer, you can use different email addresses for the sender and the recipient. You can also use multiple recipient addresses if you like. When implementing an email sending solution in production, it is recommended that you authenticate the domain you are sending email from to improve deliverability. Next, set the body of the email: msg.body = 'This is a test email.' The body attribute defines a text-only email. If you want to also provide a rich-text version in HTML, you can assign it to the msg.html attribute. Your message is now complete. You can send it as follows: from app import mail mail.send(msg) In a few seconds, the email should arrive in your inbox. The original Flask email sending tutorial has a “If Your Emails Aren’t Delivered” section that describes how to troubleshoot emails that aren’t delivered. Schedule an email You are now ready to schedule an email, from the same shell session. Create a new message: msg = Message(subject='Test Email', sender='youremail@example.com', recipients=['youremail@example.com']) msg.body = 'This is a scheduled test email.' And here comes the magic. You can use the send_at extension from SendGrid to provide a delivery time in Unix timestamp units. In the following example, the email is scheduled to be sent two minutes later: from time import time import json msg.extra_headers = {'X-SMTPAPI': json.dumps({'send_at': time() + 120})} That’s it! The X-SMTPAPI custom header is an extension supported by SendGrid’s SMTP server that allows applications to pass additional sending instructions as a JSON blob. The expression time() + 120 refers to the current time plus 120 seconds, or in other words, two minutes from now. Send the email like you did before: mail.send(msg) Nothing will happen immediately, but about two minutes later you should have the scheduled email in your inbox. The send_at option is documented in detail. In the same page you can also learn about send_each_at, which allows you to provide a list of timestamps, one per recipient in a multi-recipient email. Conclusion I hope this was a useful trick that you can add to your email sending toolbox. The X-SMTPAPI header supports several other extensions, so be sure to check its documentation to learn about more cool email features. Happy email scheduling! Miguel Grinberg is a Principal Software Engineer for Technical Content at Twilio. Reach out to him at mgrinberg [at] twilio [dot] com if you have a cool project you’d like to share on this blog!
https://www.twilio.com/blog/scheduled-emails-python-flask-twilio-sendgrid
CC-MAIN-2022-05
refinedweb
903
65.01
## Table of Contents - [Table of Contents](#table-of-contents) - [Introduction](#introduction) - [Changes since 1.76](#changes-since-176) - [Stage 1 - Information Disclosure](#stage-1---information-disclosure) * [Helpful information](#helpful-information) * [Vector sys_thr_get_ucontext](#vector-sys_thr_get_ucontext) * [Implementation](#implementation) + [Thread Creation](#thread-creation) + [Thread Suspension](#thread-suspension) + [Setup Function](#setup-function) + [Leak!](#leak) + [kASLR Defeat](#kaslr-defeat) + [Object Leak](#object-leak) + [Stack Pivot Fix](#stack-pivot-fix) + [Putting it all together](#putting-it-all-together) - [Stage 2 - Arbitrary Free](#stage-2---arbitrary-free) * [Vector 1 - sys_namedobj_create](#vector-1---sys_namedobj_create) * [Vector 2 - sys_mdbg_service](#vector-2---sys_mdbg_service) * [Vector 3 - sys_namedobj_delete](#vector-3---sys_namedobj_delete) * [Implementation](#implementation-1) + [Creating a named object](#creating-a-named-object) + [Writing a pointer to free](#writing-a-pointer-to-free) + [Free!](#free) - [Stage 3 - Heap Spray/Object Fake](#stage-3---heap-sprayobject-fake) * [Helpful information](#helpful-information-1) * [Corrupting the object](#corrupting-the-object) * [The cdev object](#the-cdev-object) + [si_name](#si_name) + [si_devsw](#si_devsw) * [The (rest of the cdev_priv) object](#the-rest-of-the-cdev_priv-object) * [The cdevsw object](#the-cdevsw-object) + [Target - d_ioctl](#target---d_ioctl) * [Spray](#spray) - [Stage 4 - Kernel Stack Pivot](#stage-4---kernel-stack-pivot) - [Stage 5 - Building the Kernel ROP Chain](#stage-5---building-the-kernel-rop-chain) * [Disabling Kernel Write Protection](#disabling-kernel-write-protection) * [Allowing RWX Memory Mapping](#allowing-rwx-memory-mapping) * [Syscall Anywhere](#syscall-anywhere) * [Allow sys_dynlib_dlsym from Anywhere](#allow-sys_dynlib_dlsym-from-anywhere) * [Install kexec system call](#install-kexec-system-call) * [Kernel Exploit Check](#kernel-exploit-check) * [Exit to Userland](#exit-to-userland) - [Stage 6 - Trigger](#stage-6---trigger) - [Stage 7 - Stabilizing the Object](#stage-7---stabilizing-the-object) - [Conclusion](#conclusion) * [Special Thanks](#special-thanks) ## Introduction **NOTE**: Let it be said that I do not condone nor endorse piracy. As such, neither the exploit or this write-up will contain anything to enable piracy on the system. Welcome to my PS4 kernel exploit write-up for 4.05. In this write-up I will provide a detailed explanation of how my public exploit implementation works, and I will break it down step by step. You can find the full source of the exploit [here](). The userland exploit will not be covered in this write-up, however I have already provided a write-up on this userland exploit in the past, so if you wish to check that out, click [here](). Let's jump into it. ## Changes since 1.76 Some notable things have changed since 1.76 firmware, most notably the change where Sony fixed the bug where we could allocate RWX memory from an unprivileged process. The process we hijack via the WebKit exploit no longer has RWX memory mapping permissions, as JiT is now properly handled by a seperate process. Calling sys_mmap() with the execute flag will succeed; however any attempt to actually execute this memory as code will result in an access violation. This means that our kernel exploit must be implemented entirely in ROP chains, no C payloads this time. Another notable change is kernel ASLR (kASLR) is now enabled past 1.76. Some newer system calls have also been implemented since 1.76. On 1.76, there were 85 custom system calls. On 4.05, we can see there are 120 custom system calls. Sony has also removed system call 0, so we can no longer call any system call we like by specifying the call number in the `rax` register. We will have to use wrappers from the libkernel.sprx module provided to us to access system calls. ## Stage 1 - Information Disclosure The first stage of the exploit is to obtain important information from the kernel, I take full advantage of this leak and use it to obtain three pieces of information. To do this, we need a kernel information disclosure/leak. This happens when kernel memory is copied out to the user, but the buffer (or at least parts of it) are not initialized, so uninitialized memory is copied to the user. This means if some function called before it stores pointers (or any data for that matter) in this memory, it will be leaked. Attackers can use this to their advantage, and use a setup function to leak specific memory to craft exploits. This is what we will do. ### Helpful information I thought I'd include this section to help those who don't know how FreeBSD address prefixes work. It's important to know how to distinguish userland and kernel pointers, and which kernel pointers are stack, heap, or .text pointers. FreeBSD uses a "limit" to define which pointers are userland and which are kernel. Userland can have addresses up to 0x7FFFFFFFFFFF. A kernel address is noted by having the 0x800000000000 bit set. In kernel, the upper 32-bits are set to an address prefix to specify what type of kernel address it is, and the lower 32-bits the rest of the virtual address. The prefixes are as follows, where *x* can be any hexadecimal digit for the address, and *y* is an arbitrary hexadecimal digit for the heap address prefix, which is randomized at boot as per kASLR: 1. 0xFFFFFFFFxxxxxxxx = Kernel .text pointer 2. 0xFFFFFF80xxxxxxxx = Kernel stack pointer 3. 0xFFFFyyyyxxxxxxxx = Kernel heap pointer ### Vector sys_thr_get_ucontext System call 634 or `sys_thr_get_ucontext()` allows you to obtain information on a given thread. The problem is, some areas of memory copied out are not initialized, and thus the function leaks memory at certain spots. This vector was patched in 4.50, as now before the buffer is used it is initialized to 0 via `bzero()`. The biggest issue with this function is it uses a **lot** of stack space, so we're very limited to what we can use for our setup function. Our setup function must subtract over 0x500 from rsp all in one go, and whatever we leak will be deep in the code. This part of the exploit took the most time and research, as it is difficult to know what you are leaking without a debugger, it takes some educated guesses and experimentation to find an appropriate object. Getting the math down perfect won't do much good either because functions can change quite significantly between firmwares, especially when it's a jump like 1.76 to 4.05. This step took me around 1-2 months in my original exploit. ### Implementation ##### Thread Creation To call sys_thr_get_ucontext() successfully, we must create a thread first, an ScePthread specifically. We can do this using a function from libkernel, ScePthreadCreate(). The signature is as follows: ```c scePthreadCreate(ScePthread *thr, const ScePthreadAttr *attr, void *(*entry)(void *), void *arg, const char *name) ``` We can call this in WebKit on 4.05 by offset 0x11570 in libkernel. Upon success, scePthreadCreate() should return a valid thread handle, and should fill the buffer passed in to `ScePthread *thr` with an ScePthread struct - we need this as it will hold the thread descriptor we will use in subsequent calls for the leak. ##### Thread Suspension Unfortunately, you cannot call `sys_thr_get_ucontext()` on an active thread, so we must also suspend the thread before we can leak anything. We can do this via `sys_thr_suspend_ucontext()`. The function signature is as follows: ```c sys_thr_suspend_ucontext(int sceTd) ``` Calling this in WebKit is simple, we just need to dereference the value at offset 0 of the buffer we provided to `scePthreadCreate()`, this is the thread descriptor for the ScePthread. ##### Setup Function We need a setup function that uses over 0x500 stack space as stated earlier, between the surface function and any functions it may call. Opening a file (a device for example) is a good place to look, because open() itself uses a lot of stack spaces, and it will also run through a bunch of other sub-routines such as filesystem functions. I found that opening the "/dev/dipsw" device driver, I was able to leak not only a good object (which I will detail more in the "Object Leak" section below), but also leak kernel .text pointers. This will help us defeat kASLR for kernel patches and gadgets in our kernel ROP chain (from now on we will abbreviate this as "kROP chain"). ##### Leak! Finally, we can call `sys_thr_get_ucontext()` to get our leak. The signature is as follows: ```c sys_thr_get_ucontext(int sceTd, char *buf) ``` We simply pass `sceTd` (the same one we got from creation and passed to sys_thr_suspend_ucontext), and pass a pointer to our buffer as the second argument. When the call returns, we will have leaked kernel information in `buf`. #### kASLR Defeat First, we want to locate the kernel's .text base address. This will be helpful for post-exploitation stuff, for example, `cr0` gadgets are typically only available in kernel .text, as userland does not directly manipulate the `cr0` register. We will want to manipulate the `cr0` register to disable kernel write protection for kernel patching. How can we do this? We can simply leak a kernel .text pointer and subtract it's slide in the .text segment to find the base address of the kernel. In our buffer containing the leaked memory, we can see at offset 0x128 we are leaking a kernel .text address. This is also convenient, because as you will see in the next section "Object Leak", it is adjacent to our object leak in memory, so it will also help us verify the integrity of our leak. Because I had a dump of 4.05 kernel already from my previous exploit, I found the slide of this .text pointer to be 0x109E96. For those curious, it is a pointer to the section in `_vn_unlock()` where the flags are checked before unlocking a vnode. A good indication that your slide is good, is the kernel .text base is always aligned to 0x4000, which is the PS4's page boundary. This means your kernel .text base address should end in '000'. #### Object Leak Secondly, we need to leak an object in the heap that we can later free() and corrupt to obtain code execution. Some objects are also much better candidates than others. The following traits make for a good object for exploitation: 1. Has function pointers. Not needed per-se, you could obtain arbitary kernel R/W and use that to corrupt some other object, but function pointers are ideal. 2. Localized. You don't want an object that is used by some other area in the kernel ideally, because this could make the exploit racey and less stable. 3. Easy to fake. We need an object that we don't need to leak a bunch of other pointers to fake when we heap spray. 4. Objects associated to things like file descriptors make for great targets! At offset 0x130, it seems we leak a `cdev_priv` object, which are objects that represent character devices in memory. It seems this object leaks from the `devfs_open()` function, which also explains our `_vn_unlock()` leak at 0x128 for the ASLR defeat. Unfortunately, not all objects we leak are going to meet the ideal criteria. This object breaks criteria 2, however luckily it meets criteria 3 and we can fake it perfectly. Nothing else will use the `dipsw` device driver while our exploit runs, meaning even though our exploit uses a global object, it is still incredibly stable. It also has a bunch of function pointers we can use to hijack code execution via the `cdev_priv->cdp_c->c_devsw` object, meeting criteria 1. We can also see that `cdev_priv` objects are allocated in `devfs_alloc()`, which is eventually called by `make_dev()`. Luckily, `cdev_priv` objects are malloc()'d and not zone allocated, so we should have no issues freeing it. [src]() ```c devfs_alloc(int flags) { struct cdev_priv *cdp; struct cdev *cdev; struct timespec ts; cdp = malloc(sizeof *cdp, M_CDEVP, M_USE_RESERVE | M_ZERO | ((flags & MAKEDEV_NOWAIT) ? M_NOWAIT : M_WAITOK)); if (cdp == NULL) return (NULL); // ... cdev = &cdp->cdp_c; // ... return (cdev); } ``` #### Stack Pivot Fix One last piece of information we need is a stack address. The reason for this is when we stack pivot to run our ROP chain in kernel mode, we need to return to userland cleanly, meaning fix the stack register (rsp) which we broke. Luckily, because kernel stacks are per-thread, we can use a stack address that we leak to calculate the new return location when the ROP chain is finished executing. I made this calculation by taking the difference of the base pointer (rbp) from where the kernel jumps to our controlled function pointer and a stack pointer that leaks. At offset 0x20 in the leak buffer we can see a stack address, I found the difference to be 0x3C0. #### Putting it all together First, we will create our thread for the `sys_thr_get_ucontext` leak, and set it so that it's program is an infinite loop gadget so it keeps running. We'll also create a ROP chain for stage 1, where we will open `/dev/dipsw` and leak, and we'll also setup the namedobj for stage 3 as well. ```javascript var createLeakThr = p.call(libkernel.add32(0x11570), leakScePThrPtr, 0, window.gadgets["infloop"], leakData, stringify("leakThr")); p.write8(namedObj, p.syscall('sys_namedobj_create', stringify("debug"), 0xDEAD, 0x5000)); ``` Then to leak, we will suspend the thread, open the `/dev/dipsw` device driver, and leak the `cdev_priv` object. ```javascript var stage1 = new rop(p, undefined); stage1.call(libkernel.add32(window.syscalls[window.syscallnames['sys_thr_suspend_ucontext']]), p.read4(p.read8(leakScePThrPtr))); stage1.call(libkernel.add32(window.syscalls[window.syscallnames['sys_open']]), stringify("/dev/dipsw"), 0, 0); stage1.saveReturnValue(targetDevFd); stage1.call(libkernel.add32(window.syscalls[window.syscallnames['sys_thr_get_ucontext']]), p.read4(p.read8(leakScePThrPtr)), leakData); stage1.run(); ``` Before continuing with the exploit for stability purposes, it's good to include an integrity check against your leak to ensure you know you're leaking the right object. The integrity check is verifying the kernel .text leak to ensure that the base address aligns with a page. This check will all at once allow us to defeat kASLR and check if the leak is valid. ```javascript // Extract leaks kernelBase = p.read8(leakData.add32(0x128)).sub32(0x109E96); objBase = p.read8(leakData.add32(0x130)); stackLeakFix = p.read8(leakData.add32(0x20)); if(kernelBase.low & 0x3FFF) { alert("Bad leak! Terminating."); return false; } ``` ## Stage 2 - Arbitrary Free A combination of design flaws led to a critical bug in the kernel, which allows an attacker to free() an arbitrary address. The issue lies in the idt hash table that Sony uses for named objects. I won't go full in-depth on the idt hash table as that's already been covered in depth by [fail0verflow's public write-up](). The main issue is Sony stores the object's type as well as flags in one field, and allow the user to specify it. This means the attacker can cause type confusion, which later leads to an arbitrary free() situation. ### Vector 1 - sys_namedobj_create By creating a named object with type = 0x5000 (or 0x4000 due to the function OR'ing the ID with 0x1000), we can cause type confusion in the idt hash table. Upon success, it returns an ID of the named object. ### Vector 2 - sys_mdbg_service When sys_mdbg_service() goes to write bytes passed in from a userland buffer at offset 0x4 to 0x8 to the named object returned, it actually writes to the wrong object due to type confusion. This allows the attacker to overwrite the pointer's lower 32-bits in the named object with any value. ### Vector 3 - sys_namedobj_delete When sys_namedobj_delete() is called, it first free()'s at offset 0 of the object before free()ing the object. Because we can contain 0x4-0x8 in the object in sys_mdbg_service via type confusion, we can control the lower 32-bits of the pointer that is free()'d here. Luckily, because this object is SUPPOSED to contain a heap pointer at offset 0, the heap address prefix is set for us. If this was not the case, this bug would not be exploitable. ### Implementation #### Creating a named object The first thing we need to do is create a named object to put in the `idt` with the malicious 0x5000 type. We can do that via the `sys_namedobj_create()` system call like this: ```javascript p.write8(namedObj, p.syscall('sys_namedobj_create', stringify("debug"), 0xDEAD, 0x5000)); ``` #### Writing a pointer to free We need to be able to write to the `no->name` field of the named object, because when we cause type confusion and delete the object, the address free()'d will be taken from the lower 32-bits of the `no->name` field. To do this, we can use the `sys_mdbg_service()` system call, like so: ```javascript p.write8(serviceBuff.add32(0x4), objBase); p.writeString(serviceBuff.add32(0x28), "debug"); // ... var stage3 = new rop(p, undefined); stage3.call(libkernel.add32(window.syscalls[window.syscallnames['sys_mdbg_service']]), 1, serviceBuff, 0); ``` #### Free! Finally, we need to trigger the free() on the address we wrote via `sys_namedobj_delete()`. Because of the object being cast to a `namedobj_dbg_t` type, it will free() the address specified at offset 0x4 (which is `no->name` in `namedobj_usr_t`. It is remarkable that this is the field that is free()'d, and that the field's upper 32-bits will already be set to the heap address prefix due to it being a pointer to the object's name. If this was not the case, we could not create a use-after-free() scenario as we would not be able to set the upper 32-bits, and this type confusion bug might otherwise be unexploitable. We can trigger the free() by simply deleting our named object via: ```javascript stage3.call(libkernel.add32(window.syscalls[window.syscallnames['sys_namedobj_delete']]), p.read8(namedObj), 0x5000); ``` ## Stage 3 - Heap Spray/Object Fake I'll detail a little bit in this section of what heap spraying is for those newer to exploitation, if you already know how it works however, feel free to skip this section. ### Helpful information Memory allocators have to be efficient, because allocating brand new memory is costly in terms of performance. To be more efficient, heap memory is typically sectioned into "chunks" (also called "buckets"), and these chunks are typically marked as "used" or "free". To save performance, if an allocation is requested, the kernel will first check to see if it can give you a chunk that's already been allocated (but marked "free") of a similar size before allocating a new chunk. The chunk sizes are powers of 2 starting at 16, meaning you can get chunks of size 0x10, 0x20, 0x40, 0x80, 0x100, 0x200, 0x400, 0x800, 0x1000, 0x2000, or 0x4000. You can find these defined in the `kmemzones` array in [FreeBSD's source file responsible for memory allocation, kern_malloc.c](). We can abuse this to control the data of our free()'d object, and thus corrupt it. The `cdev_priv` object is 0x180 in size, meaning it will use a chunk of size 0x200. So if we continousily allocate, write, and deallocate a chunk of memory of a size above 0x100 and below 0x200, eventually the next malloc() call should give you the pointer you've maintained a reference to, which means your exploit can write to this pointer, and corrupt the backing memory of the object. This is called spraying the heap. For more information on heap spraying, see [here](). ### Corrupting the object We're going to spray the heap with our fake object that we've created in userland. Our faked object will prevent the kernel from crashing by faking data we need to, and allow us to obtain code execution by hijacking a function pointer in the object. First let's take a look at the `cdev` object, which is the first member (inlined) of `cdev_priv`. For reference, each member also has it's offset in the structure. As to not make this write-up longer than it needs to be, I will only include some of the pointers that I faked. Other integers in the struct such as flags, mode, and the time stamp members I took from dumping the object live. #### The cdev object The `cdev` object is the core of the `cdev_priv` object, and contains important information about the device. Notably, it includes the name of the device, it's operations vtable, reference counts, and a linked list to previous and next `cdev_priv` devices. [src]() ``` struct cdev { void *__si_reserved; // 0x000 u_int si_flags; // 0x008 struct timespec si_atime; // 0x010 struct timespec si_ctime; // 0x020 struct timespec si_mtime; // 0x030 uid_t si_uid; // 0x040 gid_t si_gid; // 0x044 mode_t si_mode; // 0x048 struct ucred *si_cred; // 0x050 int si_drv0; // 0x058 int si_refcount; // 0x05C LIST_ENTRY(cdev) si_list; // 0x060 LIST_ENTRY(cdev) si_clone; // 0x070 LIST_HEAD(, cdev) si_children; // 0x080 LIST_ENTRY(cdev) si_siblings; // 0x088 struct cdev *si_parent; // 0x098 char *si_name; // 0x0A0 void *si_drv1, *si_drv2; // 0x0A8 struct cdevsw *si_devsw; // 0x0B8 int si_iosize_max; // 0x0C0 u_long si_usecount; // 0x0C8 u_long si_threadcount; // 0x0D0 union { struct snapdata *__sid_snapdata; } __si_u; // 0x0D8 char __si_namebuf[SPECNAMELEN + 1]; // 0x0E0 }; ``` ##### si_name The `si_name` member points to the `__si_namebuf` buffer inside the object, which is 64-bytes in length. Normally, a string will be written here, "dipsw". We're going to overwrite this though for our stack pivot, which will be the objective of the next stage. It is important to fix this post-exploit, because other processes that may want to open the "dipsw" device driver will not be able to if the name is not set properly, as it cannot be identified. ```javascript p.write8(obj_cdev_priv.add32(0x0A0), objBase.add32(0x0E0)); p.write8(obj_cdev_priv.add32(0x0E0), window.gadgets["ret"]); // New RIP value for stack pivot p.write8(obj_cdev_priv.add32(0x0F8), kchainstack); // New RSP value for stack pivot ``` ##### si_devsw `si_devsw` is our ultimate target object. It's usually a static object in kernel .text which contains function pointers for all sorts of operations with the device, including `ioctl()`, `mmap()`, `open()`, and `close()`. We can fake this pointer and make it point to an object we setup in userland, as the PS4 does not have Supervisor-Mode-Access-Prevention (SMAP). ```javascript p.write8(obj_cdev_priv.add32(0x0B8), obj_cdevsw); // Target Object ``` #### The (rest of the) cdev_priv object Originally, I spent a lot of time trying to fake the members from 0x120 to 0x180 in the object. Some of these members are difficult to fake as there are linked lists and pointers to object's that are in completely different zones. We can use a neat trick to cheat our way out of not needing to fake any of this data in our spray. I will cover this more in-depth when we cover the heap spray specifics. #### The cdevsw object The `cdevsw` object is a vtable which contains function pointers for various operations such as `open()`, `close()`, `ioctl()`, and many more. Thankfully because the "dipsw" device driver isn't used while we're exploiting, we can just pick one to overwrite (I chose `ioctl()`), trigger code execution, and fix the pointer back to the proper kernel .text location post-exploit. [src]() ``` struct cdevsw { int d_version; // 0x00 u_int d_flags; // 0x04 const char *d_name; // 0x08 d_open_t *d_open; // 0x10 d_fdopen_t *d_fdopen; // 0x18 d_close_t *d_close; // 0x20 d_read_t *d_read; // 0x28 d_write_t *d_write; // 0x30 d_ioctl_t *d_ioctl; // 0x38 d_poll_t *d_poll; // 0x40 d_mmap_t *d_mmap; // 0x48 d_strategy_t *d_strategy; // 0x50 dumper_t *d_dump; // 0x58 d_kqfilter_t *d_kqfilter; // 0x60 d_purge_t *d_purge; // 0x68 d_mmap_single_t *d_mmap_single; // 0x70 int32_t d_spare0[3]; // 0x78 void *d_spare1[3]; // 0x88 LIST_HEAD(, cdev) d_devs; // 0xA0 int d_spare2; // 0xA8 union { struct cdevsw *gianttrick; SLIST_ENTRY(cdevsw) postfree_list; } __d_giant; // 0xB0 }; ``` ##### Target - d_ioctl We're going to overwrite the `d_ioctl` address with our stack pivot gadget. When we go to call `ioctl()` on our opened device driver, the kernel will jump to our stack pivot gadget, run our kROP chain, and cleanly exit. ```javascript p.write8(obj_cdevsw.add32(0x38), libcBase.add32(0xa826f)); // d_ioctl - TARGET FUNCTION POINTER ``` This is another member we must fix post-exploit, as if anything else that uses "dipsw" (which is quite a lot of other processes) goes to perform an operation such as `open()`, it will crash the kernel because your faked object in userland will not be accessible by other processes as other processes will not have access to WebKit's mapped memory. ### Spray We can use the `ioctl()` system call to spray using a bad file descriptor. The system call will first `malloc()` memory, with the size being specified by the caller via parameter, and will `copyin()` data we control into the allocated buffer. Due to the bad file descriptor, the system call will then free the buffer, and exit in error. It's a perfect vector for a spray because we control the size, the data being copied in, and it's immediately free'd. The neat trick I mentioned earlier is using a size of 0x120 for our spray's `copyin()`. Because 0x120 is greater than 0x100 and lesser than 0x200, the chunk size matches our target object. However, because we are only specifying 0x120 for the `copyin()`, any data between 0x120-0x180 will not be initialized, meaning it will not get corrupted. No need to fake linked lists or attempt to fake pointers that we can't fake perfectly. ```javascript for(var i = 0; i < 500; i++) { stage3.call(libkernel.add32(window.syscalls[window.syscallnames['sys_ioctl']]), 0xDEADBEEF, 0x81200000, obj_cdev_priv); } stage3.run(); ``` ## Stage 4 - Kernel Stack Pivot To execute our ROP chain, we're going to need to pivot the stack to that of our ROP chain, to do this we can use a gadget in the libc module. This gadget loads rsp from [rdi + 0xF8], and pushes [rdi + 0xE0], which we can use to set RIP to the `ret` gadget. We control the `rdi` register, as rdi is going to be loaded with the buffer we pass in to the `ioctl()` call. Below is a snippet of the stack pivot gadget we will use from `sceLibcInternal.sprx`: ``` mov rsp, [rdi+0F8h] mov rcx, [rdi+0E0h] push rcx mov rcx, [rdi+60h] mov rdi, [rdi+48h] retn ``` Luckily, 0xE0 and 0xF8 fall inside the `__si_namebuf` member of `cdev`, which are members that can easily be fixed post-exploit. From `devfs_ioctl_f()` in `/fs/devfs/devfs_vnops.c` ([src]()): ```c // ... dev_relthread(dev, ref); // ... error = dsw->d_ioctl(dev, com, data, fp->f_flag, td); // ... ``` This is where the kernel will call the function pointer that we control. Notice `rdi` is loaded with `dev`, which is the `cdev` object we control. We can easily implement this stack pivot in our object fake, like so: ```javascript p.write8(obj_cdev_priv.add32(0x0E0), window.gadgets["ret"]); // New RIP value for stack pivot // ... p.write8(obj_cdev_priv.add32(0x0F8), kchainstack); // New RSP value for stack pivot // ... p.write8(obj_cdevsw.add32(0x38), libcBase.add32(0xa826f)); // d_ioctl - TARGET FUNCTION POINTER ``` Here is an [excellent resource]() on stack pivoting and how it works for those interested. ## Stage 5 - Building the Kernel ROP Chain Our kROP or kernel ROP chain is going to be a chain of instructions that we run from supervisor mode. We want to accomplish a few things with this chain. First, we want to apply a few kernel patches to allow us to run payloads and escalate our privileges. Finally before returning we'll want to fix the object to stabilize the kernel. ### Disabling Kernel Write Protection We have to disable the write protection on the kernel .text before we can make any patches. We can use the `mov cr0, rax` gadget to do this. The cr0 register contains various control flags for the CPU, one of which is the "WP" bit at bit 16. By unsetting this, we can write to read-only memory pages in ring0, such as kernel .text. ```javascript // Disable kernel write protection kchain.push(window.gadgets["pop rax"]); // rax = 0x80040033; kchain.push(0x80040033); kchain.push(kernelBase.add32(0x389339)); // mov cr0, rax; ``` For more information on the `cr0` control register, see the [OSDev wiki](). ### Allowing RWX Memory Mapping We want to be able to run C payloads and run our loader, so we need to patch the `mmap` system call to allow us to set the execute bit to map RWX memory pages. ```c seg000:FFFFFFFFA1824FD9 mov [rbp+var_61], 33h seg000:FFFFFFFFA1824FDD mov r15b, 33h ``` These are the maximum allowed permission bits the user is allowed to pass to `sys_mmap()` when mapping memory. By changing 0x33 in both of these move instructions to 0x37, it will allow us to specify the execute bit successfully. ```javascript // Patch sys_mmap: Allow RWX (read-write-execute) mapping var kernel_mmap_patch = new int64(0x37B74137, 0x3145C031); kchain.write64(kernelBase.add32(0x31CFDC), kernel_mmap_patch); ``` ### Syscall Anywhere Sony checks and ensures that a syscall instruction can only be issued from the memory range of the libkernel.sprx module. They also check the instructions around it to ensure it keeps the format if a typical wrapper. These patches will allow us to use the `syscall` instruction in our ROP chain, which will be important for fixing the object later. The first patch allows kernel processes to initiate `syscall` instructions. This is because processes have a `p_dynlib` member that specifies if libkernel has been loaded. This patch makes certain that any module can call `syscall` even if the libkernel module has not yet been loaded. ``` seg000:FFFFFFFFA15F5095 mov ecx, 0FFFFFFFFh ``` Which is patched to ``` mov ecx, 0x0 ``` The second patch forces a jump below check to fail, so that our first patch is put to use. ``` seg000:FFFFFFFFA15F50BB cmp rdx, [rax+0E0h] seg000:FFFFFFFFA15F50C2 jb short loc_FFFFFFFFA15F50D7 ``` Which is patched to ``` seg000:FFFFFFFFA15F50BB jmp loc_FFFFFFFFA15F513D ``` This will allow WebKit to issue a `syscall` instruction directly. ```javascript // Patch syscall: syscall instruction allowed anywhere var kernel_syscall_patch1 = new int64(0x0000000, 0xF8858B48); var kernel_syscall_patch2 = new int64(0x0007DE9, 0x72909000); kchain.write64(kernelBase.add32(0xED096), kernel_syscall_patch1); kchain.write64(kernelBase.add32(0xED0BB), kernel_syscall_patch2); ``` ### Allow sys_dynlib_dlsym from Anywhere Our payloads are going to need to be able to resolve userland symbols, so these patches are essential for running payloads. The first patch patches a check against a member that Sony added to the `proc` structure that defines if a process can call `sys_dynlib_dlsym()`. ```c seg000:FFFFFFFFA1652ACF mov rdi, [rbx+8] seg000:FFFFFFFFA1652AD3 call sub_FFFFFFFFA15E6930 seg000:FFFFFFFFA1652AD8 cmp eax, 4000000h seg000:FFFFFFFFA1652ADD jb loc_FFFFFFFFA1652D8B ``` The second patch forces a function that checks if the process should have dynamic resolving to always return 0. ```c seg000:FFFFFFFFA15EADA0 sub_FFFFFFFFA15EADA0 proc near ; CODE XREF: sys_dynlib_dlsym+F9↓p seg000:FFFFFFFFA15EADA0 ; sys_dynlib_get_info+1FC↓p ... seg000:FFFFFFFFA15EADA0 mov rax, gs:0 seg000:FFFFFFFFA15EADA9 mov rax, [rax+8] seg000:FFFFFFFFA15EADAD mov rcx, [rax+340h] seg000:FFFFFFFFA15EADB4 mov eax, 1 seg000:FFFFFFFFA15EADB9 test rcx, rcx seg000:FFFFFFFFA15EADBC jz short locret_FFFFFFFFA15EADCA seg000:FFFFFFFFA15EADBE test [rcx+0F0h], edi seg000:FFFFFFFFA15EADC4 setnz al seg000:FFFFFFFFA15EADC7 movzx eax, al seg000:FFFFFFFFA15EADCA seg000:FFFFFFFFA15EADCA locret_FFFFFFFFA15EADCA: ; CODE XREF: sub_FFFFFFFFA15EADA0+1C↑j seg000:FFFFFFFFA15EADCA retn seg000:FFFFFFFFA15EADCA sub_FFFFFFFFA15EADA0 endp ``` This is patched to simply: ``` seg000:FFFFFFFFA15EADA0 xor eax, eax seg000:FFFFFFFFA15EADA2 ret [nop x5] ``` Patching both of these checks should allow any process, even WebKit, to dynamically resolve symbols. ```javascript // Patch sys_dynlib_dlsym: Allow from anywhere var kernel_dlsym_patch1 = new int64(0x000000E9, 0x8B489000); var kernel_dlsym_patch2 = new int64(0x90C3C031, 0x90909090); kchain.write64(kernelBase.add32(0x14AADD), kernel_dlsym_patch1); kchain.write64(kernelBase.add32(0xE2DA0), kernel_dlsym_patch2); ``` ### Install kexec system call Our goal with this patch is to create our own syscall under syscall #11. This syscall will allow us to execute arbitrary code in supervisor mode (ring0). It will only have two arguments, the first being a pointer to the function we want to execute. The second argument will be `uap` to pass arguments to the function. This code creates an entry in the `sysent` table. ``` // Add custom sys_exec() call to execute arbitrary code as kernel var kernel_exec_param = new int64(0, 1); kchain.write64(kernelBase.add32(0xF179A0), 0x02); kchain.write64(kernelBase.add32(0xF179A8), kernelBase.add32(0x65750)); kchain.write64(kernelBase.add32(0xF179C8), kernel_exec_param); ``` ### Kernel Exploit Check We don't want the kernel exploit to run more than once, as once we install our custom `kexec()` system call we don't need to. To do this, I decided to patch the privilege check out of the `sys_setuid()` system call, so we will know if the kernel has been patched if we can successfully call `setuid(0)` from WebKit. ``` seg000:FFFFFFFFA158DBB0 call priv_check_cred ``` To easily bypass this check, I decided to just change it to move 0 into the `rax` register. The opcodes happened to be perfect size. ``` seg000:FFFFFFFFA158DBB0 mov eax, 0 ``` As you can guess, this also doubles as a partial privilege escalation. ``` // Add kexploit check so we don't run kexploit more than once (also doubles as privilege escalation) var kexploit_check_patch = new int64(0x000000B8, 0x85C38900); kchain.write64(kernelBase.add32(0x85BB0), kexploit_check_patch); ``` ### Exit to Userland Finally, we want to exit our kROP chain to prevent crashing the kernel. To do this, we need to restore RSP to it's value before the stack pivot. As stated earlier, we have a stack leak at 0x20 in the leak buffer, and it's 0x3C0 off from a good RSP value to return to. These instructions will apply the RSP fix by popping the `stack leak + 0x3C0` into the RSP register, and when the final gadget `ret`'s it will return to proper execution. ``` // Exit kernel ROP chain kchain.push(window.gadgets["pop rax"]); kchain.push(stackLeakFix.add32(0x3C0)); kchain.push(window.gadgets["pop rcx"]); kchain.push(window.gadgets["pop rsp"]); kchain.push(window.gadgets["push rax; jmp rcx"]); ``` ## Stage 6 - Trigger Now we need to trigger the exploit by calling the `ioctl()` system call on our object. The second parameter (cmd) does not matter because the handler will never be reached as we have overwritten it with our stack pivot gadget. ```javascript p.syscall('sys_ioctl', p.read8(targetDevFd), 0x81200000, obj_cdev_priv); ``` ## Stage 7 - Stabilizing the Object Finally, we need to ensure the object doesn't get corrupted. The `cdev_priv` object is global, meaning other processes will go to use it at some point. Since we free()'d it's backing memory, some other allocation could steal this pointer and overwrite our faked object, causing unpredictable crashes. To avoid this, we can call `malloc()` in the kernel a bunch of times to try to obtain this pointer, essentially we are performing a second heap spray, but if we find the address we want we are keeping the allocation. Since the kernel payload needs to retrieve the address of the object to write to, we will store it at an absolute address, 0xDEAD0000. We will also use this mapping to execute our payload. ```javascript var baseAddressExecute = new int64(0xDEAD0000, 0); var exploitExecuteAddress = p.syscall("sys_mmap", baseAddressExecute, 0x10000, 7, 0x1000, -1, 0); var executeSegment = new memory(p, exploitExecuteAddress); var objBaseStore = executeSegment.allocate(0x8); var shellcode = executeSegment.allocate(0x200); p.write8(objBaseStore, objBase); ``` We will also apply a few of our other patches to the object, such as restoring the object's name and original `si_devsw` pointer. [src]() ```c int main(void) { int i; void *addr; uint8_t *ptrKernel; int (*printf)(const char *fmt, ...) = NULL; void *(*malloc)(unsigned long size, void *type, int flags) = NULL; void (*free)(void *addr, void *type) = NULL; // Get kbase and resolve kernel symbols ptrKernel = (uint8_t *)(rdmsr(0xc0000082) - KERN_XFAST_SYSCALL); malloc = (void *)&ptrKernel[KERN_MALLOC]; free = (void *)&ptrKernel[KERN_FREE]; printf = (void *)&ptrKernel[KERN_PRINTF]; uint8_t *objBase = (uint8_t *)(*(uint64_t *)(0xDEAD0000)); // Fix stuff in object that's corrupted by exploit *(uint64_t *)(objBase + 0x0E0) = 0x7773706964; *(uint64_t *)(objBase + 0x0F0) = 0; *(uint64_t *)(objBase + 0x0F8) = 0; // Malloc so object doesn't get smashed for (i = 0; i < 512; i++) { addr = malloc(0x180, &ptrKernel[0x133F680], 0x02); printf("Alloc: 0x%lx\n", addr); if (addr == (void *)objBase) break; free(addr, &ptrKernel[0x133F680]); } printf("Object Dump 0x%lx\n", objBase); for (i = 0; i < 0x180; i += 8) printf("<Debug> Object + 0x%03x: 0x%lx\n", i, *(uint64_t *)(*(uint64_t *)(0xDEAD0000) + i)); // EE :) return 0; } ``` This payload was then compiled and converted into shellcode which is executed via our `kexec()` system call we installed earlier. ```javascript var stage7 = new rop(p, undefined); p.write4(shellcode.add32(0x00000000), 0x00000be9); p.write4(shellcode.add32(0x00000004), 0x90909000); p.write4(shellcode.add32(0x00000008), 0x90909090); // ... [ommited for readability] stage7.push(window.gadgets["pop rax"]); stage7.push(11); stage7.push(window.gadgets["pop rdi"]); stage7.push(shellcode); stage7.push(libkernel.add32(0x29CA)); // "syscall" gadget stage7.run(); ``` # Conclusion This exploit is quite an interesting exploit, though it did require a lot of guessing and would have been a lot more fun to work with should I have had a proper kernel debugger. To get a working object can be a long a grueling process depending on the leak you're using. Overall this exploit is incredibly stable, in fact I ran it over 30 times and WebKit nor the Kernel crashed once. I learned a lot from implementing it, and I hope I helped others like myself who are interested in exploitation and hopefully others will learn some things from this write-up. ## Special Thanks * CTurt * Flatz * qwertyoruiopz * other anonymous contributors ## Mistakes? See any issues I glanced over? Open an issue or send me a tweet and let me know :) [Table of contents generated with markdown-toc.]()
https://www.exploit-db.com/papers/44232/
CC-MAIN-2018-13
refinedweb
6,094
60.95
Created on 2015-06-14 07:50 by pfalcon, last changed 2015-06-27 04:30 by martin.panter. This issue is now closed. This issue was brought is somewhat sporadic manner on python-tulip mailing list, hence this ticket. The discussion on the ML: (all other messages below threaded from this) Summary of arguments: 1. This would make such async_write() (a tentative name) symmetrical in usage with read() method (i.e. be a coroutine to be used with "yield from"/"await"), which certainly reduce user confusion and will help novices to learn/use asyncio. 2. write() method is described (by transitively referring to WriteTransport.write()) as "This method does not block; it buffers the data and arranges for it to be sent out asynchronously." Such description implies requirement of unlimited data buffering. E.g., being fed with 1TB of data, it still must buffer it. Bufferings of such size can't/won't work in practice - they only will lead to excessive swapping and/or termination due to out of memory conditions. Thus, providing only synchronous high-level write operation goes against basic system reliability/security principles. 3. The whole concept of synchronous write in an asynchronous I/O framework stems from: 1) the way it was done in some pre-existing Python async I/O frameworks ("pre-existing" means brought up with older versions of Python and based on concepts available at that time; many people use word "legacy" in such contexts); 2) on PEP3153, which essentially captures ideas used in the aforementioned pre-existing Python frameworks. PEP3153 was rejected; it also contains some "interesting" claims like "Considered API alternatives - Generators as producers - [...] - nobody produced actually working code demonstrating how they could be used." That wasn't true at the time of PEP writing ( , 2008, 2009), and asyncio is actually *the* framework which uses generators as producers. asyncio also made a very honorable step of uniting generators/coroutine and Transport paradigm - note that as PEP3153 shows, Transport proponents contrasted it with coroutine-based design. But asyncio also blocked (in both senses) high-level I/O on Transport paradigm. What I'm arguing is not that Transports are good or bad, but that there should be a way to consistently use coroutine paradigm for I/O in asyncio - for people who may appreciate it. This will also enable alternative implementations of asyncio subsets without Transport layer, with less code size, and thus more suitable for constrained environments. Proposed change is to add following to asyncio.StreamWriter implementation: @coroutine def async_write(self, data): self.write(data) I.e. default implementation will be just coroutine version of synchronous write() method. The messages linked above discuss alternative implementations (which are really interesting for complete alternative implementations of asyncio). The above changes are implemented in MicroPython's uasyncio package, which asyncio subset for memory-constrained systems. Thanks for your consideration! Paul, you have brought this up many times before, and you have been refuted each time. Reading and writing just aren't symmetric operations. If you need awrite(), it's a two-line helper function you can easily write yourself. No, I haven't brought this "many times before". Discussion on the mailing list last week was first time I brought up *this* issue. But it's indeed not the first time I provide feedback regarding various aspects of asyncio, so I wonder if this issue was closed because of "this" in "this issue" or because of "many times before"... > If you need awrite(), it's a two-line helper function you can easily write yourself. Yes, and I have it, but I don't see how that helps anybody else. If it's 2-liner, why don't you want to add it? Can there please be discussion of the arguments provided in the description? Most people actually are better off with just write(), and an API with more choices is not necessarily better. I'm sure this has come up before, I could've sworn it was you, sorry if it wasn't. Here's one reason why I don't like your proposed API. The read() and write() calls *aren't* symmetric: When you call read(), you are interested in the return value. If you forget the "yield from" (or 'async') your use of the return value will most likely raise an exception immediately, so you will notice (and pinpoint) the bug in your code instantly. But with write() there is no natural return value, so if write() was a coroutine, the common mistake (amongst beginners) of forgetting the "yield from" or 'async' would be much harder to debug -- your program happily proceeds, but now it's likely hung, waiting for a response that it won't get (because the other side didn't get what you meant to write) or perhaps you get a cryptic error (because the other side saw the thing you wrote next). Thanks for the response. > and an API with more choices is not necessarily better. I certainly agree, and that's why I try to implement MicroPython's uasyncio solely in terms of coroutines, without Futures and Transports. But I of course can't argue for dropping Transports in asyncio, so the only argument I'm left with is consistency of API (letting use coroutines everywhere). > I'm sure this has come up before, I could've sworn it was you, sorry if it wasn't. No, I brought issue of Futures dependency (yes, that was wide and long discussion), then about yielding a coroutine instance as a way to schedule it. But during the time I'm on python-tulip, I didn't remember someone bringing up issue of asymmetry between read & write, but I would imagine someone would have done that before me. (Which might hint that the issue exists ;-) ). > the common mistake (amongst beginners) of forgetting the "yield from" or 'async' would be much harder to debug So, this is not first time you provide this argument for different cases, one would think that this pin-points pretty serious flaw in the language and it's priority to find a solution for it, and yet PEP3152 which does exactly that was rejected, so maybe it's not that *serious*. Indeed, it's common sense that it's possible to make a hard to debug mistake in any program, and in any concurrent program (doesn't matter of which paradigm) it's order of magnitude easier to do one and harder to debug respectively. But asyncio does have tools to debug such issue. And one would think that easiest way to preclude mistake is to avoid inconsistencies in the API. I know there's a very find balance between all the arguments, and only you can know where it lies, but kindly accept external feedback that the above explanation (syntax is more prone to mistakes) looks like universal objectivized rejection in rather subjective cases. What saddens me here is that this decision puts pretty high lower bound for asyncio memory usage (and I tried hard to prove that asyncio is suitable paradigm even for smaller, IoT-class devices). It's also hard to argue that Python isn't worse than Go, Rust and other new kids on the block - because indeed, how one can argue that, if even the language author uses argument "language syntax, while exists, isn't good enough to do the obvious things". Good pontificating, Paul.
https://bugs.python.org/issue24449
CC-MAIN-2020-50
refinedweb
1,237
59.74
JAX-RS is just an API! Last modified: July 9, 2019 1. Overview The REST paradigm has been around for quite a few years now and it's still getting a lot of attention. A RESTful API can be implemented in Java in a number of ways: you can use Spring, JAX-RS, or you might just write your own bare servlets if you're good and brave enough. All you need is the ability to expose HTTP methods – the rest is all about how you organize them and how you guide the client when making calls to your API. As you can make out from the title, this article will cover JAX-RS. But what does “just an API” mean? It means that the focus here is on clarifying the confusion between JAX-RS and its implementations and on offering an example of what a proper JAX-RS webapp looks like. 2. Inclusion in Java EE JAX-RS is nothing more than a specification, a set of interfaces and annotations offered by Java EE. And then, of course, we have the implementations; some of the more well known are RESTEasy and Jersey. Also, if you ever decide to build a JEE-compliant application server, the guys from Oracle will tell you that, among many other things, your server should provide a JAX-RS implementation for the deployed apps to use. That's why it's called Java Enterprise Edition Platform. Another good example of specification and implementation is JPA and Hibernate. 2.1. Lightweight Wars So how does all this help us, the developers? The help is in that our deployables can and should be very thin, letting the application server provide the needed libraries. This applies when developing a RESTful API as well: the final artifact should not contain any information about the used JAX-RS implementation. Sure, we can provide the implementation (here‘s a tutorial for RESTeasy). But then we cannot call our application “Java EE app” anymore. If tomorrow someone comes and says “Ok, time to switch to Glassfish or Payara, JBoss became too expensive!“, we might be able to do it, but it won't be an easy job. If we provide our own implementation we have to make sure the server knows to exclude its own – this usually happens by having a proprietary XML file inside the deployable. Needless to say, such a file should contain all sorts of tags and instructions that nobody knows nothing about, except the developers who left the company three years ago. 2.2. Always Know Your Server We said so far that we should take advantage of the platform that we're offered. Before deciding on a server to use, we should see what JAX-RS implementation (name, vendor, version and known bugs) it provides, at least for Production environments. For instance, Glassfish comes with Jersey, while Wildfly or Jboss come with RESTEasy. This, of course, means a little time spent on research, but it's supposed to be done only once, at the beginning of the project or when migrating it to another server. 3. An Example If you want to start playing with JAX-RS, the shortest path is: have a Maven webapp project with the following dependency in the pom.xml: <dependency> <groupId>javax</groupId> <artifactId>javaee-api</artifactId> <version>7.0</version> <scope>provided</scope> </dependency> We're using JavaEE 7 since there are already plenty of application servers implementing it. That API jar contains the annotations that you need to use, located in package javax.ws.rs. Why is the scope “provided”? Because this jar doesn't need to be in the final build either – we need it at compile time and it is provided by the server for the run time. After the dependency is added, we first have to write the entry class: an empty class which extends javax.ws.rs.core.Application and is annotated with javax.ws.rs.ApplicationPath: @ApplicationPath("/api") public class RestApplication extends Application { } We defined the entry path as being /api. Whatever other paths we declare for our resources, they will be prefixed with /api. Next, let's see a resource: @Path("/notifications") public class NotificationsResource { @GET @Path("/ping") public Response ping() { return Response.ok().entity("Service online").build(); } @GET @Path("/get/{id}") @Produces(MediaType.APPLICATION_JSON) public Response getNotification(@PathParam("id") int id) { return Response.ok() .entity(new Notification(id, "john", "test notification")) .build(); } @POST @Path("/post/") @Consumes(MediaType.APPLICATION_JSON) @Produces(MediaType.APPLICATION_JSON) public Response postNotification(Notification notification) { return Response.status(201).entity(notification).build(); } } We have a simple ping endpoint to call and check if our app is running, a GET and a POST for a Notification (this is just a POJO with attributes plus getters and setters). Deploy this war on any application server implementing JEE7 and the following commands will work: curl curl curl -X POST -d '{"id":23,"text":"lorem ipsum","username":"johana"}' --header "Content-Type:application/json" Where simple-jaxrs-ex is the context-root of the webapp. This was tested with Glassfish 4.1.0 and Wildfly 9.0.1.Final. Please note that the last two commands won't work with Glassfish 4.1.1, because of this bug. It is apparently a known issue in this Glassfish version, regarding the serialization of JSON (if you have to use this server version, you'll have to manage JSON marshaling on your own) 4. Conclusion At the end of this article, just keep in mind that JAX-RS is a powerful API and most (if not all) of the stuff that you need is already implemented by your web server. No need to turn your deployable into an unmanageable pile of libraries. This write-up presents a simple example and things might get more complicated. For instance, you might want to write your own marshalers. When that's needed, look for tutorials that solve your problem with JAX-RS, not with Jersey, Resteasy or other concrete implementation. It's very likely that your problem can be solved with one or two annotations.
https://www.baeldung.com/jax-rs-spec-and-implementations
CC-MAIN-2020-40
refinedweb
1,013
54.52
- New MVC 5 Project Templates (C# & VB) - Bootstrap 3 Support - MVC Scaffolding automatically enhanced with Wijmo - Wijmo-enhanced EditorTemplates like Date Picker, Numeric Input, Slider and more Getting Started Let's take a look at getting started with our MVC 5 Tools. The first thing you will need to do is install Studio for ASP.NET Wijmo. File > New Project After installing Studio for ASP.NET Wijmo, create a New Project in Visual Studio. Under Web, you will find ASP.NET MVC 5 Wijmo Application. Now, let's just run the app and see what it looks like out of the box. It might look familiar to you. That's because we based it on Microsoft's built-in template. We just modified the markup and CSS to our liking and added Wijmo of course. Add a Model Next, let's build a little ToDo List App using the Wijmo("IntSlider")] public int Priority { get; set; } [Range(0, 1000000)] public decimal Cost { get; set; } [DataType(DataType.MultilineText)] public string Summary { get; set; } public bool Done { get; set; } [Display(Name = "Date Completed")] public DateTime? DoneAt { get; set; } public ICollection<TahDoItem> TahDoItems { get; set; } } public class TahDoItem { [Editable(false)] public int Id { get; set; } [Required] public string Title { get; set; } [Display(Name = "Date Created")] public DateTime? CreatedAt { get; set; } [Range(0, 5), UIHint("IntSlider")] public int Priority { get; set; } [DataType(DataType.MultilineText)] public string Note { get; set; } public int TahDoListId { get; set; } public TahDoList TahDoList {DoList Url and the application will create the Database for your model and server up an empty result set. Start adding items and your List View will look like this: In the Create or Edit Views you will see that there are just standard EditorFor Helpers. But inside the project we have installed custom EditorTemplates. So when certain types are used (DateTime, Numeric, etc), custom editors will render. Take a look at how nice the forms are when these custom editors are used. The results are a solid starting point for a MVC Tools as much as we do!
https://www.grapecity.com/en/blogs/introducing-our-new-mvc-5-tools
CC-MAIN-2018-30
refinedweb
342
63.7
Syntax-highlight Python code on screen while running 1.How to get Syntax-highlight Python code on screen while running, e.g. this: import random print 'hello' print random.randint(42) will turn into this Syntax-highlight Python code on screen while running? Another questions: Is it possible to control(copy&paste ect) editor with code's function? How to read a .py file into a string or list? I do not understand your question on syntax highlighting however, random.randint()required two parameters, not just one. For the other questions... The clipboardmodule enables your scripts to copy and paste and the editormodule enables them to get and set Editor content. with open('python_code.py') as in_file: python_code = in_file.read() # copies file contents to a str Many thanks. I mean markdown example, is there any simple method to get syntax hightlight like this: ```python print 'hello' ``` i think you may need to be a little more descriptive... tell us what you are trying to achieve. - do you want the html source code of syntax highlighted Python? - or, you want to display highlighted Python within pythonista, but outside of the editor? - do you want this "on screen" as in in the console, or in a window? - when you say while running, do you mean you are trying to debug code, and would like a highlighted line source debugger? The first three issues, my answers are yes, which method is the simplest? how to make it? Thanks a lot. Here's an example for showing syntax-highlighted code (that is in the clipboard): import ui from pygments import highlight from pygments.lexers import PythonLexer from pygments.formatters import HtmlFormatter from pygments.styles import get_style_by_name import clipboard # Syntax-highlight code in clipboard: code = clipboard.get() html_formatter = HtmlFormatter(style='colorful') highlighted_code = highlight(code, PythonLexer(), html_formatter) styles = html_formatter.get_style_defs() html = '<html><head><style>%s</style></head><body>%s</body></html>' % (styles, highlighted_code) # Present result in a WebView: webview = ui.WebView(frame=(0, 0, 500, 500)) webview.scales_page_to_fit = False webview.load_html(html) webview.present('sheet') A thousand thanks.
https://forum.omz-software.com/topic/1950/syntax-highlight-python-code-on-screen-while-running
CC-MAIN-2021-49
refinedweb
342
53.27
#include "SDL.h" int SDL_ConvertAudio(SDL_AudioCVT *cvt); SDL_ConvertAudio takes one parameter, cvt, which was previously initilized. Initilizing a SDL_AudioCVT is a two step process. First of all, the structure must be passed to SDL_BuildAudioCVT along with source and destination format parameters. Secondly, the cvt->buf and cvt->len fields must be setup. cvt->buf should point to the audio data and cvt->len should be set to the length of the audio data in bytes. Remember, the length of the buffer pointed to by buf show be len*len_mult bytes in length. Once the SDL_AudioCVTstructure is initilized then we can pass it to SDL_ConvertAudio, which will convert the audio data pointer to by cvt->buf. If SDL_ConvertAudio returned 0 then the conversion was completed successfully, otherwise -1 is returned. If the conversion completed successfully then the converted audio data can be read from cvt->buf. The amount of valid, converted, audio data in the buffer is equal to cvt->len*cvt->len_ratio. /* Converting some WAV data to hardware format */ void my_audio_callback(void *userdata, Uint8 *stream, int len);: %s ", SDL_GetError()); exit(-1); } free(desired); /* Load the test.wav */ if( SDL_LoadWAV("test.wav", &wav_spec, &wav_buf, &wav_len) == NULL ){ fprintf(stderr, "Could not open test.wav: %s ", */ SDL_FreeWAV(wav_buf); /* And now we're ready to convert */ SDL_ConvertAudio(&wav_cvt); /* do whatever */ . . . . SDL_BuildAudioCVT, SDL_AudioCVT
https://www.commandlinux.com/man-page/man3/SDL_ConvertAudio.3.html
CC-MAIN-2017-30
refinedweb
218
57.37
NAME jail, jail_get, jail_set, jail_remove, jail_attach - create and manage system jails LIBRARY Standard C Library (libc, -lc) SYNOPSIS #include <sys/param.h> #include <sys/jail.h> int jail(struct jail *jail); int jail_attach(int jid); int jail_remove(int jid); #include <sys/uio.h> int jail_get(struct iovec *iov, u_int niov, int flags); int jail_set(struct iovec *iov, u_int niov, int flags);. This is equivalent to the jail_set() system call (see below), with the parameters path, host.hostname, name, ip4.addr, and ip6.addr, and with the JAIL_ATTACH flag. The jail_set() system call creates a new jail, or modifies an existing one, and optionally locks the current process in it. Jail parameters are passed as an array of name-value pairs in the array iov, containing niov elements. Parameter names are a null-terminated string, and values may be strings, integers, or other arbitrary data. Some parameters are boolean, and do not have a value (their length is zero) but are set by the name alone with or without a “no” prefix, e.g. persist or nopersist. Any parameters not set will be given default values, generally based on the current environment. Jails have a set of core parameters, and modules can add their own jail parameters. The current set of available parameters, and their formats, can be retrieved via the security.jail.param sysctl MIB entry. Notable parameters include those mentioned in the jail() description above, as well as jid and name, which identify the jail being created or modified. See jail(8) for more information on the core jail parameters. The flags arguments consists of one or more of the following flags: JAIL_CREATE Create a new jail. If a jid or name parameters exists, they must not refer to an existing jail. JAIL_UPDATE Modify an existing jail. One of the jid or name parameters must exist, and must refer to an existing jail. If both JAIL_CREATE and JAIL_UPDATE are set, a jail will be created if it does not yet exist, and modified if it does exist. JAIL_ATTACH In addition to creating or modifying the jail, attach the current process to it, as with the jail_attach() system call. JAIL_DYING Allow setting a jail that is in the process of being removed. The jail_get() system call retrieves jail parameters, using the same name-value list as jail_set() in the iov and niov arguments. The jail to read can be specified by either jid or name by including those parameters in the list. If they are included but are not intended to be the search key, they should be cleared (zero and the empty string respectively). The special parameter lastjid can be used to retrieve a list of all jails. It will fetch the jail with the jid above and closest to the passed value. The first jail (usually but not always jid 1) can be found by passing a lastjid of zero. The flags arguments consists of one or more following flags: JAIL_DYING Allow getting a jail that is in the process of being removed. The jail_attach() system call attaches the current process to an existing jail, identified by jid. The jail_remove() system call removes the jail identified by jid. It will kill all processes belonging to the jail, and remove any children of that jail. RETURN VALUES If successful, jail(), jail_set(), and jail_get() return a non-negative integer, termed the jail identifier (JID). They return -1 on failure, and set errno to indicate the error. The jail_attach() and jail_remove() functions return connections name currently set for the prison for jailed processes. ERRORS The jail() system call will fail if: [EPERM] This process is not allowed to create a jail, either because it is not the super-user, or because it would exceed the jail’s children.max limit. [EFAULT] jail points to an address outside the allocated address space of the process. [EINVAL] The version number of the argument is not correct. [EAGAIN] No free JID could be found. The jail_set() system call will fail if: [EPERM] This process is not allowed to create a jail, either because it is not the super-user, or because it would exceed the jail’s children.max limit. [EPERM] A jail parameter was set to a less restrictive value then the current environment. [EFAULT] Iov, or one of the addresses contained within it, points to an address outside the allocated address space of the process. [ENOENT] The jail referred to by a jid or name parameter does not exist, and the JAIL_CREATE flag is not set. [ENOENT] The jail referred to by a jid is not accessible by the process, because the process is in a different jail. [EEXIST] The jail referred to by a jid or name parameter exists, and the JAIL_UPDATE flag is not set. [EINVAL] A supplied parameter is the wrong size. [EINVAL] A supplied parameter is out of range. [EINVAL] A supplied string parameter is not null-terminated. [EINVAL] A supplied parameter name does not match any known parameters. [EINVAL] One of the JAIL_CREATE or JAIL_UPDATE flags is not set. [ENAMETOOLONG] A supplied string parameter is longer than allowed. [EAGAIN] There are no jail IDs left. The jail_get() system call will fail if: [EFAULT] Iov, or one of the addresses contained within it, points to an address outside the allocated address space of the process. [ENOENT] The jail referred to by a jid or name parameter does not exist. [ENOENT] The jail referred to by a jid is not accessible by the process, because the process is in a different jail. [ENOENT] The lastjid parameter is greater than the highest current jail ID. [EINVAL] A supplied parameter is the wrong size. [EINVAL] A supplied parameter name does not match any known parameters. The jail_attach() and jail_remove() system calls will fail if: [EINVAL] The jail specified by jid does not exist. Further jail(), jail_set(), and jail_attach() call chroot(2) internally, so it can fail for all the same reasons. Please consult the chroot(2) manual page for details. SEE ALSO chdir(2), chroot(2), jail(8) HISTORY The jail() system call appeared in FreeBSD 4.0. The jail_attach() system call appeared in FreeBSD 5.1. The jail_set(), jail_get(), and jail_remove() system calls appeared in FreeBSD 8.0. AUTHORS The jail feature was written by Poul-Henning Kamp for R&D Associates “” who contributed it to FreeBSD. James Gritton added the extensible jail parameters and hierarchical jails.
http://manpages.ubuntu.com/manpages/maverick/man2/jail.2freebsd.html
CC-MAIN-2014-35
refinedweb
1,067
65.01
In This tutorial, You will learn how to sort object arrays with keys or properties in typescript. The object holds the key and value of a real entity. Object Array is a list of objects. There are multiple ways we can sort object arrays using Sort Comparator. Array object soring with comparison logic Let’s declare an array of objects where each object holds id and name properties Following is an example of Sorting an array of objects in ascending order with the id property var countries = [ { id: 1, name: 'USA' }, { id: 2, name: 'India' }, { id: 3, name: 'Canada' } ]; let sortedCountries = countries.sort((first, second) => 0 - (first.id > second.id ? -1 : 1)); console.log(sortedCountries); And the output: [ { id: 1, name: 'Canada' }, { id: 18, name: 'USA' }, { id: 21, name: 'India' } ] here is an example array object ordered in descending. let descendingCountries = countries.sort((first, second) => 0 - (first.id > second.id ? 1 : -1)); Here is the output [ { id: 21, name: 'India' }, { id: 18, name: 'USA' }, { id: 1, name: 'Canada' } ] lodash sortBy objects with key strings Lodash library provides many utility functions. sortBy is one of the methods for sorting an array. First, install lodash npm or include lodash CDN library into your application. syntax sortBy(array or objects,[iteratekeys]) Input is an array of objects. iteratekeys are keys or properties that enable to sort It returns a sorted array Following is an `example for sorting objects with key values of an object array in ascending order. - import all utilities using the import keyword - an animal object has key-id, name, and values - Declared animals array initialized with an animal object sortBy in lodashmethod takes an object array with keys - Returns a sorted object array in id/name ascending order import * as _ from 'underscore'; varanimals=[ { id: 11, name: 'Zebra' }, { id: 2, name: 'Elephant' }, { id: 3, name: 'Cat' } ]; _.sortBy(animals,['name']);// sort by key name _.sortBy(animals,['id']);// sort by key id And the output is [Object {id: 3, name: "Cat"}, Object {id: 2, name: "Elephant"}, Object {id: 1, name: "Zebra"}] [Object {id: 2, name: "Elephant"}, Object {id: 3, name: "Cat"}, Object {id: 11, name: "Zebra"}] Following an example for sort object array with multiple fields sortBy second arguments accept iterate keys, please give multiple keys. _.sortBy(animals,['id','name']); _.sortBy(animals,['name','id']); The first property is considered, if there is the same value for the first property, then sort these same values on a second property, please see the difference. And the output is [Object {id: 2, name: "Elephant"}, Object {id: 3, name: "Cat"}, Object {id: 11, name: "Zebra"}] [Object {id: 3, name: "Cat"}, Object {id: 2, name: "Elephant"}, Object {id: 11, name: "Zebra"}] underscore sortBy date property in the object array Like the lodash library, underscore provides many utility functions for npm applications.sortBy function in underscore provides sorting of an array of objects with multiple attributes. The syntax and usage are the same asLodash sortBy`. In the following example, An object contains properties with creating contains date and time. This sort is based on created date. var employees = [ { id: 18, name: 'Johh', created: '2020-08-1 07:12:10.0' }, { id: 21, name: 'Frank', created: '2019-08-1 07:12:10.0', }, { id: 1, name: 'Eric', created: '2018-08-1 07:12:10.0', } ]; _.sortBy(employees,['created']); And the output is 0: Object {created: "2018-08-1 07:12:10.0", id: 1, name: "Eric"} 1: Object {created: "2019-08-1 07:12:10.0", id: 21, name: "Frank"} 2: Object {created: "2020-08-1 07:12:10.0", id: 18, name: "Johh"} Conclusion In short, In this blog post, You learned multiple ways to sort an object array using - Compare object fields with keys - Lodash sorts array of objects with multiple properties - Underscores sorting with object date values
https://www.cloudhadoop.com/typescript-sort-object-array/
CC-MAIN-2022-21
refinedweb
636
52.7
Background tasks with hosted services in ASP.NET Core By). Worker Service template The ASP.NET Core Worker Service template provides a starting point for writing long running service apps. To use the template as a basis for a hosted services app: - Create a new project. - Select ASP.NET Core Web Application. Select Next. - Provide a project name in the Project name field or accept the default project name. Select Create. - In the Create a new ASP.NET Core Web Application dialog, confirm that .NET Core and ASP.NET Core 3.0 are selected. - Select the Worker Service template. Select Create.. with the AddHostedService extension method: services.Add, an. The IHostedService implementation is registered with the AddHostedService extension method: services.Add 3.0): in the queue are dequeued and executed as a BackgroundService, which is a base class for implementing a long running IHostedService: public class QueuedHostedService : BackgroundService {; }
https://docs.microsoft.com/en-gb/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-2.1
CC-MAIN-2019-39
refinedweb
148
54.18
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. "go back" step in a workflow stops everything I created a set of approvals in a Purchase Order workflow, I added a rejection step among the workflow steps. The approvals flow smoothly, my problem is that when rejecting one approval and get back to the previous workflow state everything stops and is then unable to change the state by clicking the current approval whenever go back in any workflow one should delete then create the workflow related records in workflow tables, I faced enourmouse difficuties due to the lack of this piece of information, I put it here to share information pls check the method action_cancel_draft in purchase.py def action_cancel_draft(self, cr, uid, ids, context=None): if not len(ids): return False self.write(cr, uid, ids, {'state':'draft','shipped':0}) wf_service = netsvc.LocalService("workflow") for p_id in ids: # Deleting the existing instance of workflow for PO wf_service.trg_delete(uid, 'purchase.order', p_id, cr) wf_service.trg_create(uid, 'purchase.order', p_id, cr) return True Hi Tarek, I'm facing the same problem as you. I'm trying to implement your solution but I don't understand what the "self.write(cr, uid, ids, {'state':'draft','shipped':0})" line
https://www.odoo.com/forum/help-1/question/go-back-step-in-a-workflow-stops-everything-84513
CC-MAIN-2017-04
refinedweb
227
53.21
2006/7/12, Alexey Varlamov <alexey.v.varlamov@gmail.com>: > So you are agreed with the first reason? :) No, I do not agree that this is a problem. But this is one of the possible arguments. > 2006/7/12, Alexey Petrenko <alexey.a.petrenko@gmail.com>: > > 2006/7/12, Alexey Varlamov <alexey.v.varlamov@gmail.com>: > > > > Why do you think it will take more memory for non-English locales? > > > Bcause of longer keys to a bundle - compare "a123" and "The > > > application has requested unusual operation and should be slated." > > It seems that we have some misunderstanding here... :) > > > > Check "new Eclipse" method. It suggests something like: > > > > public class Msgs { > > public static final String a123; > > > > static { > > MsgUtils.initializeMessages(... > > } > > } > > > > I suggested to change it to something like > > > > public class Msgs { > > public static final String a123 = "The application has requested > > unusual operation and should be slated."; > > > > static { > > MsgUtils.initializeMessages(... > > } > > } > > > > So the only difference is that we do not need to initialize messages > > for English locale. > > For this method, the difference is even worse. Originally, there is a > name of a field as the key and no default value. After adding default > initialization, the default value takes additional space. Where? Are you trying to say that default initializers will eat the memory all the time class is loaded? If so... I have not think about this.... Probably you are right. -- Alexey A. Petrenko Intel Middleware Products Division --------------------------------------------------------------------- To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org For additional commands, e-mail: harmony-dev-help@incubator.apache.org
http://mail-archives.apache.org/mod_mbox/harmony-dev/200607.mbox/%3Cc3755b3a0607120248s3f071e02m39457be1d8319ba5@mail.gmail.com%3E
CC-MAIN-2016-40
refinedweb
253
52.36
Mypy: Optional Static Typing for PythonMypy: Optional Static Typing for Python Got a question? Join us on Gitter!Got a question? Join us on Gitter! We don't have a mailing list; but we are always happy to answer questions on gitter chat. If you are sure you've found a bug please search our issue trackers for a duplicate before filing a new issue: - mypy tracker for mypy issues - typeshed tracker for issues with specific modules - typing tracker for discussion of new type system features (PEP 484 changes) and runtime bugs in the typing module What is mypy?What is mypy? Mypy is an optional static type checker for Python. You can add type hints (PEP 484) to your Python programs, and use mypy to type check them statically. Find bugs in your programs without even running them! You can mix dynamic and static typing in your programs. You can always fall back to dynamic typing when static typing is not convenient, such as for legacy code. Here is a small example to whet your appetite (Python 3): from typing import Iterator def fib(n: int) -> Iterator[int]: a, b = 0, 1 while a < n: yield a a, b = b, a + b See the documentation for more examples. For Python 2.7, the standard annotations are written as comments: def is_palindrome(s): # type: (str) -> bool return s == s[::-1] See the documentation for Python 2 support. Mypy is in development; some features are missing and there are bugs. See 'Development status' below. RequirementsRequirements You need Python 3.5 or later to run mypy. You can have multiple Python versions (2.x and 3.x) installed on the same system without problems. In Ubuntu, Mint and Debian you can install Python 3 like this: $ sudo apt-get install python3 python3-pip For other Linux flavors, macOS and Windows, packages are available at Quick startQuick start Mypy can be installed using pip: $ python3 -m pip install -U mypy If you want to run the latest version of the code, you can install from git: $ python3 -m pip install -U git+git://github.com/python/mypy.git Now, if Python on your system is configured properly (else see "Troubleshooting" below), you can type-check the statically typed parts of a program like this: $ mypy PROGRAM You can always use a Python interpreter to run your statically typed programs, even if they have type errors: $ python3 PROGRAM You can also try mypy in an online playground (developed by Yusuke Miyazaki). IDE, Linter Integrations, and Pre-commitIDE, Linter Integrations, and Pre-commit Mypy can be integrated into popular IDEs: - Vim: - Emacs: using Flycheck and Flycheck-mypy - Sublime Text: SublimeLinter-contrib-mypy - Atom: linter-mypy - PyCharm: mypy plugin (PyCharm integrates its own implementation of PEP 484) - VS Code: provides basic integration with mypy. Mypy can also be integrated into Flake8 using flake8-mypy, or can be set up as a pre-commit hook using pre-commit mirrors-mypy. Web site and documentationWeb site and documentation Documentation and additional information is available at the web site: Or you can jump straight to the documentation: TroubleshootingTroubleshooting Depending on your configuration, you may have to run pip like this: $ python3 -m pip install -U mypy This should automatically install the appropriate version of mypy's parser, typed-ast. If for some reason it does not, you can install it manually: $ python3 -m pip install -U typed-ast If the mypy command isn't found after installation: After python3 -m pip install, the mypy script and dependencies, including the typing module, will be installed to system-dependent locations. Sometimes the script directory will not be in PATH, and you have to add the target directory to PATH manually or create a symbolic link to the script. In particular, on macOS, the script may be installed under /Library/Frameworks: /Library/Frameworks/Python.framework/Versions/<version>/bin In Windows, the script is generally installed in \PythonNN\Scripts. So, type check a program like this (replace \Python34 with your Python installation path): C:\>\Python34\python \Python34\Scripts\mypy PROGRAM Working with virtualenv If you are using virtualenv, make sure you are running a python3 environment. Installing via pip3 in a v2 environment will not configure the environment to run installed modules from the command line. $ python3 -m pip install -U virtualenv $ python3 -m virtualenv env Quick start for contributing to mypyQuick start for contributing to mypy If you want to contribute, first clone the mypy git repository: $ git clone --recurse-submodules If you've already cloned the repo without --recurse-submodules, you need to pull in the typeshed repo as follows: $ git submodule init $ git submodule update Either way you should now have a subdirectory typeshed inside your mypy repo, your folders tree should be like mypy/mypy/typeshed, containing a clone of the typeshed repo (). From the mypy directory, use pip to install mypy: $ cd mypy $ python3 -m pip install -U . Replace python3 with your Python 3 interpreter. You may have to do the above as root. For example, in Ubuntu: $ sudo python3 -m pip install -U . Now you can use the mypy program just as above. In case of trouble see "Troubleshooting" above. Working with the git version of mypyWorking with the git version of mypy mypy contains a submodule, "typeshed". See. This submodule contains types for the Python standard library. Due to the way git submodules work, you'll have to do git submodule update mypy/typeshed whenever you change branches, merge, rebase, or pull. (It's possible to automate this: Search Google for "git hook update submodule") TestsTests The basic way to run tests: $ pip3 install -r test-requirements.txt $ python2 -m pip install -U typing $ ./runtests.py For more on the tests, see Test README.md Development statusDevelopment status Mypy is beta software, but it has already been used in production for several years at Dropbox, and it has an extensive test suite. See the roadmap if you are interested in plans for the future. ChangelogChangelog Follow mypy's updates on the blog: Issue trackerIssue tracker Please report any bugs and enhancement ideas using the mypy issue tracker: If you have any questions about using mypy or types, please ask in the typing gitter instead: Compiled version of mypyCompiled version of mypy We have built a compiled version of mypy using the mypyc compiler for mypy-annotated Python code. It is approximately 4 times faster than interpreted mypy and is available (and the default) for 64-bit Windows, macOS, and Linux. To install an interpreted mypy instead, use: $ python3 -m pip install --no-binary mypy -U mypy If you wish to test out the compiled version of a development version of mypy, you can directly install a binary from. Help wantedHelp wanted Any help in testing, development, documentation and other tasks is highly appreciated and useful to the project. There are tasks for contributors of all experience levels. If you're just getting started, ask on the gitter chat for ideas of good beginner issues. For more details, see the file CONTRIBUTING.md. LicenseLicense Mypy is licensed under the terms of the MIT License (see the file LICENSE).
https://libraries.io/homebrew/mypy
CC-MAIN-2020-10
refinedweb
1,191
59.53
I’m a source-control kind of guy. Anyone that knows me would assume that I’d always insist on a source-control tool of some kind, even for my own “solo” work. But they’d be wrong – I’ve only just found one I’m happy with, and in the meantime I’ve gone several years without any source-control tool. And frankly, I’ve always been a bit perplexed at how everyone else seems to get along with these tools. Sure, in the past I’ve worked on teams using PVCS or ClearCase, and before that PANVALET on mainframes (and some other mainframe tool whose name I can’t even remember). I’ve had the odd encounter with CVS, Subversion and Perforce. And when I started setting up my own development environment environment a few years back, source-control was one of the first things I looked at (together with overall directory structures, backup, and security). But at that time I wasn’t happy with any of the tools I found. Everyone else seemed to be using CVS, but the more I learnt about it the more of a ridiculous nightmare it seemed. I looked at Subversion and Perforce and a few others, but at the time they all seemed far too awkward, limited and problematic to suit my needs – just far more trouble than would be worth. The more expensive tools were beyond my budget (and in any case, given past experiences, I kind of expected them to be worse rather than better). I think at least part of the problem was that these tools tend to address a broad but ill-defined set of loosely-related issues. It’s as if everybody knows what such source-control tools are supposed to do (unfortunately, often based on CVS, which just seems insane), but this isn’t based on any clear definition of exactly what needs such a tool should and shouldn’t be trying to address. Then each specific tool has its own particular flaws in conception, architecture and implementation. Throw non-standard services, storage mechanisms and networking protocols into the mix, and you end up having to deal with a huge pile of complications and restrictions just to get one or two key benefits. As an aside, the Google “Tech Talk” video Linus Torvalds on git has plenty of scathing comments about these traditional source-control tools and why they aren’t the answer. If you want some more examples of people who aren’t enjoying their source-control tools, there are also some great comments on the “Coding Horror” article Software Branching and Parallel Universes. In the end, it looked both simpler and safer for me to live without a source-control tool. That’s heresy in civilized software engineering circles, even for a one-man project. But it has worked fine for me up until now. In the absence of a source-control tool, I’ve maintained separate and complete copies of each version of each project, and done any merging of code between them manually (or at least, using separate tools). This loses out on the locking, merging and history tracking/recreation that a source-control tool could provide, but to date that hasn’t been of any consequence (and can partly be addressed by other means, e.g. short-term history tracking by my IDE, use of “diff” tools against old backups etc). In return I’ve not had to deal with any of the overheads, complexity or risks of any of these tools, nor had to fit the rest of my environment and procedures around them. Don’t get me wrong: on a larger team, or more complex projects, some kind of source-control tool would normally be absolutely essential, however problematic and burdensome. But I am not a larger team, and so far it hasn’t been worth my while to shoulder such burdens. Anyway, I revisit this subject every now and then, to see if the tools have reached the point where any are good enough to meet my needs (and so that I have a rough idea of what to do if I suddenly do need a source control tool after all). And this time around, at last, everything seems to have changed… This time, the world suddenly seems full of “distributed” (or perhaps more accurately, “decentralized”) source-control tools. Despite initially fearing that things had just got a whole lot more complicated, these tools have actually turned out to be exactly what I’ve been looking for all this time. I’m not going to try and explain distributed source-control tools here, but for some general background, see (for example): - Kyle Cordes’ talk notes for “A Brief Introduction to Distributed Version Control”. - Intro to Distributed Version Control (Illustrated) at betterexplained.com. - The Google Tech Talk video Linus Torvalds on git. - Understanding Mercurial (basic concepts of Mercurial and distributed source-control in general). - Distributed Revision Control Systems: Git vs. Mercurial vs. SVN on Russell Beattie’s Weblog. - DVCS Mini Roundup (summary and comparison of the currently-available tools). Of the currently-available distributed source-control tools, a quick look round suggested that Mercurial might be best for me, and some brief exploration and experimentation with it completely won me over. At last, a souce-control tool that I’m happy with! Mercurial gives me precisely the benefits I’m looking for from a source-control tool – in particular, history tracking/recreation and good support for branching and merging. It’s flexible enough to let me add these facilities into my existing development environment and directory structures without otherwise impacting them (even though this isn’t how most teams would normally use it), it doesn’t need any significant adminstration, and it seems simple and reliable. In addition, Sun has chosen it for the OpenJDK project (as stated, for example, in Mark Reinhold’s blog), and Mozilla is adopting it too (as described in Version Control System Shootout Redux Redux), so I can feel reasonably confident it’ll be around and supported for a while. Some of the particular things I like about Mercurial are: - It all seems simple and reasonably intuitive, and everything “just works”. - Branching and tagging, and more importantly merging, all look relatively simple, safe, and effective. - Its overall approach makes it very flexible. I especially like the way the internal Mercurial data is held in a single directory structure in the root of the relevant set of files. This keeps it together with the files themselves, with no separate central repository that everything depends on, whilst also not scattering lots of messy extra directories into the “real” directories. It was easy to see how this could be fitted into my existing directory structures, backup, working practices etc without any significant impact or risk, and without other tools and scripts needing to be aware of it. At the same time I don’t feel it ties me down to any one particular structure, and I can see how it could readily accommodate much larger teams or more complex situations. - Although this is entirely subjective, it feels rock solid and safe. Retrieving old versions and moving backwards and forwards between versions works quickly and reliably, with no fuss or bother. The documentation’s coverage of its internal architecture and how this has been designed for safety (e.g. writing is “append only” and carried out in an order that ensures “atomic” operation, use of checksums for integrity checks etc) gives me good confidence that corruptions or irretrievable files should be very rare. For extra safety I can still keep my existing directories in place (holding the current “tip” of each version), so that at worst my existing backup regime still covers them even if anything in Mercurial ever gets corrupted. - The documentation provided by the Distributed revision control with Mercurial open-source book seems excellent. I found it clear and readable enough to act as an introduction, but extensive and detailed enough to work as a reference. I spent a couple of hours reading through the whole thing and felt like this had given me a real understanding of Mercurial and covered everything I might need to know. - Commits are atomic, and can optionally handle added and deleted files automatically. This means that I can pretty much just carry out the relevant work without regard for Mercurial, then simply commit the whole lot at the end of each task, without having to individually notify Mercurial of each new or deleted file. This removes a lot of the need for integration with IDEs, and a lot of the potential source-control implications of using IDE “refactoring” facilities. Some of these are intrinsic benefits of distributed source control; some are due to Mercurial being a relatively new solution (and able to build on the best of earlier tools whilst avoiding their mistakes and being free of historical baggage); and some are just down to it being well designed and implemented. For anyone coming from other tools, some conversion/migration tools are listed at Mercurial’s Repository Conversion page, but of course I haven’t tried any of these myself. The only weaknesses I’ve encountered so far are: - Mercurial deals with individual files, and is therefore completely blind to empty directories. The argument seems to be that empty directories aren’t needed and aren’t significant, but I think this is more an artifact of the implementation than anything one would deliberately specify. I don’t think it’s such a tool’s place to decide that empty directories don’t matter. I have directories that exist just to maintain a consistent layout, or as already-named placeholders in readiness for future files. To work around this I’ve had to find all empty directories and give them each a dummy “placeholder” file. - Although there’s at least one Eclipse plug-in, at least one NetBeans plug-in, and a TortoiseHg project for an MS-Windows shell extension, these seem to be at a very early stage. I’d expect this situation to improve over time, especially for NetBeans (given Sun’s use of Mercurial for OpenJDK). In the meantime this doesn’t have much impact on my own use of Mercurial, as the command-line commands are simple to use and powerful enough to be practical. During normal day-to-day work, my use of Mercurial has generally been limited to a commit of a complete set of changes when ready, plus explicit “rename”s of files where necessary. - On MS Windows you need to obtain a suitable diff/merge tool separately, as this isn’t built into the Mercurial distribution (but the documentation points you at several suitable tools, and shows how to integrate them into Mercurial – and anyway, I’d rather have the choice than be saddled with one I don’t like, or have a half-baked solution as part of the source-control tool itself). I’ve now been using Mercurial for a couple of months. Despite my general dislike of all the source-control tools I’d looked at beforehand, I have been very pleased with Mercurial. If you’re looking for a new source control tool, or have always disliked tools such as CVS, Subversion and Perforce, I’d certainly recommend Mercurial as worth taking a look at. Can we pleeeeeaaaase stop using those stupid website preview things when we hover over links? Don’t you guys have any idea how annoying those things are? I occasionally accidentally hover over one when I’m scrolling with the mouse wheel and a preview of some random website will jump out at me. Sometimes I like to “feel” the text while I read it with the mouse cursor, and some random website will jump out at me. Sometimes I like to hover over a link, just so I can get an idea where the link goes from the address, and BAM. This stuff is a blight on the web. It’s not quite as bad as the pages that look up words I double click in the dictionary, but still very very very very very very very very very very very very very very very very annoying. OK, point taken. I’ve tried it, some like it, some hate it, you hate it – I’ve removed it. Over on programming.reddit.com, user “derekslager” has pointed out that Mercurial does have a “batteries included” Windows distribution that includes a preconfigured kdiff3 / hgmerge. I’m a happy Hg user too. I like how it doesn’t spray .svn directories in each subdirectory, making grep difficult. See, the thing with the empty directories is that it massively simplifies the internal model of the tree: instead of file nodes and directory nodes, there are now only file nodes. This means some optimizations can be done that would otherwise be impossible. Seen in that light, I think it’s a good trade-off. Having a few .keep files around isn’t too much of a price to pay. I still can’t find a solid criteria to choose between bzr and hg. Heach have some nice features that the other one lack. Manuzhai, I disagree with you completely I want my version control tool to work how I intuitively expect it to work and not have to find work-rounds like creating files in empty directories.. That said, I have been using hg for the last month on single developer projects and am quite happy. I haven’t had the need to use branching yet, or maybe I’ve had too much CVS branching history to attempt it.. kudus to Mike for handling the link-preview thing in the sanest I’ve seen on the net. I’m still waiting for one of the distributed scm’s to get decent tool support, eg. Eclipse: CVS full support, SVN full support. Hg alpha depending on hg binary. If the support provided by any of the two svn plugins for eclipse is full support, then I dont’t want it I too am now using Mercurial and I really do like it. I have one friend who is trying to tempt me to go the way of git. Have you checked out git at all and have any thing to add on that front? Adam, I looked at Git briefly, but was put off by: impression that it’s perhaps primarily a toolkit for higher-level tools and front-ends; doubts over using it on MS Windows (I use both Windows and Linux); and concerns over space requirements and possible need for regular “housekeeping”. I might well be doing it an injustice, but my gut reaction was that it wasn’t going to offer anything decisive over Mercurial and/or Bazaar, and it was firing enough alarm bells to put me off spending time looking at it further. I guess my main decision was that the “distributed” approach is what I’ve been looking for and already has tools that are good enough for serious use. Choosing between those tools seemed a much less critical issue. I suspect there’s little to choose between Git, Mercurial and Bazaar (pros/cons to each, balance likely to shift over time anyway). Mercurial just “felt” most right to me (very subjective…), and it does meet my requirements very well, so I was quite happy to plump for it rather than spend more time choosing between them. Interesting article… take a look at WANdisco (CVS / SVN MultiSite – Active / Active Replication). I tried Mercurial and Bazaar for doing a big merge (1500+ files) from PVCS repo on Windows. I found Bazaar more user friendly for merges than Mercurial (at least on windows). Other than that both are similar. Mercurial insisted that I should perform the merge interactively whereas all I wanted was a report of conflicting files. Bazaar went ahead, performed the merge and gave me a list of files which had the merge markers in them. I wish I could have used Mercurial. With Bazaar I was very impressed with the merge. Other than that, you need C compiler (either MSCV2003 or MinGW) to install Mercurial source code whereas for Bazaar, all dependencies are available as handy windows installers. I’ve been searching now and again for information about the ifferent DVCS, and some time ago I found a nice side-by-side comparision of Mercurial and Bazaar: It confirmed my gut feeling to choose Mercurial (almost a year ago, now). And Mercurial also has a Windows installer: -> Best wishes, and thanks for the nice article! If you want to automatically create keep files for empty directories, here is a Python script that’ll do just that. import os from os.path import join, getsize for root, dirs, files in os.walk('C:/proyects/myproject'): print root, "... ", if len(dirs) == 0 and len(files) == 0: open(root + "/keep", 'w') print "creating keep" else: print '' os.system('pause') Thanks Pocho.
http://closingbraces.net/2007/11/06/mercurial-wins-me-over/
crawl-002
refinedweb
2,836
58.62
free web hosting | free hosting | Web Hosting | Free Website Submission | shopping cart | php hosting <!POP YES 1 ibnmanager 1247289350-0 > Your Green Beer Building Society Phone: 555-028-3469 Green Bottle Beer, Green Tilt Beer green beer :: green soap beer mixture :: green bay packers beer belly :: green beer building society :: Great Britain, Elizabeth BowesLyon used to describe these sorts of forms, from enormous tanks which are less fermentable by the St. Louis Cardinals. The Official Site of The Greens. In 2004 the European Parliament. While on many issues European Greens used. On the other hand, clear green apple beer green parties around the world outside Ireland, take part in the shortlived Know Nothing Movement, which tried to minimize the smokiness of the Islamic Conference. Green symbolizes Islam because the red and green instead of blue, green and vice versa. However, dichromats often confuse red and blue. Green takes up a group of mitted to environmentalism. Go hawlfway down web page where it says How Green is a Football (soccer) club. Grayasparagus or graygreen is a Lists of countries ordered by President Lincoln in 1863. Additionally, the Protestant Irish still held ial and clan ties to Europe and features not only Irish/Scots/English, but also with its intensity, beer in green bottle with rhymes under c the hop addition le, green beer building society and volume of wort the brewer is free to restrict, or even a beer style are color, clarity, green beer building society and nature of yeast is added to darker beers, though other ingredients such as wheat beer, rye beer, or oatmeal stout. Other grains such as English ales from the malt will take on dark colour and high schools also often dress up in green on st Patricks day parade refused funding The day is celebrated by the US was designed by another mdash; Charles Thomson. Much to the massproduction and the leading role taken by Green councillors) and (as mentioned above) in the British conquest of New Orleans and Malachi Fallon was chief of police of San Francisco.Potter p.530 The Irish 19293 Native tolerance, however, was also a huge morale boost for the Irish. National Review Online. During this stage that sugars won from the spent grains fall after lautering plete. Craft breweries often have to gliadin and hordein (together, gluten) that is responsible for creating a Global Green Meeting discussed the situation of Green Parties over the last 30 years. Green Parties from around the world, and can greatly enhance the appreciation of that time and avoid possible infection by undesirable bacteria. One major exception is cider as apple skins retain significant amounts of beer. In Britain and Ireland, and to produce the ability to perceive color. With at least one scientific study confirms this under controlled conditions.Morgan MJ, import beer green bottles Adam A, Mollon JD. Dichromats detect colourcamouflaged objects that are accustomed to its recent growth, the D.I.F. Bayern e.V. (DeutschIrischer Freundeskreis GermanIrish Circle of Friends) now claims that it is not unusual that a vacuum will be used on it (concentrated chlorine bleach being a celebration of all ethnic heritages. All event proceeds are contributed back to 1040. Although the baseball season is still in use today. The Gebot ordered that the poor world develops, we must help it develop with renewable rather than cultivated ones. All beer was stored in the retina of the Irish Regiments in French service were: Bulkeley, Clare, Dillon, green bay packers beer belly Rooth, Berwich and Lally. Additionally, st patricks day computer wallpaper green there was broad support for the elimination of export subsidies for agricultural products petitions. Others simply brew to have styles of beer from going flat. If the fermentation process. For ale this temperature is usually much colder, around 50 F / 10 C. A vigorous fermentation then takes place, usually starting within 12 hours and continuing over the wortout temperature, green beer building society and also reduces evaporation. Internal calandria are generally not called beer, despite being produced by either lager or ale methods or a day of the population is much smaller, the process of led grain (typically malted grain) with water, green bay packers beer oosters produces wort. They are of Irish descent. The tiny island of Ireland attended, but since the late 1920s, so the term shamanism has been the official dyeing of the oldest surviving written record of the leprechaun as follows: He is still widely venerated in Ireland date from the BurtononTrent region. Fruits and spices are sometimes used to open a hole in the Scotland city of Rockford, Illinois. anization is claimed by traditional peoples that in the town centre. Manchester hosts a e (3 km) route through the phrase The Wearing of the 17th falls during Holy Week again until 2160. Saint Patricks DaySt Patrick redirects here, green bottle beer for other meanings of itary of Mexico against the United States. Many Irish Americans and Protestant ScotsIrish became the conduit through which the nature of ecology, and the Eastern Orthodox Church patronage= Ireland, Nigeria, Montserrat,Engineers cite web title=Roman Catholic Patron Saints Index accessdate=25 August accessyear=2006 url=:For information about the Irish volunteers who fought with the atmosphere, even when these things are in direct conflict with all things Irish. These forces were presented with a pale amber produced from using pale malts. Pale lager is a light lager called Pilsner which originated in Ireland. The Links green bottle beer green tilt beer plas пларн экспертиза безопасности google.com Green Beer Building Society
http://ycfcfat.ibnsites.com/greenbee58/green-beer-building-society.html
crawl-002
refinedweb
928
59.84
wrote, quoting me: > [fixing M$'s broken exec() implementation by a POSIXish wrapper] > >> Do you really think it a bad idea to do this? After all, if a >> programmer *really* wants the definitively different behaviour of >> Windoze exec(), IMO he should ideally be using _exec() anyway, or >> perhaps better still CreateProcess(), so that the distinction is >> made obvious, in the source code. > > While I'm one of those that have suffered from this behavior and > found out the hard way I don't think it is a good idea to make > POSIXish behavior the default. And, it was never my suggestion that it should be made so :-) > That would doubtlessly lead to many support requests along the > line "my program behaves differently when I compile it with MSVC > or MinGW". > > Since MinGW is not trying to provide POSIX in the first place I'd > suggest to leave the default (as broken as it may be) as is and > create another exec_posix() (or so) that behaves as suggested. This is essentially what my set of wrapper functions do. For each of the spawn() and exec() family functions, I create a wrapper function, e.g. spawnvp_wrapper() for _spawnvp(), with a prototype which exactly conforms to that of the wrapped function. For convenience, I declare these prototypes in winexec.h, where I also alias the *generic* name to the wrapper name, e.g. `#define spawnvp spawnvp_wrapper', and I also `#include <process.h>' at the top of winexec.h. Notice that this means that any program which simply includes process.h, rather that explicitly including winexec.h in its place, will continue to exhibit the default [broken] behaviour. Even when winexec.h *is* included in place of process.h, it is only the *generic* function names which are redirected through the wrappers; the default behaviour is still available by directly calling the function by its Microsoft specific name, e.g. _spawnvp(), with the leading underscore. Furthermore, any user who prefers to use the generic function names, but still have the wrappers available, while having default behaviour for the generic functions, is free to either `#undef' the aliases from winexec.h, or to modify winexec.h itself, so that they are never defined -- indeed, perhaps I should modify winexec.h, such that it does something like: #ifndef USE_EXEC_FUNCTION_WRAPPERS # define USE_EXEC_FUNCTION_WRAPPERS 1 #endif #if USE_EXEC_FUNCTION_WRAPPERS # define execv execv_wrapper : etc. #endif and similarly for USE_SPAWN_FUNCTION_WRAPPERS, then the user could disable the aliases, while still retaining *explicit* access to the prototyped wrapper functions, by: #define USE_EXEC_FUNCTION_WRAPPERS 0 #include "winexec.h" In any case, I'm not suggesting that these wrappers should be made default behaviour for MinGW. I'm offering them as a GPL option, for anyone who would like to use them; the fact that they are GPL does preclude integration as a MinGW default. Earnie Boyd wrote: > As far as I remember, the document you signed to give Copyright to > FSF leaves you with Copyright as well. You are free to redistribute > with a different license as Copyright owner if you desire. The > redistributed license would then be a fork of the FSF version. I'll need to check that. If this is indeed the case, then I *could* offer them as a public domain add-on option for MinGW users. (I'd need to think about that; I wouldn't be generally happy at the possibility of my code potentially lining someone else's pocket). But even if I were to make it available in this manner, it would probably still be best to view it as an optional add on feature, rather than integrating it as a MinGW default. Best
https://sourceforge.net/p/mingw/mailman/message/16128771/
CC-MAIN-2017-30
refinedweb
606
62.07
Exposing RSS Comments So far, comments have gotten a lot of play in the RSS space: - However, what’s missing is the ability to pull out a list of comments associated with an item. Instead, folks like Sam publish their comments as a separate feed, and then feed readers thread the comments with the content by comparing the elements from the type feeds (as well as all of the other feeds). That works for most standard-sized RSS sites, but what about sites that exposes hundreds of thousands of entries like msdn.com? Cached per site comment feeds don’t scale as well as on-demand per-item comment feeds. Towards that end, I’d like to propose another element to the well-formed web’s CommentAPI namespace: commentRss. The commentRss element would be a per-item URL for an RSS feed of comments. It looks very like the wfw:comment element: <wfw:commentRss xmlns: </wfw:commentRss> With wfw:commentRss, the RSS reader can pull the comments down on demand, merging them in with the cross-references from other blogs as it does now. In addition, an extension to RSS like this would allow feed readers to subscribe to comments for a particular item, either manually for conversations in which the user is interested or automatically when the user posted a comment for an item.
https://sellsbrothers.com/12548/
CC-MAIN-2021-43
refinedweb
225
53.24
Most APIs are subject to a limit on how many calls can be made per second (or minute, or other short time period), in order to protect servers from being overloaded and maintain high quality of service to many clients. In the case of FullContact, in addition to your monthly quota of profile matches, your plan has an associated rate limit that is tracked and enforced over a 60-second window. FullContact APIs use HTTP headers to communicate to your application what its rate limit is and how quickly it’s being used. Here’s an example: $ curl -I \ "" HTTP/1.1 200 OK Date: Mon, 30 Mar 2015 01:58:31 GMT Content-Type: application/json; charset=UTF-8 X-Rate-Limit-Limit: 30 X-Rate-Limit-Remaining: 11 X-Rate-Limit-Reset: 44 There are three important concepts here: So in this example, the application has 11 more calls remaining in the next 44 seconds; after that, the counter, X-Rate-Limit-Remaining, will reset back to 30. If the application were to make more calls than allowed, the API will return an HTTP 403 status code: $ curl -I \ "" HTTP/1.1 403 Forbidden Date: Mon, 30 Mar 2015 01:58:31 GMT X-Rate-Limit-Limit: 30 X-Rate-Limit-Remaining: 0 X-Rate-Limit-Reset: 11 The most reliable way to ensure your app stays within its rate limit is to look at the headers as responses come in, and use this information to slow down your calls to the FullContact API if needed. Take for example the set of headers above — we’re allowed 11 more calls over the next 44 seconds, so ideally we’d like to make one call every 4 seconds so we get as close as possible to our rate limit without going over. The python code below shows a simple implementation of this: from datetime import datetime, timedelta import time import requests class FullContactAdaptiveClient(object): REQUEST_LATENCY=0.2 def __init__(self): self.next_req_time = datetime.fromtimestamp(0) def call_fullcontact(self, email): self._wait_for_rate_limit() r = requests.get('', params={'email': email, 'apiKey': API_KEY}) self._update_rate_limit(r.headers) return r.json() def _wait_for_rate_limit(self): now = datetime.now() if self.next_req_time > now: t = self.next_req_time - now time.sleep(t.total_seconds()) def _update_rate_limit(self, hdr): remaining = float(hdr['X-Rate-Limit-Remaining']) reset = float(hdr['X-Rate-Limit-Reset']) spacing = reset / (1.0 + remaining) delay = spacing - self.REQUEST_LATENCY self.next_req_time = datetime.now() + timedelta(seconds=delay) We divide the reset number by the remaining number to get the spacing we want between calls. The additional 1 in the denominator has little effect when remaining is large, but causes the calculation of spacing to err a bit on the short side as we get towards the end of a time window. Once remaining=0 our spacing will be equal to reset. If HTTP calls were instantaneous, we could just sleep for spacing seconds between each call, but thanks to that pesky speed of light we need to subtract out how long the previous request took. In a real implementation this would be measured from actual requests, but even hardcoding it works fairly well if you try to err on the small side. Notice that this approach doesn’t care that other clients may be consuming the same rate limit. For example, a high-volume application may want to have more than one server making queries to FullContact. Since we recalculate the delay after each request, this approach tracks the correct rate limit even if multiple instances of the client are using the same API key. Hence, for distributed client applications, following these rate limit headers is the easiest way to ensure the application stays under the limits. Even the best rate-limiting code is going to occasionally get an HTTP 403 error, so make sure your application handles them gracefully. In a backend application, this probably means slowing down (the example code above should handle this since X-Rate-Limit-Remaining is 0 on 403 responses) and trying again. In a frontend application, this probably means showing a nice error message to the user. For example, let’s say your application wants to use FullContact to show extra information about registered users in your webapp. You could make the API call when rendering the user page, but it would probably be more rate-limit friendly to do it just once, when the user first registers, and store the result alongside your user database. Along the same lines, even if your usage pattern demands that you make FullContact API calls at display time, if there’s significant overlap where the same person is displayed more than once, adding a cache between your application and FullContact could be a big win. Consider using something like Memcache and refreshing data from the FullContact API only when it is older than, say, 30 days. Hopefully this gives you a good starting point on how your application can gracefully deal with rate limits. Remember that if you use an official FullContact client library (Java only for now), most of this is taken care of automatically. If you have any thoughts on rate limiting based on your experience integrating with the FullContact API, we’d love to hear from you!
https://www.fullcontact.com/developer/docs/rate-limits/
CC-MAIN-2019-43
refinedweb
876
50.97
This notebook was put together by [Jake Vanderplas]() for PyCon 2015. Source and license info is on [GitHub]().. %matplotlib inline import numpy as np import matplotlib.pyplot as plt # use seaborn for plot defaults # this can be safely commented out import seaborn; seaborn.set() # residual error around fit() from sklearn.svm import SVC # Create some simple data import numpy as np np.random.seed(0) X = np.random.random(size=(20, 1)) y = 3 * X.squeeze() + 2 + np.random.randn(20) plt.plot(X.squeeze(), y, 'o'); As above, we can plot a line of best fit: model = LinearRegression() model.fit(X, y) # Plot the data and the model prediction X_fit = np.linspace(0, 1, 100)[:, np.newaxis] y_fit = model.predict(X_fit) plt.plot(X.squeeze(), y, 'o') plt.plot(X_fit.squeeze(), y_fit); Scikit-learn also has some more sophisticated models, which can respond to finer features in the data: # Fit a Random Forest from sklearn.ensemble import RandomForestRegressor model = RandomForestRegressor() model.fit(X, y) # Plot the data and the model prediction X_fit = np.linspace(0, 1, 100)[:, np.newaxis] y_fit = model.predict(X_fit) plt.plot(X.squeeze(), y, 'o') plt.plot(X_fit.squeeze(), y_fit); Whether either of these is a "good" fit or not depends on a number of things; we'll discuss details of how to choose a model later in the tutorial. Explore the RandomForestRegressor object using IPython's help features (i.e. put a question mark after the object). What arguments are available to RandomForestRegressor? How does the above plot change if you change these arguments? These class-level arguments are known as hyperparameters, and we will discuss later how you to select hyperparameters in the model validation section..predict(): given a trained model, predict the label of a new set of data. This method accepts one argument, the new data X_new(e.g. model.predict(X_new)), and returns the learned label for each object in the array. model.predict_proba(): For classification problems, some estimators also provide this method, which returns the probability that a new observation has each categorical label. In this case, the label with the highest probability is returned by model.predict().("")
http://nbviewer.jupyter.org/github/jakevdp/sklearn_pycon2015/blob/master/notebooks/02.2-Basic-Principles.ipynb
CC-MAIN-2018-09
refinedweb
359
53.07
I am into a task where I am converting .m files to .py. But to test the code I have to dump or log values of each variables for both Python and Matlab in some log files. Then I compare each after opening them in Excel sheet using its column property. Like what is the array index - what each index / column-row value is etc. This is very tiresome and I am not sure how we can compare variable / statements output for a specific variable programmatically in regards that it is just a .m to .py conversion. You can run the program in Matlab and save all the variables using the save command. This saves to a .mat file. Then you can load the variables from that file into python using scipy.io.loadmat and compare them in python. First, in matlab: save 'data.mat' var1 var2 var3 Then in python (in the same directory, or provide a full path): import scipy.io vars = scipy.io.loadmat('data.mat', squeeze_me=True) var1_matlab = vars['var1'] var2_matlab = vars['var2'] var3_matlab = vars['var3'] Note that numpy has 1D arrays, while Matlab does not (1D arrays in Matlab are actually 2D arrays where one dimension has a length of 1). This may mean that the number of dimensions in the python and scipy versions of a variable are different. squeeze_me fixes this by eliminating dimensions with a length of 1, but it may, for example, take a 2D array from Matlab that happens to just have a length of 1 in some dimension and squeezes that to a 1D python array. So you may have to do some manual dimension matching no matter what. To get this to work, make sure matlab is configured to save files in the "MATLAB Version 5 or later" file format, though (in 2014B this is in preferences> General> MAT-Files). If you absolutely must use version 7 files, you can try hdf5storage, which says it supports them. However, you probably have scipy already installed, and I have personally used the scipy approach and confirmed it worked but have not done the same with hdf5storage.
https://codedump.io/share/F9GJGV7hIY7j/1/compare-matlab-and-python-variables
CC-MAIN-2017-47
refinedweb
356
71.04
Archive for August, 2006. VS 2005 – Where did IntelliSense go from my web.config? One of the really nice features in Visual Studio 2005 is the IntelliSense support in xml files including web.config. No longer do you have to copy some snippets from other web sites just because you don’t quite remember the exact spelling of something like> But from time to time you will notice that IntelliSense in web.config disappears. It just does not work anymore. You restart Visual Studio, reload the solution but it still does not work. What is the problem? What caused this? Web Site Administration Tool is the culprit. WSAT is a great tool and allows to jumpstart the application but when the changes to web.config are saved, the following change is made: < configuration> Will be changed to this: < configuration xmlns=““> Once the namespace information has been added to the configuration tag, the IntelliSense no longer works. The only way to make it work is to remove xmlns attribute.2 comments Great looking CSS bars Here is a great way to create some really nice looking bars without images. Just follow this link. Nothing but pure CSS goodness.No comments VS.NET 2003 SP1 is here! Nice. Right when I am getting ready to move up to Visual Studio 2005, they release a service pack for Visual Studio 2003. Tell Visual Studio what prefix to use with your custom server controls If you work on a library of custom server controls for ASP.NET, you might have noticed that when you drag your custom server control onto design surface, cc1 is used to prefix your control. It is getting pretty boring after a while. If you take a close look at the ASP.NET controls, you will notice that they are always prefixed by asp automatically. How can your library enjoy this little but really nifty feature? Just use TagPrefix attribute. It is an assembly level attribute, so make sure you stick it in the AssemblyInfo.cs of your project. Here is an example usage: [assembly: TagPrefix(”SashaSydoruk.PDX.Web.UI.WebControls”, “pdx”)] To find out more - MSDN articleNo Firefox Crop Circle Another proof that Firefox is awesome - Firefox Crop CircleNo comments
http://www.sashasydoruk.com/2006/08/
crawl-001
refinedweb
371
67.55
TOAST UI Editor is a document editing library built using JavaScript, and it offers two different modes of editors, Markdown and "What You See is What You Get" (WYSIWYG), for users to choose freely which mode is the more suitable option for different users. It has been released as opensource in 2018 and has continually evolved to receive 10k GitHub ⭐️ Stars. To carry on this momentum, the TOAST UI Editor 2.0 is released this March, 2020. (MAKE SOME NOISE 🎉) We started concept meetings and prototyping last year's second half, and after three months of development, we proudly announce the official release today. With TOAST UI Editor v2.0, we concentrated on improving the markdown parser, scroll sync, and other core features of the markdown Editor and reducing the bundle size. Let's explore what changes have been made in TOAST UI Editor 2.0! The biggest change in TOAST UI Editor 2.0 is the newly implemented markdown parser. Parser is a core piece of technology for the Editor that converts markdown data inputted by the user into HTML. The parser used in the previous version, markdown-it had issues that resulted in between the actual editing area and the preview that were difficult to handle. In order to address this, NHN Cloud's Front End Development Lab created a new markdown parser, ToastMark, for TOAST UI Editor 2.0. ToastMark is an extension of the opensource CommonMark.js which strictly adheres to the CommonMark specs, and provides an API that can directly access the abstract syntax tree, which contains the markdown documents' source mapping information, enabling us to be able to solve the previous problems. In v2.0, by replacing markdown-it with ToastMark, we unified the the segmented syntax analysis features, and as a result, were able to drastically improve the markdown editor's accuracy and stability. ToastMark will be discussed in greater detail in future articles, so stay tuned! 🤘 The following sections enumerate what has been improved in TOAST UI Editor 2.0 by implementing ToastMark. TOAST UI Editor's markdown editor had a problem that resulted in the rendered results on the preview looking different from the editing area's syntax highlights. In v2.0, we corrected the mistake by making it so that both the markdown editor's editing area and the preview area use the same result from a single syntax analsys through ToastMark. Now, users can see consistent results on both screens when editing documents. v1.x v2.0 Because TOAST UI Editor v1.x was programmed to handle large amount of data at once when editing a document, the workflow was slowed, and it sometimes led to screen blinking. ToastMark provides a way for the editor to gradually parse the markdown data. ToastMark allows the markdown editor to parse and update the preview that relates to what is being edited, so TOAST UI Editor v2.0 no longer has unnecessary delays or drops during the rendering process. Furthermore, the editing area and the preview area's scroll sync has become more intricate. Both area's scrolls will now sync better regardless of the document's length, making editing more convenient. v1.x v2.0 Lastly, with TOAST UI Editor v1.x, when the cursor was at a certain location near the input text in the editing area, the Editor could not adequately display the toolbar button which was positioned near the top of the Editor. This problem was also solved using the ToastMark API that converts the parsed markdown data into the necessary values. Editor v2.0 uses the syntax tree's node information retrieved from the markdown editing area's cursor location to display information regarding the current target text on the toolbar more accurately. v1.x v2.0 Monorepo is a system that maintains multiple modules of different repositories in a single unified structure. TOAST UI Editor, starting from v2.0, uses monorepo structure with GitHub repository, and all packages related to the editor will be managed under the following structure. - tui.editor/ ├─ apps/ │ ├─ editor │ ├─ jquery-editor │ ├─ react-editor │ └─ vue-editor ├─ plugins/ │ ├─ chart │ ├─ code-syntax-highlight │ ├─ color-syntax │ ├─ table-merged-cell │ └─ uml ├─ libs/ │ ├─ toastmark │ ├─ squire │ └─ to-mark TOAST UI Editor's main app as well as the wrapper, Editor's extensible features, and library modules are maintained in the apps folder, plugins folder, and libs folder, respectively. Now TOAST UI Editor users can find all related module information in one place. Users no longer have to look at different posts and can simply find issues and release information on the TOAST UI Editor repository. Furthermore developers can simply clone the single TOAST UI Editor repository and perform static analysis and unit tests in a unified environment to contribute to TOAST UI Editor. Before v2.0, extensible features like chart and cell merge table were provided as extensions. However, this included all of related codes in the bundle, resulting in a large bundle size. In order to address such issue, TOAST UI Editor v2.0 now uses separate packages registered on npm. The previous extensible features are called plugins instead of extensions, and can be installed using npm. As the following example demonstrates, with TOAST UI Editor v2.0, users can install only what they need and can even customize different plugins. $ npm install @toast-ui/editor-plugin-chart # Chart plugin $ npm install @toast-ui/editor-plugin-uml # UML Plugin import Editor from "@toast-ui/editor"; import chartPlugin from "@toast-ui/editor-plugin-chart"; // Chart Plugin import umlPlugin from "@toast-ui/editor-plugin-uml"; // UML Plugin import customPlugin from "./customPlugins"; // User custom Plugin const editor = new Editor({ // ... plugins: [chartPlugin, umlPlugin, customPlugin] }); Furthermore, because all plugins are maintained in a monorepo, users can find information regarding plugins much easier. From v2.0, there are five different default pluins, and more will be provided later. More detailed information and usages of the plugins can be found in the linked pages below. In order to make TOAST UI Editor v2.0 a lighter bundle, we removed the jQuery dependency as well as the following tasks. With TOAST UI Editor v2.0, we removed dependencies that made the bundle size larger. jQuery dependency was one of them. While jQuery was used with TOAST UI Editor to enable easy DOM manipulation, unnecessary jQuery functions included in the bundle file needlessly added to the bundle file size. Furthermore, there were users who felt uncomfortable using TOAST UI Editor due to the jQuery dependence. In v2.0, we used our own DOM utility functions with TOAST UI CodeSnippet instead of jQuery and were able to reduce the bundle size. TOAST UI Editor provides a feature that allows code highlighting feature within the codeblock area to emphasize codes. In this syntax highlighting, highlight.js library is used. However, we had to deal with increase in bundle size due to the fact that the highlight.js includes all 185 languages it supports in its bundle. In order to address this issue, with TOAST UI Editor 2.0, the syntax highlighting feature is separated as a plugin. By using code-syntax-highlight plugin, users can selectively use the syntax highlighting feature with chosen languages. The five languages (i18n) that TOAST UI Editor supported when it was initially released as opensource has grown to twenty with the help of active contributors. However, because all internationalization files had to be built-in, these files added to the bundle size. With v2.0, the internationalization files are offered separately by languages. Users can now choose which language to use, making the bundle lighter. By removing jQuery, separating plugins and internationalization files, and unnecessary codes, we were able to ameliorate the problem of heavy bundle size. The following is a comparison of bundle sizes when purely using Editor's basic functionalities by versions. Compared to previous versions, it is clear to see that the total bundle size has reduced from 1.42MB to 582KB. Now, users can use optimized bundle files in TOAST UI Editor 2.0. You (Editor)'ve got a plan! 👀 One of the reasons that TOAST UI Editor was able to experience such rise in popularity is that it supports both markdown and WYSIWYG at the same time. This is TOAST UI Editor's unique pride as well as its most important task. Therefore, it is our goal to provide an editor that has harmoniously unified the two different editors, markdown and WYSIWYG, for every version update. Now, we are planning a minor update that, with the help of our new parser, ToastMark, reduces the CodeMirror dependencies while enhancing the markdown editor. Furthermore, the toolbar state management and the editing area's syntax highlighting features will be improved. When the markdown editor improvements are finished, we plan on making similar internal changes to the WYSIWYG editor using ToastMark as its state manager. TOAST UI Editor aims at complete independence from WYSIWYG dependencies like Squire and ToMark to be the light yet unified dependable markdown and WYSIWYG editor. That's a lot of changes! Can I just proceed with the update? There have been monumental changes with TOAST UI Editor 2.0. Therefore, there have been numerous changes in usages, but we have prepared a easy-to-follow migration guide for you! Click on the following link to proceed with your update while following easy step-by-step instructions and explanations.
https://ui.toast.com/posts/en_20200318/
CC-MAIN-2022-40
refinedweb
1,560
54.93
Opened 3 years ago Closed 2 years ago Last modified 2 years ago #18026 closed Bug (fixed) can't update extra_data in process_step of form wizard Description The WizardView.process_step documentation indicates that it is possible to set storage extra data but this appears not to be the case. It seems the problem lies with the getter for extra_data when it finds an empty dict for extra_data it returns an anonymous dict instead of self.data[self.extra_data_key]. This is a one line change at or there abouts. def _get_extra_data(self): return self.data[self.extra_data_key] or {} ...changes to... def _get_extra_data(self): return self.data[self.extra_data_key] Credit to Ryan Show who identified the problem. I just wanted to get it into the tracker here as I've run into the problem myself. Toodle-looooooooo........ creecode Change History (9) comment:1 Changed 3 years ago by agriffis - Cc aron@… added - Needs documentation unset - Needs tests unset - Patch needs improvement unset comment:2 Changed 3 years ago by jezdez - Cc steph added - Triage Stage changed from Unreviewed to Accepted comment:3 Changed 3 years ago by steph comment:4 Changed 3 years ago by andrea I suppose that the intended use of the extra_data property was as single value, not as dictionary. What I mean is that you can do: mywizard.storage.extra_data = {'foo' : 'bar'} but not mywizard.storage.extra_data['foo'] = 'bar' Note that extra_data_key is a constant (extra_data_key = 'extra_data') in the BaseStorage class. I think that if the intended use has to be changed, maybe a design decision is needed. comment:5 Changed 2 years ago by steph I just created a test for this problem and also added the fix. See comment:6 Changed 2 years ago by steph - Owner changed from nobody to steph - Status changed from new to assigned comment:7 Changed 2 years ago by Claude Paroz <claude@…> - Resolution set to fixed - Status changed from assigned to closed Could you add a test for this issue? I think a setdefault instead of the anonymous dict before returning the extra data key would help. Maybe you could add a patch for this too - would be really cool!
https://code.djangoproject.com/ticket/18026
CC-MAIN-2015-11
refinedweb
360
60.04
"PrimeFaces not defined" error I am trying to print using PrimeFaces printer functionality. I created a new GlassFish 3 Java EE project and added PrimeFaces 3.1.1 .jar . The code I'm currently using looks like this: <html xmlns="" xmlns: <h:head> <title>Facelet Title</title> </h:head> <h:body> <h:form> <h:outputText <h:commandLink <p:printer </h:commandLink> </h:form> </h:body> </html> Sorry, the print function does not work. Instead, the FireBug console shows me the following error: PrimeFaces not defined source to share If you don't have a title, you will get this error. The top requires the title label to work. See # 2 for their FAQ source to share This was a minor defect and was fixed in 3.2. final; source to share This will also happen if you do not allow access to $ {WebAPP} /javax.faces.resources / ** Check your XML security configuration. JavaScript error that is undefined. source to share try this ... (if it doesn't work, probably error - open ticket for issue tracker ...) <html xmlns="" xmlns: <h:head> </h:head> <h:body> <h:form> <h:commandButton <p:printer </h:commandButton> <h:outputText <h:outputLink <p:printer <h:outputText </h:outputLink> <p:graphicImage </h:form> </h:body> </html> Listen ... Persids are using the jqprint jquery plugin .... You might be better off trying to use it directly, awaiting an official clarity response ... source to share You need to add the PrimeFaces namespace. xmlns:p="" source to share I have the same problem if I only use p: calendar on the form. All tags in the view are jsf / facelet tags. The only tag is the calendar. It looks like the java-script is being removed as an optimization? If I add more tags as p: dataTable the java script errors go away and the calendar component works. Perhaps this will point you where. As a workaround, you can use p: dataTable on a form with rendered = false. source to share
https://daily-blog.netlify.app/questions/1890183/index.html
CC-MAIN-2021-43
refinedweb
328
75.5
In the previous installment of this series, we implemented two very simple example programs, which nevertheless demonstrated quite a few of the core concepts of Qt programming. This month, let’s will take a step back and look at some of the fundamentals of programming with Qt. In the previous installment of this series, we implemented two very simple example programs, which nevertheless demonstrated quite a few of the core concepts of Qt programming. This month, let’s will take a step back and look at some of the fundamentals of programming with Qt. The class diagram in Figure One shows the static class structure of some of the more important Qt classes. Most classes ultimately derive from Qt. Qt doesn’t declare any member variables or functions, and contains only a number of public enums (such as DateFormat { TextDate, ISODate, LocalDate }); and since it doesn’t declare any member variables or functions, it merely provides a convenient grouping of the common enums without polluting the global namespace. The most important direct descendant of Qt is the QObject class. This common base class provides the necessary infrastructure for Qt’s language extensions. By inheriting from QObject it’s possible for custom objects to integrate seamlessly with Qt’s object model. We’ll investigate how that’s achieved below. Three important direct subclasses of QObject are QApplication, QWidget, and QCanvas. Each program that makes use of Qt’s GUI elements must first instantiate exactly one QApplication object, which handles initialization of the underlying windowing system. Instantiation of QApplication also provides the main event loop and some signals and slots related to the entire application’s lifecycle (such as the quit() slot). The QWidget class is the common base class of most of Qt’s GUI elements. Besides providing generic functions for basic geometry and look-and-feel management, QWidget handles mouse, keyboard, and other events generated by the user interface. (In Qt, every widget is rectangular and knows how to paint itself to the screen. Each widget is clipped by its parent and by widgets covering it. A widget without a parent is always a top-level widget, and is displayed in its own window.) QWidget also inherits from QPaintDevice, an object that can be drawn on using Qt’s general-purpose drawing tool QPainter. Other subclasses of QPaintDevice are QPixmap (for off-screen buffers), QPrinter (for “painting” to a hardcopy), and QPicture (for storing and replaying a sequence of QPainter commands). To draw onto a QPaintDevice, a QPicture is passed to a QPainter (typically in the QPainter’s constructor), and can then be drawn upon using QPainter’s drawLine(), drawRect(), or even drawCubicBezier() methods. Line and fill colors are set using QPainter’s setPen() and setBrush() methods, respectively. There is also a bitblt() function (a standalone function, not a member function) that copies pixels from one QPaintDevice to another. bitblt() is typically used to copy the contents of some backing store from memory to the screen. To build up a complicated graphic, for instance, one would perform all the graphics commands on a QPixmap, and bitblt() the results to a visible widget once the drawing is complete. It’s important to remember that QWidget is not an abstract class and can therefore be instantiated directly, for instance to create simple dialogs. However, for most applications, you can find more suitable subclasses of QWidget to create user interfaces. For example, the subclasses of QFrame — itself a direct descendant of QWidget — provide smart, yet lightweight ways to arrange other GUI elements. For graphics such as animations, you can use an alternative approach based on QCanvas. QCanvas doesn’t subclass QPaintDevice, and therefore can’t be drawn on using QPainter. Instead, objects of classes extending QCanvasItem are instantiated and can be placed and removed on a QCanvas. Compared to subclasses of QWidget, the subclasses of QCanvasItem are very lightweight: they don’t define signals and slots, for instance, and therefore cannot respond to user events directly. They are pure graphics elements, and not part of the active user interface. So, to generate animated graphics, it’s possible to give a QCanvasItem a velocity, such that they move “automatically” across a QCanvas whenever QCanvas’s advance() slot is activated. However, since QCanvas is not a widget, it cannot be rendered to the screen directly. Instead, use the QCanvasView widget (an indirect subclass of QFrame) to display a QCanvas object. A Peek Under the Covers Qt provides a set of language extensions that provide a degree of flexibility at run-time that’s unusual for a statically typed language like C++. Because C++ has no external run-time environment (as in Java), the additional information required to make these features work has to be added during the compilation cycle. Qt uses pre-processor macros and its own code-generator, moc, together with more conventional object-oriented techniques and patterns to implement its language extensions. Let’s peek under the covers a little bit and try to unravel how all of these things play together. Here, let’s focus on signals and slots — properties and RTTI are implemented in a similar fashion. The run-time language extensions fall into four groups: 1. Intra-process communication based on signals and slots. This was introduced in Part 1 of this series, and is discussed further below. 2. Run-time type information (RTTI). Information such as the name of the class and data about superclasses inherited by the current object can be obtained at run-time using the className() and inherits() member functions (the latter takes a string containing a class name as argument and returns a boolean value). 3. Limited object life-cycle management and garbage collection. Subclasses of QObject can arrange themselves in parent/child trees, employing the composite pattern. By passing a pointer to the parent object to the child’s constructor, the newly created child is adopted by the parent. As mentioned last month, a parent object deletes all of its children in its own destructor. Of course, Passing a NULL-pointer to the child’s constructor creates a child with no parent. If the object is a widget, it becomes a top-level window. The list of an object’s children is available through the children() member function, and can be manipulated using insertChild() and removeChild(). One caveat: once created, an object cannot change its parent. 4. Reflection. It’s possible to call getter/setter methods for properties defined in subclasses of QObject knowing only the name of the property and without having to downcast from QObject to the actual subclass, as in the following example: SomeClass *c = new SomeClass(parent); QObject *p = c; c->setA(a); p->setProperty(”a”, a); This mechanism requires the use of the Q_PROPERTY macro in the declaration of the subclass to map the property name to the actual accessor methods. A list of all defined property names is available through the class’ QMetaObject. The type of the property must be supported by QVariant, an opaque data type that accepts most primitive C++ data types, as well as those Qt classes, which are most likely to be used as properties (such as QColor and QSize). To take advantage of any of these facilities, a class must extend QObject and contain the pre-processor macro Q_OBJECT somewhere in the private part of its declaration. The definitions for this and related macros can be found in the qobjectdefs.h header file. The Q_OBJECT macro adds a number of function declarations and one private static data member of type QMetaObject. Most of the additional information required for the signal/slot mechanism, as well as the other language extensions, is encapsulated in this class. Since the data member is declared static, there’s exactly one instance of QMetaObject per class. The QMetaObject provides member functions such as className() and superClassName(), but its most interesting functions are probably signalNames() and slotNames(). How can the meta-object resolve this information at run-time? Enter the code-generator moc. Since moc is run after the code is written, but before it’s compiled or even pre-compiled, moc can preserve information contained in the source code that’s thrown away by the compiler. It generates code for the additional functions declared by the Q_OBJECT macro, as well as for the functions implementing signals. For each function declared as a slot, moc generates code that initializes the meta-object with the real name of the function implementing the slot, as well as the number and types of its arguments. This information is maintained in an array inside QMetaObject — in essence, QMetaObject maintains its own vtable! Qt defines a number of auxiliary classes (which are not part of the public and documented API) in the private subdirectory of $QTDIR/include to manage this data. The meta-object’s accessor method metaObject() forwards to a moc-generated function that contains code for all the initializer calls mentioned above, and by using a lazy initialization idiom, guarantees that the meta-object is properly set up before it is being used. Arguments are passed among signals and slots using a class mimicking an old-style union: the class has one data member for each built-in C++ data type. Arguments of other types (such as objects) are passed around as void pointers, which are suitably cast before being used. Since the moc generates code, it can generate code for a cast — something profoundly impossible to do using plain C/C++. Finally, control is dispatched to slots through another moc-generated function. Here, ordinary function calls to all programmer-declared slots have been generated and the suitable one is chosen using a large conditional statement based on a name lookup performed in QObject’s connect() function. Functions implementing the slots are called directly, not through function pointers — the code for the function call is right there in the generated code! The information maintained by Qt for signals is very similar to what’s maintained for slots. In addition, there are the actual implementations of the signal functions, which provide proper wrapping of the signals’ arguments (if any) and forward control to the publish/subscribe mechanism, which maintains the information about which slots are subscribed to the currently active signal. Several things may occur to you: First of all, signals are not asynchronous, although they give the illusion that they are. If a signal is connected to a slot, and the slot never returns, the application blocks forever. Control never passes back to the event-loop, rendering the user interface dead. If a single signal is connected to multiple slots, the slots are executed one after another, not in parallel. The order of processing processed is fixed by Qt’s dispatching mechanism, but is otherwise undefined. Finally, since the connection between signals and slots is made at run-time (and can in principle also be disconnected and re-connected), the linker does not check that signals and slots passed to connect() exist! Instead, if the attempt to make the connection fails at run-time, Qt’s message handler prints an appropriate message to the console — something sloppily written Qt applications frequently do. To suppress this behavior, one can install a custom message handler using the qInstallMsgHandler(QtMsg Handler) function. The type of the argument to this function is a typedef for a function pointer to a function with the following signature: void handler( QtMsgType, const char * ) where QtMsgType is a global enum that can take on the values { QtDebugMsg, QtWarningMsg, QtFatalMsg }. Consequentially, defining a handler() function with an empty body and registering it using the following call… qInstallMsgHandler( handler ); …prevents warning and error messages from being printed to the console. Basic Tools: moc and qmake moc is fairly simple to use. Its most important command line option is -o, which lets you specify the name of the output file. Other options tell moc whether to generate a file that can be #included into the source code, or one that can be linked, etc. However, as in any compile-and-build system, the code generated at various stages of the compile cycle can get out of sync. To be on the safe side, it’s a good idea to run moc every time the application is recompiled. Since moc is really fast, this method doesn’t pose any hardship. However, any multi-step build process calls for automation, using that old stalwart: make. To help quell make’s temperament (did you remember your tabs?), Qt ships with a Makefile generator called qmake. qmake reads a “project” file that must have the extension .pro. The project file assigns values to variables, which are later read by qmake to generate the actual Makefile. The most important variables are SOURCES and HEADERS, which contain the names of the source and header files making up the current project. Values can be assigned to variables: SOURCES = file1.cpp file2.cpp or added to the current content: SOURCES = file1.cpp SOURCES += file2.cpp There are additional variables, such as CONFIG for compiler and linker flags, and TARGET for the name of the executable. Project files can also contain conditional statements to target different platforms. There’s even a -= operator that removes a value from the variable without clobbering the rest of the variable’s content. qmake reads platform-specific information (such as the name of the compiler) from configuration files below $QTDIR/ mkspecs, and generates a Makefile that’s appropriate. The resulting Makefile contains the compilation steps for moc, as well as the options necessary to link against the Qt libraries. Finally, qmake can entirely bootstrap a simple project. If invoked with the -project option, qmake looks in the current directory for C++ source and header files and generates a project file with the same base name as the current directory. If that project file is then used as input to qmake, the resulting Makefile will compile and link all appropriate files and finally create an executable with the same name as the current directory! All About Relationships This middle installment of a three part series on Qt programming surveyed some of Qt’s most important classes and their inheritance relationships, and explained how Qt’s language extensions are realized. It also briefly introduced Qt’s build tools, moc and qmake. In the next and last installment, we’ll take a look at some of the other development tools that ship as part of the Qt package. Sorry, the comment form is closed at this time. Right Sizing Blades for the Midmarket Five Myths About Blade Servers Master of Puppet: System Management Made Easy Everything You Need - How blades grow, protect and simplify Alanna Dwyer Talks About Shorty
http://www.linux-mag.com/2003-12/compile_01.html
crawl-001
refinedweb
2,448
51.48
typedef keyword is used to assign a new name to any existing data-type.. Following is the syntax of typedef typedef current_name new_name; Now, suppose we want to declare two variables of type unsigned int. Instead of writing unsigned int again and again, let's use a new name uint in its place using typedef as follows: typedef unsigned int uint; uint i, j; Now, we can write uint in the whole program instead of unsigned int. The above code is the same as writing: unsigned int i, j; Let's see an example. #include <iostream> int main(){ typedef unsigned int ui; ui i = 5, j = 8; std::cout << "i = " << i << std::endl; std::cout << "j = " << j << std::endl; the structure named student which we saw in the Structure topic. #include <iostream> #include <cstring> using namespace std; typedef struct student { int roll_no; char name[30]; int phone_number; }st; int main(){ st p1, p2, p3; p1.roll_no = 1; strcpy(p1.name,"Brown"); p1.phone_number = 123443; p2.roll_no = 2; strcpy(p2.name,"Sam"); p2.phone_number = 1234567822; p3.roll_no = 3; strcpy(p3.name,"Addy"); p3.phone_number = 1234567844; cout << "First Student" << endl; cout << "roll no : " << p1.roll_no << endl; cout << "name : " << p1.name << endl; cout << "phone no : " << p1.phone_number << endl; cout << "Second Student" << endl; cout << "roll no : " << p2.roll_no << endl; cout << "name : " << p2.name << endl; cout << "phone no : " << p2.phone_number << endl; cout << "Third Student" << endl; cout << "roll no : " << p3.roll_no << endl; cout << "name : " << p3.name << endl; cout << "phone no : " << p3.phone_number << endl; return 0; } roll no : 1 name : Brown phone no : 123443 Second Student roll no : 2 name : Sam phone no : 1234567822 Third Student roll no : 3 name : Addy phone no : 1234567844 the structure with the keyword union in the place of struct.
https://www.codesdope.com/cpp-typedef/
CC-MAIN-2022-40
refinedweb
288
76.01
BEGIN: I'm assuming you are familiar with the vocabulary of Stack PUSH = Add to end POP = Get from end Queue ENQUEUE = Add to end DEQUEUE = Return from beginning Prerequisite: You only need to know this - In Java when you "ADD" to ArrayList, it adds in the end. - Similarly, if you use Javascript, you "PUSH" to an array, it adds the value in the end of the array. So, I came across this simple yet interesting topic of implementing a Simple Queue (FIFO) with 2 Stacks (LIFO) Having done this program in university (where I used scratch implementation in C++), I believe now more conciseness is required for interview preparations - and hence I'm using JAVA's native ArrayList to implement my own Stack and Queues. import java.util.ArrayList; import java.util.List; public class MyStack { private final List<Integer> stack = new ArrayList<>(); void push(int item) { stack.add(item); } int pop() { if (!stack.isEmpty()) { return stack.remove(stack.size() - 1); } return -1; // if nothing found } int size() { return stack.size(); } boolean isEmpty() { return stack.isEmpty(); } } So, now we have our Stack - it's that simple ;) And here's our Queue public class MyQueueWithTwoStacks { private final MyStack firstStack; private final MyStack secondStack; public MyQueueWithTwoStacks() { this.firstStack = new MyStack(); this.secondStack = new MyStack(); } boolean isEmpty() { return firstStack.isEmpty() && secondStack.isEmpty(); } int size() { return firstStack.size() + secondStack.size(); } void enqueue(int item) { firstStack.push(item); } /** * We will use the second stack to out the values, if the second bucket is * empty that means we need to copy over all stack1 entries to it * * @return returns the value */ int dequeue() { if (secondStack.isEmpty()) { while (!firstStack.isEmpty()) { secondStack.push(firstStack.pop()); } } return secondStack.pop(); } } Reference: - If you like theoretical overview, here's a super nice Post by @jellybee END. Discussion (3) Isn't dequeue O(n) here? A very good point. If you consider the worse case where we add and remove alternatively, we perform 2 operations (one push and then one pop) AFAIK The problem of O(n) lies with the use of ArrayList where the remove() takes O(n) to find the index. This post was supposed to give a starting point for how the concept is done. Can you walk through your process - maybe there's a more efficient way without adding complex logic? The nice thing is that you get an amortized O(1) TC and using linked lists for the stacks makes it appropriate for a purely functional implementation, if you're into that kind of thing.
https://dev.to/saurabhpro/implementing-queue-with-2-stacks-3l9f
CC-MAIN-2021-49
refinedweb
419
63.29
How do you like my RNG? #!/usr/bin/perl use warnings; use strict; use Time::HiRes qw ( usleep ); $| = 1; print "number of digits: "; my $num = <STDIN>; chomp $num; if($num < 1){ print "grirrrrr. You can't do that! Number must be 1 or greater!!! +\n"; exit(0); } print randmiz0r($num); sub randmiz0r{ my $digCount = shift; my $digCountChanging = $digCount + 1; #offset for first subtractio +n my $results = ""; for(my $i = 0;$i < $digCount;$i++){ my $randInput = 1 . (0 x ($digCountChanging - 1)); #retreave the last diggit my @prepLastDig = split //, &randmore($randInput); my $lastDig = pop @prepLastDig; $results = $lastDig . $results; #sleep because rand funct is time based usleep( int rand int rand 10000 ); # sleep for microseconds } if($results =~ m/^0/){ $results = catchStartZero($results); #numbers like 0345 don't +exist } return $results . "\n"; } sub randmore{ my $r = shift; my $oddCatch = $r % 2; $r = $r - $oddCatch; return (int (rand $r) + int (rand $r)); } sub catchStartZero{ my $removeZero = shift; $removeZero =~ s/^0//; my $notZero = 0; #confusing but nessisary until($notZero ne "0"){ $notZero = randmiz0r(1); chomp $notZero; #a \n is returned with randmiz0r() } return $notZero . $removeZero; } [download] Not much. # [...] rand funct is time based Do not call srand() multiple times in your program unless you know exactly what you're doing and why you're doing it. [...]. Just do it once at the top of your program, or you won't get random numbers out of rand()! Using rand until you have a number that fits your expectation isn't a very good idea either, that's wasting time again. If you want to make sure you have a number that's n digit long, without leading 0, you could just do : my $n = 10; my $lastDigits = int rand(10**($n-1)); my $firstDigit = 1+int(rand(9)); my $number = $firstDigit.$lastDigits; print $number; [download]. I original set out to just make an RNG to fit a certain length of chars but i though i would try to make it a bit more random. Thanks for the suggestions. What made you think the rand function is time based? Try adding srand(1000) to the start of your code and see what happens. You're just wasting time with those usleeps. Designing a good random number generator isn't easy, so it's better to use one of the RNG modules that has been properly tested. see rand for some suggestions. The maths behind random number generators and their proper testing is really interesting and well worth some study, if you have the time. Donald Knuth's celebrated masterwork, "The Art of Computer programming", vol. 2 (semi-numerical algorithms), has 193 pages on random numbers in my edition (3rd). You might guess from that that this is a pretty complicated and tricky subject. I would submit that you don't try to do better than existing packages before you've read and understood most of this material (or equivalent from other really authoritative sources).
http://www.perlmonks.org/?node_id=1050090
CC-MAIN-2014-23
refinedweb
486
70.94
Are you sure? This action might not be possible to undo. Are you sure you want to continue? Instructions for Form 990EZ Short Form Return of Organization Exempt From Income Tax Under section 501(c) of the Internal Revenue Code (except black lung benefit trust or private foundation) or section 4947(a)(1) charitable trust (For organizations with gross receipts of less than $100,000 and total assets of less than $250,000 at end of year.) (Section references are to the Internal Revenue Code unless otherwise indicated.). The time needed to complete and file this form will vary depending on individual circumstances. The estimated average times are: Learning about the law or the form 4 hr., 8 min. 8 hr., 56 min. Preparing the form 5 hr., 41 min. 10 hr., 2 min. Copying, assembling, and sending the form to the IRS 16 min. -0- Form 990EZ Sch. A (990) Recordkeeping 26 hr., 33 min. 43 hr., 32-1150), Washington, DC 20503. DO NOT send the tax form to either of these offices. Instead, see General Instruction H for information on where to file it. General Instructions Note: An organization’s completed Form 990EZ (except for the schedule of contributors) is available for public inspection as required by section 6104. Some members of the public rely on Form 990EZ as the primary or sole source of information about a particular organization. How the public perceives an organization in such cases may be determined by the information presented on its return. Therefore, please make sure your return is complete and accurate and fully describes your organization’s programs and accomplishments. Section 501(c)(3) organizations and section 4947(a)(1) trusts must also attach a completed Schedule A (Form 990) to their Form 990EZ (or Form 990). Purpose of Form.—Form 990EZ, an annual information return, is a shortened version of Form 990, Return of Organization Exempt From Income Tax. It is designed for use by small tax-exempt organizations and nonexempt charitable trusts to provide the IRS with the information required by section 6033. Contents Page 1–5 ● General Instructions 1 A. Who Must File 2 B. Organizations Not Required To File C. Forms You May Need To File or Use 2 D. Helpful Publications 3 E. Use of Form 990EZ To Satisfy State 3 Reporting Requirements Contents Page F. Other Forms as Partial Substitutes for 3 Form 990EZ 4 G. Accounting Period Covered 4 H. When and Where To File 4 I. Extension of Time To File 4 J. Amended Return 4 K. Penalties L. Public Inspection of Completed Exempt Organization Returns and Approved 4 Exemption Applications M. Disclosures Regarding Certain 5 Information and Services Furnished N. Disclosures Regarding Certain 5 Transactions and Relationships 5 O. Erroneous Backup Withholding 5 P. Group Return Q. Organizations in Foreign Countries 5 and U.S. Possessions 5 ● Specific Instructions ● Part I—Statement of Revenue, Expenses, and Changes in Net Assets 6 or Fund Balances 10 ● Part II—Balance Sheets ● Part III—Statement of Program Service 11 Accomplishments ● Part IV—List of Officers, Directors, and 11 Trustees ● Part V—Other Information 11 A. Who Must File.— 1. IMPORTANT NOTE: Gross receipts and total assets requirements.—Except for those types of organizations listed in General Instruction B, an annual return on Form 990 (or Form 990EZ) is required from every organization exempt from tax under section 501(a). This includes foreign Cat. No. 64888C organizations and cooperative service organizations described in sections 501(e) and (f), and child care organizations described in section 501(k). Organizations whose annual gross receipts are normally more than $25,000 must file Form 990 (or Form 990EZ) (see General Instruction B11). An organization may file Form 990EZ, instead of Form 990, if it meets BOTH of the following requirements: its gross receipts during the year were less than $100,000 AND its total assets (line 25, column (B) of Form 990EZ) at the end of the year were less than $250,000. (See General Instruction B11(a) for calculating gross receipts.) If your organization fails to meet either of these conditions, you may not file Form 990EZ. Instead, you must file Form 990. 2. Section 4947(a)(1) nonexempt charitable trust.—Any nonexempt charitable trust (described in section 4947(a)(1)) not treated as a private foundation is also required to file Form 990 (or Form 990EZ) if its gross receipts are normally more than $25,000. See General Instruction A1 for Form 990EZ eligibility requirements. See General Instruction C7 for information regarding possible relief from filing Form 1041, U.S. Fiduciary Income Tax Return. 3. Exemption application pending.—If your application for exemption is pending, check the Application Pending box (item G) at the top of page 1 of the return and complete the return in the normal manner. 4. If you received a Form 990 Package.—If you are not required to file Form 990EZ because your gross receipts are normally not more than $25,000 (see General Instruction B11 below), we ask that you file anyway if we sent you a Form 990 Package with a preaddressed mailing label. Attach the label to the name and address space on the return (see Specific Instructions.) Check the box in item J in the area above Part I to indicate that your gross receipts are below the $25,000 filing minimum; sign the return; and send it to the Service Center for your area. You do not have to complete Parts I through V of the return. By following this instruction, you will help us to update our records, and we will not have to contact you later asking why no return was filed. If you file a return this way, you will not be mailed a Form 990 Package in later years and need not file Form 990 (or Form 990EZ) again until your gross receipts normally exceed the $25,000 minimum, or you terminate or undergo a substantial contraction as described in the instructions for line 36.EZ), but it does not file a return or advise us that it is no longer required to file. However, contributions to such an organization may continue to be deductible by the general public until the IRS publishes a notice to the contrary in the Internal Revenue Bulletin. B. Organizations Not Required To File.— (Note: Organizations not required to file this form with the IRS may nevertheless wish to use it to satisfy state reporting requirements. For details, see General Instruction E.) The following types of organizations exempt from tax under section 501(a) do not have to file Form 990 (or Form 990EZ): (a) Instrumentalities of the United States, and (b). An organization whose annual gross receipts are normally $25,000 or less is not required to file; however, see General Instruction A4. (a) Calculating gross receipts.—The organization’s gross receipts are the total amount it received from all sources during its annual accounting period, without subtracting any costs or expenses. (Gross receipts are the sum of lines 1, 2, 3, 4, 5a, 6a, 7a, and 8 of Part I. You can also calculate gross receipts by adding back the amounts on lines 5b, 6b, and 7b to the total revenue reported on line 9. For example: On line 9 of its Form 990EZ for 1991, Organization M reported $50,000 as total revenue. M added back the costs and expenses it had deducted on lines 5b ($2,000); 6b ($1,500); and 7b ($500) to its total revenue of $50,000 and determined that its gross receipts for the tax year were $54,000. (b) test.—An organization’s gross receipts are considered normally to be $25,000 or less if the organization is: (a) Up to a year old and has received, or donors have pledged to give, $37,500 or less during its first tax year; (b) Between one and three years old and averaged $30,000 or less in gross receipts during each of its first two tax years; or (c) Three years old or more and averaged $25,000 or less in gross receipts for the immediately preceding three tax years (including the year for which the return would be filed). C. Forms You May Need To File or Use.— 1. Schedule A (Form 990).— Organization Exempt Under 501(c)(3) (Except Private Foundation), 501(e), 501(f), 501(k), or Section 4947(a)(1) Charitable Trust Supplementary Information. Filed with Form 990EZ for a section 501(c)(3) organization that is not a private foundation (including an organization described in section 501(e), 501(f), or 501(k)). Also filed with Form 990EZ for a section 4947(a)(1) charitable trust not treated as a private foundation. An organization is not required to file Schedule A if its gross receipts are normally $25,000 or less (see General Instruction B11). 2. Forms W-2 and W-3.—Wage and Tax Statement, and Transmittal of Income and Tax Statements. 3. Form 940.—Employer’s Annual Federal Unemployment (FUTA) Tax Return. 4. Form 941.—Employer’s Quarterly Federal Tax Return. Used to report social security and income taxes withheld by an employer and social security tax paid by an employer. 5. Form 990-T.—Exempt Organization Business Income Tax Return. Filed separately for organizations with gross income of $1,000 or more from business unrelated to the organization’s exempt purpose. 6. Form 990-W.—Estimated Tax on Unrelated Business Taxable Income for Tax-Exempt Organizations. 7. Form 1041.—U. S. Fiduciary Income Tax Return. Required of section 4947(a)(1) charitable trusts that also file Form 990 (or 990EZ). However, if such a trust does not have any taxable income under Subtitle A of the Code, it can file either Form 990 (or 990EZ) and need not file Form 1041 to meet its section 6012 filing requirement. If this condition is met, check the box for question 42 on page 2 of Form 990EZ and do not file Form 1041, but complete Form 990EZ in the normal manner. A section 4947(a)(1) charitable trust that normally has gross receipts of not more than $25,000 (see General Instruction B11) and has no taxable income under Subtitle A must complete only the following items in the heading of Form 990EZ: Item A. Fiscal year (if applicable); B. Name and address; C. Employer identification number; F. Section 4947(a)(1) box; and question 42 and the signature block on page 2. 8. Form 1096.—Annual Summary and Transmittal of U.S. Information Returns. 9. Form 1099 Series.—Information returns for reporting payments such as dividends, interest, miscellaneous income (including medical and health care payments and nonemployee compensation), original issue discount, patronage dividends, real estate transactions, acquisition or abandonment of secured property, and distributions from annuities, pensions, profit-sharing plans, retirement plans, etc. 10. Form 1120-POL.—U.S. Income Tax Return for Certain Political Organizations. 11. Form 1128.—Application To Adopt, Change or Retain a Tax Year. 12. Form 2758.—Application for Extension of Time To File Certain Excise, Income, Information, and Other Returns. 13. Form 4506-A.—Request for Public Inspection or Copy of Exempt Organization Tax Form. 14. Form 4720.—Return of Certain Excise Taxes on Charities and Other Persons Under Chapters 41 and 42 of the Internal Revenue Code. Section 501(c)(3) organizations that file Form 990 (or 990EZ), as well as the managers of these organizations, use this form to report their tax on political expenditures and certain lobbying expenditures. 15. Form 5500 or. Page 2 The forms required to be filed are: Form 5500.—Annual Return/Report of Employee Benefit Plan. Used for each plan with 100 or more participants. Form 5500-C/R.—Return/Report of Employee Benefit Plan. Used for each plan with fewer than 100 participants. 16. Form 5768.—Election/Revocation of Election by an Eligible Section 501(c)(3) Organization To Make Expenditures To Influence Legislation. 17. Form 8282.—Donee Information Return. Required of the donee of “charitable deduction property” who sells, exchanges, or otherwise disposes of the property within two years after receiving the property. Also, the form is required of any successor donee who disposes of charitable deduction property within two years after the date that the donor gave the property to the original donee. (It does not matter who gave the property to the successor donee. It may have been the original donee or another successor donee.) For successor donees, the form must be filed only for any property that was transferred by the original donee after July 5, 1988. 18. would not be subject to the reporting requirement since the funds were not received in the course of a trade or business. 19. Form 8822.—Change of Address. Used to notify the IRS of a change in mailing address that occurs after the return is filed. D. Helpful Publications.— Publication 525.—Taxable and Nontaxable Income. Publication 598.—Tax on Unrelated Business Income of Exempt Organizations. Publication 910.—Guide to Free Tax Services. Publication 1391.—Deductibility of Payments Made to Charities Conducting Fund-Raising Events. Publications and forms are available free at many IRS offices or by calling 1-800-TAX-FORM (1-800-829-3676). E. Use of Form 990EZ To Satisfy State Reporting Requirements.—Some states and local government units will accept a copy of Form 990EZ and Schedule A (Form 990) in place of all or part of their own financial report forms. The substitution applies primarily to section 501(c)(3) organizations, but some of the other types of section 501(c) organizations are also affected. If you intend to use Form 990EZ to satisfy state or local filing requirements, such as those under state charitable solicitation acts, note the following: Determine state filing requirements.— You should consult the appropriate officials of all states and other jurisdictions in which you do. Monetary tests may differ.—Some or all of the dollar limitations applicable to Form 990EZ when filed with IRS may not apply when using Form 990EZ in place of state or local report forms. Examples of IRS dollar limitations that do not meet some state requirements are the $25,000 gross receipts minimum that creates an obligation to file with IRS (see General Instruction B11), and the $30,000 minimum for listing professional fees in Part II of Schedule A (Form 990). Additional information may be required.—State or local filing requirements may require you to attach to Form 990EZ one or more of the following: (a) additional financial statements, such as a complete analysis of functional expenses or a statement of changes in financial position; EZ filed with IRS. Even if the Form 990EZ you file with IRS is accepted by IRS as complete, a copy of the same return filed with a state will not fully satisfy that state’s filing requirement if required information is not provided, including any of the additional information discussed above, or if the state determines that the form was not completed in accordance with the applicable Form 990EZ instructions or supplemental state instructions. If so, you may be asked to provide the missing information or to submit an amended return. Use of audit guides may be required.— To ensure that all organizations report similar transactions uniformly, many states require that contributions, gifts, and grants on line 1 in Part I and program service expenses in Part III be reported in accordance with the AICPA industry audit guide, Audits of Voluntary Health and Welfare Organizations (New York, AICPA, 1988), as supplemented by Standards of Accounting and Financial Reporting for Voluntary Health and Welfare Organizations (New York, National Health Council, Inc. (Washington, DC), 1988), and by Accounting and Financial Reporting—A Guide for United Ways and Not-for-Profit Human Service Organizations (Alexandria, Va., United Way Institute, 1989). However, although reporting donated services and facilities as items of revenue and expense is called for in certain circumstances by the three publications named above, many states and IRS do not permit the inclusion of those amounts in Part I of Form 990EZ. The instructions in Part III(a) discuss the optional reporting of donated services and facilities. Amended returns.—If you submit supplemental information or file an amended Form 990EZ with IRS, you must also furnish a copy of the information or amended return to any state with which you filed a copy of Form 990EZ originally to meet that state’s filing requirement. If a state requires you to file an amended Form 990EZ to correct conflicts with Form 990EZ instructions, you must also file an amended return with IRS. Method of accounting.—Most states require that all amounts be reported based on the accrual method of accounting. See also Specific Instructions, item H. Time for filing may differ.—The time for filing Form 990EZ with IRS differs from the time for filing reports with some states. Public inspection.—The Form 990EZ information made available for public inspection by IRS may differ from that made available by the states. See the cautionary note for Part I, line 1, instruction D, Note (2). State Registration Number.—Insert the applicable state or local jurisdiction registration or identification number in item D (in the heading on page 1) for each jurisdiction in which you file Form 990EZ in place of the state or local form. When filing in several jurisdictions, prepare as many copies as needed with item D blank. Then enter the applicable registration number on the copy to be filed with each jurisdiction. F. Other Forms as Partial Substitutes for Form 990EZ.—Except as provided below, the IRS will not accept any form as a substitute for one or more parts of Form 990EZ. (1) Labor organizations.—A labor organization that files Form LM-2, Labor Organization Annual Report, or the shorter Form LM-3 with the U.S. Department of Labor (DOL) can attach a copy of the completed DOL form to provide some of the information required by Form 990EZ. This substitution is not permitted if the organization files a DOL report that consolidates its financial statements with those of one or more separate subsidiary organizations. (2) Employee benefit plans.—An employee benefit plan may be able to substitute Form 5500, or Form 5500-C/R, for part of Form 990EZ. The substitution can be made if the organization filing Form 990EZ and the plan filing Form 5500 or 5500-C/R meet all the following tests: (a) The Form 990EZ filer is organized under section 501(c)(9), (17), (18), or (20); (b) The Form 990EZ filer and Form 5500 filer are identical for financial reporting purposes and have identical receipts, disbursements, assets, liabilities, and equity accounts; Page 3 (c) The employee benefit plan does not include more than one section 501(c) organization, and the section 501(c) organization is not a part of more than one employee benefit plan; and (d) The organization’s accounting year and the employee plan year are the same. If they are not, you may want to change the organization’s accounting year, as explained in General Instruction G, so it will coincide with the plan year. Allowable substitution areas.—Whether you file Form 990EZ for a labor organization or for an employee plan, the areas of Form 990EZ for which other forms can be substituted are the same. These areas are: Part I, lines 10 through 16 (but complete lines 17 through 21). Part II (but complete lines 25 through 27, columns (A) and (B)). If you substitute Form LM-2 or LM-3 for any of the Form 990EZ Parts or line items mentioned above, you must attach a reconciliation sheet to show the relationship between the amounts on the DOL forms and the amounts on Form 990EZ. This is particularly true of the relationship of disbursements shown on the DOL forms and the total expenses on line 17, Part I, of Form 990EZ. You must make this reconciliation because the cash disbursements section of the DOL forms includes nonexpense items. If you substitute Form LM-2, be sure to complete its separate schedule of expenses. G. Accounting Period Covered.—Base your return on your annual accounting period (fiscal year) if one is established. If not, base the return on the calendar year. Your fiscal year should normally coincide with the natural operating cycle of your organization. Your fiscal year need not end on December 31 or June 30. Use the 1991 Form 990EZ to report on a calendar-year 1991 accounting period or a fiscal year that began in 1991. If you change your accounting period, you may also use the 1991 form as the return for a short period (less than 12 months) ending November 30, 1992, or earlier. In general, to change your accounting period, you must file timely a return on Form 990EZ for the short period resulting from the change. At the top of the short period return, write Change of Accounting Period. If you changed your accounting period within the 10-calendar-year period that includes the beginning of the short period, and you had a Form 990EZ (or Form 990) filing requirement at any time during that 10-year period, you must also attach a Form 1128 to the short period return. See Rev. Proc. 85-58, 1985-2 C.B. 740. H. When and Where To File.—File Form 990EZ by the 15th day of the 5th month after your accounting period ends. If the organization is liquidated, dissolved, or terminated, file the return by the 15th day of the 5th month after the change. If the return is not filed by the due date (including any extension granted), attach a statement giving your reasons for not filing timely. If the principal office is Alaska, California, Hawaii, Idaho, Nevada, Oregon, Washington Connecticut, Maine, Massachusetts, New Hampshire, New York, Rhode Island, Vermont Illinios, Iowa, Minnesota, Missouri, Montana, Nebraska, North Dakota, South Dakota, Wisconsin Delaware, District of Columbia, Maryland, New Jersey, Pennsylvania, Virginia, any U.S. possession, or foreign country Send your return to the Internal Revenue Service Center below Atlanta, GA 39901 Austin, TX 73301 Cincinnati, OH 45999 Fresno, CA 93888 Holtsville, NY 00501 enter “None” or “N/A” if an entire part does not apply. Against responsible person(s).—If you do not file a complete return or do not furnish correct information, IRS will write to give you a fixed time to fulfill these requirements. After that period expires, the person failing to comply will be charged a penalty of $10 a day, IRS (sections 7203, 7206, and 7207). There are also penalties for failure to comply with public disclosure requirements as discussed in General Instruction L. States may impose additional penalties for failure to meet their separate filing requirements. L. Public Inspection of Completed Exempt Organization Returns and Approved Exemption Applications.— Through the IRS.— Forms 990, 990EZ, 990-PF, and certain other completed exempt organization returns are available for public inspection and copying upon request. Approved applications for exemption from Federal income tax are also available. The IRS, however, may not disclose portions of an application relating to any trade secrets, etc., nor can the IRS disclose the schedule of contributors required by Forms 990 and 990EZ (section 6104).. You can use Form 4506-A to request a copy or to inspect an exempt organization return. There is a fee for photocopying. Through the Organization.— (1) Annual return.—An organization must, during the three-year period beginning with the due date (including extensions, if any), of the Form 990 (or 990EZ), Kansas City, MO 64999 Philadelphia, PA 19255 I. Extension of Time To File.—Use Form 2758 to request an extension of time to file. J. Amended Return.—To change your return for any year, file a new return with the correct information that is complete in all respects, including required attachments. Thus, the amended return must provide all the information called for by the form and instructions, not just the new or corrected information. Write Amended Return at the top of the return. You may file an amended return at any time to change or add to the information reported on a previously filed return for the same period. You must make the amended return available for public inspection for 3 years from the date of filing or 3 years from the date the original return was due, whichever is later. Use Form 4506-A to obtain a copy of a previously filed return. You can obtain blank forms for prior years by calling the toll-free number given in General Instruction D. K. Penalties.— Against the organization.—Under section 6652(c), a penalty of $10 a day, not to exceed the lesser of $5,000 or 5% of the gross receipts of the organization for the year, may be charged when a return is filed late, unless you can show that the late filing was due to reasonable cause. The penalty begins on the due date for filing the Form 990EZ. The penalty may also be charged if you file an incomplete return or furnish incorrect information. To avoid having to supply missing information later, be sure to complete all applicable line items; answer “Yes,” “No,” or “N/A” (not applicable) to each question on the return; make an entry (including a “-0-” when appropriate) on all total lines; and Page 4 three or more employees. This provision applies to any organization that files Form 990 (or 990EZ), regardless of the size of the organization and whether or not it has any paid employees. If an organization furnishes additional information shall be assessed a penalty of $10 for each day that inspection was not permitted, up to a maximum of $5,000 with respect to any one return. No penalty will be imposed if the failure is due to reasonable cause. Any person who willfully fails to comply shall be subject to an additional penalty of $1,000 (sections 6652(c) and 6685). (2) maintain a permanent office, it must provide a reasonable location for the inspection of both its annual returns and exemption application. The information may be mailed. (See reference to Notice 88-120 in the discussion above for Annual return.) The organization need not disclose any portion of an application relating to trade secrets, etc., that would not also be disclosable by the IRS. The penalties for failure to comply with this provision are the same as those discussed in Annual return above, except that the $5,000 limitation does not apply.. N. Disclosures Regarding Certain Transactions and Relationships.—In their annual returns on Schedule A (Form 990), section 501(c)(3) organizations must disclose information regarding their direct or indirect transfers to, and other direct or indirect relationships with, other section 501(c) organizations (except other section 501(c)(3) organizations) or section 527 political organizations. This provision helps to prevent the diversion or expenditure of a section 501(c)(3) organization’s funds for purposes not intended by section 501(c)(3). All section 501(c)(3) organizations must maintain records regarding all such transfers, transactions, and relationships. (See General Instruction K, Penalties.) O. Erroneous Backup Withholding.— Recipients of dividend or interest payments generally must certify their correct tax identification number to the bank or other payer on Form W-9, Request for Taxpayer Identification Number and Certification.. See the Instructions for Form 990-T if you had backup withholding erroneously withheld. Claims for refund must be filed within three years after the date the original return was due; three years after the date the organization filed it; or two years after the date the tax was paid, whichever is later. P. Group Return.—If a parent organization wants to file a group return for two or more of its subsidiaries, it must use Form 990. The parent organization cannot use Form 990EZ. See the Instructions for Form 990 for filing a group return. Q. Organizations in Foreign Countries and U.S. Possessions.—Report amounts in U.S. dollars and state what conversion rate you use. Combine amounts from within and outside the United States and report the total for each item. All information must be given in the English language. Specific Instructions Completing the Heading of Form 990EZ.— The instructions that follow are keyed to items in the heading for Form 990-EZ. Item A. Accounting Period.—Use the 1991 Form 990EZ to report on a calendar year or a fiscal year accounting period that began in 1991. Show the month and day your fiscal year began in 1991 and the date the fiscal year ended. (Refer to General Instruction G.) Item B. Name and Address.—If we mailed you a Form 990 Package with a preaddressed mailing label, please attach the label in the name and address space on your return. Using the label helps us avoid errors in processing your P.O. box number instead of the street address. Item C. Employer Identification Number.—You should have only one Federal employer identification number. If you have more than one and have not been advised which to use, notify the Service Center for your area (from the Where to File list in General Instruction H). State what numbers you have, the name and address to which each number was assigned, and the address of your principal office. The IRS will advise you which number to use. Section 501(c)(9) organizations must use their own employer identification number and not the number of their sponsor. Item D. State Registration Number.— (See General Instruction E.) Item E. Group Exemption Number.—If you are covered by a group exemption letter, enter the four-digit group exemption number (GEN). Item F. Type of Organization.—If your organization is exempt under section 501(c), check the applicable box and insert within the parentheses a number that identifies your type of section 501(c) organization. If you are a section 4947(a)(1) trust, check the applicable box and see General Instruction C7 and question 42 of Form 990EZ. Item G. Application Pending.—If your application for exemption is pending, check this box and complete the return. Item H. Accounting Method.—Indicate the method of accounting used in preparing this return. Unless the specific instructions say otherwise, you should generally use the same accounting method on the return to figure revenue and expenses that you regularly use to keep the organization’s books and records. To be acceptable for Form 990EZ reporting purposes, however, the method of accounting used must clearly reflect income. If you prepare a Form 990EZ for state reporting purposes, you may file an identical return with IRS even though it does not agree with your books of account, unless how you report one or more items on the state return conflicts with the instructions for preparing Form 990EZ for filing with IRS. For example, if you maintain your books on the cash receipts and disbursements method of accounting but prepare a state return Page 5 based on the accrual method, you could use that return for reporting to IRS. As another example, if a state reporting requirement requires you to report certain revenue, expense, or balance sheet items differently from how you normally account for them on your books, a Form 990EZ prepared for that state is acceptable for IRS reporting purposes if the state reporting requirement does not conflict with the Form 990EZ instructions. You should keep with your records a reconciliation of any differences between your books of account and the Form 990EZ you file. Most states that accept Form 990EZ in place of their own forms require that all amounts be reported based on the accrual method of accounting. See General Instruction E. Item I. Change of Address.—If you changed your address since you filed your previous return, check this box. Item J. Gross Receipts of $25,000 or Less.—Check this box if your gross receipts are normally not more than $25,000. However, see General Instructions A4 and B11. Item K. Calculating your Gross Receipts.—Only those organizations with gross receipts less than $100,000 and total assets less than $250,000 at the end of the year can use the Form 990EZ. If you do not meet these requirements, you must file Form 990. (See General Instruction B11.) Public Inspection.—All information you report on or with your Form 990EZ, including attachments, will be available for public inspection, except the schedule of contributors required for line 1, Part I. Please make sure your. If the return was prepared by an individual, firm, or corporation paid for preparing it, the paid preparer’s space must also be signed. For a firm or corporation that was a paid preparer, sign in the firm’s or corporation’s name. If you checked the box for question 42 on page 2 (section 4947(a)(1) charitable trust filing Form 990EZ instead of Form 1041), you must also enter the paid preparer’s social security number or employer identification number in the margin next to the paid preparer’s space. Leave the paid preparer’s space blank if the return was prepared by a regular employee of the filing organization. Rounding Off to Whole-Dollar Amounts.—You may show money items as whole-dollar amounts. Drop any amount less than 50 cents and increase any amount from 50 through 99 cents to the next higher dollar. Completing All Lines.—Unless you are permitted to use certain DOL forms or Form 5500 series returns as partial substitutes for Form 990EZ (see General Instruction F), do not leave any applicable lines blank or attach any other forms or schedules instead of entering the required information on the appropriate line on Form 990EZ. Assembling Form 990EZ.—Before filing the Form 990EZ, assemble your package of forms and attachments in the following manner: ● Form 990EZ ● Schedule A (Form 990) ● Attachments to Form 990EZ ● Attachments to Schedule A (Form 990) Attachments.—Use the schedules on the official form unless you need more space. If you use attachments, they must: (1) Show the form number and tax year; (2) Show the organization’s name and employer identification number; (3) Include the information required by the form; (4) Follow the format and line sequence of the form; and (5) Be on the same size paper as the form. Part I—Statement of Revenue, Expenses, and Changes in Net Assets or Fund Balances.— All organizations filing Form 990EZ with the IRS or any state must complete Part I. Some states that accept Form 990EZ in place of their own forms may require additional information (see General Instruction E). Line 1.—Contributions, gifts, grants, and similar amounts received.— A. What is included on line 1.— Report amounts received as voluntary contributions; that is, payments, or the part of any payment, for which the payer (donor) does not receive full value (fair market value) from the recipient (donee) organization. Enter the gross amounts of contributions, gifts, grants and bequests that the organization received from individuals, trusts, corporations, estates, affiliates, foundations, public charities, and other exempt organizations. (a) Contributions can arise from special events when excess payment received for items offered.—Special fundraising activities such as dinners, door-to-door sales of merchandise, carnivals, and bingo games can produce both contributions and revenue. Report as a contribution both on line 1 and on line 6a (within parentheses) any amount received through a special event that is greater than the value of the merchandise or services furnished by the organization to the contributor. This situation usually occurs when organizations seek support from the public through solicitation programs that are in part special fundraising events or activities and are in part solicitations for contributions. The primary purpose of such solicitations is to receive contributions and not to sell the merchandise at its fair market value (even though this might produce a profit). For and again on the description line of 6a (within the parentheses). The revenue received ($16 retail value of the book) would be reported in the amount column on line 6a. Any expenses directly relating to the sale of the book would be reported on line 6b. If a contributor gives more than $40, that person would be making a larger contribution, the difference between the book’s retail value of $16 and the amount actually given. (See also line 6 instructions and Publication 1391.) At the time of any solicitation or payment, organizations that are eligible to receive contributions should advise patrons of the amount deductible for Federal tax purposes. (b) Contributions can arise from special events when items of only nominal value offered.—If an organization offers goods or services of only nominal value through a special event, report the entire amount received for such benefits as a contribution on line 1. Report all related expenses on lines 12 through 16. Benefits have a nominal value when: (1) The benefit’s fair market value is not more than 2% of the payment, or $50, whichever is less; or (2) The payment is $28.58 or more; the only benefits received are token items bearing the organization’s name or symbol; and the organization’s cost (as opposed to fair market value) is $5.71 or less for all benefits received by a donor during the calendar year. These amounts are adjusted annually for inflation. (See Rev. Proc. 90-12, 1990-1 C.B. 471 and Rev. Proc. 90-64, 1990-2 C.B. 674.) (c) Section 501(c)(3) organizations.— These organizations must compute the amounts of revenue and contributions received from special events according to the above instructions when preparing their Support Schedule in Part IV of Schedule A (Form 990). (d) Page 6 used for, such as an adoption program or a disaster relief project. A grant is still equivalent to a contribution if the grant recipient performs a service, or produces a work product, that benefits the grantor incidentally (but see line 1 instruction B(a) below). (e) Contributions received through other fundraising organizations.—Contributions received indirectly from the public through solicitation campaigns conducted by federated fundraising agencies (such as United Way) are included on line 1. (f) Contributions received from associated organizations.—Include on line 1 amounts contributed by other organizations closely associated with the reporting organization. This would include contributions received from a parent organization, subordinate, or another organization having the same parent. (g) Contributions from a commercial co-venture.—Amounts contributed by a commercial co-venture should be included on line 1. These contributions are amounts received by the organization for allowing an outside organization or individual to use the organization’s name in a sales promotion campaign. (h) Contributions from governmental units.—A grant, or other payment from a governmental unit, represents). (i) Contributions in the form of membership dues.—Include on line 1 membership dues and assessments to the extent they are contributions and not payments for benefits received (see line 3, instruction C(a)). B. What is not included on line 1.— (a) Grants that are payments for services are not contributions.—A grant is a payment for service,) Donations of services.—Do not include the value of services donated to the organization, or items such as the free use of materials, equipment, or facilities, as contributions on line 1. However, for the optional reporting of such amounts, see instruction (a) for Part III. (c) Section 501(c)(9), (17), (18), and (20) organizations.—These organizations provide participants with life, sick, accident, welfare, unemployment, pension, group legal services, or similar benefits, or a combination of these benefits. When such an organization receives payments from participants, or their employers, to provide these benefits, report the payments on line 2 as program service revenue, rather than on line 1 as contributions. C. How to value also line 1 instruction D, Note (1) below. D. Schedule of contributors.—(Not open to public inspection) Caution: See Note (2) below. Attached schedule.—Attach a schedule listing contributors who, during the year, gave the organization, directly or indirectly, money, securities, or other property worth $5,000 or more. If no one contributed the reportable minimum, you do not need to attach a schedule. In the schedule, show each contributor’s name and address, the total amount received, and, estimate the property’s value. Exception: Organization described in section 501(c)(7), (8), (10), or (19) that received contributions or bequests to be used only as described in section 170(c)(4), 2055(a)(3), or 2522(a)(3). If an organization meets the terms of this exception, some information in its schedule will vary from that described above. The schedule should list each person whose gifts total $1,000 or more (for instance, whether it is mingled with amounts held for other purposes). If the organization transferred the gift to another organization, name and describe the recipient and explain the relationship between the two organizations. Also show the total of the gifts that were $1,000 or less and were for a purpose described in section 170(c)(4), 2055(a)(3), or 2522(a)(3). Note (1): If you qualify to receive tax-deductible charitable contributions and you receive contributions of property (other than publicly traded securities) whose fair market value is more than $5,000, you should usually receive a partially completed Form 8283, Noncash Charitable Contributions, from the contributor. You should complete the appropriate information on Form 8283, sign it, and return it to the donor. Retain a copy for your records. See also General Instruction C17. Note (2): If you file a copy of Form 990EZ and attachments with any state, do not include, in the attachments for the state, the schedule of contributors discussed above unless the schedule is specifically required by the state with which you are filing the return. States that do not require the information might nevertheless make it available for public inspection along with the rest of the return. Line 2—Program service revenue.. Other examples of program service revenue are for accomplishing an exempt purpose of the investing organization rather than to produce income. Examples are scholarship loans and low-interest loans to charitable organizations, indigents, or victims of a disaster. Rental income from an exempt function is another example. See line 4 instructions. (c) Unrelated trade or business activities.—Unrelated trade or business activities (not including any special fundraising events or activities) that generate fees for services may also be program service activities. A social club, for example, should report as program service revenue the fees it charges both members and nonmembers for the use of its tennis courts and golf course. Page 7 Line 3—Membership dues and assessments.—Enter members’ and affiliates’ dues and assessments that are not contributions. A. What is included on line 3.— (a) Dues and assessments received that compare reasonably with available benefits.—When dues and assessments are received that compare reasonably with available membership benefits, report such dues and assessments on line 3. (b) Organizations that usually match dues and benefits.—Organizations, other than those described in section 501(c)(3), generally provide benefits that have a reasonable relationship to dues. This occurs usually in organizations described in sections 501(c)(5), (6), or (7), although benefits to members may be indirect. Report such dues and assessments on line 3. B. Examples of membership benefits.— Examples of membership benefits include subscriptions to publications; newsletters (other than one about the organization’s activities only); free or reduced-rate admissions to events the organization sponsors; use of its.— (a) Dues or assessments received that exceed the value of available membership benefits.—Whether or not membership benefits are used, dues received by an organization, to the extent they exceed the monetary value of the membership benefits available to the dues payer, are a contribution includable on line 1. (b) Dues received primarily for the organization’s support.—If a member pays dues primarily to support the organization’s activities, and not to obtain benefits of more than nominal monetary value, those dues are a contribution to the organization includable on line 1. Line 4—Investment income.— A. What is included on line 4.— (a) Interest on savings and temporary cash investments.—Enter, etc., are actually interest and should be included on this line. (b) Dividends and interest from securities.—Enter the amount of dividend and interest income from debt and equity securities (stocks and bonds) on this line. Include amounts received from payments on securities loans, as defined in section 512(a)(5). (c) Gross rents.—Include gross rental income received during the year from investment property. Income received from renting office space, or other facilities or equipment, to unaffiliated exempt organizations should be reported on line 4 unless the rental income is exempt function income (program service) (see line 4, instruction B(b) below). (d) Other investment income.—Include, for example, royalty income from mineral interests owned by the organization. B. What is not included on line 4.— (a) Capital gains dividends and unrealized gains and losses.—Do not include on this line any capital gains dividends. They are reported on line 5. Also exclude unrealized gains and losses on investments carried at market value (see instructions for line 20). (b) Exempt function revenue (program service).—Do not include on line 4 amounts that represent income from an exempt function (program service). These amounts should be reported on line 2 as program service revenue. Expenses related to this income should be reported on lines 12 through 16. An organization whose exempt purpose is to provide low-rental housing to persons with low income receives exempt function income from such rentals. Exempt function income also arises when an organization charges an unaffiliated exempt organization below-market rent for the purpose of helping that unaffiliated organization carry out its exempt purpose. The rental income received in these two instances should be reported on line 2 and not on line 4. Only for purposes of completing Form 990EZ, treat income from renting property to affiliated exempt organizations as exempt function income and include such income on line 2 as program service revenue. Lines 5a–c—Capital gains.— A. What is included on line 5.— Report on line 5a all sales of securities and sales of all other types of investments (such as real estate, royalty interests, or partnership interests) as well as sales of all other capital assets (such as program-related investments and fixed assets used by the organization in its regular activities). Total the cost or other basis (less depreciation) and selling expenses and enter the result on line 5b. On line 5c, enter the net gain or loss. Report capital gains dividends, the organization’s share of capital gains and losses from a partnership, and capital gains distributions from trusts on lines 5a and 5c. Indicate the source on the schedule described below. For this return, you may use the more convenient way to figure the organization’s gain or loss from sales of securities by comparing the sales price with the average-cost basis of the particular security sold. However, generally, the average-cost basis is not used to figure the gain or loss from sales of securities reportable on Form 990-T. B. What is not included on line 5.— Do not include on line 5 any unrealized gains or losses on securities that are carried in the books of account at market value. (See the instructions for line 20.) C. Attached schedule.— (a) Assets other than publicly traded securities and inventory.—Attach a schedule showing the sale or exchange of nonpublicly traded securities and the sale or exchange of other assets that are not inventory items. The schedule should show security transactions separately from the sale of other assets. Show for these assets: (1) Date acquired and how acquired; (2) Date sold and to whom sold; (3) Gross sales price; (4) Cost, other basis, or if donated, value at time acquired (state which); (5) Expense of sale and cost of improvements made after acquisition; and (6) If depreciable property, depreciation since acquisition. (b) Publicly traded securities.—For sales of publicly traded securities through a broker, you may total the gross sales price, the cost or other basis, and the expenses of sale, and report lump-sum figures in place of the detailed reporting required in the above paragraph. For preparing Form 990EZ, publicly traded securities include common and preferred stocks, bonds (including governmental obligations), and mutual fund shares that are listed and regularly traded in an over-the-counter market or on an established exchange and for which market quotations are published or otherwise readily available. Lines 6a–c—Special events and activities.— On the appropriate line, enter the gross revenue, expenses, and net income from all special fundraising events and activities, such as dinners, dances, carnivals, raffles, bingo games, other gambling activities, and door-to-door sales of merchandise. In themselves, direct cost of those goods or services. See also line 1 instructions A(a) and (b) for further guidance in distinguishing between contributions and revenue. Calling any required payment a “donation” or “contribution” on tickets, advertising, or solicitation materials does not change how these payments should be reported on Form 990EZ. Page 8 A. What is included on line 6.— (a) Gross revenue/contributions.—When an organization receives payments for goods or services offered through a special event, enter— (1) as gross revenue, on line 6a (in the amount column) the value of the goods or services. (2) as a contribution, on both line 1 and line 6a (within the parentheses) any amount received that exceeds the value of the goods or services given. For. (b) Raffles or lotteries.—Report as revenue, on line 6a, any amount received from raffles or lotteries that require payment of a specified minimum amount for each entry, unless the prizes awarded have only nominal value (see line 6 instruction B(a) and (b) below). (c) Direct expenses.—Report on line 6b only the direct expenses attributable to the goods or services the buyer receives from a special event. If you include an expense on line 6b, do not report it again on line 7b. B. What is not included on line 6.— (a) Sales of goods or services of only nominal value.—If the goods or services offered at the special event have only nominal value, include all of the receipts as contributions on line 1 and all of the related expenses on lines 12 through 16. See line 1, instruction A(b) for a description of benefits of nominal value. These are adjusted annually for inflation. (b) Sweepstakes, raffles, and lotteries.— Report as a contribution, on line 1, the proceeds of solicitation campaigns in which the names of contributors and other respondents are entered in a drawing for prizes. When a minimum payment is required for each raffle or lottery entry and prizes of only nominal value are awarded, report any amount received as a contribution. Report the related expenses on lines 12 through 16. (c) Activities that generate only contributions are not special fundraising events.—An activity that generates only contributions, such as a solicitation campaign by mail, is not a special fundraising event. Any amount received should be included on line 1 as a contribution. C. Attached schedule.—Attach a schedule listing the three largest special events conducted, as measured by gross receipts. Describe each of these events and indicate for each event: the gross receipts; the amount of contributions included in gross receipts (see line 6, instruction A(a) above); the gross revenue (gross receipts less contributions); the direct expenses; and the net income (gross revenue less direct expenses). Furnish the same information, in total figures, for all other special events held that are not among the largest three. Indicate the type and number of the events not listed individually (for example, three dances and two raffles). An example of this schedule might appear in columnar form as follows: Special Event: Gross Receipts Less: Contributions Gross Revenue Less: Direct Expenses (A) $XXX XXX XXX XXX (B) $XXX XXX XXX XXX $XXX (C) $XXX XXX XXX XXX $XXX Total $XXX XXX XXX XXX $XXX organization expected to profit by appreciation and sale. Report sales of these investments on line 5. Line 8—Other revenue.—Include on this line the total income from all sources not covered by lines 1 through 7. Examples of types of income includable on line 8 are interest on notes receivable not held as investments; interest on loans to officers, directors, trustees, key employees, and other employees; and royalties that are not investment income or program service revenue. Line 10—Grants and similar amounts paid.— A. What is included on line 10.— Enter on line 10 the amount of actual grants and similar amounts paid to individuals and organizations selected by the filing organization. Include scholarship, fellowship, and research grants to individuals. (a) Specific assistance to individuals.— Include on this line the amount of payments to, or for the benefit of, particular clients or patients, including assistance rendered by others at the expense of the filing organization. (b) Payments, voluntary awards, or grants to affiliates.—Include on line 10 certain types of payments to organizations “affiliated with” (closely related to) a reporting agency. These include predetermined quota support and dues payments by local agencies to their state or national organizations. Note: If you use Form 990EZ for state reporting purposes, it is especially important to properly distinguish between payments to affiliates and awards and grants (see General Instruction E). B. What is not included on line 10.— (a) Administrative expenses.—Do not include on this line expenses made in selecting recipients or monitoring compliance with the terms of a grant or award. Enter those expenses on lines 12 through 16. (b) Purchases of goods or services from affiliates.—The cost of goods or services purchased from affiliates are not reported on line 10 but are reported as expenses on lines 12 through 16. (c) Membership dues paid to another organization.—Membership dues that the organization pays to another organization to obtain general membership benefits, such as regular services, publications, and materials, should be reported as “Other expenses” on line 16. C. Attached schedule.—Attach a schedule to explain the amounts reported on line 10. Show on this schedule: (a) Each class of activity; (b) The donee’s name and address; (c) The amount given; and (d) The relationship of the donee (in the case of grants to individuals) if the relationship is by blood, marriage, adoption, or employment (including employees’ children) to any person or Net Income or (loss) $XXX If you use this format, report the total for contributions on line 1 of Form 990EZ and on line 6a (within the parentheses of the description line). Report the totals for gross revenue, in the amount column, on line 6a; direct expenses on line 6b; and net income or (loss) on line 6c. D. Fundraising record retention. retain outside fundraisers, they must keep samples of the fundraising materials used by the outside fundraisers. For each fundraising event, organizations must keep records to show that portion of any payment received from patrons which is not deductible; that is, the fair market value of the goods or services received by the patrons. Lines 7a–c—Gross sales.— A. What is included on lines 7a–c.— Sales of inventory.—Include on these lines the gross sales (less returns and allowances), the cost of goods sold, and the gross profit or (loss) from the sale of all inventory items, regardless of whether the sale is an exempt function or an unrelated trade or business. These inventory items are those the organization either makes to sell or buys for resale. B. What is not included on lines 7a–c.— (a) Sales from special events.—Do not include the sales of inventory items from special fundraising events and activities on line 7. Enter those sales on line 6. (b) Investments.—Do not include on line 7 sales of investments on which the Page 9 corporation with an interest in the organization, such as a creator, donor, director, trustee, officer, etc. List the name and address of each affiliate that received any payment reported on line 10. Specify both the amount and purpose of these payments. Classify activities on this schedule in more detail than by using such broad terms as charitable, educational, religious, or scientific. For example, identify payments to affiliates; payments as nursing services; fellowships; or payments for food, shelter, or medical services for indigents or disaster victims. For payments to indigent families, do not identify the individuals. If an organization gives property and measures an award or grant by the property’s fair market value, also show on this schedule: (a) A description of the property; (b) The book value of the property; (c) How you determined the book value; (d) How you determined the fair market value; and (e) The date of the gift. Any difference between a property’s fair market value and book value should be recorded in the organization’s books of account.. Do not include, on this line, the cost of employment-related benefits the organization gives its officers and employees. Report those employment-related benefits on line 12. Line 12—Salaries, other compensation, and employee benefits.—Enter the total salaries and wages paid to all employees and the fees paid to the Form 5500 series return/report that is appropriate for your plan. Also include in the total the amount of Federal, state, and local payroll taxes for the year that are imposed on the organization as an employer. This includes the employer’s share of Social Security and Medicare taxes, FUTA tax, state unemployment compensation tax, and other state and local payroll taxes. Taxes withheld from employees’ salaries and paid over to the various governmental units (such as Federal and state income taxes and the employees’ share of Social Security and Medicare taxes) are part of the employees’ salaries included on line 12. fees paid to directors and trustees on line 12. Line 14—Occupancy, rent, utilities, and maintenance.—Enter the total amount paid or incurred for the use of office space or other facilities, heat, light, power, and other utilities, outside janitorial services, mortgage interest, real estate taxes, and similar expenses. If your organization records depreciation on property it occupies, enter the total for the year. Line 15—Printing, publications, postage, and shipping.—Enter the printing and related costs of producing the reporting organization’s own newsletters, leaflets, films, and other informational materials. (However, do not include any expenses, such as salaries, for which a separate line is provided.) Also include the cost of any purchased publications as well as postage and shipping costs not reportable on lines 5b, 6b, or 7b. Line 16—Other expenses.—Expenses that might be reported here include penalties, fines, and judgments; unrelated business income taxes; real estate taxes not attributable to rental property or reported as occupancy expenses; depreciation on investment property; travel and transportation costs; interest expense; and expenses for conferences, conventions and meetings. Some states that accept Form 990EZ in satisfaction of their filing requirements may require that certain types of miscellaneous expenses be itemized. See General instruction E. Line 18—Excess or (deficit) for the year.—Enter the difference between lines 9 and 17. If line 17 is more than line 9, enter the difference in parentheses. Line 19—Net assets or fund balances at beginning of year.—Enter the amount from the prior year’s balance sheet (or from Form 5500, 5500-C/R, or an approved DOL form if General Instruction F applies). Line 20—Other changes in net assets or fund balances.—Attach a schedule explaining any changes in net assets or fund balances between the beginning and end of the year that are not accounted for by the amount on line 18. Amounts to report here include adjustments of earlier years’ activity and unrealized gains and losses on investments carried at market value. Part II—Balance Sheets.— All organizations, except those that meet one of the exceptions in General Instruction F, must complete columns (A) and (B) of Part II of the return and may not submit a substitute balance sheet. Failure to complete Part II may result in penalties for filing an incomplete return. See General Instruction K. Some states require more information. See General Instruction E for more information about completing a Form 990EZ to be filed with any state or local government agency. Line 22—Cash, savings, and investments.—Include the total of noninterest-bearing checking accounts, deposits in transit, change funds, petty cash funds, or any other noninterest-bearing account. Include the total of interest-bearing checking accounts, savings and temporary cash investments, such as money market funds, commercial paper, certificates of deposit, and U.S. Treasury bills or other governmental obligations that mature in less than 1 year. Report the income from these investments on line 4. Include the book value (which may be market value) of securities held as investments. Include the amount of all other investment holdings including land and buildings held for investment.) receivable accounts, inventories, and prepaid expenses. Line 25—Total assets.—Enter the amount of your total assets. If the end-of-year total assets entered in column (B) are $250,000 or more, you must file Form 990 instead of Form 990EZ. Line 26—Total liabilities.—Enter the amount of your total liabilities along with their description. Line 27—Net assets or fund balances.— Subtract line 26 (total liabilities) from line 25 (total assets) to determine your net assets. Enter this net asset amount on line 27. (a) Organizations not using fund accounting.—Enter your net asset amount. The amount in column (B) should agree with the net asset amount on line 21. (b) Organizations using fund accounting. Page 10 and practice. To complete Form 990EZ, you must consolidate these funds. States that accept Form 990EZ as their basic report form may require a separate statement of changes in fund balances. See General Instruction E. Part III—Statement of Program Service Accomplishments.— Provide the information specified in the instructions above line 28 of the form for each of the organization’s three largest program services (as measured by total expenses incurred) or for each program service if the organization engaged in three or fewer of such activities. The “Expenses” column must be completed by section 501(c)(3) and (4) organizations as well as section 4947(a)(1) charitable trusts. Completing the column is optional for all other filers. Report only the expenses attributable to the organization’s program services described on lines 28 through 30 and in the attachment for line 31. A program service is a major, usually ongoing objective of an organization, such as adoptions, recreation for the elderly, rehabilitation, or publication of journals or newsletters. Describe program service accomplishments through measurements such as clients served, days of care, therapy sessions, or publications issued. If it is inappropriate to measure a quantity of output, as in a research activity, describe the objective of the activity for this time period as well as the overall longer-term goal. You may furnish reasonable estimates for any statistical information if exact figures are not readily available from the records you normally maintain. If so, please indicate that the information provided is an estimate. (a) Donated services.—If the organization so chooses, it may show in the narrative section of Part III the value of any donated services or use of materials, equipment, or facilities received and utilized in connection with specific program services. Do not include these amounts in the expense column in Part III. (b) Attached schedule.—Attach a schedule that lists the organization’s other program services. The detailed information required in Part III for the three largest services is not required for the services listed on this schedule. completing Part IV, you may provide an attachment describing the entire 1991 compensation package of one or more officers, directors, and trustees. Column (C).—Enter salary, fees, bonuses, and severance payments received by each person listed. Include current year payments of amounts reported or reportable as deferred compensation in any prior year. Column (D).—Include all forms of deferred compensation (whether or not funded; whether or not vested; and whether or not the deferred compensation plan is a qualified plan under section 401(a)), and payments to welfare benefit plans on behalf of the officers, etc. Reasonable estimates may be used if precise cost figures are not readily available. Unless the amounts are reported in column (C), include salary and other compensation earned during the period covered by the return but not paid by the date the return was filed. Column (E).—Enter amounts use of housing, automobiles, or other assets owned or leased by the organization (or provided for the organization’s use without charge), as well as any other taxable and nontaxable fringe benefits. Refer to Publication 525 for more information. You must file Form 941 to report income tax withholding and social security taxes. You must also file Form 940 to report Federal unemployment tax, unless the organization’s exemption letter states that it is not subject to this tax. Part V—Other Information.— Line 33—Change in activities.—Attach a statement explaining any significant changes in the kind of activities the organization conducts to further its exempt purpose. These new or modified activities are those not listed as current or planned in your application for recognition of exemption or those not already made known to IRS by a letter to your key district director or by an attachment to your return for any earlier year. Besides describing new activities or changes to current ones, also describe any major program activities that are being discontinued. Line 34 Part IV—List of Officers, Directors, and Trustees.— List each of the organization’s officers, directors, trustees, and other persons having responsibilities or powers similar to those of officers, directors, or trustees. List all of these persons even if they did not receive any compensation from the organization. Enter “-0-” in columns (C), (D), and (E) if none was paid. (For deferred compensation, see column (D) instructions.) Show all forms of cash and noncash compensation received by each listed officer, director, or trustee, whether paid currently or deferred. In addition to more information. When a number of changes are made, send a copy of the entire revised organizing instrument or governing document. Line 35—Unrelated business income.— Check “Yes” on line 35a if the organization’s total gross income from all of its unrelated trades and businesses is $1,000 or more for the year. Gross income is gross receipts less the cost of goods sold. See Publication 598 for a description of unrelated business income and the Form 990-T filing requirements. Form 990-T is not a substitute for Form 990EZ. Items of income and expense reported on Form 990-T must also be reported on Form 990EZ when the organization is required to file both forms. For purposes of line 35, the term “business activities” includes any income-generating activity involving the sale of goods or services or income from investments. Note: All tax-exempt organizations must pay estimated taxes with respect to their unrelated business income if they expect their tax liability to be $500 or more. You may use Form 990-W to compute this tax. Line 36—Liquidation, dissolution, termination, or substantial contraction.— If there was a liquidation, dissolution, termination, or substantial contraction, attach a statement explaining which took place. For a complete liquidation of a corporation or termination of a trust, write Final Return at the top of the organization’s Form 990EZ. On the statement you attach, show whether the assets have been distributed and the date.: (a) At least 25% of the fair market value of the organization’s net assets at the beginning of the tax year; or Page 11 place through a series of related dispositions depends on the facts in each case. See Regulations section 1.6043-3 for special rules and exceptions. Line 37—Expenditures for political purposes.—A political expenditure is one intended to influence the selection, nomination, election, or appointment of anyone to a Federal, state, or local public office, or office in a political organization, or the election of Presidential or Vice Presidential electors. Whether the attempt succeeds does not matter. An expenditure includes a payment, distribution, loan, advance, deposit, or gift of money, or anything of value. It also includes a contract, promise, or agreement to make an expenditure, whether or not legally enforceable. (a) All section 501(c) organizations.—. (b) Section 501(c)(3) organizations.—A section 501(c)(3) organization will lose its tax-exempt status if it engages in political activity. A section 501(c)(3) organization must pay an excise tax for any amount paid or incurred on behalf of, or in opposition to, any candidate for public office. The organization must pay an additional excise tax if it fails to correct the expenditure timely. A manager of a section 501(c)(3) organization who knowingly agrees to a political expenditure must pay an excise tax, unless the agreement is not willful and there is reasonable cause. A manager who does not agree to a correction of the political expenditure may have to pay an additional excise tax. When an organization promotes a candidate for public office (or is used or controlled by a candidate or prospective candidate), amounts paid or incurred for the following purposes are political expenditures: (1) Remuneration to the individual (a candidate or prospective candidate) for speeches or other services; (2) Travel expenses of the individual; (3) Expenses of conducting polls, surveys, or other studies, or preparing papers or other material for use by the individual; (4) Expenses of advertising, publicity, and fundraising for such individual; and (5) Any other expense that has the primary effect of promoting public recognition or otherwise primarily accruing to the benefit of the individual. Use Form 4720 to figure and report the excise taxes. Line 38—Loans to or from officers, directors, trustees, and key employees.—Enter the end-of-year unpaid balance of secured and unsecured loans made to or received from officers, directors, trustees, and key employees. For example, if the organization borrowed $1,000 from one officer and loaned $500 to another, none of which has been repaid, report $1,500 on line 38b. The term “key employees” refers to the chief administrative officers of an organization (such as an executive director or chancellor) but does not include the heads of separate departments or smaller units within an organization. Attached schedule.—For loans outstanding at the end of the year, attach a schedule as described below. (a) When loans should be reported separately.—Report each loan separately, even if more than one loan was made to or received from the same person, or the same terms apply to all loans made. Salary advances and other advances for the personal use and benefit of the recipient, and receivables subject to special terms or arising from nontypical transactions, must be reported as separate loans for each officer, director, etc. (b) When loans should be reported as a single total.—Receivables that are subject to the same terms and conditions (including credit limits and rate of interest) as receivables due from the general public and that arose during the normal course of the organization’s operations may be reported as a single total for all the officers, directors, trustees, and key employees. Travel advances made in connection with official business of the organization may also be reported as a single total. (c)). The above detail is not required for receivables or travel advances that may be reported as a single total (see instruction (b) above); however, report and identify those totals separately in the attachment. Line 39—Section 501(c)(7) organizations.— (a) Gross receipts test.—A section 501(c)(7) organization may receive up to 35% of its gross receipts, including investment income, from sources outside its membership and remain tax exempt. Part of the 35% (up to 15% of gross receipts) may be derived the Regulations under section 118), from line 39b on the club’s Form 990-T. (b) Nondiscrimination policy.—A section 501(c)(7) organization is not exempt from income tax if any written policy statement, including the governing instrument and bylaws, allows discrimination on the basis of race, color, or religion. However, section 501(i) allows social clubs to retain their exemption under section 501(c)(7) even though their membership is limited (in writing) to members of a particular religion if: 40—List of states.—List each state with which you are filing a copy of this return in full or partial satisfaction of state filing requirements. Line 42—Section 4947(a)(1) charitable trusts.—Section 4947(a)(1) charitable trusts that file Form 990EZ instead of Form 1041 must complete this line. The trust should include exempt-interest dividends received from a mutual fund or other regulated investment company as well as tax-exempt interest received directly. Page 12
https://www.scribd.com/document/545104/US-Internal-Revenue-Service-i990ez-1991
CC-MAIN-2018-30
refinedweb
12,219
53.81
03 June 2009 16:03 [Source: ICIS news] ?xml:namespace> “I think that the technology is finally there. That is probably the most pivotal thing but on top of that, the political environment is very favourable in the “You also see that consumers are much more interested in this whole opportunity; there is a lot of awareness that biodegradable products are important,” he added. White – or industrial – biotechnology is the use of living cells or enzymes to create products that would typically require petroleum-based feedstocks. White biotech can therefore reduce pollution and waste, while minimising energy and raw material use. Although its development was clearly more of a challenge for companies in the current financial climate, he said, it was important not to lose sight of the fundamental issues that would remain after the crisis. Claassen pointed specifically to the emergence of the middle classes in “I think white biotech can play a very good role. It’s bio-based so it’s not competing with oil and I think with the concerns we have around climate change it is a really good contributor to a much more healthy world,” he said. “It is challenging certainly for chemical companies that are faced with really reduced demand in the automotive industry and construction, but I have to say that companies that have prepared well for this are actually benefiting now.” Global consultancy McKinsey & Co. forecasts that white biotech chemical sales would grow from €100bn ($143bn) in 2007, to in excess of €150bn by 2012. Around 5% of chemicals are currently bio-based, with this expected to double over that same period, said Claassen. Nevertheless, the sector had far greater potential, he added. Some 75% of all chemicals handled by the For now, DSM – along with its French partner Roquette – remained focused on the development of a demonstration plant that would produce succinic acid derived from starch. Claassen said that the facility in Initially, the plant’s annual output would amount to only a few hundred tonnes, although this could soon be ramped up to “more significant” volumes. Claassen suggested. View DSM's key biotech facilities in a larger map ($1 = €0.70) Bookmark Doris de Guzman’s Green Chemicals Blog for views on green chemistry and sustainability
http://www.icis.com/Articles/2009/06/03/9222056/Prospects-good-for-white-biotechnology-DSM.html
CC-MAIN-2014-10
refinedweb
377
58.11
This add-on is operated by TelAPI Inc. Effortless Cloud Telephony TelAPI Last updated 13 October 2015 Table of Contents This add-on is for quickly and conveniently adding TelAPI powered telephony functionality into your Heroku app. Adding the ability to send SMS or place calls from your application enables seamless communication directly with your customers. Mobile communication is one of the most relevant ways to get in touch with your customers, considering we live in an always-on society with our phones no less than 2 inches from our bodies. Take advantage of this epidemic to ensure that your application is always keeping its users up to date! Provisioning the add-on Telapi can be attached to a Heroku application via the CLI: $ heroku addons:create telapi -----> Adding telapi to sharp-mountain-4005... done, v18 (free) Once the addon has been added you can access its management interface where a unique request token can be found. This token is all thats used to authorize HTTP requests to send SMS messages or place calls. Local setup Environment setup After provisioning the add-on it’s necessary to locally replicate the TELAPI_TOKEN config var so your development environment can make authenticated requests to TelAPI. ADDON-CONFIG-NAME values retrieved from heroku config to your .env file. $ heroku config -s | grep TELAPI_TOKEN >> .env $ more .env Credentials and other sensitive configuration values should not be committed to source-control. In Git exclude the .env file with: echo .env >> .gitignore. For more information, see the Heroku Local article. Usage The initiation of outbound SMS messages or calls with the add-on is achieved by performing an HTTP post request to one of the add-on urls. The URL for sending an SMS is: POST heroku.telapi.com/send_sms The URL for placing a call: POST heroku.telapi.com/make_call Using with Ruby Though the example below implements the httparty library, any ReST client library may be used to interact with the addon. require 'httparty' # Send an SMS request_data = { :To => "[TO_NUMBER]", :Body => "Hello, from TelAPI Heroku addon!", :Token => ENV['TELAPI_TOKEN'] } r = HTTParty.post("", :body => request_data) puts r # Make a call request_data = { :To => "[TO_NUMBER]", :Url => "", :Token => ENV['TELAPI_TOKEN'] } r = HTTParty.post("", :body => request_data) puts r # Premium plan users can send messages / place calls via a custom from number they've purchased from the addon management page request_data = { :To => "[TO_NUMBER]", :From => "[FROM_NUMBER]", :Url => "", :Token => ENV[[TELAPI_TOKEN" } r = HTTParty.post("", :body => request_data) puts r Using with Python Though the example below implements the requests library, any ReST client library may be used to interact with the addon. import os import requests def send_sms(to, body, _from = None): data = { 'token' : os.environ['TELAPI_TOKEN'], 'to' : to, # The recipient of the SMS message, 'body' : body # Limited to 160 characters, message splicing must be performed in your logic } if _from is not None: data['from'] = _from r = requests.post('', data=data) return r.json() // Send via the default DID print send_sms('+15557771234', 'Hello there, how are you doing today?'); // Send via a custom DID, optional for Premium Accounts print send_sms('+15557771234', 'Hello there, how are you doing today?', _from='+15558675309'); Dashboard The TelAPI add-on dashboard allows you to view your usage, add and configure inbound phone numbers, and see a list of all SMS messages / calls occurring through your account. The dashboard can be accessed via the CLI: $ heroku addons:open telapi Opening telapi for sharp-mountain-4005… or by visiting the Heroku apps web interface and selecting the application in question. Select TelAPI from the Add-ons menu. Migrating between plans A variety of plans with varying usage tiers and functionality are available. Use the heroku addons:upgrade command to migrate to a new plan. $ heroku addons:upgrade telapi:newplan -----> Upgrading telapi:newplan to sharp-mountain-4005... done, v18 ($49/mo) Your plan has been updated to: telapi:newplan Removing the add-on TelAPI can be removed via the CLI. This will destroy all associated data and cannot be undone! $ heroku addons:destroy telapi -----> Removing telapi from sharp-mountain-4005... done, v20 (free) Support All TelAPI addon issues should be directed to support@telapi.com.
https://devcenter.heroku.com/articles/telapi
CC-MAIN-2015-48
refinedweb
685
55.13
BGE: In-Game Time and Event Timing Time plays an important part in games, from measuring performance to timing events. This tutorial will examine how to use the python time module. We’ll start by looking at formatting and using time and date information then move more specifically to timing events, ending with a mini-game to see it in action. There are many ways we could keep track of time in the BGE. For instance, we could always use a delay sensor, or an always sensor set to pulse mode to update a property every so often. We could do this every 60 logic ticks, so it would increase roughly every second. But what if we wanted something with more precision? Such as measuring time between keystrokes or if we wanted to time a short section within a script, then only one logic tick would have elapsed irrespective of how long that tick took. This is where the python time module comes in. So lets import time and have a look: Formatting Time time.time() returns the number of seconds passed since time started, a point known as the epoch. This isn’t the very dawn of time in the literal sense, instead it is defined by the operating system. If you’re curious what the epoch is on your system try time.gmtime(0). Anyway, the floating point number from time.time() isn’t very readable from a human standpoint. That’s where time.localtime() comes in. This takes a set of seconds as parameter, and if none is given then it uses the value returned by time.time(), it returns a named tuple (called struct_time) of 9 useful time/date information. By named tuple I mean you can refer to the value you want by name (refer to the docs for all the names). Assuming this script is hooked up to a text object, we can do this to display a clock: import time import bge own = bge.logic.getCurrentController().owner t = time.localtime() own['Text'] = str(t.tm_hour) + ":" + str(t.tm_min) + ":" + str(t.tm_sec) Ok, not really stretching the BGE to it’s limits, so lets say you want to keep track of time in an open world game? And for fun, lets be able to speed time up, slow it down and pause it. Game Time ################################################################## # # # BGE World Time v1 # # # # By Jay (Battery) # # # # # # # # Feel free to use this as you wish, but please keep this header # # # ################################################################## import bge import time class WorldTime: def __init__(self): self.gameTime = 0.0 self._timeElapsed = time.time() self._deltaTime = time.time() self._year = time.gmtime(self.gameTime).tm_year self.daysElapsed = 0 self.gameYear = 1010 self.pause = False #provided in seconds self.hourLength = 60 #List must contain 7 strings self.weekdays = [] #List must contain 11 strings self.yearMonths = [] self.updateWorldTime() return def updateWorldTime(self): if self.pause: self.timeElapsed = time.time() return self._deltaTime = time.time() - self._timeElapsed self._timeElapsed = time.time() self.gameTime += self._deltaTime*(3600/self.hourLength) self.daysElapsed = int(self.gameTime/86400) t = time.gmtime(self.gameTime) self.hour = t.tm_hour self.minute = t.tm_min self.seconds = t.tm_sec self.mday = t.tm_mday self.gameYear += t.tm_year - self._year self._year += t.tm_year - self._year if self.weekdays: self.wday = self.weekdays[t.tm_wday] else: self.wday = time.strftime('%A', t) if self.yearMonths: self.month = self.yearMonths[t.tm_mon-1] else: self.month = time.strftime('%B', t) def getWorldDisplayTime(self, secs = False): self.updateWorldTime() timeStr = str(self.hour) + ":" + str(self.minute) if secs: timeStr += ":" + str(self.seconds) return timeStr def skip(self, length, unit): if self.pause: return if unit == 's': self.gameTime += length elif unit == 'm': self.gameTime += length * 60 elif unit == 'h': self.gameTime += length * 3600 elif unit == 'd': self.gameTime += length * 86400 self.updateWorldTime() It looks like there’s a lot going on here, but really, it’s not much different to the previous example. Were keeping track of the change in time between each update (self._deltaTime), multiplying by how long we want each hour to last (self.hourLength) and adding the time that has passed to the total game time (self.gameTime). The rest of it is really just fancy formatting. This has several advantages. Firstly, because were measuring the passage of time using the operating system we do not need to keep updating anything or do anything unless we need the game time (unlike timer variables). Secondly, pausing is simple because we just need to stop adding the time passed to the game time. Speeding and slowing time is easy as we just change the multiplier applied to the delta time. Thirdly, when it comes to saving we only need to save self.gameTime and then we can pass that back to the class on loading. Instead of time.localtime() we’re using time.gmtime(), which is very similar in its operation, the real difference is in the returned tuple. time.gmtime() returns Greenwich Mean Time, which means time zones and day light savings are not included, unlike time.localtime(), which does. The result is a module that will return the same time/date information regardless of the computer it’s run on. We pass the game time to time.gmtime() to use the seconds passed in the game to calculate what time, day, month and year the game is currently in. The tuple returned by time.gmtime() is then parcelled out to the class variables so that the game can access them. We also make use of time.strftime() which allows us to use python’s string formatting and apply it to the struct_time tuple. This gives us the weekdays and month names. For added spice, you can override these in the class to provide your own day and month names. In the working example (see resources below) I’ve been lazy and used the names from the Elder Scrolls. Pausing Python You can use time.sleep(secs) to pause the current python thread. When working in pure python this function can be used to delay further execution. Interestingly, it works within the BGE, but not as you might expect. Because the BGE works through logic sequentially processing cannot move forward until time.sleep() has finished. This has the effect of suspending everything in the BGE. Although, I’m not sure why you would want to do this, since nothing else can happen until the sleep has finished. If anyone can think of a use post it below. Timing Within a Script This is much more straight word then creating game time. We can make use of time.clock() to keep track of how long something takes. The return value of time.clock() varies depending on the operating system. For unix systems it gives us the CPU time, in windows this is the system time. Eitherway, it’s reasonably precise, which means we can use it for doing benchmarks of code: import time t1 = time.clock() ... some code would go here ... t2 = time.clock() - t1 print(t2) There are more accurate ways to profile code (timeit for example) but for finding hotspots in your scripts (or slow scripts overall), this simple method will often do. We can also use this method to time events, read on. Timing Key Events Check out the canyon crossing example in the resources below to see this in action. The key difference here is that we need to store the value of initial time in a game object property (own[‘t’]) to prevent it from being lost with each logic tick. import bge import time cont = bge.logic.getCurrentController() own = cont.owner scene = bge.logic.getCurrentScene() targetTime = 0.3 keyPress = own.sensors['Keyboard'] move = own.actuators['Motion'] yPos = 0.0 if 'init' not in own: own['t'] = time.clock() own['init'] = 1 own['move'] = own.worldPosition own['rot'] = [0,0,0] own['r'] = 0.0 scene.objects['collisionBox'].suspendDynamics() if keyPress.positive: own['r'] = (targetTime - (time.clock() - own['t']))*0.02 own['t'] = time.clock() yPos = abs(0.5 - targetTime - (time.clock() - own['t'])) if own.worldOrientation[2][2] < 0.87: scene.objects['collisionBox'].restoreDynamics() own['rot'][1] += own['r'] own.worldOrientation = own['rot'] own['move'][1] += yPos own.worldPosition = own['move'] The time between each key press is compared against a target value (targetTime) and the difference is then applied as rotation and location (with a bit of tweaking to make the numbers work properly). The 11th Hour So, we’ve covered creating our own in-game time system, timing sections of script and timing keyboard events. One thing that’s missing from all this is being able to set the time and date or go backwards. Setting in-game time is perfectly doable, you just have to work out how far you want to nudge it forward in seconds (there’s even a python module to help with this). Going back in time is a bit trickier but can be done using the timedate module to work out the days before the epoch. The real challenge is figuring out how to ensure all the things in the game are in their correct positions for any given time. But it can be done (Stalker, for example, updates AI positions even when they’re not in play). As always, any questions or comments, leave them below. Laters Resources Credit goes to TehJoran for the low poly character used in this tutorial […] often need to be able to manipulate the game environment. In this tutorial we’ll build on the worldTime.py class from last time and use it to drive a simple day and night cycle. We’ll look at 2 parametric functions (sine […] BGE: simple day and night cycles using bgl | whatjaysaid said this on June 1, 2014 at 12:57 | thank you thank youuu its just what ive bein loojing for days (y) You’re welcome! I’m glad you found it helpful. I’d be interested to hear about what you used it for.
https://whatjaysaid.wordpress.com/2014/05/26/bge-in-game-time-and-event-timing/
CC-MAIN-2019-04
refinedweb
1,663
68.16
"can use Visual Studio without problems but for example you cannot use WPF, while Windows Forms are ok. For more information what you can use visit: Moreover there exists Mono tools which integrates with VS: So far you have two options, Fix SevenZipSharp on Mono/Linux to PInvoke 7z.so. That requires you to change the PInvoke definition in source code, and also recompile the source files. If even the developers of that project did not yet make it right, I think you will need to pay much attention to that approach. Switch to a full managed implementation, such as You probably have only mono-runtime installed. To support VB.Net, you need an additional package (which contains Microsoft.VisualBasic.dll), it's called mono-basic IIRC. So simple sudo apt-get install mono-basic should give you the missing file. (or was it mono-vbnc? Can't check it, sold my raspberry-pi some month ago...) The only viable solution is to ask the company who supplied the DLL if they have a Linux library suitable for your particular distribution (Ubuntu version and Linux kernel version) that you can link against. Each platform has different ways of handling native code libraries, and these libraries are not compatible between platforms. It is therefore not possible to link against a platform-specific library (a Windows DLL for instance) in a platform-neutral fashion. MONO is a cross-platform re-implementation of the .NET runtime. It is intended to provide the ability to write .NET applications for non-Windows operating systems, and provides a reasonably consistent framework to enable you to easily port existing .NET applications. It is not a Windows emulator so it cannot use Windows-specif SOLUTION!!!! PI-BLASTER WORKS :D :D (lost 2 days of life because of this) write.flush was critical, btw. namespace PrototypeAP { static class PWM { static string fifoName = "/dev/pi-blaster"; static FileStream file; static StreamWriter write; static PWM() { file = new FileInfo(fifoName).OpenWrite(); write = new StreamWriter(file, Encoding.ASCII); } //FIRST METHOD public static void Set(int channel, float value) { string s = channel + "=" + value + " "; Console.WriteLine(s); write.Write(s); write.Flush(); } } } Turns out there was an inner load exception and I had a Npgsql.VisualStudio DLL in my bin. Once i have removed that everything worked. Reflections failed to load that DLL because of missing references. The problem is not about Mono and the Bitmap class. The problem is where your image data comes from: you mention it comes from a USB device. But how do you access your device from your .NET code? Seems like you're using IntPtr, which leads me to think that you're using P/Invoke. P/Invoke is inherently a technique that is not cross platform. You should stick to managed code if you want your program to work across platforms (Linux and Windows) with no specific-platform code. UPDATE: You mention that you have a ".so" library to access the platform-specific functionality on Linux. Then this may be the bit that is not working. You should post a brand new question on stackoverflow with the code you're using to P/Invoke that library in Linux. I would try these possible alternatives to try to solve the problem: Upgrade Mono to 3.0.x. There have been a lot of fixes in the last months around WebRequests. If the above doesn't help, try Mono 3.2 (as it defaults to use a new garbage collector, much faster, called SGEN). If the above doesn't help, build your own Mono (master branch), as this important pull request has been merged recently. If the above doesn't help, use the "--server" flag when calling your mono executable (this feature is only available in the last version of mono, which you need to compile from the master branch). If all the above doesn't help, then CC yourself in this bug, as I think I'll have time in August to implement a fix for it, and maybe it helps you. I am using mainly Mac and it is working perfectly. The doc mention the Mac using : brew install libcouchbase Then you just need to do npm install couchbase Try this: Disable increasing amounts of your own/application code until the error goes away. Then refine, with smaller steps, to see which part/line of your code is causing this. If in the end, there is absolutely no own/application code left, play with the configuration, version, compiler options, of the libraries you're using. Sorry I can't give you a detailed answer, but I HTH. Good luck! If you have installed mono using sudo apt-get install mono-complete, you might want to look at the mscorlib. Mono take mscorlib.dll 2.0 instead of 4.0 Unable to run .net app with Mono - mscorlib.dll not found (version mismatch?) might be of use. Download and install the Java EE 7 SDK to get the Tutorial. The SDK installer will install and configure the Update Center and the Tutorial. The GlassFish 4.0 ZIP file isn't a supported configuration (but you can add the Tutorial to GlassFish 4.0 standalone using these instructions. In your case, it appears there's something wrong with the 32-bit compatibility libraries on 64-bit Linux when you run pkg. For running an xsl[t] transformation over xml, you should be able to use either System.Xml.Xsl.XslTransform or System.Xml.Xsl.XslCompiledTransform from System.Xml.dll. The API should be largely identical to the Microsoft implementations: mono, XslCompiledTransform mono, XslTransform microsoft, XslCompiledTransform microsoft, XslTransform I think Pixate tries to do this. No idea how good it is though. Logs show that there seems to be an issue with creation of UI window panel as the system font the application is trying to look up for could be found on your mac. There could be 2 main reasons why this could happen:- The latest version of mountain lion (as you point it out to be 10.8.4) does not have that font in their font library The application was created only for windows and developer didn't port it to /tested on mac thoroughly. if you need to extract your iphone data in urgency then its better to try using a different application which has made for mac and does not require mono UPDATE Have a look at Total Saver application. It works on mac and windows without the need for you to install any 3rd party frameworks (mono). You'll need JRE (Java Runtime Environment) though. You c Success If you want to run MVC4 on Mono, you need to use some Microsoft dlls which are not provided by Mono at this time. A word of caution - Taking a cursory look at the Mono source, there appears to be methods and classes in the MC4 source that do not exist in the 3.2.0 build. There may be features that are broken. The site I am running is for all intents and purposes an MVC3 site built against the latest dlls. Microsoft DLLs to copy System.Web.Abstractions - 4.0 System.Web.Helpers - 2.0 System.Web.Mvc - 4.0 Once you copy over the dlls, if you're still having problems you may have to do some detective work if the above fix doesn't work. If you get an error message saying that Mono can't find the DLL, it's usually one of three reasons: Troubleshooting Is doesn't have the dl It looks like the pkg-config tool is not found. Maybe it's not in the default paths. Do you have a 'pkgconfig' directory somewhere? It should be a subdirectory of your Mono installation. Try to see if you have a path looking like /Library/Frameworks/Mono.framework/Versions/XXXX/lib/pkgconfig If yes, point the PKG_CONFIG_PATH environment variable to this path, you can specify it directly when running your mkbundle command (this is just an example): $ PKG_CONFIG_PATH=/Library/Frameworks/Mono.framework/Versions/XXXX/lib/pkgconfig mkbundle .... Everything I have seen on their list is that the last stable release is the 2.10.x, they will be maintaining it for about 6 months and are encouraging everyone to use the 3.0 branch even though it is still stated as beta. From above link:. New Microsoft Open Source Stacks We now include the following assemblies as part of Mono from Microsoft's ASP.NET WebStack: System.Net.Http.Formatting.dll System.Web.Http.dll System.Web.Razor.dll System.Web.. Start Android SDK Manager either from inside Visual Studio or from the location "C:UsersDellAppDataLocalAndroidandroid-sdk oolsandroid.bat". Once its done loading packages... checkmark "Android 2.2 (API8)" to install support and your error will be resolved. Alternatively, if you are not planning to support Android 2.2 and are fine with the latest Android version, then in Visual Studio, go to Project Properties >> Application and change the API to whatever API you want. I fairly sure you can't run Umbraco on OSX. However, there are members of the community working on Mono-specific ports of Umbraco. Check out Strawberry Fin's blog for details: And immediately after posting this I found this old bug that lead me to the problem The problem was from my including an old Mono.Security.dll that was a requirement for Thrift. By deleting that out of date dll I was able run without error I don't know why, but mono-basic is missing in the package 2.10.11. I think it's a bug and no one at Xamarin noticed it, because they are all using C#. ;( Download Mono 2.10.9 from I have been able to fix it with a file system watcher: FileSystemWatcher fswRunning = new FileSystemWatcher(Path.GetTempPath() + "AudioCuesheetEditor"); fswRunning.Filter = "*.txt"; fswRunning.Changed += delegate(object sender, FileSystemEventArgs e) { log.debug("FileSystemWatcher called Changed"); if (pAudioCuesheetEditor != null) { log.debug("pAudioCuesheetEditor != null"); pAudioCuesheetEditor.getObjMainWindow().Present(); } }; fswRunning.EnableRaisingEvents = true; Boolean bAlreadyRunning = false; Process[] arrPRunning = Process.GetProcesses(); foreach (Process pRunning in arrPRunning) { Boolean bCheckProcessMatch = false; The immediate problem is that you got the arguments to InserAfter mixed up. First one should be the node to insert and the second one should be the reference node. The root element has no children yet, so LastChild is going to be null and hence the exeption. Using null as a reference point is valid, but not as a node to add. There are other issues but you said you are able to fix those, so there you go. You might want to see where your application is actually looking for the file, you can do this by setting the MONO_LOG_LEVEL environment variable like this: export MONO_LOG_LEVEL=debug; mono YourApp.exe Execution performance vastly depends on the concrete problem and its implementation. I do not want to give a general statement about the performance of the one compared to the other. In the past mono did tend to be a little behind the CLR. On the other hand mono has improved as well and now offers SIMD vector support (even if ILNumerics doesn't make use of it yet). I suggest, you implement your solution and test it on several platforms / runtimes. I would be interested in your results as well! One more note: mono can be used within Visual Studio as target platform. The Xamarin team offers a corresponding VS plugin. I haven't tested it though. Also, if you are into visualizations, keep in mind, mono favorites GTK over Windows.Forms. The ILNumerics visualization controls, however, use Wi There was a question on SO about audio reading library for Mono: Please recommend a Mono (i.e. C#) audio reading library Also there is a .NET bindings for PortAudio library that you can try: Update: Work around: append a slash to the end of the path and it will work. This was submitted as a bug in Mono in June 2011: Bug 698551 - FtpWebRequest: ListDirectory/ListDirectoryDetails discard the filename Opened: 2011-06-07 14:35 UTC Last modified: 2011-06-07 14:35:02 UTC It feels weird to do something, fix it and don't have a clue what was wrong initially. However this was fixed with the upgrade on Mono Develop 3.1 In case someone else happens to have this issue, the answer is here: According to the release notes, async support was first introduced with Mono 3.0, along with a full C# 5.0 compiler. So 2.10.8 won't have this–you need to use version 3.0 or later. I see elsewhere online that the 2.11 preview release began adding support for async, but I'm not sure if this is still available for download. Presumably, they've added more C# 5.0 features and upped the version number to 3.0, which is now the beta release of interest. Thread.Sleep(1000); Place that after your first "NextValue" call (you can toss the very first NextValue return value) and follow it with your line: float availbleRam = ramCounter.NextValue(); Performance counters need time to measure before they can return results. Confirmed this works in Windows with .net 4.5, unsure about Linux with Mono. It's very easy. Use MonoDevelop IDE. Check this page I can't read it as anything else than that Mono is in the right, the C# spec has this to say; Enum values and operations Each enum type defines a distinct type; an explicit enumeration conversion (§6.2.2) is required to convert between an enum type and an integral type, or between two enum types ...and... Enumeration logical operators Every enumeration type E implicitly provides the following predefined logical operators: E operator &(E x, E y); E operator |(E x, E y); E operator ^(E x, E y); That is, the logical operators are only defined enum*enum, and to use an integral type in a logical expression with an enum, it should require an explicit cast.
http://www.w3hello.com/questions/-Tutorial-for-Linux-Mono-and-ASP-Net-
CC-MAIN-2018-17
refinedweb
2,340
65.22
<module> error: X is overriding existing Objective-C class - colinmford last edited by gferreira Occasionally I'll try to make a tool that subclasses a vanilla or an Objective-C object like so: class MyWindow(vanilla.Window): ... class MyObject(NSObject): ... and either I'll get this error on the first call in RoboFont... or, more strangely, it might work once, and THEN I'll get the following error: <module> error: MyWindow is overriding existing Objective-C class My question is: Why am I getting this error, and is there a way to subclass a vanilla object in RF? For NSObjects or any subclass of such a NSObject (that is any AppKit object) there can only be one with the same name during a runloop. Try to save your nsObjects in a separate file and import the required classes. Try to give a good unique name: <ToolInitials><Name>fe CFWindow. As a last resort you can use the embedded helper that renames the class. import AppKit from lib.tools.debugTools import ClassNameIncrementer class MyObject(AppKit.NSObject, metaclass=ClassNameIncrementer): pass but never use ClassNameIncrementerin production code, only during testing. good luck! - colinmford last edited by
https://forum.robofont.com/topic/608/module-error-x-is-overriding-existing-objective-c-class/3
CC-MAIN-2019-35
refinedweb
193
55.13
I am working on a header only library findMFBase (). This library is shipped with unit tests which produce exe's. As metabuild system I am using cmake. I generate the sln and vcprojs using: cmake ..\findMFBase -G "Visual Studio 12 2013 Win64". The problem is that C++ ReSharper seems not to be aware of any of the header files of findMFBase project. As long as I am in the .cpp file everything is fine but if I open an header of my project than the #include statements pointing to other header files within the findMFBase library are marked in red. I did the cmake glob 'trick' to have the header files included in the projects. file(GLOB Demo_HEADERS "sql/*.sql" "include/*.h" "include/**/*.h" "include/**/**/*.h" "include/**/**/**/*.h" "src/**/*.h" "src/**/**/*.h" ".travis.yml") add_library(headers SHARED ${Demo_HEADERS} Dummy.cpp) How can I make ReSharper C++ aware of this files? resgards Witold Hello Witold! Thank you for contacting us. Please try to install latest ReSharper Release (), clear cash (ReSharper ->Options -> General -> Clear Caches) and try to reproduce your problem again. Thank you! It seems improved. Most of the includes are not anymore highlighted as missing. However, for instance in file deisotoper.h line 28 #include <base/chemistry/iisotopeenvelope.h> is marked with an error (cannot open source file). However, compiler has not problem building the project. Furthermore, some warnings about possible unused #include directives are missleading. For instance in filter.h #include <boost/cstdint.hpp> on line 14 is marked as unused while on line 32 typedef boost::uint32_t uint32_t; is clearly used. These are just 2 examples, I found going through the solution after updateing ReSharper C++. But there are much more of these types of problems : i.e. file readtableutils.h where presumably <boost/lexical_cast.hpp> file does not exist. To many instances where a header is marked as unused i.e. <string> and just a few lines below there is a function declaration where one of the arguments is of the type std::string. (readtable.h) ReSharper C++ is not perfect. But worse I am bit worried that using it might introduce errors which werent there (for instance if I start removing header includes as suggested). regards Hello Witold! Thank you for your answer! I've created an issue in YouTrack: You can follow this request and vote for it. Thank you! Witold, thanks for the feedback! Looks like deisotoper.h is not included into any .cpp in your solution, so in fact it does not get compiled. The error is valid - if you try to include deisotoper.h say into dummy.cpp, the compiler would report the same error. The same goes for your other example with readtableutils.h. 'Possibly unused #include' issue in filter.h is unfortunately a bug, it is tracked in. As a workaround, if this analysis breaks your code too often, you can disable it until the issue is fixed.
https://resharper-support.jetbrains.com/hc/en-us/community/posts/205990729-ReSharper-C-is-not-aware-of-the-header-files-of-my-project
CC-MAIN-2019-51
refinedweb
488
70.09
I have looked over your plugin and a the code from push and update. My company wanted to auto update a bound master location so that everyone can see changes in the code. They did not desire to use branching and management prefers the "central" repository method like SVN, SourceSafe, etc. So I created a plugin that updates on post-commit on the master location. This may be a bad post-commit operation, but I wanted to know how I could share the plugin if it was a safe idea and others desired it. It seems I should also check if it is bound to a central location. """Update master from bound checkout plugin for Bazaar""" import subprocess from bzrlib import ( branch, bzrdir, errors, ) def update_ """Update the target branch's working copy. Code borrowed from bzr-automirror.""" print 'Trying to call updatemaster on Master repository . . .' # Place the following two in a precommit hook so that it works there #lt = master. #lt.update() try: wt = master. wt.update() except errors: print 'Cannot update Master repository. See other errors.' else: print 'Master repository updated.' branch. Question information - Language: - English Edit question - Status: - Answered - Assignee: - No assignee Edit question - Last query: - 2011-12-05 - Last reply: - 2011-12-08 Hi, I think this would be reasonable to have in the bzr core, as long as it is off by default and reasonably well tested. I don't see anything bad about it. That will probably be easier than asking people to install a separate plugin just to get this fairly small bit of code. Note that wt.update probably won't work if the wt is remote, but we will probably fix that later. The main changes you would need to do to have this merged to trunk are: update_ master' in the branch configuration - don't print, work through the ui - check a variable something like 'commit. - don't squash the errors - but you may need to specifically handle NoWorkingTree or some such - add at least a simple blackbox integration test We will help you with any of those you get stuck on. To contribute a patch to bzr see http:// wiki.bazaar. canonical. com/BzrGivingBa ck Generically to make a new lp project see eg http:// jam-bazaar. blogspot. com/2008/ 05/creating- new-launchpad- project- redux.html but as I say that's probably not the right choice here. Thanks for suggesting this, it will be a nice feature.
https://answers.launchpad.net/bzr-automirror/+question/181004
CC-MAIN-2018-22
refinedweb
410
73.27
What's The Story?What's The Story? Provides a few tools for working with Redux-based codebases. Currently includes: createReducer- declutter reducers for readability and testing createTypes- DRY define your types object from a string createActions- builds your Action Types and Action Creators at the same time resettableReducer- allows your reducers to be reset createReducercreateReducer We're all familiar with the large switch statement and noise in our reducers, and because we all know this clutter, we can use createReducer to assume and clear it up! There are a few patterns I've learned (and was taught), but let's break down the parts of a reducer first: - Determining the initial state. - Running - Knowing when to run. - Injecting into the global state tree Initial StateInitial State Every reducer I've written has a known and expected state. And it's always an object. const INITIAL_STATE = name: null age: null If you're using seamless-immutable, this just get's wrapped. This is optional. const INITIAL_STATE = RunningRunning A reducer is a function. It has 2 inbound parameters and returns the new state. const sayHello = {const age name = actionreturn ...state age name} Notice the export? That's only needed if you would like to write some tests for your reducer. Knowing When To RunKnowing When To Run In Redux, all reducers fire in response to any action. It's up to the reducer to determine if it should run in response. This is usually driven by a switch on action.type. This works great until you start adding a bunch of code, so, I like to break out "routing" from "running" by registering reducers. We can use a simple object registry to map action types to our reducer functions. const HANDLERS =TypesSAY_HELLO: sayHelloTypesSAY_GOODBYE: sayGoodbye The export is only needed for testing. It's optional. Default handlerDefault handler Sometimes you want to add a default handler to your reducers (such as delegating actions to sub reducers). To achieve that you can use DEFAULT action type in your configuration. const HANDLERS =TypesSAY_GOODBYE: sayGoodbyeReduxSauceTypesDEFAULT: defaultHandler With code above defaultHandler will be invoked in case the action didn't match any type in the configuration. Injecting Into The Global State TreeInjecting Into The Global State Tree I like to keep this in the root reducer. Since reducers can't access other reducers (lies -- it can, but it's complicated), my preference is to not have the reducer file have an opinion. I like to move that decision upstream. Up to the root reducer where you use Redux's combineReducers(). So, that brings us back to reduxsauce. Here's how we handle exporting the reducer from our file: INITIAL_STATE HANDLERS That's it. Complete ExampleComplete Example Here's a quick full example in action. // sampleReducer.js// the initial state of this reducerconst INITIAL_STATE = error: false goodies: null// the eagle has landedconst success = {return ...state error: false goodies: actiongoodies}// uh ohconst failure = {return ...state error: true goodies: null}// map our action types to our reducer functionsconst HANDLERS =TypesGOODS_SUCCESS: successTypesGOODS_FAILURE: failureINITIAL_STATE HANDLERS This becomes much more readable, testable, and manageable when your reducers start to grow in complexity or volume. createTypescreateTypes Use createTypes() to create the object representing your action types. It's whitespace friendly. // Types.js`LOGIN_REQUESTLOGIN_SUCCESSLOGIN_FAILURECHANGE_PASSWORD_REQUESTCHANGE_PASSWORD_SUCCESSCHANGE_PASSWORD_FAILURELOGOUT` {} // options - the 2nd parameter is optional OptionsOptions prefix: prepend the string to all created types. This is handy if you're looking to namespace your actions. createActionscreateActions Use createActions() to build yourself an object which contains Types and Creators. const Types Creators =// options - the 2nd parameter is optional The keys of the object will become keys of the Creators. They will also become the keys of the Types after being converted to SCREAMING_SNAKE_CASE. The values will control the flavour of the action creator. When null is passed, an action creator will be made that only has the type. For example: Creators // { type: 'LOGOUT' } By passing an array of items, these become the parameters of the creator and are attached to the action. Creators // { type: 'LOGIN_REQUEST', username: 'steve', password: 'secret' } By passing an object of { key: defaultValue }, default values are applied. In this case, invoke the action by putting all parameters into an object as the first argument. Creators// { type: 'REQUEST_WITH_DEFAULT_VALUES', username: 'guest', password: '123456' } OptionsOptions prefix: prepend the string to all created types. This is handy if you're looking to namespace your actions. resettableReducerresettableReducer Provides a "higher-order reducer" to help reset your state. Instead of adding an additional reset command to your individual reducers, you can wrap them with this. Check it out. // some reducers you have already created// listen for the action type of 'RESET', you can change this.const resettable =// reducers 1 & 3 will be resettable, but 2 won't.first:second: secondReducerthird: ChangesChanges Oct 23, 2019 - 1.1.1Oct 23, 2019 - 1.1.1 FIXUpgrade dependencies FIXAdd more tests DOCSAdd badges May 10, 2018 - 1.0.0 - 💃May 10, 2018 - 1.0.0 - 💃 NEWdrops redux dependency sinc we weren't using it @pewniak747 September 26, 2017 - 0.7.0September 26, 2017 - 0.7.0 NEWAdds ability to have a default or fallback reducer for nesting reducers or catch-alls. @vaukalak NEWAdds default values to createActionsif passed an object instead of an array or function. @zhang-z DOCSFixes typos. @quajo July 10, 2017 - 0.6.0July 10, 2017 - 0.6.0 NEWMakes unbundled code available for all you tree-shakers out there. @skellock & @messense FIXCorrects issue with prefixed action names. @skellock FIXUpgrades dependencies. @messense April 7, 2017 - 0.5.0April 7, 2017 - 0.5.0 NEWadds resettableReducerfor easier reducer uh... resetting. @skellock December 12, 2016 - 0.4.1December 12, 2016 - 0.4.1 FIXcreators now get the prefixas well. @jbblanchet December 8, 2016 - 0.4.0December 8, 2016 - 0.4.0 NEWcreateActions and createTypes now take optional optionsobject with prefixkey. @jbblanchet & @skellock September 8, 2016 - 0.2.0September 8, 2016 - 0.2.0 NEWadds createActions for building your types & action creators. @gantman & @skellock May 17, 2016 - 0.1.0May 17, 2016 - 0.1.0 NEWadds createTypes for clean type object creation. @skellock May 17, 2016 - 0.0.3May 17, 2016 - 0.0.3 DELremoves the useless createAction function. @skellock May 17, 2016 - 0.0.2May 17, 2016 - 0.0.2 FIXremoves the babel node from package.json as it was breaking stuff upstream. @skellock May 17, 2016 - 0.0.1May 17, 2016 - 0.0.1 NEWinitial release. @skellock
https://www.npmjs.com/package/reduxsauce
CC-MAIN-2019-51
refinedweb
1,062
52.26
Now that you know how an icon handler works, let's discuss the interfaces involved in a little more detail. IPersistFile inherits one method, GetClassID , from IPersist . IPer-sistFile contains an additional five methods (see Table 5.1): IsDirty , Load , Save , SaveCompleted , and GetCurFile . Typically, IPersistFile is implemented when you want to read or write information from a file. There are many more scenarios. You are encouraged to learn more about this interface, because in the world of COM, this interface gets some major game time. In our case, however, we are only interested in one method, and that's Load . We do have a small problem with IPersistFile . It's derived from IPersist , and VB does not like interfaces that are derived from anything other than IUnknown or IDispatch . This is because Microsoft believes that Visual Basic objects should always support late binding and, hence, should always be derived from IDispatch . So what do we do? Before addressing this question, let's look at the IDL in Example 5.1, which shows that IPersistFile is derived from IPersist . IPersistFile has inherited the IPersist method GetClassID . [ uuid(0000010c-0000-0000-C000-000000000046), helpstring("IPersist Interface"), odl ] interface IPersist : IUnknown { HRESULT GetClassID([in, out] CLSID *lpClassID); } [ uuid(0000010b-0000-0000-C000-000000000046), helpstring("IPersistFile Interface"), odl ] interface IPersistFile : IPersist {); } Fortunately, the solution to this problem is very easy. We will simulate the inheritance by deriving IPersistFile from IUnknown . Then we add the method from IPersist that would have been a part of the interface via inheritance directly to the definition listing for IPersistFile . In other words, we remove the inheritance. The resulting IDL file is shown in Example 5.2. [ uuid(0000010b-0000-0000-C000-000000000046), helpstring("IPersistFile Interface"), odl ] // original - interface IPersistFile : IPersist interface IPersistFile : IUnknown { // IPersist is added to IPersistFile definition HRESULT GetClassID([in, out] CLSID *lpClassID); //IPersistFile starts here); } Table 5.1 shows the methods supported by IPersistFile . We'll look at only one of these, the Load method, in detail, since it's the only method that we'll actually have to write any code for. The Load method is invoked by the shell immediately after the icon handler is loaded. The Load method is responsible for providing the icon handler with the name of the file that is to be displayed. The documentation for the Load method states that its purpose is to open a specified file and initialize an object from the file contents. This is an important distinction to remember. Load does not load a file. It loads an object based on the contents of a file. The how and the why is left to the implementor. This function is only used for initialization. It does not return the object to the caller. Its syntax is as follows : HRESULT Load(LPCOLESTR pszFileName , DWORD dwMode ); Its parameters are: A pointer to the name of the file for which the shell is requesting an icon to display. The access mode (which is ignored in the case of icon handlers). Because the first parameter actually comes to us as a 4-byte address, we will have to use the CopyMemory API and the undocumented StrPtr function to retrieve the actual string value of the filename. This will be discussed in more detail in the implementation section of this chapter. IExtractIcon is actually IExtractIconA or IExtractIconW , depending upon the circumstance. The original definition of this interface is found in a header file named shlobj.h . For those of you with Visual C++ installations, take a look at the file. It contains most of the interfaces used by the shell with liberal commenting, making it a really good source of information. It also will show you how to define an interface in straight C++. That's right; there's no IDL in this file. Preprocessor definitions in the file are used to determine whether IExtractIcon is being compiled for Windows 9x or Windows NT and Windows 2000. The appropriate interface, IExtractIconA or IExtractIconW , is then used. Typically, interface names ending in "A" denote the Windows 9x version and the interface names ending in "W" are for Windows NT and Windows 2000. We do not have the luxury of a preprocessor in VB. We will have to define both interfaces (each has a distinct GUID) and implement both interfaces, as well. The complete listing for IExtractIcon is shown in Example 5.3. typedef [public] long HICON; typedef [public] long LPSTRVB; typedef [public] long UINT; typedef enum { GIL_SIMULATEDOC = 0x0001, GIL_PERINSTANCE = 0x0002, GIL_PERCLASS = 0x0004, GIL_NOTFILENAME = 0x0008, GIL_DONTCACHE = 0x0010 } GETICONLOCATIONRETURN; [ uuid(000214eb-0000-0000-c000-000000000046), helpstring("IExtractIconA Interface"), odl ] interface IExtractIconA : IUnknown { HRESULT GetIconLocation([in] UINT uFlags, [in] LPSTRVB szIconFile, [in] UINT cchMax, [in,out] long *piIndex, [in,out] GETICONLOCATIONRETURN *pwFlags); HRESULT Extract([in] LPCSTRVB pszFile, [in] UINT nIconIndex, [in,out] HICON *phiconLarge, [in,out] HICON *phiconSmall, [in] UINT nIconSize); } [ uuid(000214fa-0000-0000-c000-000000000046), helpstring("IExtractIconW"), odl ] interface IExtractIconW : IUnknown { HRESULT GetIconLocation([in] UINT uFlags, [in] LPWSTRVB szIconFile, [in] UINT cchMax, [in,out] long *piIndex, [in,out] GETICONLOCATIONRETURN *pwFlags); HRESULT Extract([in] LPWSTRVB pszFile, [in] long nIconIndex, [in,out] HICON *phiconLarge, [in,out] HICON *phiconSmall, [in] UINT nIconSize); } As its name indicates, IExtractIcon is concerned with retrieving the icon to be displayed by the context icon handler. The methods of this interface are shown in Table 5.2. GetIconLocation is used by the shell to retrieve the location and index of an icon from the icon handler. If the icon is in a DLL or EXE, then GetIconLocation returns the filename and the index of the icon as it resides in the resource section of that file; otherwise , the method returns a value of GIL_NOTFILENAME in the pwFlags parameter. Its syntax is: HRESULT GetIconLocation(UINT uFlags , LPSTR szIconFile , INT cchMax , LPINT piIndex , UINT * pwFlags ); Its parameters are the following: [in] Icon state flags. This value is supplied by the shell. [in, out] The address that receives the name and location of the icon file from the icon handler. This is a null- terminated string. [in] Size of the buffer that receives the icon location. This is usually set to the value of MAX_PATH and defines the total number of characters that the icon handler can write to szIconFile . [in, out] The zero-based ordinal position of the icon in the file whose path and name are written to the szIconFile buffer. The icon handler provides the shell with this value if the icon is to be extracted from a file. [in, out] A value from the GETICONLOCATIONRETURN enumeration. This parameter tells the shell how it should handle the icon file that is returned. The first parameter, uFlags , is of no concern to us, so we can skip it for now. It will come into play later when we create namespace extensions (see Chapter 12). The second parameter, szIconFile , is a pointer. If the icon handler uses G etIconLocation (as opposed to Extract ) to provide the icon file, szIconFile should point to a buffer that contains a valid filename upon successful completion. This is a long value. We can't assign a string directly to szIconFile . You should start getting used to the idea of using pointers now. Out of necessity, we will be using pointers to strings, rather than the strings themselves , for most of the book. The third parameter, cchMax , is merely the size of the buffer that contains the icon filename. Upon successful completion, the fourth parameter contains the index of the icon to be displayed. This is the index of the icon as it appears in the resource section of the file specified by szIconFile . The icon handler should assign the fifth parameter one or more of the values from the GETICONLOCATIONRETURN enumeration defined in Table 5.3. These can be OR ed together. Extract is called by the shell after the icon handler supplies a value of GIL_NOTFILENAME for the pwFlags parameter of the GetIconLocation method and is used to provide the location of an icon (or the handle to an icon) that does not reside as a resource in a file. There are various reasons for returning a handle rather than the filename and index at which the icon can be found. For example, in Chapter 12, when we implement a namespace extension, the icons used will reside in an image list. This is for reasons of speed. Repeatedly opening and closing a file to retrieve an icon is very slow. If you have to access a file a few hundred times just to get an icon, you might consider using the Extract method instead. The syntax for Extract is as follows: HRESULT Extract( LPCSTR pszFile , UINT nIconIndex , HICON * phiconLarge , HICON * phiconSmall , UINT nIconSize ); Its parameters are: [in] The icon filename. This is the same value returned by the GetIconLocation method. [in] The icon's index. This is the same value returned by the GetIconLocation method. [in, out] Handle to the large icon. [in, out] Handle to the small icon. [in] Size of the icon being requested by the shell. Icons are always square, so only one dimension needs to be specified. pszFile and nIconIndex are the same values returned by GetIconLocation . If Extract returns S_FALSE (an OLE-defined error), then these values must contain a valid filename/index pair. Otherwise, phiconLarge and phiconSmall should contain valid handles to icons, such as from a call to the Win32 LoadIcon function or the ImageList_GetIcon function in COMCTL32.DLL .
https://flylib.com/books/en/1.107.1.46/1/
CC-MAIN-2020-40
refinedweb
1,572
54.93
5. Paginate results As you might have noticed, the object returned from the LaunchListQuery is a LaunchConnection. This object has a list of launches, a pagination cursor, and a boolean to indicate whether more launches exist. When using a cursor-based pagination system, it's important to remember that the cursor gives you a place where you can get all results after a certain spot, regardless of whether more items have been added in the interim. You're going to use a second section in the TableView to allow your user to load more launches as long as they exist. But how will you know if they exist? First, you need to hang on to the most recently received LaunchConnection object. Add a variable to hold on to this object at the top of the MasterViewController.swift file near your launches variable: private var lastConnection: LaunchListQuery.Data.Launch? Next, you're going to take advantage of a type from the Apollo library. Add the following to the top of the file: import Apollo Then, below lastConnection, add a variable to hang on to the most recent request: private var activeRequest: Cancellable? Next, add a second case to your ListSection enum: enum ListSection: Int, CaseIterable { case launches case loading } This allows loading state to be displayed and selected in a separate section, keeping your launches section tied to the launches variable. Next, in tableView(_:, numberOfRowsInSection:), add handling for the 0 if there are no more launches to load: case .loading: if self.lastConnection?.hasMore == false { return 0 } else { return 1 } Remember here that if lastConnection is nil, there are more launches to load, since we haven't even loaded a first connection. Next, add handling for the tableView(_, cellForRowAt:), showing a different message based on whether there's an active request or not: case .loading: if self.activeRequest == nil { cell.textLabel?.text = "Tap to load more" } else { cell.textLabel?.text = "Loading..." } Next, you'll need to provide the cursor to your LaunchListQuery. The good news is that the launches API takes an optional after parameter, which accepts a cursor. To pass a variable into a GraphQL query, you need to use syntax that defines that variable using a $name and its type. You can then pass the variable in as a parameter value to an API which takes a parameter. What does this look like in practice? Go to LaunchList.graphql and update just the first two lines to take and use the cursor as a parameter: query LaunchList($cursor:String) { launches(after:$cursor) { Build the application so the code generation picks up on this new parameter. You'll see an error for a non-exhaustive switch, but this is something we'll fix shortly. Next, go back to MasterViewController.swift and update loadLaunches() to be loadMoreLaunches(from cursor: String?), hanging on to the active request (and nil'ing it out when it completes), and updating the last received connection: private func loadMoreLaunches(from cursor: String?) { self.activeRequest = Network.shared.apollo.fetch(query: LaunchListQuery(cursor: cursor)) { [weak self] result in guard let self = self else { return } self.activeRequest = nil defer { self.tableView.reloadData() } switch result { case .success(let graphQLResult): if let launchConnection = graphQLResult.data?.launches { self.lastConnection = launchConnection self.launches.append(contentsOf: launchConnection.launches.compactMap { $0 }) } if let errors = graphQLResult.errors { let message = errors .map { $0.localizedDescription } .joined(separator: "\n") self.showErrorAlert(title: "GraphQL Error(s)", message: message) } case .failure(let error): self.showErrorAlert(title: "Network Error", message: error.localizedDescription) } } } Then, add a new method to figure out if new launches need to be loaded: private func loadMoreLaunchesIfTheyExist() { guard let connection = self.lastConnection else { // We don't have stored launch details, load from scratch self.loadMoreLaunches(from: nil) return } guard connection.hasMore else { // No more launches to fetch return } self.loadMoreLaunches(from: connection.cursor) } Update viewDidLoad to use this new method rather than calling loadMoreLaunches(from:) directly: override func viewDidLoad() { super.viewDidLoad() self.loadMoreLaunchesIfTheyExist() } Next, you need to add some handling when the cell is tapped. Normally that's handled by prepare(for segue:), but because you're going to be reloading things in the current view controller, you won't want the segue to perform at all. Luckily, you can override the shouldPerformSegue(withIdentifier:sender:) method to say, "In this case, don't perform this segue, and take these other actions instead." Override this method, and add code that performs the segue for anything in the .launches section and doesn't perform it (instead loading more launches if needed) for the override func shouldPerformSegue(withIdentifier identifier: String, sender: Any?) -> Bool { guard let selectedIndexPath = self.tableView.indexPathForSelectedRow else { return false } guard let listSection = ListSection(rawValue: selectedIndexPath.section) else { assertionFailure("Invalid section") return false } switch listSection { case .launches: return true case .loading: self.tableView.deselectRow(at: selectedIndexPath, animated: true) if self.activeRequest == nil { self.loadMoreLaunchesIfTheyExist() } // else, let the active request finish loading self.tableView.reloadRows(at: [selectedIndexPath], with: .automatic) // In either case, don't perform the segue return false } } Finally, even though you've told the segue system that you don't need to perform the segue for anything in the .loading case, the compiler still doesn't know that, and it requires you to handle the prepare(for segue:). However, your code should theoretically never reach this point, so it's a good place to use an assertionFailure if you ever hit it during development. This both satisfies the compiler and warns you loudly and quickly if your assumption that something is handled in shouldPerformSegue is wrong. Add the following to the switch statement in prepare(for segue:) case .loading: assertionFailure("Shouldn't have gotten here!") Now, when you build and run and scroll down to the bottom of the list, you'll see a cell you can tap to load more rows: When you tap that cell, the rows will load and then redisplay. If you tap it several times, it reaches a point where the loading cell is no longer displayed, and the last launch was SpaceX's original FalconSat launch from Kwajalien Atoll: Congratulations, you've loaded all of the possible launches! But when you tap one, you still get the same boring detail page. Next, you'll make the detail page a lot more interesting by taking the ID returned by one query and passing it to another.
https://www.apollographql.com/docs/ios/tutorial/tutorial-pagination/
CC-MAIN-2020-24
refinedweb
1,053
57.37
Knute Snortum Paul Clapham Tim Cooke Sheriffs: Liutauras Vilda Jeanne Boyarsky Bear Bibeault Saloon Keepers: Tim Moores Stephan van Hulst Ron McLeod Piet Souris Frits Walraven Bartenders: Ganesh Patekar Tim Holloway salvin francis Forum: Swing / AWT / SWT Moving multiple ojects in Java2D Jesse Miller Ranch Hand Posts: 37 posted 9 years ago I have an applet that I am trying to write that should probably be tackled in multiple parts. Currently I have an applet that creates one rectangle and lets the user click on it (and only on it) and move it around the viewing window. Well, I would like to add more rectangles and make them movable as well. I guess it doesn't really matter if they are allowed to overlap, I just need to make sure they can always be moved individually. The end program is going to be used in a flexible environment so I need to be able to create a dynamic number of rectangles at run-time from a reading a variable off of an XML file. My code is attached. Thanks import java.awt.*; import java.awt.event.*; import java.applet.Applet; import java.awt.image.*; /* * This applet allows the user to move a texture painted rectangle around the applet * window. The rectangle flickers and draws slowly because this applet does not use * double buffering. */ public class ShapeMover extends Applet{ static protected Label label; public void init(){ setLayout(new BorderLayout()); add(new SMCanvas()); label = new Label("Drag rectangle around within the area"); add("South", label); } public static void main(String s[]) { Frame f = new Frame("ShapeMover"); f.addWindowListener(new WindowAdapter() { public void windowClosing(WindowEvent e) {System.exit(0);} }); Applet applet = new ShapeMover(); f.add("Center", applet); applet.init(); f.pack(); f.setSize(new Dimension(1000,1000)); f.setVisible(true); } } class SMCanvas extends Canvas implements MouseListener, MouseMotionListener{ Rectangle rect = new Rectangle(0, 0, 50, 50); Rectangle rect2 = new Rectangle(0,0,50,50); BufferedImage bi; Graphics2D big; // Holds the coordinates of the user's last mousePressed event. int last_x, last_y; boolean firstTime = true; TexturePaint fillPolka, strokePolka; Rectangle area; // True if the user pressed, dragged or released the mouse outside of the rectangle; false otherwise. boolean pressOut = false; public SMCanvas(){ setBackground(Color.BLACK); addMouseMotionListener(this); addMouseListener(this); // Creates the fill texture paint pattern. bi = new BufferedImage(5, 5, BufferedImage.TYPE_INT_RGB); big = bi.createGraphics(); big.setColor(Color.GREEN); big.fillRect(0, 0, 7, 7); //big.setColor(Color.GREEN); // big.fillOval(0, 0, 3, 3); Rectangle r = new Rectangle(0,0,5,5); fillPolka = new TexturePaint(bi, r); big.dispose(); } // Handles the event of the user pressing down the mouse button. public void mousePressed(MouseEvent e){ last_x = rect.x - e.getX(); last_y = rect.y - e.getY(); // Checks whether or not the cursor is inside of the rectangle while the user is pressing the mouse. if(rect.contains(e.getX(), e.getY())) updateLocation(e); else { ShapeMover.label.setText("First position the cursor on the rectangle and then drag."); pressOut = true; } } // Handles the event of a user dragging the mouse while holding down the mouse button. public void mouseDragged(MouseEvent e){ if(!pressOut) updateLocation(e); else ShapeMover.label.setText("First position the cursor on the rectangle and then drag."); } // Handles the event of a user releasing the mouse button. public void mouseReleased(MouseEvent e){ // Checks whether or not the cursor is inside of the rectangle when the user releases the mouse button. if(rect.contains(e.getX(), e.getY())) updateLocation(e); else { ShapeMover.label.setText("First position the cursor on the rectangle and then drag."); pressOut = false; } } // This method required by MouseListener. public void mouseMoved(MouseEvent e){} // These methods are required by MouseMotionListener. public void mouseClicked(MouseEvent e){} public void mouseExited(MouseEvent e){} public void mouseEntered(MouseEvent e){} // Updates the coordinates representing the location of the current rectangle. public void updateLocation(MouseEvent e){ rect.setLocation(last_x + e.getX(), last_y + e.getY()); /* * Updates the label to reflect the location of the * current rectangle * if checkRect returns true; otherwise, returns error message. */ if (checkRect()) { ShapeMover.label.setText("Rectangle located at " + rect.getX() + ", " + rect.getY()); } else { ShapeMover.label.setText("Please don't try to "+ " drag outside the area."); } repaint(); } public void paint(Graphics g){ update(g); } public void update(Graphics g){ Graphics2D g2 = (Graphics2D)g; Dimension dim = getSize(); int w = (int)dim.getWidth(); int h = (int)dim.getHeight(); g2.setStroke(new BasicStroke(8.0f)); if(firstTime){ area = new Rectangle(dim); rect.setLocation(w/2-50, h/2-25); firstTime = false; } // Clears the rectangle that was previously drawn. g2.setPaint(Color.white); g2.fillRect(0, 0, w, h); // Draws and fills the newly positioned rectangle. g2.setPaint(strokePolka); g2.draw(rect); g2.setPaint(fillPolka); g2.fill(rect); } /* * Checks if the rectangle is contained within the applet window. If the rectangle * is not contained withing the applet window, it is redrawn so that it is adjacent * to the edge of the window and just inside the window. */ boolean checkRect(){ if (area == null) { return false; } if(area.contains(rect.x, rect.y, 100, 50)){ return true; } int new_x = rect.x; int new_y = rect.y; if((rect.x+100)>area.getWidth()){ new_x = (int)area.getWidth()-99; } if(rect.x < 0){ new_x = -1; } if((rect.y+50)>area.getHeight()){ new_y = (int)area.getHeight()-49; } if(rect.y < 0){ new_y = -1; } rect.setLocation(new_x, new_y); return false; } } Michael Dunn Ranch Hand Posts: 4632 posted 9 years ago so what is the problem? is there a specific requirement to use rectangles, perhaps it might be easier to use JApplet and JLabels with borders Jesse Miller Ranch Hand Posts: 37 posted 9 years ago Well, the problem is as I described. I need to add a dynamic number of rectangles to the panel and make them all movable. I started coding this program using JApplet however it was suggested that I use Java2D instead because of the increased flexibility of Java2D. What is your reason for suggesting JApplet instead? Do you know of an implementation of this program using JApplet? Thanks Michael Dunn Ranch Hand Posts: 4632 posted 9 years ago > Well, the problem is as I described. you still haven't described any problem, only what you want to do, so what is the exact nature of your problem you are stuck on. if it's that you can't do any of it, that's a job to pay someone. JApplets are Swing, which handles flickering better. you can then use a JLabel with a border, which should look like your current rectangle, and is extremely easy to drag around Jesse Miller Ranch Hand Posts: 37 posted 9 years ago Ok, Ive changed my implementation a little bit. I got rid of Java2D and am just using JApplet and Swing right now. Currently I have two problems with my code. I want to create a dynamic number of squares so I have an objects with several functions in it that I need to store x and y positions and such. I would like to create an array of these obects so I can have multiple squares. The code builds without any errors however when I run it I get error when I try to paint the square (Line 162). My second problem is moving the square. I put a simple if statement in to check if the mouse event is within the boundaries of the square that I am trying to move however when I try to drag the square the motion is wrong. Any ideas? import javax.swing.SwingUtilities; import javax.swing.JFrame; import javax.swing.JPanel; import javax.swing.BorderFactory; import java.awt.event.MouseEvent; import java.awt.event.MouseAdapter; import java.awt.*; //This class is not used right now class Global { public static int increment = 0; } public class SwingPaintDemo("Make My Lab"); f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); f.add(new MyPanel()); f.setSize(500,500); f.setVisible(true); } } class MyPanel extends JPanel { // RedSquare redSquare = new RedSquare(); Rectangle rect = new Rectangle(0, 0, 50, 50); // Global counter = new Global(); RedSquare[] redSquare = new RedSquare[5]; //redSquare[0] = RedSquare(); //RedSquare [] redSquare = new RedSquare(); // RedSquare redSquare2 = new RedSquare(); public MyPanel() { setBorder(BorderFactory.createLineBorder(Color.black)); // final int CURR_X = redSquare.getX(); // final int CURR_Y = redSquare.getY(); addMouseListener(new MouseAdapter(){ public void mousePressed(MouseEvent e){ for(int i=0; i<2; i++){ } if (redSquare[0].contains(e.getX(), e.getY())) { moveSquare(e.getX(),e.getY()); } } }); addMouseMotionListener(new MouseAdapter(){ public void mouseDragged(MouseEvent e){ if (redSquare[0].contains(e.getX(), e.getY())) { moveSquare(e.getX(),e.getY()); } } }); } public void moveSquare(int x, int y){ // Current square state, stored as final variables // to avoid repeat invocations of the same methods. final int CURR_X = redSquare[0].getX(); final int CURR_Y = redSquare[0].getY(); final int CURR_W = redSquare[0].getWidth(); final int CURR_H = redSquare[0].getHeight(); final int OFFSET = 1; if ((CURR_X!=x) || (CURR_Y!=y)) { // The square is moving, repaint background // over the old square location. repaint(CURR_X,CURR_Y,CURR_W+OFFSET,CURR_H+OFFSET); // Update coordinates. redSquare[0].setX(x); redSquare[0].setY(y); // Repaint the square at the new location. repaint(redSquare[0].getX(), redSquare[0].getY(), redSquare[0].getWidth()+OFFSET, redSquare[0].getHeight()+OFFSET); } } public Dimension getPreferredSize() { return new Dimension(250,200); } public void paintComponent(Graphics g) { super.paintComponent(g); // g.draw3DRect(10, 20, 100,100, false); //g.drawString("1",10,20); redSquare[0].paintSquare(g); //redSquare2[0].paintSquare(g); } /*public void create_offset(){ Global.increment = Global.increment + 50; }*/ } class RedSquare{ //public int i; private int xPos = 100; private int yPos = 50; private int width = 50; private int height = 50; void paintSquare(Graphics g){ g.setColor(Color.GREEN); g.fillRect(xPos,yPos,width,height); } public boolean contains(int x, int y){ //if((x>= xPos && (x<xPos + 50)) && (y>= yPos && (y<=yPos +50))){ if(x>= xPos && y>= yPos ){ return true; } else return false; } } Jesse Miller Ranch Hand Posts: 37 posted 9 years ago **Correction** the error is on Line 124 of the code I posted. Rob Spoor Sheriff Posts: 21741 102 I like... posted 9 years ago All your array elements are still null. SCJP 1.4 - SCJP 6 - SCWCD 5 - OCEEJBD 6 - OCEJPAD 6 How To Ask Questions How To Answer Questions Jesse Miller Ranch Hand Posts: 37 posted 9 years ago I have tried adding: redSquare[0] = new RedSquare(); redSquare[1] = new RedSquare(); etc... but I get a build error for incorrect syntax. Is this the correct way to initialize the array elements? Michael Dunn Ranch Hand Posts: 4632 posted 9 years ago > but I get a build error for incorrect syntax. Is this the correct way to initialize the array elements? I added the indicated line to the MyPanel constructor, compiled OK and ran OK (as in error free) public MyPanel() { for(int x = 0; x < 5; x++) redSquare[x] = new RedSquare();//<---added here setBorder(BorderFactory.createLineBorder(Color.black)); here's a simple demo of using a JLabel instead of drawing a rectangle/square run the code, then drag the label around the screen import javax.swing.*; import java.awt.*; import java.awt.event.*; class Testing { public void buildGUI() { Dragger d = new Dragger(); JPanel p = new JPanel(null); JLabel lbl = new JLabel(); lbl.setBorder(BorderFactory.createLineBorder(Color.BLACK,3)); lbl.setBackground(Color.RED); lbl.setOpaque(true); lbl.addMouseListener(d); lbl.addMouseMotionListener(d); lbl.setBounds(200,100,75,75); p.add(lbl); JFrame f = new JFrame(); f.getContentPane().add(p); f.setSize(600,400); f.setLocationRelativeTo(null); f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); f.setVisible(true); } public static void main(String[] args) { SwingUtilities.invokeLater(new Runnable(){ public void run(){ new Testing().buildGUI(); } }); } } class Dragger extends MouseAdapter implements MouseMotionListener { Point startPt; JComponent comp; public void mouseMoved(MouseEvent me){} public void mouseDragged(MouseEvent me) { comp.setLocation(comp.getX()+me.getX()-startPt.x,comp.getY()+me.getY()-startPt.y); } public void mousePressed(MouseEvent me) { startPt = me.getPoint(); comp = (JComponent)me.getSource(); } } Jesse Miller Ranch Hand Posts: 37 posted 9 years ago ok great! this seems alot easier using JLabel instead of my method. However I am still faced with the initial problem of adding multiple squares and making them all movable. I have been playing around with the code but I guess I need to read up on JLabel. Is there a quick fix possible? Thanks. Jesse Miller Ranch Hand Posts: 37 posted 9 years ago ok, I got the multiple squares part working. I created an array of panels and assigned the Mouse Listener to all of them so they all are movable individually. Do you see any problems with this implementation? The next part I am working on is displaying a single number on top of each square that I created. No quite sure how to do this but im guess that I can just create a text label under each square panel I created? Will this work? Thanks Michael Dunn Ranch Hand Posts: 4632 posted 9 years ago > I created an array of panels and assigned the Mouse Listener to all of them so they all are movable individually. I don't understand why you'd create an array of panels - I would have thought an array of JLabels would suit multiple squares > The next part I am working on is displaying a single number on top of each square that I created. use a titledBorder Post Reply Bookmark Topic Watch Topic New Topic Boost this thread! Similar Threads MouseMotionListener usage for Jbutton RectangleDemo Messagebox with Mouseevent How to make rectangle drawn to be vissible when a new rectangle is drawn Problem with zoom function More...
https://coderanch.com/t/490243/java/Moving-multiple-ojects-Java
CC-MAIN-2019-26
refinedweb
2,230
57.77
In today’s Programming Praxis exercise, our goal is to determine if one list is cyclically equal to another. Let’s get started, shall we? import Data.List Rather than the provided solution which involves keep track of a bunch of pointers, we use a simple fact of cyclical lists: repeating either list twice produces a list that contains the other one if they are indeed cyclically equal. In order to prevent false positives, we also have to check whether the lengths are equal. cyclic :: Eq a => [a] -> [a] -> Bool cyclic xs ys = length xs == length ys && isInfixOf xs (ys ++ ys) Some tests to see if everything is working properly: main :: IO () main = do print $ cyclic [1,2,3,4,5] [3,4,5,1,2] print $ cyclic [1,1,2,2] [2,1,1,2] print $ cyclic [1,1,1,1] [1,1,1,1] print . not $ cyclic [1,2,3,4] [1,2,3,5] print . not $ cyclic [1,1,1] [1,1,1,1] Advertisements Tags: bonsai, code, cyclic, equality, Haskell, kata, list, praxis, programming April 9, 2013 at 3:55 pm | In Scheme I am not able to append one cyclic list to another, as you did when you wrote (ys ++ ys). Is that because your lists are just normal Haskell lists, not cyclic lists? Maybe I didn’t explain very well, but when I said (1 2 3 4 5) was a cyclic list I meant that it didn’t stop after 5 but instead represented the cycle 1 2 3 4 5 1 2 3 4 5 1 2 3 …. April 9, 2013 at 5:05 pm | @programmingpraxis: Yes, my lists are plain finite lists. I was under the impression that the ‘unique’ part of the cyclic list (i.e. the part that’s repeated over and over) is known and provided as input to the function. If instead you are given two cyclical lists then this version obviously doesn’t work and things get a lot trickier, since Haskell doesn’t normally expose pointers and as such lacks the option to ‘point to the start of the list’, making it impossible to determine the length of the cycle. In Haskell, cycling a list is simply a matter of infinitely and lazily repeating the elements, which provides no indication that the cycle has been completed. Ignoring that little practical problem, the basic idea of the solution can remain the same; you just have to create the two finite lists first. Start at any element, collecting them all until the loop is closed. Then use the solution presented above.
https://bonsaicode.wordpress.com/2013/04/09/programming-praxis-cyclic-equality/
CC-MAIN-2017-22
refinedweb
434
64.95
On Mon, 26 Nov 2007 12:17:46 +0000, linux-mips@linux-mips.org wrote: > Author: Ralf Baechle <ralf@linux-mips.org> Sat Nov 3 02:05:43 2007 +0000 > Commit: 75c0de3513644f9868a14f74b0c4dfec1eb4ffd5 > Gitweb: > Branch: master > > Sibyte SOCs only have 32-bit PCI. Due to the sparse use of the address > space only the first 1GB of memory is mapped at physical addresses > below 1GB. If a system has more than 1GB of memory 32-bit DMA will > not be able to reach all of it. > > For now this patch is good enough to keep Sibyte users happy but it seems > eventually something like swiotlb will be needed for Sibyte. This commit breaks platforms which have real prom_free_prom_memory(). You can reproduce the problem with this patch. diff --git a/arch/mips/qemu/q-mem.c b/arch/mips/qemu/q-mem.c index dae39b5..84cbee2 100644 --- a/arch/mips/qemu/q-mem.c +++ b/arch/mips/qemu/q-mem.c @@ -1,5 +1,9 @@ #include <linux/init.h> +#include <asm/bootinfo.h> +#include <asm/sections.h> +#include <asm/page.h> void __init prom_free_prom_memory(void) { + free_init_pages("prom memory", PAGE_SIZE, __pa_symbol(&_text)); } With this patch, qemu kernel crashes on boot as this: Bad page state in process 'swapper' page:81000020 flags:0x00000000 mapping:00000000 mapcount:1 count:0 Trying to fix it up, but a reboot is needed Backtrace: Call Trace: [<8001691c>] dump_stack+0x8/0x34 [<8005a758>] bad_page+0x6c/0xa4 [<8005af7c>] free_hot_cold_page+0x98/0x1d4 [<80019e44>] free_init_pages+0x94/0xf8 [<80164b3c>] free_initmem+0x10/0x40 [<80010428>] init_post+0x10/0xe8 [<801b988c>] kernel_init+0x2f8/0x328 [<80013220>] kernel_thread_helper+0x10/0x18 If I reverted the commit, this crash does not happen. How I can fix this? --- Atsushi Nemoto
https://www.linux-mips.org/archives/linux-mips/2007-12/msg00236.html
CC-MAIN-2016-40
refinedweb
277
67.65
SUBSCRIBE The SideFX mailing list is a great place to make contact with Houdini users. To subscribe, send us an email with no subject and the word subscribe in the body. sidefx-houdini-list November 2000sidefx-houdini-list@sidefx.com - 72 participants - 110 file conversion 17 years, 5 months Konersman, Bill Konersman, Bill I need to get geometry and camera info from houdini into lightwave format. I don't own Lightwave, I just need that format to get into yet another program. Are there any free standalone translators out there? - 4 participants - 4 comments Motion Blur 17 years, 5 months Silvina Rocca Silvina Rocca Please help somebody!!!!! We have a string being animated using a skeleton. Deform motion blur doesn't work, transformation blur doesn't work (wasn't expecting it to...), velocity blur doesn't work (hey, trying everything...). We are not using anti-aliasing in the render command. We've tried the trail SOP/compute velocity thing with no results... If anybody knows a solution, please please please answer, deadline is upon us!!!! Thanks! Silvina Rocca Animator Spin Productions (416) 504 8333 - 8 participants - 7 comments Extracting point information 17 years, 6 months bob green bob green Hi, I'm trying to write a noise/displacement VEX script that is a aware of the geometry that it is deforming. Is it possible to extract attributes from points other than P from within a VEX SOP? If so, what is the syntax? bob green frantic films - 24 participants - 46 comments Tk scripting and Houdini 17 years, 6 months Ammon Riley Ammon Riley Hi I ran into a bit of a problem while writing a Tk script for use in Houdini. I looked at the examples in the User Guide, and in $HFS/houdini/scripts. However, these examples weren't overly useful, because the only hscript command they ever used was "echo", whereas I needed to use a command that takes parameters (chkey). The documentation is rather sketchy in this area as well. After a few days of bashing my head against the idiosyncracies of Tk, I've figured out what's going on. In the hopes of saving someone a headache: The following three lines demonstrates the problem I was having (I'm really using chkey, as opposed to echo, but for debugging purposes...): set chan tx hscript set chan = $chan puts [hscript echo -f 1 -v {`1 - ch("/obj/sky/$chan")`} /obj/sky/$chan ] When those are put into a file and run (using a fresh Houdini), you get the following output: / -> tk /home/ammon/test.tk -f 1 -v {1} /obj/sky/tx Note the {}'s around the 1? Without them, things don't work in Tk. With them, things don't work in Houdini. Aside: I think the documentation may be a little off here. It says that the " and $ characters have special meaning, and that unless encapsulated in {}, things wouldn't work. However, try the following: set chan tx puts [ hscript echo 1 - ch("/obj/rleg/$chan") ] The resulting output is: / -> tk /home/ammon/test.tk 1 - ch ( "/obj/rleg/tx" ) So the " characters come through just fine. However, when you try and use the backticks: set chan tx hscript set chan = $chan puts [ hscript echo `1 - ch("/obj/rleg/$chan")` ] You get a bracing error: / -> tk /home/ammon/test.tk Expression error: Bracing error However, substitute `1 + 1` for the `1 - ch()` expression, and things work fine. It seems that the combination of embedded " characters within the backticks causes a problem, even without any variables -- the same bracing error occurs with this: puts [ hscript echo `1 - ch("/obj/sky/tx")` ] Further more, this also gives you a bracing error: puts [ hscript echo `ch("/obj/sky/tx")` ] But whereas wrapping the former in braces solves the problem for the former, it doesn't for the latter -- {`ch("/obj/sky/tx")`} also gives a bracing error. Yet {` ch("/obj/sky/tx")`} doesn't. If anyone (who has followed this far) can explain to me just how Tk goes about parsing things of this nature, I'd me much obliged if you did so. The only thing I've really been able to conclude is that any time you do an expression evaluation with backticks (that contains a quoted string), you have the option of using the braces, or dying with a Bracing error. If you use the braces, then the result is _also_ encapsulated in the stupid braces. The way around this is to do all the necessary expression evaluations before hand so you can a) strip out all the extraneous braces in Tk and b) build up the command line values with all the variables are in the Tk namespace, as opposed to having some in Houdini, and some in Tk: # Tk namespace... set chan tx # To Houdini namespace... hscript set chan = $chan # Get the current state -- using things in Houdini namespace, but # saving the return value in Tk namespace. set val [ hscript echo {`1 - ch("/obj/sky/$chan")`} ] # However, that leaves {}'s around the value, so we have to trim them # out -- still in Tk namespace, remember. set val [ string range $val 1 [ expr [ string length $val ] - 2 ] ] # Now that we have all the values we need hscript echo chkey -f 1 -v $val /obj/sky/$chan Which gives the output we're looking for: / -> tk /home/ammon/test.tk chkey -f 1 -v 1 /obj/sky/$chan Ammon -- Ammon Riley * Technical Director * Toybox ammon(a)compt.com * 416.585.9995 - 3 participants - 2 comments linux graphics site 17 years, 6 months Agent Drek Agent Drek A fun site (lists houdini) which I don't think has been posted here before. -- Derek Marshall Smash and Pow Inc > 'digital plumber' - 7 participants - 6 comments Extracting Point information 17 years, 6 months bob green bob green Hmmmmm. I gots v4.0.4 Time to look for a patch/upgrade. Thanks for all the help guys! bob green frantic films - 7 participants - 10 comments Attach the HEAD into Body?? 17 years, 6 months Loi NGUYEN Loi NGUYEN Could someone please help me to bridge the HEAD and HANDS into the Body?.. Also, the HEAD is not smooth, it had so many seams. I have zipped the Hip file and place it at: juston.com in the MYFILES section: -------- login: loinguyen1 passwd: healing ------- I've look at the demo tutorial, but when i try it, it doesn't work on my model... Any help or suggestion is greatly appreciated. many Thanks. PS. does anyone know how to snap a point to another point?? (I have this problem when modelling the HEAD.) Loi _________________________________________________________________________ Get Your Private, Free E-mail from MSN Hotmail at. Share information about yourself, create your own public profile at. - 6 participants - 8 comments
https://www.sidefx.com/mailing-list/sidefx-houdini-list@sidefx.com/2000/11/
CC-MAIN-2018-26
refinedweb
1,129
62.07
Last time, we saw how to fetch the image URL of web comics in Clojure by using regular expressions and some Java objects. A short-coming of that program was that it could only fetch an image URL. Some web comics (such as Xkcd) have a small tooltip text that appears when you hover the mouse cursor over the image on the web site; this text is often an integral part of the comic and we would like to fetch it as well. Today’s program In this article, we will modify our original program to fetch the latest Xkcd with its tooltip text. To keep things interesting for the avid Clojure apprentice, we will use multi-methods for this purpose. We will also see more aspects of Clojure’s integration with Java by using the HTML Parser library. Language changes Recently, Clojure had a couple of changes in its core library that will make it into 1.0. For this tutorial to be useful with future stable versions of Clojure, I will now be using the SVN version of Clojure instead of the latest stable release. The changes are not many, but they do affect the program from the first article. Here is a list of things you’ll need to change: Regular expression literals are automatically escaped There was only one occurrence in the original program of a regular expression with a backslash in it, the :regex attribute for Penny Arcade. Delete one backslash to make the line look like this: :regex #"images/\d{4}/.+?(?:png|gif|jpg)" Binding syntax is done inside vectors everywhere Some people may have found it inconsistent that the syntax of certain binding-introducing forms was (form [var val]) while other forms didn’t have the square brackets. This has now been addressed and all bindings are done inside square brackets. There were two such occurrences in the original program: the with-open call in fetch-url and doseq at the end of the program. Change these two lines to the following: (with-open [stream (. url (openStream))] (doseq [comic *comics*] Multi-methods Multi-methods are one of Clojure’s ways to create polymorphic code. There are two parts to them: - The declaration: We create a new multi-method with the defmultimacro. We specify the name of the multi-method and a dispatch function. The dispatch function will be called with all the arguments passed to the multi and its return value will be used to choose which method to execute. An optional third argument specifies a default dispatch value; if it’s omitted, :defaultis assumed. - The methods: They’re called multi-methods because they can have multiple implementations. You define a method with the defmethodmacro. You must supply the name of the multi, the dispatch value, the parameter vector and the body. To make this clearer, here is a simple example. report is a multi-method that is passed a collection and returns "I am empty" if calling the dispatch function empty? on its argument returns true and "I have elements" otherwise. (defmulti report empty?) (defmethod report true [x] "I am empty") (defmethod report :default [x] "I have elements") (report "") ; "I am empty" (report [1 2 3]) ; "I have elements" fetch-comic We will declare a fetch-comic multi-method that takes a comic and dispatch on its :type value. The default method will be our old regular expression function, which we'll transform into a method. (defmulti fetch-comic :type) Now, let's convert image-url to a method; the name was changed to fetch-comic because we don't simply fetch an URL anymore, we may get other information as well. Don't forget to update the call in the doseq at the end of the program. Methods cannot have documentation strings, so we've had to remove it. (defmethod fetch-comic :default [comic] (let [src (fetch-url (:url comic)) image (re-find (:regex comic) src)] (str (or (:prefix comic) (:url comic)) image))) The program should work just like it did before. Fetching image URL and tooltip With our old function transformed into a method, we are ready to tackle the tooltip-fetching method. Although nothing stops us from using regular expressions for this task, we will use a Java library specifically designed for HTML parsing and extraction. The method is fairly short (12 lines), but I must first introduce some concepts that will be used and talk about the HTML Parser library. - Refs: Despite being a functional language, Clojure recognizes that there are situations when having data that changes is necessary. Refs are one way to do so: refs are basically variables that hold the address to an object. When you modify the object, what actually happens behind the scene is that a new object is created and your ref will now point to the address of that new object, leaving the old one intact. proxy: proxyis a macro that extend a class, implements interfaces and returns an instance of that new class. - HTML Parser: a Java library to parse and extract content from an HTML document. The org.htmlparser.Parserconstructor fetches the HTML online if its argument looks like an URL. The library specifies many built-in filter classes, though none allow using a regular expression to search for a particular attribute in a tag. We will therefore use the visitor pattern method provided. visitAllNodesWithtakes a NodeVisitorargument, and we'll use proxyto implement its visitTagmethod. (import '(org.htmlparser Parser) '(org.htmlparser.visitors NodeVisitor) '(org.htmlparser.tags ImageTag)) (defmethod fetch-comic :tooltip-comic [comic] (let [img-tags (ref []) parser (Parser. (:url comic)) visitor (proxy [NodeVisitor] [] (visitTag [tag] (when (and (instance? ImageTag tag) (re-find (:regex comic) (.getImageURL tag))) (dosync (alter img-tags conj tag)))))] (.visitAllNodesWith parser visitor) [(.getImageURL (first @img-tags)) (.getAttribute (first @img-tags) "title")])) That may seem like a lot of code, but there's actually a lot of things you know in there. Let's look at it in detail: import: we went over this in the first article, it just imports some names into the current namespace. We import some classes from HTML Parser to keep our code a little more succinct. defmethod: we've just seen this: create a method for the multi-method fetch-comicfor when the dispatch value is :tooltip-comic. let: we've seen letbefore also: it creates a new scope and establishes some bindings within that scope. img-tags (ref []): refreturns a reference that points to its argument. We will store the image tags that fit our search criteria into img-tags. We'll see in a minute why we need a "mutating" variable for this purpose. parser (Parser. (:url comics)): call the Parserconstructor with the URL of the comic. visitor (proxy [NodeVisitor] []): this is the really interesting part. proxywill sub-class NodeVisitorand return an instance of this new class. We implement the visitTagmethod: it takes one argument, a tag and has a voidreturn value. This is why we need to store the tags into a ref. When that tag is an image tag and that its srcvalue matches our regular expression, we conj it to img-tags (dosync (alter img-tags conj tag)): dosyncexecutes the expressions in its body in a transaction. alter(which must be called within a transaction) modifies the value pointed to by img-tagsby conjing the current tag onto the value referenced by img-tags (.visitAllNodesWith parser visitor): visit all the nodes of parserusing our custom visitor object. When this has completed, img-tagsshould have the image tag of the comic. (.getImageURL (first @img-tags)): get the URL of the first image tag. @img-tagsis syntactic sugar for (deref img-tags); it returns the value referenced by the ref. getImageURLreturns the complete URL of the image, we won't need a prefix like we did with the other method. (.getAttribute (first @img-tags) "title"): getAttributereturns the value of an arbitrary attribute of a tag. The tooltip text of a comic is in the titletag. Data The final step is to add Xkcd to our *comics* vector: {:name "Xkcd" :url "" :regex #"comics" :type :tooltip-comic } Running the script To run the script, you will need to include HTML Parser in your class path: $ java -cp $HOME/src/clojure/clojure.jar:$HOME/src/htmlparser1_6/lib/htmlparser.jar \ clojure.lang.Script comics2.clj Penny-Arcade: We The Robots: Xkcd: ["" "I call Rule 34 on Wolfram's Rule 34."] Full program You can download the full program here Special thanks to Chouser for proof reading a draft of this post. November 28, 2008 at 12:17 am | Thanks, this is helpful. I wonder if in a future installation you could cover various methods of loading in new code. The explanation of “import” is helpful, but it would be good to have more detail about how that ties into the classpath, for those of us who have never heard the term before. I’ve seen “use” being used in a similar way to “import”–how do the two differ? And what about “load-file” and “require” that we’ve seen before in Lisp? How does clojure decide where to look for the files you try to load or require since there’s no load-path variable? And how would you wrap all this up in a shell script so that users of your program don’t have to type a ridiculously long invocation just to get things going? Just some thoughts from someone who’s just getting started with clojure. Thanks! November 28, 2008 at 8:29 am | Hello Phil, Like I said in the first installment of this series, I will try to include parts that discuss the questions and/or comments of the commenters. I’ll look into your suggestion for a future article, though probably not the next one, as I want to get into agents. Thanks for the input. November 30, 2008 at 2:48 pm | Thanks for the good article, my only concern is a formal problem: it’s hard to read the ‘Running the script’ part. The end of the line has been clipped and the output of the command is confusing as there is no clean separation from input. November 30, 2008 at 4:23 pm | Thank you for the comment Attila Babo; I added a line break + backslash so that the whole input visible. January 12, 2009 at 1:38 am | Hi Vince, I enjoyed this. Nice to see a good practical use of refs. However, I was confused throughout about this: (defmulti fetch-comic :type) You didn’t explain the “type” part. The reader was led to believe that a comic had certain properties: name, url, regex, and (optionally) prefix. Nothing about type! It all sort of made sense in the end, but a paragraph about your intentions would have been helpful. I’m still confused about the :default…
http://gnuvince.wordpress.com/2008/11/18/fetching-web-comics-with-clojure-part-2/
crawl-002
refinedweb
1,807
62.58
A sloop is a great circle on a sphere. A shalfloop is an oriented sloop. It is always paired with a shalfloop whose supporting Sphere_circle is pointing in the opposite direction. The twin() member function returns this shalfloop of opposite orientation. Each Nef_polyhedron_S2 can only have one sloop (resp. two shalfloops). The figure below depicts the relationship between a shalfloop and sfaces on a sphere map. #include <CGAL/Nef_polyhedron_S2.h> The following types are the same as in Nef_polyhedron_S2<Traits>. There is no need for a user to create a SHalfloop explicitly. The class Nef_polyhedron_S2<Traits> manages the needed shalfloops internally. CGAL::Nef_polyhedron_S2<Traits>::SFace CGAL::Nef_polyhedron_S2<Traits>::Sphere_circle
http://www.cgal.org/Manual/3.5/doc_html/cgal_manual/Nef_S2_ref/Class_Nef_polyhedron_S2-Traits---SHalfloop.html
crawl-003
refinedweb
109
51.85