text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
30 May 2013 22:08 [Source: ICIS news]
MEDELLIN, Colombia (ICIS)--A naphtha reformer at Petroleos de Venezuela’s (PDVSA) Puerto La Cruz refinery remains offline following a glitch in the unit’s gas compressor four days ago, the company said on Thursday.
According to the state-run oil company, the reformer at ?xml:namespace>
Restart of the reformer will begin once the engine motor has been repaired, the company said.
PDVSA would not say when the work will be complete. However, the company guaranteed that domestic fuel supplies would not be affected while the unit is offline.
The Puerto de la Cruz refinery in the northeastern state of Anzoategui has a processing capacity of 200,000 bbl/day and produces gasoline, jet fuel, diesel, light naphtha, liquefied petroleum gas (LPG) and paraffin.
The company earmarks 46% of output for the domestic market; the remainder is exported to the South and Central America, the Caribbean | http://www.icis.com/Articles/2013/05/30/9673920/naphtha-unit-remains-offline-at-venezuelas-puerto-la-cruz.html | CC-MAIN-2015-14 | refinedweb | 155 | 50.16 |
Suppose we have a Prolog source file
ex.pl containing:
% ex.plforeign(f1, p1(+integer,[-integer])). foreign(f2, p2(+integer,[-integer])). foreign_resource(ex, [f1,f2]). :- load_foreign_resource(ex).
and a C source file
ex.c with definitions of the functions
f1 and
f2, both returning
SP_integer and having an
SP_integer as the only parameter. The conversion declarations in
ex.pl state that these functions form the foreign resource
ex. Normally, the C source file should contain the following
two line near the beginning (modulo the resource name):
#include <sicstus/sicstus.h> /* ex_glue.h is generated by splfr from the foreign/[2,3] facts. Always include the glue header in your foreign resource code. */ #include "ex_glue.h"
To create the linked foreign resource, simply type (to the Shell):
% splfr ex.pl ex.c
The linked foreign resource ex.so (file suffix .so
is system dependent) has been created. It will be dynamically linked by
the directive
:- load_foreign_resource(ex). when the file
ex.pl is loaded. For a full example, see Foreign Code Examples.
Dynamic linking of foreign resources can also be used by runtime systems. | https://sicstus.sics.se/sicstus/docs/4.2.3/html/sicstus.html/Creating-the-Linked-Foreign-Resource.html | CC-MAIN-2015-35 | refinedweb | 183 | 54.08 |
If you’ve played around with Python and you’re confident writing code, then you may be ready to step it up and try developing your own Python-powered website!
While Python is well-known for its capabilities in data analysis and machine learning, it is also used to run some of the largest websites on the internet, including Reddit, Netflix, Pinterest, and Instagram.
Choosing a Framework: Django or Flask?
If you want to try out some Python web development, you first have to choose between two frameworks: Flask or Django. They both do the same thing – they convert your Python code into a fully-functioning web server.
Flask is a minimalist version in which everything is an optional extra you can choose, whereas Django comes with everything already “plugged in” and has pre-defined libraries, techniques, and technologies that you use out-of-the-box.
Flask is good for beginners and for quickly building small/medium applications. Django is arguably better for large enterprise applications, but also requires a deeper understanding.
For this tutorial, we’ll start with Flask, but you can always hop over to Django in the future once you’re comfortable with the basics.
Installing Flask
As with most libraries in Python, installation is a dream – just install it with pip:
$ pip install Flask
That’s it! Onto the next section.
Client and Server
Before we begin developing our first Flask web app, we need to clear up the roles of the client and the server. The client is a web browser, like Google Chrome or Firefox. When you type in a web address and press enter, you’ll send a request to a server that will “serve” the website you want.
The server will respond to your request with the website (in the form of HTML, CSS, and JavaScript) and then render this request in your web browser. HTML defines the structure of the website, CSS gives it styling, and JavaScript is the logic that executes in your browser.
Flask is a server-side framework or “back-end” as it’s often called. You program the bit of code that receives a request, builds a web page, and then responds with the necessary HTML (and optional CSS and JavaScript).
The discipline of client-side development or “front-end” is concerned with how the website behaves in the browser after it is received from the server. We will not focus on this as client-side development is pretty much only done in JavaScript.
Your First Flask Web App
Let’s begin with the time-honored tradition of a “Hello World” application. This example is a slightly modified version of the Hello World app in the Flask documentation.
from flask import Flask # Import Flask app = Flask(__name__) # Create a Flask "app" instance @app.route("/") # Instructs that the default url of should trigger this code @app.route("/index") # Instructs that the url of should also trigger this code def index(): return "<h1>Hello World!</h1>" # The HTML to return to the browser if __name__ == '__main__': # Startup stuff app.run(host='127.0.0.1', port=5000) # Specify the port and url
Run the code and then go to Google Chrome (or your browser of choice) and type in “”, without the quotes. You should see a delightful “Hello World” appear, welcoming you to the world of web development.
A Scalable Structure for Flask Web Apps
While having all of the code in a single file is a quick and exciting introduction to Flask, you will probably want to split your code into separate files to make it more manageable.
To help you in this, I’ve created a simple structure you can use for your Flask applications. Head over to GitHub to grab the source code for yourself.
This structure has a “start.py” script that you run to fire-up your webserver. All of the Python code is inside a folder called “code”. This folder is where your “routes.py” file lives. It contains the definitions for requesting web URLs and how your app responds when they are.
If you have any Python logic such as calculating stats or machine learning models, then these should also go inside the “code” folder.
The “templates” folder houses the HTML files that will be returned by the server. They’re called templates because you can optionally insert values to change the text and graphics at runtime.
The “static” folder contains code like CSS and Javascript that does not change at runtime.
Setting Up Deployment
With what we’ve discussed so far, you’ll be able to create a web app that runs on your computer, but if you try and access your website from another computer, you’ll run into difficulties. This is the topic of deployment.
First, we need to adjust the port on which your website presents itself. We want to set the host to “host=0.0.0.0” and the port to “port=80”. Setting the host and port makes your website visible on the IP address of your host computer.
For example, if you run your Flask web app on a Raspberry Pi with an IP address of 192.168.1.10, then you can navigate to that IP address in the web browser of a separate computer, and you’ll see your web page served up for you. This is a great foundation for IoT projects and live sensor dashboards.
We define that “port=80” as this is the default port for communication over the HTTP protocol. When you request a URL in your browser over HTTP, it knows to go to port 80, so you don’t have to state it explicitly in the web address.
For all of this to work, you’ll have to be on the same network as the computer running your web server, and you’ll need to make sure that the firewall on your host computer (if you have one) is opened up to requests on port 80.
Next Steps and Going Further
If you want to take a deep-dive into Flask web development, then I strongly recommend the Flask Mega Tutorial by Miguel Grinberg. It’s very detailed and is an invaluable source for learning Flask. | https://maker.pro/custom/tutorial/an-introduction-to-the-python-flask-framework | CC-MAIN-2019-35 | refinedweb | 1,040 | 70.23 |
Table of Contents
Introduction
The goal of these coding conventions is to encourage consistency within the code, rather than claiming some particular style is better than any other. As such, the main guideline is:
- When modifying a piece of code, try to follow its existing style. In particular:
- Primarily, try to match the style of the functions that you're editing (assuming it's at least self-consistent and not too bizarre), in order to avoid making it less self-consistent.
- Secondly, try to match the style of the files that you're editing.
- Then, try to match the style of the other code in the subdirectory you're editing.
- Finally, try to match the global guidelines discussed on this page.
Our code is currently not entirely consistent in places, but the following guidelines attempt to describe the most common style and are what we should converge towards. (Obviously we always want clean, readable, adequately-documented code - lots of articles and books already talk about how to do that - so here we're mostly describing minor details.)
Common
- Use the US variant of English when choosing variable names. For example, use 'color' instead of 'colour'.
C++
Creating new files
- All source files (.cpp, .h) must start with the following GPL license header, before any other content:replacing
/* Copyright (C) 2018 Wildfire Games. * This file is part of 0 A.D. * * 0 A.D. is free software: you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation, either version 2 of the License, or * (at your option) any later version. * * 0 A 0 A.D. If not, see <>. */
2018with the year that the file was last updated.
Exception: Code in
source/lib/(and a few other files) should use the MIT license instead:/* Copyright (C) 2018 Wildfire. */
- Wrap header files in include guards, using the name
INCLUDED_filename, e.g. the file
Foo.hshould say:
#ifndef INCLUDED_FOO #define INCLUDED_FOO ... #endif // INCLUDED_FOO
- All source files must have the
svn:eol-styleproperty set to
native
Formatting
- Use tabs for indentation, not spaces.
- For any alignment within a line of code (as opposed to indentation at the start), use spaces, not tabs.
- Indent braces and use whitespace likeException: Code in
int CExampleObject::DoSomething(int value) { if (value != 0) { Prepare(); m_Value = value; } for (int i = 0; i < value; ++i) DoSomethingElse(i, (unsigned int)value); return value; }
source/lib/omits the space before the '
(' in statements like "
if(...)", "
while(...)", etc.
- Try to avoid very wide lines. Typical wrapping points are 80 characters, or 120, or 132, etc. (There's no strict limit - aim for whatever seems most readable.)
- Write switch statements like
switch (n) { case 10: return 1; case 20: foo(); // fall through to next case [this should be explicit if you don't end with break or return] case 30: bar(); break; case 40: { int i = n*2; // [need the extra {...} scope when declaring variables inside the case] return i; } case 50: { int z = n*2; n = foobar(n, z); break; // [place breaks inside the brackets] } default: debug_warn(L"invalid value for n"); // [only do this kind of warning if this case is an engine bug] }
Error reporting
See Logging for more info.
- For engine bugs (that is, error cases which need to be fixed by C++ developers, and which should never be triggered by users or modders (even if they write invalid data files)), use "
debug_warn(L"message")" to report the error. (This pops up an ugly dialog box with stack trace and continue/debug/exit buttons.)
- For error cases that could be triggered by modders (e.g. invalid data files), useThis gets display on screen in red, and saved in the log files.
LOGERROR("Failed to load item %d from file %s", i, path);
LOGERRORtakes
printf-style format strings (specially cppformat). Exception: Code in
source/lib/can't use
LOGERROR(it can only use things defined in
source/lib/).
- The engine should try to cope gracefully with
LOGERRORcases, e.g. abort loading the current file; it should never crash in those cases.
Documentation
- Use Doxygen comments (explained here as JavaDoc style), e.g.
/** * A dull object for demonstrating comment syntax. */ class CExampleObject { /** * Sets the object's current value to the passed value, if it's non-zero. * * @param v the new value to set it to, or 0 to do nothing. * * @return the value that was passed in. */ int DoSomething(int v); /// Current value (always non-zero) int m_Value; };
- Try not to repeat class names or function names in the descriptions, since that's redundant information.
- Try to avoid very wide lines. As wrapped block comments are extremely unreadable. Always try to keep each line of those below 80 chars (counting tabs as 4 chars).
- Don't need to bother documenting every line of code or every member function or every member variable; only when it'll add to a competent reader's understanding of the program.
Strings
- Use
CStrinstead of
std::string. Use
CStrWinstead of
std::wstring. (These are subclasses of
std::[w]stringwith various extra methods added for convenience.)
- Exception:
source/lib/and
source/simulation2/and
source/scriptinterface/tend to prefer
std::[w]stringinstead.
CStr8is an alias for
CStr. Prefer to use
CStr.
- Compare strings using
==(not e.g. "
a.compare(b) == 0"). Compare to literals directly (e.g. "
someCStrVariable == "foo"", not "
someCStrVariable == CStr("foo")").
- For portability, use the following formats for printf-style functions:
printf("%s", "char string"); printf("%ls", L"wchar_t string"); wprintf(L"%hs", "char string"); wprintf(L"%ls", L"wchar_t string");
- In AtlasUI, you should prefer the wxWidgets API and construct strings like this (see the wxWiki for more examples):
wxString str = _T("SomeUnicodeSTRING"); // Don't use wxT() directly because it breaks OS X build wxString translatedStr = _("A string that may be translated"); // Translated string for UI purposes // Sometimes you have to pass messages to the engine and need C++ strings: std::string cppStr = str.c_str(); std::wstring cppWStr = str.wc_str(); // Don't use c_str() because it breaks OS X build
Preprocessor instructions
The
#include instructions, placed after the license notice, should follow a consistent order (which is currently not the case). Between each section in the following list, we usually add an empty line.
- The first non-comment line of any source file must be
#include "precompiled.h"
- For
.cppfiles that contain function implementations, headers that declare those functions should be included here. For instance,
#include "Simulation2.h"for
Simulation2.cpp, or
#include "ICmpPosition.h"for
CCmpPosition.cpp.
- Include the rest of the files that come from 0 A.D. Write the complete path relative to
source/, for instance
#include "simulation2/Simulation2.h". Use case-insensitive alphabetical order.
- Include system libraries, in case-insensitive alphabetical order.
Example, for the file
source/simulation2/Simulation2.cpp (this is not the actual list, but an example that should cover all cases):
#include "precompiled.h" #include "Simulation2.h" #include "graphics/Terrain.h" #include "lib/file/vfs/vfs_util.h" #include "lib/timer.h" #include "maths/MathUtil.h" #include "ps/Loader.h" #include "ps/lowercase.h" #include "ps/Profile.h" #include "ps/XML/Xeromyces.h" #include "scriptinterface/ScriptInterface.h" #include "scriptinterface/ScriptRuntime.h" #include "simulation2/components/ICmpAIManager.h" #include "simulation2/components/ICmpCommandQueue.h" #include "simulation2/components/ICmpTemplateManager.h" #include "simulation2/MessageTypes.h" #include "simulation2/system/ComponentManager.h" #include "simulation2/system/ParamNode.h" #include <iomanip> #include <string>
Misc
- In header files, avoid
#includeand use forward declarations wherever possible.
- Class names are UpperCamelCase and prefixed with
C, e.g.
CGameObject. Member functions are UpperCamelCase, e.g.
CGameObject::SetModifiedFlag(...). Member variables are UpperCamelCase prefixed with
m_, e.g.
CGameObject::m_ModifiedFlag. Files are named e.g.
GameObject.cpp,
GameObject.h, usually with one major class per file (possibly with some other support classes in the same files). Local variables and function parameters are lowerCamelCase. Structs are treated similarly to classes but prefixed with
S, e.g.
SOverlayLine.
- Write pointer/reference types with the symbol next to the type name, as in
void example( int* good, int& good, int *bad, int &bad );
- Use STL when appropriate.
- Don't use RTTI (
dynamic_castetc). Exception:
source/tools/atlas/AtlasUI/can use RTTI.
- Avoid global state: global variables, static variables inside functions or inside classes, and singletons.
- When a module needs access to objects from outside its own environment, prefer to pass them in explicitly as arguments when instantiating that module, rather than making the objects global and having the module reach out to grab them.
- When unavoidable, global variables should be named with a
g_prefix.
- Prefer global variables over singletons, because then they're not trying to hide their ugliness.
- Use
nullptrfor pointers (instead of
NULL).
- Don't do "
if (p) delete p;". (That's redundant since "
delete nullptr;" is safe and does nothing.)
- If deleting a pointer, and it's not in a destructor, and it's not being immediately assigned a new value, use "
SAFE_DELETE(p)" (which is equivalent to "
delete p; p = nullptr;") to avoid dangling pointers to deleted memory.
- Be sure to be aware of Code And Memory Performance guidelines
- Use "for range" loop instead of "std::for_each".
// Avoid std::vector<T> anyVector; std::for_each(anyVector.begin(), anyVector.end(), [] (const T& element){ // code } // Better for (const T& element : anyVector) { // code }
- Reminding the default value of parameters, if any, in a function definition can be useful.
int Increase(int param, int optionalParam = 1); //... int Increase(int param, int optionalParam /* = 1 */) { return param + optionalParam; }
- In
c++11, it is possible to use the
autotype-specifier, wherein the appropriate data-type is determined at compile time. Although potentially useful, overuse can cause serious problems in the long-run. Therefore, this code feature should be used in moderation. Specifically:
- It should be clear from reading the code what type the
autois replacing. (Add a comment if necessary.)
- It should only ever be used as a replacement for an iterator type.
- It should only be used if the data-type-specifier it is standing in for is long. As a rule of thumb, if the line would be shorter than your chosen suitable line width (see convention on avoiding wide lines under formatting above) without it, don't use it.
- Favour early returns where possible. The following:is better when written like:
void foo(bool x) { if (x) { /* lines */ } }
void foo(bool x) { if (!x) return; /* lines */ }
JavaScript
- Use the same basic formatting as described above for C++.
- Use roughly the same Doxygen-style comments for documentation as described above for C++. (But we don't actually run Doxygen on the JS code, so there's no need to make the comments use technically correct Doxygen syntax.)
- Don't omit the optional semicolons after statements.
- Use quotes around the key names in object literals:
let x = 100, y = 200; let pos = { "x": x, "y": y };
- Create empty arrays and objects with "
[]" and "
{}" respectively, not with "
new Array()" and "
new Object()".
- Global variables and constants are named with a
g_prefix and the rest of the name in CamelCase, ie.
var g_ParsedDataor
const g_BarterActions.
- Non-standard SpiderMonkey extensions to the JS language may be used (though prefer to use equivalent standard features when possible). Documentation: 1.6, 1.7, 1.8, 1.8.1, 1.8.5, typed arrays.
- To convert a string to a number, use the "
+" prefix operator (not e.g.
parseInt/
parseFloat):
let a = "1"; let b = a + 1; // string concatenation; b == "11" let c = (+a) + 1; // numeric addition; c == 2
- Always check for undefined properties and/or invalid object references, if it's possible they could occur.
- To test if a property or variable is undefined, use explicit type+value equality (
===), instead of value equality (
==) or
typeof():
if (someObject.foo === undefined) // foo is not defined else // foo is defined
- In general you don't want to explicitly check for
nullwhich has a distinct, often misunderstood, meaning from
undefined. A few parts of the engine return a
nullobject reference (for example, the component system when a component is not available for the specified entity, or the GUI when a requested object was not found), you can check for valid object references easily:
if (!cmpFoo) // Oh it's not a valid component, don't use it else // It is a valid component, we can use it
- Use
varonly when declaring variables in the global scope. When inside functions, ifs, loops, etc., use
let. We prefer
letbecause:
- It is restricted to the scope it is declared in, preventing unintentional clobbering of values
- It is stricter than
var, and throws an error if it a variable is declared twice within the same scope
{ var x = 1; var x = "foo" // No Error } { let x = 1; let x = "foo"; // TypeError: redeclaration of let x }
JSON
- Basically follow the JS formatting conventions
- When on the same line, insert spaces after
{, and
:and before
}, but not after
[or before
]; e.g.
{ "foo": "bar" }and
["foo", "bar"]
- Use tabs for indentation, not spaces
XML
- All XML files should start with
<?xml version="1.0" encoding="utf-8"?>and be UTF-8 encoded (preferably without a BOM but that doesn't really matter).
- Empty-element tags should be written without a trailing space: use
<foo/>or
<foo bar="baz"/>, not
<foo />nor
<foo bar="baz" />.
- Indent using whatever the similar existing XML files use. (Usually two spaces; sometimes four spaces, or tabs, etc.)
GLSL
- Use the same basic formatting as described above for C++.
- Use lowerCamelCase for names:
// Bad vec2 local_position; mat4 CalculateInvertMatrix(...); int _pixelcounter; // Good vec2 localPosition; mat4 calculateInvertMatrix(...); int pixelCounter;
- File extensions for shaders:
.fs- fragment shader
.vs- vertex shader
.gs- geometry shader
.cs- compute shader
- All GLSL files should start with right version.
#version 120
- Prefer to use own uniforms instead of built-in (
gl_ProjectionMatrix,
gl_ModelViewProjectionMatrix, etc).
- Strictly follow specifications for the selected version. Some drivers ignore errors, that can fail others.
- Prefer to use swizzle masks:
vec4 a; vec2 b; // Simple assign. a.x = b.x; a.y = b.z; // Assign with swizzle mask. a = b.xz;
- Prefer to use prefixes for global variables to distinguish them:
a_- for attributes,
v_for varyings,
u_- for uniforms.
uniform mat4 u_invertTransform; varying vec2 v_uv; attribute vec3 a_normal; | https://trac.wildfiregames.com/wiki/Coding_Conventions | CC-MAIN-2019-43 | refinedweb | 2,353 | 57.77 |
In this example, we'll return to our LogProcessing application. Here, we'll update our log split routines to divide lines up via a regular expression as opposed to simple string manipulation.
core.py, add an
import restatement to the top of the file. This makes the regular expression engine available to us.
__init__method definition for
LogProcessor, add the following lines of code. These have been split to avoid wrapping.
_re = re.compile( r'^([\d.]+) (\S+) (\S+) \[([\w/:+ ]+)] "(.+?)" ' \ r'(?P<rcode>\d{3}) (\S+) "(\S+)" "(.+)"')
splitmethod with one that takes advantage of the new regular expression:
def split(self, line): """ Split a logfile. ...
No credit card required | https://www.safaribooksonline.com/library/view/python-26-text/9781849512121/ch05s04.html | CC-MAIN-2018-30 | refinedweb | 107 | 61.63 |
Table of Contents:
Standard Pipeline Configurations
Configuring software launches
Example: Grouping versions of the same application, auto-detect
Example: Grouping versions of the same application, manual mode
Example: Restrict by users or groups
Example: Restrict software versions by project
Example: Add your own Software
Configuring published file path resolution
Resolving local file links
A Brief History of Browser Integration
Toolkit Configuration File
Running integrations while offline
Managing updates via manual download
Freezing updates for a single project
Freezing updates for your site
Freezing updates for all but one project
Safely Upgrading a locked off site
Taking over a Pipeline Configuration
Launching the setup wizard from Desktop
Select a configuration type
Default configuration templates
Basing your new project on an existing project
Using a configuration template from git
Browsing for a configuration template
Choosing a project folder name
Selecting a configuration location
What can I do once I have a configuration?
Introduction
This document serves as a guide for administrators of Shotgun integrations. It's one of three: user, admin, and developer. Our User Guide is intended for artists who will be the end users of Shotgun integrations in their daily workflow, and our Developer Guide is technical documentation for those writing Python code to extend the functionality. This document falls between those two: it's intended for those who are implementing Shotgun integrations for a studio, managing software versions, and making storage decisions for published files.
Standard Pipeline Configurations
At the heart of any Toolkit setup is the Pipeline Configuration, a set of YAML files that manage everything from installed apps to software versions, and in some cases, even hold the templates for setting up your production directory structure and file names. The Pipeline Configuration is highly customizable, but we offer two starting points.
The Basic Config
Our out-of-the-box integrations are designed to run without the need to set up or modify any configuration files. When you use our out-of-the-box integrations, there's nothing to administer, but Toolkit uses an implied Pipeline Configuration under the hood, and we call this Pipeline Configuration the Basic Config. The Basic Config makes three Toolkit apps – The Panel, Publisher, and Loader – available in all supported software packages, and looks to your Software Entities in Shotgun to determine which software packages to display in Shotgun Desktop. The Basic Config does not include filesystem location support. When you use out-of-the-box integrations on a project, your copy of the Basic Config is auto-updated whenever you launch Desktop, so you'll always have the latest version of our integrations. You can subscribe to release notes here, and see the Basic Config in Github here.
The Default Config
This is the default starting point for our Advanced project setup. It includes filesystem location support and a wider array of Toolkit apps and engines.
You can see the Default Config in Github here. For a detailed description of the Default Config's structure, see the
config/env/README.md file in your Pipeline Configuration, or view it here in Github.
If you're familiar with the old structure of the Default Config, take a look at the Default Config Update FAQ.
With the v1.1 release of Integrations, we reorganized the structure of the Default Config to help maximize efficiency and readability, and to make it match the Basic Config's structure more closely. You can see what's changed in the v1.1 Release Notes. You can still base projects on the legacy Default Config. Just choose "Legacy Default" when prompted to select a configuration in the Desktop Set Up Project Wizard.
The Publisher
The Publisher is designed to ease the transition between the out-of-the-box workflow and the full pipeline configuration. In the out-of-the-box setup, files are published in place, which avoids the need to define templates or filesystem schema. Once a project has gone through the advanced setup and has a full Pipeline Configuration, the same publish plugins will recognize the introduction of templates to the app settings and begin copying files to their designated publish location prior to publishing. Studios can therefore introduce template-based settings on a per-environment or per-DCC basis as needed for projects with full configurations. The Default Config comes fully configured for template-based workflows and is a good reference to see how templates can be configured for the Publish app. See the tk-multi-publish2.yml file in the Default Config in Github for more info.
For details on writing plugins for the Publisher, see the Publisher section of our Developer Guide.
Configuring software launches
It’s simple to rely on Shotgun’s auto-detection of host applications on your system: just launch Shotgun Desktop, choose a project, and Desktop will display launchers for all supported software packages that it finds in standard application directories. But we also offer robust tools for more fine-grained management of the software in your studio. You can restrict application visibility to specific projects, groups, or even individual users. You can specify Versions, deactivate a given software package across your site, and group software packages together. All of this is managed through Software entities in Shotgun.
When you create a new Shotgun site, it will have a set of default Software entities—one for each supported host application. You can modify these and add your own to manage the software that shows up in Desktop exactly how you want it.
To see your Software entities in Shotgun, open the Admin menu by clicking on the profile icon in the upper right corner of the screen, and choose
Software.
The Software entity has the following fields:
- Software Name: The display name of the Software in Desktop.
- Thumbnail: Uploaded image file for Desktop icon.
- Status: Controls whether or not the Software is available.
- Engine: The name of the integration for the content creation tool.
- Products: For Software packages that include variants (e.g., Houdini vs. Houdini FX), you can specify a comma separated list here. Valid only in auto-detect mode, not manual mode.
- Versions: Specific versions of the software to display. You can specify a comma separated list here. Valid only in auto-detect mode, not manual mode.
- Group: Entities with the same value for the
Groupfield will be grouped under a single icon in Desktop and a single menu in Shotgun. For example, you could create an FX group that includes Houdini and Nuke.
- Group Default: When one member of a group has
Group Defaultchecked, clicking the icon or menu item for the group will launch this software.
- Projects: A way to restrict software to certain projects.
- User Restrictions: A way to restrict software to certain users or groups.
- Linux/Mac/Windows Path: Use these fields to explicitly specify an OS-specific path to software.
- Linux/Mac/Windows Args: Commandline args to append to the command when launching the Software.
We can learn a lot about how these fields work together by demonstrating some ways of using them.
Example: Grouping versions of the same application, auto-detect
Say you have three versions of Maya on your filesystem: Maya 2016, Maya 2017, and Maya 2018. You want to make all of these available to your artists, but you want them to be grouped under a single icon in Desktop.
If these three versions of Maya are installed in the standard location on your filesystem, then this will all happen automatically. When you select a project in Desktop, it will scan the standard applications directory on the local machine, and will find the three versions. Since you already have a Maya software entity in Shotgun, with no specific versions or paths specified, it will display all versions it finds in Desktop.
A few things to note here:
- When Shotgun auto-detects your software, a single Software entity generates the menu items for all versions.
- None of the Path fields have values specified. The Software entity is in auto-detect mode, so the App is assumed to be in the standard location.
These will show up in Desktop as you see here: one icon for Maya, with a drop-down listing all the available versions. If you click on the icon itself, you’ll launch the latest version of Maya.
Example: Grouping versions of the same application, manual mode
It’s perfectly fine to store Maya in a non-standard location in your studio. You’ll just need to create your own Software entities, and specify paths to let Shotgun know where to find your software. Your setup may look like this:
Some notes here:
- Unlike in auto-detect mode, here you have a Software entity for each version of a given software package.
- In order to group them together, use the
Groupand
Group Defaultfields. Software entities that share the same value for
Groupwill be grouped in Desktop in a dropdown under a single icon, which uses the
Groupvalue as its name.
- When you click on that icon itself, you’ll launch the software within the group with
Group Defaultchecked.
- When you specify a value for any of Linux Path, Mac Path, or Windows Path on a Software entity, that entity will shift to Manual mode. Unlike auto-detect mode, where the software would show up in Desktop when a path field is empty, in manual mode, a software package will only show up on a given operating system if a path is specified for it and the file exists at the specified path.
- In this example, none of the three Maya versions would show up in Desktop on Windows because no
Windows Pathhas been specified.
Example: Restrict by users or groups
Now, say with that last example that we’re not ready to make Maya 2018 available to all users just yet. But we do want TDs, Devs, and our QA engineer, Tessa Tester, to be able to access it. We can achieve this with the
User Restrictions field. Here’s an example:
We made a couple changes from the last example:
- The group default is now Maya 2017. We want that to be the production version, so with that box checked, clicking the icon for Maya will now go to this version.
- We’ve added a few values to the
User Restrictionsfield: It can take both users and groups, and we’ve added our Dev and TD groups, as well as the user Tessa Tester. Now, only those people will see Maya 2018 in Desktop.
Example: Restrict software versions by project
Sometimes you want to do more complex version management across projects in your studio. You may have a project in a crunch to deliver, which you want to lock off from new versions of software, while at the same time, its sequel may just be starting up and able to evaluate newer versions. In this case, you may have your Software entities set up like this:
A few important things to note:
- We’ve removed the
Groupand
Group Defaultvalues here, as only one Maya version will ever show up in Desktop for a given environment.
- We’ve set the
Software Namefor all three versions to “Maya”. This way, on every project, users will have an icon with the same name, but it will point to different versions depending on what’s configured here.
- We’ve set Maya 2016’s
Statusfield to
Disabled. We are no longer using this version in our studio, and this field toggles global visibility across all projects.
- We’ve specified values for
Projectsfor Maya 2017 and Maya 2018. This
Projectsfield acts as a restriction. Maya 2017 will only show up in the Chicken Planet project, and Maya 2018 will only show up in Chicken Planet II.
- Note that once you’ve specified a value for
Projectsfor a Software entity, that Software will only show up in the projects you've specified. So, if you have other projects in your studio in addition to the Chicken Planet series, you’ll need to specify software for them explicitly.
Example: Add your own Software
There are several reasons you might add a new software entity in addition to those that Shotgun Desktop has auto-detected on your system:
- You want to make an application for which there is no engine available to your users through Desktop.
- You have in-house software, or third-party software that we don’t have an integration for, for which you’ve written your own engine.
- Your software doesn’t live in a standard location, so you want to point Shotgun to it manually. (This case was described in the “Grouping versions of the same Application, Manual mode” example above.)
In these cases, you can add your own Software entities. You'll need to have a value for the
Software Name field.
If you're using an in-house engine for your software, specify the engine name in the
Engine field. Some studios may want to include apps in Desktop that don’t have Shotgun integrations, as a convenience for artists. Your artists can launch the app straight from Desktop. You can even use all of the settings above to manage versions and usage restrictions. In this case, leave the
Engine field empty, but you'll need to specify a value for at least one of
Mac Path,
Linux Path, and
Windows Path.
Configuring published file path resolution
When you publish a file, the Publisher creates a PublishedFile entity in Shotgun, which includes a File/Link field called
Path. Later on, a different user may try to load this file into their own work session using the Loader. The Loader uses complex logic to resolve a valid local path to the PublishedFile across operating systems.
The way in which the Loader attempts to resolve the publish data into a path depends on whether the the publish is associated with a local file link or a
file:// URL.
Resolving local file links
Local file links are generated automatically at publish time if the path you are publishing matches any local storage defined in the Shotgun Site Preferences. If the publish is a local file link, its local operating system representation will be used. Read more about local file links here.
If a local storage doesn’t define a path for the operating system you are currently using, you can use an environment variable to specify your local storage root. The name of the environment variable should take the form of
SHOTGUN_PATH_<WINDOWS|MAC|LINUX>_<STORAGENAME>. So, if you wanted to define a path on a Mac for a storage root called "Renders", you'd create a
SHOTGUN_PATH_MAC_RENDERS environment variable. Let's go deeper with that example:
- Say your Shotgun site has a storage root called "Renders", with the following paths specified:
- Linux path:
/studio/renders/
- Windows path:
S:\renders\
Mac path:
<blank>
You are on a Mac.
- You want to load a publish with the path
/studio/renders/sq100/sh001/bg/bg.001.exrinto your session.
The Loader can parse the path and deduce that
/studio/renders/ is the storage root part of it, but no storage root is defined for Mac. So, it will look for a
SHOTGUN_PATH_MAC_RENDERS environment variable, and if it finds one, it will replace
/studio/renders in the path with its value.
Note: If you define a
SHOTGUN_PATH_MAC_RENDERS environment variable, and the local storage Renders does have Mac path set, the local storage value will be used and a warning will be logged.
Note: If no storage can be resolved for the current operating system, a
PublishPathNotDefinedError is raised.
Resolving file URLs
The Loader also supports the resolution of
file:// URLs. At publish time, if the path you are publishing does not match any of your site's local storages, the path is saved as a
file:// URL. Contrary to local file links, these paths are not stored in a multi-OS representation, but are just defined for the operating system where they were created.
If you are trying to resolve a
file:// URL on a different operating system from the one where where the URL was created, the Loader will attempt to resolve it into a valid path using a series of approaches:
- First, it will look for the three environment variables
SHOTGUN_PATH_WINDOWS,
SHOTGUN_PATH_MAC, and
SHOTGUN_PATH_LINUX. If these are defined, the method will attempt to translate the path this way. For example, if you are trying to resolve Windows, you could set up
SHOTGUN_PATH_WINDOWS=P:\prodand
SHOTGUN_PATH_LINUX=/prodin order to hint the way the path should be resolved.
- If you want to use more than one set of environment variables, in order to represent multiple storages, this is possible by extending the above variable name syntax with a suffix:
- If you have a storage for renders, you could for example define
SHOTGUN_PATH_LINUX_RENDERS,
SHOTGUN_PATH_MAC_RENDERS, and
SHOTGUN_PATH_WINDOWS_RENDERSin order to provide a translation mechanism for all published that refer to data inside your render storage.
- Then, if you also have a storage for editorial data, you could define
SHOTGUN_PATH_LINUX_EDITORIAL,
SHOTGUN_PATH_MAC_EDITORIAL, and
SHOTGUN_PATH_WINDOWS_EDITORIAL, in order to provide a translation mechanism for your editorial storage roots.
Once you have standardized on these environment variables, you could consider converting them into a Shotgun local storage. Once they are defined in the Shotgun preferences, they will be automatically picked up and no environment variables will be needed.
- In addition to the above, all local storages defined in the Shotgun preferences will be handled the same way.
- If a local storage has been defined, but an operating system is missing, this can be supplied via an environment variable. For example, if there is a local storage named
Rendersthat is defined on Linux and Windows, you can extend to support mac by creating an environment variable named
SHOTGUN_PATH_MAC_RENDERS. The general syntax for this is
SHOTGUN_PATH_<WINDOWS|MAC|LINUX>_<STORAGENAME>.
- If no root matches, the file path will be returned as is.
Here's an example:
Say you've published the file
/projects/some/file.txt on Linux, and a Shotgun publish with the URL was generated. In your studio, the Linux path
/projects equates to
Q:\projects on Windows, and hence you expect the full path to be translated to
Q:\projects\some\file.txt.
All of the following setups would handle this:
- A general environment-based override:
SHOTGUN_PATH_LINUX=/projects
SHOTGUN_PATH_WINDOWS=Q:\projects
SHOTGUN_PATH_MAC=/projects
A Shotgun local storage called “Projects”, set up with:
- Linux Path:
/projects
- Windows Path:
Q:\projects
Mac Path:
/projects
A Shotgun local storage called “Projects”, augmented with an environment variable:
- Linux Path:
/projects
Windows Path:
`
Mac Path:/projects`
SHOTGUN_PATH_WINDOWS_PROJECTS=Q:\projects
Note: If you have a local storage
Renders defined in Shotgun with
Linux path set, and also a
SHOTGUN_PATH_LINUX_RENDERS environment variable defined, the storage will take precedence, the environment variable will be ignored, and a warning will be logged. Generally speaking, local storage definitions always take precedence over environment variables.
Advanced configuration
For information on the underlying method that performs the resolution of PublishedFile paths, take a look at our developer reference docs.
If you are using Advanced Project Setup, you can add support beyond local file links and
file:// URLs by customizing the
resolve_publish core hook. Possible customizations include:
- Publishes with associated uploaded files could be automatically downloaded into an appropriate cache location by the core hook and the path would be be returned.
- Custom URL schemes (such as
perforce://) could be resolved into local paths.
Browser Integration
Browser integration for Shotgun Toolkit refers to access to Toolkit apps and launchers by way of right-click context menus in the Shotgun web application. These menus, an example of which is shown above, contain actions configured for various entity types. In the case where you have multiple pipeline configurations for a project, the actions are organized by pipeline configuration. Browser integration allows you to launch content creation software like Maya or Nuke that is aware of your Shotgun context, right from the browser.
A Brief History of Browser Integration
Over the years, Shotgun Toolkit’s browser integration has taken several forms. As technologies and security requirements have progressed, so has the approach to implementing browser integration.
Java Applet (deprecated)
The first implementation consisted of a Java applet that provided access to the local desktop from the Shotgun web application. As Java applets became recognized as an exploitable security risk, they fell out of favor, necessitating its deprecation.
Browser Plugin (deprecated)
Replacing the deprecated Java applet was a browser plugin making use of NPAPI to access the local desktop from the Shotgun web application. As NPAPI also became known as a security risk, the major web browsers began blocking its use. This necessitated deprecating the browser plugin.
Websockets v1 via Shotgun Desktop (legacy)
Hosting a websocket server within the Shotgun Desktop app was, and still is, the approach to communicating with the local desktop from the Shotgun web application. The first implementation of this websocket server’s RPC API made use of the same underlying technology developed for the Java applet and browser plugin before it. When the server received a request from Shotgun, the tank command from the associated project’s pipeline configuration was used to get the list of commands to show in the action menu.
Websockets v2 via Shotgun Desktop
The second iteration of the websocket server’s RPC API changes the underlying mechanism used to get, cache, and execute Toolkit actions. This implementation addresses a number of performance issues related to the earlier browser integration implementations, improves the visual organization of the action menus, and adds support for out-of-the-box Shotgun Integrations, which work without explicitly configuring Toolkit. This is the current implementation of browser integration.
Configuration
To control what actions are presented to the user for each entity type, you modify YAML environment files in your project’s pipeline configuration. There are a few things to understand and consider when first attempting customization.
Which engine configuration?
The Toolkit engine that manages Toolkit actions within the Shotgun web app is
tk-shotgun, so it’s this engine’s configuration that controls what shows up in the action menus.
In the above example from tk-config-basic, there are two apps configured that will result in a number of engine commands turned into menu actions. Toolkit apps will register commands that are to be included in the action menu, including launcher commands for each software package found on the local system that correspond to the list of Software entities in the Shotgun site. The result is the list of menu actions shown here:
The browser integration code found installations of Houdini, Maya, Nuke, and Photoshop on the user's system, which resulted in menu actions for launching each of those integrations. Note that in a given environment configuration file, the engine for a Software entity needs to be present in order for that Software's launcher to show up for entities of that environment. So, in this example, the
tk-houdini,
tk-maya,
tk-nuke, and
tk-photoshopcc engines must all be present in the file from which this snippet was taken. If you wanted to remove, for example, Maya from the list of launchers on this entity, you could just remove the
tk-maya engine block from the environment config file.
In addition to these launchers, the Publish app’s “Publish…” command is included in the menu.
Which YML file?
You can take one of two paths: making use of the primary environment configuration (
config/env/*.yml), as controlled by the config’s pick_environment.py core hook, or the legacy approach employed by tk-config-default, which uses
config/env/shotgun_<entity_type>.yml files.
In the case where the standard environment files are used, browser integration uses the
pick_environment core hook to determine which environment configuration file to use for a given entity’s action menu. In the simplest case, the environment corresponds to the entity type. For example, if you right-click on a Shot, the resulting action menu will be configured by the
tk-shotgun block in
config/env/shot.yml. You can customize the
pick_environment hook to use more complex logic. Should there be no
tk-shotgun engine configured in the standard environment file, a fallback occurs if a
shotgun_<entity_type>.yml file exists. This allows browser integration to work with legacy configurations that make use of the entity-specific environment files.
Tip: Removing Software from the Browser Launchers with tk-config-default2
Updating the configuration for launching software from the Shotgun browser varies from tk-config-default to tk-config-default2.
With tk-config-default2, updates should be applied to config/env/includes/settings/
tk-shotgun.yml, whereas in tk-config-default, they were done in config/env/
shotgun_task.yml.
As an example, let’s remove Mari from the list of options when launching from an Asset through the browser.
First, navigate to
config/env/asset.yml and notice how the
tk-shotgun engine engine block is pointing to
@settings.tk-shotgun.asset. The
@ symbol signifies that the value for the configuration is coming from an included file. This means you'll need to go to your env/includes/settings/
tk-shotgun.yml to make the update.
While in your
env/includes/settings/tk-shotgun.yml, notice how each block is per Entity. So, for instance, Asset first:
# asset settings.tk-shotgun.asset: apps: tk-multi-launchapp: "@settings.tk-multi-launchapp" tk-multi-launchmari: "@settings.tk-multi-launchapp.mari""
To remove Mari from the list of options on an Asset in the browser, remove the Mari line (
tk-multi-launchmari: "@settings.tk-multi-launchapp.mari"):
# asset settings.tk-shotgun.asset: apps: tk-multi-launchapp: "@settings.tk-multi-launchapp""
Then, follow the same instructions for each entity (like Shot) from which you'd like to remove the ability to launch a particular software in the Shotgun browser. Note that once you save the file, you may need to wait a minute and hard-refresh the browser for it to take effect.
Caching
Browser integration has a robust caching mechanism, which allows menu actions to be shown to the user as quickly as possible. This is necessary because the process of bootstrapping Toolkit and getting a list of engine commands can be time consuming.
When is the cache invalidated?
The websocket server’s RPC API looks at two things to determine whether the cached data is still valid: YAML file modification times, and the contents of the site’s Software entities. If one of the environment YAML files in a given config has been modified since the cache data was written, the requisite data is recached and fresh data returned to the Shotgun web application. Similarly, if any field on any Software entity in Shotgun has been modified since the data was cached, Toolkit is bootstrapped and new data is cached.
Where is the cache file on disk?
The cache file location is dependent upon the operating system.
OS X: ~/Library/Caches/Shotgun/<site_name>/site.basic.desktop/tk-desktop Windows: %APPDATA%\Shotgun\<site_name>\site.basic.desktop\tk-desktop Linux: ~/.shotgun\<site_name>\site.basic.desktop\tk-desktop
Hook Methods
A
browser_integration.py hook is included in
tk-framework-desktopserver, which provides the following hook methods:
get_cache_key: This method determines the cache entry's key for the given configuration URI, project entity, and entity type. The default implementation combines the configuration URI and entity type.
get_site_state_data: This method can be used to include additional queried data from Shotgun into the hash that's used to test the validity of cached data. By default, the state of all Software entities that exist on the site are used, but if additional data should be included in the hash, that can be implemented in this hook method.
process_commands: This method provides a place to customize or alter the commands that are to be returned to the Shotgun web application. The data structure provided to the method is a list of dictionaries, with each dictionary representing a single menu action. Data can be altered, filtered out, or added into the list as is necessary and will be reflected in the menu requesting Toolkit actions immediately.
Logs
Logs for browser integration can be found in Toolkit’s standard log location. The relevant log files are
tk-desktop.log and
tk-shotgun.log. In addition, if you are using Google Chrome, some relevant log output is sometimes available in the developer console within the browser.
Troubleshooting
The complex nature of communicating from a web application with the local desktop means that there are possible points of failure along the way. Below are a few such situations and some suggestions of first steps to take when you encounter them.
“Open or install Shotgun Desktop…” shown in the action menu
This likely means one of three things:
Shotgun Desktop is not currently running on the local machine. It seems obvious, but it is definitely worth double checking.
Chrome or the Python websocket server has refused the connection, resulting in the Shotgun web application being unable to communicate with Shotgun Desktop. This situation is most likely related to the self-signed certificates that allow the connection to proceed when requested. Regenerating these certificates from scratch often resolves the issue, and can be triggered from Shotgun Desktop, as shown below.
- Shotgun Desktop’s websocket server failed to start on launch. This situation is likely limited to situations where a bad release of the websocket server has gone out to the public, which should be exceedingly rare. In this situation, logging will be present in tk-desktop.log explaining the error, which can be sent to Shotgun’s support team.
No actions are shown in the action menu
This is indicative of a configuration problem if actions were expected for this entity type. Some possible issues:
The
tk-shotgunengine is configured in the correct environment YAML file, but there are no apps present in that configuration. In this case, it’s likely that the intention was for no actions to be present for this entity type.
The
tk-shotgunengine is configured in the correct environment YML file, and apps are present, but actions still do not appear in the menu. This is likely due to apps failing to initialize. In this case, there will be information in tk-shotgun.log and tk-desktop.log describing the problems.
The environment that corresponds to this entity type does not contain configuration for
tk-shotgun. The end result here is the same as #1 on this list. In this case, you can look at the pipeline configuration’s
pick_environmenthook to determine which environment is being loaded for this entity type, and the configuration of
tk-shotguncan be verified there.
There is an empty list of menu actions cached on disk. To force the cache to be regenerated, there are a few options:
- Update the modification time of a YAML file in your project's configuration. This will trigger a recache of menu actions when they are next requested by Shotgun. Worth noting is that this will trigger a recache for all users working in the project.
- Update the value of a field in any of the Software entities on your Shotgun site. The behavior here is the same as the above option concerning YAML file modification time, but will invalidate cached data for all users in all projects on your Shotgun site. Software entities are non-project entities, which means they're shared across all projects. If data in any of the Software entities is altered, all projects are impacted.
- The cache file can be deleted on the host suffering from the problem. It is typically safe to remove the cache, and since it is stored locally on each host, it will only cause data to be recached from scratch on that one system. The cache is stored in the following SQLite file within your Shotgun cache location:
<site-name>/site.basic.desktop/tk-desktop/shotgun_engine_commands_v1.sqlite
“Toolkit: Retrieving actions…” is never replaced with menu actions
There are a few possibilities for this one:
The websocket server has not yet finished caching actions. If this is the first time actions are being retrieved after a significant update to the project’s config, the process can take some time to complete. Wait longer, and observe the contents of
tk-desktop.logto see if processing is still occurring.
The websocket server has failed to respond and never will. This situation should be rare, but if it becomes obvious that there is no additional processing occurring as a result of the request for actions, as seen in
tk-desktop.log, contact Shotgun support, providing relevant log data.
The user is working in more than one Shotgun site. With Shotgun Desktop authenticated against a single site, requesting menu actions from a second Shotgun site results in the user being queried about restarting Shotgun Desktop and logging into the new site. If that request is ignored, the second site will never receive a list of menu actions.
Toolkit Configuration File
If your studio is using a proxy server, if you want to pre-populate the initial login screen with some values, or if you want to tweak how the browser-based application launcher integrates with Shotgun Desktop, there is a special configuration file called
toolkit.ini. Shotgun Desktop does not require this file in order to run; it’s only needed if you need to configure its behavior.
Toolkit looks for the file in multiple locations, in the following order:
- An environment variable named
SGTK_PREFERENCES_LOCATIONthat points to a file path.
- Inside the Shotgun Toolkit preferences folder: (Note that this file does not exist by default in these locations; you must create it.)
- Windows:
%APPDATA%\Shotgun\Preferences\toolkit.ini
- macOS:
~/Library/Preferences/Shotgun/toolkit.ini
- Linux:
~/.shotgun/preferences/toolkit.ini
The
SGTK_PREFERENCES_LOCATION environment variable option allows you to store your configuration file somewhere else on your computer or on your network. Please note that
toolkit.ini is the current standard file name. If you were using
config.ini, check below in the “Legacy Locations” section.
You can see a documented example of a configuration file here.
Please note that this example file is called
config.ini but it can be just renamed to
toolkit.ini
Please also note that you can use environment variables as well as hard coded values in this file, so that you could, for example, pick up the default user name to suggest to a user via the USERNAME variable that exists on Windows.
Legacy Locations (DEPRECATED)
Although
toolkit.ini is the current standard file name, we previously used a
config.ini file for same purpose. The contents of
toolkit.ini and
config.ini are the same.
The
config.ini will be searched for using the following deprecated locations:
- An environment variable named
SGTK_DESKTOP_CONFIG_LOCATIONthat points to a file.
- In the following paths:
- Windows:
%APPDATA%\Shotgun\desktop\config\config.ini
- macOS:
~/Library/Caches/Shotgun/desktop/config/config.ini
- Linux:
~/shotgun/desktop/config/config.ini
Proxy Configuration
If your studio is accessing the internet through a proxy, you’ll need to tell Toolkit to use this proxy when it accesses the Internet. Do so by specifying your proxy as the value of the
http_proxy setting:
http_proxy: <proxy_server_address>
Running Shotgun Desktop with a locally hosted site
If your Shotgun site URL does not end with
shotgunstudio.com, it means that you are running a local Shotgun site. In this case, it is possible that your site has not yet been fully prepared for Shotgun integrations and the Shotgun team may need to go in and do some small adjustments before you can get going! In this case, please submit a ticket and we'll help sort you out.
Connecting to the app store with a locally hosted site
If you are using a local Shotgun site with access to the Internet through a proxy, you might want to set an HTTP proxy for accessing the app store, but not the local Shotgun website. To do this, simply add the following line to
toolkit.ini:
app_store_http_proxy: <proxy_server_address>
where
<proxy_server_address> is a string that follows the convention documented in our developer docs.
If you need to override this setting on a per-project basis, you can do so in
config/core/shotgun.yml in your project’s Pipeline Configuration.
Offline Usage Scenarios
In general use, Shotgun Desktop automatically checks for updates for the Desktop app itself, the tk-desktop engine, and the basic configuration at launch time. However, there are cases where you might want to run integrations while offline or on machines that are completely disconnected from the Internet. The following section describes how to address each of these scenarios.
Running integrations while offline
Scenario: I want to run Shotgun integrations, but I am not connected to the Internet. We have a local Shotgun install.
Solution
- If you can temporarily connect to the internet, just download Shotgun Desktop. It comes prepackaged with a set of integrations, and pre-bundled with all the apps and engines needed for the Shotgun integrations for all supported DCCs. When you start it up, it will automatically try to look for upgrades, but if it cannot connect to the Shotgun App Store, it will simply run the most recent version that exists locally.
Good to know
- Some Toolkit operations (such as registering a Publish) require access to your Shotgun site. So, this solution only works for locally hosted sites.
- Updates are downloaded to your local machine.
- If you switch between being connected and disconnected, Desktop, as well as in-app integrations like those inside Maya and Nuke, will download upgrades at startup whenever they are connected.
Managing updates via manual download
Scenario: Our artist workstations are disconnected from the internet, so we cannot use the auto-updates in Desktop. We still want to get updates, but we have to download them via a single online machine and manually transfer them to artists or into a centralized location.
Solution
- Run Shotgun Desktop on a workstation connected to the internet. When it starts up, the latest upgrades are automatically downloaded at launch time.
- Option 1: Shared Desktop bundle
- Copy the bundle cache to a shared location where all machines can access it.
- Set the
SHOTGUN_BUNDLE_CACHE_FALLBACK_PATHSenvironment variable on offline machines to point to this location.
- When Desktop starts up on offline machines, they will pick up the latest upgrades that are available in the bundle cache.
- Option 2: Local deployment
- Distribute the updated bundle cache to the correct bundle cache location on each local machine.
Good to know
- With Option 1, the Toolkit code will be loaded from the location defined in
SHOTGUN_BUNDLE_CACHE_FALLBACK_PATHS. If this location is on a shared storage, make sure that it is performant enough to load many small files.
- For Windows setups, this is often not the case. Here we would instead recommend Option 2.
Locking off updates
While Desktop’s auto-updates are handy for making sure you always have the latest, sometimes you’ll want to freeze a project, or even your whole site, locking it to a specific version and preventing any updates.
Freezing updates for a single project
Scenario: My project is about to wrap and I would like to freeze it so that no Shotgun integration updates are automatically downloaded.
Solution
- Determine the version you want to lock your project to. You can find the integration releases here.
- In Shotgun, create a Pipeline Configuration entity for the project you want to lock down, with the following fields populated (In this example, we are locking down the config to use v1.0.36 of the integrations):
- Name:
Primary
- Project: The project you want to lock down
- Plugin ids:
basic.*
Descriptor:
sgtk:descriptor:app_store?name=tk-config-basic&version=v1.0.36
Anyone starting Shotgun Desktop on the project will now always use v1.0.36. Any new users starting to work on the project will also get v1.0.36.
Good to know
- Updates are downloaded to your local machine.
- The next time a user launches Desktop while connected to the Internet,
v1.0.36of the basic config, and all of its related code, will be downloaded to their machine.
basic.*means that all plugins in the basic configuration will pick up this override. If, for example, you wanted to freeze the Nuke and Maya integrations only, you could specify
basic.maya, basic.nuke.
- To test, you can create a duplicate of this Pipeline Configuration entity, and add your username to the
User Restrictionsfield. This will restrict the entity such that it's only available to you and won't impact other users. You can then launch Maya or some other software from this duplicate configuration and confirm that it’s running the expected integrations versions.
Known issues
- The Flame integration is namespaced
basic.flame, and so is implied to be part of
basic.*. However, the Flame integration isn't actually included in the basic config. So, if you are using Flame for a project and implement this override, the Flame integration will stop working.
- The solution would be to create an additional Pipeline Configuration override specifically for flame:
- Name:
Primary
- Project: The project you want to lock down (or None for all projects)
- Plugin ids:
basic.flame
- Descriptor:
sgtk:descriptor:app_store?name=tk-config-flameplugin
Freezing updates for your site
Scenario: I don’t want any updates. I want full control over what is being downloaded and used in all projects in my studio.
Solution
- Follow the steps in the above example, but leave the
Projectfield blank. With no override in the
Projectfield, this Pipeline Configuration entity will apply to all projects, including the “site” project, i.e., the site configuration that is used by Desktop outside of any project.
Good to know
- This is the workflow to use if you want to “lock down the site config”. This would lock down everything, and you can then proceed with the advanced project setup via the Desktop menu.
- If you lock down your entire site to use, for example,
v1.2.3, you can still lock down an individual project to use another config.
Known issues
- Flame would be affected by this. See the ‘Known Issues’ section of the above scenario for a solution.
Freezing updates for all but one project
Scenario: I’d like to lock down all projects in our site, except for our test project, which we still want to allow to auto-update.
Solution
- Freeze updates for your site as described in the above section.
- Configure the exception project’s Pipeline Configuration entity to have the following field values:
- Name:
Primary
- Project: The project you want not to lock down
- Plugin ids:
basic.*
- Descriptor:
sgtk:descriptor:app_store?name=tk-config-basic
Good to know
- Note that you’ve omitted the version number from the Descriptor field for the project. This will mean that the project is tracking the latest release of the basic config.
Safely Upgrading a locked off site
- Scenario: We’re locked down to v1.0.0, and we’d like to upgrade to v2.0.0, but first I want to test out the new version before deploying it to the studio.*
Solution
- Duplicate the Pipeline Configuration entity in Shotgun by right-clicking on it and selecting "Duplicate Selected".
- Name the cloned config “update test”, and assign yourself to the User Restrictions field.
- You will now begin to use this Pipeline Configuration.
- Change the descriptor to point to the version you wish to test.
- You can invite any users you want to partake in testing by adding them to the User Restrictions field.
- Once you are happy with testing, simply update the main Pipeline Configuration to use that version.
- Once users restart Desktop or DCCs, the update will be picked up.
Taking over a Pipeline Configuration
Without setting up any configurations, you get a basic set of Shotgun integrations out-of-the-box, and this document covers the kinds of administration you can do with these out-of-the-box integrations. This basic setup is built on top of Shotgun's Toolkit platform, which supports much richer customization. Within Desktop, the Toolkit Project Setup Wizard will lead you through the process of creating a full, customizable Pipeline Configuration for your project.
Each section below explains in detail each of the steps of the Wizard with examples and suggestions of sensible default values in case you are not sure how to set things up.
Launching the setup wizard from Desktop
Once you have navigated to a project there will be an "Advanced Project Setup..." menu item in the user menu in the bottom right hand of Desktop. Click on this menu item to launch the Toolkit Setup Wizard.
Select a configuration type
When you start configuring a new project, the first thing to decide is which configuration template to use. A configuration template is essentially the complete project configuration with all settings, file system templates, apps and logic needed to run the project.
- If this is your very first project, head over to the Shotgun defaults to get you started.
- If you already have configured projects and configurations for previous projects, you can easily reuse these by basing your new project on an existing project
- For advanced workflows, you can use external configurations or configs stored in git repositories.
Default configuration templates
This is the place to go if you want to start from scratch. The default configuration contain all the latest apps and engines set up with a default file structure and file naming convention.
Once you have installed the default configuration, you can manually tweak the configuration files and customize it to fit the specific needs of your pipeline. Once you have got a project up and running, you can base your next project on this configuration.
The Default Configuration
This is the default Toolkit VFX configuration and usually a great starting point when you start setting things up. It comes with 3dsmax, Flame, Houdini, Nuke, Mari, Maya, Motionbuilder, and Photoshop set up and contains a simple, straight forward folder setup on disk.
The configuration contains a number of different pieces:
- A file system setup
- A set of templates to identify key locations on disk
- A set of preconfigured engines and apps which are chained together into a workflow.
File System Overview
The standard config handles Assets and Shots in Shotgun. It breaks things down per Pipeline Step. A pipeline step is similar to a department. Each pipeline step contains work and publish areas for the various supported applications. The Shot structure looks like this:
Applications and workflows
The config contains the following components:
- Maya, Mari, Nuke, 3dsmax, Flame, Houdini, Photoshop, and Motionbuilder support
- Shotgun Application Launchers
- Publishing, Snapshotting, and Version Control
- A Nuke custom Write Node
- Shotgun integration
- A number of other tools and utilities
In addition to the apps above, you can easily install additional apps and engines once the config has been installed.
Basing your new project on an existing project
This is a quick and convenient way to get up and running with a new project with all the defaults and settings that you had in a previous project. Toolkit will simply copy across the configuration from your old setup to the new project. This is a simple and pragmatic way to evolve your configuration - each new project is based on an older project.
For more ways and documentation on how to evolve and maintain your pipeline configuration, see here:
Using a configuration template from git
Use this option if you want to keep your project's configuration
connected to source control. Specify a url to a remote git or github repository and the setup process will clone it for you.
Note that this is not just github, but works with any git repository. Just make sure that the path to the repository ends with
.git, and Toolkit will try to process it as a git setup. Because your project configuration is a git repository, you can
commit and push any changes you make to your master repository
and beyond that to other projects. Using a github based configuration makes it easy to keep multiple
Toolkit projects in sync. You can read more about it here:
Please note that if you are running on Windows, you need to have git installed on your machine and accessible in your
PATH. On Linux and Mac OS X, it is usually installed by default.
Browsing for a configuration template
Use this option if you have a configuration on disk, either as a folder or zipped up as a zip file. This can be useful if someone has emailed a configuration to you or if you keep a master config on disk which you are basing all your projects on. This is usually an expert option and we recommend either using a config from another project or one of our app store default configs.
Setting up a storage
Each Toolkit project writes all its files and data to one or more shared storage locations on disk. For example, a configuration may require one storage where it keeps textures, one where it keeps renders and one where it stores scene files. Normally, these storages are controlled from within the Shotgun Site Preferences, under the File Management tab.
The Toolkit Setup wizard will ask you to map each storage root required by the configuration to a local storage in Shotgun.
The required root is listed on the left with its description (as defined in the configuration's
roots.yml file). On the right, a list of existing Shotgun local storages is listed. You must select a storage for each required root and enter a path for the current OS if one does not already exist in Shotgun.
You can also add paths for other operating systems that have not been defined. Existing paths are locked to ensure you don't accidentally affect other projects that may be relying on that storage path. The mapping page in the wizard will ensure that you've mapped each required root and that each mapping is valid.
You can create a new local storage in the wizard as well by selecting the
+New item at the end of the storage selection list. You will be prompted for a local storage name and path for the current OS.
Once the project is being set up, Toolkit will create a folder for each new project in each of the storage locations. For example, if your primary storage location is
/mnt/projects, a project called The Edwardian Cry would end up in
/mnt/projects/the_edwardian_cry. And if the config is using more than just the primary storage, each of the storages would end up with an
the_edwardian_cry folder.
Your primary storage location is typically something like
/mnt/projects or
\\studio\projects and is typically a location where you are already storing project data, grouped by projects. It is almost always on a shared network storage.
Choosing a project folder name
Now it is time to choose a disk name for your project. This folder will be created in all the different storages which are needed by the configuration. You can see a quick preview in the UI - for most configurations this will only preview the primary storage, but if you are using a multi root config, additional storages will show up too. Toolkit will suggest a default project name based on the name in Shotgun. Feel free to adjust it in order to create what is right for your setup.
Selecting a configuration location
Lastly, please decide where to put your configuration files on disk. Toolkit will suggest a location based on previous projects, so that they all end up in the same place on disk.
The configuration normally resides on a shared storage or disk, so that it can be accessed by all users in the studio who needs it. If you are planning on using more than one operating system for this project, make sure to enter all the necessary paths. All paths should represent the same location on disk. Often, the path can be the same on Mac OS X and Linux but will be different on Windows.
If this is your first project, you typically want to
identify a shared area on disk where you store all your future pipeline configurations. This is typically a location where you store software or software settings shared across the studio. This could be something like
/mnt/software/shotgun. It may vary depending on your studio network and file naming conventions.
When you set up your first configuration, set it up with paths for all the platforms you use in your studio. This will make it easier later on to create an environment which is accessible from all machines. As a hypothetical example, if your project name is Golden Circle you may type in the following three paths:
linux: /mnt/software/shotgun/golden_circle macosx: /servers/production/software/shotgun/golden_circle windows: \\prod\software\shotgun\golden_circle
What can I do once I have a configuration?
Once you are up and running with your first configuration, please navigate to our 'next steps' documentation to learn more about how to configure and adjust Toolkit to better suite your studio needs:
You can also learn more in our Advanced Project Setup documentation.
Advanced functionality
Silent installs
If you are on a Windows network, you can use the argument "/S" to force the .exe Shotgun Desktop installer to do a silent install. Then you can push a copy of the shortcut to the executable to the startup folder.
10 Comments
"It’s simple to rely on Shotgun’s auto-detection of host applications on your system: just launch Shotgun Desktop and choose a project, and Desktop will display launchers for all software packages that it finds in standard application directories, for which we support integration"
Is this part ready? I just created a new Project, ran the Integration Beta and got the Set Up Toolkit Project screen - is that expected?
Is it possible to expose a hook for business logic of turning a software entity into a command when in "manual mode"?
Hey Rob, I think we worked through your question in a support ticket. If you have taken over the site configuration, then you will still use that by default (so everything is backward compatible). I believe that was the reason you saw this. You'd need to either update your manual site configuration with the new releases to match behavior, or get rid of the site config to revert back to the latest out of the box behavior.
Hi Travis, you can create a Software entity and point it at a script if you wanted to, but there isn't a direct way to use that to run commands when you have taken over a config. To get commands into the Shotgun menu in this case the process is the same as it has been, you need to add the app that will register the command to the tk-shotgun engine instance for the appropriate environment in your configuration.
Thanks Rob. I'm looking for more control in how a software entity resolves what to launch in Shotgun Desktop, without writing my own launcher and everything. This seems precisely the kind of task hooks excel at.
Totally loving this new feature. Just started trying this yesterday!
Hi, is there any way to fully disable the integrations in tk-desktop?
Hi
I used to preconfigure some environment variables to help maya by using the hook
before_app_launch.py
How would I best do this with the new integrations default.
HI,
I can't find where the YML files are stored in a zero config project. I think this could be explained more clearly.
Regards
@Juanmaria Garcia
The pipeline configurations for zero config project are stored in the user profile directory
On Windows, the path will be something like
C:\Users\USERNAME\AppData\Roaming\Shotgun\SERVERNAME\
On Linux, under
~/.shotgun/ | https://support.shotgunsoftware.com/hc/en-us/articles/115000067493-Integrations-Admin-Guide | CC-MAIN-2019-26 | refinedweb | 9,309 | 52.49 |
Week 14
-
NETWORKING AND COMMUNICATIONS
Group Assignment
The partners on this group assignment are:
- Silvia Lugo
- Diego Santa Cruz
- Ivan Callupe
- Carlos Nina (Me)
The goal at the group assignment was connect two boards, the first one with a button (Diego's board, on the rigth) and other with a LED (my board, on the left). the communication was made via serial communication.
The LED board, has an Attinny44, the button board has an Attinny45. Due to only the Attiny 45 has the built-in hardware for serial communications, we had to establish it by software.
The code for the button boards is this. We use a library "SoftwareSerial.h" to create the communication.
Button board code
#include < SoftwareSerial.h> //remove the space between < and SoftwareSerial.h int rx = 1; //Declares the RX pin int tx = 2; //Declares the TX pin SoftwareSerial mySerial(rx, tx); //Setting up the RX/TX pins as a SoftwareSerial char button = 4;//Declares the character that represents the virtual button current state void setup(){ mySerial.begin(9600); //Start the serial communication and select the its speed that deppends of the frequency that it will be program the attiny pinMode(button, INPUT); //Configures the BUTTON pin as an input } void loop() { if (digitalRead(button) == HIGH) { // Condition For LED on mySerial.println("1"); //Prints in the screen the actual state } else if (digitalRead(button) == LOW) { // Condition For LED off mySerial.println("0"); //Prints in the screen the actual state } }
(*If you use de code, please remove the blank space before _SoftwareSerial.h. I include this to show the name properly).
LED board code
#include < SoftwareSerial.h> //remove the space between < and SoftwareSerial.h int rx = 0; //Declares the RX pin int tx = 1; //Declares the TX pin SoftwareSerial mySerial(rx, tx); //Setting up the RX/TX pins as a SoftwareSerial char buttonState = '1';//Declares the character that represents the virtual button current state char lastButtonState = '0'; //Declares the character that represents the virtual button last state int ledPin = 8; //Declares the pin where the indicator LED is already attached void setup(){ mySerial.begin(9600); //Start the serial communication and select the its speed that //deppends of the frequency that it will be program the attiny pinMode(ledPin, OUTPUT); //Configures the LED pin as an output } void loop() { buttonState = mySerial.read(); //Reads the message a "1" or a "0" from the command line if (buttonState != lastButtonState) { //Checks if there exist a change in the virtual button state if (buttonState == '1') { // Condition For Motor ON mySerial.println("ON"); //Prints in the screen the actual state digitalWrite(ledPin, HIGH); //Turns ON the indicator LED } else if (buttonState == '0'){ // Condition For Motor OFF mySerial.println("OFF"); //Prints in the screen the actual state digitalWrite(ledPin, LOW); //Turns OFF the indicator LED } delay(50); } lastButtonState = buttonState; //Sets the current state as a last state }
(*If you use de code, please remove the blank space before _SoftwareSerial.h. I include this to show the name properly).
To load the program to the boards, we used Arduino IDE, to do that, we had to install the new boards to the Arduino IDE, this process is described on my assignemnet on week 7 (electronics design).
If you are familiar with the procedure, you can use this url, to get the new board to arduino IDE.
Finally. After load the progrmas and connect the boards. We could turn on the LED of one board via a serial communication of a button from another board.
INDIVIDUAL ASSIGNMENT
This week I started control a Fabduino via Bluetooth to turn on a LED. This is important, because I will going to use this setting on my final project, but with servos instead of LEDs.
The connection is this. I used a FTDI adapter to feed mi Fabduino. and a modelue HC-05 to connect via serial communication the Fabduino and my Android phone.
Due to this week is about communication, the process of the design of the mobile app (with MIT APP Inventor), will be described on the week 16 (Interface and Application Programming).
But, I have to say, for the first test with serial communication via bluetooth, I used an App called "BT Terminal Free", thanks a recommendation of my partner Abdon Troche, who helps me a lot with this assignment.
(Taken from the App website).
The code I used, to turn on the led, via a serial communication between the mobile app and the Fabduino is this:
#include
#define RxD 10 #define TxD 11 #define DEBUG_ENABLED 1 SoftwareSerial blueToothSerial(RxD,TxD); int led_matrix = 8;(){ pinMode(RxD, INPUT); pinMode(TxD, OUTPUT); setupBlueToothConnection(); pinMode(led_matrix,OUTPUT); digitalWrite(led_matrix,LOW); } void loop(){ char recived_char; char last_recived_char; while(1){ //check if there's any data sent from the remote bluetooth shield if(blueToothSerial.available()){ recived_char = blueToothSerial.read(); if( recived_char == '1' ){ digitalWrite(led_matrix,HIGH); last_recived_char = recived_char; } else if( recived_char == '0' ){ digitalWrite(led_matrix,LOW); last_recived_char = recived_char; } } } } "
Finally, I could turn on the LED with an android app via serial communication.
Files of this assignment:
<<< Go to Week 13 assignment | >>> Go to Week 15 assignments | http://fabacademy.org/2019/labs/tecsup/students/carlos-ochoa/week-14.html | CC-MAIN-2020-10 | refinedweb | 841 | 50.97 |
The post below was originally published (in Polish) on forum 4programmers.net in the "Hello world bez bibliotek i asm" (link) thread.
--post start--
A piece of code from me - please note that I wanted to demonstrate a method and not create an always-working-code :)
The code was written to work on linux (32-bits x86) but you can use the same method on 64-bits or on Windows both 32- and 64-bits.
The code does not use any libraries (it doesn't even look for any in the memory) and there is no inline assembly/etc (well, no direct or explicit inline assembly/etc ;>).
I've placed the explanation of the method below the code.
volatile unsigned int something_wicked_this_way_comes(
int a, int b, int c, int d) {
a ^= 0xC3CA8900; b ^= 0xC3CB8900; c ^= 0xC3CE8900; d ^= 0x80CDF089;
return a+b+c+d;
}
void* find_the_witch(unsigned short witch) {
unsigned char *p = (unsigned char*)something_wicked_this_way_comes;
int i;
for(i = 0; i < 50; i++, p++) {
if(*(unsigned short*)p == witch) return (void*)p;
}
return (void*)0;
}
typedef void (*gadget)() __attribute__((fastcall));
int main(void) {);
if(!eax_from_esi_call_int) return 1;
if(!set_esi) return 3;
if(!set_ebx) return 4;
if(!set_edx) return 5;
set_edx(12), set_ebx(1), set_esi(4);
eax_from_esi_call_int("Hello World\n");
return 0;
}
This code uses a method really similar to the JIT-language exploitation techniques when the memory is protected via XD/NX/XN/DEP/etc - i.e. I tried to implicitly place in executable memory a couple of "gadgets" (think: ret2libc or return oriented programming -) and then use them to make a syscall call into the kernel (so, there are no libraries needed at all, but of course there is interaction with the environment, i.e. the Linux kernel).
These gadgets are places in the something_wicked_with_way_comes function as the constants used in XORs.
a ^= 0xC3CA8900; b ^= 0xC3CB8900; c ^= 0xC3CE8900; d ^= 0x80CDF089;
The above C code on assembly / machine code level might look like this (compiled using gcc; disassembled using objdump afair):
[...]
6: 35 00 89 ca c3 xor eax,0xc3ca8900
b: 89 45 08 mov DWORD PTR [ebp+0x8],eax
e: 8b 45 0c mov eax,DWORD PTR [ebp+0xc]
11: 35 00 89 cb c3 xor eax,0xc3cb8900
16: 89 45 0c mov DWORD PTR [ebp+0xc],eax
19: 8b 45 10 mov eax,DWORD PTR [ebp+0x10]
1c: 35 00 89 ce c3 xor eax,0xc3ce8900
21: 89 45 10 mov DWORD PTR [ebp+0x10],eax
24: 8b 45 14 mov eax,DWORD PTR [ebp+0x14]
27: 35 89 f0 cd 80 xor eax,0x80cdf089
[...]
So, if we would disassemble the code with a slight misalignment (one or two bytes) we would get a code that differs a little:
6: 35 00 89 ca c3 → mov edx, ecx ; ret
11: 35 00 89 cb c3 → mov ebx, ecx ; ret
1c: 35 00 89 ce c3 → mov esi, ecx ; ret
27: 35 89 f0 cd 80 → mov eax, esi ; int 0x80
Thanks to the above I'm certain that in this case the needed gadgets do reside in memory (of course if the compiler would work in a slightly different way the opcodes might never show up; but in this specific compilation-case they did).
Going further into the code, I use the find_the_witch function to actually find these gadgets in memory in the something_wicked_this_way_comes function (the argument for the scanning function are the two first bytes of a gadget I'm looking for represented as uint16_t (little endian)).);
One more important thing - here's the gadget type:
typedef void (*gadget)() attribute((fastcall));
It has two essential features:
1. The unspecified amount of arguments denoted by the C's () (please note that in C++ () means (void), but in C it's closer to (...)).
2. The fastcall convention thanks to which the function arguments will be places in the general purpose registers and not on the stack (in case of the first few arguments of course) - in this specific case the first argument is always placed in the ecx register (the gadgets are designed to use this fact).
After that I "construct" a simple assembly-like hello world using the gadgets I have:
set_edx(12), set_ebx(1), set_esi(4);
eax_from_esi_call_int("Hello World\n");
This will be executed as following:
(main) mov ecx, 12
mov eax, set_edx
call eax
(gadget) mov edx, ecx
ret
(main) ...
... ...
(gadget) ...
int 0x80
Or, skipping the parts from the main() function:
[gadget 1] mov edx, 12 (length of the string)
[gadget 2] mov ebx, 1 (stdout)
[gadget 3] mov esi, 4 (sys_write)
[handled by fastcall] mov ecx, address "Hello World\n"
[gadget 4] mov eax, esi
[gadget 4] int 0x80
Of course I'm missing a C3 (ret) after the int 0x80 (no place left in a 4-byte gadget) so the program will crash AFTER writing out "hello world". However it would be fairly simple to fix this :)
Test:
$ gcc -m32 test.c -O0
$ ./a.out
Hello World
Segmentation fault (core dumped)
$
--post stop--
An elegant fix to the Segmentation fault problem was posted by Azarien in the same thread - he created another function called graceful_exit where, using the existing gadgets, he invoked the exit syscall. And then he added the call to this function in the something_wicked_this_way_comes just after d ^= 0x80CDF089; - thanks to this after the gadget 89 F0 CD 80 is executed the CPU will execute whatever is next after the CD 80 (int 0x80) and that would be the call to the graceful_exit function.
The said patch looks like this (Azarien's changes are yellow; there was another change in the patch - the gadget type declaration was moved to the top of the file but I'll skip this in the listing):
void graceful_exit()
{
set_ebx(0);
set_esi(1);
eax_from_esi_call_int(0);
}
volatile unsigned int something_wicked_this_way_comes(
int a, int b, int c, int d) {
a ^= 0xC3CA8900; b ^= 0xC3CB8900; c ^= 0xC3CE8900; d ^= 0x80CDF089;
graceful_exit();
return a+b+c+d;
}
As said, very elegant solution :)
It's worth also taking a look at MSM's post and the discussion underneath it (in Polish) - MSM's method uses the commonly known (in RE/shellcoding) technique of looking up kernel32 address in the loaded DLLs list in PEB, finding the GetProcAddress in the import tables and acquiring the addresses of all API functions required to print out "Hello World" (that being said, it kinda relies on some libraries; still, fun to look at).
And that's that. Cheers ;>
char _start[] __attribute__ ((section(".text#"))) = {
0xE8, 0x0D, 0x00, 0x00, 0x00, 0x48, 0x65, 0x6C, 0x6C, 0x6F,
0x20, 0x57, 0x6F, 0x72, 0x6C, 0x64, 0x21, 0x0A, 0x5E, 0x31,
0xC0, 0x89, 0xC2, 0xFF, 0xC0, 0x89, 0xC7, 0xB2, 0x0D, 0x0F,
0x05, 0x48, 0x31, 0xFF, 0x6A, 0x3C, 0x58, 0x0F, 0x05};
Hello World!
The variant without -nostdlib parameter is similar, but it also has a main():
[...]
int
main(void)
{
((void (*)(void))a)();
return 0;
}
Sure, that's the most obvious solution, but it doesn't really respect the rule of "no direct inline assembly/etc" - in this case it's inline machine code, Turbo Pascal style :)
We've talked about this kind of solution on the Polish side of this post.
That being said - sure, it would work ;)
--Thanks
You forgot -nostdlib
So you are allowing libraries. And that makes the whole thing a lot easier.
extern long int syscall (long int __sysno, ...) __attribute__ ((__nothrow__ , __leaf__));
int main(void)
{
syscall( 1, 1, "Hello world!\n", 13);
}
I think I wa trying to make sure the function won't be optimized away.
@John
*cough* well the title of this post includes the phrase "without libraries" so I thought -nostdlib goes without saying :) *cough*
Add a comment: | http://gynvael.coldwind.pl/?id=477 | CC-MAIN-2018-13 | refinedweb | 1,267 | 56.32 |
Hi everyone! It's been a while since I wrote a tutorial. In this one, we will be working on Unreal Engine 4. You can learn more about Unreal and how to install Unreal Engine 4 here. I really like the graphics quality of Unreal. Its graphics quality is unmatched, and it's free to use! So, let's get started!
You will also need to install VisualStudio (Windows) or XCode (Mac) along with it. These programs likely come with Unreal when you install it, but if not, make sure you install it.
Once you have Unreal Engine and VisualStudio or XCode installed, you can launch the latest version of Unreal and start a C++ project. For now, we don't need to worry about the difference between Blueprint & C++ project. Select a basic template.
It will take some time in the process. After completion, it will look like this. If you have selected C++ project, you can see a folder named C++ classes in the content browser in bottom left corner. On the left side, you have the mode window, from which you can select different default 3D models and other components. Play with it. Your VisualStudio or XCode must have also been opened at the same time.
In Unreal, all the objects are considered as Actors, inputs as Pawn and so on. Since we are making our own 3D model, we need to have an actor class.
Right click on the content window, and select a new C++ class. Extend it to be an actor class.
Navigate to Visual Studio. (If you are making the actor in Blueprint class, it will open Visual Studio automatically.) You can browse your classes in Solution Explorer window.
We will be using UProceduralMeshComponent for building our mesh. A Mesh is a structure for making an actor or object. Navigate to Build.cs file and add the following line.
PublicDependencyModuleNames.AddRange(new string[] { "Core", "CoreUObject", "Engine", "InputCore", "ProceduralMeshComponent", "ShaderCore", "RenderCore" , "RHI"});
Note: If in between an error is shown anytime, you can always close VisualStudio and go to your project folder in File Explorer. Right click on .uproject and select generate project files. We need to do this if the compiler gives an error. (But this does not help if the error is because of a ';' ).
Now, open your VisualStudio again. Navigate your header file( .h) of your actor class. Add the following code at the bottom but under the braces.
private: UPROPERTY(VisibleAnywhere, Category = Materials) class UProceduralMeshComponent* mesh;
It is possible that an error might be showing - you know the solution.
Now in the .cpp class of your actor. Include the UProcedural header class.
#include "ProceduralMeshComponent.h"
Now, a mesh is generated using either triangles or faces. You can take any object. You will notice it is made up of many triangles or faces in other cases.
To create a mesh, we need vertices, triangles(or faces), normals, textures, colors, and tangents. You can read all about them in the docs.
Now, you can imagine the number of triangles in an object depends on the number of vertices. If 3 there are vertices, you will have 1 triangle. If there are 4 vertices, then you will have 2 triangles. We will need to keep these in mind as well.
We are making a square this time. You can add this code to your .cpp file in your A<className>() function.
mesh = CreateDefaultSubobject<UProceduralMeshComponent>(TEXT("mesh")); RootComponent = mesh; TArray<FVector> vertices; vertices.Add(FVector(0, 0, 0)); // 0th vertice vertices.Add(FVector(0, 100, 0)); // 1th vertice vertices.Add(FVector(0, 0, 100)); // 2nd vertice vertices.Add(FVector(0, 100, 100)); // 3rd vertice TArray<int32> Traingle; //I have created 4 triangle because we need to make sure it is rendered from behind as well. Otherwise, the object will be seen from front only and not from behind. Traingle.Add(0); Traingle.Add(1); // for front face - clockwise direction Triangle.Add(2); Triangle.Add(1); Triangle.Add(2); // for front face - clockwise direction Triangle.Add(3); Triangle.Add(2); Triangle.Add(1); // for back face - anti-clockwise direction Triangle.Add(0); Triangle.Add(3); Triangle.Add(2); // for back face - anti-clockwise direction Triangle.Add(1); TArray<FVector> normals; normals.Add(FVector(1, 0, 0)); normals.Add(FVector(1, 0, 0)); normals.Add(FVector(1, 0, 0)); // you need to calculate the direction of normals, using 3d vectors. normals.Add(FVector(1, 0, 0)); TArray<FVector2D> UV0; UV0.Add(FVector2D(1, 1)); UV0.Add(FVector2D(0, 1)); UV0.Add(FVector2D(1, 0)); UV0.Add(FVector2D(0, 0)); TArray<FLinearColor> vertexColors; vertexColors.Add(FLinearColor(0.75, 0.00, 0.00, 1.0)); vertexColors.Add(FLinearColor(0.75, 0.00, 0.00, 1.0)); vertexColors.Add(FLinearColor(0.75, 0.00, 0.75, 1.0)); // the 4th argument determines alpha value (0,1) vertexColors.Add(FLinearColor(0.75, 0.00, 0.75, 1.0)); TArray<FProcMeshTangent> tangents; tangents.Add(FProcMeshTangent(0, 1, 0)); tangents.Add(FProcMeshTangent(0, 1, 0)); tangents.Add(FProcMeshTangent(0, 1, 0)); tangents.Add(FProcMeshTangent(0, 1, 0)); mesh->CreateMeshSection_LinearColor(1, vertices, Square, normals, UV0, vertexColors, tangents, false);
That concludes our mesh creation! You can save it and compile it. Now, if you drag your actor in the scene, it will compile a square!
You can try commenting the other 2 triangles and compile it. A square will still come up, but try to see it from behind. You won't be able to.
Now, if you add any material (or image) to it, if it gets rendered upside down or another way or the light is low. You will need to adjust the vertices positions in Triangle TArray and Normals.
You can try making more complex models or using obj files. You need to make a parser for obj models to get the vertices, normals, etc. You need to know one thing, that Texures = UV. Using that parser, you can populate the values and get a model like this.
This is some Pokemon. I still need to add color and some other details though.
That's it for this tutorial. Enjoy! | https://maker.pro/3dr/tutorial/how-to-create-3d-models-in-unreal-engine-4-with-uproceduralmeshcomponent | CC-MAIN-2019-18 | refinedweb | 1,019 | 61.63 |
Even amidst all the ruckus caused by my home renovations, my temporary loss of two other machines and a whole new set of personal logistics (besides work, of course), I can’t stop pondering solutions for information management – and that includes RSS.
Now, if you happen to recall my latest piece regarding my RSS setup, you’ll remember that I am still tackling the issue of how to go about doing Bayesian classification, and that I was using newspipe as an archiver.
A lot of people missed the point there and pointed out that Mail.app has built-in RSS support – which is correct, except that Mail.app does not store inline images or enclosures along with the feed items, something that I find to be a rather myopic omission (just filed as #5777759) and absolutely essential for archiving.
Now, Mail.app stores its RSS items into
.emlx format (which Jamie Zawinsky documented), just like “ordinary” messages. And the format is pretty straightforward:
byte count for message as first line MIME dump of message XML plist with flags
And the MIME dump invariably contains a part with well-formed HTML (at least the samples I looked at), with neat direct references to inline images and stuff.
So I had one of those shower epiphanies: Why not parse the
.emlx file, download the referenced images (since the URLs are low-hanging fruit), and add the images back into the
.emlx file as inline attachments?
That way I could just use Mail.app (without newspipe in the middle) and run a simple archival script every now and then.
Lo and behold, after 20 minutes of Python coding (and thanks to the ineffable miracle that is Beautiful Soup), I have a little proof of concept that does just that – images in RSS items are downloaded, injected into a new MIME message, and the whole thing is replaced into the
.emlx file, updating the byte count appropriately.
And Mail.app seems to like it, too.
Update: Here’s the source code, after a few cleanups and some re-structuring towards making it a Python class that I can re-use later:
#!/usr/bin/env python # encoding: utf-8 """ emlx.py Created by Rui Carmo on 2008-03-03. Released under the MIT license """ from BeautifulSoup import BeautifulSoup import os, re, codecs, email, urllib2 from email.MIMEImage import MIMEImage from email.MIMEMultipart import MIMEMultipart # Message headers used by Mail.app that we want to preserve preserved_headers = [ "X-Uniform-Type-Identifier", "X-Mail-Rss-Source-Url", "X-Mail-Rss-Article-Identifier", "X-Mail-Rss-Article-Url", "Received", "Subject", "X-Mail-Rss-Author" "Message-Id", "X-Mail-Rss-Source-Name", "Reply-To", "Mime-Version", "Date" ] class emlx: """emlx parser""" def __init__(self, filename): """initialization""" self.filename = filename self.opener = urllib2.build_opener() # Mimic Mail.app User-agent self.opener.addheaders = [('User-agent', 'Apple-PubSub/59')] self.load() def load(self): # open the .emlx file as binary (and not using codecs) to ensure byte offsets work self.fh = open(self.filename,'rb') # get the payload length self.bytes = int(self.fh.readline().strip()) # get the MIME payload self.message = email.message_from_string(self.fh.read(self.bytes)) # the remaining bytes are the .plist self.plist = ''.join(self.fh.readlines()) self.fh.close() def save(self, filename): fh = open(filename,'wb') # get the payload length bytes = len(str(self.message)) fh.write("%d\n%s%s" % (bytes, self.message, self.plist)) fh.close() def grab(self, url): """grab images (not very sophisticated yet, doesn't handle redirects and such)""" h = self.opener.open(url) mtype = h.info().getheader('Content-Type') data = h.read() return (mtype,data) def parse(self): for part in self.message.walk(): if part.get_content_type() == 'text/html': self.rebuild(part) return def rebuild(self,part): # parse the HTML soup = BeautifulSoup(part.get_payload()) # strain out all images referenced by HTTP/HTTPS images = soup('img',{'src':re.compile('^http')}) count = 0 # prepare new MIME message newmessage = MIMEMultipart('related') for h in preserved_headers: newmessage.add_header(h,self.message[h]) attachments = [] for i in images: # Grab the image (mtype, data) = self.grab(i['src']) # Build a cid for it subtype = mtype.split('/')[1] cid = '%(count)d.%(subtype)s' % locals() # Create and attach new MIME part # we use all reference methods to ensure cross-MUA compatibility image = MIMEImage(data, subtype,name=cid) image.add_header('Content-ID', '<%s>' % cid) image.add_header('Content-Location', cid) image.add_header('Content-Disposition','inline', filename=("%s" % cid)) attachments.append(image) # update references to images i['src'] = 'cid:%s' % cid count = count + 1 # inject rewritten HTML first part.set_payload(str(soup)) newmessage.attach(part) # now add inline images as extra MIME parts for a in attachments: newmessage.attach(a) # replace the message self.message = newmessage if __name__ == "__main__": a = emlx('320611.emlx') a.parse() a.save('injected.emlx')
Right now, I’m considering tweaking the
plist flags a bit, and since I absolutely loathe the bright blue header Mail.app uses to display feed items (which often hides large portions of item titles) I will be doing outright conversion to “normal” e-mail messages.
Plus, of course, I still need a decent way to invoke it upon an entire folder crammed with RSS items. That is easy enough to do, but I’d rather try to code something that can be re-used by other folk, and as such I’m looking into developing an Automator action for this.
Time (my scarcest resource) will tell if it’s doable. Still, I wonder why Apple doesn’t allow for archival of RSS items with inline images – it’s not as if they don’t have all the pieces (and Automator already has plenty of RSS support…). | http://the.taoofmac.com/space/blog/2008/03/03/2211 | crawl-002 | refinedweb | 945 | 58.58 |
Custom widgets using PyQt
Introduction ;-)
What you need to follow the tutorial
- PyQt version 3.10 or later (older ones probably OK, too, just untested by me)
- docutils
Step I: Drawing the thing
We can use Qt's desiger to draw the layout of our widget, if it needs more than one component.
In this case, I will create a tabbed widget containing a editing (QTextEdit) and a viewing (QTextBrowser) widgets.
- Open designer.
- Tell it you want to create a widget.
- It will create an empty form, called Form1, nothing very exciting ;-)
- Now layout the stuff just the way I want it. The tabbed widget is called tabs, the QTextEdit is called editor, and the QTextBrowser is called viewer.
- Right-click on the form, and choose Form Settings
- Class Name: I choose QRestEditorBase This is the name of the generated class.
- Comment: Describe your class
- Author: You
- You can see this in the qresteditorbase.ui file
Step II: Slots
A Qt Widget has a number of slots, which is how the code accesses its functionality.
While the real QRestEditor is a rather complex widget, I will explain just a small part of it, so you can see how the general case is. Repeat as needed ;-)
You can create the slots from designer, by right-clicking on the form and choosing Slots.
Well, create all the slots you need. I only created one, called render(), which will use the docutils module to render the text in the editor into HTML on the viewer.
Step III: Implementation
While the look of the widget is already done, it needs to be fleshed by adding code to it.
My preferred way to do it is creating a subclass of QRestEditorBase, called QRestEditor, and implement there the desired functionality.
- First compile qresteditorbase.ui using pyuic:
pyuic -p0 -x qresteditorbase.ui > qresteditorbase.py
This will generate a qresteditorbase.py file which implements the QRestEditorBase class.
You can see if it works by doing
python qresteditorbase.py
Now here's how the QRestEditor class looks like:
from qt import *
from qresteditorbase import QRestEditorBase
from docutils import core
class QRestEditor (QRestEditorBase):
def __init__(self,parent= None,name = None,fl = 0):
QRestEditorBase.__init__(self,parent,name,fl)
def render(self):
self.viewer.setText(core.publish_string(str(self.editor.text()),
writer_name='html',
settings_overrides={'input_encoding': 'utf8'}))
if __name__ == "__main__":
a = QApplication(sys.argv)
QObject.connect(a,SIGNAL("lastWindowClosed()"),a,SLOT("quit()"))
w = QRestEditor()
a.setMainWidget(w)
w.show()
a.exec_loop()
As you can see, it simply reimplements the render() slot.
Now you can just use the widget at will. Of course what we did so far is nothing unusual :-)
Step IV: Designer's Custom Widgets
Designer is a nice tool. That's why we used it in Step I, after all. Now, we'll see how we can make it so our new widget can be used in designer, too!
- Open designer.
- Go to Tools->Custom->Edit Custom Widgets
- Click on New Widget
- In the Definition tab:
- Class: QRestEditor
- Headerfile: blank
- In the Slots tab:
- Click New Slot
- Slot: render()
- You can replace the Qt logo with some picture so that later, when we use the widget, it looks somewhat WYSIWYG.
- Click Save Descriptions, and save it in an adequate file.
Step V: Using the custom widget in an application
Now, suppose we want to create a dialog containing the custom widget we created, for demostration purposes.
Open designer
Tell it you want to create a dialog
Tools->Custom->QRestEditor
Click anywhere so it gets created.
Now, you need to tell designer something, so that the generated code will import the right Python module to create a QRestEditor widget.
Right click on the form, and choose Form Settings
In the Comments field, add this:
Python:#Import the qresteditor module Python:from qresteditor import *
Notice that there is no space after Python: because this will be inserted in the python code!
Add a QPushButton somewhere in the dialog.
Now, something that doesn't quite work yet:
- Connect, using Designer, the clicked() signal of the button to the render() slot in the QRestEditor, so that when you click it, it will trigger the rendering.
- Save the dialog to a file. I used testdialog.ui
- To test it, compile it using pyuic and run it:
pyuic -x -p0 testdialog.ui > testdialog.py python testdialog.py
Sadly, on PyQt 3.10, the connections are not generated correctly, so this will not work right. :-P
On testdialog.py, you will need to change the line
self.connect(self.pushButton1,SIGNAL("clicked()"),self.restEditor1,SLOT("render()"))
to
self.connect(self.pushButton1,SIGNAL("clicked()"),self.restEditor1.render)
This means this is not yet perfect. But hopefully it ill get fixed soon, if possible :-)
However, if you handle all the connections to the custom widget by hand in a subclass of testdialog, it is entirely possible to use this already. That means you can't use designer 100%, but it's close.
Possible improvements
- If someone could manage to write a QWidgetPlugin that created arbitrary PyQt-based widgets, which could be used from C++, that would have two very nice effects:
- It would be possible to use such widgets WYSWIG from designer
- It would be possible to use them from any C++ Qt applications! (at least to some extent)
- Alternative: maybe it is possible to load the class we create, and then automagically generate a .cw file for it. The format of the .cw file is simple enough, and in PyQt every method is a slot, so it would be as simple as picking up a list of methods... that would make things quite a bit easier!
Packaging
There's really no need to package these things. You could just put them in a subfolder of your own project, and use them.
However, if you want to have a system-wide version of the custom widget, a simple distutils-based setup.py can do the work, and is left as exercise for the reader (post it as a comment, you get your grade later ;-)
Final Words
Of course to make a good Restructured Text editor widget, it needs lots more fleshing out than what you see here. In fact, I wrote this article in a more advanced version of the same thing ;-) The purpose was simply to show how you can create widgets maximizing the chance of reuse.
If any magazine or site editor reads this, feels this article has adequate quality, and wants to publish something I write... feel free to contact me and we'll talk about it. I'm not expensive, I have several ideas, and I write quickly ;-)
Hi very nice article | https://ralsina.me/stories/27.html | CC-MAIN-2021-04 | refinedweb | 1,107 | 63.49 |
This library converts a
Float to a
String with ultimate
control how many digits after the decimal point are shown and how the remaining
digits are rounded. It rounds, floors and ceils the "common" way (ie. half
up) or the "commerical"
way (ie. half away from
zero).
Example:
x = 3.141592653589793 round 2 x -- "3.14" round 4 x -- "3.1416" ceiling 2 x -- "3.15" floor 4 x -- "3.1415"
The given number of digits after decimal point can also be negative.
x = 213.14 round -2 x -- "200" round -1 x -- "210" ceiling -2 x -- "300" floor -3 x -- "0"
Commercial rounding means that negative and positive numbers are treated symmetrically. It affects numbers whose last digit equals 5. For example:
x = -0.5 round 0 x -- "0" roundCom 0 x -- "-1" floor 0 x -- "-1" floorCom 0 x -- "0" ceiling 0 x -- "0" ceilingCom 0 x -- "-1"
Have a look at the tests for more examples!
Why couldn't you just do
x * 1000 |> round |> toFloat |> (flip (/)) 1000 in
order to round to 3 digits after comma? Due to floating point
arithmetic it might happen that it results into someting like
3.1416000000001,
although we just wanted
3.1416.
Under the hood this library converts the
Float into a
String and rounds it
char-wise. Hence it's safe from floating point arithmetic weirdness.
From the root of your Elm project run
elm package install myrho/elm-round
Import it in your Elm modules:
import Round
| Version | Notes | | ------- | ----- | | 1.0.3 | Fix issues with number in scientific notation, complete rewrite. | | 1.0.2 | Given number of digits after decimal point can be negative. | | 1.0.1 | Upgrade to Elm 0.18 | | 1.0.0 | First official release, streamlined API and tests, docs added | | https://package.frelm.org/repo/710/1.0.3 | CC-MAIN-2019-09 | refinedweb | 296 | 76.82 |
Create a Database Module
08/08/2018
You will learn
You will create a database module with Core Data Services artifacts.
Back to your project, right click on your project and then click on
New->SAP HANA Database Module:
Name your module
db and click on next
Remove the namespace, add a name to the schema, click on Build module after creation and the click on Finish
You will now use Core Data Services to create a table. You will then use other entities to combine the data.
Begin by creating a folder under
db->src:
Call it
data:
Create a CDS artifact in your new folder
Call it
PO
You can now explore the graphical Core Data Services editor briefly.
Right-click on the entity and choose Graphical Editor.
Double-click on the context to create an entity:
Click on an entity and drop it in the editor:
Call it
APPROVAL_STATUS:
Double click on the node you have just added (inside the white rectangle) and click on the + sign to add a new field for your entity:
Create two fields as follows:
Hint: If you haven’t already, close the
Gitpane.
Save and close the Graphical editor.
Open the Text Editor again by right-clicking on
PO.hdbcds
Copy the definition of the entity (blurred out below) and click on Validate:
You will now add data into your new entity. Build the db module first:
Create a comma-separated values file called
status.csv in the
data folder:
Add the following contents to it:
ID,TEXT I, In process A, Approved R, Rejected
Save the file.
Now you need to add a new file to indicate how that file loads your new table. Create a file called
load.hdbtabledata
Add the following contents to it:
{ "format_version": 1, "imports": [ { "target_table": "PO.APPROVAL_STATUS", "source_data": { "data_type": "CSV", "file_name": "status.csv", "has_header": true }, "import_settings": { "import_columns": [ "ID", "TEXT" ] } } ] }
Save:
Build the module.
Save and build the
db module. Wait until the build finished to answer the following question. | https://developers.sap.com/latvia/tutorials/xsa-ml-e2e-create-cds-db.html | CC-MAIN-2018-51 | refinedweb | 334 | 61.97 |
Rob:
- ODF’s lack of spreadsheet formula syntax creates some interoperability challenges; Because ODF 1.0 and 1.1 do not support formulas, all ODF spreadsheet implementations are application-dependent
- We’ve worked hard to overcome these challenges in ways that provide accurate results and predictable interoperability
- We have been fully transparent about the decisions we’ve made in our ODF implementation, both in terms of the guiding principles we’ve followed and also the specific details published in our implementer notes
- The Open Formula specification is not yet a standard, so we do not support it in its unfinished state, but we will look closely at Open Formula when it becomes a standard and make a decision then about how to best proceed.?
After further investigation, though, I think I see what Rob may have actually done to get the result in his table. He seems to have included some steps that aren’t documented in his blog post:
- In Open Office Calc, he went into Tools, Options, Load/Save, General.
- For “ODF format version” he changed the setting from “1.2 (recommended")” to “1.0/1.1 (OpenOffice.org 2.x)”
- The dialog then warned him that “Not using ODF 1.2 may cause information to be lost.”
- He clicked OK to save the:
The standard defines the attribute table:formula, contained within the element <table:table-cell>, contained within the parent element <office:spreadsheet \ table:table-row>
This attribute is supported in core Excel 2007.
1. When saving the table:formula attribute, Excel 2007 precedes its formula syntax with the "msoxl" namespace.
2. When loading the attribute table:formula, Excel 2007 first looks at the namespace. If the namespace is “msoxl”, Excel 2007, Excel 2007 loads the <text:p> element and maps the element to an Error data type. Error data types that Excel 2007 does not support are mapped to #VALUE!
And, as both Rob’s tests and mine show, that is exactly what Excel does. It would be great if there were a place implementers could go to see these sorts of details for all major ODF implementations.
Summary.
PingBack from
Hi Dough !….".
Not a good end to Rob’s Mary tale, isn’t it ? (but a good "real world" use case about MSOffice with SP2).
Best and cheers from Brazil,
Jomar
Doug, all of the spreadsheets I created and tested are in the zip file linked to from my post. There are no additional "undocumented" steps involving saving OpenOffice 3.01 files in ODF 1.1 format. I resent your insinuation that the test cases I used were otherwise than what I documented.
My guess is your results differed because you used an older version of Symphony than the one I tested with. My blog post explicitly states that I used Symphony 1.3. You state that you used Symphony 1.2. The goal of my test was to use the latest version of each application. Using older versions is rather pointless, especially when the updates are free. If you cherry pick versions then of course you will get different results. If I wanted to play that game I could have picked the prior Excel 2007 SP1 and shown that it cannot load ODF documents at all. But I think it is best like to show all applications in their most favorable light by using their latest versions.
If you use the versions I indicated, the latest and greatest versions of the apps, you’ll get exactly the results I indicated, using the out-of-the-box defaults of the applications, with absolutely no tinkering required. OO 3.01 spreadsheets are properly loaded by Symphony 1.3 and vice versa. In fact, Symphony 1.3 spreadsheet formulas appear to be read properly by all of the ODF implementations I tested, except for Excel 2007 SP2. It appears that some vendors actually work together for interoperability rather than just work on excuses.
Hi Jomar, thanks for the kinds words. The feeling is mutual.
Regarding Mary’s spreadsheet, I think she’s actually in a very good position here. Seriously, hear me out …
Her future husband may have had concerns about the presence of undocumented formula markup in her document, which does not conform to any published standard. He may have worried that this implied a lack of principles on her part; a questionable willingness to set aside open standards in pursuit of some other short-term objective.
But now, he sees that she has removed that markup from her document, and it is 100% open-standards compliant. And it still contains their data, the most valuable piece of the puzzle. Hooray! “Thank you, Mary,” he says, “for assuring that we will start our life together in true conformance to open standards, with no non-standard markup in our documents.” And they live happily ever after. 🙂
Of course, after Open Formula is published, the story could get even better!
Hi Rob,
First off, let me apologize: I didn’t notice that you were using Symphony 1.3. My mistake. I’m sure you can see how, with Symphony 1.2 installed (the currently available version from your web site at), I had the experience that I did.
So when will Symphony 1.3 be released? Is there a way for me to get it now? I’d love to do some interop testing with it.
And regarding the good interoperability between Symphony 1.3 and OpenOffice 3.0 that you’ve described, you seem to be saying that that’s based on the unpublished, unapproved, not-yet-ready-for-public-review ODF 1.2 spec, correct? What happened to the importance of conformance to open standards? Does that matter this year?
Oh, by the way — do you intend to update your test grid to reflect Kspread beta’s ability to interoperate with Office 2007 SP2? Or does your testing methodology only allow for pre-release versions of *IBM* products?
Cheers,
Doug
Hi
I checked on the IBM web site too and could only find 1.2, so it’s not one of those Redmond IP filter tricks that some organizations have been known to employ 😉
I might try this new approach with analysts:
Monarch .next opens all formats on earth flawlessly, will pull in 1 TB of data in 5 seconds and make your morning coffee-it’s all in this table.
Oh, you can’t repro that-well of course you can’t, you don’t have .next. I explicitly stated you need it. So there 😉 Doug,
I find your arguments weak at best. Would a published, peer-reviewed standard be the best solution? Absolutely. Is it the only way to achieve interoperability? Of course not. Sometimes you have to simply mimick what other software does. Which is exactly what all other ODF implementations do. The only one that fails this formula test so miserably is Office 2007 SP2. Which shows the world that interoperability is possible, but it was not important (enough) to you.
By the way, how did you manage to be interoperable with Lotus 1-2-3? A simple pointer to the published standard describing the document format (including formulas) is enough. Thanks.
Stephan
Doug, I’ll pass you name on to our beta program people and see if they can get you a copy.
I’d be happy to add KOffice 2.0 RC to the list, and in fact I wrote to the KOffice guys before I wrote that blog post, to see if I could get them to run some test cases, but I didn’t hear back in time. The structure of the test requires that I not only test KSpread/SP2, but test all of the other combinations involving KSpread and KSpread-authored spreadsheets as well. When I have that info, I’ll update the table. Ditto for any other product updates that come along. The only one that is tricky is Google, since it is not obvious when they have updated their code. It could be different an hour from now than it is now. That’s the nature of the beast with web-based editors. So to be safe I retest with them any time I re-test any other editor.
The recommended approach is to conform with open standards where available and then do what ever additional work that is required to be interoperable, so long as that is not incompatible with conformance. Using draft ODF 1.2 formulas in an ODF 1.1 document is still conformant to the ODF 1.1 standard. First, your spreadsheet formulas are not conformant with the ODF 1.1 standard. And second, you could have easily made them be both conformant and interoperable. In fact you already have the code to do this from your ODF Add-in. Using the excuse, "The standard made me do it" doesn’t cut it, since it is not true. You could have been cboth onformant and interoperable. But you ended up being neither.
I’d suggest that all ODF vendors add Doug Mahugh to their beta testing program. Microsoft clearly needs some help finding their way to doing interoperability testing _before_ they ship nonconforming ODF code to millions of people.
@Stephan
Hi Stephan – we have considerable experience in reverse engineering Excel binary formats and also in reading and writing OOXML spreadsheets. The problem with aping the behaviour of canonical apps is that you can never be sure what pain updates will bring, or if observed behaviour is in fact a bug that is going to be fixed.
In addition, if you are using subsets of functionality, you can’t be sure if mimicking the subset is acceptable to other mimicking apps, who may consider some functionality you don’t implement a required part of their own private, internal conformance rules.
We have observed this behaviour in some applications that read XLSX files by trying to ape Excel 2007 behaviour, rather than the spec. This has resulted in those applications not being able to read our valid, correct OOXML instances. This is wrong and it is the path to madness.
The "sometimes" you mention WRT mimicking apps is where there is no alternative, such as the old days of BIFF, where there was a spec, but it was somewhat outdated and badly documented. With ODF and OOXML, the point is to try and leave those days behind.
It appears that Microsoft are in a no-win situation here, as if they conform rigidly to standards they are lambasted and if they don’t they are lambasted.
Perhaps if the people that feel they are qualified to be judge and jury about how Microsoft should implement standards in software could produce a guidance document with advice on when and where they should not be rigid, then that would probably help.
Lotus 1-2-3 compatibility was "the old days" if you like. People like Rob and the ODF contributors can take some of the credit for "helping" Microsoft to see the light, but you have to be careful what you wish for.
BTW, if the documentation on the ODF support had been perused, this behaviour should not have been a surprise.
This all may sound like minor details to large organizations with big office suite implementations and huge resources at their disposal, but having decent standards in place really helps smaller vendors like us, who often only use targeted subsets of functionality.
In the "old days" our exports could only reliably be read by Excel and to some extent OpenOffice (+clones) but with OOXML, the audience of consuming applications can grow far more easily.
It’s like the old write once, (debug everywhere 😉 run anywhere story. Our hope for OOXML is that it widens the population of consuming applications for our data exports. Stab-in-the-dark application-aping is not something we can afford to do, or rely on other applications to do.
@Rob I’m not sure what your phrase means: "conform with open standards where available and then do whatever additional work that is required to be interoperable, so long as that is not incompatible with conformance"
Do you mean "conform with those parts of ODF that are defined, then do what OpenOffice does, which, unsurprisingly is allowed by an extremely catholic conformance clause" ?
Or "conform to parts of standards you fancy using but feel free to mix in a bit of undefined, proprietary cruft if you like"
Beware, it sounds like you’re giving Microsoft a free rein to go back to the bad old days!
Gareth
Gareth, I mean that if a standard permits a dimension of variability, then once should take one’s head out of one’s ass and look around at other products in the market, and if the preponderance of existing implementations are already following a formula convention that is conformant with the standard, then interoperability will be greater if one also follows that same convention rather then diverging onto a non-interoperable convention. This is especially true if that convention is already undergoing standardization in a committee of which you are a member.
This is so obvious that you almost need to be in the habit of beating yourself on the head with a hammer to not understand this.
Rob,
I understand the pragmatic nature of this, of course, but aren’t "conventions" what we are trying to avoid?
If everyone gets too comfortable with these conventions, then it could lead to OpenFormula being redundant, with the all-too-common slide down to the simplest lowest common denominator by vendors outside the "boxing ring".
In any case, Microsoft have to be more careful than most with their approach to standards, you might have noticed.
As Doug said, the philosophy and guidance on the implementation was known well in advance. Perhaps a formal request and a pass for using "conventions" instead of standards from the ODF / OOO good and the great might have enabled them to do so without fear being accused of any wrongdoing.
Conventions of a canonical app (Office) have been deemed very bad, historically. Equitably, conventions of a canonical app (OpenOffice) must be seen in the same light.
However, the world still turns happily, as the vendor who created the conventions (Sun) has created a plug-in which works with Microsoft Office to perpetuate these conventions in documents produced and consumed by Microsoft Office.
Everyone is happy. Now if Microsoft had made it so the plug-in would not work, giving an error message with some heavy threats about standards etc etc, then we could rightfully be up in arms.
We still have the choice, don’t we?
Gareth
Gareth, read my words more carefully. I never said to not use the standard, or to use conventions instead of the standard. I said to use common conventions to supplement the standard where needed to provide interoperability. And this particular convention, although it may be in the OO XML namespace, is hardly an OO-specific convention. As my tests showed, this convention is supported by every other ODF spreadsheet editor I tested, except for SP2. So you cannot just explain it away by saying it is OO-specific. Perhaps it originated in OO years ago. But everyone, even Microsoft’s own ODF Add-in for Excel, supports that convention. But not SP2 for reasons that have still not been explained.
But Rob, couldn’t that same argument be used to make a case for everyone sticking with the binary Office formats? They were widely supported years ago, and continue to be. And more to the point, couldn’t that same line of reasoning have been used by the ODF TC a few years ago to conclude that ODF should include formula syntax compatible with the millions of documents that already existed at that time with formulas in them?
Regarding your desire for an explanation for our decision to not support the formula syntax used by OO, perhaps you missed this paragraph above:.
Doug, the analogy would be this: If we supported the legacy Office binary spreadsheet format, as do all spreadsheet applications that I know of, then we would look to the available specification, which Microsoft resurfaced last February after a decade’s absence. But when we look in that binary format specification we find that has no coverage of spreadsheet formulas. Zero. Zip. Nada. So what should we do? Indeed, what did we and every other vendor do in that situation? We looked around, found what the prevailing convention was and coded to it.
You seem to be arguing that in that situation each vendor implementing the legacy binary format should make their own choice as to formula language and create non-interoperable documents. Does that really make sense for the customer? Certainly move the convention into a standard. That is what a convention is in the end — a formalized convention. But to replace an widely held convention and an emerging standard in a committee which you are a member of, with a divergent approach that is non-interoperable with any other vendors and indeed not compatible with your own ODF Add-in’s documents, this defies reason.
Well, our "divergent approach" as you call it is based on the one and only approved and published standard for formula markup that exists, as of today.
If we’re going to write formulas, that seems to me a pretty logical approach. And when there’s another published standard for formula markup, we’ll look at that option as well (as I said above).
Rob,
So as I understand it, when Microsoft add extensions to an existing standard, as OOO does with formulas on top of ODF, it is called "Embrace, extend and extinguish" and is the vilest evil to ever have been visted on the earth, but when it’s not Microsoft, they are merely "conventions" which should be supported by all other vendors in an ecosystem.
See "The Strategy" section in:
Squirm in the irony of this when applied to OpenOffice + ODF.
I know this is expressed in a rather emotive way, but purely to illustrate the point.
One could say that one man’s evil extensions are another man’s conventions.
Of course I read your words carefully, and my response reflected that. I said standards + cruft, not cruft without any standards.
As I said before, maybe this is a harsh implementation, but this is the post-EU-slapdown Microsoft, they have to be wary. They don’t want to be accused of being complicit in EEE by proxy for OpenOffice 😉
And again, you can still use the Sun plug-in. They are the originators of said extension!
Another thing – any response to my first question?
Dough,
It’s funny to see your point: numbers only = data preserved (I bet that several economists will disagree with you, but ok).
No additional comments 🙂
Best,
Jomar
dmahugh: "Well, our "divergent approach" as you call it is based on the one and only approved and published standard for formula markup that exists, as of today."
Irrelevant for the following reasons:
1) The OOXML formula language was not approved and published as a separate standard. Rather it is part of a larger specification. Thus, it was not approved as a general purpose formula language. It was approved as part of a larger specification.
2) The OOXML formulas do not match the syntax of the formula examples given in ODF 1.1 subsection 8.1.3. For instance, they exclude the brackets around the cell references. Thus the OOXML formula language is not in keeping with the spirit of the ODF 1.1 specification.
3) The OOXML formulas violate ODF 1.1 subsection 8.3.1 by failing to include a dot before the cell reference.
4) OOXML formulas are not a standard, defacto or otherwise, for other ODF 1.1 implementations.
From the main article: "Open Office, for reasons I don’t understand, has decided to use as their default formula syntax the unfinished Open Formula specification, which is neither approved nor published by OASIS – not even out for public review yet."
I beg to differ:
Latest draft is less than a week old. That’s not public enough for you?
Gareth, this is not the time or place to announce the exciting new features in Lotus Symphony 1.3 You’ll want to keep posted to for that information.
@Dough, "Well, our "divergent approach" as you call it is based on the one and only approved and published standard for formula markup that exists, as of today."
Well there is at least a big issue with this approach: the "approved and published standard for formula markup that exists" is NOT conformant with ODF 1.1 (there is no "." in front of the cell references). So the OOXML formula syntax is NOT a possible choice if you want to respect the ODF standard.
You can argue as long as you want, it is clear to everybody (even to you if you want to be honest) that this is again a trick used by Microsoft to break interoperability.
But Microsoft making promises and not following them is no news: "the Commission notes that today’s announcement follows at least four similar statements by Microsoft in the past on the importance of interoperability"
@Gareth, "this is the post-EU-slapdown Microsoft, they have to be wary."
Well, Microsoft should be more wary then: ."
This is clearly a very bad start…
@Doug, let’s have a closer look at your "Guiding Principles":
* Adhere to the ODF 1.1 Standard: Failed. The formula syntax used in SP2 does not adhere to ODF 1.1
* Be Predictable: OK. You summarise this as "The principle here is that we want to do what an informed user would likely expect. " Indeed, a user informed of Microsoft usage would likely expect that you find a trick to break interoperability.
* Preserve User Intent: Failed. The obvious intent of the user is to use formula, not to silently have them erased.
* Preserve Editability: Failed. As the formula are gone, you can no longer edit the file successfully.
* Preserve Visual Fidelity: OK.
I would say this is not a bright score…
@Luc so you feel that the EU should deem extensions not included in a standard as a positive interoperability initiative that is to be (or forced to be?) supported in this case, but not say in Java or any other examples of EEE.
One can’t bang on about interoperability with extensions now, when it has been decided that standards are the cure. Is interoperability the most important issue, or is it standards? Maybe a few vendors could get together in a smoky room and thrash out interoperability behind the scenes instead, would that be better?
I don’t think the EU would base a legal challenge on this type of non-equitable basis.
Microsoft have been supportive of implementing any future, published standard in this area, so the point is moot.
The press release contains 2 key concepts: ODF (no formulas, so irrelevant w.r.t the standard) & better interoperability.
There is now native support, rather than just the old MS-sponsored plug-in. Whether it is better or not, I don’t personally know, but there has certainly been a considerable effort in terms of the code and implementation notes.
I imagine that over a wide range of scenarios (excepting the ones targeted to raise the temperature) the interoperability is better.
I guess only real customers with no agenda will be the only ones to be able to verify that.
@Rob I am sure they will be beavering away at some features inspired by this very thread as we speak 😉
Gareth
@Gareth: the role of the EC here is not to validate conformance to whatever standards. It is to check if there is an abuse of a monopoly position that harms customers. The EC press release don’t talk about standards. It talks about "better interoperability" and allowing consumers "to process and exchange their documents with the software product of their choice". And SP2 offers worse interoperability than the previous CleverAge plug-in, not better. Conclude yourself.
It is easy to see, even for the EC, that the recent claims of Microsoft to be the White Knight defending the sacred Standards against the Evil OpenOffice and its Dark Lord Rob Weir is purely an hypocritical position, resulting is less interoperability rather than more interoperability.
Doug, we all understand you are deeply troubled by the failure of your employer to meet the expectations on a product update that is an exact match with your job title "Office Interoperability".
However, I don’t think it is a great idea to try and save face by telling lies and trying to pretend that nothing bad happened. Nothing you say can cover up the very obvious facts:
1) Excel 2007 does not produce conformant ODF 1.1 documents. (Small but important issues with missing brackets and the leading dot.) That is not interoperability.
2) Excel 2007 won’t open any ODF 1.1 spreadsheet created by any other ODF-supporting software without silently crippling it by deleting all formulas. That is not interoperability.
3) No other ODF 1.1 spreadsheet application will open an (almost-conformant) ODF 1.1 spreadsheet saved from Excel 2007. That is not interoperability.
The actual product release certainly does not adhere to your own guidelines which you are quoting:
"Adhere to the ODF 1.1 Standard", "Be Predictable", "Preserve User Intent", "Preserve Editability"
and "Preserve Visual Fidelity". You need to admit that SP2 failed on at least three of five accounts here. (Deleting all formulas is bad but formally predictable behavior, and visual fidelity is preserved even by just importing a static table.)
Excel 2007 is not interoperable through the ODF format, and does not yet deliver on your promises. Admit that, and maybe we can get on with fixing it instead of squabbling over why who did what. If you stay on your current path of denial, you will send the message that your job is not in fact "Office Interoperability", but something rather less honorable.
The cell-reference issue that some of you have brought up is part of the broader issue of formula syntax in ODF. We believe we are completely conformant with the spec on this matter, and that we have implemented formulas in a way that is consistent with the intent and spirit of the ODF specification. I’ll post a blog post on this topic next, since I think it deserves more thorough and specific explanation than a comment box can contain. (As Fermat put it, "Cuius rei demonstrationem mirabilem sane detexi. Hanc marginis exiguitas non caperet.")
I must admit, it’s pretty fascinating to me to see the level of interest people have in discussing something (formulas) that doesn’t even appear in the ODF specification at all. Imagine how much fun we’ll have when we start discussing things that are actually there!
"Imagine how much fun we’ll have when we start discussing things that are actually there!"
this is the problem here, you are arguing with a guy who has "fun" with standards. Standards are a serious thing, not a game nor a Public Relations statement.
But, but, but … you guys are making my head asplode!
If standards are so serious, why are all of these people arguing that we should support things like OO.o’s historical formula syntax (never was a standard, never will be) or Open Formula (not yet a standard, probably will be eventually)?
Which is it? Do you think standards important, or not?
In either case, here at Microsoft, we feel standards are important. And we have fun, too.
Doug, if you are not convinced by Rob, please have a look at what the implementers of the add-in translator sponsored by Microsoft are saying:
."
So, Microsoft already supports OO.o’s historical formula syntax, despite the fact that it never was a standard, and never will be.
Can you explain this:
– Microsoft first made an attempt at not supporting the OOo formula syntax, received complaints and "decided to implement a decent formula translation approach" in the translator;
– at the same time, Microsoft decides that it is against the Microsoft serious approach to standards to support the same decent approach for SP2.
This hypocrisy is so blatant that I would like to thank you for showing to the world and to the European Commission that there is no "new Microsoft", and that you remain the same monopoly abuser playing dirty tricks to avoid interoperability at all cost.
Hi all,
I do seem to see a higher degree of conformance required by microsoft by others. If its not in the spec, it is not advisable to implement it.
Let me take a case… why don’t all browser manufacturers mimic internet explorer 6 to give practical interoperability.. This is because it is not the standard.
Now u want microsoft to provide practical interoperability with ODF even though its not in the spec.???.. Seems pretty one sided to me..
@gk: "Now u want microsoft to provide practical interoperability with ODF"
Microsoft itself committed to this practical, real world interoperability in its press release on 21-May-08: "Microsoft recognizes that customers care most about real-world interoperability in the marketplace, so the company is committed to continuing to engage the IT community to achieve that goal when it comes to document format standards."
Many customers and ODF supporters *asked* for support of ODF in MS Office. Nobody *forced* Microsoft to support ODF. But if MS decides to provide support, it seems logical to require a working support, not a broken one aimed at fragmenting the corpus of ODF documents.
What we see here is the same old tricks already used to fragment Java and HTML, for which MS has been punished respectively by an US court and the European Commission.
I have no doubt that once again Microsoft will be punished by the EC for their current monopoly abuser behaviour.
@Luc
Allthough the OpenXML/ODF translator was funded by Microsoft it is OSS software created by 3rd pary software producers and as such not belonging to any Microsoft software prodcut and I think it is safe to asume that Microsoft will not add OSS software snippets to their core software product as that would create possible license issues especially if someone were to reuse some GPL code in those sources.
Luc, as I said in my post we have certain obligations when we ship software that don’t apply to open-source projects. I’ll cover some more details of this in my follow-up post on spreadsheet interoperability coming shortly.
Luc: "…I think it is safe to [assume] that Microsoft will not add OSS software snippets to their core software product as that would create possible license issues especially if someone were to reuse some GPL code in those sources."
You can’t legally put GPL code into BSD at all, unless you’re the author, in which case you’d probably be required to relicense the code under BSD in order to submit it to the project in the first place.
I think the whole issue of code reuse is beside the point, though. Microsoft has more than adequate resources to create whatever code they wish, especially when the development effort is directly associated with one of its primary sources of income: Microsoft Office. Of all parties involved, they are in the worst position to argue for choosing a formula language based on scarcity of resources.
doug, let me copy a snip of "code" from Weir’s blog:
would like to know why Excel 2007 doesn’t produce ODF 1.1 conformant documents, regarding this "cell address" thing.
Thank you
carlos, a small correction:
* OOo 3.0.1: of:=E12+C13-D13
* Excel 2007 SP2: msoxl:=E12+C13-D13
clippy, actually
* OOo 3.0.1: "of:=[.C4]+[.D4]+[.E4]"
isn’t it?
@Clippy, carlos had it right the first time. In the Maya spreadsheet Rob Weir saved using OOo 3.0.1, the formula is "of:=[.E12]+[.C13]-[.D13]". This is a direct cut and paste.
@carlos, Microsoft interprets the word "Typically" in section 8.1.3 as being infectious and recursive, converting everything it touches into non-normative examples. Thus, the content that a formula, cell addresses, and even all of section 8.3.1 become non-normative simply by being within seven degrees of the word "Typically". Never mind the fact that the specification makes perfect sense if you interpret "Typically" as only applying to having a equal sign at the beginning of the formula, as this example in 6.5.3 would indicate:
text:formula=’ooo-w:[address book file.address.FIRSTNAME] == "Julie"’
Note the use of the brackets used for addressing are present, but the preceding equal sign is missing.
Microsoft’s interpretation of the ODF 1.1 specification is not necessarily a deliberate misreading, but even giving them that benefit of the doubt, they still ignored parts of the spec they considered informative to use the formula language that suited their interests rather than picking a widely used defacto standard that takes 8.1.3 and 8.3.1 to heart and promotes interoperability. The whole point of standards is to allow interchange. Anything else is just a subversion of the process.
Matthew, there’s a long history how standards text is written and what’s normative or not; ISO and other standards bodies have rules governing such details as well. I think it’s in everyone’s interest for us to follow those precedents and rules, rather than making up new rationales for such things. The entire reason such rules exist is so that we DON’T get into these little debates about things like "what the meaning of the word ‘is’ is."
As I’ve mentioned, I’ll cover all of this in my next blog post. This discussion needs more facts and fewer opinions … in my opinion. 🙂
@Mathew: "The whole point of standards is to allow interchange. Anything else is just a subversion of the process."
The pity is that subversion of processes is a standard at Microsoft 🙁
@Carlos
The reason for several applications using the same cell referencing system is that this is used by OOo and that these applications implemented the existing OOo spreadsheet variation in ODF whereas MS has been implementing the existing Excel spreadsheet variation in ODF and is using the cell referencing mechanism that Excel is using.
It seems that the ODF 1.1 specification does not define normative specifications on how to reference spreadsheet cell or in fact anything normative about specifying spreadsheet content in an ODF spreadsheet.
thanks hAl
doug, is that true?
IMHO if it is, then this decision is totally inexcusable.
@Doug Mahugh:
If there is no standard (yet) for formulas in ODF, i.e. not specified in ODF 1.1 and not published or finalized in ODF 1.2… then why do anything with formulas at all?
Why do SP2, period?
Or, if you prefer, if you wanted to do something useful, consider the document base.
How many ODF documents exist, world-wide? How does that compare to the empty set of documents in "ODF" format produced by SP2 (before there was an SP2)?
This is a _document_ format issue, not an _application_ issue. It doesn’t matter as much how many installed versions of Vendor X software that read/write ODF – it matters how many of what "format" exist.
So, why not take the approach of, "How do we build an SP that allows us to read the existing ODF files, and write them out again, so that the original author’s software could open the result?"
I don’t think it is something a rocket scientist needs to suggest, at a DII meeting.
Don’t forget, too, that the OOo source was and is available for reference in implementing SP2. I suggest the path of least resistance in the marketplace, would have been interoperating regardless of the state of standards — if you wanted to achieve any degree of interoperability.
Name Withheld
@gk: "Let me take a case… why don’t all browser manufacturers mimic internet explorer 6 to give practical interoperability.. This is because it is not the standard."
IE6 implements the HTML and CSS specifications. It simply does not conform to those standards. From what I can tell, Office 2007 SP2 can read/write ODF1.1 documents as specified.
What is in question here is that the formulas are not specified in the current standard, but *are* being standardised in a separate formula standard that is being drafted and agreed upon that is ratifying what existing implementations and existing documents do. Where existing implementations – such as Symphony – do not conform at the moment, upcoming versions do to make them interoperable.
I find it amusing that here Microsoft are saying "we can’t do XYZ as it is not standardised, and is still in draft" whereas the upcoming Visual Studio 10 is supporting language features in the draft C++0x standard!
It seems that Microsoft is suffering from split-personality disorder and is choosing which standards and draft standards to conform to that best suite Microsoft. Like IE being one of the few vendors that don’t support Scalable Vector Graphics or MathML, instead going for a butchered hybrid that no-one else supports.
Since Rob Weir moderates really nicely his blog, I assume he won’t post my question there.
Rob, please tell me, what are today’s incentives given by IBM to it workers to hawk agendas?
The following is just a precedent established by IBM:
**********."
I think there are actually two issues.
One is what is the most appropriate format for Office SP2 to save, in the interim new world of strongly-labelled formulas before Open Formula is released. The other is what formats Office SP2 should load successfully, in the interim new world of strongly-labelled formulas before Open Formula is released.
The ODF spec clearly says that the namespace prefix can select the syntax. I don’t have much problem with Microsoft exporting in Excel with a proper label, as the implementation notes say; it is up to other importing applications to add filters to get interoperability. The lag is just a ramification of the ODF 1.1 spec.
But what is good for the goose has to be good for the gander. For load, the current approach in SP2 and the implementation notes is clearly deficient for interoperability. Instead of failing if the syntax is not Excel, trying it in the dot notation would be genuine interoperability.
This is nothing more than Postel’s law: being conservative in what you send and generous in what you receive, which has proved itself a pretty good mindset for implementers.
You’re asking me about OS/2?!? I was still at Harvard when OS/2 was being marketed. You’ll need to ask someone much older than me about that.
To Doug’s point on "typically", the question is when does the scope of that word end? I suggest that it ends when the verbal phrase ends. This is standard English usage. Anyone is welcome to suggest an alternative reading, but you will need to state where the effect "typically" ends, and on what principle you make that reading. Simply throwing your hands up and saying ,"Oh my, this doesn’t follow ISO drafting guidelines for the preparation of standards, therefor it means anything I want, or maybe nothing at all, based on my whim" — that doesn’t cut it. You need to state your principles and interpret the text that you have, not the text that you wish you had.
Thank you Doug.
I don’t expect Rob to answer my question.
Nonetheless, another question arises, has IBM extended the above incentives to publications such as Groklaw?
One fact that intrigues me is all the fuzz about document standards knowing Linux market share on the desktop at 1% unchanged during the past 8 years:
and
The second fact that intrigues about all the fuzz on standards is the lack of a word processor and spreadsheet equivalent to MS Word and MS Excel in the Linux world until pretty recently:
The last fact the intrigues me is the lack of good GUIs in the Linux world (which have ‘borrowed’ a lot from MS Windows) until only a few years ago:
Why all of the sudden so much fuzz for document standards?
Rob, a key paragraph is:
."
Let me refrase the paragraph for you:
IBM will offer employees incentives ranging from medals to IBM software or hardware to cash, depending on how much effort they put into ODF. In return, says an IBM marketing executive, the company will ask employees to approach their neighbors, their dentists, their schoolboards. Armed with brochures and talking points, the IBMers will sing the praises of ODF as the solution to people’s document needs.’
Does the world reference implementation slip out side your minds.
How do you think bugs are going to be found in the new draft standard for formulator if its not implemented.
OpenOffice is the reference implementation so they do items from draft. Still provide option to save in prior format.
I am sorry but openoffice’s formula syntax is documented for all versions even the old Staroffice file formats style is documented. OpenOffice changed over to the draft standard because it would be more future safe.
Yes gnumeric koffice IBM Lotus Symphony and others have managed to operate in a compatible way. OpenOffice syntax 1 of them.
MS choose to be incompatible so be it.
Correct Openoffice’s spreedsheet formal are not compatible with Excels . Openoffice don’t have rounding errors and has more functions. Excel support is a small subset. So excel is still short many functions to do 1.2 support when it gets released.
Doug Mahugh and a bunch of the standards crew (both in and out of Microsoft) have been having a great
I wrote a comment here that was moderated. Thank you.
ghomen, I just looked through the comment table and I don’t see anything from you that hasn’t been let through. Can you tell me the rough date/time? Or re-post? Thanks.
@dmahugh
My comment was:
You are trying to justify a major fault on Microsoft’s ODF implementation with problems on a very specific and unuseful scenario (summing strings!). That scenario hardly matters at all when compared to the state of interoperability you left ODF with.
I sincerely think that there’s no possible excuse for what was done given so many existing reference implementations (including your own supported plugin).
Probably it’s not up to you to decide, but this really looks like a trick to spread FUD on ODF portability. People will notice, you know? There’s nothing that can be said to save face at this time.
Can something be done?
Suggestion: issue an urgent fix for this and get your credibility restored.
Why the hell a lot of people is talking about a standard not yet approved (ODF 1.2)?
Hey guys, I´m a Linux user (Debian), and I agree with Microsoft in this point of view.
And… Why the hell OOo is using ODF 1.2?
This is out of standard! Where is the interoperatibilty?
Why the hell OOo don´t follow the ISO/IEC/Whatever 26300 (ODF 1.0/1.1)?
If they follow ODF 1.2, so they are OUT of standard and interoperability!
And… Come one, help me with this bug:
ehehehehhehe
Regards,
Renato S. Yamane
Brazil
Hi Doug,
I have to say I am not a fan of Microsoft (to not say that I hate it), but no one has the rights to say anything against MS for the reason explained in your post (there are other reasons to not like the company where you work, but this one is not one of them).
It makes me wonder, how could ODF 1.0/1.1 be approved as an ISO standard? The lack of specification on how to define formulae is just a joke.
Let’s see if MS Office will follow the new ODF standard with this problem fixed when it is ready and aproved.
Regards,
Carlos
To all that say there are no standards for formulas. Did you actually _read_ Rob Weir’s blog post? I will copy the link for you:
Carefully pay attention to the quotes of section 8.1.3 and 8.3.1 of the ODF 1.1 standard. Then compare the entries for the different spreadsheet products and how they encode a formula. If you still do not see that Excel 2007 SP2 is the one that breaks the formal standard, then well, go on living in your universe….
Carl, everyone is entitled to their own unique interpretation of the normative requirements of ODF 1.1, if they’re so inclined. The interpretation you’ve linked to is one perspective, although it seems to be a perspective held by one member of the ODF TC and no others.
Here are a few other opinions you may find interesting or helpful:
From an ODF TC member and ODF implementer:
From the convenor of SC34 WG1:
From a longtime document standards contributor:
From the ODF editor:
It’s also worth noting that Excel 2007 SP2’s approach to formulas was publicly demonstrated last July (at an interoperability event that I invited all members of the ODF TC to attend). So there’s no new information here in nearly a year. For a non-Microsoft perspective on the formula details that was posted last year, see here:
Un article de fond qui montre bien comment les problèmes d’interopérabilité sont complexes. | https://blogs.msdn.microsoft.com/dmahugh/2009/05/05/odf-spreadsheet-interoperability/ | CC-MAIN-2017-26 | refinedweb | 7,662 | 62.48 |
Heard of a special Database called an Object Database? It has been around for quite sometime now, and has many uses. One benefit is with OOP programming, since it can store actual objects in it's database. You can think of Active Directory or an LDAP server as a sort of object database. It stores many class-like objects with many attributes such as a persons name, phone number, and address information. It can store much more than just that, it basically describes an entire corporate directory of both employees, various contacts, and machines. With an object database, you can simulate such a system.
From this, you may understand the difference between a relational database and an object database. Although you can simulate an object database with a relational database, it won't be ideal due to the large amount of queries and tables necessary. An object is very dynamic, whereas a defined table in a relational database is very static. You can add new attributes on the fly to an object, but you cannot easily add new columns to a table on the fly. This is one advantage to an object database. Another is being able to sort your data in a tree of hierarchy objects, much like a file system on your computer.
One very popular object database in Python is ZODB, the Zope Object Database. However, it requires a lot of dependencies from Zope itself, so in my playing around, I avoided it entirely. Instead, I ended up using Shelve, which ships in the standard Python Library. It essentially provides a persistent dict object. Similar to what ZODB provides. I created a few components for Django to test out an object database through a website, much like how Zope did. A middleware checks the incoming URL to see if an object for it exists in the object database and provides that object to the browser. I equally created a Template Loader to test the loading of templates from an object database, similar to what Zope allows. I store the pre-compiled Django Template object in the database, so there is no need to re-compile it at runtime. In order to make these components work, I needed to make a dict traversable:
from odb.models import odb_dict def get_object(path): if path == '/': return odb_dict if path[-1] == '/': path = path[:-1] try: o = "odb_dict" for e in path[1:].split('/'): o += "['%s']" % e ob = eval(o) except KeyError: return None return ob def put_object(path, o): if path == '/': return False if path[-1] == '/': path = path[:-1] ob = "odb_dict" for e in path[1:].split('/')[:-1]: ob += "['%s']" % str(e) eval(ob)[str(path.split('/')[-1])] = o return True
It is very basic and simple, but works wonderfully. I am sure there is a better way to handle it, which will add more speed. However, for the sole purpose of playing around, there is no need for optimizations. Here is the middleware I created to handle the requests:
class TraversalMiddleware(object): def process_request(self, req): try: o = get_object(req.path) if hasattr(o, '__getitem__') and 'index' in o: o = o['index'] if hasattr(o, 'render') and callable(o.render): return o.render(req) elif isinstance(o, HttpResponse): return o elif isinstance(o, str): return HttpResponse(o) elif isclass(o) and issubclass(o, Form): return FormView.as_view(form_class=o,%s</a></li>' % (k, k) else: html += '<li><a href="%s">%s</a></li>' % (k, k) return HttpResponse("<ul>%s</ul>" % html) return None except: return None
As one can see, it's very messy. It was messier before I made some recent changes. The code near the bottom is generating HTML directly because I was too lazy to create a template while I was playing around. For the most part, this middleware does a lot and detects a large amount of Django objects. I am currently working on a special content object class which has a render() function. This will replace everything seen above with just the call to render(), which will handle the rendering of that particular object. Here is the very short and simple Template Loader:
class TraverseLoader(BaseLoader): is_usable = True def load_template(self, template_name, template_dirs=None): template = get_object("/templates/%s" % template_name) if template == None: raise TemplateDoesNotExist return template, None
This does currently hardcode in the template directory, but this will change if I decide to make something of this. The idea of storing the templates in the object database is that it can allow the templates to be edited in a web-based interface, like any other object in the object database. I admired Zope when it came to it's nicely polished web interface, it allows the creation of various objects and file-types. I am planning on replicating this experience in Django. The content object class will support the easy creation of new objects, and the ability to render them on a website. Like in object-oriented programming, these objects will be the main logic for a web application. To extend this object-based system, one would create a new object with the appropriate attributes and logic. At first, it may be more for content management purposes, which is a great way to use an object database. Future plans are to create an Authentication Backend which hooks into the object database, so that user objects can be stored there. This can allow the extending of a user object to be as simple as adding a new attribute to the object itself. Other exciting features are a full ACL system for the database, which will prevent access to specific objects or actions upon them without proper credentials.
Due to implementing a render() function, and using that to render the actual object, I am hoping to add complete object transparency. Meaning, that say you wish to use multiple template engines for different purposes. The render() function will transparently handle everything and provide a usable Template object to Django. This transparency will go a step further and fully provide introspection to automatically generate editing forms based on the object. If a template is assigned to an object, it will automatically generate a context and push it to the template when viewed by end-users. The listing of a directory node in the object database, will automatically render an object list template for that node. Unlike Django's ListView however, this is not limited to a single object type, but will display all objects types in that node with minimal effort. Hooks into the Django relational database will also be provided, which will enable easy creation of querysets to be rendered. This will essentially cut down on coding entirely, as the class-based views will be dynamically created and rendered based on attributes set on the object. Since these objects can be dynamically edited via an easy to use web-based interface, anybody can create querysets or utilize any of Django's class-based views with no Python programming knowledge whatsoever.
In the end, this idea will be more targeted towards current users of competing CMS software, rather than Django programmers. However, without Django programmers to extend the objects to new heights, this idea won't really lift far beyond the objects which will be provided when I complete such an ambitious Django app. I am hoping that this idea will take off and provide a great starting point for users of existing CMS products whom are looking for something a bit more powerful. | http://pythondiary.com/blog/Nov.18,2012/been-playing-around-object-databases.html | CC-MAIN-2019-26 | refinedweb | 1,245 | 52.6 |
Thresholding
OverviewTeaching: 60 min
Exercises: 50 minQuestions
-
How can we use thresholding to produce a binary image?Objectives
-
Explain what thresholding is and how it can be used.
-
Use histograms to determine appropriate threshold values to use for the thresholding process.
-
Apply simple, fixed-level binary thresholding to an image.
-
Explain the difference between using the operator
>or the operator
<to threshold an image represented by a numpy array.
-
Describe the shape of a binary image produced by thresholding via
>or
<.
-
Explain when Otsu’s method for automatic thresholding is appropriate.
-
Apply automatic thresholding to an image using Otsu’s method.
-
Use the
np.count_nonzero()function to count the number of non-zero pixels in an image.
In this episode, we will learn how to use skimage functions to apply thresholding to an image. Thresholding is a type of image segmentation, where we change the pixels of an image to make the image easier to analyze.. We have already done some simple thresholding, in the “Manipulating pixels” section of the Skimage Images episode. In that case, we used a simple NumPy array manipulation to separate the pixels belonging to the root system of a plant from the black background. In this episode, we will learn how to use skimage functions to perform thresholding. Then, we will use the masks returned by these functions to select the parts of an image we are interested in.
Simple thresholding
Consider the image
fig/06-junk-before.jpg with a series of
crudely cut shapes set against a white background.
import numpy as np import glob import matplotlib.pyplot as plt import skimage.io import skimage.color import skimage.filters %matplotlib widget # load the image image = skimage.io.imread("../../fig/06-junk-before.png") fig, ax = plt.subplots() plt.imshow(image) plt.show()
Now suppose we want to select only the shapes from the image. In other words,
we want to leave the pixels belonging to the shapes “on,” while turning the
rest of the pixels “off,” by setting their color channel values to zeros. The
skimage library has several different methods of thresholding. We will start
with the simplest version, which involves an important step of human
input. Specifically, in this simple, fixed-level thresholding, we have to
provide a threshold value
t.
The process works like this. First, we will load the original image, convert it to grayscale, and de-noise it as in the Blurring episode.
# convert the image to grayscale gray_image = skimage.color.rgb2gray(image) # blur the image to denoise blurred_image = skimage.filters.gaussian(gray_image, sigma=1.0) fig, ax = plt.subplots() plt.imshow(blurred_image, cmap='gray') plt.show()
Next, we would like to apply the threshold
t such that pixels with
grayscale values on one side of
t will be turned “on”, while pixels
with grayscale values on the other side will be turned “off”. How
might we do that? Remember that grayscale images contain pixel values
in the range from 0 to 1, so we are looking for a threshold
t in the
closed range [0.0, 1.0]. We see in the image that the geometric shapes
are “darker” than the white background but there is also some light
gray noise on the background. One way to determine a “good” value for
t is to look at the grayscale histogram of the image and try to
identify what grayscale ranges correspond to the shapes in the image
or the background.
The histogram for the shapes image shown above can be produced as in the Creating Histograms episode.
# create a histogram of the blurred grayscale image histogram, bin_edges = np.histogram(blurred_image, bins=256, range=(0.0, 1.0)) plt.plot(bin_edges[0:-1], histogram) plt.title("Grayscale Histogram") plt.xlabel("grayscale value") plt.ylabel("pixels") plt.xlim(0, 1.0) plt.show()
Since the image has a white background, most of the pixels in the
image are white. This corresponds nicely to what we see in the
histogram: there is a peak near the value of 1.0. If we want to select
the shapes and not the background, we want to turn off the white
background pixels, while leaving the pixels for the shapes turned
on. So, we should choose a value of
t somewhere before the large
peak and turn pixels above that value “off”. Let us choose
t=0.8.
To apply the threshold
t, we can use the numpy comparison operators
to create a mask. Here, we want to turn “on” all pixels which have
values smaller than the threshold, so we use the less operator
< to
compare the
blurred_image to the threshold
t. The operator returns
a mask, that we capture in the variable
binary_mask. It has only one
channel, and each of its values is either 0 or 1. The binary mask
created by the thresholding operation can be shown with
plt.imshow.
# create a mask based on the threshold t = 0.8 binary_mask = blurred_image < t fig, ax = plt.subplots() plt.imshow(binary_mask, cmap='gray') plt.show()
You can see that the areas where the shapes were in the original area are now white, while the rest of the mask image is black.
What makes a good threshold?
As is often the case, the answer to this question is “it depends”. In the example above, we could have just switched off all the white background pixels by choosing
t=1.0, but this would leave us with some background noise in the mask image. On the other hand, if we choose too low a value for the threshold, we could lose some of the shapes that are too bright. You can experiment with the threshold by re-running the above code lines with different values for
t. In practice, it is a matter of domain knowledge and experience to interpret the peaks in the histogram so to determine an appropriate threshold. The process often involves trial and error, which is a drawback of the simple thresholding method. Below we will introduce automatic thresholding, which uses a quantitative, mathematical definition for a good threshold that allows us to determine the value of
tautomatically. It is worth noting that the principle for simple and automatic thresholding can also be used for images with pixel ranges other than [0.0, 1.0]. For example, we could perform thresholding on pixel intensity values in the range [0, 255] as we have already seen in the Image representation in skimage epsiode.
We can now apply the
binary_mask to the original colored image as we
have learned in the Drawing and Bitwise Operations episode. What we are left with is only the
colored shapes from the original.
# use the binary_mask to select the "interesting" part of the image selection = np.zeros_like(image) selection[binary_mask] = image[binary_mask] fig, ax = plt.subplots() plt.imshow(selection) plt.show()
More practice with simple thresholding (15 min)
Now, it is your turn to practice. Suppose we want to use simple thresholding to select only the colored shapes from the image
fig/06-more-junk.jpg:
First, plot the grayscale histogram as in the Creating Histogram episode and examine the distribution of grayscale values in the image. What do you think would be a good value for the threshold
t?
Solution
The histogram for the more-junk.jpg image can be shown with
image = skimage.io.imread("../../fig/06-more-junk.jpg", as_gray=True) histogram, bin_edges = np.histogram(image, bins=256, range=(0.0, 1.0)) plt.plot(bin_edges[0:-1], histogram) plt.title("Graylevel histogram") plt.xlabel("gray value") plt.ylabel("pixel count") plt.xlim(0, 1.0) plt.show()
We can see a large spike around 0.3, and a smaller spike around 0.7. The spike near 0.3 represents the darker background, so it seems like a value close to
t=0.5would be a good choice.
Next, create a mask to turn the pixels above the threshold
ton and pixels below the threshold
toff. Note that unlike the image with a white background we used above, here the peak for the background color is at a lower gray level than the shapes. Therefore, change the comparison operator less
<to greater
>to create the appropriate mask. Then apply the mask to the image and view the thresholded image. If everything works as it should, your output should show only the colored shapes on a black background.
Solution
Here are the commands to create and view the binary mask
t = 0.5 binary_mask = image < t fig, ax = plt.subplots() plt.imshow(binary_mask, cmap='gray') plt.show()
And here are the commands to apply the mask and view the thresholded image
selection = np.zeros_like(image) selection[binary_mask] = image[binary_mask] fig, ax = plt.subplots() plt.imshow(selection) plt.show()
Automatic thresholding
The downside of the simple thresholding technique is that we have to
make an educated guess about the threshold
t by inspecting the
histogram. There are also automatic thresholding methods that can
determine the threshold automatically for us. One such method is
Otsu’s method. It
is particularly useful for situations where the grayscale histogram of
an image has two peaks that correspond to background and objects of
interest.
Denoising an image before thresholding
In practice, it is often necessary to denoise the image before thresholding, which can be done with one of the methods from the Blurring episode.
Consider the image
fig/06-roots-original.jpg of a maize root system
which we have seen before in the Skimage Images episode.
image = skimage.io.imread("../../fig/06-roots-original.jpg") fig, ax = plt.subplots() plt.imshow(image) plt.show()
We use Gaussian blur with a sigma of 1.0 to denoise the root image. Let us look at the grayscale histogram of the denoised image.
# convert the image to grayscale gray_image = skimage.color.rgb2gray(image) # blur the image to denoise blurred_image = skimage.filter.gaussian(gray_image, sigma=1.0) # show the histogram of the blurred image()
The histogram has a significant peak around 0.2, and a second, smaller peak very near 1.0. Thus, this image is a good candidate for thresholding with Otsu’s method. The mathematical details of how this works are complicated (see the skimage documentation if you are interested), but the outcome is that Otsu’s method finds a threshold value between the two peaks of a grayscale histogram.
The
skimage.filters.threshold_otsu() function can be used to
determine the threshold automatically via Otsu’s method. Then numpy
comparison operators can be used to apply it as before. Here are the
Python commands to determine the threshold
t with Otsu’s method.
~~~ # perform automatic thresholding t =
skimage.filters.threshold_otsu(blurred_image) print(“Found automatic
threshold t = {}.”.format(t)) ~~~ {: .language-python}
Found automatic threshold t = 0.4172454549881862.
For this root image and a Gaussian blur with the chosen sigma of 1.0,
the computed threshold value is 0.42. No we can create a binary mask
with the comparison operator
>. As we have seen before, pixels above
the threshold value will be turned on, those below the threshold will
be turned off.
# create a binary mask with the threshold found by Otsu's method binary_mask = blurred_image > t fig, ax = plt.subplots() plt.imshow(binary_mask, cmap='gray') plt.show()
Finally, we use the mask to select the foreground:
# apply the binary mask to select the foreground selection = np.zeros_like(image) selection[binary_mask] = image[binary_mask] fig, ax = plt.subplots() plt.imshow(selection) plt.show()
Application: measuring root mass
Let us now turn to an application where we can apply thresholding and
other techniques we have learned to this point. Consider these four
maize root system images, which you can find in the files
trial-016.jpg,
trial-020.jpg,
trial-216.jpg, and
trial-293.jpg.
Suppose we are interested in the amount of plant material in each image, and in particular how that amount changes from image to image. Perhaps the images represent the growth of the plant over time, or perhaps the images show four different maize varieties at the same phase of their growth. The question we would like to answer is, “how much root mass is in each image?”
We will first construct a Python program to measure this value for a single image. Our strategy will be this:
- Read the image, converting it to grayscale as it is read. For this application we do not need the color image.
- Blur the image.
- Use Otsu’s method of thresholding to create a binary image, where the pixels that were part of the maize plant are white, and everything else is black.
- Save the binary image so it can be examined later.
- Count the white pixels in the binary image, and divide by the number of pixels in the image. This ratio will be a measure of the root mass of the plant in the image.
- Output the name of the image processed and the root mass ratio.
Our intent is to perform these steps and produce the numeric result – a measure of the root mass in the image – without human intervention. Implementing the steps within a Python function will enable us to call this function for different images.
Here is a Python function that implements this root-mass-measuring strategy. Since the function is intended to produce numeric output without human interaction, it does not display any of the images. Almost all of the commands should be familiar, and in fact, it may seem simpler than the code we have worked on thus far, because we are not displaying any of the images.
def measure_root_mass(filename, sigma=1.0): # read the original image, converting to grayscale on the fly image = skimage.io.imread(fname=filename, as_gray=True) # blur before thresholding blurred_image = skimage.filters.gaussian(image, sigma=sigma) #
The function begins with reading the orignal image from the file
filename. We use
skimage.io.imread with the optional argument
as_gray=True to automatically convert it to grayscale. Next, the
grayscale image is blurred with a Gaussian filter with the value of
sigma that is passed to the function. Then we determine the
threshold
t with Otsu’s method and create a binary mask just as we
did in the previous section. Up to this point, everything should be
familiar.
The final part of the function determines the root mass ratio in the
image. Recall that in the
binary_mask, every pixel has either a
value of zero (black/background) or one (white/foreground). We want to
count the number of white pixels, which can be accomplished with a
call to the numpy function
np.count_nonzero. Then we determine the
width and height of the image by using the the elements of
binary_mask.shape (that is, the dimensions of the numpy array that
stores the image). Finally, the density ratio is calculated by
dividing the number of white pixels by the total number of pixels
w*h in the image. The function returns then root density of the
image.
We can call this function with any filename and provide a sigma value for the blurring. If no sigma value is provided, the default value 1.0 will be used. For example, for the file trial-016.jpg and a sigma value of 1.5, we would call the function like this:
measure_root_mass("trial-016.jpg", sigma=1.5)
0.0482436835106383`
Now we can use the function to process the series of four images shown above. In a real-world scientific situation, there might be dozens, hundreds, or even thousands of images to process. To save us the tedium of calling the function for each image by hand, we can write a loop that processes all files automatically. The following code block assumes that the files are located in the same directory and the filenames all start with the trial- prefix and end with the .jpg suffix.
all_files = glob.glob("trial-*.jpg") for filename in all_files: density = measure_root_mass(filename, sigma=1.5) # output in format suitable for .csv print(filename, density, sep=",")
trial-016.jpg,0.0482436835106383 trial-020.jpg,0.06346941489361702 trial-216.jpg,0.14073969414893617 trial-293.jpg,0.13607895611702128
Ignoring more of the images – brainstorming (10 min)
Let us take a closer look at the binary masks produced by the
measure_root_massfunction.
You may have noticed in the section on automatic thresholding that the thresholded image does include regions of the image aside of the plant root: the numbered labels and the white circles in each image are preserved during the thresholding, because their grayscale values are above the threshold. Therefore, our calculated root mass ratios include the white pixels of the label and white circle that are not part of the plant root. Those extra pixels affect how accurate the root mass calculation is!
How might we remove the labels and circles before calculating the ratio, so that our results are more accurate? Think about some options given what we have learned so far.
Solution
One approach we might take is to try to completely mask out a region from each image, particularly, the area containing the white circle and the numbered label. If we had coordinates for a rectangular area on the image that contained the circle and the label, we could mask the area out easily by using techniques we learned in the Drawing and Bitwise Operations episode.
However, a closer inspection of the binary images raises some issues with that approach. Since the roots are not always constrained to a certain area in the image, and since the circles and labels are in different locations each time, we would have difficulties coming up with a single rectangle that would work for every image. We could create a different masking rectangle for each image, but that is not a practicable approach if we have hundreds or thousands of images to process.
Another approach we could take is to apply two thresholding steps to the image. Look at the graylevel histogram of the file
trial-016.jpgshown above again: Notice the peak near 1.0? Recall that a grayscale value of 1.0 corresponds to white pixels: the peak corresponds to the white label and circle. So, we could use simple binary thresholding to mask the white circle and label from the image, and then we could use Otsu’s method to select the pixels in the plant portion of the image.
Note that most of this extra work in processing the image could have been avoided during the experimental design stage, with some careful consideration of how the resulting images would be used. For example, all of the following measures could have made the images easier to process, by helping us predict and/or detect where the label is in the image and subsequently mask it from further processing:
- Using labels with a consistent size and shape
- Placing all the labels in the same position, relative to the sample
- Using a non-white label, with non-black writing
Ignoring more of the images – implementation (30 min - optional, not included in timing)
Implement an enhanced version of the function
measure_root_massthat applies simple binary thresholding to remove the white circle and label from the image before applying Otsu’s method.
Solution
We can apply a simple binary thresholding with a threshold
t=0.95to remove the label and circle from the image. We use the binary mask to set the pixels in the blurred image to zero (black).
def enhanced_root_mass(filename, sigma): # read the original image, converting to grayscale on the fly image = skimage.io.imread(fname=filename, as_gray=True) # blur before thresholding blurred_image = skimage.filters.gaussian(image, sigma=sigma) # perform inverse binary thresholding to mask the white label and circle binary_mask = blurred_image > 0.95 # use the mask to remove the circle and label from the blurred image blurred_image[binary_mask] = 0 # all_files = glob.glob("trial-*.jpg") for filename in all_files: density = enhanced_root_mass(filename, sigma=1.5) # output in format suitable for .csv print(filename, density, sep=",")
The output of the improved program does illustrate that the white circles and labels were skewing our root mass ratios:
trial-016.jpg,0.045935837765957444 trial-020.jpg,0.058800033244680854 trial-216.jpg,0.13705003324468085 trial-293.jpg,0.13164461436170213
Here are the binary images produced by the additional thresholding. Note that we have not completely removed the offending white pixels. Outlines still remain. However, we have reduced the number of extraneous pixels, which should make the output more accurate.
Thresholding a bacteria colony image (15 min)
In the images directory
fig/, you will find an image named
colonies01.png.
This is one of the images you will be working with in the morphometric challenge at the end of the workshop.
- Plot and inspect the grayscale histogram of the image to determine a good threshold value for the image.
- Create a binary mask that leaves the pixels in the bacteria colonies “on” while turning the rest of the pixels in the image “off”.
Solution
Here is the code to create the grayscale histogram:
image = skimage.io.imread("../../fig/colonies01.png") gray_image = skimage.color.rgb2gray(image) blurred_image = skimage.filters.gaussian(gray_image, sigma=1.0)()
The peak near one corresponds to the white image background, and the broader peak around 0.5 corresponds to the yellow/brown culture medium in the dish. The small peak near zero is what we are after: the dark bacteria colonies. A reasonable choice thus might be to leave pixels below
t=0.2on.
Here is the code to create and show the binarized image using the
<operator with a threshold
t=0.2:
t = 0.2 binary_mask = blurred_image < t fig, ax = plt.subplots() plt.imshow(binary_mask, cmap='gray') plt.show()
When you experiment with the threshold a bit, you can see that in particular the size of the bacteria colony near the edge of the dish in the top right is affected by the choice of the threshold.
Key Points
-
Thresholding produces a binary image, where all pixels with intensities above (or below) a threshold value are turned on, while all other pixels are turned off.
-
The binary images produced by thresholding are held in two-dimensional NumPy arrays, since they have only one color value channel. They are boolean, hence they contain the values 0 (off) and 1 (on).
-
Thresholding can be used to create masks that select only the interesting parts of an image, or as the first step before Edge Detection or finding Contours. | https://datacarpentry.org/image-processing/07-thresholding/index.html | CC-MAIN-2021-43 | refinedweb | 3,729 | 56.86 |
#include <HeightField.h>
Defines height data used to store values representing elevation.
Heightfields can be used to construct both Terrain objects as well as PhysicsCollisionShape heightfield defintions, which are used in heightfield rigid body creation. Heightfields can be populated manually, or loaded from images and RAW files.
Creates a new HeightField of the given dimensions, with uninitialized height data.
Creates a HeightField from the specified heightfield image.
The specified image path must refer to a valid heightfield image. Supported images are the same as those supported by the Image class (i.e. PNG).
The minHeight and maxHeight parameters provides a mapping from heightfield pixel intensity to height values. The minHeight parameter is mapped to zero intensity pixel, while maxHeight maxHeight is mapped to full intensity pixels.
Creates a HeightField from the specified RAW8 or RAW16 file.
RAW files are header-less files containing intensity values, either in 8-bit (RAW8) or 16-bit (RAW16) format. RAW16 files must have little endian (PC) byte ordering. Since RAW files have no header, you must specify the dimensions of the data in the file. This method automatically determines (based on file size) whether the input file is RAW8 or RAW16. RAW files must have a .raw or .r16 file extension.
RAW files are commonly used in software that produces heightmap images. Using RAW16 is preferred or any 8-bit heightfield source since it allows greater precision, resulting in smoother height transitions.
The minHeight and maxHeight parameters provides a mapping from heightfield pixel intensity to height values. The minHeight parameter is mapped to zero intensity pixel, while maxHeight maxHeight is mapped to full intensity pixels.
Returns a pointer to the underlying height array.
The array is packed in row major order, meaning that the data is aligned in rows, from top left to bottom right.
Returns the number of columns in the heightfield.
Returns the height at the specified row and column.
The specified row and column are specified as floating point numbers so that values between points can be specified. In this case, a height value is calculated that is interpolated between neighboring height values.
If the specified point lies outside the heightfield, it is clamped to the boundary of the heightfield.
Returns the number of rows in the heightfield. | http://gameplay3d.github.io/GamePlay/api/classgameplay_1_1_height_field.html | CC-MAIN-2017-17 | refinedweb | 377 | 57.98 |
Forms and Dialogs in a Windows DLL with C Sharp
This article covers the programming of Windows forms and dialogs so that they are located in a Windows DLL (dynamic link library). This can help with an apps design, maintenance, and testing, and when adding new features, by breaking up a large project into more manageable chunks. This tutorial assumes you can create and start a WinForms project in Visual Studio, if not see the article Hello World in C#, A Starting WinForms Example
Introduction to Windows Forms in a DLL
When it comes to programming an app, try and keep the apps functions contained into small separate units, it'll help with future development and help to avoid spaghetti code. For Windows apps there is the main executable, or .exe, that starts the program. Then, especially for large programs with lots of functionality, there are library files that contain other code the main .exe can use. A library file is .dll file, dll for dynamic link library. Lots of developers add Windows forms (WinForms) to the .exe for the user interface. However, WinForms can be added to dlls to help break up a bigger program into more manageable chunks.
Start with a Test Program and Class Library
For this tutorial a Windows .exe is going to load a WinForm from a DLL. The WinForm is located in a Class Library project, which compiles into a DLL. In Visual Studio create a test app for the .exe, here it is called DLLFormTest, then add a new Class Library project called DLLForm to the new solution (using Add and New Project).
Put a WinForm in the DLL and Add References
Next, use the context menu on the .dll project (here called DLLForm) to add a Windows form, using Add then New Item (or add a new form via the Project menu). Rename the form, e.g. to MyForm from Form1, so that you do not get confused with the form in the .exe app, also called Form1. Accept the Visual Studio option to rename all references to the renamed form. Change the new renamed DLL form's text property as well, e.g. from Form1 to MyForm. Delete the Class1 file from the DLLForm project, it won't be used. In the DLLFormTest project add a refernce to the DLLForm Class Library.
Load the DLL Form from the EXE
Drop a button onto the main EXE's form (in this tutorial Form1 in the DLLFormTest project). Reference the form in the DLL, firstly with using DLLForm, and code to load the DLL form from a button. The code will look similar to this:
using System; using System.Windows.Forms; using DLLForm; namespace DLLFormTest { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void button1_Click(object sender, EventArgs e) { MyForm myForm = new MyForm(); myForm.Show(); } } }
Run the code and press the button to load the form from the DLL. When running it will look something like that shown at the beginning of this article. If there are errors check that the form renaming was correct, e.g. the MyForm constructor is correct:
using System.Windows.Forms; namespace DLLForm { public partial class MyForm : Form { public MyForm() { InitializeComponent(); } } }
In the binary output directories for the solution, under the bin folders there will be DLLFormTest.exe and DLLForm.dll, with MyForm in the .dll file.
Showing a DLL WinForm Modally
WinForms can be displayed so that no other form in the app can be used until the new form is closed. The MessageBox is often used by developers to ask the user for a response before carrying on. The enumeration (enum) called DialogResult is returned by a MessageBox. DialogResult is used to test for the MessageBox return value:
private void button2_Click(object sender, EventArgs e) { if (MessageBox.Show("Exit app?", "Exiting", MessageBoxButtons.YesNo) == DialogResult.Yes) Application.Exit(); }
This type of form is called a Modal form, or sometimes a dialog box. To do the same with a normal WinForm use the ShowDialog() method:
using System; using System.Windows.Forms; using DLLForm; namespace DLLFormTest { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void button1_Click(object sender, EventArgs e) { Form myForm = new MyForm(); myForm.ShowDialog(); //Show modally } } }
Returning DialogResult from a WinForm
A normal WinForm that is shown modally can also return DialogResult values. To do it add a button to the form and set the button's DialogResult property.
Try it with the MyForm example. Add two buttons, one with the text Yes and with the DialogResult property set to Yes, and one with the text No and with the DialogResult property set to No. Support for DialogResult allows for fancy custom dialog boxes to be built and stored in a separate dll.
private void button1_Click(object sender, EventArgs e) { MyForm myForm = new MyForm(); if( myForm.ShowDialog() == DialogResult.Yes ) //Show modally Application.Exit(); }
Default Escape and Return Actions
WinForms support default button actions on presses of the return/enter or escape key. For MyForm set the AcceptButton property to the name of the Yes button. For the CancelButton property of MyForm set it to the name of the No button. (Because the DialogResult value for the buttons have been set they override the default values which are DialogResult.OK for a button assigned to the AcceptButton property, and DialogResult.Cancel for a button assigned to the CancelButton property.)
Setting and Returning Values from a WinForm
For data that is more complex than a DialogResult add extra properties to the DLL form. This allows data to be passed to and from the DLL form. Here a FirstNameField is being used to read and write from a TextBox:
using System.Windows.Forms; namespace DLLForm { public partial class MyForm : Form { public MyForm() { InitializeComponent(); } //Read and update a TextBox for first name public string FirstNameField { get { return textBox1.Text; } set { textBox1.Text = value; } } } }
The property is then used to access the forms data fields:
private void button1_Click(object sender, EventArgs e) { MyForm myForm = new MyForm(); myForm.ShowDialog(); label1.Text = "The first name is " + myForm.FirstNameField; }
However, if not using the form modally another event is required to control reading of the data from the form. Here button1 creates the form and button2 reads the data. Notice how the form variable is now at the module level so it can be used by both buttons. There is also extra logic to see if the form is loaded:
using System; using System.Windows.Forms; using DLLForm; namespace DLLFormTest { public partial class Form1 : Form { public Form1() { InitializeComponent(); } MyForm myForm; //Keep a module reference to a form private void button1_Click(object sender, EventArgs e) { //See if we need to create the form if (myForm == null || myForm.IsDisposed) { myForm = new MyForm(); myForm.Show(); } } private void button2_Click(object sender, EventArgs e) { //Check for form available if (myForm != null && !myForm.IsDisposed) label1.Text = "The first name is " + myForm.FirstNameField; } } }
Form management can become an issue in large programs, though the Application.OpenForms property and FormCollection class can help. If passing lots of data to and from a form use an object or structure that is common to both forms. Pass a reference to that common object or structure to the form being loaded, then use it to write and read from the forms fields.
See Also
Author:Daniel S. Fowler Published: | https://tekeye.uk/visual_studio/forms-and-dialogs-in-a-windows-dll | CC-MAIN-2019-13 | refinedweb | 1,215 | 64.81 |
.
These days you’d probably use function comnponents in React, and use a hook to fetch the data, which is like the modern way to handle a “side effect” of a component, which is a modern improvement over lifecycle methods. So this article is old in that sense. Conceptually it still makes sense though. I’ll update little bits of it with more modern stuff where I can.
State, State, StateState, dataGrabbing that data
In the case of comments, you likely have that data in your own app.
But data doesn’t have to come from yourself, data can come from anywhere. External data, if you will. Like from an API.
AjaxAjExample Data
Let’s use some simple, publicly available JSON:
It’s simply a chunk of data that describes all the job postings on the CodePen Job Board.
Requesting the DataRequesting the Data
In Axios, requesting that data is like:
axios .get("") .then(function(result) { // we got it! });
With a React Hook, we’d it like this…
import React, { useState, useEffect } from 'react'; import axios from 'axios'; function App() { const [data, setData] = useState({ jobs: [] }); useEffect(async () => { const result = await axios( '', ); setData(result.data); }); return ( <p>Loop over jobs data here and show results.</p> ); } export default App;
Setting the State with External DataSetting DataTempl Designer <} {job.company_name} is looking for a {job.term} {job.title} </div> ); })} </div> ) } });
DemoDemo
With all that together, we got ourselves a nice little React component that renders itself with external data!
See the Pen
Load External Data in React by Chris Coyier (@chriscoyier)
on CodePen.
This dovetails into all the React stuff swirling in my head, like our recent series, recent video, and recent discussion on headless CMSs. | https://css-tricks.com/loading-using-external-data-react/ | CC-MAIN-2022-05 | refinedweb | 284 | 56.35 |
You probably already know classical features such as step-by-step execution (
Step Into or
Step Over to enter or not in functions) with breakpoints (conditions, count, disabling etc),
Watch window, etc.
But there are also a lot of other very useful features for debugging.
Goto definitionwith a right-click on a type or function name
Call Browser | Show callers graphon a function name, to see all places of the project from where the function is called
Find all referenceson a symbol name, to get all places where it is used
You can debug a DLL (with step-by-step execution, breakpoints and all the usual stuff) when it is launched by any program.
In
Project Properties | Configuration Properties [ Debugging | Command you just have to put the path of the executable which will use your DLL.
Then, just put a breakpoint in your DLL source code,
Start Debugging (F5) : it will launch the external program, and when your DLL will be used it will stop on the breakpoint.
Remark : the DLL that the executable use must be the same that the one generated by Visual Studio (you can't move or copy it). But it is simple to configure Visual Studio in order to generate the DLL in the path you want.
If your program is used by another program and you want debug it in this context, you can.
With Visual Studio you can attach the debugger to a running process. Set
Project Properties | Configuration Properties | Debugging | Attach to yes, and be sure that the
Command field still refers to your executable.
Then, just put a breakpoint in your DLL source code, launch your program in any way you want, and after attach the debugger with
Start Debugging (F5) : when your program will be used it will stop on the breakpoint.
In non-managed C++, Visual Studio uses special libraries in Debug mode, which add a lot of verifications when allocating or freeing memory, in order to detect some problems in your code. But unhappily it won't detect all errors, in particular access to non allocated memory areas.
There exist some tools which can do that, such as Purify for example, but I don't know any free tool for Windows (for Linux there is the excellent Valgrind).
The problem with these errors is that they are not deterministic, can occur anywhere anytime with no connection with the location of your error in the code.
An artisanal way to try to deal with that can be to allocate huge buffers of memory that you don't touch, but you compute checksums of theses buffers all along your code. If your code is wrong and write data outside allocated areas, it can modify these buffers and you will see it. Of course you're not sure at all it will detect your errors, but there is a chance.
There is a tool which automates this procedure with other stuff, called Memwatch, but this is still not foolproof.
When using threads (
System::Threading) or something else you get errors like
Ambiguous symbol for the
IServiceProvider class in the
servprov.h file that of course you never touched.
Just try to move the
#include <windows.h> line BEFORE the
System::Threading line for example (and not after or not at all). | http://crteknologies.fr/wiki/programming:visual-studio | CC-MAIN-2017-09 | refinedweb | 550 | 64.75 |
Created 03-04-2016 03:36 PM
Ambari agent in HDP 2.2.0 seems to be taking about 22Gigs after running for a while at a customer site.
Is there a way to reduce or limit its memory usage
Created 03-04-2016 03:42 PM
Please see this thread...
and look into the ambari agent logs
This is a known Ambari bug:
This is resolved in Ambari Agent 2.4.0
As a work around you can modify the main.py
1. Stop Ambari agent
2. Backup file
mv /usr/lib/python2.6/site-packages/ambari_agent/main.py /tmp/main.py.backup
3. Edit main.py under this path : /usr/lib/python2.6/site-packages/ambari_agent/
Add the following at the beginning of the file:
def fix_subprocess_racecondition():
""" Subprocess in Python has race condition with enabling/disabling gc. Which may lead to turning off python garbage collector. This leads to a memory leak. This function monkey patches subprocess to fix the issue. !!! PLEASE NOTE THIS SHOULD BE CALLED BEFORE ANY OTHER INITIALIZATION was done to avoid already created links to subprocess or subprocess.gc or gc """
# monkey patching subprocess
import subprocess subprocess.gc.isenabled = lambda: True
# re-importing gc to have correct isenabled for non-subprocess contexts
import sys del sys.modules['gc']
import gc fix_subprocess_racecondition()
4. Start Ambari agent
Please do this on one host and monitor the memory usage. If memory usage looks OK, then replace main.py on all hosts. | https://community.cloudera.com/t5/Support-Questions/Ambari-agent-memory-leak-or-taking-too-much-memory/m-p/111339 | CC-MAIN-2020-45 | refinedweb | 244 | 60.51 |
GoingNative 3: The C++/CX Episode with Marian Luparu
- Posted: Oct 26, 2011 at 10:24 AM
- 100,178 Views
- 281 Comments
Something went wrong getting user information from Channel 9
Something went wrong getting user information from MSDN
Something went wrong getting the Visual Studio Achievements
Right click “Save as…”
This is the C++/CX episode - everything you ever wanted to know, but were afraid to ask...
C++/CX language design team member Marian Luparu sits in the hot seat to answer some questions (a few from the GoingNative community - thank you!), draw on the whiteboard and demo some code. It's all fear the hat., thanks for the shout out!
It would be very cool if VisualStudio could analyse C++/CX code for circular reference counts. Apple's Instruments tools for XCode does this for Objective C.
The audio and video of High Quality MP4 format not sync, delay a little bit.
Another great talk.
Great show.
It was good to hear how things work and see it on the whiteboard; CX makes a lot more sense now.
@Marian Luparu
Using these language extensions makes the source code locked to microsoft's compiler.
So using gcc with superior c++11 support can't be used.
Why did you not make a COM framework instead ?
Many companies frown upon lock-in languages extensions and stays away from them.
If i understand C++/CX correctly, you must it to get access to metro stuff, so companies are forced to use the lock-in language extensions..
Did you not consider the lock-in effect or was this planed ?
@Mr Crash: These C++/CX languages extensions are meant to be used only at the boundary; inside your C++ modules you can happily use ISO C++.
And if for some reasons you don't want to use language extensions (or can't/don't want to use C++ exceptions), you can use pure C++ with WRL. And COM can be programmed in pure C as well (write your own v-tables... woow
it can be a great learning experience, you may want to start from here.
You didn't call delete at the end of the presentation.
@reply to Jedrek: We didn't need to -> we ref new'd and used hats, man
This is the C++/CX epidode, so we get automatic ref counting and smart pointers...
C^
@reply to C64: Precisely!
C
Great video guys, I hope to see more C++/CX videos because I'm still trying to get sold on C++/CX and hearing views of others about C++/CX, both for and against, helps evolve my opinions on it. Currently, I'm in the against camp, but please don't take my comments below as a negative relfection of the video - the video was great.
I struggle to be sold on C++/CX. It seems others are too. My issues are not about the technical merit of C++/CX. It seems a great piece of technical work.
However, I feel negative towards it on other grounds. Here's my reasons:
If I constantly kicked you in the leg, would you thank me for selling you a new bandage every day? C++/CX appears to be yet another bandage I "need", but only because MS keeps kicking me in the leg. In that context, the best solution I really need, is for MS to stop kicking me, not to keep selling me bandages.
Why do I say that? Well, once upon a time:
I had Win32, a wildly successful API, but not the most friendly because it was C, and not C++. It had no destructors, so could easily leak.
Then I had MFC, it was better, but being just a wrapper for the C Win32 API, it wasn't modern C++ or API design.
Then we had COM. Object based, but hardly useable and hardly C++, just IDL and a vtable. In reality, it was even less friendly than C.
Then we had .NET, good try, but it was slow and hardly useable from C++. It basically gave C# all that C++ ever wanted, but it took away all that C++ ever had. I needed .NET only because COM was so bad and C#/.NET got rid of COM for me but gave me a resource hog.
Then we had C++/CLI. which gave me C# within C++, but I didn't want to go the slow world of .NET where I only went because C++ had been neglected in its tool support. I also didn't like having two concepts of class to keep in synch.
Now we have C++/CX, which gives me COM within C++, so I can escape C# and .NET, and so C# can do the same! But only to be trapped by WinRT and Metro instead, and it's still COM and not straight C++! Now I have 3 concepts of class.
Hmmmm
Where in that great mass of development is a simple ISO C++ API? I don't see one! Now I realise I never had one!! Where in this does MS deserve my thanks!? (and that's without even mentioning the C++11 language features that we know are in the post. oops.).
If you look at all that mass of MS development, it does very little for a C++ developer other than just create more and more interface churn and take things away only to replace them with something else with problems. MS are masters of this.
These technologies don't add sufficient value to straight C++ development apart from making C++ code available to other languages that are unable to stand on their own and causing C++ to become infected with their features that C++ thus far chose to avoid for itself, like GC or ref counting. Which is why they aren't in the language yet already. If they were, I'd have less objections, but that's just one more thing until then.
The most appealing API produced so far has been PPL and C++11 makes that even better.
But the bottom line is:
* I don't want to use C. It has no destructors.
* I don't want to use COM. It is too hard.
* I don't want to use C# (it's not bad), but I don't need GC, and C++ templates are handy, deterministic destructors are great, and r/value move operations are brilliant!
* I don't want to use C++/CLI, because .NET is slow and a memory hog. .NET can come to me, I don't want to go to it.
* I don't (yet) want to use C++/CX because I still don't like COM, or want another notion of component beyond C++ class. Nor do I feel sure about WinRT or what Metro licensing will be or want the walled garden that any of it might mean.
I also hear people keeping talking about "borders" like its no big deal, like C64.
C++/CX is a component technology. That's its name: Component eXtensions. It doesn't mean you just call some C++/CX api like Metro and wrap it with ISO C++.
It means you cut up your own applications too, into little WinRT components so those can be exposed. Essentially, you don't just wrap someone elses code, you will be breaking up and wrapping your own code too!!
Don't think borders where you wrap big boxes. Think borders where you wrap jigsaw puzzle pieces. Not fun!!
C++ *already* has a notion of the Component, it's called the class. If you want to create a second notion of class, go for C++/CX. If you want a third, do C++/CLI. It will mean you create a one to one mapping/wrapping of your C++ class to a C++/CX or C++/CLI class for every component you use or expose.
Is that what you want? So far, until I am informed otherwise, I don't want that. I want to write classes, not wrappers.
Until I hear some opinions that change my viewpoint, which is what I am soliciting for, I don't want C++/CX. I want ISO C++. not "some other standard" that nobody else will adopt. I haven't seen C++/CLI fly far beyond Windows and I doubt C++/CX will either as things stand.
C++/CX (why use WRL?) is a great technical solution, don't get me wrong, but right now, it's solutions to Microsoft problems, not mine. I don't want to spend my life wrapping up C++ for everybody else. I don't have to wrap anything if I use C#. It's 100% transparent. That's how C++ should be used.
Bottom line: We need to stop the interface churn and redefinitions and make the tools generate the C++ -> C# AND C# -> C++ wrappers or whatever it takes, so that we aren't creating layers and layers of interface redefinitions by hand. It needs to be transparent so that I only write only ONE class definition - a C++ standard one - and the tools perform the magic to make that usable to every other languaage and vice versa. I don't want to have to use a ^ or a % to do it or require GC - unless it's in the C++ standard.
That's my starting point. I ask people to try hard to convince me otherwise because this stuff has to be hardened by fire to proove its worth. I'm not looking to cause offense to anyone. Thanks.
@Glen: "C++ *already* has a notion of the Component, it's called the class."
That's wrong. A C++ class is not a "Component": you can safely use a C++ class in a very constrained way.
e.g. If you build a C++ DLL and export C++ classes from it, and if you throw C++ exceptions from the DLL, for the client to safely use these "C++ classes components" the client itself must be written with the same C++ compiler and the same flavor of the CRT/STL.
For example: if you have STL classes at the DLL interface boundary, and the DLL is built with VC9 and CRT dynamically linked and with some settings of _HAS_ITERATOR_DEBUGGING (or the more modern _ITERATOR_DEBUG_LEVEL for VC10) and the client is developed with the same VC9 compiler but different _HID, then things got broken, because the memory layouts of the STL classes in the .EXE and the .DLL are different: so the DLL's "std::vector" is different from the EXE's "std::vector". What a fragile mechanism! And we are speaking of DLL and EXE built with the same C++ compiler... imagine with different compiler versions!
(And with C++ new/new[] and delete/delete[] there are also problems in allocating and freeing memory across module boundaries; and even if you don't explicitly use new/new[]/delete/delete[] in your C++ code, they can be called indirectly in the implementation of other classes like STL containers, std::wstring or smart pointers at the DLL interface boundary.)
A real Component should be much more robust than that. In fact, this robustness is achieved with COM: you can build COM components with VC<version X> and consume them with VC<version Y> safely (e.g. you can safely use legacy COM components written with VC6 or VC8 inside a client code developed using VS2010). Or you can even build COM components with language L1 (e.g. C) and consume them with language L2 (e.g. Visual Basic). That is what a Component is.
As a bonus very interesting reading, consider this:
If the shell is written in C++, why not just export its base classes?
In a C++ project, I can happily write all my classes in ISO C++, I can happily use STL classes inside, and then, only at the boundary, I can use COM to offer to my clients a robust Component-based interface. If we want to build some Components using C++, and we have a team of N C++ programmers, only one programmer needs to know COM, to build the COM-based Components interface; all the remaining (N-1) C++ programmers just need to know ISO C++, and can ignore COM.
To build COM components in C++, the best way right now was using ATL.
Then Microsoft evolved the COM component technology introducing WinRT, and then they offered two ways to build those components (and to consume them as well): WRL and C++/CX.
I find nothing wrong with that: just use the right tool for the job.
Inside my C++ projects I just use normal C++ classes, e.g. to store strings I use std::wstring (ISO C++) and not Platform::String^ (C++/CX). I'd use Platform::String only at the boundary. Using Platform::String^ all over the place in C++ projects instead of std::wstring would be just plain wrong: C++/CX's Platform::String is only for the boundary.
PS: As a side note, it would be great if Microsoft could offer an implementation of C++/CX to target also ordinary COM Windows 7 desktop development (instead of Win8 WinRT Metro apps only), as a more productive and simpler tool than C++ with ATL.
)
Making a component in RAD was pleasant and this is the why I'm excited with CX
The right tool for the job.
@Glen -- good analysis. I think the real reason for frustration C++ developers feel is the fact the Microsoft keeps emasculating C++ in order to satisfy all other food group languages (with C# at the top). It's kind of schizofrenic approach because C++ is the most powerful, the most elegant, the most productive language, without which all those food languages couldn't even exist. It used to be the other way around, but unfortunately the food group became prevalent in the last decade with its pointer semantics and garbage collection, which are largely at odds with good C++ practices targeted at value semantics and generic design. They say it’s only about a border. I like that “only”, as if it were a small stuff. But I say, why should C++ be stripped naked to cross the border? – Let the food group go through the contortions, let C++ march in its glorious garbs. I really don’t understand “going native” campaign, because there is nothing “native” about it. Until MS understands the place of C++ and removes those food group fetters, Windows will continue to be a hostile development environment for C++ development.
@new2STL: I used Borland/Codegear/Embarcadero C++ Builder. Their VCL library is absolutely nothing like MS extensions. Yes, there are limitations what you can do with VCL classes, and yes, there are “property” and “closure” extensions, but otherwise your code is fully C++, you hardly notice the extensions. VCL is actually a good example how to marry C++ with food languages with the least damage to the former. Had MS came up with a similar C++ library, it would have been a great success. Unfortunately, at present we are left to decide which is the lesser abomination C++/CX or C++/CLI or COM or WRL.
'
@new2STL, I think there are several wrong assumptions made by MS in every iteration of their "managed" C++, and for that reason I don't think C++/CX will have higher level of satisfaction or live any longer than its predecessors.
Firstly, that C++ developers are interested in interoperability with other languages, whether exporting to or importing components from other languages. Some might, great majority -- couldn't care less. Secondly, that C++ developers absolutely need stable ABI. It's one of those nice to have, but not really necessary features. Given the choice of standard C++ without ABI and emasculating C++ with ABI, I strongly believe the majority would choose the first. I never complained because I recompiled and shipped all my DLLs with the applications when compiler or libraries have changed (I even export functions with C interface when necessary), it's a small price to pay to stay with the standard. But I'm disappointed with every generation of emasculated C++ from MS that "solve" ABI problem. Thirdly, I don't really like the "branchy cranberry" class hierarchies, especially single-rooted. It stinks when everything grows from some "Object" class, it stinks to see all those pointer conversions. I don't want object-oriented design in Java or C# sense, I don't want inheritance and pointer semantics. I want a modern event-driven C++ design with no gratuitous hierarchies, with value semantics, with std::function for event handlers, with standard containers, standard strings, etc. etc. And if somebody wants to add extensions to the language, they are welcome to suggest them to the C++ standardization committee, then they will have the burden to prove they are useful for the language at large.
@Glen: thank you, for expressing this so mcuh more clearly than I could have. I feel very much the same way about C++/CX. I'm still trying to make up my mind, and I'm not yet quite convinced that it was the right way to go.
@C64 (and more generally), I'm having a hard time imagining how this "C++/CX is only at the boundary" thing is going to work out in practice. I suspect that it's going to "bleed" into your general code base. Why wouldn't it?
In theory, windows.h should only be used at the boundary too. How many applications managed to do *that*? How many Windows applications do *not* have DWORDs and HANDLEs all over their code?
If it is reasonably *possible* to limit C++/CX to the boundary code only, then that's great, and as it should be. But I suspect that it's a noble intention and a great thing to say to calm people's fears, but that in practice, it aint gonna happen.
So Microsoft, I'd love to see some examples of this. Has anyone who worked on the language actually tried writing applications where C++/CX was only used at the "boundary" interop layer? Or did you just write your code with C++/CX all over the place because that was the most convenient? What were your actual experiences with this?
In particular, the need to use special WinRT types instead of the ones that already exist seems worrying. As far as I know (I haven't looked closely at this), the WinRT String class gets mapped to .NET's String under the hood for .NET users. But C++ users have to use a new, proprietary string class throughout their application (or convert strings between two classes all over the place). Won't the easy way out be to use WinRT's string class all over, then? And when you do that, you've effectively turned you *entire* code base into C++/CX, rather than keeping it at the boundary only. Couldn't C++/CX have made a similar conversion implicitly (at least optionally)? When a WinRT component expects a string parameter, let me pass a `std::wstring`, and do the conversion internally? C++/CX seems to sit in an odd place where, on one hand, there's no clear separation between it and ISO C++, and on the other hand, it doesn't go *all* the way towards playing nice with C++, instead forcing the ISO C++ code to bend and submit.
It seems like there's a huge disconnect in the whole "going native"/"C++ renaissance" thing. Microsoft thinks that we simply want native code (code that does not rely on the CLR), whereas what what their users are calling out for is primarily ISO C++. I've heard the "but C++/CX is all native, don't worry, there's no .NET in it" response from a lot of Microsofters, which just completely misses the point. I don't care if it's native or not, I care if it's C++. And so from Microsoft's point of view, C++/CX was a great solution because "hey, we're letting people write native code now", and when their customers see it, their immediate reaction is "what the hell, *another* proprietary language? Why do you have C++ so much?"
I am also non-plussed at the idea of using yet another language extension.
C64, I take it that you have no problems with C++/CX:
"Then Microsoft evolved the COM component technology introducing WinRT, and then they offered two ways to build those components (and to consume them as well): WRL and C++/CX. I find nothing wrong with that: just use the right tool for the job."
I have a number of problems with C++/CX in particular, but in this post I will focus on just one point. Why, in the case of Microsoft, the right tool for the job is almost always another language extension? Why do they always have to butcher the language, the standards, the tools and the infrastructure built around these standards? Why??
In the past 8 years, this is a 3rd extension to C++ that is being pushed on us:
The first extension, managed extensions for C++ has already been declared dead and is no longer supported. At its inception, of course, that extension has been touted as the best thing since sliced bread. Of course, it wasn't all that great, thus it failed and has been buried.
The second extension, C++/CLI is still with us, but has been largely neglected by Microsoft right after being introduced. C++/CLI is definitely a second-class citizen in the world of Windows development. First, of course, it goes without saying that nobody besides Microsoft supports it, thus you are locked to a single compiler and a single IDE. Then, debugging mixed-mode projects written in C++/CLI is terribly, unbearably slow. Developing in C++/CLI leaves you without Intellisense. Heck, you can't even do multi-targeting of .NET versions. Think about it, C++/CLI is supposed to be THE language for interop between native and managed worlds, the ability to target different versions of .NET is pretty fundamental for that, yet C++/CLI can't do it *while C# and everyone else can*!
Now we have the third extension, C++/CX. Sigh... Microsoft never learns.
Does anyone seriously believe C++/CX will still be with us and Microsoft will continue to work on it in, say, 5 years? Does anyone believe Microsoft will not abandon or freeze it in favor of yet another flavor-of-the-month language extension / "cool" development thingy in that time? Contrast that with C++11, which you know will be there, supported and used.
C++/CX is an unnecessary, temporary thing. I wish Microsoft didn't do it and instead worked on improving WRL or some other C++ library so that using WinRT from ISO C++ would be as easy as doing that from other languages. To bring the point once put forward on another thread by another poster here, we were promised a fluid and fast way to use WinRT from a language of our choice. A language of my choice is ISO C++. What is the fluid and fast way to use WinRT for me? Neither raw COM nor WRL qualifies as fast and fluid. C++/CX is a different language. The answer is: there is no fluid and fast way to use WinRT for an ISO C++ developer. And this says it all.
@PFYB: Thanks for expressing your opinions. Keep them coming.
Did you watch the interview with Marian? We discuss:
1) The reasoning behind going the extension route
2) What the extensions are and why they are what they are
3) When to use C++/CX
4) The meaning of "just use it at the boundary" and how thin this boundary can be
5) The fact that you don't have to use C++/CX at all...
C
I honestly don't mind Microsoft making C++/CX, in the general sense. Domain-specific language extensions are fine, and using them within those domains is fine.
My problem is that Microsoft spent precious compiler developer time on C++/CX that wasn't spent on C++11 support. Which is why VCNext's C++11 support is barely improved from VC10. No uniform initialization. No initializer lists. No varadic templates. These are major C++11 features that Microsoft just decided not to bother even trying to support. We won't see support for them until VCNext+1, if then.
Microsoft has shown that they don't have a commitment to the C++ language itself. That's my problem.
+1 to Alfonse. I do mind having language extensions, I think C++/CX is a net loss on its own, but I would be much less vocal on this if Microsoft delivered on C++11. That Microsoft have chosen to spend resources on a language extension INSTEAD of implementing the C++11 standard makes it that much worse.
@Charles: Yes, I did watch the interview. I can answer point by point, if you want, but in general:
* I am totally unconvinced that developing a language extension was warranted, you could have done everything using ISO C++, generating metadata if you absolutely have to,
* I know about WRL, of course, I wish you spent more time on that, because right now it leaves a lot to be desired, same with raw COM - I am fine using it and just wish you spent more time tying WinRT to raw COM without involving language extensions,
* I cringe whenever I hear this "only at the boundary" talk, because the boundary is hugely important (as an experiment, look at the list of features for any modern application, I predict that at least half of them, perhaps more, will touch "the boundary" in some way).
That's it.
I fully agree with the long comment of Glen.
@jalf:
jalf: When you write MFC apps, it makes sense to use CString in the MFC code, because it is well integrated with the framework. And CString (which is platform specific) offers better support for *useful* stuff like loading strings from resources, which lacks in std::wstring. And this is because std::wstring is cross-platform, but loading a string from string table resource is Win32 platform-specific stuff.
So, both CString and std::wstring are doing their jobs well: one is well integrated and convenient to use for Windows-specific code, the other works well for multiplatform code.
The fact that you use MFC for the GUI (and CString here), doesn't mean that you have to use CString all over the place in your C++ code: you can use STL containers and std::wstring in the core non-GUI part of your app, and make it portable to other platforms.
The fact that some apps have DWORD's and HANDLE's all over the place can be a bad design decision and bad implementation technique, but you can't blame Microsoft for poor software engineering practices of some 3rd party developers. Those raw HANDLE's could be wrapped in C++ RAII classes, and not exposed outside.
If you code for Windows in C++, of course you have to have some layers of code which interact with Win32 API and manage DWORD's and HANDLE's, but you can wrap them in C++ classes and hide these implementation details to the other layers of code.
@PFYB:
Yes, I don't have problems with language extensions for platform specific code.
I can use ISO C++ for the most part of the app, and use the language extensions for platform specific stuff, if it makes me more productive.
And, again, if you don't want to use extensions, there is WRL.
Probably because the standards can't cover 100% special cases of platform specific code?
It seems to me that QT, which is multiplatform, does have some kind of "extensions" to address some problems that standard C++ can't address in a productive/convenient way: e.g. QT has the "moc" compiler and makes use of several macros to enrich the C++ language.
In my practical view of programming, if a language extension makes me more productive for writing platform specific code, it's OK to me.
For what I've seen so far, the "fast and fluid" way of programming Metro in native code is using C++/CX. WRL uses standard C++, but seems to me less productive than C++/CX.
What would be an ISO C++ alternative to C++/CX? Would it require using preprocessor macros? Would it require a separate compiler (kind of like MIDL) for metadata or other platform specific stuff?
I'm not sure this would be better than a platform-specific language extension like C++/CX.
Nevertheless, let's enjoy each other's viewpoints.
@C64: Very simple:
"So, both CString and std::wstring are doing their jobs well: one is well integrated and convenient to use for Windows-specific code, the other works well for multiplatform code."
The problem is not that either CString or std::wstring are not doing their job. The problem is that they are arguably doing the same job yet they are two different classes. If you were designing MFC today, would you reuse std::wstring or would you invent your own string class? Easy question, right? Of course, you would use std::wstring. It's the same with WinRT.
Everyone understands the concept of splitting code into modules which have separate responsibilities. The problem is, making a separate class representing "a string for the GUI subsystem" instead of just using a generic string class does not add a lot of value. Why not just use the latter? Same with the syntax. Why have "a super-smart pointer to a regular COM object which is just so cool it has to use a hat"? Why not use the traditional smart pointer?
@PFYB:
I imagine that if Microsoft had not delivered C++/CX, then several people would have said: "Oh, programming Metro in C++ with WRL is hard: is kind of like COM and ATL; let's use C# for that, it's much more productive." So, I think C++/CX was a good move to offer a native development tool for Metro.
However, I'm also kind of disappointed because C++11 support in VC11 is insufficient. I'd not expected full C++11 standard compliance; but some nice things like variadic templates or range-based for loop would have been a very nice touch. But let's hope in a hotfix or serivce-pack 1
@PFYB:
"If you were designing MFC today, would you reuse std::wstring or would you invent your own string class? Easy question, right? Of course, you would use std::wstring. It's the same with WinRT."
The problem with std::wstring and WinRT is that WinRT strings have to cross module boundaries, kind of like BSTR's of COM (so, they need to use a common memory manager): std::wstring can't do that. So you need a new ad hoc string class (to wrap a WinRT string handle - HSTRING? - ) for WinRT.
I think a similar thing happens for WinRT C++/CX collections (<collection.h>): standard STL collections can't safely cross module boundaries (and C++ exceptions - which they use - can't either), instead the collection classes implemented in <collection.h> can.
Both Platform::String and <collection.h> classes can be used only at WinRT boundary; std::wstring and STL containers in the rest of the code.
@C64: Thanks for the answer (we cross-posted). Yes, let's enjoy each other's viewpoints.
[PFYB: "Why, in the case of Microsoft, the right tool for the job is almost always another language extension? Why do they always have to butcher the language, the standards, the tools and the infrastructure built around these standards?"]
"Probably because the standards can't cover 100% special cases of platform specific code?"
Of course, the standards can't cover 100% special cases of platform specific code, but why the solution to this is to add a language extension? There are other ways to cover special cases of platform specific code. Why not, say, write a library?
"What would be an ISO C++ alternative to C++/CX? Would it require using preprocessor macros? Would it require a separate compiler (kind of like MIDL) for metadata or other platform specific stuff?"
Yes, macros, base classes, as well as tools (or new logic in the existing tools) to generate and use metadata.
@C64:
"The problem with std::wstring and WinRT is that WinRT strings have to cross module boundaries, kind of like BSTR's of COM (so, they need to use a common memory manager): std::wstring can't do that."
I understand that WinRT strings have to cross module boundaries and that std::wstring can't do that. I also understand that C++ is a low-level language and thus we have to have means to use the low-level WinRT strings as they are. The problem is with the way MS tackles the use of WinRT from C++ at the high-level. Why, instead of having Platform::String^ and a plethora of new syntax and rules in the new language extension, which is what we have as the high-level now, not have std::wstring and traditional C++ classes and smart pointers?
Imagine doing this:
#import "winrt.<subsystem1>.dll"
#import "winrt.<subsystem2>.dll"
...and having it generate C++ classes with shared_ptr<IWhatever> and wstring. Why can't we have this instead of C++/CX?
@PFYB: WRL is largely undocumented at this point. This will change, of course. Stay tuned..
All, keep it coming. There's no way to learn without listening.
C
@PFYB
"Imagine doing this:
#import "winrt.<subsystem1>.dll"
#import "winrt.<subsystem2>.dll"
...and having it generate C++ classes with shared_ptr<IWhatever> and wstring. Why can't we have this instead of C++/CX?"
Because, then someone would be able to instantiate a non-reference-counted object of type IWhatever. Hello memory leaks.
Now, this could be worked around by using the named constructor pattern, but what about when the coder has to implement a class to pass to a component of WinRT? It cannot be enforced. In C++/CX, reference counting is a part of the contract of a ref class. There is NO other way to instantiate them, and this is a good thing.
@Michael Price: "Because, then someone would be able to instantiate a non-reference-counted object of type IWhatever. Hello memory leaks."
It's been a long standing philosophy of C++ to let people jump off the cliff if they want to. You offer safety nets as long as they don't impose unreasonable constraints on those who know what they are doing. Compare that with Borland/Codegear VCL. Yes, VCL classes aren't fully C++, most of them can only be used as regular pointers (not hats) and can potentially be misused. Has anybody heard horror stories about it? I don't remember anything significant since 1996 when the first version was released, and unlike 3 iterations of "managed" C++ and general dissatisfaction, VCL is largely unchanged and enjoys favorable opinions. I'm not saying MS should imitate VCL, I believe there are far superior solutions, but I'm saying that MS continues treading the same turf with the same methods and the same results. Tell me what is it?
@C64: "The problem with std::wstring and WinRT is that WinRT strings have to cross module boundaries, kind of like BSTR's of COM (so, they need to use a common memory manager): std::wstring can't do that."
Who said std::wstring can't cross the boundary? There are two ready solutions in C++: overloading or displacing "operator new" and/or providing custom allocators. Any inconvenience caused by those two methods pale in comparison with non-C++ String type. Even making interface taking wchar_t* and prohibiting cross-boundary construction/destruction would be a better solution. It's a bad design having objects without owner anyway, so either pass a copy or move with rvalues -- don't throw the hats all over the place.
@Garfield:
I've already written on problems on passing STL stuff across module boundaries.
And note also that STL stuff including std::wstring can throw C++ exceptions, and they can't safely cross DLL module boundaries either (so, even if you use a custom common allocator for std::wstring, e.g. CoTaskMemAlloc(), then there are other problems to be fixed, like C++ exceptions flying out of DLL's, or structure of STL containers changing with different compiler versions and _ITERATOR_DEBUG_LEVEL values, etc.)
Moreover, I consider the pure C style of raw wchar_t* a bad design, a move backward in the 1990's
passing a pointer to preallocated buffer + the buffer size, like say with ::GetWindowText()...? And maybe return codes or call ::GetLastError() to get error information? No, I do prefer the modern object-oriented exception-based C++/CX approach.
@C64: "And note also that STL stuff including std::wstring can throw C++ exceptions, and they can't safely cross DLL module boundaries ..."
As I suggested, don't pass the objects across the boundary if you can't handle exceptions, make a copy on the other side, or move, or swap, and return them the same way. You can even serialize the object to pass it through the boundary and construct them on the other side. People were crossing DLL boundary long before COM/.NET/whatnot appeared staying within standard C++, it's not like it's a new problem.
"Moreover, I consider the pure C style of raw wchar_t* a bad design"
Bad design? How about the cure for bad design? No, really, I think System::String^ is much worse design. Heap allocated objects without clear ownership, like those traveling with hats is a bad design. Have you noticed when std::unique_ptr was finally available how shared_ptr became a lot less useful? People found they used the latter only because the former wasn't available. In reality, there aren't many good solutions that require sharing. Shared (ref counted) pointers already smell of design problems.
Again, there is nothing terribly wrong about passing wchar_t* zero terminated strings, it's simple to get pointer to a buffer, const wchar_t* cstr = mystring.c_str(), it's easy to make a copy std::wstring mystring(cstr). It's been used for ages. Built-in types are your friends -- they never throw.
Of course, I don't want to trivialize the issues suggesting that all problems can be solved in short discussion thread, C++ standard doesn't have notion of DLLs, modules, ABI. The solution is still to keep standard C++, achieving as much as possible with libraries, probably using external tools, designers, code generators, macros, attributes, etc. to fill the gaps. Just like other existing frameworks do. The worst solution is to mangle C++ by adding these extensions, which ironically doesn't mean extending the language. In reality as soon as you are using these extensions you are constraining the language, imposing upon it foreign architecture coming from languages with pointer semantics and garbage collection. I wish Java/C# were never created, because we have a lost decade in software industry. Free C++ from the burden of C# architecture and I'm sure you will see how to do it right.
@Garfield
Crossing DLL boundaries with standard C++ is very constraining.
One possibility is to only let C++ abstract interfaces cross the boundaries... and then you start reinventing COM (which is interface-based)..
@Charles:
."
Sure.
Very simply, my position is that there is no need to have yet another language extension, because one can achieve everything this extension provides, including ease-of-use, by using ISO C++ and some support from tools that generate and use C++ code and / or metadata.
That's pretty close to saying that everything can be done by using a C++ library, but not quite equivalent, as the source code for that "library" should, as I propose, at some point be generated by a tool. The generation of the source code (or even binaries) is, of course, nothing terribly revolutionary, we've had this for years with tools like TlbImp / TlbExp in .NET and features like #import in C++.
With that, let's turn on to what Marian says in the video. He brings up three points as to why you went with a language extension:
1. You really wanted the classes representing WinRT objects in C++ to throw exceptions instead of returning HRESULTs, eg, to have chain calls (GetX()->GetY()->GetZ()), etc. -- Good. That's not the first time we encounter this problem, we had the same problem with COM. Extend #import to cover WinRT: allow #import to take a WinRT module, parse its metadata, then use that metadata to generate corresponding C++ classes, rewriting method signatures so that instead of returning HRESULTs they return [out,retval] values. Done.
2. You really wanted to preserve IFoo inheriting from IBar, even though in WinRT this merely means that QueryInterface'ing IFoo for IBar succeeds. -- Good. Have the C++ code generated by #import expose interfaces IFoo and IBar as inheriting from each other in the C++ sense, do the QueryInterface's in the implementation as necessary. Done.
3. You really didn't want to force developers writing WinRT components to write methods in COM style, returning HRESULTs. -- Good. Interestingly enough, at this point Marian actually mentions #import, and says that this is where #import doesn't help. In the recent thread on vcblog Jim Springfield talks about the same thing. I get an impression that this point is *the* reason the team gave up on #import and concluded that they have no choice but to add new language constructs. So, is there really nothing that could be done here? Of course, not. Just do the reverse of what #import does and generate wrapper C++ code that is facing outwards. Eg, have a compiler option that would take the C++ code and generate wrapper classes for that code, turning methods returning whatever into methods returning HRESULTs, converting exceptions into HRESULTs, generating GUIDs for interfaces, etc. If you don't want to do this within the compiler, pack this functionality into a separate tool. This is so simple, it is amazing neither Jim nor Marian don't seem to ever acknowledge the possibility of that. The irony is that this also, at least to me, seems significantly easier to implement than C++/CX, since that requires doing only part of what you had to do for C++/CX (you have to parse the C++ code and generate / consume metadata, but you don't have to actually extend the language). Done.
So, all three issues addressed. There are other issues like the ABI, having to suppress exceptions, etc, not brought up in this video, but so far, I didn't come across anything that couldn't be addressed by generating C++ code both ways, as above.
In sum, it seems to me that the team didn't go far enough in trying to tie WinRT with C++ without altering the C++ language. You gave up far, far, far too easily. Generating C++ code both ways, for wrapping WinRT objects in order to use them from C++, and for wrapping C++ classes in order to expose them for WinRT applications or modules, would have allowed to do basically everything C++/CX does now. Wrapping both ways wouldn't also be all that different from what other languages are doing - I take it that .NET languages wrap both ways as well. As a result, C# developers use System::String and never see Platform::String^, they can pretend everything is garbage collected even though WinRT objects aren't garbage collected, etc. Why did you not do the same for C++??? Beats me.
@C64:
."
This is where we differ.
If I were to choose between (a) passing raw handles and staying compatible with the ISO C++ standard, and (b) passing hat pointers to objects but using non-standard language extensions, I would choose the former. I'd rather write a couple of wrappers to make passing raw handles easier, but be able to use the standard C++ toolset, than go the other way.
Hence, no C++/CX for me.
I give up, I feel like my messages are lost on the other side of the boundary.
The bottom line, I don't have much hopes that after 10 years of experimentation MS would finally create a reasonably good solution, it's still rehashing the same failed approaches. I predict 2-3 years from now MS will cheerfully announce they are done with C++/CX and offer the next iteration of the same. I hope some company, like Codegear/Embarcadero will wrap this unsavory design in close to standard C++ implementation as they always did. The third party solutions were always better and it looks like this trend will continue.
@Jalf: Thanks for the kind words and your balanced contributions to the various debates.
@Garfield, welcome to the party! :)
@C64: Thanks for your thoughtful reply. You are absolutely right in your technical explaination of the class/component problem space. However, I think you are mistaken to claim I am wrong when I said that "C++ already has a notion of the Component. It's called the class." Here's why:
If you google the word "notion" used in my statement, you will find this definition:
no-tion (noun): A *vague* awareness or understanding of the nature of something.
Synonyms: idea - *concept* - conception - thought - mind - opinion
Referring to that definition, you will see that I wasn't claiming that a C++ class and a Component (any of Microsofts definitions of them!) are exactly the same thing from a technical perspective. I was arguing that the C++ class is *vaguely* similar in *concept* to the Component.
I stand by that and don't feel that to be wrong at all.
The difference between us was your emphasis concluded with classes and Components being different (your emphasis is on the *vague* from "notion") and mine concluded with that they are different but *also* similar in concept i.e. equivalent. (my emphasis is on the *concept* in "notion").
I don't want to lose that distinction because the "equivalence" (for want of a better word) is really important!
@c64 and @others:
If you think class and Component are so different, note that effectively C# only has one definition of class to support both class and Component, as did C++ until Microsoft introduced C++/CLI and C++/CX etc. to add more. not to mention the different pointer types required to support them!
When you make a Component and a class different, *one has to still reach the other*. If that's not transparent (and it should be), that makes C++ developers chief "wrapper upperers" to the king! If that's my job, I'll quit - or get a new king. C# does not make you do this because class and Component are not surfaced in the language in an invasive way.
Once you conclude that the class and Component are not just different, but *equivalent*, you should start to see why "the border matters" - because the border will be everywhere: not just at "the OS level", the Metro level, or any of the places people instinctively know they will need to wrap. It's at the Component level and when that equates to the class it means the borders are everywhere!! Plus all of those usual OS level places as well!!!
IMHO, herb and C64 seem to underestimate the importantance of what this means in practice. I think it's huge.
People will have a natural incentive to want to share/expose their *own* ISO C++ classes as Components so they can be used by the market - i.e. by other languages (like C# and Javascript). They may even be forced to do that, to support OS APIs (like Metro) or if MS insists on it through policy. So people will be incentivised, or forced, to put up their own borders. Everwhere. So they will.
That means the neccessity will be for ISO C++ developers to put hand wrapped Microsoft classes and technology around *each* of their own ISO C++ classes to make them useful in the Microsoft world; and/or, put ISO C++ wrappers around Component classes they use to prevent the Microsoft extensions bleading into their main code base.
Obviously MS think maintaining an ISO C++ class, a classic COM class, a C++/CLI class, a C++/CX class or a "whatever they dream of next week" class is a useful endeavor. I don't think its even viable. Even if it is, why does MS think that C++ developers want to be chief wrapper upperers to the king? What is fast or fluid about that? How is that productive for a C++ developer or a C++ renaissance?
The maintainance effort (bye productivity) required to support ISO C++ classes and "other" component classes will cause some people to just forget about the cross platform (bye opportunity) ISO C++ one (bye standards) or eschew componentising their product to begin with (bye opportunity).
Fusing a "Component Technology" into a language like C++ (in an invasive way) that already has a notion of Component is a mistake IMHO. It supports lock in (bye freedom) and other languages that fail to stand on their own (hellow slow coaches); allowing "them" to stand on our shoulders but not doing enough for C++ "us".
In C#, most of this IS transparent, and the only Component to maintain is the class.
Is any of this good for ISO C++ and is the complexity of any of it neccessary? Has any of this technology really proven itself and will any of it be here in 5 years?
I see Component Technology as a bridging technology, that's "all" it does. If it isn't nearly virtually transparent, it isn't the right way to go. It shouldn't be invasive. C++/CX and C++/CLI aren't transparent, ref class, ^ and % are not transparent and wrapping is invasive and GC may be the same.
In C# class and Component are the equivalent and the details are as hidden as possible - that shows it is possible and the right idea. C++ component technology needs to hit the same level of transparency in a standard way and I believe this to be possible.
The C++ modules effort may further the right technology, but regardless there has to be other better ideas short of the proposition I see now to make Component transparent. I think the ISO C++ team (that includes us) need to be working on whatever the right bridging/component technology is to make Microsofts component technology transparent. If that can't be done the platform needs to go, not C++.
So far, IMHO, C++/CLI and C++/CX don't achieve the goals I think they should for C++, just Microsoft, and are another dead end if thats all we get.
Do keep trying to convince me otherwise, I am keen to hear the other views - or how about better solutions to this problem space (it does need a solution) if possible - that is being positive too.
Thanks
@Charles: Thanks for the video. It has a nice flow and good amount of detail. That said, I watched with amazement at yourself saying over and over that C++/CX is no big deal, just a "ref" here and a hat there, and how it all is a very small extension. That's missing the point by quite a margin! The number of keywords in a language extension doesn't matter! A single keyword that you have to use is all it takes to turn a standard-compliant program that compiles in GCC or in a prior version of VC, into a non-compliant program that doesn't compile. PFYB is talking a lot of sense. Being able to use the standard C++ toolset is of paramount importance. Our codebases are huge, we do need a lot of tools to help us deal with the complexity of our code, designers, modelers, analyzers, documentation extractors, style checkers, you name it. These tools have expectations that the source code is standard-compliant! We do use more than one C++ compiler, too. I doubt any of these compilers besides VC (excluding existing versions - already a loss!) will support C++/CX in the foreseeable future!
I don't know how to say it any clearer: The standards are important! Don't break them!
@KevUK: Thanks, Kev. I understand.
I guess my light-hearted commentary should be interpreted as "C++/CX is for use in writing simplifed native (COM reimagined) Windows 8 Metro Style applications with shareable components(shared across very different language boundaries....)."
So, you wouldn't write a library in C++/CX and expect it to be consumable/usable (as intended...) outside of the context of WinRT. I mean, right? That seems to be pretty clear.
Sorry if I made it sound like this is no big deal (the hats and refs, the extending of a standardized language...) Of course standards matter. They matter very much to us. You are absolutely right. The goal of this episode of GoingNative was to shed some light, real light, on why C++/CX exists at all. I trust that came through?
C
@Charles
The Component IS the platform (any platform, not just WinRT) because the Component IS the library and the library IS the platform.
When we write a library, we want it to be consumable everywhere with the least effort neccessary.
If C++ can't hurdle the library "border" transparently or be the border transparently, you cut off C++'s legs and give us two heads. That probably explains the extra hats!!!
Seriously, there is no viable C++ in the world you describe, it is only C++/CX or C++/CLI, or whatever you come up with tomorrow.
If that's the goal, admit it, but I don't support it.
If that's not the goal, admit it, and lets fix it.
When you write a library in C#, you write a library in C#, not WinRT.
You can't really compare a VM-based solution to a native one... Yes, the C# compiler team and CLR team had to do a lot of work to keep C# and .NET looking and feeling like C# and .NET for C# and .NET developers in their effort to support the WinRT programming.
A big part of this C++/CX thing is to make your life easier when writing WinRT applications and components for Windows 8 Metro... Right? There's no evil plot to fork the language -> you only ever use C++/CX to write native apps/shared components that target a single platform. Right? Then again, that's exactly where you find this to be annoying and wrong. I understand that.
I'm not arguing for or against anything. Just trying to help clarify the way it is.
C
@Charles:
I hope I clarified how exactly I think you could have implemented everything there is in C++/CX using straight ISO C++ in my previous post. If there is anything that is not clear about what I am saying, please ask and I will elaborate.
With that, I am going to address a couple of points in your answer to Glen:
"You can't really compare a VM-based solution to a native one..." [In terms of how they work with WinRT.]
A VM-based solution differs from WinRT *more* than a native solution! A VM-based solution has garbage collection and compiles code from IL on the fly! Yes, the devil is in the details, but the fact that .NET languages can use WinRT seamlessly despite .NET being managed code and WinRT being native code, while C++ can NOT use WinRT seamlessly despite both C++ and WinRT being native code, speaks for itself. Yes, .NET and C++ differ a lot, but we, the developers who frequent this and other Microsoft sites, aren't total C++ babies, we know a bit on how to map platforms and concepts onto each other, we have a lot of practice doing that with COM and similar technologies, both Microsoft and non-Microsoft, and it definitely seems to us that Microsoft could have done the mapping of WinRT to C++ without altering the language, with roughly the same effort as was required for C++/CX, and with more or less the same results plus the standards-compliance. So, yes, .NET and C++ differ a lot, but the fact that .NET languages can use WinRT seamlessly and C++ can't does not exactly seem to speak to the fact that tying WinRT to C++ was somehow more difficult than tying WinRT to .NET. This seems to be more about .NET guys in Microsoft caring more about preserving their languages than C++ guys.
"the C++ team did a LOT of work as well..."
Yes, absolutely, they did a lot of work. The problem is: the C++ team forgot they are the C++ team and did their work for a different language. The work they did does C++ more harm than good! The C++ team absolutely could do what they wanted to do without altering the language, but they seemingly gave up after the first roadblock that seemed serious (but wasn't). Why didn't they try just a bit harder and overcome that roadblock? Because they didn't see standard-compliance as serious enough to warrant that little bit of extra effort. And nothing you say, Charles, will change this. Actions are louder than words. Yes, I believe you when you say that the team cares about standards-compliance, but the measure of just how much they care is in front of us: there is a way to do everything the team wants in straight C++, but the team overlooks it or doesn't care enough to use it. They care about standards-compliance, but they don't care enough about it to achieve it, even though achieving it is not actually all that difficult.
And let's stop bringing up WRL. WRL is not fast and fluid. It is not a solution, it is a bandage.
Garfield: "I predict 2-3 years from now MS will cheerfully announce they are done with C++/CX and offer the next iteration of the same."
I fully agree. Actually, I suspect everybody here, including Charles himself, agrees with this, it is just that some people aren't afraid to say it how it is and others either try not to think about it or try to convince themselves that they really don't know what it's going to be like 5 years from now.
What Microsoft does with C++ is a crime:
[2002 or so, the era of Win32, Microsoft introduces .NET]
Microsoft: hey, world, here is .NET.
C++ folks: cool, we suppose we call it via HRUNTIME, CoCreateNetObject(HASSEMBLY, LPCWSTR) and so on, right?
Microsoft: no, that's so 1990s, we have created a new language extension (ME F C++), use that.
C++ folks: heh, a language extension? that's not standard-compliant, right? why not just use handles, Win32-style?
Microsoft: it's all going to be fine, just listen to us, use the extension.
C++ folks: ...ok.
[2004, Microsoft adds a lot of stuff to .NET]
Microsoft: yo, here is .NET 2, with generics and stuff, code away.
C++ folks: nice, we didn't have a lot of fun with your language extension, but .NET 2 is cool, we'll look into it.
Microsoft: ha, forgot to say, we decided managed extensions for C++ are no good, we are pulling them off.
C++ folks: huh? does that mean we get handles and C-style functions?
Microsoft: no, no, no, that's so 1990s, we have created a new language extension (C++/CLI), use that.
C++ folks: but doesn't it worry you that the previous extension didn't quite work? could we just get handles, maybe? we'd like to be standard-compliant, if possible.
Microsoft: nah, listen to us, everything is going to be fine, use the new extension.
C++ folks: ...ok.
[2011, Microsoft announces Windows 8 and WinRT]
Microsoft: heya, heya, ladies and gentlemen, we are proud to present to you... WinRT!!! and the C++ renaissance!
C++ folks: wow, WinRT and the C++ renaissance! cool! we suppose that's about the new C++ standard that just happened, right? very timely, Microsoft, well done!
Microsoft: um... no, we aren't going to do much with respect to the new C++ standard.
C++ folks: erm? so, what is this C++ renaissance thing about?
Microsoft: it's about WinRT, the new native tech, you'll like it, we are sure!
C++ folks: ok... do we use that via C++?
Microsoft: nope.
C++ folks: hmm... via C++/CLI?
Microsoft: nope.
C++ folks: then how?
Microsoft: well, we have a new language extension! it's called C++/CX and it's about reference counting. ain't it great?!!
C++ folks: ... (looking at each other in disbelief)
True story.
It is painfully clear that this can continue until cows come home. This is nuts, but this is how it goes. Does Microsoft care about standards? Yes. Will it trade standard compliance for more vendor-lock in, a prettier bell, a louder whistle, almost anything? YOU BET!
@Rockford N:
I'm not afraid of say it how it is: when I needed to glue some C++ code with .NET (because e.g. I had to build a GUI in WinForms but needed to use some back-end existing C++ code), I was glad I had C++/CLI as a tool available to me to build the bridging layer from native ISO C++ to .NET world.
If you don't like C++/CX extensions, you are free to not use them; WinRT can be programmed without C++ extensions using WRL; and if you don't like WRL, you are free to build your own native pure ISO C++ library for WinRT. If you succeed, you may make big money with that.
Personally, I've a practical view of programming: if you give me tools and libraries written in pure ISO C++, which are easier to use and more productive than C++/CX, I would welcome them and use them. But as it is today, the best tool for the job seems to me C++/CX.
Good for you, C64.
As for me and my team, after the news that VS 2011 will have basically no improvements for C++11 (vague promises of some improvements in this area in between VS 2011 and the next version are vague, we suspect these improvements consist of variadic templates and basically nothing else, which is too little and too late), after the disappointing flop of the "C++ renaissance" meme with WinRT and C++/CX, and after the disappointing lack of performance improvements in the developer preview of VS 2011, we decided we finally had enough and are not going to go with Microsoft any further. We are switching our development to the Intel's compiler.
And yes, if we decide to target WinRT (which is something we aren't yet sure about, it is not clear to us if this is worth doing at all, and the numerous restrictions on exactly how you have to program for WinRT don't help), we will do it via our own ISO C++ code, not via C++/CX. Why? Because we consider *this* more practical and more future-proof than using C++/CX.
@PFYB: Don't mix several things... If you speak of the lack of C++11 support and the not very good performance of VS201x IDE, I agree with you.
But I think that these problems are not caused by VC++ Team; maybe the upper layers of Microsoft need to invest more money and more brainpower to increase C++11 support in MSVC (e.g. add new developers to VC++ Team).
Regard the IDE being rewritten in WPF, this was not a very good move, IMHO. I'd have stayed with a native IDE, instead of spending time and resources in the .NET rewriting of the IDE (and put these resources for something different). Office has great performance, and its GUI is 100% native (and has the zoom bar capability, too
which was introduced in VS IDE with VS2010).
@C64:
>)
@PFYB, as C64 says, try to stick with C++/CX here (though you are right about C++11 support of course) because you've got some great comments on C++/CX so don't let yourself get too distracted.
Keep your C++/CX ideas coming.
@Charles
I'm not comparing a vm vs a native solution.
Though I don't entirely know how much technology you are coupling in there as PFYB mentioned the possible scope of things.
People have one code base. Not a native code base and a non-native code base unless it's one big compile to go back and forth. I am sure that is what you aim to do for your own code base when the time comes?
I can't see a world where the average person maintains a native code base and a non native code base in any other fashion. Can you? If so, how?
Therefore I don't see C++ as inherently native or not so the comparison is pointless to me at least. Is C# native or not native? Isn't that the same answer?
You keep mentioning WRL, I don't see what that does for me other than increase the amount and complexity of Microsoft specific code I write. Even if it made my code more "standard".
I'm not seeing value to do that. Can you explain?
It doesn't help a ref class from having to manually delegate to an ISO class, does it?
Having a ref class and an ISO class is putting two heads on a dog. You have to feed both wherever you go, that's where I want the complexity removed. There *has* to be a logical mapping between the two as you can do it manually, and I don't want to do it manually. What are your thoughts on this?
Charles).
Charles you should read more carefuly what
PFYB says in his comments. They make a lot of
sense. This part is answered by his plea for
extending the #import directive and writing
codegen tool which would do the reverse thing.
I remeber proposing extending #import myself
under some of earlier channel9 videos.
>
Again listen to PFYB and other C++ people who
commented on using WRL (including me). It is
absolutely not fluid and fast. I remember one
channel9 video here where some indian guy
introduced WRL and there was only one macro
and one templated base class needed so it all
looked fine and transparent. But it was only
because he barely touched the subject! If you
want to see how would look the real ISO C++
WRL code look here:
you can compare the code bloat in
OnLaunched handler implementation using C++/CX
and WRL. I am repeating myself now but this code
looks worse than ancient MFC. So stop bringing
WRL up as an alternative!
We are not small babies. We like to watch the
videos but we don't mindlessly consume everything
what is told. We can build our opinions ourselves.
And this long discussion is only consequence of
that.
Of course you're not... Nobody is suggesting otherwise. Please continue to speak your minds, share feedback and offer suggestions. It's always welcome here.
C
I'm just saying that there are apples and oranges here, not just oranges.
C
Quick analogy:
There are std paper sizes and companies manufacturing printers operating on just on those std sizes - no company changes those standards. They just produce their products to be compatible with them so no matter which printer I buy/use I know that my standard paper size will work with this.
What Microsoft does is - manufacturing printer (just for simplicity call it VS without mentioning all bolts and nuts like compiler etc.) and on top of that is reinventing new paper sizes (every few years) that, and here may I have your attention, is almost identical to the standard sizes but slightly "improved" in order to better serve MS clients. But MS being over all a business don't won't to constrain itself just to MS clients. They say that if you want to print standard sizes - well, no problem at all. Just take the part A11, A125, R58 and D12 and fit them in slots shown in a picture inside a manual attached with our printer and here you go, from now on you can print std sizes!
And few years from now, there will be another paper size from MS. But not worry those who want print just standard paper sizes - MS will acomodate you appropriately.
@KMNY_a_ha: A very apt analogy!
To continue it: ...and if you absolutely want to use standard-sized paper with the Microsoft printer, you can take a screwdriver, disassemble the printer, take the toner out and draw whatever you need to print on your paper by hand, here are the instructions how. That's WRL.
@C64, @Glen: I hear you on sticking to C++/CX. I will try not to get distracted, I promise.
@Charles:
I thought I would elaborate on one of the things that bothers me greatly about language extensions. That should partly explain why I am so vehemently against them.
Every time you extend the language, you create semantic rifts which weren't there before. Take, for example, C++/CX and your 'ref new' syntax. Suppose you have a template function:
template<typename T> void f(T* a) { T* b = new T(); ... }
Can you use that function with a ref class? No, you can't. You can't do 'new T' with a ref class, you have to do 'ref new T'.
Or, say, you have this function:
template<typename T> void f(const T& a) { ... typeid(a) ... }
Can you use that function with a ref class? No, you can't, ref types don't support C++ type info.
Was it obvious to anyone that 'typeid' wouldn't work with ref types? It wasn't obvious to me, that's for sure. I mean, I can rationalize the behavior once I know what it is, but it isn't at all intuitive, and, frankly, the fact that 'typeid' doesn't work bothers me on its own (the above was a real function used for tracing).
I can give you an example for C++/CLI, too, with C++/CLI you can have managed functions and unmanaged functions, but you can only have unmanaged lambdas. Managed lambdas are not supported. Was *this* obvious?
There are plenty of places where your extensions break in non-obvious ways, and don't integrate well into the language. Plenty. And I am not even talking about much bigger semantic issues here, like the fact that we couldn't have circular references (they were memory leaks) in straight C++, then we could have them (and people did jump on this) in C++/CLI, and now we again can't have them in C++/CX (well, I personally am not going to use C++/CX, but those who will, especially those who were using C++/CLI before, will suffer). I understand that circular references are fair game in C++/CLI, but not in C++/CX, because the core technologies underlying these extensions are vastly different, but since these technologies are so different maybe you shouldn't have attempted to integrate them into the same language? Otherwise not only do you keep extending the language, you fluctuate between opposite directions! That's crazy, but that's what you are doing.
To PFYB - I couldn't agree more.
When I heard of C++/CX I immediately thought "no, not again, give me C++, heck, give me plain old C, but stop bending the language to suit your strange, temporary needs". All my code is Windows-only, I don't care about platform independence one bit, but what you are doing with the language is a gross offense.
Please listen to PFYB. The things you are adding to the language end up not well integrated with it. C++ is already hard, but your extensions turn it into an absolute mess. You partially destroy the internal logic of the language.
In addition to the angle explored by new features not integrating well with the rest of the language, there is also an issue of things like properties, delegates and generics. Oh god. Please forgive my sarcasm, but I wonder, when you decide to extend C++ with a particular feature, does it not occur to you that maybe you are not the only one who thought of adding that feature to C++? Maybe that feature was considered by the C++ committee or the C++ community, and they rejected it - at least in the form you are about to add it - as non-suitable? Or maybe the language already has means to do what you want and the feature you are about to add was considered to bring too little to the table and was rejected on that basis? Or maybe the feature is not in the language because there is a replacement feature coming in some future version of the standard? What's up with C++/CX extending the language with features I mention above? You are adding them to C++ like it is your own language. Are you not aware of seemingly endless debates regarding, for example, properties on the C++ forums? Do you not realize that not all people WANT to have properties? Is it really your role to pick and choose what C++ will and will not have, which direction it will go? What arrogance!
Stop with the extensions, give us libraries.
@PFYB: you are having a good point on this, thanks.
However, note that it is possible to integrate ref classes with STL containers, e.g. it is possible to have std::vector<SomeCxClass ^>.
I wonder if they could refine the extensions to reduce the "impedance" with the rest of the C++ language.
@C64
>"
)
@C64: Oh, you liked it? There is lots more. For example:
1. We can have an array of smart pointers. Can we have an array of hat pointers? Nope, 'T^ a[3];' is a compile error. We can have a regular pointer to a hat pointer (T^* a;) and this works fine, but we can't have an array of hat pointers. Why? Your guess is as good as mine.
2. With C++ types, we can stack stars to construct pointers to pointers, pointers to pointers to pointers, etc. Can we do this with hats? No, 'T^^ a;' is a compile error. Yes, stacking stars and hats is useful.
3. It is fairly common for a C++ class to derive from a template class. C++/CX has generics. What happens if we try to derive a C++ class from a generic interface class? Oops, that's a compile error, too. (Example: 'class C : public I<int> { ... };'. This compiles if I is a template interface: 'template<typename T> class I', but doesn't compile if I is a generic interface: 'generic<typename T> interface class'.)
4. A C++ template can, of course, take another template as a type parameter. Can a generic take another generic as a type parameter? No. Can a template take a generic as a type parameter? Yes. Can a generic take a template as a type parameter? No.
5. We can use instances of ref classes through hat pointers (T^ p = ref new T; p->x = 2;) or we can pretend they are allocated on stack (T p; p.x = 2;). Good. We can take an address of a hat pointer (T^ p = ref new T; f(&p); ... with f, for example, at some point doing *p = nullptr;). Can we take an address of an instance of a ref class that pretends it is allocated on stack? Nope, this is a compile error.
6. If we take a hat pointer as an argument, we can prevent changes to the underlying ref class by switching the type of the argument from hat to const hat (T^ -> const T^). Good. This prevents direct writes. Can we now mark methods in T which do not change its internal state as const (ref class T { void f() const {} };) and leave methods that change its internal state as non-const? Nope, marking methods of a ref class as const is a compile error.
7. Ref classes can have automatic properties (ref class T { property int x; }). Ref classes can also have unions or structs that contain unions, like VARIANTs. Good. Can we have a union with, say, a float and an int and declare one or both these things as an automatic property? Noooooo, this is a compile error.
8. What about operator overloading, can ref classes overload operators (think about constructs like matrices)? Why, yes, they can. But if you think this works in the way this worked in regular C++, think again. Your overload prototype likely uses const references (const T& operator+(const T& r) { ... }). Well, this doesn't work with a ref class, since there is no such thing as 'const T&' for ref classes. You might try using plain types (T operator+(T r) { ... }). Well, this doesn't work either, since attempting to return '*this' fails, ref classes do not have a copy constructor. What you can do is use hats (T^ operator+(const T&r) { ... }), but this effectively leaves you with an implementation for +=, using + modifies one of the arguments.
And this is just scratching the surface. The above took me a total of about 45 minutes to compose - make a quick list of things to try, try them one by one, write down the results. Just in case you are interested, the original list contained 12 items, 8 items exposed problems as above, 3 items worked, the remaining item needs further investigation (having two identically named ref classes in two different compilation modules makes the debugger use only one definition, it can be that the debugger contains a bug and doesn't know that identically named ref classes in different compilation modules are not the same, or having identically named ref classes that are different is actually not allowed - which, of course, would be vastly different from C++).
By the way, how does C++/CX interoperate with the previous extension, C++/CLI? Can we have managed code, unmanaged code and WinRT code in the same file? C++/CLI had #pragma managed / unmanaged, so we should be fine, right? No, of course this doesn't work.
A total mess.
@PFYB and others - I'm sure there is only one person who can answer competently to all those "why" questions. Mr herb sutter - "the" C++ architect at MS.
?
@new2STL:
"He has already answered in Using Windows Runtime from C++ (plus other vídeos and blogs from VS team, but it seems that was not enough to appease the angry-ones ...)"
You know, new2STL, I think I have read and heard more or less everything said on the matter by Herb Sutter. I don't remember him providing satisfactory answers to any questions. He did make a vague promise on adding some unspecified features of C++11 *after* the version of VS that is currently not even in the beta, but probably before the next one, and that's it.
Did Herb Sutter at some point explain why the team went with C++/CX and didn't use ISO C++? If he did that and I missed it, please point me to that explanation.
"A total mess." -- "Only because the compiler not let you do wrong stuff?"
So, if I understand you correctly, you say that, for example, 'T^ a[3];' does not compile, because it is "wrong stuff"?
What criteria do you use to distinguish "wrong stuff" from "right stuff"? I surely hope it is something more than simply what the implementation does.
@new2stl where (which video file, what time on this video) herb sutter answered to those questions? Would you mind pointing me to them? I'm asking you because I've watched it at least twice each of them and I cannot recall any of those answers being given. So could you kindly, point me out there?
Thanks
Sorry Charles for the explosive text, but I have to put it out. From now I'll no more answering to PFYB.
")
@new2STL:
You don't like what I am saying, fine. I agree my tone is not the most friendly. Please keep in mind me and my team (and a lot of other people here) have been asking Microsoft not to dilute C++ with extensions, etc, for a long, long, looooooong time, to no avail.
You want the discussion to be constructive, good, I want the same. Let's stick to the facts.
Do the above examples I provide show that C++/CX does not integrate well into the language? Do they or do not they?
@new2stl and I've said already that I've watched all of them at least twice and in none of them answers to those questions are given. Second, Jim in his blog also doesn't give answers to them.
As for rage? Where on this or Jim's thread I "raged". I've asked you politely to point me to the specific time on a specific video which would demonstrate that what you're saying has any back up. But you, just in the style of guys from MS conveniently telling me to see those videos, again... even though I've just explained to you that I saw them at least twice, yet you do not answer what you've been asked. Just like they don't.
And as for deciphering "KMNY_a_ha"? Your point is? It's not like I've tried to conceal the fact that "KMNY_a_ha" and "Knowing me knowing you, a-ha" is same personr. I've explained that to Diegum long time ago that both nicknames are mine and the KMNY_a_ha is due to the fact that I cannot register myself with my original nickname on Channel9. So your point is, what? Sherlock?
So are you going to show in which video/blog herb sutter answers to those questions mentioned here, or you just going to simply state that those questions had been already answered?
I don't know of other threads, but in this thread it seems to me that PFYB is having some good points and is giving constructive feedback. I differ with him on some points, but I see no "hate" in his words... I just see a technically interesting conversation.
@C64 You mentioned that one cannot pass smart pointers between modules. I keep wondering, why is that not possible, if both new smart pointers (shared and unique) posses a deleter? I assume, that thanks to this OUR module will deallocate the resource, NOT theirs.
I agree. The tone here is not hateful at all (that's happened on other threads and not by PFYB, mind you, but not here - it's been quite civil. Thank you!). What I see here is frustration and perhaps some misunderstanding, too.
I do respect new2STL's desire to keep the tone respectful and constructive here. Thanks for that reminder, new2STL!
It must be that the information you seek hasn't been supplied in this episode of GoingNative and in all of the BUILD videos on C++/CX. Right? Is this the case? Deon Brewis' great BUILD session discusses the goals (and internals of C++/CX - from compiler output to the reasoning behind the new model), right? Did you watch that one? What about Jim Springfield's VC blog post on the C++/CX design? I know you've read that one, PFYB since you've commented there. Looks like we need to provide more information.
What I'm hearing from you is that you need more information and justification.
C
You can pass C++ smart pointers across module boundaries, but it is very constraining, as with other C++ classes.. But you can safely use COM components written with VCx with clients written with VCy.
Charles asked if I'd pop in for a quick comment to answer "why have C++ language extensions for WinRT?"
Short answer: You should still write nearly all your code using portable ISO C++ only. There's going to be some Windows-specific code if you want to use some Windows services (e.g., Metro), just as there's going to be some CORBA-specific code if you want to use CORBA services, and C++/CX actually exists to reduce the amount of nonportable Windows-specific code you have to write, although we also support WRL as the more traditional "ATL+MIDL+..." style approach we've all historically used for COM and CORBA.
Longer answer follows:
1. WinRT is an evolution of the COM/CORBA model.
Native C++ types aren't ABI-safe, programming language-neutral, version-safe, or otherwise capable of meeting the multi-language and serviceability design constraints of WinRT. Historically when such capabilities have been needed, we're resorted to some non-portable solution like COM or CORBA which rely on compiler extensions + libraries + build/tool extensions.
WinRT is just an evolution of COM. Supporting COM or CORBA has always required extensions beyond just ISO C++, from language/compiler extensions (e.g,. #import is not ISO C++), to custom libraries (e.g., ATL), to build/tool support (e.g., IDL/MIDL). WinRT adds extra wrinkles on top of COM, such as inheritance implemented under the covers using COM aggregation, and so supporting it requires a bit more work than old-style COM, but that's just a matter of degree; the problem is exactly the same in kind.
Note that we are also providing WRL, which can be considered an evolution of the ATL + tools like MIDL approach. If you prefer that, go ahead and use it; your code will look more like you're using just a library, though I will point out that you'll be using lots of non-ISO C++ scaffolding just as with COM or CORBA.
You can use C++/CX or WRL. I would just ask that you try both, and even if you choose WRL you'll be able to see why the C++/CX option is designed to make it a lot easier -- and actually less invasive in your code -- to write the non-ISO C++ thin wrapper parts of the code because we've employed the compiler to help you by avoiding build/tool complexity (e.g., no MIDL/IDL compilers or files), better language integration (e.g., you get to use exceptions naturally across module boundaries), and other simplifications which will actually reduce the amount of Windows-specific code you'll write.
2. WinRT is a foreign (non-C++) object model almost as much so as .NET.
Many (but not all) of the answers to "why do we need C++/CX language extensions for WinRT?" are the same as the answers to "why did we need C++/CLI language extensions for .NET?" which I wrote up in A Design Rationale for C++/CLI.
What's the same: Nearly everything. Most of the same design constraints apply -- we are binding to a non-C++ object model that supports multiple client languages, metadata, ABI safety, and so forth.). As such, binding to a foreign object model requires additional compiler knowledge to account for the differences -- and anything that requires compiler knowledge can't be written as just a library. Note: Even if a library-like syntax were chosen, it would simply be a language feature dressed in library clothing.
Now, WinRT is an enhanced COM, and as such is closer to C++ than .NET is. So a few constraints don't apply and so C++/CX is slightly simpler that C++/CLI. The main relaxed constraint is that in WinRT we don't have to worry about a compacting GC, so objects' addresses do not change. For that reason, we can (and do) easily allow "mixed types," meaning a ref class with data members of native C++ type data members and vice versa (see 3.1.3.3 in the Rationale linked above; we never got to this for C++/CLI but do support it for WinRT and without having to have two separately allocated subobjects), and we have less need of some helper abstractions like pin_ptr and interior_ptr that matter a lot more when binding to foreign object model and runtime that permit compacting GC.
3. Neither C++/CX nor WinRT is recommended most of the time.
This can't be said enough: C++/CX and WRL are special-purpose tools, not replacements for portable ISO C++. The guidance is the same for all the Microsoft-supported languages that are WinRT-enabled: You should continue to write nearly all your code in your normal language (in this case portable ISO C++; for .NET programmers, C#; for web programmers, Javascript) and use WinRT only on the boundary where you want the cross-language, ABI-safe, etc. environment. Just like COM or CORBA. Because WinRT is COM.
C64>.
The STL never has and never will guarantee binary compatibility between different major versions. We're enforcing this with linker errors when mixing object files/static libraries compiled with different major versions that are both VC10+, but for DLLs and/or VC9-and-earlier, you're on your own when it comes to following the rules.
I am sure this again will be interpreted by new2STL as me not intending to listen, but...
@hsutter:
Please excuse me for asking this, but did you read the thread? We know that WinRT is an evolution of COM, we get that WinRT is a foreign object model, and we heard that we are supposed to use C++/CX "only at the border" for what seems a hundred times already. We also heard about WRL. We are asking about something else.)?
***
Before you say so, yes, #import is not part of the standard, we can talk about why we think #import is fine while other language extensions aren't.
Your post simply reiterates the same general points we have already heard a number of times, and, as such, does not answer the above question.
Specifically:
"Native C++ types aren't ABI-safe, programming language-neutral, version-safe, or otherwise capable of meeting the multi-language and serviceability design constraints of WinRT." -- We know this. Wrapping code both ways allows meeting the requirements of the ABI without altering the language.
"Note that we are also providing WRL, which can be considered an evolution of the ATL + tools like MIDL approach." -- Using WRL is not fast and fluid. Saying 'just use WRL' is almost like saying 'just use a hex editor'. If using WRL was as fast and fluid as using C++/CX, or as fast as fluid as using WinRT from C#, we likely wouldn't have this thread.
"... you'll be able to see why the C++/CX option ... will actually reduce the amount of Windows-specific code you'll write." -- The question is, again, why did you not allow writing Windows-specific code with equal ease in ISO C++. We get that C++/CX is expressive and not very verbose. We think you could have achieved the same or better with ISO C++.
"Most of the same design constraints apply [as for C++/CLI] -- we are binding to a non-C++ object model that supports multiple client languages, metadata, ABI safety, and so forth." -- See above. Wrapping code both ways allows meeting the requirements of the ABI without altering the language. We agree metadata is great, please continue generating and using it.
)." -- See above and see the posts in the thread. Wrapping code both ways allows all this while staying withing the boundaries of the standard.
"As such, binding to a foreign object model requires additional compiler knowledge to account for the differences -- and anything that requires compiler knowledge can't be written as just a library." -- Yes, binding to a foreign object model requires additional compiler knowledge, and, yes, anything that requires additional compiler knowledge can't be written as just a library. But using a language extension is not the only way to apply additional compiler knowledge. You can, for example, as suggested above, use additional compiler knowledge to generate C++ wrapper code. There are other ways to apply additional compiler knowledge as well.
"Now, WinRT is an enhanced COM, and as such is closer to C++ than .NET is. So a few constraints don't apply and so C++/CX is slightly simpler that C++/CLI." -- Good, but, first, the question was not about this, and, second, even though WinRT is, as you say, closer to C++ than .NET ever was, and C++/CX had less hoops to jump than C++/CLI ever had, this, sadly, didn't allow you to integrate C++/CX into the C++ much better than C++/CLI. If you read the thread, you will find a post with examples of language problems that C++/CX has. The general feeling from using C++/CX is that the number of such problems is not less than that for C++/CLI. While reading that post please note how using standard C++ constructs instead of C++/CX constructs (eg, smart pointers vs hat pointers) easily avoids the problems. This is one of the reasons we are asking why you didn't do everything in ISO C++.
"You should continue to write nearly all your code in your normal language (in this case portable ISO C++; for .NET programmers, C#; for web programmers, Javascript) and use WinRT only on the boundary where you want the cross-language, ABI-safe, etc. environment." -- We've heard this many times already, there is no need to repeat. What you are saying implies that the border is a small thing, and is easily separable from the rest of the application. Me and many others disagree. In most projects I have worked in there were multiple borders and most of the code was touching these borders in some way. You are asking us to add one more border. This is a considerable expense. From experience, this new border will end up touching a lot of code. Also from experience, concepts from this new border *WILL* leak into other parts of code, for performance reasons, because someone didn't know any better, or because someone was in a hurry to fix a bug. By tying WinRT to a new type of border you are putting us and other C++ developers into a position where we have to choose between using WinRT and living with the consequences of the new border (reaping the benefits of C++/CX, but also incurring heavy costs), or writing our own wrappers for WinRT, or maybe not using WinRT at all. WRL is a non-option, it does so little that using WRL is equivalent to writing our own wrappers. Saying that something should only be used 'just on the border' isn't saying much. Plus, getting back to the topic of the thread, the central question is not where we should be using C++/CX. We heard you on that already. The central question is why we should use C++/CX and why you didn't do everything in C++.
In sum, Herb, I appreciate your posting here, but could you please read the thread and do another posting addressing the actual issues discussed here.
Thank you.
I second PFYB. Thanks so much for the detailed and thoughtful post!
Charles, thanks a lot for bringing up Herb Sutter, this is much appreciated, but, Herb, could you please read the thread and post on the specific issues raised? Thank you.
One more word on borders. The application I am currently working on uses a modern, modular structure. We have a separate data access layer into which we can plug in different data providers. We have a separate presentation layer which we connect to the desktop interface and the web interface (web is actually several interfaces). What you would normally call business logic, of course, is also a separate layer. Do you know what we are working on most of the time? The interface layer and the ties between that layer and everything else. If WinRT was not about the UI (plus so many different other things), if it was more on the level of, for example, OLE DB, I would be more sympathetic to the argument that using C++/CX to access WinRT is no big deal. As it stands, however, the main and most compelling reason to use WinRT is to use the new Metro UI, and I just know that the border at the level of the UI is huge.
@new2STL, like C64, I support PFYB in that he been very constructive in this thread thus far and I certainly share the principles he is promoting.
I will post again later when I have more time but in short I do no share Herb's views on this but I will elaborate soon.
@PFYB: That is very well summed up - you are really hitting the point here.
@Warren: You are absolutely correct. This is one of my big concerns too.
STL: Thanks for the confirmation.
@PFYB my suggestion to you, could you please list those questions, bold them out
and number them so there will be no wriggling out/avoiding/omitting excuse anymore as to which and what specific questions has been asked. Then in reply mr herb could also number his answers appropriately and this would (I think) greatly simplified whole communication "thing".
@KMNY_a_ha: Well, this is the question, right? ->
Why, instead of doing C++/CX, did you not use a design that requires no language extensions, eg, by generating C++ wrapper code for consuming WinRT components (similarly to #import) and generating C++ wrapper code for exposing WinRT components (the reverse of #import)?
C
@Hi Charles, thanks for reply.
The question/s is/are (depends on how you read it, as a one question with branches or few separate questions said it in one breath):++.
Thank you.
P.S.
If you could wait couple of days till other people get the chance and ask their questions, and then if you could compile them into one post, post this compilation with numbered questions here on this thread and then forward it to mr sutter get his numbered replies and post them here would be really appreciated.
C++ Rules and Rocks!
@PFYB: great observations.
Ironically, MS extensions are restrictions. They extend the language with constructs nobody wants, at the same time restricting it where everybody wants. They are also viral. Once put in some places they severely restrict what you can do with that code and that tend to propagate from that boundary and require efforts to prevent the bleeding via probably the same firewalls of raw pointers and built-in types that MS considers so 1990.
Didn't Jim Springfield explain the #import (for consumption) problem in his post on the VC blog (authoring complexity)?
The #import feature that was available in VC6 provides a good mechanism for consuming COM objects that have a type library. We thought about providing something similar for the Windows Runtime (which uses a new .winmd file), but while that could provide a good consumption experience, it does nothing for authoring. Given that Windows is moving to a model where many things are asynchronous, authoring of callbacks is very important and there aren't many consumption scenarios that wouldn't include at least some authoring. Also, authoring is very important for writing UI applications as each page and user-defined control is a class derived from an existing Runtime class.
C
@KMNY_a_ha: We do need to be respectful of Herb's time and ensure the questions presented are clear and have not already been answered (like the #import question, which Jim Springfield seemingly answered already, right?).
C
Charles that question was only *partly* answered by Jim.
As PFYB says it's needed to generate code wrappers in both ways. One way is through the #import directive which allows you to *consume* WinRT interfaces through ISO C++ wrappers. So you will get set of automatically generated C++ classes with inheritance and exceptions and you can easily use them in the C++ way. There is no need for Platform::String etc. because wrappers accept standard C++ types like std::wstring and transform it to ABI safe structures before they call WinRT raw interfaces. The implementation for derived members calls QueryInterface. Exceptions are thrown when HRESULT calls fail. Most of this functionality is already implemented for a long time (Jim says since VC6). It only needs to be updated to understand the new metadata format (.winmd) and new concepts (COM inheritance). This would be the true WinRT projection to C++. Jim and others indirectly conclude that this would be possible.
Opposite way is also needed. Developer writes ISO C++ class and wants to expose it to other language or wants to handle events. Jim is calling this *authoring*. So in short you have to write tool which parses the C++ code and generates wrapping COM interfaces for classes that are marked for export (By __declspec or by using base class with special name). The wrapper implementation catches all exceptions and transforms them to bad HRESULTs. Exposes WinRT interfaces with ABI safe types. And also generates metadata.
Now this approach has many advantages
- uses only ISO C++. Very easy to use. All COM-related code is automatically generated/hidden. No CX extensions, no Platform::Strings
- If the functionality is implemented in separate tools one can use older VS compilers or even third party compilers
- It seems that it would be easier to implement this than whole C++/CX thing
So we would like to hear what are the problems with this approach and why it was not preffered over CX extensions. I hope my english is clear enough here.
@Everyone:
It's not about *no* language extensions, it's about the *right*, *necessary*, and *justified* ones. *Right* means I would support even *more* extensions if they could be *justified* and were correct, or *none* if they weren't.
I hope this dialogue means Microsoft is equally open to this. To that end I hope Microsoft (at least Herb) is making notes, as people (at least me) will ask for them later as evidence.
My own opinion is this: it could be that MORE extensions may be necessary beyond what MS has proprosed, but if we can show that NO or less extensions are needed, then that's even better iff it achieves the same and right goals. But either is fine, if it is right.
In any event, the bar has to be high. MS needs to be challenged to be sure its right and more so when MS doesn't have a history of getting it right enough. COM was half baked, and .NET is half performant. C++/CX is what COM should have been, but it is a day late and still a dollar short as I hope to explain. It needs to be not just good for MS, but good for C++ and its customers.
I agree with the "lost decade" people have commented about as Microsoft has persued this and got itself tangled up all the time.
@Herb / Microsoft. Here's are my concerns:
1. Why is a "Foreign Object Model" that is SO foreign to C++ that it causes so much problems justified? We can't afford more lost decades.
2. Is this because MS needs to support its legacy COM code and .NET? If so, I'm not sure that's reason enough for C++ to suffer or MS to keep persuing thing this this way. If MS didn't have a legacy, would C++/CX be the solution? Wouldn't the evolution of a module based C++ have been this deal?
3. C++/CX does a lot for COM, but I'm not sure it does enough for C++. It creates two different but equivalent concepts in C++ that will have to talk to each other - as Herb says himself. This is rarely a good idea. C# does not have this issue. It does not have two equivalent concepts of class surfaced in the language. Why is this good for C++? I do not believe C# is tied to Windows, so does C++ need a type that is? A ref class is a logical competitor to a C++ class.
4. C++/CX provides insufficient means to navigate between an ISO class and a COM class, when they surely have to - Herb made the case himself. This is a key statement: this can and must be nearly transparent for this thing to fly even if one were to agree the other key issue that C++/CX and COM was neccessary at all (and thats not agreed yet either). What justification is there that it cannot be transparent? Is it good enough justification? If it can't be transparent, shouldn't ref class go? Anything that leaves C++ with a significant wrapping burden of virtually any kind at this level is bad for C++. C# doesn't cause this. Why should C++?
I believe MORE language extensions might be the answer here and some of them could be standard. As an Adapter of some kind? This could go in the language and not in the compiler/tool that does the adaptation. This makes it useful elsewhere and on other platforms. Should other platforms have C++/HP/IBM? Why not one solution for all? Smarter people than me, please test this idea, I don't claim to have had time to think it through but fleshing the idea out together is what the ISO team does, and that's us. BUT:
5. If/is ISO C++ so broken that it needs a whole new object concept? If so, shouldn't we fix it? Clearly the dialogue of opinions suggests yes. If COM is that fix, wont that be the future? If so, great, lets standardize it. If C++ isn't broken enough that it needs a new object concept, how can Microsoft justify it needs it? In either case, class and ref class are equivlant concepts and standard or not they need to connect more seemlessly; ideally, transparently. The Adapter was one rough idea to bring/map the two together, but ideas are needed, ISO or otherwise, to make the friction of two class concepts less, regardless if they come about in either standard or non standard form.
6. MS is clearly (probably with some informal WG21 team blessing? Everyone?) creating a language within a language (again) - by creating yet more notions of a class. If that allows MS to "innovate" in their own sub language and keeps MS from messing with the core language so both can evolve freely and independently, that *might* be something. But would it? C++/CLI never acheived that goal, and here we are with COM/C++/CX doing it again. Is it worth it? and can the risk or effort be reduced beyond what is currently proposed to allow it?
If WG21 informally agrees to a language within a language, maybe they need to sell this too or have the co-existance better planned? Bjarne Kanobi, you are our only hope?!
In summary:
The goal of Componentisation (C++/CX) is good.
C++ has a concept of it.
More is needed of it.
Interopability is good, without it C++ and Componentisation is missing a market.
C++ needs to be better at it.
BUT:
We can't allow MS to make the Component the next Platform battle field.
C++/CX is not A library, it is The Definition of Library.
Standard C++ cannot afford to give up the concept of Component to Microsoft, or it will be at odds with itself.
Introducing two classes that aren't seemless is the start of that.
So: @Charles for Herb.
1. The case for COM (C++/CX) has to be made. Is it made?
2. If the case for COM is made, shouldn't it be standard?
3. If the case can't be made or made standard, should MS be using it, or forcing it on the rest of us? Why is that good for us? Says who?
4. Either way, aren't a class and a ref class equivalent (no, not the same)?
5. Aren't class and ref class competitive?
6. If either triumphs, they have to talk to each other still (or its an agenda) and one is work for the other.
7. That is a concern and it will grow. Why can't or shouldn't the connection between class and ref class be made transparent?
8. If it can't be transparent? Is this worth it or the right solution?
9. Have we made our case to Microsoft? If so, why so? If not, why not? What's the plan?
10. Thanks.
Many good posts.
@Charles:
One thing that I want to make clear:
"... (like the #import question, which Jim Springfield seemingly answered already, right?)"
No. See the post from Tomas. Very shortly, Jim is saying that the problem has two parts (consuming and authoring) and that #import solves only one part (consuming). We agree that #import solves only one part, but point out that a feature similar to #import - generating code that is facing out of C++, instead of into it - solves the second part. Later in the thread on vcblog, Jim seems to acknowledge that the proposed approach is possible and would work.
The questions thus are:
Why wasn't it done this way? What aspect of C++/CX can not be done using ISO C++, generating the code both ways as proposed?
Hope this makes it clear.
@Charles, Hi, I'm sure lads from C++ community will do their best to formulate them as clearly and succinctly as possible.
Regards
Ok, simple, concise questions for the Microsoft team:
1: Do you have any actual experience with the whole "only use C++/CX at the boundary" thing? Are Microsoft's WinRT components written in this way? Or did you just go all-out C++/CX for those because "it wasn't important", and then made sure that *in theory* it is possible to restrict C++/CX to the boundary?
2: Was an approach as discussed by PFYB and apparently discussed with Jim Springfield on the VCBlog considered? Could it be considered for a future release?
3: What about a "Friendly WRL"? WRL gives full access to the WinRT plumbing, and the ability to do things that C++/CX can't do. But for the most part, we don't need that. Most likely, we don't even need all that C++/CX does. So, could a smaller, simpler, cleaner, more user-friendly library be developed which, instead of doing everything WRL does, tries to do *roughly* all that C++/CX does? Cutting a few corners is fine, we can always dive into WRL proper if we need those last bits of functionality. But something that made the commonly used 90% accessible to ISO C++ developers in a more streamlined and clean way? That'd be nice. Has that been looked into?
@Glen: as nice as it'd be, all the talk about "do it the right way, and get it standardized" is just unrealistic. "getting it standardized" isn't easy, and there's a lot of opposition in the committee against just adding arbitrary features to C++. The feeling is that the language is already (more than) big enough, so increasing the scope of the standard (for example to encompass COM-like interop functionality) is probably unlikely. And further, even if Microsoft comes up with something that everyone on the committee loves, it'd take the better part of a decade to get it standardized. And Microsoft obviously can't wait that long before making WinRT available to C++ developers.
Thanks for the clear questions. I'll pass along.
C
Hi @Grumpy (say Hi to Dopey for me!).
I don't think we are in any disagreement. :)
My emphasis was as much about "doing it the right way" as it was about getting it standardized.
To illustrate, I said if more non standard language extensions made it the right way, I'd support it over something that was done the wrong way with less language extensions.
I was emphasising that the focus has to be about getting it right first, however you do it, because standardizing something that's wrong will be hard and using something that is wrong will be even harder.
So I share your suspicion that WG21 will never make COM standard. That's why I called for those relevant who support it in that context, or not, to come out and make the case.
It's also why I pointed out C++/CX forks the road with two notions of class and how I fail to see how it makes logical sense to augment C++ with two non transparent notions of class.
To maintain an ISO code base, a ref class can only ever delegate to an ISO class, so Microsoft will always be at every border and C++ developers will always be fowarding/wrapping to an ISO class to maintain an ISO code base or giving up all together on that last part. That's not good fo C++.
C# has none of this problem so why wouldn't people prefer that in many situations where C++ could have been equally good. That's not good for C++.
C++/CX appears to me as a play for defining how libraries should be constructed in general. To me that means Microsoft wants to own the border of every library not just theirs.
I fail to see how that is good, but for C++ developers that is terrible if they have to construct the border manually to maintain an ISO c++ code base.
Regardless, what we have to date remains insufficiently baked and unproven to be right. I don't see any disagreement between us there.
COM1 was half baked and left C++ devs suffering for years doing the the wiring and the plumbing.
COM2 is insufficiently baked and still leaves C++ developers doing the wiring.
C# has none of this nonsense surfaced. One class.
I am over Microsoft rushing to market with half finished/tested ideas that cause the industry vast amounts of pain, from which they walk away from a while later anyway or extend in an incompatible way, leaving everyone with the mess, including themselves.
If C++/CX is just a plan to clean up some of that mess, it's not enough. It needs to clean up more of it.
I think us C++ devs should work out what is right here, regardless of what Microsoft is doing. If they want to go against that, good luck to them. If that challenge instead proves Microsoft right, even better..
Do the little things matter? Which customers do I upset with the "not nice" names - my MS customers or my ISO customers? Or just myself because I've brought the two for one deal and polluted the name space with twice as many class names plus I need to remember what that attribute is to fix it and go add it, assuming there is one. For every class in my library.
class UmHotTubTimeMachineToo
{
public:
void AddPerson( const wstring& personName );
private:
vector< wstring > peopleInHotTub;
};
ref class HotTubTimeMachine
{
private:
UmHotTubTimeMachineToo ht;
public:
void AddPerson( String^ personName );
};
void HotTubTimeMachine::AddPerson(String^ personName)
{
ht.AddPerson( personName );
}
void UmHotTubTimeMachineToo::AddPerson(const wstring& personName)
{
peopleInHotTub.insert( personName );
}
I agree with Glen.
> I think us C++ devs should work out what is right here,
> regardless of what Microsoft is doing. If they want to
> go against that, good luck to them.
I agree. I am working it out right now reading this thread. A lot depends on how Herb Sutter answers the questions put forward by the posters here.
>.
I second this too. Walking away from Visual C++ is definitely an option we are considering. We have been burned too many times with things like C++/CLI and now C++/CX, at some point you've got to say enough.
I would also like to add my voice to the first question by grumpy. With all due respect to Herb, Marian and other Microsoft speakers, what I am hearing from you on borders does not match my own experience. The border for a thing as huge as WinRT is going to be similarly huge. I don't see how one can hope to isolate it from the rest of the codebase short of putting all WinRT stuff into a separate module and guarding that module with a heavy facade (likely organized with C calls and custom handle-like things, as Garfield says, erasing half of the benefits of using the language extension in the first place).
Do you have any examples of large applications that are now written in C++ and that are going to be moved to WinRT? If using WinRT (and C++/CX) on the border is as simple as you imply, there should be such applications, right? I mean, who wouldn't want to be first to use the Metro UI.
For the record, this never happened with C++/CLI. The only example of a large application that went from C++ to partly .NET, that I know of, is Visual Studio itself, with its 2010 version. It took a lot of time for the team to even contemplate the move and the results were not good. But maybe C++/CX is different.
So, once again, the question is: do you have any examples of large applications that are now written in C++ and that are going to be moved to WinRT?
Thank you.
Charles, thanks.
Looks interesting.
And, we have finally obtained (not directly) long awaiting answer to the question: Is COM Dead?
Charles, should we expect an answer from Herb Sutter? I would suggest a discussion in this thread, since it seems to me that the issues at hand are simply too complex for a single all-encompassing answer, a bit of back-and-forth posts would work much better. I would argue that Herb Sutter, the team responsible for C++/CX, and Microsoft in general would benefit from such a discussion at least as much as us here, but... should we expect anything at all?
As you can imagine, Herb and co are a tad busy. Please be patient. I don't want to pester Herb every day. That said, I agree with you. We should have this conversation.
How would you feel about us bringing this up in the next episode of GN? You could provide a list of concise questions that we will get the answers to. The number of replies on this specific thread are making it hard to follow along. Jim Springfield won't come on the show (he doesn't want to be on camera and we respect that). Herb's schedule is impossibly complicated and I doubt we will get him, but we can try! So... What do you think?
C
@PFYB A discussion thread is an excellent idea, essential.
@Charles, a future going native show would be great!
@Everyone
It's essential the discussion thread come first because the community needs the time and chance to build consensus and clarity of the best ideas and opinions we think Herb should answer to. We have to argue with ourselves first to get the best out; before we waste Herbs time. He and others can always peep in too to see whats coming or to contribute.
Doing a show too early doesn't allow us to arrive at consensus or clarity and we'll end up presenting Herb with a mix of confused ideas or questions that are not thought through enough or where there is no clear consensus to help Herb prioritise his time on, or where the best ideas still haven't arrived or evolved yet.
If that happens, we will waste Herbs time with duff or unevolved ideas and questions that he'll reject. Yet we will still feel angry at him because we are still thinking the better answers are out there or he missed it. That wont be fair to him or us but he'll get the blame and we'll keep frothing at him; and you don't know how many goes you'll get at this.
This is too important for us or MS to mess up.
Definitely a discussion thread first, an are you ready? question next, then a show to follow. Rinse and repeat if necessary! lol
Cool stuff.
Thanks!
OK. This is going to happen - addressing these questions on GoingNative. What I would like to do is the following (this makes my life easier).
@PFYB: Please send me an email with your specific, technical, clear, concise feedback and questions. Just email me at ctorre AT microsoft DOT com. OK? Since others on this thread agree with you, there's no need for folks to send the same questions to me. If you have different technical, clear, concise questions about the C++/CX implementation details, then send them, too. I would ask that you be respectful and not send me rant mails. You can get your rant on here
Cool?
BTW, I spoke to soon about Jim not wanting to be on camera. Perhaps I shouldn't have declared defeat before trying... My apologies to Jim and to you for sharing this assumption. At any rate, we're working to see if we can make this happen for the next episode (could be it doesn't work out due to timing, but it WILL work out at some point).
For now, please send the technical, clear, concise feeback and questions (especially) directly to me so I have a single place to see questions and not have to parse a thread with over 100 replies and growing.
We asked you in our very first episode to help us fly this plane. We weren't kidding, my friends. Thanks for the help!
C
@PFYB:)?
***
Yes, the "extend TLB #import" option was one of a dozen-plus major alternatives that we considered in considerable detail, including doing some prototyping of a few variations.
The short answer is that to enable authoring (create new delegates, classes, etc.) in any model inherently requires a lot more information than consumption, because authoring requires complete information to generate the metadata and complex underlying structures. That information has to come from the user, which means he has to provide it somehow -- and again, this is inherent in the problem regardless of the programming model that's chosen. If you pursue the "extend #import" path, you can extend #import itself easy for consumption (which is the easy part) but then for authoring basically end up reinventing a non-ISO C++ MIDL-like language/compiler/toolchain or its equivalent. Not only doesn't that shield the programmer from having to provide all the nonportable non-ISO C++ information he has to provide in any model, but from experience with MIDL et al. we know that he ends up with a programming model that isn't actually simpler in concept count or overall complexity, whose run time performance is often slower (e.g., compiler isn't able to optimize ref counts of smart pointers), and whose tool support is inferior (e.g., debuggers, designers). Now, it would have been simpler for us to implement, which is one major reason we almost did it, but would have been harder to use for authoring and slower.
Longer answer follows:
Yes, you can do it... the trouble is the death by 1K paper cuts, because you have to account for detail after detail after detail that needs to be taken care of where the WinRT object model is intricate and fundamentally foreign to the C++ object model (e.g., requires, inheritance, the this pointer, retval mapping, exception mapping, partial classes, more).
Trying to summarize a months-long design discussion briefly, some of the major (but not only) drawbacks to that approach were:
So it's easy to say "reverse of #import," but you can't escape the inherent complexity in the problem domain that the user has to be able to express, regardless of the model. So to be viable this path has to grow a full-blown "actually a full MIDL language/compiler and toolchain" model. That doesn't save the user from expressing all the nonportable code and extensions anyway, but would require him to use a more complex toolchain, learn more variant language annotations/extensions, write more code to do the same job, often run more slowly, and have inferior debug and other tool support.
Having said that, what if you really only want to do consumption, which #import does make pretty nice, and wish you had a #import model? Then my answer is that we basically have it: The C++/CX #using story is actually similarly lightweight. If really want to do "only consumption" in C++/CX you can ignore all of the other extensions besides just #using, ^ and ref new. Done, everything else is seamless -- interface "requires" and class inheritance are exposed as normal C++ inheritance, [out,retval] is wrapped to a return type, HRESULTs are mapped to exceptions, etc. Maybe that's an option to consider.
And there's definitely WRL.
Thanks,
Herb
Wow. I think that about takes care of that. Thank you, Herb!
"Patience, grasshopper"... I should of taken my own advice...
So, let us know what you want for the next episode. Do we really need to cover this now that Herb has answered the key question - and very clearly and in detail?
C
PS: I'll answer my own question: Yes. Yes we will. We will talk to Jim about C++/CX internals and implementation (this current episode was focused on the design) and learn more about the overall system that it targets... If you still have technical questions after absorbing and digesting Herb's reply, feel free to send them along.
@Charles
It sounds like you just ignored my opinion that that wasn't the only key question, and that preparation is required, and that a discussion thread exactly allows for that?
I saw no acknowledgement of that at least.
Until then I said I suspect most answers Herb gives wont satisfy.
Infact, I'll ask. @Everyone: Who's satisified with the latest answer? I suspect nobody (that doesn't mean Herb is wrong). But until people have thrashed out with clarity amongst themselves what the issues are and the potential solutions - the things that it took Herb a month to thrash out with his team, we wont get it.
Best thing Herb can do is publish the C++/CX Design Rationale (his conclusion of that month of thrashing) and let us pick that to death before he bakes the pie.
In absense of that and regardless of that, the rest of us are best thrashing that out ourselves and see where we agree with him.
There are other key questions. Sending our questions all to you or Herb in private doesn't allow people to work together to share and evolve the right ideas together and develop a public understanding and consensus.
Not all of us are stressed about extensions, for example, (as I have made every effort to say already), but some of us are stressed that they need to be the right ones.
Now before you go and ask me what they are: the answer is already given. I'm not sure yet, its complex as Herb says.
But we need the input from other people to help agree with others and formulate for ourselves what the key questions are.
I can't even be sure that my own KEY issues actually are KEY issues for me or anyone else without that discussion and input from the other guys here, let alone anybody elses.
A discussion thread is what is needed and thats how we help you fly the plane without waisting Herbs time.
Who agrees / disagrees?
Does anybody feel stupid after Herb's explanation? It's like looking at the hands of an illusionist :-) We know that CodeGear C++ Builder or Qt models feel like standard C++, consumed and authored like almost standard C++, debugging of that code is just like any other C++ code. Yet, C++/CX being the one true way, is causing shivers down the spine. Is the real reason for the existence of CLI/CX/whatever-comes-next in the fact that "WinRT object model is intricate and fundamentally foreign to the C++ object model," and therefore no matter what we do, we'll have to torture C++ into submission to that object model? I feel like demanding, give me the old C WinAPI back, I know how to wrap it into C++ classes, I know how to create events and properties in C++, and keep the CX, thank you.
Well Houdini,
THAT is exactly a KEY question that I already asked earlier. "Is is any object model that is so Foreign to C++ justified.".
However, C++/CX isn't a QT or a Win32, it isn't a library. It is a wrapper for ALL libraries (feel free to argue that), and yes it does require you to contort your C++ do support it.
Is that "contortion" justified, how can or should it be automated is the question that leads to a million other questions.
Not sure why you're saying this... I'll quote myself: "If you have different technical, clear, concise questions about the C++/CX implementation details, then send them, too."
In terms of your question "is any object model that is so foreign to C++ justified", I'm not sure you're asking the right people. Herb and the VC++ team didn't design WinRT. The Windows team did. And their design isn't as foreign to C++ as you seem to suggest... Think about the problems they set out to solve. Watch some BUILD sessions on WinRT internals. Read some WinRT documentation. Perhaps you already have, so forgive me if I'm being pedantic.
The topic here is C++/CX. Now, obviously, C++/CX exists only to make C++ programming on WinRT productive for C++ developers - WinRT objects written in C++ are readily consumable by other Metro components, some of which will be written in VERY different languages and running in foreign virtual machines (CLR, Chakra)... You can write low level COM to get the job done using WRL. We keep saying this for a reason. That strategy, however, hasn't been fully documented yet, so it's fair for you to ask "but what does that really mean in practice?"
Herb clearly explained the rationale behind not employing one proposed model to solve the problems at hand (consuming and authoring shared components in the "Windows 8 Metro style"). Right? What's missing from his answer, exactly? Again, we're not talking about the design decisions behind WinRT itself. You can imagine a lot of thinking and planning and high quality engineering went into that design, too.... Step back and think about what WinRT affords: efficient interop between very different runtimes in a coherent, predictable and reliable application execution environment. Why should C++ developers have to write the gnarliest code to play in this modern sandbox? Can't C++ developers get some productivity with their coffee, too? This part of the equation is seemingly ignored in this thread... Does productivity not matter to you at all? Of course it does!
This thread is not the right venue for the WinRT conversation... That must happen in a place where Windows engineers frequent - the Windows 8 forums.
Finally, we will sit down with Jim and hash this out - the C++/CX internals stuff. What questions - related to C++/CX's implementation - do you have for Jim?
C
Charles
I read what you said the first time, but did you read what I said in reply? If you did, I don't understand why you repeated yourself?
You said:
"If you have different technical, clear, concise questions about the C++/CX implementation details, then send them, too."
To that, I'll quote myself:
"There are other key questions. Sending our questions all to you or Herb in private doesn't allow people to work together to share and evolve the right ideas together and develop a public understanding and consensus."
Which bit of that didn't you get?
@Glen: You made it sound like you were being ignored. The point of the email is so I can have a single source list of questions to raise in the interview with Jim Springfield. You are free, as a group, to have any conversation you'd like. For my selfish purposes, this particular thread is becoming a bit large and cumbersome (what are the questions, again? Wait, didn't we already answer that one? Etc....). Productivity is important to a content schmuck like me, too...
C
@Charles && Everyone:
Could any of you ask mr sutter the question I've asked and didn't get an answer to it:
Ok, the question was/is:++.
Regards
This is madness.
First, I read a post by Charles and this post gives me hope.
Then, I read a post by Herb and I seee that he more or less ignores everything we are saying without any deep thoughts and essentially answers us with a single phrase -- "Saying we should do "the reverse of #import" is easy to say, but elides a lot of details." -- then spends the rest of the post either restating points that have already been discussed a number of times, or bringing up tangential issues... Sigh.
Sorry in advance for the long post, I split it into parts for easier navigation.
TLDR: I am deeply dissatisfied with the post from Herb. I am tired of answering the same old points ad infinitum. I suspect Herb didn't read the thread. I suspect he won't. Such is the "discussion" we are having here.
Part 1: old points: rehashed
Let's walk through the bullet list. The first half consists completely of things we have already discussed at length. I get an impression that Herb did not read the thread at all. In fact, I get an impression he didn't even read my reply. Here goes:
* "The "more TLB #import" path is wonderful, but for limited consumption only, where by "limited consumption" I mean the ability to use someone else's WinRT API minus delegates and events." -- We have been over this already a number of times. With #import, one can consume WinRT API. Since delegates and events require exposing parts of your own code to WinRT, #import does not help with that, so if you only have #import you can not consume delegates and events. You have to have something else besides #import. We suggest this something else can be the reverse of #import. With both #import and the reverse of #import, one can consume WinRT API including delegates and events.
* "Whatever programming model(s) we chose had to also support what we call "high-level" presentation of a WinRT API. The major (but far from only) requirements here are to map the [out,retval] parameters to return values, and mapping HRESULT returns to exceptions -- and both of those in both directions ..." -- We have been over this a number of times as well. Wrapper code generated by #import and the reverse of #import absolutely can map [out,retval] parameters to return values and vice versa. Similarly, it absolutely can map HRESULTs to exceptions and vice versa.
"This is difficult to do in a #import model; what you need is a toolchain like MIDL or another C++ compiler front-end, which still knows about more information-carrying extensions that the user must write and can parse them out." -- Yes, it is difficult to do the reverse of #import with #import (doh). Nobody suggests you do this. We suggest you generate wrapper code for exposing C++ classes to WinRT in the compiler, using a logic similar to the one you employ for generating metadata out of ref classes. Alternatively, move that logic into a separate tool. Before you say so, yes, we realize this logic is going to be different than the one you are using for ref classes right now. In some respects, the new logic would be more complex, since C++ is more expressive than C++/CX. On the other hand, the new logic would only have to support C++ without the features added by C++/CX.
* "A major problem is that WinRT inheritance is a very complex pattern, and nearly impossible to do by hand. It's actually implemented using COM aggregation, plus a complex convention of private helper interfaces under the covers that must be followed exactly." -- It seems we have to restate *again* that we suggest using a combination of #import (not relevant to this point, but I will repeat it) and the reverse of #import. With the reverse of #import, we suggest you take classes with the natural C++ inheritance and turn them into classes with WinRT inheritance. Can this be done? Yes. Does it matter how hard it would be to do this by hand? No, since we suggest that you do this via an automatic tool.
* "Similarly major is interface "requires": Classes implementing an interface, and other interfaces extending an interface, do not really derive from the interface. Instead, they use a "requires" relationship expressed through metadata + QueryInterface, and it isn't a simple matter of emitting a note because it affects code generation, metadata generation, etc." -- Ooohh... Again, we suggest that you take care of this by generating wrapper code for C++ classes that have to be exposed to WinRT. Can this be done? Yes.
Seriously, Herb, we have been over all of the above points at least twice, I can't believe you are still bringing them up without any changes. If you, say, showed us example C++ code with inheritance which would somehow translate into WinRT badly, that would be quite different, but you are simply reiterating the points which we believe we have already answered, more than once! Why do this? Please read the thread. Sigh.
Part 2: new points: tangential or bizarre
The rest of the list consists of relatively new points, however, most of them are very tangential to C++ and neither is convincing in the least. Here goes:
* "Bunch of other language stuff. For example, there's a nest of thorny issues around the conceptually simple problem of the "this" pointer: What's its exact type? (Note: There are 2-3 obviously correct answers because there are 2-3 different subsets of the use cases, and it turns out that each obvious answer is clearly the right one for its uses cases but fundamentally breaks on the other ones. Sigh.) How do you pass it accurately to another WinRT component, that may be written in another language? etc." -- It is not clear at all what you are talking about. It is even less clear how this advances the case for C++/CX. You say that no matter how one answers the question you are posing, he is wrong, and there are equally valid alternative points of view. Good. What is it exactly that C++/CX does right here that C++ can't do? If you are talking about the problem of identity as regards an object implementing multiple interfaces, this is not a new problem neither for C++, nor for COM, nor for WinRT. This problem certainly exists in C++/CX as well. If you are talking about something else, please be more specific.
* "Then there's debugging: Briefly, it would make debugger integration and the debugging experience a lot worse. As a very simple example, you'd have to see and step through all the generated wrapper layers and ugliness, whereas we can effectively hide them in the shipping design so that you generally can simply debug the code you wrote. We have some techniques to make some library calls "step-through invisible" but we couldn't cover enough to make it anywhere close to "debug only your own code" seamless." -- First: What??? You break the language with an extension that is not going to be supported by anybody but Microsoft, thus denying us use of any debuggers and helper tools except those you have chosen to provide yourself (and your debugger does have its own set of bugs and performance problems, mind you), yet you are talking about providing better debugging experience?! You don't even consider this angle, do you? This angle is huge.
But let's try and think about what the debugging experience with wrapper classes would actually be, as compared to C++/CX:
a) You make a WinRT call implemented by the system. In C++/CX you do F10 and get a result. In C++ you can do F10 and get a result (same as in C++/CX), or you can do F11 and step through a couple of times into the wrapper code until you meet the actual system call, then you do F10 and step out. No difference.
b) You accept a call from WinRT into your own function. In C++/CX you place a breakpoint into your function, wait, get hit, go from there. In C++ you do exactly the same. No difference.
c) You make a WinRT call which eventually returns to your own function? Let's try. In C++/CX you place a breakpoint into the callback function which is supposed to be eventually called through WinRT, then F10 over the original call, get a hit on the breakpoint, and proceed from there. If you look into the stack, you will see your original call, a bunch of WinRT calls, then your callback. OK. In C++ you place a breakpoint into the callback, then F10 over the original call, get a hit on the breakpoint, and proceed from there. In the stack, you see your original code, a bunch of WinRT calls and calls through wrapper code (this is where the difference is), then your callback. So, that's it, a couple of extra calls that you see in the call stack? And that you may sometimes have to go back through them if you want to trace the path of your return value from your callback to your function that made the original call to WinRT? Isn't it a bit far-fetched? Is this all there is?
Which debugging scenario did you have in mind, Herb? I fail to see where exactly the debugger experience would be so much worse than with C++/CX. Yes, you will sometimes see a couple of extra entries from wrapper code on stack, and you might sometimes have to manually get out of them (and maybe - very rarely - have to manually get into them), but... is that it? If so, that's a very minor inconvenience, and certainly not a reason for forfeiting C++ in favor of a language extension.
* "Then there's other tools: It turns out that partial classes are an important ingredient in enabling design tools (e.g., the Jupiter model). The way some of our other libraries work around this using "don't edit here" comments, or "#include <mycode.h> inside a class" textual inclusion, or similar make life harder for the design tool developer while also yielding a worse user experience (e.g., very brittle round-tripping)." -- Let's disregard for the moment support for partial classes possibly coming to C++11. What's wrong with "don't edit here" comments? If you do edit the code inside those comments, you might break it so that the designer won't recognize it, fine. But if you edit the code inside a file generated by the designer, you might break it, too. This whole "story" behind partial classes is not that partial classes magically allow the designers to not be broken anymore, it is that partial classes merely take the code generated by designers out of the way. The code *can* still be broken, by an inattentive developer or by an ignorant tool. You get a couple of new problems, too, since you have to take care of additional files (eg, since your code now lives in more than one file, the files can now get out of sync, in particular, you have to remember to update / commit / checkin / checkout / whatever verbs your VCS uses these files as a whole - this is a new problem which you might not have had before). Some might say that the cure is worse than the supposed "disease". Finally, if you wanted partial classes so much (even though it baffles me as to why), given that this feature is eyed by the standard folks, there is no reason you couldn't have added partial classes to C++ instead of inventing C++/CX. Yes, we would be unhappy with you altering the language in non-standard ways, but that would be infinitely better than what you are doing with C++/CX.
* "Then performance: There are various cases, some of them common (e.g., aggressively optimizing refcounting) where the resulting code would be slower than in a language-extension in-compiler model because we wouldn't be as able to optimize away performance overheads without that lever." -- Oh. This final point is *complete* BS. Here is why:
First, "aggressively optimizing refcounting" is so small, you should not be bringing it up. Yes, AddRef / Release pairs are costly in the world of multiple CPUs, but precisely for this reason we have long knew not to AddRef / Release repeatedly in tight loops (obvious to any developer after he knows that AddRef / Release have to sync all CPUs and this costs time), and not to make interfaces chatty (obvious to any developer dealing with any kind of "borders", using your terminology). If someone is not following either of these principles on a critical path in code, his performance problems will be much more serious than just AddRef / Release. And for anyone who does follow these principles, there are no AddRef / Release pairs which you could optimize away! Seriously, saying that you are going to "aggressively optimize refcounting" is like saying that you are going to find all cases where a developer increments a variable, then immediately decrements it, without doing anything in between, and remove this. Nobody does this!
But, more importantly, let's be clear on what you are doing with C++/CX. Overlook the fact that you are taking development resources which could have been spent on C++ - eg, on making the generated code faster - and redirecting them onto your proprietary extension. Overlook this. What you are doing is much worse, you are making us write code that nothing else besides Visual C++ will be able to compile. In terms of performance, you are locking us into whatever optimizations you will choose to do in Visual C++. This lock in *alone* loses us so much performance, it is not funny! We *do* care about the performance. We *do* use other compilers and tools other than compilers (eg, code analyzers) to get the best performance we can. With C++/CX, we *won't* be able to use these compilers and tools. This *will* make our code slower than it otherwise would have been, by a significant amount. Is your compiler the best there is in terms of optimizing C++ code? No, the quality of the code you generate is fair, but that's it. In fact, on modern optimization techniques, you are way, waaaaaaaay behind others. I am not even talking about code analysis. Losing the ability to use tools other than yours can't even begin to compare with "aggressive optimizations of refcounting".
You might say that I don't have to use C++/CX in the performance-critical parts of the code. Calmly, if the performance-critical parts of the code should be in C++ and not in C++/CX, you shouldn't even be making a point on "aggressive optimizations of refcounting", since these optimizations can't play a role. If, on the contrary, the performance-critical parts of the code should be in C++/CX, see my note on your "aggressive optimizations of refcounting" being a toy and the vendor lock-in occurring from using C++/CX having much more a say in the final performance than any of optimization you could ever possibly make.
End of bullet list.
Part 3: summary
Now, I know you are tired, but let's return to this signature phrase of Herb which is essentially his answer to everything we have been putting forward in this thread:
"Saying we should do "the reverse of #import" is easy to say, but elides a lot of details."
Herb, we know that saying "the reverse of #import" elides a lot of details. We asked you to provide the details which would make doing the reverse of #import impossible or very difficult, much more difficult than doing C++/CX. This was the point of the discussion.
You have not provided these details. You restated all of the old points, many of which had no bearing to doing "the reverse of #import" at all, and have thrown in a couple of new points all of which are tangential and very questionable. You did not provide any specifics to anything.
The question still is:
Why, instead of doing C++/CX, did you not use a design that requires no language extensions, eg, by generating C++ wrapper code for consuming WinRT components (similarly to #import) and generating C++ wrapper code for exposing WinRT components (the reverse of #import)?
Can you answer this question? Sorry, but ""the reverse of #import" elides a lot of details" is not an answer.
You are free to call me a ranting fool, Charles, and you are free to say things like -- "Wow. I think that about takes care of that." -- but the truth is that no, this answer from Herb does not take care of that at all. It does not answer anything. It answers nothing at all.
Sigh.
I don't know if it makes sense to continue. I would be all too happy to make a post or email or whatever with nothing but technical points and no rants, but... for what purpose? You just don't listen. You are lecturing us.
@Charles, I don't think you read a worse I say. If someone else wants to translate to charles where the mismatch is in what I'm saying to what he's hearing, please do. If it's me, equally, tell me. I'm over it for today.
In the mean time, @PFYB good luck. I'd like to talk to you privately if we get a chance.
Charles, could you please give PFYB my email address. PFYB, drop me a line, but no worries if you'd rather not.
Charles,
> Do we really need to cover this now that Herb has answered
> the key question - and very clearly and in detail?
I strongly disagree that Herb Sutter answered the key question as put forward at the top of his own post.
I am seriously disappointed that instead of answering that question, Herb has chosen to mostly restate the same old points which have been abundantly addressed before, including in replies to Herb's previous post. Take the requirement to map [out, retval] parameters to return values. It has been explained a number of times that this is a non-issue, if you can generate wrapper code, you can take care of this. Why bring this up again? Is Herb unaware of how we are proposing to address the issue? If so, he should read the thread. Does he disagree that the way in which we propose to address the issue will solve it? If so, he should point out exactly why. If neither is the case, there is no reason to beat this dead horse again. Disappointingly, most of points in the list provided by Herb are like this.
I am also disappointed that Herb decided to pad his list with ruminations on various side-effects of the current implementation of C++/CX masquerading as genuine reasons for its existence. Such is his point on performance. Herb, do you really want to say that one of the reasons to choose C++/CX over plain ISO C++ is performance? If this has been a design goal, could you please explain to us how you fulfilled it? How exactly is C++/CX is going to be faster than C++? What did you do in this area, what measurements did you perform, what results do you have to show? Someone on the team saw a low-hanging fruit and decided he can save a pair of calls to AddRef and Release, fine, but let's not be silly here, this does not at all mean that C++/CX will suddenly beat C++ in terms of performance. If you really mean that C++/CX is genuinely faster than C++, go ahead and make your case! We are C++ developers, we will kill for performance. If you can't or won't make this case, stop pretending it has been made already and take the relevant item off your list. Stop insulting our intelligence. We aren't idiots, we can tell a genuine argument from a half-baked afterthought.
I also note that there was nothing said on borders. This issue has been raised many times on the thread and is extremely important.
I hate to do this, but just so we are really, totally clear:
> Herb clearly explained the rationale behind not employing one
> proposed model to solve the problems at hand (consuming and
> authoring shared components in the "Windows 8 Metro style").
> Right?
No.
> What's missing from his answer, exactly?
Herb didn't provide an answer to the question at the top of his post. If you think he has, please answer in your own words: can generating wrapper ISO C++ classes do everything C++/CX can do? why Microsoft have chosen to go with the latter and not the former? I can't see the answer to this in Herb's post.
We need a real answer.
Glen: my email address is, without extra tokens: prae nospam tor271 at hotmail blabla com.
@hsutter
Your post is, indeed, thoroughly disappointing. Very briefly, with my words in parens...
So it's easy to say "reverse of #import," but you can't escape the inherent complexity in the problem domain that the user has to be able to express, regardless of the model. (Really? We ask you why you think addressing that complexity while staying in C++ is worse than rolling out a language extension, and your reply is "there is complexity"? Really?)
So to be viable this path has to grow a full-blown "actually a full MIDL language/compiler and toolchain" model. (You are willing to do a full-blown compiler and toolchain for C++/CX, but not for C++? How is parsing C++ code and generating other C++ code significantly more difficult than parsing C++ code and generating metadata?)
That doesn't save the user from expressing all the nonportable code and extensions anyway, but would require him to use a more complex toolchain, learn more variant language annotations/extensions, write more code to do the same job, (Huh? How so? What is so difficult in marking up language elements you want to export. You already have declspec, you already have export "C", you can extend either. You can always use #pragma export(winrt, start) ... #pragma export(winrt, stop). There are other solutions as well. What is so difficult? Where is all the additional code you are talking about? Where is a more complex toolchain?)
often run more slowly, (How so? Are you referring to your optimization of refcounts? Seriously, what effect does this have on the final performance? Geez.)
and have inferior debug and other tool support. (Really? Inferior debug and other tool support? Can we debug C++/CX in WinDbg with the same level of support as for other non-C++ technologies like .NET? I strongly suspect we can't. We can debug C++ though. Yet you say it is C++ that gets inferior debug and not C++/CX. It is the reverse! Same for tools.)
I agree with PFYB, you, guys, just don't listen. This looks hopeless.
One thing that makes C++/CX feel icky to me is that it seems to go beyond what's strictly needed. Instead of a shared pointer (or instead of simply building the ref-counting into the C++ representations of WRT types), we get the ^, which, well, how does this play together with C++?
How do the std type_trait classes behave with hats or all the other C++/CX "magic" bits? Something like a wrt_ptr<String> would fit into the language, I'd know how it behaved, it wouldn't have weird corner cases that require me to write *more* nonstandard code. (See the list PFYB showed, of standard C++ constructs which just don't work with these "magic" types.
So by introducing non-C++ types into my C++ code, it cuts me off from writing the surrounding code in C++. And then, am I really writing C++ at all?
Another example is partial classes. They're not necessary for interop with WRT. They're not even *necessary* for designer-generated code, they just make life a bit easier for the people writing the designer. But they were crammed in there anyway, because, apparently, the cost of adding language extensions is near zero.
Is the "ref new" keyword necessary? Why not just a CreateInstance<T> function?
I really wish C++/CX hadn't tried to do so many things. I wish I could get the bare minimum of extensions required to interop with the Windows Runtime. Convenience syntax for reference-counting is not necessary, partial classes are not necessary, and magic "ref new" keywords are (as far as I can see) just not necessary to achieve the stated goal of C++/CX. They offer a very small amount of additional convenience, at the cost of introducing new syntax, and making it harder/impossible to fit C++/CX code into C++.
And I wish that these extensions had been more cleanly separated from C++. A different file extension, perhaps, would have reinforced the notion that C++/CX is intended for use at the boundaries only. Say we use the .cx extension for C++/CX code. Then the compiler can default to C++/CX extensions enabled on .cx files, and disabled on .cpp files. And then we would have some relatively clear separation. Something that can still be overridden if need be, but which, by default, slaps us over the wrist if we accidentally start writing hats or other CX syntax into our "core" C++ code. Intentionally or not, I'm concerned that C++/CX effectively breaks C++ code.
Sadly, I still get the impression that Microsoft were working under the assumption that "as long as it's native, everything is ok". I get the impression that to Microsoft, the whole "boundary only" thing was of academic interest only: that none of Microsoft's code was actually developed this way, and that no thought was given as to how this separation could be enforced, either by the toolset or in the design of the language extensions.
Anyway, I think most of the frustrations with C++/CX have been made clear by now. Would it be possible (either here or somewhere else) to start a constructive discussion about how this could/should evolve in the future? If C++/CX 2.0 could be developed with a b it more community input and feedback, it might be better received.
Like I said, I think a clearer separation between C++ and C++/CX would go a long way towards fixing my concerns. And with a few simple extensions, standard C++ wrappers for many C++/CX constructs could be created, allowing C++/CX code to interact more naturally with C++.
I've just discovered this very interesting thread, and spent quite a while to read it. I would like to thank Herb, Charles, et al. because they want to hear what we have to say. I'm not sure the Windows guys are listening to what we say.
Here is what I think about C++/CX, for what it's worth. I totally agree with what other C++ devs say here.
I have written a few C++/CX lines of code, and the feeling I have is, a C++ app which uses WinRT is not a C++ App. It is not using C++, it is something else.
At some point I was trying to figure out how to use one particular WinRT object. I found a Javascript sample, nothing in C++ or C#. The Javascript sample let me understand how it works. I was able to translate the code from Javascript to C++/CX and C# on a line by line basis. That was straitforward. Same WinRT types, same instructions, the syntax differs. So with C++/CX we can share our code base with C# and Javascript developers, isn't that wonderful?
I haven't written a full featured app in C++/CX with a data layer, presentation layer, etc, Only a few code samples. But in all the code samples I wrote, there was no "thin boundary" to WinRT. All the code had to use C++/CX in some way: an API call, a callback or a system type (I hate Platform::String). WinRT is using async calls, this is great, but it is WinRT C++/CX parallelism. Of course I could have created some layers, but the WinRT interface layer and the layer boundaries would have been far bigger than ISO C++ code itself. This is really worrying me.
The other thing that is worrying me is, you can translate C++/CX code into C# and Javascript on a line by line basis. I haven't tried VB though. I don't like this "one size fits all" idea. 10+ years ago there was Visual C++ on one side, and Visual Basic on another. 2 different languages for different purposes. Then .Net guys thought one pattern would fit all. They even tried to build Windows in C#/.Net, but they failed. Then they had this great idea of writing .Net framework in native code, and now they can build Windows using .Net, well WinRT. And this great marketing phrase: you can use fast and fluid C#, Javascript, VB, C++, whichever you like, to build Windows Applications!
This is worrying me because Microsoft already said that 10 years ago with .Net. Problem is, there is only one model, and the languages (except C#) have to be distorted to fit WinRT concepts.
In June 2011, when I heard rumors that Microsoft would go native, I thought Windows guys would build some native framework we could use in C++, and build a CLR layer on top of it so it could be used in C#. But Windows team did not do this. They built a native framework with a CLR-compatible interface (WinRT). And we are trying to use this interface in C++, although it is designed for C#. How disappointed I was two months ago!
This is the first point. OK it has nothing to do with C++ team, it is Windows team.
The second point is C++/CX itself. IMO, it is designed to look like C#. Marketing point. In fact it is really easy to translate C++/CX code into C# or VB and vice versa but do we care? WinRT is really pleasant to use in C#, it has been designed for that. Then the C++ team had to find something to make WinRT work with C++. You did it the C# way, not the C++ way. C++/CX is C++ distorted to fit C# design. So where is the point using C++/CX instead of C#?
As other guys here, I do beleive there is another fast and fluid way to use WinRT in C++. But maybe this other way of using WinRT in C++ is too far from WinRT concepts. Why creating a new design with WinRT if you don't use it in C++?
PFYB: "Now we have the third extension, C++/CX. Sigh... Microsoft never learns."
Warren: "can generating wrapper ISO C++ classes do everything C++/CX can do? why Microsoft have chosen to go with the latter and not the former?"
Maybe because this is what MS is just used to as a company having a long history of "embrace, extend and extinguish"? Quoting Wikipedia:
"The strategy's three phases are:.
The U.S. Department of Justice, Microsoft critics, and computer-industry journalists claim that the goal of the strategy is to monopolize a product category."
Doesn't it look familiar?
@Pierre
You don't wrap WinRT, it's not a library. Its design purpose is to wrap your library, C++/CX's job is to get underneath you. That's why the border matters despite people saying it's "just" the border. The difference is, it's your border, YOUR library.
It's a language within a language to do exactly that.
People, please come out and challenge this statement. I'm happy to be wrong. But this is the core of it.
@Glen: What am I missing? I answered your question: You are not being ignored. I also attempted to add some logical reasoning behind the complexity at play here: WinRT provides the Windows 8 Metro Style Application object model. Not VC++. You should therefore engage the Windows people, too....
It seems to me that my comment on the developer productivity gains and ease of consuming and authoring WinRT objects originating or targetting different WinRT-compatible languages that C++/CX affords is simply being ignored. Why is that, exactly?
C
About productivity gains using WinRT, well I'm not sure. When I use WinRT, I always have to care if I am using a WinRT type or a C++ type, and write code to convert from one to another. Am I using a wstring here? Oh but I need a Platform String. Same for containers. You can see this problem in Marian Luparu's demo in the video. Most of his code is about type conversion. And you can also note that there is no "thin boundary" at all. All his code is using WinRT types. OK it is just a small sample, but still.
About the ease of merging different languages in the same application, well you are right, it is easy to author a C++/CX module and use it from a C# object. But why would I use two or more languages if I can use one? Using 2 languages makes the project maintenance at least twice as hard. You need maintain the compatibility with two languages instead of one. You also need dual competence developers, etc. Why would I do that if I can avoid
Just wanted to touch on "It turns out that partial classes are an important ingredient in enabling design tools ... FWIW, think partial classes should be considered for C++1y as a standard language feature."
I wish to see the latter being discussed by committee members, I'm somehow skeptical about them agreeing on adding this feature to the language.
Firstly, the claimed benefit is limited to "enabling design tools." -- Can we keep expanding the standard with new design tools constantly coming and going?
Secondly, I don't even think that's the only or even best way to build design tools. Visual C++ already has "#pragma region" to mark and collapse designer generated code, it works pretty good. IDE could even have made this portion read-only. The interaction of designer generated code with user input is also nothing new -- Borland designer, for example, would merge the user changes, providing seamless integration of user-defined and designer-generated code.
Finally, I found reading and debugging partial classes (using C#) a lot worse experience than dealing with normal C++ classes, so I doubt C++ folks will like this either.
It is also easy to consume a WinRT module written in JavaScript inside your C++/CX code. That's also important. This isn't really about you using multiple languages yourself in a single project - though it certainly could be the case that you would want to, say, build a UI layer in HTML5 (HTML + CSS + JS) and write the core application logic/algorithms in C++ or use XAML for the UI layer, etc... It's certainly possible, though somewhat far-fetched, that you would want to write an app that uses C#, JS and C++, but, implicitly, you probably will be doing just this: You will be consuming components across language boundaries (they could have been authored in C++, JS or C#...), many of which are not written by you... That's a key point.
It seems to me that in the C++ case, these types of shared WinRT objects will often be authored by you, the kings of high performance and high output computing -> C++ developers. The core of what you author - the algorithms - will not be written in C++/CX because they don't need to be. They will be written in C++. hybrid application environment that WinRT supports seems to me to be the most compelling thing about it. As you learned in the BUILD sessions on Windows 8 Metro Style Applications and WinRT, there are compelling user-centric features provided by the underlying system that your application can take advantage of for free, but only if you play by WinRT's rules (again... WinRT defines the model, not the languages used to program WinRT apps...).
The whole point of C++/CX is to make your life easier in this hybrid world of Metro style apps. If you have no interest in playing in this heterogeneous sandbox (authoring and consuming WinRT objects written in very different languages), then you don't even have to consider C++/CX at all. Even if you do ( queue the broken record
Thanks for having this conversation with us.
@Everyone: As I said already, we will dedicate another GoingNative episode to C++/CX. This time the focus will be on the internals, the implementation strategy. And this will be the last time we address this small C++ extension for WinRT programming for a while. OK?
As I said in an earlier reply, I think Herb's answer was thoughtful, precise, clear and easily understandable. I also feel he answered the exact question he was asked to answer. @PFYB, I respect your opinion and you are of course free to state your opinion here - we like this! -, but I'm beginning to suspect that there is no way to satisfy you. I mean, this is made clear by your reply to Herb's reply - as he did answer the question you posed. He just didn't provide the answer you're looking for. Is this fair?
This is a very useful conversation. Let's keep it alive!
C
This conversation reminds of the scene in one of my favorite movies of all time, Thunderheart, where the elder tribal medicine man, Grandpa Regis, explains the vision he received from the spirits before "the FBI" came to town which accurately reflect the troubled childhood experiences of Ray LeVoy the young partially native American FBI agent seeking information from the wise man about a murder that took place on the reservation.
After the elder spiritual leader finishes his story, Ray, shaken by the truth in the words just spoken to him, asks the tribal police chief who's translating Sioux to English for him, "does he have any information or not?".
"He gave it to you."
C?
@Charles
You miss that you simply just roll right along without taking in any of what is being said.
Here's what you missed me saying at least:
What I said:
1. A show idea is great.
2. We aren't ready for a show yet.
3. We don't have clear questions for Herb, but good ideas and potential.
4. Of the views suggested, its not agreed yet which are Key nor the clarity of them.
5. Give us more time. We need to argue and proove our case(s) with each other, to get clear questions for Herb.
6. The process must happen so don't waste Herbs time AND so people don't stay angry at Herb unfairly.
7. Herbs responses aren't and wont suffice and wont until he gets clearer questions and can give a clear answer.
Look for my "rinse and repeat" starting point if that helps.
Now go look and see how you responded. You just continued to solicit personal emails and continue with the show plan.
Then go look at what happened after you continued doing that; you will see:
1. Various different opinions continue about what Microsoft is doing that people aren't sold on yet.
2. Great ideas, some definite possibilities, a few counter views, but no clear set of questions.
3. People continue to call out, like @jalf, for a discussion thread and more back and forth time.
4. Only real consensus: People aren't convinced "Microsoft has got this right or are doing what people want.".
5. Herb answering and failing to suffice with his answers.
6. Herb feeling let down that he spent time answering and it feeling wasted.
In other words, that's exactly what I thought was going to happen and what I tried to avoid, and what you missed.
Sorry that Herb feels he wasted his time. I can totally understand why he feels like that, but I also understand why people aren't happy with it.
Charles, what I don't understand is how you manage to communicate in such a style that all of this missed you by or that it feels like it.
I'll continue to work with the other guys. When it becomes clear, I'll try to articulate it. But as I said before, for myself, I'm particularly keen not to waste Herbs time until I see something clear as possible. Then I will check that with the other guys here first to test the agreement, then I really will expect an equally satisfying answer from Herb.
For now, bottom line. I'm not sold on C++/CX is it is; I remain open to being convinced in either direction. The negative side is winning currently. That's as much as I can say directly to Herb right now.
Everything else I have to say has to be aimed at yourself and other people to help me evolve that point of view so I exactly don't waste Herbs time, just my own and everybody elses until then.
Thanks
@Jim
Thanks for that, that appeared while I was posting my reply to charles. I'll take time to look at what you said but I have to run now. Thanks for your efforts. :)
@Glen: Fair enough... I appreciate that this all has to bake for a while before the more insightful questions emerge, but it's certainly spot on to ask Microsoft why we did this in the first place, how it works and why we think the way it works is the right way. These topics are being addressed. It's great to see this level of discourse happening. Jim's post, like Herb's, presents the reasoning behind this stuff very clearly and accurately. These are great insights you're getting from the team.
Thanks Herb and Jim!!
C
@Jim -- first of all, thanks, it's a good reading. Second, as far as I understood from yours and Herb's answers, the consumption of C++ code by other languages was the most constraining and perplexing part that led to C++/CX design decisions. What I was thinking is that 99% of C++ developers aren't in the category of portable component writers. So I believe if that cross-language compatibility requirement is lifted then we could stay close-to-the-bone C++ design, making those 99% of C++ developers a lot more happy. Would it be possible in addition to C++/CX, which targets 1% of cross-language component developers, release a separate native C++ library that consumes WinRT, but limits exporting components for other C++ applications only.
@Garfield
"we could stay close-to-the-bone C++ design, making those 99% of C++ developers a lot more happy"
Are you kidding? Writing in ISO C++ sometimes can be pain in the * compared to managed languages. C++ is just too low level for GUI development to be comfortable with. The more features from managed languages we have - the better.
I sincerely hope C++/CX will be standardized and consumed by ISO C++ to make all C++ developers' life easier.
@hsutter:."
You didn't answer the question. You reiterated points which have already been discussed at length. You even borrowed several points from your previous post, which has been replied to at length as well. Now, it would be understandable if you disagreed with the replies, but you neither indicated any such disagreement, nor substantiated it. It is not like you put something forward, got an answer, then disagreed and put a counterpoint moving the discussion forward. No, what happened was that you put something forward, got an answer, then ignored that answer and put the same point forward once again. This is not a productive discussion.
And, no, you didn't answer the question. You just started building your list of repeat points and various things that happened to cross your mind (like, "oh, right, we did this thing that saves a pair of AddRef / Releases, good, I will call that 'aggressive optimizations for refcounting'"), and after doing this for some time you decided you did enough and stopped. At no point did you actually answer the question.
I did read the paper on design rationale for C++/CLI. So? If you want to carry a particular thought from there and apply it to C++/CX, do this. Just saying "hey, read that paper, you will understand why we did C++/CX" doesn't work.
@Charles:.
Yes, we could have - good. But, this would have added a lot of complexity - please substantiate this. We fail to see how this would have added a lot of complexity. Herb's post isn't helpful, all of his points have been answered several times now and he didn't even say what he disagrees with, yet alone why. Same for the complexity for C++ developers.
One more time, yes, C++/CX is easy, we argue that wrapping code both ways would produce an environment that would be equally easy to use.
If this helps visualize things, here is example code in C++/CX:
Window::Current->Content = ref new MainPage();
Window::Current->Activate();
Here is the code we propose for C++:
Window::Current->Content = new MainPage();
Window::Current->Activate();
Is the proposed C++ code any more complex than C++/CX? Of course, not. The C++ code does not have the 'ref' keyword, everything else is the same. How this could be done? Wrappers.
Feel free to post other example code, with events, delegates and whatever else. We will post what we propose this code converts to for C++. We can post fragments of wrappers and, in general, discuss the implementation, too. This is the entire point! This is what we are trying to start talking about! If there is something that can not be done using wrappers, let's take a look at this and see what we can do. But let's discover this first. Is it too much to ask that someone on the Visual C++ team spends a couple of hours checking if we can have all the benefits of C++/CX without altering the language? Is that not a worthy goal?? If you say you already did the work and determined that the approach we are proposing is a no-go, is it too much to ask you to demonstrate this with a code example??
@JimSpringfield:, ...
There is no need to second guess what we are asking. I, at least, while strongly believe that you could do everything without altering the language as proposed, am ready to talk about not only whether this is possible or not, but also whether it makes the most sense to do it this way or not. The questions we are asking are honest: ( a ) could you do the C++/CX in C++, eg, in the way we propose, ( b ) if not, why, ( c ) if yes, why didn't you do it this way. The problem with the discussion is that the responses we get from, say, Herb, do not answer ( a ) and just muddle around from time to time throwing out points like "oh, yes, we'd also have to map HRESULTs to exceptions" and pseudo-points like "oh, but what the debugging experience would be? I suspect it would be worse". This is not productive.
What kind of answers would we be satisfied with? Easy. No second guessing necessary here either. To ( a ) we would accept "yes, we could do everything there is in C++/CX using C++ wrapping code as you suggest" or, say, "no, wrapping code wouldn't work, here is why: <explanation why>". This covers ( b ), too. I suspect it is ( c ) that you have problems with. It is indeed difficult to answer why you didn't do everything in C++ if it was possible and maybe even easier than doing C++/CX. I can't help you here, unfortunately, but I would point out that the next version of VS is still not even in the beta and that there will be a version of VS after that.
... which is to have a tool that can generate C++ code from metadata both for consumption and ... I do agree that this approach is possible to do, although I believe that there are negatives to this approach. We did consider several different possibilities during design and discarded them due to issues we ran into. Herb and I have covered many of these.
At this point, Jim, I want to thank you for actually reading what we have been saying and getting what exactly we are proposing for consuming WinRT from C++. You are the only guy from Microsoft who did that. Thank you.
You are saying that this approach has negatives. Of course, every approach has both positives and negatives, that's a given. The question is which approach is better on balance. If you think the approach with wrappers is worse than the one you took with C++/CX, we would appreciate your thoughts as to why. The huge benefit of the approach with wrappers is that it allows using C++ without altering the language. This is the entire reason we suggested it in the first place. It seems to us that in all key areas, wrappers could work just as well as C++/CX, and in some areas, they would work better than C++/CX. The benefits from using C++/CX over C++ (like the "aggressive optimization of refcounts") on the other hand seem completely minor. Consequently, it seems that on balance, the approach with wrappers is a clear winner. If you disagree with that, we would appreciate yours saying why.
As a side note, if you don't disagree, but you think that the ship with C++/CX has already sailed, that's fine too, but please say it how it is.
@JimSpringfield:
I thought I would point out that every single optimization you describe you did for C++/CX would have been possible if you generated wrapper code:.
With wrapper classes, wrapper code would have taken "const shared_ptr<>&" or const reference to a different type of smart pointer, then it would have taken the actual value of the smart pointer, without doing an AddRef, make the underlying WinRT (COM) call, then returned the result without doing a Release. Done..
A naive approach:
The wrapper code takes const std::wstring&. It extracts a const wchar_t* pointer from the wstring and passes that to WinRT without wrapping it into HSTRING. Returned HSTRINGs are converted to wstring.
A more involved approach, if the team thinks it is worth it to save on conversions from const wchar_t* (eg, literals) and HSTRING (return values from WinRT) to wstring:
The wrapper code implements methods with string arguments and or return values as more than one method with arguments and return values being either wstring or a library class that wraps all of wstring / HSTRING / const wchar_t*. There is no need to try all combinations, two combinations are all we need: either all types are wstring, or all types are the uber-wrapper class. The implementation code for the method that uses the uber-wrapper class uses the logic you have in C++/CX.
There are alternative approaches as well. Done. ^.
The wrapper code for a WinRT class can, of course, cache all interfaces it ever calls. A very simple approach which might be enough for the vast majority of cases would be to simply QueryInterface for all supported interfaces in the constructor. If this ends up wasting too much time for classes implementing multiple interfaces (multiple calls to QueryInterface in constructor mean multiple interlocked calls, same for destructor), one can always cache queried interfaces on demand. There are lock-free techniques for this.
Alternatively, if any of the above is not acceptable, the wrapper code can always expose classes wrapping non-default interfaces. Anyone who finds out that the code spends too much time doing AddRef / Release on the arguments can request an instance of such a class and do the calls through that instance.
Now, let's see an optimization you would be able to do if you did everything in C++ instead of C++/CX:
Suppose you have a class you are exposing to WinRT. If you are also calling its methods in some of your other code in the same module, the compiler would be able to inline these calls.
Unlike saving a pair of calls to AddRef and Release here and there, having the ability to inline code actually matters. There are many cases where a developer has to split his code into several methods, as dictated by a design (witness property getters, for example), there is a lot of performance to be gained by putting these chunks of code together at runtime, and only the compiler can do this. The exact benefits depend on the design, of course, on average they are significant, and sometimes they are simply huge.
Are you doing the inlining of WinRT code in the same module right now? Across different compilation units? I don't think so. If you went with straight ISO C++ you would have gotten this for free.
@hsutter
You say:
-------
.
-------
You say you have answered the question by PFYB - you haven't.
@Charles
is missing the point. See the numerous posts by PFYB, Tomas and others. See the last point by JimSpringfield, if you will. The comparison between C++/CX and WRL is missing the point. The real comparison is between C++/CX and C++ wrapper classes generated two ways. Does this get through?
Later in your post you get a bit annoyed and say that Herb Sutter explained why the approach the team have taken with C++/CX is better than the approach with C++ wrapper classes. He didn't. He didn't explain this. He didn't explain why wrapper classes would be inferior. Take a look at his list of points. Does the first point explain why wrapper classes are worse than C++/CX? No, it is irrelevant to the question. Does the second? No, it is irrelevant to the question. The first five points are all irrelevant, and the last three are... well, can I just say "questionable" to be polite. Read them and you will see for yourself, you really want to say that one of the major reasons - one of only three reasons as the list provided by Herb Sutter suggests - to prefer C++/CX to C++ is better performance? Really? We aren't idiots.
I have been watching this thread with interest. It contains a lot of very good points. I know I now understand a lot better what WinRT and C++/CX are. I also understand that I should stay away from C++/CX and that I should likely start looking for a new compiler besides Visual C++.
To add to what PFYB is talking about - please have a look how the new CORBA mapping to C++ is going to be done (). It's exactly the same kind of stuff - mapping of a foreign object model to C++. But there's no ^, Platform::String or IVector - only std::shared_ptr, std::wstring and std::array/std::vector. They're getting it right, you MS are not.
@JimSpringfield
"Partial Classes
There are other ways to do this and we've done them before, but there are some issues with those."
Could you please explain why the most straightforward way of doing it in ISO C++ wouldn't work? What relevant functionality partial classes offer that this method does not:
// Form1.designer.h - edited by the designer tool
Button button;
// etc.
// Form1.h - edited by the user
class Form1 : public Form
{
// user's code
private:
#include "Form1.designer.h"
};
@JimSpringfield
Thanks for the post.
First, since you are saying that you are going to fix a compilation error on 'T^ arr[3]', you could as well also fix 'T^* arr = new T^[3];'. This does not compile either, likely for the same reason.
Second, maybe you can answer a question on borders asked by Dennis higher in the thread (thanks!):
Do you have any examples of large applications that are now written in C++ and that are going to be moved to WinRT? If using WinRT (and C++/CX) on the border is as simple as you imply, there should be such applications.
I and others are sceptical on the whole borders issue. It didn't exactly work out with C++/CLI to say the least.
Thanks again.
@hsutter
I am sorry, but:
@PFYB: "I suspect Herb didn't read the thread. I suspect he won't. Such is the "discussion" we are having here."
@hsutter: "Note that this is now a 32,000-word thread. That's half the word count of Exceptional C++."
Herb, can't you see that you are making PFYB's point? You don't deny that you didn't read the thread but you make it seem like you do deny it. Maybe this is going to be a surprise for you, but you do the same with the real points regarding C++/CX. Reread the thread. We really expect more from you.
@Charles:
I am talking about what I know, that is, creating consumer apps (in C++, although I've tried in C#, but I failed).
You are talking about 2 different scenarios where using multiple languages might be usefull:
- using a 3rd party library built in C# or C++ or JS. Well if I write C++ native code, I will certainly not hamper my app performance by using a 3rd party C# or JS library. The C# library would have to initialize a CLR, and that would consume a lot of resources. It doesn't make sense. And If I use a C++ 3rd party library, I would better use a ISO C or C++ API than a C++/CX API. I bet the 3rd party vendor has published his C++ library as a native C or C++ interface before he adds a C++/CX layer.
- building a user interface in HTML5 or XAML/C#. Well, I can use XAML interface in C++/CX right? Also, we've been trying for 4-5 years to create hybrid applications with WPF UI and C++ code. It has been proven it is not possible. The only hybrid app I know is Visual Studio 10, but Microsoft had to modify WPF to make it work, and invested a lot of energy in this project. Why would hybrid apps make sense now?
Really, I'm not interested in being able to use other language components.
I think C# or JS devs are interested in using C++ library. This makes sense. This also requires the vendor to write a C++/CX layer on top of his library. But C# programmers can already use C/C++ lib without C++/CX. Of course a C++/CX interface is more C# friendly than a native ISO C/C++ interface.
Your point is OK with C#.
I've already done this kind of thing. I do agree with you: C++/CX is far more usable than WRL. But ... (you know)
I don't know how to create a Win8 app with "C++/CX at its boundary". In all C++ code for Win8 I've seen/written yet, there is no such thing as "C++/CX at its boundary" as the "boundary" tends to be 50-90% of the code. But, as I already said, I have not seen a real Win8-Metro C++ app source code, it may be different.
Once again, things are really different in C#.
Hybrid application environment sounds terrible for me... I've spent so much time trying to write managed interfaces to native programs... I don't want to do that again.
OK we need to play WinRT rules if we want to write Metro apps. Make sense. WinRT defines the model not the language, but this model is close to (designed for?) C#, not C++ (yet).
I guess many small apps will be written in C# for WinRT. The kind of app you see on iPhone or Android. They are not expensive to develop. I wouldn't write this kind of app for WinRT in C++.
For larger customer apps, C# is too weak. I don't know about C++ and WinRT. Not sure I want to spend a lot of time on this.
@Jim Springfield: thanks for the answers.
> Regarding something like CreateInstance<T>, you can definitely do this, but if you don't have "ref new", how do you implement CreateInstance<>?
That's an implementation default. Perhaps the compiler generates and links in explicitly instantiated specializations of this function for every WRT class used, so that only the declaration needs to be visible to users. Then the actual definition is hidden from us, and it can use whatever magic it likes. And then I can call CreateInstance<Foo>, and the linker will find the instantiation of this specialization. Or it could be a simple forwarding function which calls a non-template CreateInstance(classId) instead. Or something else. Point is, it could be standard C++ syntax which behaved like standard C++, and then it would minimize the weird/ambiguous/unspecified/unintuitive/nonstandard cases where C++ and C++/CX syntax intersect.
You're right, we're putting you in an impossible position, and there's no perfect solution.
But I, personally, would have liked to see a more minimal and less intrusive solution, one which introduced less proprietary syntax, and thus made it easier to avoid "polluting" my actual C++ code first, and then perhaps, a year or two down the line, a set of more intrusive language extensions like you have now.
Of course, now C++/CX is pretty much set in stone, and it's pointless to argue about what *should* have been done. But I hope to see future improvements to C++/CX, which:
- make it possible to use it in a less intrusive manner, which keeps some kind of clear separation between C++ code and C++/CX, and
- (at least optionally) provides ISO C++ alternatives to some of the C++/CX syntax (say, a simple smart pointer to replace the hat, and perhaps as I outlined above, an ISO C++ (template) function as an alternative to ref new)
I feel that (and correct me if I'm wrong), this whole "C++/CX at the boundary only" thing has never really been put to the test. I get the impression that all the components written by Microsoft so far have been in C++/CX throughout. I get the feeling that you felt that the "boundary only" thing didn't apply to your internally developed components, and so it was never really tested how well C++/CX actually lives up to this ideal, and you ended up with a set of intrusive language extensions that are very hard to restrict to the boundary.
@Jim Springfield
"Regarding something like CreateInstance<T>, you can definitely do this, but if you don't have "ref new", how do you implement CreateInstance<>?"
I guess the same way you implemented instance creation in WRL?
@jalf:
> Of course, now C++/CX is pretty much set in stone, and it's pointless to argue about what *should* have been done.
Actually, there is a point. If Jim and other Microsoft people see that they really could do everything they wanted without harming the language, maybe they won't be so quick to harm it again three or so years down the line when the current crop of technologies on the Microsoft stack goes out of fashion. I am dead serious.
@Ron: that's true. But ultimately, I still think the best way to achieve that is to discuss what route C++/CX should take *in the future*. Because then that can act as a starting point the next time they decide they need a new C++ to complement the 4 they've already got.
@Charles: watching the video, you keep emphasizing that "ref new and hat is not a big deal". Please, think about that for a moment:
you're redefining how we create objects, and how we refer to existing objects. What changes could *possibly* be bigger? I seriously cannot imagine a single change that'd have a larger scope. And that's why C++/CX makes me uncomfortable: those two operations, referencing an existing object (whether we do this through a reference, a pointer or a hat), and creating new objects are really ubiquitous operations: it's impossible to write *anything* meaningful in C++ without touching those operations. But now suddenly we've got a bucket of types for which these operations have to be written differently. We have pointers which aren't pointers, and will, I'm assuming, mess up any code which relies on, for example, std::is_pointer<T>, or which will break if I try to allocate or copy them in the usual way. These are unavoidable changes, and all the code I write now has to consider "am I getting a pointer to a C++ class, or a ref-counted pseudo-pointer to a WinRT class?"
Yes, ref new and hat is a big deal. It may turn out that adding these extensions was a good idea, but let's not pretend that it's not a big deal. Whatever else it is, whether it's good or bad, it most definitely is a big deal.
@everyone: lots of great opinions, nice to see.
@jalf "...it's pointless to argue about what *should* have been done..."
Not at all:
* We might be able to Change v1. But ok, lets assume not for argument sake.
Even if we can't change V1:
* We are "fit for purpose" testing it without blowing a big project on it. This is of real value.
* If we can proove its fit for purpose already, great. Now we can dive in with confidence, sooner and with reasons for our selling case to higher ups and customers.
* If it isn't quite fit, we can help others know where the pitfalls are. This is already working.
* If it's really not fit for purpose at all, we can why, and v2 will arrive quicker.
* If we have the right fit defined and MS ignore it, we know who are friends are.
* If MS wont build it, MS's competitors will, we have the spec. Or we can build it outselves with Clang.
No more hostage to fortune.
This things a) value proposition needs to be understood and prooven, b) it needs to be understood what's wrong with it or not as defined by MS so far, then how do you do it better or right if its wrong. Getting it all out now for V1 or V2 is worth it for me, for all these reasons and the hard reasons above.
Beyond that, I still hope people will also focus on, "the right language extensions" not just, "none". None is best iff it's right. As long as people can keep proving none is better, that's great though.
I'll be posting more tomorrow when I have more time. But thanks for your comments, keep them coming, I'm reading what you have to say with interest.
@Jim, thanks for your reply. The tone and content of your resposne was excellent, I really feel you were listening.
@Glen you're saying:
"Beyond that, I still hope people will also focus on, "the right language extensions" not just, "none". None is best iff it's right. As long as people can keep proving none is better, that's great though.".
"In my opinion language extensions have right to be iff there isn't an easy or possible way to achieve goals with already existing features. As we proved here this isn't the case with CX. Everything what CX does can be done with C++. No need for extensions in this case."
Hear, hear.
If the Visual C++ team is serious about staying true to the standards, they should drop C++/CX and allow us to consume WinRT via C++ wrappers, as suggested. They will be able to reuse much of the work they had to do for C++/CX and there is still time since, as others mentioned, VS vNext is not even in the beta state.
If the team won't do this, we should ignore C++/CX for several reasons:
First, if you invest into C++/CX you will have it all over your code. The "just on borders" argument is a lie. Try doing "just on borders" with your application by converting the way it interacts with the system from Win32 to .NET, using C++/CLI. See how far the border will go (everywhere). It is going to be the same with WinRT and C++/CX. Do you want to make most of your code non-compliant?
Second, C++/CX is a half-baked throw-away technology. It is so brittle and artificial, it will likely be replaced by another technology or simply abandoned. Compared to C++/CX, .NET and Silverlight look solid as rocks and where these technologies are now? If you choose to use C++/CX, you will likely find yourself having to redo that code all over again in a couple of years. Save yourself the hassle and just do whatever you want to do with Metro UI using HTML5 and Javascript.
Third, ignoring C++/CX sends a message. This continuous stream of Microsoft extensions to C++ should just stop. I don't want to have to deal with one more language extension come Windows 9. I don't want to hear Herb Sutter explain how the new extension makes a lot of sense because it makes it all look pretty in the debugger. Enough.
I completely agree, Dennis.
By the way, I have a question for Herb Sutter.
Herb, upon joining Microsoft back in 2002 you gave this interview:
You said many interesting things. In particular, you said that you "will be pushing for language extensions in Microsoft's C++ even before the product is fully compliant to today's standard". That said, you caveated this as follows:
."
I have a hard time reconciling the above with what happened with C++/CX. Did you significantly change your stance on extensions since the time you gave the interview? If not:
These are serious questions. What changed between 2002 and 2010 or whenever you started working on C++/CX? Why? Answers to these questions will help us understand what you are going to do with C++ in the future. I am not too optimistic, but, well, explain yourself. We'll listen.
Thank you.
PFYB, indeed I'd very much like to know how Herb Sutter answers your questions.
"half-baked homebrew misfeatures"? This sounds exactly like C++/CX to me. "half-baked" = numerous problems in integration with other language features, as shown higher in the thread. "homebrew" = Microsoft-only invention, nobody saw it before //BUILD, nobody got a chance to comment. "misfeatures" = unnecessary, can be replaced with standard-compliant C++, as discussed in the thread. Yes, C++/CX is all that, a "half-baked homebrew misfeature" of the kind Herb Sutter hated back in 2002..
.).
@Herb
Minor points:.
I can't recall any significant language invention being particuarly attributable to Microsoft that ever became a remaining part of the core language, as bad as that sounds to say it. Maybe I am mistaken on that last part, sorry if I am. Bjarne doesn't even appear to be a strong supporter of sealed..
I hear your reasoning on that, but my observations don't leave me too disposed to the idea they will come in the next round and if they don't arrive in a compatible way with the C++/CLI/CX version of them, you can be sure MS wont support them? It would appear to mess the design up of everything if they did; arguably like how the basic .NET container classes and the interfaces that used them became a blot on the landscape once the typed version ones became available.
Major points:
Regarding goals B1, and B2.
My opinion is that a Component technology is not like the technology of, say a GUI library; because a component technology is not "like a library" at all - it is the "essence of library" i.e. any library, especially *my* library.
C++/CX as it stands requires me to wrap all of my libraries - my classes, i.e. the ones that *I* produce; at the class level. Which is why it introduces it's own class to do this. That one statement alone is huge. If Bjarne really thinks this is a good idea (and I know you two are friends so I'm open to what this statement will yield), but I want to see him say it because its too big of an idea/language extension to trust to MS alone. It's virtually unheard to give a dog two heads and find anything less that one or both of them dies, or it becomes a heck of a lot more expensive too feed!! That's C++/CX!!
Unlike, say, a Win32 GUI API, that you wrap, C++/CX wraps you, at the class level, so the borders are everywhere you are yourself, and you do the wrapping yourself. I don't believe in that. I certainly don't believe in that where the method is much less than transparent.
When you think it should be as transparent as possible, talk of language extensions become really about saying you think none are needed (the goal is for the wrappers to be transparent wrapper after all) or you can have as many as you like but they wont be needed often and likely [[attributes]] might be sufficient in most cases. Who knows exactly, that's what I'm here to work++.
If you rush out now and C++/CX'ify your code now, two years from now the problem space will be made transparent by someone (MS or a competitor) or the platform will die, and you'll have a load of rubbish code, if you haven't coded very carefully - a skill most people will lack or have time for. Anyone with a big C++/CLI code base probably already feels like that today, I bet.
So that's my opinion, don't do it. Until the problem I have stated is fixed (i.e. the problem becomes transparent), C++/CX is half baked like COM was before it.
C++ cannot afford to be the battle ground between component technologies, it needs to be a case of IJW or the platform is wrong or the language is being harmed. How many other languages are you going to extend or recommend get two notions of class?
Please Herb, or anyone, tell me your thoughts on that lot.
I will post again to elaborate but thats the basic issue for me: C++/CX doesn't make "C++ good for writing libraries" it makes "borders everywhere" and makes it "expensive for writing libraries".
@JimSpringfield
"Regarding partial classes, I explained that there is often more than just putting additional methods and data into a class."
Thank you for your explanation. However, I must admit I'm still not convinced at all, because:
"Often, you need to modify the set of bases for the class."
C# has partial classes, yet the designer puts the base class in the user-editable file. So this doesn't seem important at all.
"Sometimes, you also need to inject code into constructors/destructors or other functions, although usually that can be worked around."
Again, C# has partial classes, yet the designer generates InitializeComponents() method that you call in the constructor inside the user-editable file. So this seems unimportant too since the C# guys have done it this way anyway.
"Also, doing a #include inside of a class would make it really hard on the browsing tools within the IDE."
Even if so, this is a really weak reason to provide a new core language feature. It's rather a strong reason to improve the browsing tools – after all, this is a perfectly valid C++ code so the tools should be able to deal with it anyway. The only difficulty I see is providing Intellisense etc. in the generated file, but since it's assumed not to be edited by the user then that's not a problem at all.
"You can take a look at some of my posts on Intellisense/browsing for more background."
I will look for them, thanks for the info.
"Regarding CreateInstance and WRL. WRL doesn't use ^ and creates objects similarly to how ATL did it. "Ref new" returns a ^, not a *. If you get rid of hats (or similarly marked up pointers), then you don't need it."
Exactly – if you get rid of hats. C++ already has a very good means of doing exactly such things without inventing new syntax like ref new and hats. No need to look far – Standard Library has shared_ptr<T> and make_shared<T>(args...). Then why ^ and ref new instead of just winrt_ptr<T> and make_winrt<T>(args...)?
If those two extensions are basically all that is needed for C++ code consuming WinRT components then it is even more appealing reason to not implement them as extensions. Imagine all those external tools, I'm not even talking about other compilers, but lint-like analysers, documentation generators, code editors or reformatters, which would be able to digest a lot of C++/CX code without problems should it only use winrt_ptr/make_winrt, but instead they will choke on ^ and ref new. It's so low-hanging fruit to make C++/CX just a bit more portable, so why you're ruining it all with the hats?
Offtopic: could someone please tell me how one gets text formatting in the comments? HTML tags? BBCode? I couldn't find anything about this on channel9 pages...
@Glen
"Regarding properties, events and delegates, if they were of value to ISO, it's stunning that they aren't already in the language or weren't more widely debated this time around"
As to events and delegates – I believe that this is not because they are of no value to ISO, but because a) there was already too much important stuff put into C++11 for them to also make it and b) they don't need any core language support. A good example is Boost.Signals library which is a pure library solution yet it provides way more functionality than .NET/WinRT delegates and events (automatic disconnection and combining return values to name a few). If/when C++ adopts events and delegates, it's more probable they will be library-based (and quite likely based od Boost.Signals). And when that happens, we'll have yet another source of incompatibility since there will be yet another thing in C++/CX done differently than in C++.
@ISO 14882: Re formatting... For a variety of reasons, we do not enable text formatting (HTML) beyond newlines for anonymous posts. It's a policy, not a bug.
By design.
C
@Herb!!
If some of the people didn't get so tied up in knots about such non starters, maybe they'd have had time to properly consider properties, delegates and events!!
Yes I appreciate that you were on the right side of this particular case. :).
For its size though, Microsoft has arguably never contributed substantially to C++ yet, but it has benefited greatly from it. It just doesn't have a good history of invention there.
Which leads me on to Properties, Events and Delegates, your comments suggest that ISO might pick it up, but what I was getting at was what @ISO believes: that if they come at all, they are likely to be library based. So the question remains the same: where will be then? Its a worry. Your cart is before the horse.
I take your point about fusing C++/CX and WinRT. But never the less, their existance is fused. I don't see that one exists without the other? ref class has no future without WinRT?
As far as B1 or B2 goes, I can't answer that right now, but I'll post separately my view of the whole MS deal and you can tell me where I'm missing the tricks there or where the deal can get better.
.)
Thanks to both Jim and Herb for the replies. This is a reply to Herb, with another short reply and a short reply to Jim following.
@hsutter
Before I go further, let me sanity-check something: Do we agree that the route of "extending TLB #import" leads to a full ATL + MIDL like approach?
No. ATL + MIDL is a reasonable way to write code, but we are talking about something else. We are talking about not only "extending TLB #import" to cover WinRT modules, but also about doing the reverse of #import, that is, parsing regular C++ code();
...
};
The developer would write the above code, use class C in a regular way in the module, and direct the compiler to expose C for consumption by WinRT. There are many different ways to direct the compiler to do this (pragmas, __declspec, predefined bases, separate command line switches, etc), we can talk about pros and cons of each. The compiler would take the above code and generate wrapper C++ code representing C, I2 and I1 for WinRT. I1 would translate into __internal_I1 and would derive from IInspectable. I2 would translate to __internal_I2 and would derive fro IInspectable as well. __internal_I1::G would return a WinRT string. __internal_I2::H would return a refcounted pointer to __internal_I1. There would be a factory for creating a wrapper for C, which would return a refcounted pointer to it. And so on.
We can discuss how to do the translation for delegates, events and everything else. This is a lot to talk about, but from what we see, for every feature a regular C++ mapping of concepts can be done.
Example syntax for events:
// ISO C++
class C ...
{
...
virtual winrt_event<int()>& E() { return _e; }
...
};
...
c.E() += []{ return 2; }; // or 'c->' if c is winrt_ptr<C>
To create an event, you create a member of the type winrt_event<>, then return a reference to that member via an interface function. When you direct the compiler to expose your class for consumption by WinRT, it generates C++ wrapper code that maps winrt_event<> to equivalent constructs in WinRT. If you later import the same component in a different C++ project via #import, the compiler generates C++ wrapper code that maps WinRT constructs back to winrt_event<>. You subscribe to an event using the plus operator or the Subscribe function on winrt_event<>. The latter lets you save a cookie you can use pass to the Unsubscribe function on winrt_event<>.
If you subscribe to an event you expose in the same project, the call goes directly through C++, without passing through WinRT. This is a huge plus for many reasons (eg, optimization), but in case this ever becomes a minus, you can make the call go through WinRT by directing the compiler to export the class that exposes the event and then reimporting the result via #import.
I hope the above is sufficiently clear.
On to the post:?
No. Smart pointers, base classes and possibly macros - yes. MIDL - no. I am not convinced that MIDL is worse than C++/CX either, but right now let's concentrate on using straight C++ without MIDL.
At the outset of the project, my team was asked by our management to take as primary design requirements that we find a programming model that should: R1. Enable full power and control, including both consumption and authoring. ... R2. Be as simple and elegant as possible for the developer.
Good goals. The approach proposed above achieves both - the wrapper classes allow R2 while being thin enough not to harm R1. It is possible to capture the code generated for wrapper classes and partially rewrite / reuse it to get more R1 if one needs this.
A. What is the minimal code syntax and build steps it would it take to express this in a MIDL-style approach, and across how many source/intermediate files? What's the best we could do in that kind of alternative approach?
For the sake of completeness:
// ISO C++
class Screen {
public:
int DrawLine( Point x, Point y );
};
Screen s;
s.DrawLine( Point(10,40), Point(20,50) );
B. How do you feel about these two goals? 1. Minimize the actual amount of non-portable non-ISO C++ code/tools developers have to write/use to use WinRT. 2. Make the code in the .cpp file itself appear to be using a C++ library, even though it actually is not (it is generated from Windows-specific language/extensions that are just located in a different non-.cpp file).
Ideally, the only non-portable non-ISO C++ code is #import and #pragmas. There is nothing wrong with having generated code. The requirements on the original code from which the code is generated do not have to be artificial. We are trying to add the ABI so that we can cross over to other languages and technologies, fine, let's do this by generating additional code and using that when we are crossing over. When we don't have to cross over, let's just use (instantiate, call, etc) the original code as is, to the maximum extent possible. Should I elaborate or is what I am saying here clear?
I strongly disagree that C++/CX minimizes the actual amount of non-portable non-ISO C++ code developers have to write to use WinRT. Yes, the above example is 6 lines of code in total and you turn it into C++/CX by adding a single 'ref' keyword, but this single 'ref' keyword makes all of the 6 lines non-ISO. 'Screen s;' looks deceptively like ISO, but if 'Screen' is a ref class, it is not ISO, because the ISO standard does not say what the compiler has to do to create an instance of a ref class. Same for the call to 's.DrawLine', the ISO standard does not say what the compiler has to do to call an instance of a ref class.
If we try and evolve your example just a tiny bit, the non-ISO nature of it will become much more apparent. Say, Screen has to be allocated dynamically, then 'Screen s' becomes 'Screen^ s = ref new Screen()'. Whoa, we added a hat and a second ref. That's 3 keywords for 6 lines now. If arguments to DrawLine have to be WinRT objects as well, there are more hats and refs. If we proceed in that direction we will very soon flood a good deal of our code with hats and refs. And we aren't even talking about non-C++ things like events yet, we are just talking about the trivial example you have chosen to start from.
Compare this mess with wrapper classes.
One can view C++ as a framework for using different technologies. We can use very different technologies like, say, ODBC and OLEDB, simultaneously, there are a lot of very diverse technologies like that which we can use, and we can do it all with next to no changes to the language (#import or an analog is all that is required, if that). What C++/CX does is take one such technology (WinRT) and shove it down into the language, breaking internal logic in many places along the way. What the proposed wrapper classes do is keep WinRT outside the language, making it cooperate via a layer of adapter code. It is clear to me that the approach with wrapper classes is a vastly better one.
@hsutter:
On the interview (let's get fluff out of the way):
I still agree with pretty much everything I said, including the need for further conformance to the new C++ standard now that it has stabilized and was published last month.
Sorry, no.
You equate C++/CX with C++/CLI point blank ignoring that while they look similarly, they wrap different underlying technologies, and what exactly they wrap matters. Suppose someone comes up with an idea of using hat pointers and ref new as an alternative way to work with regular C++ classes. Would you support the idea of such a language extension? Of course, not. You see, what exact implementation details hide behind the proposed syntax and rules matters. If the standards committee or the larger C++ community agrees on some aspect of C++/CLI, it does not at all follow that they would agree on the same aspect of C++/CX. Saying that you did it all right with C++/CLI and so it is all automatically right with C++/CX as well because it uses the same syntax and certain concepts are similar, too, is wrong. You simply can't say this, this isn't true.
Then again, did you really do everything right with C++/CLI? Take point 5, where you say a language extension should be something "not proprietary but rather the opposite: thing we already know is certain or likely to come" in the next version of the C++ standard and which you "hope all compiler vendors will provide too"? Is C++/CLI part of C++11? Is it certain to come in the next version of the C++ standard? Is C++/CLI provided or going to be provided by any major compiler vendor? You had more than 5 years, how much longer do we have to wait to see what you are talking about as a requirement for a language extension happen for C++/CLI? Is it going to happen at all?
As a side note, yes, C++/CLI is an ECMA standard, but that's not at all what we are after here and not what you have been talking about in the interview. The exact wording in the interview was: "things we already know are certain or likely to come in C++0x and which we hope all compiler vendors will provide too". We are talking about making extensions to C++, there is only one standard that really matters here - the ISO C++ standard for the language itself. It is nice if things outside of the standard are somewhat structured and standardized, but let's not be silly here, when we are talking about standard-compliant C++ code we mean only one standard - the ISO C++ one.
Same with points other than 5.
Since someone mentioned Bjarne Stroustrup, let's hear what he thinks of C++/CLI, too:
"I am happy that it makes every feature of the CLI easily accessible from C++ and happy that C++/CLI is a far better language than its predecessor "Managed C++". However, I am less happy that C++/CLI achieves its goals by essentially augmenting C++ with a separate language feature for each feature of CLI (interfaces, properties, generics, pointers, inheritance, enumerations, and much, much more). This will be a major source of confusion (whatever anyone does or says). The wealth of new language facilities in C++/CLI compared to ISO Standard C++ tempts programmers to write non-portable code that (often invisibly) become intimately tied to Microsoft Windows."
Heh, maybe C++/CLI will not become part of ISO C++ after all. Maybe it is not such a good and clear way to extend the language as Herb Sutter from Microsoft suggests. Maybe we indeed won't have any compiler vendors other than Microsoft supporting it anytime soon.
In sum, Herb, I am, frankly, amazed at your willingness to call black white, but... we aren't stupid. Black is black. White is white. What you did with C++/CX goes very much against the principles you outlined in the interview.
And a reply to Jim.
@JimSpringfield:.
Yes, this is settled. We are yet to determine what exact issues of wrapper classes pushed you to C++/CX, but we are talking about this right now so hopefully it will become clear soon.
Regarding modules, we do NOT go through WinRT machinery when accessing a locally defined (within a binary) ref class. So, we can inline code and take advantage of other compiler optimizations.
That's good. This suggests to me that when you see a 'ref class', you already split it into a regular C++ class (whose methods you can then inline) and a WinRT class (whose methods are stubs that call methods of the regular class). Thus, you are doing much of what we are arguing for already, you only have to go a couple more steps and (1) generate the wrapper WinRT class in the source form, and (2) get rid of 'ref', altering the code that parses code exported for WinRT to support regular C++ definitions for several things you now do through C++/CX-only features.
@Jim and PFYB:
I just want to add something which I think is vital, but was totally dis/missed from what Jim said (by the way Jim, thanks for honest answers). When Jim says:
a) Yes b) n/a c) Because we didn't see this approach as the best solution for our users. We've tried to cover many of the issues that pushed us to C++/CX and I can understand if you disagree with some of them.
I strongly believe that here is the crux - who are those users Jim mentions? Not C++ community surely? So who else is there? The answer is clear and obvious: The .NET crowd.
MS as a very respectable company cannot just abandoned its current "users" and tell them to feck off and that the technology they were lured to for over decade simply goes to butcher's house, and those folks need to learn new technology or learn a new phrase "DO YOU WANT FRIES WITH THAT?".
No, MS knows that this wouldn't be the right strategic decision. In view of the fact that .NET will/is going to be slaughtered in the near future, MS had to come up with something better/clever in order to:
a) keep a face
b) confirms to its "users" that those who stick with it can depend on MS and will not be abandoned by this company
c) keep an image of a respectable company
c) holds to its "users"
needed to ease the pain of switching between technologies.
There were few options - better ones (Jim in his answer, admits that there is a way to do this with C++) and worse ones. Why MS picked worse one? Every one knows that people who works at MS are world class pros and experts, so how is it possible that MS made a mistake in judging what's good for it's customers. Exactly - just because it's worse for you it doesn't mean that is worse for MS customers, do you see it now? MS picked what's right for its customers, not what's right for you. Now when the cards are reveiled everything starts making perfect sense.
Well, as they say in my old country if you don't know what it's all about it must be about money. And if you think about this, MS knew who their customers/users are. There are .NET crowd. Not C++ folks.
There are much more .NET devs in the world than C++. To MS was obvious that it has to try to keep those people by its side. In order to do that, familiar syntax and workings needed to be put in place. Just to make the switch from .NET to WinRT as easy and gentle as possible. That's why there also wasn't any pressure on adding C++11 features. What for? Their users don't need them so why would they bother?
And what they (MS) could loose? Nothing really. .Net crowd will eventually switch to native WinRT - why wouldn't they? and those guys from C++ community who decide to use WinRT with the syntax from .NET world will be and extra addition to customers/users group of MS. Perfect plan. No chance for a loss.
@PFYB
Assuming an almost 100% ISO C++ simple and elegant solution for creation and consumption of WinRT objects can be designed (will definitely love that), could it be implemented by a third-party entity (e.g. not Microsoft) or are there some technical limitations?
Thanks a bunch for wonderful posts, everyone.
Jim Springfield, thanks for partly answering my question. It is good to know that large applications will be converted to WinRT, this wasn't the case with .NET and C++/CLI. Just to be sure, those applications are currently written in C++ and are going to use C++/CX and only on the border, right? Because I was asking about that. Thanks in advance.
PFYB, your proposed code is exactly what I would like to write. I am very eager to hear what Microsoft people will say about that way to talk to WinRT.
Herb Sutter, regarding your answer to Glen:
Glen: "C++/CX as it stands requires me to wrap all of my libraries - my classes, i.e. the ones that *I* produce; at the class level"
hsutter: "C++/CX doesn't do that, WinRT does."
I believe we all understand that the requirement to wrap code to enforce a specific ABI comes from WinRT. But please note that .NET languages do all the required wrapping automatically, while with C++/CX a developer has to do the wrapping manually, all the while polluting the code with non-standard constructs. This is what I believe Glen meant, this is what I believe is bothering all of us here. We would like you to do the wrapping automatically. Doing it manually is too much work. Not only that, it is too much stupid work, the work much more suitable for a machine than a human being. Plus, of course, this work is making your code non-portable. Plus, there is a chance that in a couple of years you will have to redo this work. All of this has to deal with C++/CX, not with WinRT.
Thanks again, keep it coming folks.
Herb
I appreciate your candid efforts to explain your thinking to me re: c++/CX.
I know the problem space is hard - your long wrestlings with it demonstrates that. I hope you will appreciate then, that I and others must climb the same mental mountain as you did, in order to arrive at anything like the same understandings you have; let alone try to reach any of the conclusions you did.
Infact, it's even harder for me because: a) it's not my job to do it; b) I'm not a language designer and even claim to be a good library designer and c) I don't remotely have the experience, skill or incentive that you have.
However, in my favour, I am not constrained to support any Microsoft management or technical legacy and I do still possess, for now, a fresh pair of eyes for the problem.
Most importantly, like the others here with me, I have a vain, idealistic, call it naive if you wish, belief that a better solution for C++ exists than what I see MS proposing so far.
In that context, I ask you to bare with me and understand that I have no lack of respect for your views even if at the same time I am compelled to challenge C++/CX and WinRT, to either its betterment; its death; or my enlightenment; whichever comes first! :)
It's just that having wrestled with COM; and to a lesser extent C++/CLI; I'm now being told C++/CX will be the next big thing. However, what I feel is that C++/CLI was a patch to a .NET problem. That that doesn't make it the right solution to model anything on when .NET should never have been created with such a disregard for C++ and when C++/CX doesn't fix any more core issues for the C++ developer than C++/CLI did I fail to see the value in it. Whatever it might do for .NET or Microsoft.
Herb, C++/CX doesn't change the game enough. COM, C++/CLI and C++/CX all have some terrible things in common. They all make C++ developers manually wrap their own code and they are all have contributed to a lost decade and a decline of C++ on Windows.
I remain convinced that either a better or different solution must be achieved, if the only other choice is to repeat the wrapping process again and lose yet another decade; or risk further decline of C++. C++/CX needs to be highly transparent. However you do it. Or the model is wrong or the technology is not sufficiently baked. Pick one, but not both.
MS has a history of half baking things. If COM had continued advancing the first time around to where you have progressed it today, regardless of COM's merits (which is another issue), just look at how much pointless buggy code you would have saved from being written!?
Do you think at that time people didn't say COM was too hard and not right at the time? Did MS listen then? It took a decade for it to be revisited and look at the wake that left.
I assert again (perhaps naively) that either things have stopped too soon again and that more needs to be done to make C++/CX and WinRT transparent. Or alternatively, things have gone too far and the model is too complicated.
None of Microsofts past developments since MFC have done anything but dilute or diminish C++'s viability on Windows and I don't see why I should trust that C++/CX is not more of the same when it doesn't appear to change the game. Until C++/CX becomes transparent (or non existant), I don't know that its too dramatic to say we have a problem that will either bring down C++, or Microsoft, as the force it is today.
What I do know though is that Microsoft, through COM, C++/CLI and .NET are now C++/CX are doing more to damage to C++ than promote it. I know which side of fence I am on if it comes to it. When you have to fight your customers this much, you should know something is wrong. We do.
Only time will tell if I am right; but if C++/CX can be made transparent, the size of that prize will be far greater for C++ and C++/CX than even the size of the code MS has already consigned to the dustbin with its COM improvements so far.
The size of that prize merits the size of the dialogue to find the solution and the great thing about ISO C++ is that we own it. It is our job to fix this situation whatever it takes and we have a workforce the size of the planet to do it.
I just wish this dialogue had happened before C++/CX or .NET even had started.
In my next post, I'll spell out the C++/CX wrapping nonsense and everyone can really hammer the truth of what I'm saying.
@hsutter:
You are asking Glen:."
My answer to that would be:
I am all for making better abstractions, but we have to do this in an organized way. Our language is complex and so if everyone just adds whatever he feels like adding, all benefits from these additions will be erased by the chaos that will arise. Additions have to be vetted so that they extend the language in natural, intuitive ways, so that they don't fight each other, so that there is not too many of them, etc. If you want to add virtual functions, go ahead and suggest this to the standards committee. If they take it on board, and I do think virtual functions are genuinely useful, so I don't see why they wouldn't, great. If they don't, so be it.
What you are doing with C++/CX is something else. You are putting the cart before the horse, like others said, with properties, events and numerous other things.
@Matt:
Assuming an almost 100% ISO C++ simple and elegant solution for creation and consumption of WinRT objects can be designed (will definitely love that), could it be implemented by a third-party entity (e.g. not Microsoft) or are there some technical limitations?
The sad thing and the only reason we are having this long discussion on this thread is that the ISO C++ solution as proposed above requires cooperation from the compiler and this is something that is much easier for Microsoft to do than for any third-party. The Microsoft compiler already parses C++ code and extracts WinRT concepts from that code, the only two things Microsoft has to do is make this work for regular C++ code instead of for their extended syntax, and to generate WinRT concepts as both metadata and C++ source code. We can achieve the same using, say, LLVM, but it won't be easy. If it was easy, or if no cooperation from the compiler was required, we wouldn't have had this thread, we'd just do it and use the result.
C++: "good for writing libraries"
---------------------------------
A stated goal of C++ is to be "good at writing libraries".
For many developers that definition of "good" means:
1. reaching the widest practical audience
2. with the most maintainable and efficient libraries.
That means not writing repetitive code, being able to achieve the absolute best performance possible without making too many compromises on the code quality or maintenance and being able to reuse pieces of other code whether written in C++ or in other languages and to expose pieces of C++ code for such reuse.
Conversely:
If a language doesn't help with following best practice, it creates a maintenance mess and will be challenged by other languages.
If a language or library doesn't target a wide audience, it consigns itself to a niche role and a small market. Neither reality encourages developers to use such a language.
If a language doesn't encourage maintainable code, it deminishes its own profitability in whatever market it is in and therefore its own lifetime, such a library and its developer will be overtaken in its market.
If a language diminishes itself or ignores its core customer base, it risks its own viability and that of the developers using it.
C++/CX: "good for writing libraries"?
-------------------------------------
The goal of "reaching the widest possible audience practical" can only be met by:
1. supporting other languages
2. supporting other platforms
For a language to support being called from other languages, the called language must have its own notion of class described in metadata. If it doesn't, the "other" languages notions of class cannot easily map onto it and the code of the implementing language cannot be called.
C++/CX chooses not to generate metadata for C++'s own class: the iso class. Rather, it chooses to leave the iso class invisible to other languages and instead, C++/CX augment C++ with a second notion of class: the ref class, for which it does generate metadata.
Having two notions of class in a language is a rare design and appears to leave the developer and the language at odds with itself:
Does the language need two class types?
Does a developer support one, the other, or both?
What does one "do" in each class type?
What does it mean if one doesn't do what one expects in either class type?
How will future ISO committee decisions impact or be impacted by C++/CX?
Will the ref class and the class ever be reconciled into one and does it need to be?
What does it mean if it isn't reconciled?
For an ISO C++ library to become usefully visible to other Microsoft languages and platforms, it inevitably MUST use a ref classes, as its own class is invisible.
But for a C++ library to support its own customer base, itself, the language itself, and other platforms; it inevitably MUST use iso classes.
Supporting just ref classes would be to abandon C++. The cross platform viability of even the simplest function would be lost if it were tied to Microsoft by being implemented in a ref class. Even on the same platform, non Microsoft clients would be shut out from using code if this approach were taken. This is bad for C++.
Supporting just iso classes would be to forgo having ones libraries being easily useable from other Microsoft languages and platforms, like Internet Explorer. This is bad for C++.
Supporting both class types invites duplicate code and reduces maintainability. ref classes introduce the reality that there are now two choices where to find and implement a function and two ways in which to invoke it and two decisions on what to name it. All of this is to define, reach, and name the same implementation. This is bad for C++.
The Reality of C++/CX
---------------------
If one wants to avoid duplication, retain consistency and actually even just use C++ at all. The only solution is to have the ref class be simply the delegator to the iso class where the sole implementation must exist.
The above solution yields:
* the ability to support being called by Microsoft languages and platforms.
* the ability for even the most trivial of code bases to not become tied to Microsoft.
* the ability to retain clarity that one isn't implementing the same thing twice or the same thing differently by mistake.
* the ability to support ones existing core customer base (C++!) on other platforms and on the same platform!
Observations
------------
1. Two classes are required to for every one concept or service in the object model.
2. Both classes work together but they are expected to provide the SAME identical service. If they don't, the object model is duplicitous.
3. Logically, there is and only should be one implementation of the same service.
4. Having two classes for the same thing creates ambiguity about what to write where and how to name either.
5. The two classes that provide the same service should have the same name yet they can't because their class names must be different.
6. Even if both classes have the same name (through different namespaces), it is inconvenient and exposes two ways to invoke what should be the same implementation of same service.
Practical Realities / Coding Standards
--------------------------------------
1. The ref class MUST be the wrapper because only it exposes the metadata so only it has visibility to receive other language calls "easily".
2. The iso class MUST be the implementer of the service. Otherwise straight C++ support is lost and the library has become tied to the platform.
3. The ref class MUST not implement the service, just wrap/delegate to it. Otherwise the object model design becomes ambiguous and the execution duplicitous.
4. Wrapper and implementer work to provide the same identical service. If each model provided a different service, it would likely be an error.
5. Both classes should have as similar name as possible and by convention so knowing one yields the other or one is inviting a maintenance headache.
Conjecture
----------
The ref class is always a "simpler" entity than the ISO class. It's type system is simpler; it's delegation system mechanical; and its implementation should be minimal for reasons given. It is just the delegator.
The relationship between a ref class and an iso class is likely to be a one to one mapping of classes, functions or parameter types.
When following the ideal development methodology, ref classes serve no purpose except to be a reception point, translation point, and a delegation point to iso classes. Therefore, in some sense, ref class serves as an abstract iso class.
Abstractly, directly instantiating a ref class in an ISO context is redundant and therefore ref classes appear to have no strong place in the ISO C++ type system and the name clashes are a nuisance.
The navigation from a ref class to an ISO class is a mechanical mapping, configuration of that may be of value but the receive, forward, implement relationship remains the same.
What has been described is just the problems with authoring side. This doesn't begin to discuss the use side. A C++ client would not wish to know anything that would make it aware it was using a non C++ class or a C++/CX one or that would entail more problems.
Conclusion
----------
C++/CX as it stands introduces significant complexity into C++ and is a competitor to C++. The burden it places on the "straight C++ developer" appears to be one that is detrimental to the C++ language and to the productivity of the developer using it in general and the maintainability of products created in it.
C++/CX primary issue is that it creates a non transparent "borders everywhere" burden in that it practically requires the C++ developer to manually wrap any iso C++ class that it intends to expose to another language, in its own C++/CX specific notion of class. In reality the C++/CX class must simply delegate to the ISO one to avoid other impairments.
The model ascribed to C++ by C++/CX is unique in that other languages like C# will not work this way and no two class model is likely to be seen in any other language.
It is hard to see how this model promotes C++ development. It is unique to "extend" a language by adding a second notion of class, but in this context "replace" seems more apt than "extend".
Classic COM and C++/CLI where this type of wrapping model have been adopted before resulted in significant diminishment of the C++ language, by forcing significant maintainability and productivity burdens on the developer. Since C++/CX does not change this pattern there is every reason to think it will be the same.
The model should be scrapped or enhanced to the point where the burden is removed.. This alone speaks volumes. Earlier in the thread Charles said that Microsoft cares a lot about standards and someone remarked that the extent of that is before our eyes: C++/CX is a huge pile of non-compliant stuff. The lack of "R3. Comply to the latest language standard." is one more illustration as to how much Microsoft really cares about this.
Let's also see how the Visual C++ team delivered on the second goal, expanding on it as Herb Sutter does:
To minimize the amount of non-ISO C++ code the user has to write. FAILED. Using C++/CX immediately makes your code non-ISO C++. As soon as you convert a class to a ref class, you have to convert 'new' to 'ref new', you have to convert pointers to that class to hats, et cetera, C++/CX tries to propagate outwards.
To not create a disincentive to use (V)C++ on WinRT. FAILED. It is simpler to use C# or Javascript than C++/CX. It probably makes more sense in the long run as well.
In effect:
R2. Be as simple and elegant as possible for the developer. Summarily failed.
@Matt:
> Assuming an almost 100% ISO C++ simple and elegant solution
> for creation and consumption of WinRT objects can be designed
> (will definitely love that), could it be implemented by a
> third-party entity (e.g. not Microsoft) or are there some
> technical limitations?
Well it's not so easy but could be done if few smart guys
work together. Normaly the most difficult part would be the
C++ parser but clang is progressing very nicely and should
handle the job well. Then you have to write the #import
parser for MS COM interfaces because I don't think clang can
understand them. But parsing interface definitions (without
function bodies) is not so difficult. Then there are the
winmd metadata files. Not sure if the specs are available
somewhere but if not it's easy to figure them out from
C++/CX output (they are supposed to be text files)
After you have the tool done you are no longer forced to buy VS.Next
- VS2010 compiler or even non-MS compiler would do as well. (but
VS2008 would not since the generated code will employ WRL which
uses some C++0x stuff like decltype, nullptr and type traits)
There are however some minor problems if code generation is done by
3rd party. For example you cannot use VS XAML editor since it will
still generate widget declarations and event handlers in C++/CX code.
Also you have to manualy set up the project to compile and link
generated source files. So using 3rd would not be so convenient as
if MS does it itself..
@Mort:?
@Everyone:.
One of the clearly stated goals for VC++ for Metro Style Apps - and the one that is consistently ignored on this thread - was to make playing in the WinRT sandbox easy for C++ developers, too (and let's not forget the tooling side of the equations, VC++, as Herb articulated clearly...). I wonder what your response would be if you were handed only WRL and this imaginary/speculative "reverse of #import with C++ wrappers" solution. What would the tools look like? The experience in VC++? Your workflow? Open questions, of course. But, there are more questions than answers here. Solutions are presented without defining, clearly, the question. What's the question?
Herb has spent a great deal of time on this thread. Thank you Herb. And thank you Jim. I think it's time to let him go and re-read what he has written, if need be. He's got a few other things to do. One of them is shipping C++/CX in VC11, which is going to ship. That ship has sailed. It left the harbor at BUILD.
The VC team is well aware that much better WRL documentation is needed and more samples that clearly demonstrate what they mean by C++/CX "boundary" layer programming with most of your code being ISO C++. It's understood. Message received. You're right.
WinRT (COM) defines the model, not C++. Ease-of-use was a key factor in the design decision at play here for C++/CX. You don't lose performance and control, but are afforded a simple high level abstraction for programming in an ABI-safe, ref-counted, multi-language/runtime-supported, COM world that IS WinRT. If this isn't clear by now, then I don't know what else can be said to make it clearer.
Look, I for one (and I speak for myself only - myself only) think it's unfortunate that of all languages that can be used to program WinRT apps, C++ was the only one that was extended. That said, based on I what I now know from this thread and from conversations with Herb and the VC team, there really was no other practical choice, one that met the bar for ease of use and COM friendly. Herb and Jim are not lying to us... They speak only the truth (they are the ones who discovered the truth in this case...) and we all should respect this and accept the fact that C++/CX isn't going away. It's already here. Personally, I think if you play around with it for a while, it starts to feel more natural - and this is why I said "it's just hats and refs, man". I know that might have come across as me belittling the issue - my apologies for that.
). Just don't expect Herb and Jim to answer your questions soon after you post them. Let them work, OK?
Thanks, as always, for speaking your minds and listening, learning, questioning, listening, learning, questioning. At this point, we really should (all of us, including me) respect Herb and Jim's time. This doesn't mean you aren't free to continue the discussion ( I'm sure you are going to blow up my reply - that's OK.).
@Herb Sutter: Thanks for interesting posts. Now, two things come to mind after reading your "confessions".
First, WinRT isn't really suitable to be exposed to C++; the model is too foreign to C++ even in CX garb. WinRT and C# are twins, which will never happen to C++, because it would require C++ to become C# with different syntax (C++/CX), but that will simply move the tension to the boundary with C++ uncomfortable with what is one side and WinRT (C#,CX) uncomfortable with the other.
Look at this example, how verbose and foreign to C++ it is: "Getting data into an app"
I'm not that much concerned with language extensions like properties, delegates, etc. as they are confined to the class. But I'm very concerned with foreign types and hats that will bleed outside and I don't see other alternative than a fence of native types to stop it. I’m assuming everybody agrees that WinRT types and hats have no sense in C++ applications and must be stopped at the boundary.
Secondly, the same solution coming back to me: don't expose the foreign WinRT types to the C++ applications. Instead create a C++ library with standard types, albeit with property and delegate extensions, but use WinRT types only as underlying implementation, don’t leak them to interfaces. If that limits C++ designs to consumption only, so be it, 99% of developers will be happy, and the other 1% may decide to suffer through CX.
P.S. By the way I liked what I learned about AMP from the presentations, and language extension didn’t bother me at all. But C++/CX is a completely different matter, it’s not just language extensions, it’s a conceptually foreign model with conceptually foreign types.
@Herb, you said:
"
Herb, as I've said to Charles before, I agree we aren't going to get an answer easily or quickly. I've also said flat out "we aren't ready for you yet", we'll likely waste your time until we're ready with the right questions.
I also agree that it will take exactly the kind of victorian efforts to get an answer.
However, this "design by thread" is of value. It's the only way the rest of us can quickly get the kind of information we need out there.
Yes it's not so much (arguably any) value for you, yet. But the rest of exactly do "need to design this together in a comment thread".
I have never thought anything other than that. Of course I'm pleased you are reading it. Just don't burn yourself out because we will need and expect you later.
You should be checking in though, as you are, as should everyone in MS development - to take notes of what is bubbling on the minds of the people here. That's the main value for now.
But doing this "design by thread" is really of value for the rest of us. It's the only way to for the "common people" to get together to get enough "common knowledge" out there quickly.
Common knowledge in this sense doesn't mean it's right, tasteful, informed or anything.
It's just a dialogue we need to get "out there" to get people together and contribute their common wisdom, goals, concerns, understandings, rants, whatever, together to hopefully (eventually) to do some common good. Sorry that it doesn't read well until then.
It definitely has value. It's the only way of illuminating this space and start something. I know its frustrating for you to see us where you were 12 months ago, but try to be optimistic. That's what grass roots and idealism is.
We aren't sold on WinRT and C++/CX and COM as it stands, this will help you see why. We want to proove to ourselves if our hunch is right. To do that we have to struggle to "fix it" or to define something better. We might fail, give up, or whatever.
But we don't have the same constraints you had, our goals aren't necessarily yours or Microsofts, so we might get different answers.
This thread is most certainly of value though. We need it to illuminate and find consensus on this subject. I believe this is the best way to do it and that it is working.
Wish us luck.
@Charles, you."
Charles, you have a gift.
Microsoft can consider what ever they like. But have you ever considered that customers don't have to buy it nor do they have to meekly lie down and accept your "constraints" as "their" constraints?
Watch the fur fly with that attitude.
I don't see how adding a second class to a language is a "small extension" either.
If I came and built my house on your back lawn next to your house and called that a small extension too, what would you say?
@Glen:
I fully agree with this sentiment. I am not suggesting that "you take it like a man", but rather "it is what it is". There's a difference. I'm all for this conversation continuing - just as I said in the post you've quoted from. I only asked that we all respect Herb and Jim's time - which is squarely focused on shipping VC11 (and this includes C++/CX, the WinRT-only C++ extension).
I love this conversation and the degree to which you have all poured your hearts, minds and time into making something meaningful out of this dialog. Thank you.
C
@Glen:
I don't think this analogy quite works in this specific case, but I respect your position and I understand what you're trying to say. I can't argue with you.
C.
Just like "for every C++ feature a regular C mapping of concepts can be done" (see for example the excellent example Herb made for virtual functions, or the article I pointed at the beginning of the thread on writing COM components in pure C). The point is: how this mapping is implemented? How is the quality of the resulting code?
Sometimes you have to raise the semantic level of code, and if this can only be done with language extensions, what's wrong with that? "Language extensions" to C (like the introduction of "class" keyword, "virtual" keyword, etc.) gave us C++. You can write OO code in pure C, as you can write COM components in C, but this requires a lot of work.
And "language extensions" to C++03 gave use C++11: e.g. you can have functors in C++03, but - at least for simple functors - I think C++11 lambdas are more convenient (for more complex stuff, good old classes with operator() overload can be better suited).
There must be a way to tell the compiler that "Screen" is a WinRT class.
The C++/CX way of doing so is the "ref" before "class". Is "ref class" much worse than say "class __declspec(winrt_export) Screen { ..." ? I'm not sure it is. (Frankly, when writing Windows C++ code, I used "throw()" as a way to tell MSVC compiler that a method doesn't throw, instead of the more verbose "__declspec(nothrow)"...)
And about instantiating Screen "on the stack" with the "Screen s;" statement, well WinRT classes have more complex creation semantics. There are blog posts that show how an apparently simple C++/CX syntax is actually equivalent to several steps:
The equivalent "pure C++" code is something like:
In this case, I welcome a language extension that makes my programming life easier (note that the above blog contains other more complex examples).
Note that if I'm targeting WinRT and Metro I'm already out of the "cross platform" kingdom, so it's not particular important to me if there is some non-standard non-portable language extension in this particular context (some pure C++ class like say winrt_event<...> would be equally non-standard and non-portable).
As a side note, I don't have experience in designing programming languages and implementing compilers, I'm just an "user" (as a C++ programmer) of programming languages and compilers. I think we can trust people in VC++ Team if they say they tried first without extending the C++ language, but it was not a good experience for the "end user" programmer, and they figured out C++/CX was a more productive tool for us.
However, it would be interesting if you and other guys could start an effort to try to offer a "pure C++ no-extension" approach to solve this problem of WinRT programming from C++, and let us know here. As I said, I'm open minded, I've a practical view of programming (I don't like "religious wars" in programming), and if you prove that you can do in pure C++ what can be done in C++/CX with a good level of productivity, that's good. But you have to build something concrete to convince me, not speculate on possibility to do X in this way, Y in this other way, etc.: just try to write some concrete code and tools, some concrete prototypes, and see what happens. Win 8 Dev Preview and VS2011 Dev Preivew are already available for download on MSDN. I'm curious.
Moreover, I'd like to say thank you to everyone who participated in this thread and spent his precious time writing insightful posts here, from PFYB to Jim Springfield and Herb Sutter, etc. I learned a lot!
Charles: it would be great if you could do another C9 episode on C++/CX with Jim Springfield (who is also the inventor of ATL - after several years, the best tool available today IMHO to do COM programming in C++), Herb Sutter and maybe invite also the great Larry Osterman (I recall some old pre-Windows 7 video of him), who seems to be working no more on the Audio team but instead on the WinRT team, so we can have both the VC++ Team point of view and also the Windows point of view.
Thanks.
@Charles
"it is what it is"
That would be fine if "what it is" was clearly understood, static, or finished AND if it were also the case that most people agreed that whatever "it" is was necessary or beyond significant improvement.
None of that is agreed. Clearly there is support for the endeavor to challenge C++/CX and WinRT to proove itself and to illuminate it in general.
Even if C++/CX were "done" according to Microsoft, that wouldn't mean anything to me on its own without a clear idea about what exactly what is done and how right it is and a community consenus if we should we buy into it.
And what exactly doesn't work in my house analogy? I could go further:
What if someone didn't just build a house on your own back lawn next to your own house and tell you its "small". What if they did more than that:
What if they hammered out a large house in private, without warning you, over the course of a few years, THEN plonked that on your lawn and told you it was just "small". Then what if they actually first tried to pass it off as just really being your own house too!?
Then what if they started telling you it is in your interest to have two houses on your lawn, one that you don't own, and that exists without a design rationale, code of compliance, or a council permit for its construction. What then if they proceded to get frustrated with you for not agree to why any of it was necessary!!
By all means you or anyone else, argue, pick fault with that analogy if you wish.
Anyway, while I'm here, I'll repeat something else, I remain totally open to language extensions, possibly many. I may even propose some myself, who knows.
I will totally support those who can justify not having them too.
Job 1 is to extract the core of what it is about COM that makes COM sooo necessary. Is it? And for who?
Job 2 is to figure out what language extensions (if any) enable us to use what is necessary, easily without requiring complete destruction of our existing code models.
Job 3 is to build a consensus on all of that.
Job 4 is to determine if what we come up with is actually good for C++ or C++ developers, regardless and how that improves on or not, against what Microsoft has already come up with and what it means in the big picture.
To answer your earlier statement, you're right that if that doesn't stack up for me, and C++ shouldn't do it, can't do it, or be made to support it without burdening the C++ developer with any more work than anybody else has, then I likely will firm to the idea that the object model is wrong, or its bad for C++, and that we don't adopt it. So far, thats where I'm at, actually, and trying to work back.
I believe that anything that fundamentally subverts the class concept in C++ in the way that C++/CX does, is bad for the language unless it can be done virtually transparently and/or is done through an WG21 standard.
I am focused that the wrapping burden be removed beyond any other issue. I am open to language extensions that achieve that. However adding any language extension that rises to the level of actually being a class, as is currently planned, is a whole other ball game. I believe that if such an entity is needed, it is likely to be buried as much as is possible if it is necessary at all.
I fail to see two "first class notions of class" ever being reconciled by WG21. The ref class has to go or be made transparent such that it is as good as gone.
Having two classes surfaced in a language is just too big of a jump for me to accept as "small". Frankly, I think you are crazy to suggest that its small, even if we can justify it to be necessary!
Maybe it'll take Bjarne's personal endorsement of C++/CX in clear specifics to steer me to some other perspective. If this debate achieves that much, that is good. Though I don't expect that and there maybe value to even him for this conversation to run its course.
Here are a few quotes I like, from the last messages:
Yes please...
I couldn't agree more on this one.
WinRT is COM
AFAIK, this is really not easy to do, and to do it right. It is very easy to misuse WinRT types and, for example, call IVector::GetAt(), which issues a COM call.
Thanks for providing insight on the story of C++/CX.
@Glen: As I said, I can't really argue with you. I understand and respect your position - it's your right to express yourself freely here. Keep it up.
I want to be very clear - I didn't say anything is "done". I said it is what it is (where is is, well, is). When I interview Jim, I'll ask him if C++/CX is "done". I don't speak for him or Herb or anybody else that does the real work of providing great native tools for C++ developers. Now that that is out of the way....
That said, as I said, I respect your position here - it makes sense to me that you feel the way you do. A solution to a problem was put forth and you don't like it or you're not convinced it was really necessary.
C
@pierremf: I wish you would have completed my thought. Here, I'll do it:
C
@Charles, you said:
."
I don't see anything contradictory in what I said.
I objected to the language having two notions of "class" surfaced deeply where that deepness requires heavy wrapping requirement.
Just because I am open to language extensions, that doesn't mean I am open to any. But nor does that mean I am objecting to all.
Hence, I have not batted an eyelid at "partial" for example, nor did I object to override or enum class in the past. I saw value.
If you read my post you'd even see I went out of my way to especially call out the exception which is class and explain the class specific case where even then I was open to if it could be seen to work without a burden and so far I can't.
Nothing contradictory all all. I don't know what you are talking about.
How about you explain why adding another class is "small". If it makes your job easier, try to imagine something bigger.
Go for it, your time starts now! I'm deadly serious funny guy! That should be an easy challenge.
@Glen: So, you want to argue, eh?
I just don't think adding ref class is such a tremendous burden as to give rise to the analogy of building a house in my backyard without my permission. If that were to happen to me, you can bet that your structure would stand for only so long...
I just see the value in making it easier for me to build a new structure that satisfies the new requirements of a new model for building new structures in my neighborhood. COM evolves, man. Like everything else.
Now, arguably (that's what we're doing, right?), that's not a very good analogy, but I'm not as smart as you are, Glen. As I said, I respect your position and have not said you're right or wrong. It is what it is, in fact. You are entitled to your opinion and position on the matter. You shouldn't spend much more time arguing with me - there's smarter people for you to engage on this thread.
I don't think designing language component extensions that make it easier for C++ to plug into foreign object models is such a terrible thing. That's just my opinion. Of course, as I've been quoted, I also think it's a really hard decision to make (extending a standard language to suit a specific platform) and in some ways, in the ideal, it's unfortunate, but I am totally confident in the people who made the call. Most positively. For sure. I trust Herb and Jim and Marian. I trust the VC team. These are exceptionally talented people.
I can't talk about the details of the underlying foreign (with respect to C++) object model at play here (WinRT). You'll need to watch the BUILD presentations I already linked to or read the MSDN documentation. If you haven't done this, then please do so. Then, come back and see if the color has changed.
To end this particular tangent (Glen versus Charles object-oriented analogy death match): OK. You're right. I'm wrong. Now, back to the regularly scheduled program.
C
@C64, good to see you here, thanks for your contributions too.
@C64/@PFYB and anyone.
If you get time, look for my ' C++: "good for writing libraries" ' post.
I posted some opinions in that particular post that sum up what i think C++/CX means to a typical C++ code base and how using it potentially makes everything microsoft specific unless you take extensive steps to avoid that; and why I actually think you must take such steps, but that to do so will be a massive burden to you and will dimish the value of ISO C++ if that burden isn't reduced.
It documents my core understanding of what doing a big project in C++/CX would mean to any code base and how I'd likely structure my code in that reality and why I wouldn't want to be burdened with that effort required to do that in the way I actually currently think it currently requires.
My perspective is that, C++/CX as currently defined, is a burden and C++/CX is not good for C++.
My further perspective is that it may be possible to make the wrapping purpose significantly more transparent and that the win for removing that would be huge IF it were possible.
If it were, it would:
a) significanly reduce the amount of wrapping code required by a magnitude
b) return the ISO class to prominance.
c) not dimish the C++ code base or language.
d) likely require (possibly many) language extensions but if most of them weren't needed all of the time or were attributes and if it did a), b) and c) it might be worth it.
To that end, I am busy trying to think of what kind of language extensions would do that and then trying to decide how WG21 friendly they'd likely be.
I'd love opinions on it and especially I'd like to know if people agree on how one would have to structure an existing code base to support both ISO and ref classes as they currently are defined, to support a situation where you wanted code a) to be used on other C++ platforms (just trival non platform specific code), and b) also wanted to expose the class to other languages and c) also wanted to expose C++ classes to other users even on the Microsoft platform who weren't wanting to be exposed to C++/CX themselves.
Hope that is clear as mud! lol
Thanks
@Charles
I'm really not spoiling for a fight nor trying to be clever but it is just tough when someone says this is a "small" thing but then you can't come up with an even bigger thing than the thing you are claiming as small or non tremendous.
I don't doubt that herb has come up with some really masterful stuff, nobody rates him higher than I do.
I can see his work will save large COM projects zillions of lines of code. The biggest COM shop in the world is MS. I can see how this all might be good for you (MS)! Just not me!
What I want you to understand is that there are many many many "shops" out there that just aren't in a COM world yet and having done some COM projects I would never take them down that road.
I can however see huge value in being able to expose C++ classes to other languages. But going for that is hard when:
50% (I'm making that up, but its a lot) of a COM'ified code base is just just forwarding noise. It decimates a code base if it has to wrap itself. I contend C++ has to or it strangles itself. C# just doesn't do this. Which are you going to use?
But this language extension is so big. I can't imagine anything bigger myself!
C++/CX encourages and incentivises C++ to go COM. If enough go there, everyone has to go there with the effect being what I said in previous posts.
If COM is of that much value, WG21 should be able to see it and try to absorb it/make it easier. If they can't, the road has forked for ever.
No other language that I can imagine will divide itself in this way. This might make C++ uniquely brilliant or uniquely dumb. I don't know yet.
But I must be really missing a trick here if this wrapping thing is even half as easy (i.e. not so tremendous) as you think.
If you or anyone cares to explain to me why it's not so tremendous, I'm listening. C++/CLI was no better in this regard.
I'm not a wiz at language design or library design either, that's why I'm soliciting other peoples opinions and challenging you and opening myself up to the same.
Close. I would rewrite your sentence to read WinRT encourages and incentivises C++ to go C++/CX.
See above.
Herb addressed this point. C++ is not infinitely general purpose. No language is. See his point, again, on C++ and foreign object models.
Both Herb and Jim stated that the net complexity (while also faced by the folks implementing such a solution) would be passed along to the end user programmer, like you. C++/CX, big or small, makes things easier for you.
Both Herb and Jim certainly are.
C
@C64:
The point is: how this mapping is implemented? How is the quality of the resulting code?
I agree. That was the original reason for the discussion.
There must be a way to tell the compiler that "Screen" is a WinRT class.
I agree. As I said earlier: "There are many different ways to direct the compiler to do this (pragmas, __declspec, predefined bases, separate command line switches, etc), we can talk about pros and cons of each."
Is "ref class" much worse than say "class __declspec(winrt_export) Screen { ..." ? I'm not sure it is.
If "ref" strictly before the declaration of a class was all C++/CX was adding, that might have been fine. But, no, we have hats, ref new, generics and everything else. I completely agree we have to talk about what exact code should be written in each case. __declspec is one way for the above, #pragma export(winrt) .. #pragma export would be another. That we have to talk specifically about each case does not mean it can't be done or is even terribly difficult to do. If you are skeptical about that and are willing to take Herb's word that this is terribly difficult, almost impossible, fine, but please realize that this turns it into a talk about faith-like concepts, not about technology. Yes, it is a pity I don't have a complete technical specification ready, I wish I would have one.
I think we can trust people in VC++ Team if they say they tried first without extending the C++ language, but it was not a good experience for the "end user" programmer, and they figured out C++/CX was a more productive tool for us.
Why does this have to be about faith? If there are technical arguments as to why not do wrappers, why not present them? Please forgive me for being blunt, but it very much looks like the team made a mistake in not going far enough with C++ and Herb is trying to justify that mistake by any means possible. So many posts from Herb with nothing resembling a counterpoint besides "it's so difficult, you don't even understand", this has to mean something.
However, it would be interesting if you and other guys could start an effort to try to offer a "pure C++ no-extension" approach to solve this problem of WinRT programming from C++, and let us know here. .. you have to build something concrete to convince me, ...
Fair enough.
As said, the approach suggested above requires extending an existing compiler. This isn't easy. Right now, the effort does not seem to be worth it, there is no hurry to use WinRT and if you absolutely have to use it, there are .NET languages and Javascript.
@hsutter:
Do we agree that the route of "extending TLB #import" leads to a full ATL + MIDL like approach? ----- ).
This is just word games. Yes, COM types are different from C++ types and there is more than one way to define the way C++ concepts translate to COM. Yes, we might have to convey some information to whoever is going to do the translation. No, you don't have to use MIDL. That's why I answered "No" above. If you think I should have answered "Yes", because using C++ is similar to using MIDL as long as both are used to define interfaces for COM, good. ...
Yes, it is necessary to generate wrappers. Our entire proposal is that you generate wrapper C++ code. That wrapper C++ code should wrap WinRT strings. What is your point here?
": public IInspectable" is not really C++ inheritance if you want it to have WinRT semantics.
We want interfaces to inherit from each other in the C++ way in the C++ code. We want them to do what's required for WinRT in wrapper code you generate. What is so difficult? Why do we have to go through this over and over?
There's not enough information to decide whether the intent is to define I1 as an interface or a class, since a class could be named I1 and could have all pure virtual methods.
Agree. This is something to talk about. Nothing too complex, but there are many possible solutions and we have to settle on one. Anything else?
As I've said before, the code doesn't carry enough information to generate metadata, do code generation, etc.
That's just you repeating yourself. Please say something specific.
As for the rest of I1, note that I1::G that returns a std::wstring will be slow if you provide only that much information.
See my reply to Jim Springfield which explains how you can avoid conversions between HSTRING and std::wstring.
We don't know whether each of X and Y is intended to be a class or an interface.
Yes, you have talked about this already. Again, this is something to talk about. There are many ways to go, we just have to settle on something.
WinRT requires X to have a default interface, but there's nothing in the code that says whether Y is to be the default interface for X. We could assume it must be, but then when you have "class X : public Y, public Z" how do you specify which interface should be the default interface?
Yes, this is something to talk about. Many ways to go again.
This is just scratching the surface. It is not by any means a complete list.
Of course. Nothing impossible or even really difficult though. A fair work, the kind you did for C++/CX, the kind we do day in and day out on our jobs. Nothing trivial, but nothing really difficult either. You just wake up, brush your teeth, drive to the office, get into the chair and do.
Oh, no. Please don't give me this "you just don't understand how difficult this really is, I have been doing this for years so I know much more about this than you do". C++/CX and WinRT aren't rocket science. Sorry, they just aren't. People have been mapping concepts between C++ and other technologies for years, it's a fair work, like I said, but nothing terribly difficult or grandiose like you pretend it is. Take a look at C# folks. They just took WinRT and mapped it into their language seamlessly with little fuss. Nobody has problems with what they did. Nobody. Why? Because WinRT is a better fit for .NET than C++? That's highly debatable. If you were among the C# guys you'd jump at that notion, you'd argue to death that WinRT is enormously closer to C++, since C++ is native and WinRT is native as well. No, the real reason the C# folks just took WinRT and mapped it seamlessly with little fuss is because they approached that as normal developers should, with little noise and no idiotic grand stands.
Herb, if you want to say that this all is really terribly difficult, please show it. Right now, you haven't shown anything particularly difficult. The total of your entire reply is this - how to specify whether a C++ class should map into a interface or a class or both in the WinRT sense, and how to specify the default interface for a class. That's it, everything else was just repetitions and irrelevancies. The question you bring up is not at all difficult. That said, it is very clear that you aren't really interested in discussing it. Your position is that since I don't have a complete technical specification for what is being proposed, this can't be done. This is pathetic.
@Charles:".
No, Charles. We want either "Yes, you are right" or "No, you are wrong, here is why". What we get from Herb is "No, you are wrong, but I won't show you why, this is all terribly difficult, you just don't understand, you don't have a complete technical spec so I will just point out to various things which you haven't yet mentioned in your posts and this will supposedly show that you, well, don't have a complete technical spec and thus I win". This is pathetic.
C++/CX is real, not imaginary or speculative like the reverse of #import with wrapper classes solution.
Right, this is your entire argument. Since we seem to be at our final words, here is the final word from me:
What you did with C++/CX puts C++ developers into a position where they can not easily talk to WinRT from their native language. They have to use another language. Due to its viral nature and bad language integration, C++/CX is actually a worse choice for a C++ developer who wants to talk to WinRT than .NET languages or Javascript. The latter languages at least allow you to see the borders clearly.
Me and my team are going to stay away from C++/CX. This means staying away from WinRT for some time, too.
Have fun..
Same for me, PFYB.
Thanks a lot for many insightful posts.
@JimSpringfield: I would think one way to map WinRT interfaces to C++ interfaces in the world of regular C++ constructs and wrapper C++ constructs would be: regular C++ constructs use regular C++ inheritance, no IInspectable, no IUnknown; wrapper C++ constructs use COM / WinRT "inheritance", derive from IInspectable, do QI. Out of interest, I sketched how this could look and while it is perhaps too long for a thread post, it was fairly straightforward.
Charles
>?
Exactly, I think crafting an ISO C++ solution is a viable option. I did read what everyone wrote, I base my opinion in part on what was said.
By the way, you say this:
>.
Do you mean to say that even if PFYB is right, you aren't going to say so? I hope you do realize how this sounds.
Please think what "sense" others on this thread are getting when reading replies from Microsoft folks.
My understanding was that the VC++ team tried several paths, including something similar to what you suggested, but figured out that the simpler approach for us "end-user" programmers was a language extension as C++/CX.
I don't know if you did some COM programming, I did with C++ and ATL, and while I think that ATL is the best tool for COM programming in C++, note that you have to write non-trivial code to create a "COM" class, instead the authoring process seems to me simplified a lot with C++/CX.
Just an example of some C++ ATL code snippet:
There are base classes, macros... I've not seen WRL, but probably it is similar to the above approach (maybe with some C++11 features that aren't present in ATL).
C++/CX seems to me to hide all this complexity, moving it behind the scene. This is why I kind of like C++/CX (but I've not experimented with it deeply, and in some posts you correctly pointed out some impedance mismatch with the rest of C++).
However, I think your request of showing concrete examples of difficulties encountered by the team in following an approach similar to what you suggested is fair. These kinds of examples may be collected and presented in a future video on C9 (something like "C9::GoingNative: C++/CX Backstage - A Design Rationale").
@PFYB:
As I said, this is a fair statement. C++/CX is not C++. Nobody is debating this. Now, in terms of .NET languages and JS, to be fair they see what their virtual machines show them. The CLR and Chakra do all the work here (interop with the WinRT object model, GC, etc...). C++ doesn't have such a facility (VM) so the work has to be done either at the library or language level. You know this, of course. I'm just stating, again, why the .NET and JS solution to this problem is not the same and can't really be used to argue against what was done for C++.
Your position is that going the language extension route changes the game and not for the better. You want technical reasons why the C++ wrapper solution wasn't deemed a viable approach. Right? Isn't this the crux of the issue?
C
No. I'm saying we have C++/CX and this isn't going to change. We also have WRL, which is a C++ library for writing exception-free COM that targets WinRT. We learned about WRL in the last episode of GoingNative. The point was made that you can (and Windows already does) write WinRT objects in WRL (so, not C++/CX). Right?
Herb and Jim are the right folks to comment on behalf of Microsoft. Perhaps I am just getting in the way. I will therefore step back and not add any more confusion to this thread.
Keep it going.
C
.
@Jim
I totally appreciate your comments. I don't doubt that C++/CX is a valiant compromise to fix and solve a lot of Microsoft legacy and desires.
However, after listening to all of the debate and some great ideas, I have to, again, forward my conclusion that C++/CX just can't be recommended.
It's not to blame you or herb, I think you've all clearly done some great work, some of the best work in many ways. You, especially, have done a great job of explaining yourself. Maybe the problem is the constraints you started with?
Regardless, I honeslty can't recommend C++/CX to anyone. If it's any consolation, I can't recommend C++/CLI either. I warmed to that before in desperation, but I got it sorely wrong.
C++/CLI is patch to an insufficiently performant problem that is .NET.
C++/CX is a homage to COM that still requires too much work.
Both make C++ the wrapper guy *for my own code* and require too much discipline to keep an ISO C++ code base sane and viable. Due to that, I sincerely believe C++/CX/COM/WinRT and C++/CLI do more harm to C++ than to promote them and that WRL serves no value in this either.
This is no C++ renaissance, far from it, just a native one. I have maintained that this was looking that way for some time. I am sad to have seen that demonstrated. I have enjoyed the discussion though and the public contributions.
But C# will just go from strength to strength in comparison to C++ on MS platforms and C++ is a laughing stock in comparison with its two notions of class, (one small one big as Charles would say) and the small work required to synch them, that is a burden unwanted. None of this is C++ itself's design failings IMHO.
I know you all have really great (for MS) reasons for why this was all done this way and constraints set for you that made this design inevitable. I've really tried hard to stay out of the "no language extensions" technical debates to give everyone as much scope to find any solution that I felt was workable that I could also see WG21 drawing together compatibly, but I just can't see that future.
MS now needs some heavy weight help from Bjarne or Dave A etc. to show that that future is possible. It's crazy that I have to say that because I know Bjarne and Herb communicate so I feel that he must in some way tacitly approve (to some definition) of your work, but I just can't see the light regardless.
As yoda says, the future, uncertain, it is, dark. Search your feelings.
So I searched my feelings. It comes down to this:
1. This design does more for MS than it does for me or my customers. Rewrite/refactor, for what?
2. I am not the wrapper guy. I will not manually wrap my own c++ in "c++" like this. I don't want signficant work to keep even the simplist ISO function not Microsoft specific or easily useable from other languages or modules or callers like IE.
3. I don't want any solution here that just damages ISO C++. I believe COM 1 did, .NET did, and C++/CX still does that. Where are they now? C++/CX makes .NET redundant and its sibling, but it is no better for me, just maybe MS.
4. If the COM model is what makes this so hard, maybe COM is the problem. Cut it down or out. If COM or something similar is what C++ is lacking, WG21 will surely roll it out. I have troubling seeing it will be C++/CX or a COM that isn't anything but drastically altered.
5. I have to believe ISO can draw what you come up with together that does not great a frankenstein. I'm not seeing it. If something like COM is needed, that's the door I will be watching now.
I am sure you've read my elaboration on that I feel C++/CX means to the average developer (me at least in both parts). I posted before "C++ good for libraries". Do comment, it's non technical.
That's what I feel. I will not recommend C++/CX to anyone as it stands.
In conclusion, for once, I agree with Charles.
Hey."
He is absolutely correct in essence on both accounts.
@Charles
> Your position is that going the language extension
> route changes the game and not for the better. You
> want technical reasons why the C++ wrapper solution
> wasn't deemed a viable approach. Right? Isn't this
> the crux of the issue?
Yes, Charles, PFYB, Tomas, KMNY_a_ha, myself and I think several others here wanted technical reasons why the C++ wrapper solution wasn't deemed a viable approach. I have to work hard to suppress my sarcasm at answering this after numerous posts above which ask this very clearly.
@JimSpringfield
To recap: IBaz is a C++ interface (deleted via delete and virtual dtor), _IBaz is its analog for WinRT (deleted via Release), IBazImporter is a way to wrap _IBaz into IBaz (deleted via delete and virtual dtor), IBazExporter is a way to wrap IBaz into _IBaz (deleted via Release).
>.
Why is this a problem? You do a call into WinRT via some other interface (ISomethingElse). The call creates a WinRT object that implements _IBaz. The wrapper code wraps _IBaz into IBazImporter. You get a pointer to IBazImporter. IBazImporter implements IBaz, you call some functions on IBaz. Some time after you decide you no longer need the object. You call delete on IBazImporter (automatic with smart pointers). IBazImporter calls Release on the WinRT object. This decrements the reference count on that object. If the object has no more references it is destroyed. If the object has more references, this means someone (likely you) handed a reference to that object to someone else and that someone else is holding onto that reference. This can be a different instance of IBazImporter or a different object depending on the implementation of the object and the implementation of wrapper code. If someone else is holding onto the object, it doesn't get deleted when you destroy your instance of IBazImporter and this is good, this is what you want.
> On the locally defined ISO C++ class, calling delete
> will actually delete the underlying object regardless
> of whether someone else is holding an interface.
Yes, exactly. This is OK.
If you get an instance of IBaz by doing new on a C++ class that implements it, you delete that instance by doing delete. Are you thinking about the case when you have to pass that instance of IBaz to WinRT and you don't want to hold onto it anymore in your code? In this case we have to replace the ownership rules from ctor / dtor to AddRef / Release. This can be done by creating an instance of IBazExporter and passing it the existing interface to IBaz transferring ownership using an appropriate ctor.
It's not an approach that's going to be implemented. This answers the bigger question: Is there going to be an ISO C++ solution to this problem instead of a language extension? The answer can't be stated any clearer than it already has been.
I'll leave it to Jim to reply to your reply to his reply. That said, let's please be respectful of his time. Also, I'd really like to interview him about ATL not just C++/CX.
C
.
@JimSpringfield
When IBaz comes from a local class, calling delete on it deletes an instance of that local class. When IBaz comes from a wrapper class for WinRT's _IBaz, calling delete on it deletes an instance of that wrapper class, which calls Release on _IBaz. The ownership rules are different, yet in C++ you always call delete and this either deletes the object or merely asks it to delete itself via Release.
> If it came from a local class, that will destroy the
> object and I probably don't want to do that.
Do you mean the case where you don't want to destroy the object because you handed a reference to it to some other code and thus you want the object which you originally created with 'new' to live by the COM rules now? I commented on that in my earlier post -- In this case we have to replace the ownership rules from ctor / dtor to AddRef / Release. This can be done by creating an instance of IBazExporter and passing it the existing interface to IBaz transferring ownership using an appropriate ctor (or make-function).
That's a great idea: the ATL inventor speaking of its own programming creature!
If possible, I'd like to listen also to historical perspectives on ATL, e.g. what were the problems at the time (maybe COM programming in pure C++ was too hard, MFC way of doing COM was too bloated, and so a better ad hoc library was needed? I also think ATL was very advanced for that time, because the library is heavily template-based, and uses several advanced techniques like mix-in classes and so called "compile-time virtual functions calls"); what design decisions were on the table, why they chose the option X instead of option Y, etc.
I'd like also to listen to something on WTL, which is built on top of ATL and offers nice object-oriented C++ wrappers around raw Win32 handles and C API functions.
I believe ATL is at the core of lots of Windows native software: e.g. shell extensions, IE add-ins, etc. maybe also software written by Microsoft itself, and snappy apps like Google Chrome use ATL and WTL too.
Thanks!
@C64: Thanks. I'll be sure to ask!
C
OK, Jim, I hope we can count this particular matter resolved by now.
Charles, take note.
My final word is that I believe I and others were not here to lay blame and to force the Visual C++ team to say that doing C++/CX was a mistake. What we wanted was a technical discussion on why an ISO C++ solution was not deemed viable. I was (still am) completely willing to be pointed to something that makes such a solution unviable. I thought though that if during the discussion it turns out that an ISO C++ solution is viable, maybe the team would be willing to do it for whatever version of VS, in some form. We never had the discussion we wanted. The summoned C++ heavyweight Herb Sutter spent the majority of his time talking about things that were irrelevant to the topic of an ISO C++ solution as discussed in the thread, to everyone's frustration. This is a shame.
In the end, it seems to me that an ISO C++ solution is completely viable, but we aren't going to get it from Microsoft due to stupid political reasons we don't completely understand. We will also have a hard time creating such a solution ourselves since that requires interacting with the compiler. This is a TOTAL shame.
The trust between me as a C++ developer and Microsoft as a maker of my tools has never been thinner.
I'm pretty sure people would complain less here if the C++ team had focused more on C++11. Not many people even care about C++/CX now, let's take a look at VC++ uservoice - there's the proof. We were expecting a modern GUI library to replace MFC, but what we get is something that can only be used for metro apps and has a horrible syntax.
Let's leave out the fact that the C++ team spent their time on C++/CX instead of C++11. What is there to stop them from implementing the C++11 standard now? GCC and clang C++11 statuses get updated every few weeks, and every time there's huge progress. Perhaps Microsoft should directly tell us to move to C# if that's their intention.
Dennis says:
"The trust between me as a C++ developer and Microsoft as a maker of my tools has never been thinner."
Here here.
It's clear to me that C++/CX is going to die a death like COM, mananged extensions, and C++/CLI before it. The only question is how much it again divides C++ users before it happens and if that means it will take down C++ with it? My money is on C++.
The ref class adds zero new invention to C++ - or language design in general. The ref class's only purpose is to serve as a vendor specific, lock in replacement, for something the language already has - for no greater purpose than to allow Microsoft to get under every class the C++ developer ever writes - at the developers own expense! - until the end of time - so that Microsoft can invoke those classes in a vendor specific way to support their own platforms like IE.
Are you nuts? Who will vote for that?
Your platforms are losing ground. IE loses ground to Chrome every week and has been since Chrome was released - just as Microsofts phone platform has also been perenially losing ground to Android and iOS.
The problem is obvious: C++/CX's lack of invention materially damages C++ by expecting C++ developers to enlist to manually wrap *everything* they write *for ever*, purely to help Microsofts platforms. This asks the C++ developer to simultaneously *harm C++ and their own productivity and viability in the process*.
Why Microsoft thinks C++ developers would support such a dreary plan this time, when they didn't before when it was under the guise of COM and C++/CLI that failed last time, is a thing to wonder?
C++ developers have never enjoyed such propositions inflicted on them but when Microsofts own platforms are under attack by better competitors like Chrome or more flexible ones like LLVM, for Microsoft to suggest this path again starts to border on insanity.
Microsoft, fighting your customers is the sign that is sent to you to recognise when you are doing something wrong. You may one day get back to helping your customers instead of helping yourself but by that time, if all your credibility is gone, what will it matter?
You are losing friends you can't afford to lose. I've told you why.
Listen to Dennis:
"The trust between me as a C++ developer and Microsoft as a maker of my tools has never been thinner."
Will there will be an compiler that produces ARM code that supports these extensions?
It's clear to me that you haven't been following along closely here. The point is that COM (well, WinRT, which is based on COM, which presents a programming surface that is much like COM, but it's WinRT. It's also COM...). Did you not watch the BUILD WinRT presentations I linked to ealier in this thread? Watch.
In terms of C++11, you already know that the VC team will release new C++11 language features post VC11, pre VC12. Herb stated that at BUILD. There's a lot of great information in the BUILD conference's keynotes and sessions that explain a lot of the Why. Seems this level of understanding is missing here (why are we doing this? what's going on here?). At least, this is what I can gather from an assertion that WinRT will die just like COM did...You do see the irony in such a conviction, right?
C
Charles, COM is not dead as a technology underlying WinRT but it is definitely dead as a technology that developers program for. Some developers still write COM code, but the vast majority only use COM if they have no other choice. In the past 5 years I have not seen any project that would use COM for what it was intended to be used - to separate code into components and connect these components together. I saw .dll's connected with plain C interfaces, .dll's connected with plain C++ interfaces (yes, this is vendor-specific), components talking to each other via .NET, extensions done via Python, extensions done via Lua. I never saw COM used for anything besides satisfying needs imposed by the system. As a technology that developers, not automatic tools, use, by choice, COM _is_ most definitely dead.
There is also a bigger message. I agree with Dennis one hundred percent. I am going to echo his words:
The trust between me as a C++ developer and Microsoft as a maker of my tools has never been thinner.
I just wonder if this "syntax" would/could 've been used in order to stay with ISO C++
class X
{
REFCLASS;//this is a macro, other tool may be needed but other than that nothing changes
};
continue from the comment above:
if class needs to be WinRT aware it has to have as its first line REFCLASS macro. That's it. Compiler or another tool is aware of that and based on that generates necessary info/metadata. Syntax and everything stays exactly the same as ISO C++.
KMNY_a_ha: I would go for a different syntax (I like the approach with pragmas suggested by Dennis), but, yes, this is the idea.
Are you sure?
DirectX (and, in particular, Direct3D/DirectGraphics) is a COM API, and is widely used to program best-selling videogames.
If you want to build Shell extensions for Windows, you have to use COM.
If you want to write add-ins for IE, you have to use COM: "[...] You can create add-ins using .NET and Script, but both have significant limitations as well as performance concerns. If you want to write an add-in of any complexity, you'll almost certainly want to write it in C++."
The new Windows Ribbon API uses COM as well.
etc. ...
C64
Yes, that's what I meant by "needs imposed by the system". Since Microsoft keeps using COM in their APIs, developers using Microsoft technologies sometimes have to use COM to some level as well. Frequently, all this means for a particular project is that someone writes a huge wrapper around the entire subsystem that uses COM - or reuses an existing wrapper written by someone else - and the rest of the team use that wrapper. The original purpose of COM was something else - to allow separating your code into components and let these components talk to each other using a strict ABI. As I said, in the last 5 years I have never seen anyone do that for their own code, all uses of COM that I saw was when someone's code had to talk to a Microsoft API that uses COM. To me that says that COM is dead for everyone but Microsoft.
@C64, as Warren says, including MS in your user base is dubious. One supporter of a product, who is also the inventor of it and most skilled user of it, does not make a happy eco system IMHO.
Even on Windows, MS authors COM classes, but barely anybody does unless it is trival for them to do so, which for C++ developers it isn't without a significant negative impact to their ISO code base.
Its barely used on any other platforms where it isn't Microsoft related?
For a technology that's existed for so long, that's pretty poor take up don't you think?
If MS folded, do you think COM would go onto to have a happy future? I doubt this will fly even without MS folding. I doubt there are any companies that use COM that aren't using it because of Microsoft. There must be a reason for that.
Please, can someone also inform me where the irony is that Charles is talking about:
If there is a level of understanding missing as Charles puts it, it would appear to be from everybody, because from the comments here, despite some 220+ posts, I'm not aware of a single person who has changed their mind to support C++/CX at all if they were not supporting it to begin with. Zero turn arounds seems a pretty poor success rate for such an effort and the customers willingness to partake.
Compare that to the 100% enthusiasm that more C++11 support has, that was garnered with zero effort required from Microsoft at all, but that despite that, it missed this schedules delivery bus. Now that is ironic.
What's to misunderstand about that?
How about a bet. I'll take a gamble that says, @PFYB, @jalf, @KMNY, @Warren, @Dennis, @Garfield etc. to name just a few, aren't suddenly now giant C++/CX, winrt and COM supporters now, and rushing out converting all, infact any, of their code over to ref classes? Do let me know. I'm keen to know if I was left the only one unsold or with misunderstanding.
It's great that VC Next might oneday (no dates) have better C++11 support, but I wont be buying it. This thread has demonstrated to me that any C++ renaissance is not safe in MS hands. The time to get out is now. Hitching a ride on the next big thing that isn't doesn't make sense.
@Glen, I certainly am not going to even touch WinRT. As've been proven here on this thread CX is a total unnecessary, foreing to C++ without legitimate right to exist.
But Glen, this will not bother MS. We are not their target customers - .NET crowd is. They will jump on CX. We (C++ guys) were just an extra - They (MS) speculated along this lines: "if it happens (that C++ guys will want to use it) that's fine, if not we (MS) can live without them quite happily."
They didn't care about us in the past and they will not in neither present nor future.
Why do you think this syntax is so .NET-ish? To accomodate C++ folks? Buhaha! That's so obvious - .NET crowd is what matters for MS not we.
C++ rules and rocks!
Forever!
Who cares about VC11? VC11 is completely broken by design!
see:
Until this BUG is fixed VC11 will be useless for most real world developers!
.
VC11 not supporting XP really seems like an attempt to kill C++ in Microsoft's world. .NET 4.5 will surely run on XP right? I've actually heard rumours that Microsoft pays universities to teach C# and NOT teach C++.
Whatever... So wrong on so many levels...
C
Yes. This is precisely what WinRT does - via a COM-based internal implementation - and WinRT components can be written in very different languages... Your arguing against COM, not C++/CX. If you don't want to use C++/CX to author WinRT components then - again...- don't use it. Use ISO C++ and WRL. Or, write your own wrapper. If it's so readily possible to implement a library-only solution, then do it. Nothing is stopping you as you have the low level abstractions (via, you guessed it, WRL) at your disposal. COM is the glue. If you don't want to write COM code, then that's the issue, not C++/CX. The whole purpose of C++/CX is to abstract away COM complexity. As Herb says, any way you choose to do this will require some form of extension (COM is a foreign object model to C++). C++/CX has been deemed the most user-friendly approach to solving this problem. Feel free to prove us wrong.
C
@Charles didn't we prove you wrong (to MS, mr sutter and you personally) by giving a number of differently approached counter examples written in pure ISO that would achieve exactly the same goal as C++/CX and actually would conform to ISO C++?
Are we having/had the same discussion here?
And as for mr sutter and his attitude:
"hey, this thread has half the word count of exceptional c++! I'm not gonna read it!" - pathetic
You also are saying:
Yes, but in order to consume it you use C++ syntax and you have C++ semantics. This is NOT true for C++/Cx - and that's the whole point, I and others were trying to make but apparently this genius mr sutter couldn't neither grasp it nor explain why staying within ISO C++ would be not possible and he and his team had to come up with CX. WOW!
The only person here from MS who has to be given fair play star is Jim Springfield. I and others may disagree with him but at least he didn't try to pull wool over our eyes, on the contrary to mr sutter and Jim at least read this tread and listened to what we want/have to say.
As for over all mr sutter's appearance here - WOW!
Charles wrote:
> *C++/CX has been deemed the most user-friendly approach
> to solving this problem. Feel free to prove us wrong.
I am working on the code generating approach discussed above. In this phase I chose a small subset of WinRT and I write all the code by hand but I am clearly separating user code from the generated code. User code is ISO C++ without hats, ref news and Platform::Strings so it's much more easier to use than C++/CX. Generated code uses WRL. I was already successful with deriving Application class and loading some XAML into it. When I have UserControls and events working I post the link here. Then we will have something concrete to discuss and we can prove the redundancy of CX.
@Tomas: Great. Code speaks louder than words. This has been a thread of mostly hand waving...
Good luck, Tomas!
C++.
@Charles
"Great. Code speaks louder than words. This has been a thread of mostly hand waving"
Thanks. I suggest you quit.
You forget that it's Microsofts job to do this, not ours. We don't get paid for it. One might call it arm waiving, saying hey, don't do this, but if it suits you to characterise that as Customers being lame "hand waivers" for their efforts, go for it, but its an unusual stance, particularly for someone working in developer engagement as yourself.
Since that's your core belief, let's talk about hand waiving. What have Microsofts hands been doing for C++ users since VC6?
1. Diluting C++ with Managed C++ extensions which were an epic failure.
2. Diluting C++ again with C++/CLI, a whole new language to target a managed runtime that was less peformant than what it replaced.
3. Diluting C++ again with C++/CX which is largely the same in syntax and tact but repurposed to flip flop back to native all over again.
4. Routinely closing out bugs in a condescending way, to developer consternation, rather than fixing them.
5. Flunking the phone market with no C++ support still, despite getting whipped senseless by Google Android and Apple IOS.
6. Failing to implement as much of C++11 as those rivals that are doing the whipping like apple/clang, gcc, and ibm; apparently because "Microsoft didn't know customers wanted it".
7. Bystanding as the stock price of those same competitors rises beyond Microsofts, evidently without a clue why.
8. Contributing pitifully back to the C++ eco system considering Microsofts size, budget and abilities despite Microsofts own reliance on C++.
9. Losing market share and adopting restrictive practices. VS11 drops XP support as IE9 did. Chrome retained XP support and now gains on IE9 on a daily basis on past and future platforms. Watch VS decline too. Go figure.
10. It engages Customers too late in the design cycle, fragments their market and insults and thwarts their customers needs.
In this time of Microsoft failing to execute, everybody else has.
Microsoft seems content to just insult their Customers. Labeling them as hand waivers just rubbishes their aspirations.
In contrast look at Boost:
1. It has contributed many full libraries to standard C++ with more to come.
2. It has fostered great conventions that are as much about C++ as Boost.
3. It has even diluted its very successful Boostcon brand FOR C++, renaming it C++Now for the greater good of the language and its customers. Smartly realising what's good for C++ is good for itself and everyone else.
See:
4. It routinely engages in ideas to meaningfully extend C++ in both language and libraries in a thoughtful, consultative way, with great yet simple discussion threads. That aren't just all about themselves.
See:
5. It has credibility with its community far far far in excess of anything Microsoft can or will muster, while Microsoft has virtually none in comparison.
So next time you want to talk about hand waiving, think about that. Then put your hands over your mouth.
While you are at it, since you are "encouraging" Tomas's efforts, do we take that as a guarantee Microsoft will use it, if it works? Or was that just you hand waiving?
C
@Glen: Well, I provided links to both folks talking about WinRT internals and folks talking about C++/CX internals (and showing you the exact code that hats and refs compile to...). No hand waving there. We care about C++ and C++ developers. You have to understand that the VC++ team has many masters. ALL of our core products are written in C++ and compiled with cl. The team that makes this toolchain must support things like Windows, Office, SQL server. How do you expect them to do everything you want when you want it? They DID provide a non-C++/CX solution to the WinRT component authoring problem. How many times do we have to say the same thing. Tomas gets it. He's actually using it in his exploration of an alternative solution. One more time: WRL.
Please understand that your perspective is appreciated - if it wasn't Herb and Jim wouldn't have spent time here explaining themselves to you. We wouldn't have a thread with over 230 replies. I wouldn't spend time on this thread and salute you in the latest GoingNative episode. You watched it, right? We welcome your feedback. We also appreciate what Tomas is trying to do - in code, not words. We've already said it's possible, just not practical. Perhaps Tomas will learn that. Perhaps he'll prove us wrong. That said, as said many times, C++/CX exists only to make it easier for C++ developers to prorgam a single platform, a single ABI, a single product.
Please give the VC team a break. They too, were misguided for 10 years as we - Microsoft -assumed .NET would be everything and everywhere and everyone would develop in C#. Can they have some time to catch up? Please stop casting them as incapable. In contrast, the VC team is composed of some of the most talented engineers in the industry - and they love C++ developers and especially C++. They ARE C++ developers... They compile Microsoft.
Respect.
Enough of this. Why don't you come to the GoingNative conference in Feb (all of you, please) and we can debate this stuff over some beers?
Peace,
C
Charles
I am not sure what exactly is your point on COM. I believe I expressed myself clearly enough and I am not sure if you agree or not. Perhaps it is best not to continue.
I would like, however, to respond to this:
> This has been a thread of mostly hand waving.
You imply that those of us who were asking you in this thread about why you didn't support WinRT in C++ without extending the language are somehow in the wrong because we didn't provide you with the exact code that implements this vision as it applies to your compiler? Is that what you are saying? This is ridiculous.
The thread can be summarized as this:
Users > Hey, we don't like C++/CX, here is why. Why didn't you guys do it all in C++, here is a possible approach, are there any problems here we are missing?
Microsoft > C++/CX is the best way. We won't point out any problems in your approach, but we assure you it is a bad one. Just trust us.
Who is hand-waving here?
@Charles, you know I really, really would like to believe in that but first:
1. Actions speak louder than any voice - where is C++ at MS? What are the plans for C++ at MS? Do we as devs know anything about it? The only thing we know is that you (MS) don't care about C++ and C++ community (because of the actions MS took, not because what MS says)
2. Would it be fair to say that now MS assumes everyone will develop in C++/CX? Because mr sutter says it's great, so surely it must be, mustn't? O my God...
3. Would it be fair to say that if you come to someone's country you behave according to this country rules, not the one from your homeland? C++ isn't your world/country yet you treat it as if it was invented/owned by you. How arrogant is that?
Enough is enough. Too many times MS threated C++ community like not even second class citizens. Arrogant, dismissive attitude was the way to go. Even few weeks ago, what was it? C++ renaissance? Why didn't you play straight? Why did you pretend, hiding behind names (C++). What does it says about your attitude towards C++ and C++ community? And as for Diegum expanation for lack of C++11 support in VS2012 - because we didn't know you (developers) want that. How arrogant, insulting is this? Why can't you (MS) get it that people are not sheeple?
Charles, on a personal note, I really have nothing against you. I know that we've had little "incident" in the past but it's a past. I think that you are ok guy. I also have a weak spot for a native americans since childhood (would you believe that my wife's sister is married to a Blackfoot and they live here in Ireland few years now?). But what I'm driving at is that (if you really care about C++) you are in a very unfortunate position that you work for a company which interest you obliged to protect and those interests are nowhere near yours and ours, and you just simply cannot say what the things really look like. We on the contrary can. From the perpective of C++ community MS (with mr sutter as a main C++ architect) betrayed this language and this community - once again. As simple as that. And as long as this won't be fixed no words, no amount of beer, no plastic beads
will bring happines and good relationship between our community and the company you work for.
Peace
As a sidenote w/r to VS - I personally stopped using it at home shortly after build conference, and judging by how things are going for this once great IDE won't be using it in the near future.
Not sure what you mean. What point?
Well, no. Not really...
Some people here said we did the wrong thing, that C++/CX was unecessary, that there's a better solution, an ISO C++ solution, a reverse of #import solution with wrapper classes... Herb and Jim provided answers and insights into why (and how) this extension was deemed the right solution. Please read Herb's and Jim's replies. Then, read them again.
We will have to agree to disagree and move on, Warren...
Tomas, good luck. Please do share your findings
C
Sorry, Charles, but it is not just that we will have to agree to disagree. What you are saying is just not true. Neither Jim nor Herb provided an answer as to why a language extension was necessary. Neither provided a significant critique of the approach with #import and the reverse of #import.
This is not a matter of disagreement. This is a fact. You can't dispute this.
You say we have been given answers, Charles, but I don't see them. Nobody sees them except yourself. You can't point to the exact posts and lines with the substantiated (I will repeat this for you: substantiated! "we did C++/CX because it was the best way" does not count!) answers, yet you say we have been given these answers. This IS hand waving, and the guy who does the most hand waving here is you, Charles.
I am out of here.
@Warren: I'll copy the last paragraph of Herb's very first reply, which echoes what I and others have been saying (though not as eloquently as Herb) as my final word on this thread because it contains the most salient points, the ones we're apparently endlessly talking around (in circles):
This concludes my time on this matter and this thread (let's revisit in 6 months or so, let's see what Tomas comes up with, too).
Thanks to all of you for the discussion and for refraining from taking this into a fire pit of conversational discord. I'm proud of all of you for keeping calm, not slinging insults (though we came close in a few places, but never crossing the line, not really).
In the spirit of Thanksgiving, I'm thankful that we have an open environment like Channel 9 to have these sorts of conversations and that smart, passionate and opinionated developers have a safe place to argue, in peace.
C
Very good, Charles, this quote DOES NOT ANSWER the question we have been asking you all this time - why a language extension was necessary. You claimed a number of times that this question has been answered. Everyone disagreed. I challenged you to point out what the answer was. You failed to do that.
Having an open environment like Channel 9 is great, but this environment is of limited use if all you are doing with it is delivering lectures and basically ignoring all feedback.
Happy Thanksgiving.
Warren, exactly!
Charles, for the last time:
1. The topic of the episode was C++/CX.
2. In the comments many people expressed their dissatisfaction with C++/CX, largely because it seems totally unnecessary. These people explained what the C++ solution would be and asked you what was wrong with that solution.
3. You invited other people from Microsoft to take this question.
4. The invited people did not answer this question. There was a little bit of talk around the topic, but nothing direct. We clarified and repeated our question again and again, waiting patiently for when you will finally stop talking about side topics and take the question seriously. It never got to this.
Now you pretend like our question has been answered. Yet you fail to communicate what that answer was.
Not good.
Warren is right, Charles. It is good that you have this site where we can ask various questions. But being able to ask questions doesn't matter much when these questions aren't being taken seriously enough as to be actually answered.
Happy holidays to all.
@Dennis @Warren:
Read this again (and again, if need be):
Then, read this again (and again, if need be):
Then, read this again (and again, if need be):
Then, read this again (and again, if need be):
I don't think anything else needs to be said on the matter. It's clear that:
1) You are free to think what you want
2) You can express these thoughts here, freely
3) Your questions were answered (in my opinion) - read the replies above and let the information sink in. Don't just assume what's being said is wrong because it's not what you want to hear.
4) C++/CX is shipping in VC11 and will be the way you wrap ISO C++ inside WinRT components - if you choose to (it certainly makes targetting a single ABI much eaiser for you).
5) You don't have to use C++/CX to author WinRT components in C++ at all. Use WRL - as you learned in the WRL GN episode. Watch that again, for a refresher on the subject:
Cheers,
C
I thought this thread was finished but apparently not.
Charles, all you did was relink the same posts that do not contain any answers. It is telling that you can not say what the answer to the question posed by Warren and Dennis and other guys here is, for third (or fourth?) post in a row. Stop this. This is dumb.
Take your own advice and reread the replies you link. Let the information "sink in" as well. These replies DO NOT ANSWER THE QUESTION.
Charles
I reread your posts and I think it can be that you just don't understand the technical side of what we are talking about here.
This quote of yours:
> We also appreciate what Tomas is trying to do - in code,
> not words. We've already said it's possible, just not practical.
> Perhaps Tomas will learn that. Perhaps he'll prove us wrong.
The question of whether a third party can do what we would like to have in C++ instead of C++/CX has appeared in this thread more than once. The answer is that a third party can do #import by parsing metadata, but doing the reverse of #import would be very hard for everyone but Microsoft since this involves parsing C++ code. You do the parsing as part of your compiler, we don't, so doing the reverse of #import is much easier for you than it is for us.
In terms of #import, me and some other people on my team did write a prototype of a tool that takes a WinRT module and generates C++ wrappers for that module. I guess Tomas and maybe other people too are working on something similar. It works, but it is not enough. We do need the reverse of #import, and we can't do it without taking an existing C++ compiler or code checker and putting the required code generation into that compiler. If we had our own C++ compiler, this would be easy. But we don't, so this is difficult.
Neither me nor Tomas nor any other third party can learn that doing C++/CX in C++ is "possible, but not practical". All we can learn is that it is not practical for us, since we don't have a C++ compiler as one of our projects. You do have a C++ compiler, so this is entirely different for you. You have 90% of the work done already. In fact, since you had to alter the compiler to emit metadata for C++/CX, you have 99% of the work done already.
Neither me nor Tomas will be able to prove you wrong by our work alone. We don't have the source code for your compiler, so even if we do alter someone else's compiler that won't prove anything as regards VC++.
I hope this much is clear?
To your points:
We are free to think what we want - thanks. We can express these thoughts here - thanks. Our questions were answered in your opinion - NO, THEY WEREN'T, in bold font. C++/CX makes targeting WinRT easier - at the cost of not being C++. We have WRL - it is not fast and fluid.
Here, all your points answered again. This is perhaps the third time they appear in the thread, same points, same answers. Nobody disputes 1-2, 4-5 are beside the point and not under any real dispute either, 3 is in the spotlight yet your way of defending it is to simply assert again and again and again that we've been given answers. In truth, all we've been given is unsubstantiated feelings along the lines that doing all in C++ is "possible, but not practical". Well, this might be enough for a school newspaper, but this is a technical site, so unsubstantiated feelings don't work. Even if they are expressed by celebrities like Herb Sutter.
@Charles please stop. This what you're doing (especially relinking well read by everyone threads) is actually insulting, arogant and makes me think that you're having a great laugh with other lads from MS on our (C++ community) expense. Either behave like a man or stop posting. And if you cannot understand what is asked simply say it and get someone (Jim perhaps) who is capable of doing so.
Oh, and one more thing, could you please do not invite mr sutter anymore, as he proved on this thread that he is also incapable of reading long threads and understand what is asked in them. I think most people will agree with me that what mr sutter showed here was just pathetic and we just do not want to read/listen to what he has/wants to say anymore..
@KMNY_a_ha
>.
Most of the C++ devellopers here are develloping
classic desktop aplications. So WinRT doesn't give
us anything we need.
But what we need is, to support WinXP for at least
5 years. And because VC11 doesn't support XP we have
to do this with an outdated c++ compiler and outdated
librarys for years!
And we can't use the new C++11 features to improve
our products for years and years, too.
So don't tell me that VC11 isn't broken by design!
For most of us it is.
@hmm:
++.
you got it :)
@kmny
You've heard Microsoft. Their plans are what they are. Everybody has made their grievances known now. It's Microsofts problem after that. Don't continue a dysfunctional relationship by arguing.
Bjarne always says keep your code ISO and it's exactly because of situations like these.
I don't intend to adopt anything new from Microsoft at this point. WinRT and C++/CX and the XP saga is a good point to break the cycle of MS dependence, not deepen it.
If others find that its never been better to stay, I am happy for them, that is freedom of choice.
clang 3.0 is coming along strong though and Embarcadero has a product line that is shaping up nicely, so it's never been a better time to leave too. Clang is allowing everyone to deliver some great code analysis, conversion tools and capable IDE's and there are now some really new and interesting platforms to target other than Windows.
Sometimes the path just forks and you have to go your own way. If you are always ISO, you always can, just with a little harder work, but that's the price of freedom.
I don't intend to let Microsoft make that job any harder. It's better to just focus on that, than spend time arguing here.
People like PFYB have vanished, I doubt it's because they are off writing volumes of C++/CX. They will just not use this stuff. There's your answer.
In the end, we didn't achieve a single thing in this thread. Isn't that the lesson itself? So it wasn't wasted. What more do you need to hear?
I have no further expectations from Microsoft at this point, I am simply not looking to them to provide the answers now going forward. That's all there is to it.
Good luck to everyone who joined the thread.
@Glen the only thing I'm saying that professional programmers will us VS11. Either they like it or not. What, you're thinking that company you work for will switch to differen compiler because you don't like it? Think again.
And just to be clear: I do not agree with MS politics, behavior and plans for future (lack of C++11 expecially), but this doesn't change the fact that people will still use their product. For the same reason why they use Windows. And for the same reason why they use Office And talk to you few years from now you will see it for yourself.
And perhaps the best example for you will be yourself and your company. Are they leaving Visual Studio? If yes then you're right, if not I'm right.
@All: Enough. Everybody go home. Bar's closed.
AceHack, thanks for the fresh air.
C
@kmny
I think what you are pointing out is that, for many, switching vendors is not easy and making that call to switch, often isn't down to just one person.
If so, you are right. That's why many people feel stuck with Microsoft and angry at their policies, but are unable to get out and for the reasons you mention. There just wouldn't continue be all this heat if you were not right about that, so sorry if my comments seemed insensitive to that or appeared to gloss over that fact.
I was just saying that, regardless, a time comes in any relationship when you what you want from an other is not what that other wants to give; and when you reach that understanding, or fail to, then continuing to argue for what you want, when it just isn't going to happen, is destructive and dysfunctional. Avoiding that, means at some point before then, argument needs to be replaced by more acceptance and planning for leaving.
I was just suggesting that perhaps that time has come, but if you want to continue that fight and have the energy for it, good luck. I hope you win.
I think its dysfunctional though now, this thread has shown that Microsoft just doesn't share the same vision of C++ that a significant number of the developers that use their C++ tools share. They haven't for a long time. If they did, this heat wouldn't be happening and so constantly, because it isn't happening anywhere else.
When Microsoft first started talking of a "C++ renaissance", most people thought it meant exactly that - that Microsoft and C++ were coming back; that engaging with Microsoft might start to feel more like Boost or at least like how Microsoft used to relate to C++ prior to (and including) VC 6.
However, that view is simply wrong. Charles has stated this is wrong several times over now in numerous posts to this very thread which anyone can look up.
Even the names even say it all.
Microsoft is about Going Native, Proprietarily, Eventually. Hence the Going Native name, the proprietary C++/CX and C++11 Eventually at a date still to be determined and in a quantity still to be guessed at.
In contrast, Boostcon is now C++ now, cross platform, and standards orientated, yesterday, now and tomorrow.
They are different paths with different masters.
By all means continue to fight Microsoft, but the answer to your question is that the less tenacious of us are accepting this and instead using our energy to attain clang compatibility one file at a time, as we get chance, and staying clear of C++/CX, WinRT and WRL, at least until such time as Wg21 can come up with something that makes that deal more appealing if that day ever comes.
(BTW. How many people have to keep saying WRL isn't an answer before MS stop quoting it as a solution?)
Anyway, how about we propose features for C++15 instead, rather than waste any more of our time flogging this dead horse? I think I linked the cpp-next thread on that previously.
Maybe Herb can contribute his list. await seems like a good start. Any improvements to that?
I've followed this thread from top to bottom and in the end I think it was productive at least to get Herb and Jim insights.
I have to leave a comment on all the unsupporters of C++/CX that I truly don't get it. For one of the proposed solutions was talking about declspec and #pragma this would also be talking about proprietary compiler extensions. Even if I had a totally library solution we are talking about targeting a specific platform, so where’s the gain, not expecting to WRT on Linux anytime soon. So let’s wait for Tomas solution since is determined to put is code where is post are .
@Charles:
I would appreciate that you could get some of these topics cleared for me, in a future issue of GoingNative or other.
I was and still am a user of COM, (yes still better solution than DLLs with C Exports or C++ Exports) and in the end COM will always be Love. But, it also comes to mind some flaws it has, let me put just some questions out there:
• How are Interface versioning being handled?
• “GUIDs are gone” so I’m guessing you are hiding the GUIDs through the use progid’s. If this is true, has the size limit been increased? And colliding progid’s how are they handled?
• We will be probably mostly be using Registration Free Activation, but some sort of “GAC” will exist. This “GAC” is still managed by the registry.
For some time I though native development was dead (in terms of Windows anyway). And now I see a lot of it and I like it. Though I would like to understand why this shift in focus in Microsoft. Herb hinted this with the “Performance/Watt talk”, but as Rico Mariani as shown ( ) in the past in his “completion” with Raymond Chen out of the gate C# is not as slow as we native developers tend to think. But it also shows a tradeoff of speed versus resource consumption.
Oh...
> Even if I had a totally library solution we are talking
> about targeting a specific platform, so where’s the gain,
> not expecting to WRT on Linux anytime soon.
According to this logic, let's have a language extension for each new API created by Microsoft folks. Let's also have a new language extension for each new API created by Linux folks, by Apple folks, by whoever really.
Seriously, do we really have to go into why exactly is it that the standards are important?
@Glen, that's my point exactly - it is very hard to switch to another compiler if you have large code base and I'm sure MS is aware of that. What I see and what I understand from this and few other threads is that we (C++ community) are with master-slave relationship when comes to Microsoft. And as long as this will continue there are always going be attempts to brake the chains and be free!
C++ Rules And Rocks!
Forever!
@ me_myself_and_cpp: From: "The only reason Raymond had several post instead of just one was that he intended to show his toy program evolving over time. Nor was the sequence of blog posts an example of just how well .NET performs. Raymond was *not* trying to outperform C# code put by Rico. Rico did try to outperform C++ code put by Raymond, without too much success. That's it."
When people will get it?
@MeMyselfAndCpp, hello! Great comments.
Nice to have some pro COM people other than MS to debate with. Thank you.
First, what I think we agree on. You said:
"Even if I had a totally library solution we are talking about targeting a specific platform, so where’s the gain, not expecting to WRT on Linux anytime soon."
Exactly. That's why WRL is a waste of time and why MS has been invoking my ire. Why would anyone ever do WRL? They wouldn't, for exactly the reasons you stated. Please explain that to Charles because that is exactly why WRL should never be mentioned again and why Herb/Jim etc. rightly came up with an alternative that is far better than WRL as far as it goes.
This isn't to knock WRL either, but it needs to be known that its really just a political solution to appease people who are missing the truth of your statement. So Ms should stop promoting it as it really is inviting people down a road they shouldn't go for the reasons you've said.
Regarding your COM questions, great questions, I'm keen to hear the answers. Thanks for asking them.
Now on the bits I'd like to debate with you further: "COM is Love".
The drum I've been trying to beat with Microsoft is that COM is still too hard and too niche as it stands.
My constructive questions to you that MS don't engage on, are these:
1. If COM is the answer, I haven't heard anyone outside of Microsoft say that it is. Why? And is that going to change and why? If it doesn't, we shouldn't persue it, nothing niche has a future. MS has a history of short term, dead end, technologies.
2. For COM to succeed for C++, even more steps need to be taken to make C++ a better fit for it. What further can you suggest?
3. How do you feel about ref class ever becoming standard? If it doesn't, isn't that forking the C++ user base so badly that it harms C++? Isn't that reason enough to eschew ref classes.
4. See my long post earlier on "C++ good for writing libraries". I've really had no one get into the meat of that post. Please pull it apart and attack it, agree with it, or expose whatever you see as errant in that. I'd be keen to know your opinions. The comments I have made in that post are what I see as the future life of the C++ programmer on Windows. Do you agree with that. Do you want that? Don't you think we can do better? C# avoids all this. Wont more people defect to C# because of it that shouldn't need to.
I don't intend to use C++/CX and WinRT as things stand myself, but nor am I interested in fighting MS over it any further, but I am keen to see others promote its merits that are non MS. This camp has been remarkably quiet until now, so I'm pleased to see the inputs coming in. I still want what is on offer to improve.
Man this thread is long, we need another thread! lol
Thanks :)
* Almost do fist bump * "We can't do this man, or we'll destruct" <3 lol
Hi guys
I am finally here with my attempt to access WinRT without CX extensions and Platform::Strings. I must say sorry if you waited for me it took me too long but I recenly changed my job and moved to another country for work so you can imagine my head is full of different stuff right now.
For everybody interested in my approach which is the same approach discussed by many smart people here take a look at. There is a readme.txt with some technical details and all the source code. The code although quickly baked is working well with only one issue at application exit (read the readme). Feel free to dig it yourself..
Nice to hear from you, Tomas!
I took a look at your code and I have to say it does look pretty clear and straightforward. Good job. Me and my team did our own experiments with WinRT and the code we were working with was in many aspects similar to yours. There is a small core of helper classes and then there is a huge mass of uncreative boilerplate for each class you want to use in your application, perfectly suitable for generation by an automatic tool.
I completely agree with your conclusion:
>.
Exactly!
Charles, any comments from you? You seemed interested in what Tomas would come up with...
Followed this thread since the beginning. Now that Tomas posted some sample code, I wonder what Herb and Jim (and everyone else that has been posting here) have to say about it.
My thoughts? OK. But take them with with a grain of salt or two.
I like the wrapping of WRL approach, as we've discussed - WRL is the lowest level C++ abstraction for WinRT, but you could write a C-based solution, if you want to go even lower. WRL is a means by which you write C++ components for WinRT, too, as we've covered. Now, you want to share these object across foreign language and execution boundaries, sometimes. This requires extra work when you go the WRL route with wrapper interfaces and hand-written boilerplate. The automation story at play for the masses is C++/CX. ^s and refs are the friendly abstractions in this case. A different approach, yes. It's the one we're going out with in Dev11.
Tomas, thank you for putting code where your mouth is. Nice job. Keep on exploring and let us know what you discover along the way. What will you find as you go deeper into the rabbit hole?
C
@Charles, you didn't get it again. And frankly speaking I don't think that any of you (MS guys) neither want nor care to get it. The point Tomas made is that everything what's needed in order to cooperate with WinRT is easily achievable just by using pure, standard conforming (ISO C++ standard - that's for mr sutter specially, for it was only he who seems not to understand what standard is important for C++ community) C++. No foreign extensions are neither needed nor necessary. CX is a stupid looking guy on the job who doesn't really needs to be there (nor knows any better how to do this particular job) but his dad has connections so they employed him.
Short story - there is absolutely no need for CX - C++ already provides everything and more in order to allow efficient interacting with WinRT - proved by Tomas.
@Jim, Jim when you were saying in one of your posts that before CX came to live and people where thinking about its desing, someone in the room said, why not to use CLI syntax, and everyone started laughing at him, wasn't that mr sutter who said that?
And last note: Charles, you're wrong again, C doesn't let you go lower than C++, they are allowing you to reach exactly the same level, they just provide different levels of abstraction.
When will they learn?
Charles, I don't want to be unnecessarily confrontational, so I will just quote the entire exchange.
Here is what you said previously about Tomas endeavour:
> We also appreciate what Tomas is trying to do - in code,
> not words. We've already said it's [using standards-compliant
> C++ instead of C#] possible, just not practical.
> Perhaps Tomas will learn that.
Cool. Repeating this one more time: perhaps Tomas will learn that doing it all in standards-compliant C++ is possible, but not practical. Your words, Charles.
Tomas does the work, returns and says:
>.
Does it sound like Tomas learned that doing it all in standards-compliant C++ is possible, but not practical? No, it sounds like he learned that exactly the reverse is the case!
So, Charles, what is your reaction to this, I ask?
Your answer:
> Tomas, thank you for putting code where your mouth is.
> Nice job. Keep on exploring and let us know what you
> discover along the way. What will you find as you go
> deeper into the rabbit hole?
You disregard everything Tomas says, suggest that he keeps on exploring, and imply that he didn't go into the "rabbit hole" deep enough.
This is just dishonest.
+1 to KMNY, too.
@Warren: Sorry if I seemed belittling. It wasn't my intention. I think it's great that Tomas spent the time investigating and coming up with a solution using code. I also think it's a solution we've discussed (wrapping WRL) - but Tomas demonstrated it. Right on.
The moral of the whole story here is: you don't like the C++/CX solution to the problem and you don't think it's the best solution. We've acknowledged that this is fair. We respect that. We've also been very clear that C++/CX will be the solution that ships. You can manually program WinRT with C++ using WRL (people do this today...). What else do you really want to talk about? I'd wait for Jim to stop by to critique Tomas' solution and not use my reply as anything other than my reply... You asked. I answered. Jim and team are heads down writing code that will ship at some point. They're very busy right now. I don't think a reply is imminent.).
Let's agree to disagree and move on. The solution we'll ship is C++/CX and a small template library for exception-free WinRT programming in C++.
C
Charles
Argh... I will give this one more try.
>).
Please take one more look at Tomas's code.
The files in \trunk are specific to the application. The files in \trunk\Generated are not specific to the application and are the interop layer for WinRT. The amount of code in the interop layer is 5 times the amount of code in the application itself. Admittedly, this is a simple demo application, but the interop layer does not come even close to covering the entire WinRT either. To cover the entire WinRT, the interop layer would have to grow at least by a factor of 20. Take note though that all code in the interop layer except maybe Object.i.cpp and Object.i.h is boilerplate code. It can be generated by a tool.
The point of Tomas's experiment was to try and use WinRT from C++ using code that could have been generated by a tool, to check if this looks feasible or not. Did he succeed? Yes, and Tomas says so. Quoting Tomas:
>.
You don't have to take Tomas's word either, you can look into the code in \trunk\Generated and see that it: a) works, b) is tedious and too bulky to write manually every single time you need it, c) can be generated by a tool. The point has been made. Using WinRT from C++ via wrapper objects that could have been generated by a tool does look feasible. To the extent Tomas was willing to carry his practical experiment he did not encounter any significant setbacks.
So, Charles, how can you "stand by your commentary on the matter", when you commentary was that the solution with wrapper classes was imaginary and speculative and Tomas's code shows that, indeed one can write these wrapper classes, they can indeed work, it indeed looks that they can be generated by a tool, and so on?
How can you say that you "don't disregard anything in his solution" when you ignore the actual point of the experiment, which was to check if the approach with wrapper classes looks feasible, and pretend that it was about whether or not one can use WRL?
Please tell us.
I'm growing really tired of this. You make it seem like I just don't understand what's been said over and over and over again here. I DO. Let me say this to you one more time: We are not going to provide this model for you. You'll need to do it yourself - like Tomas has done.
I did. He wrapped WRL as part of his solution. He's shown that you can write WinRT apps in ISO C++ using class abstraction. As I said (in each of my posts on the matter: GREAT!!). His experiment is not a complete solution nor did he advertise it as such - it's a proof of concept and he's shown, again, that you don't have to use C++/CX to write WinRT apps in C++.
To Tomas, keep exploring and keep experimenting. Make a shared ISO C++ WinRT object and consume it from JavaScript. Consume a C# object from your ISO C++ WinRT application. See how far you can take this. Of course, there will be a lot of boilerplate code to write, as you know (and as we have said).
Yes. My commentary was essentialy "Great. Keep exploring. You may run into a wall of complexity that prompted the C++/CX decision...". We've made it clear from day 1 that: You don't have to use C++/CX. You'll need to write a lot of code when go the WRL route, including the "wrapper class" aproach as Tomas has shown. That's the way it is. No need to keep saying this over and over again.
When this was first mentioned on this thread, it was acknowledged that this approach is feasible, but not practical and not in line with a primary goal of developer productivity (lots of COM boilerplate code is generated by the front-end compiler running over C++/CX code, winmd files are autogenerated, etc,etc... - the CX extension and associated toolchain are built specifically for WinRT, a single ABI that runs on a single platform.........).
I did. I can't make it any clearer: C++/CX and WRL are what will ship in VC11 for writing C++ objects for WinRT. If you don't want to use C++/CX (or can't depending on your situation - can't use exceptions, etc...), then don't. Use WRL and handroll your own WinRT-specific boilerplate code. A solution like Tomas' plus some infrastructure support (code gen) will not be provided (not in this release, if ever - I don't know. I don't work on the VC team.). There is no need to keep going over this point. Please stop.
Finally, thanks to all of you for the honest and open dialogue. My apologies if I seem frustrated, but I am. This thread has come to an end. Let's all move along.
C
Charles, you keep dodging the point.
> When this was first mentioned on this thread, it was
> acknowledged that this approach is feasible, but not
> practical and not in line with a primary goal of developer
> productivity ...
Tomas did the experiment to try and check whether or not this is the case. His conclusion is that - to the extent he was willing to carry the experiment - the approach was both feasible and practical. What is it in Tomas's code that is not practical? What is it that is not "in line with a primary goal of developer productivity"? Are you saying the approach will break if Tomas tries to "make a shared ISO C++ WinRT object and consume it from JavaScript"? If so, say it.
You keep saying things you can't back up. This is exactly the reason everyone here, including you, is frustrated.
If Herb and Jim and you either would have shown WHY wrapper classes wouldn't have worked (just so that we don't waste another pair of posts clarifying this trivial point, Charles, this means: would have been both feasible and practical to implement) OR agreed that wrapper classes would have worked, the thread would have likely ended at that point. As it stands, you can neither substantiate your arguments, nor admit that - oh, the horrors - C++/CX was a mistake. So the thread continues.
@Warren: You won't get Charles to say anything besides what he is saying now. No matter the arguments, Charles is here to say good things about Microsoft technologies. Yes, it absolutely looks that C++/CX was a grave mistake. Wrapper classes are a much better solution. Thanks a lot to you, Tomas, KMNY_a_ha, Glen, Dennis, PFYB and other folks on the thread for making that point very eloquently. But Charles will never admit it. He can't, his entire job is to say that C++/CX is cool.
God. what a thread! I just found it today and couldn't stop until I read it all.
I have to say I side with those who think that C++/CX was a mistake. WinRT should have been kept outside the language. Since the message from Microsoft seems to be that C++/CX and WRL are what will ship in VC11 and that even if wrapper classes make more sense, Microsoft don't care, here is a message from myself - in this case, I don't care to use WinRT from C++. You make us jump through hoops to use your new API from our language, you see how far that API will go in the C++ world. I will just use Javascript.
To all the participants - thank you for a very informative thread.
I have been watching the thread since the beginning. Charles, your opponents are exactly right, you keep answering questions noone is asking. I don't mean to say you are doing this intentionally, but in effect it looks like you are dodging the original questions.
The area Tomas set to explore was about the approach with wrapper classes, not WRL, not anything else. The conclusion was that wrapper classes work - to the extent explored by Tomas. Your original response was "I like the wrapping of WRL approach, ...", completely off point. You now say that your response is "Great. Keep exploring. You may run into a wall of complexity that prompted the C++/CX decision...". But Tomas says he didn't run into any walls. Are you saying he will run into a wall? If this is what you want to say, then please stop being all vague and hinty and say it clearly. Please explain to us what this wall is. Don't just say this wall exists somewhere, please show it. Tomas and others like PFYB and Dennis put their money where their mouths were, do the same please. You are trying to feed us stories of the proverbial wall for more than 200 posts now. Your opponents have been looking for this wall for months, they are dying to know what it is. Several people have written prototype code, Tomas even shared his code for everyone to look. They went to great lengths to try and find the wall you keep talking about. Please show us some courtesy and reply in kind, tell us where the wall is.
Finally, I'd urge all commenters in this thread to look at CXXI here:. This is very similar to the approach with wrapper classes (wrapping C++ classes to expose them to non-C++ clients) suggested in the thread. The Mono developers considered this approach to be both feasible and practical for their thing.
All, I am only trying to say that C++/CX is the approach we have taken, regardless of what Tomas has shown (and I like the fact that he's done something to show a different approach, this is the power of C++ and C++ developers).
Since I am not on the C++ team - and quite frankly, I should not have spoken on their behalf - I will not speak to why approach A was chosen over B (Herb and Jim have already explained the reasoning here - and that's enough). C++/CX and WRL are what will ship and they're excellent tools for the job at hand. People are building apps with them today and the consensus is overwhelmingly positive.
This conversation needs to end now. So, it will.
C
@Charles
Thank you for re-enabling this thread. As noted, no reply expected. :)
I am glad you have accepted that extreme censorship is not the answer though.
@KMNY, Charles has re-opened this thread as I asked so that we can just burn ourselves out or whatever in a naturual way. Can you respect that please. I support the basic essence of some of the things you were saying, but you say things far too strongly that it almost rises to hate speech IMHO. For example, you are not alone in thinking some of Herb's replies were not his best, but words like snake, cretin and traitor are over the top and don't credit him with the respect he deserves. Frankly, I just don't enjoy reading that kind of language if you are right or not. It is beyond keeping it real. You asked.
I asked for this thread to be re-opened in the name of free speech. For myself, I don't have more to say *to Microsoft* about C++/CX (other than by accident by finding it hard to keep ones mouth shut some times at some of the statements); since I *accepted* some time ago that Microsoft is sticking to their current plans. I encourage others to accept that fact too. I am of course interested in what the rest of you have to say and I wanted to thank @Tuume's for his link to CXXI. I don't find it off topic at all myself.
Even if it were, I think freedom of expression dictates we allow for a level of somewhat off topic opinions. Trying to cut down every off topic statement is too controlling for my tastes. Things have a natural way of going where they are meant to and for revealing what what is bubbling under that needs to be said or might get missed.
Otherwise, this would be a one way MS sales pitch and if it were that, I wouldn't read any of it. So its a case of taking the rough with the smooth. I hope Microsoft concedes some of these ideas.
Thank you everybody for your contributions and for the recent posts from Tuume, Anne, TQR and Warren. Thanks to Charles again for seeing the sense.
@Charles Just to correct you because you're wrong (again) in what you're saying:
"C++/CX and WRL are what will ship and they're excellent tools for the job at hand."
Charles by saying this you just contradicted yourself. At one point in time you've said that WRL approach is feasible but impractical, now you're saying that they're excellent tools. Well I suppose that's how the regime's machine works.
And when you're saying:
" People are building apps with them today and the consensus is overwhelmingly positive. "
Would you mind and point us to the place where people are actually expressing their positive opinions on a subject? Somehow they're not here.
And could you also ask mr herb to tell us what university he finished, what titles he has, what is his qualification/education background. Looked on the web, couldn't find anywere. But surely he must have (not some) education in the field, he poses as an expert mustn't he?
So just to summarize:
1. WRL is not an excellent tool even thoug once you say it is and few days after you say it isn't and few days after that you say that it is. Get it!
2. Where are those people who think CX is a great tool for a job?
3. What is a FORMAL education of mr sutter.
Cheers
Glen, thank you for managing to convince Charles to unlock the thread, locking it was crazy.
Just in case something happens with KMNY_a_ha (LOL), here are his first two points slightly paraphrased:
KMNY 1. Arguing like Charles does that WRL is both (feasible but) impractical and excellent is strange.
KMNY 2. Where are those people who think CX is a great tool for a job? We aren't seeing any.
I support both points.
On CXXI, thanks a lot to whoever mentioned it first. I have been looking into it ever since, I feel I learned quite a lot. Since the sources for CXXI are openly available (pure love!), one obvious idea would be to adopt the part that parses C++ and emits C#-XML to emit C++ wrapper classes for WinRT. I personally likely won't spend any time on this, since I think the whole of WinRT is a failure, but the possibility is definitely there.
@Anna, I agree CXXI looks great and I thought the same as you there.
Clever guys, on more than one level too, as what I'm about to link will show!
@kmny, from the previous blog, you said:
".
If you look back, you will see I have supported the gyst of a good number of your statements, and especially the one above. I could have written it myself.
However, language is of no use if you can't control it to be understood too. You still need perspective, or you aren't talking clay, you are talking doggie chocolate!
I've argued on occasion with Herb and Charles in pretty blunt ways myself, but words like traitor, cretin, snake *on a near continuous basis* is too much. Now you are pressing Herb for formal qualifications? Come on! Like they matter when his track record is as obvious as it is? Like he needs to prove them to you or me anyway? More to the point, why would his qualifications appease you anyway given your feelings? I doubt they would, so why pester for it?
Lest we forget, there are qualifications you don't always get at school anyway. This is for all of us at times, but today, this is for you KMNY:
Do follow the very descriptive links too!
That Miguel, he does come up with the gems!
KMNY, I hope you reign yourself in as you make it hard to support you otherwise. I'd hate to see you get banned or the day I'd want you to see you banned. Characters are core to a good debate but we do need some lighter moments too.
I'll offer no more comment on the subject. You have my best reading on the matter and thanks for it go out to Miguel!!
Happy Holidays!
@Ann thank you for paraphrasing my post. Most appreciated. Thank you.
@Glen Thanks for the link.
Now to answer your queston:
I'd like to see mr sutter's qualifications because his "track record" as you called it, so far is just that he can talk a lot - but does very little actual real work. And from this link (given to me by PFYB) you'll see that he has actually very little idea what he is talking about. Concentrate on exchange between him and David Abrahams.
And why do I want to know what mr sutter's formal education is? Because I'm fed-up with careerists like him, who without proper education are put on absolutely unsuitable positions and are absulutely unprepared to do any real work - because they don't have right/correct education. That's why.
And as my feeling goes, mr sutter doesn't have formal education which would indicate that he is a suitable guy to be a C++ architect. But I'm happy to say that I was wrong only someone has to prove it to me. But he wan't. Becase he can't. Because he doesn't have anything to show except smart * responses and patronizing talk. That's what his track record is so far.
As for my language, first, have to correct you here, I've never called anyone cretin, you see Glen, you just repeating what Charles told you, without actually paying attention what I'm saying and why. Anyway, that's not important. The important thing is that I believe that in extreme situations extreme measures are needed. This is extreme situation. That's why I'm using the kind of language I'm using. I'm not going to say to someone when he lies to me that his truth is different to mine. No! He is a lair. In case of mr sutter when at his position, he did nothing to help C++, on the contrary it seems like everything is done to hurt C++, what in that situation mr sutter should be called? The guy who tried to kill C++? I'm ok with that too.
And I'm sorry to say Glen but if you cannot overcome the negative feeling and see truth in my words just because I'm using strong language, then just simply don't read my posts. It's easy I think. You see my nick and skip it.
But I tell you one thing with reference to that - I and II war could've been avoided if diplomacy hadn't been used.
And if I'll get blocked from this site it will just prove my point more than anything.
I'd like you also to clarify on that, becase well, it is not clear what you mean when you say:
"However, language is of no use if you can't control it to be understood too. You still need perspective, or you aren't talking clay, you are talking doggie chocolate!"
Regards
Just read this entire thread and never saw the obvious answer.
Think about it. C++ IS a system level language. You could write everything MS sells in it. I've written apps and drivers in it for every version of Windows since 3.1.
I also wrote firmware in it for one of the first USB devices ever plugged into a Win95 machine (Oem 2.1 perhaps?)
Anything the processor can do, C++ can do. If you can't write an easy to use binding layer to something running on a cpu in C++ without altering the language, there are only two possibilities.
1. Stupidity
2. You want to change the language
WHY did MS fight so hard to make DirectX king over Opengl? Seems like a duh question to me.
Why would making a standardized language that can run on virtually anything Ms playground specific be any different?
Remove this comment
Remove this threadclose | http://channel9.msdn.com/Shows/C9-GoingNative/GoingNative-3-The-CCX-Episode-with-Marian-Luparu | CC-MAIN-2014-15 | refinedweb | 63,385 | 71.75 |
Originally posted by Mark Spritzler: Like Max said the second option, which you have adopted is the best. I am a little hesitant against your Static Hashmap in the DataAccess Remote class. But can't give any reasons why. Oh, wait, how again will you tell which client locked which record in that senario? Good Luck Mark
Max: She'll know which client locked which record, based on the fact the each client will have their own copy of DataAccess(which, in turn, will have it's own copy of Data).
Mark: That's what I would be afraid of. Each client's own copy of DataAccess should have a reference to the one instance of Data on the server. So therefore, there are two problems. One Each client has its own Hashmap tracking their own locking, but has no idea as to any locks by any other client.
Kim: ... I have a static HashMap object to keep track of the Data object and the record number required to lock. Now each remote client gets a copy of their own RemoteData hence indirectly a copy of Data object. When the client requests to lock a record, I add the record number as key and client's data object as value to the static HashMap variable. This makes sure that when unlocking, only the client who locked the record should unlock it.
Originally posted by Philippe Maquet: Hi Max, My reply crossed yours and I can see that didn't misinterpret anything. Kim, what I wanted to point out about multiple tables is this : as you use a static HashMap (shared by all tables) with recNos as keys, your system would never support multiple tables. It's perfect because we have only one physical table to deal with, as far as you don't try in other parts of your db design to make it multi-tables (as I did) in which case your global db design would be inconsistent. Now if there is an issue here, you can solve it with a very little change : wrap the recNo in a Record class of your own (instead of a Long) and use Record instances as keys in your HashMap. It's just a few more lines of code :
private class Record {
String dbFileName;
long recNo;
Record(String dbFileName, long recNo) {
this.dbFileName = dbFileName;
this.recNo = recNo;
}
public boolean equals(Object o) {
boolean retVal = (o instanceof Record);
if (retVal) {
retVal = dbFileName.equals(((Record)o).dbFileName)
and (recNo == ((Record)o).recNo);
}
return retVal;
}
public int hashCode() {
return ( (int) (recNo ^ (recNo >>> 32))) ^ dbFileName.hashCode();
}
}
Max It's the same principle as the LockManager, which is also a Global structure that keeps all instances of Data. However, it's a bit more Object Oriented(IMO), because the Data class is taking responsibility for it's own locking. Also, it fits the SCJD's requirement that the lock method in Data be implemented. Finally, it avoids the extra code and complexity of having to write a LockManager in the first place. True, it doesn't scale as easily as the LockManager, but I don't think that what Kim's concerned with here, and the SCJD doesn't grade on scalabilty: only clearity and simplicity. I fully agree with you except for "Also, it fits the SCJD's requirement that the lock method in Data be implemented". To me, delegation == implementation. Phil.
Just to clear up a doubt though, when you talk about "multiple tables" - I am assuming you are talking about multiple databases. Am I right?
Phil, the reason I am implementing my lock/unlock functionality via a static HashMap is because its fairly simple to understand. There is no need for a separate LockManager, there is no need to make or use Data as a singleton to ensure that all remote clients use the same Database. Is this not reason enough to take this approach?
Phil, when you say "I had to play with WeakReferences stored in a regular HashMap, with a ReferenceQueue", you mean in your implementation you wrapped the Data and Record objects as WeakReference objects and used WeakReference queues?
Am I correct in understanding from your statement Max that the lock/unlock methods should be implemented in the Data class and in my RemoteData class simply use these methods where as in my LocalData class NOT use them??
In my DBAccess interface lock methods are included. Not the case in your assignment ?
Originally posted by Ian Roberts: Kim and Max, Its nice to see that other people are applying and agreeing with the same design that I am implementing! I followed the threads on the large debate concerning the use of a LockManager but cannot bring myself to go against my OOAD training and what appears to be Sun's instructions. The design Kim has suggested is very, very close to the one I propose to put forward, probably with the exception of the new class names! Andrew, who has been a great help, has made a good point concerning the recordCount and it does appear that the add() method will require updating to avoid some nasty surprises.? Cheers, Ian R.
However, I don't really think you need to implement it: since your GUI clients can't add or remove records, and since the instructions say the Data class is 'complete' except for locking and the depreciated methods, I think you can leave it be. Honestly, I think it's just an oversight on Sun's part, and I doubt they would blame you for it.
Phil: quote: --------------------------------------------------------------------------------? -------------------------------------------------------------------------------- I thought since the add method is synchronized, there would be no problem there. Why do we need additional handling here?
I think I should implement the lock/unlock methods in Data class and use them from RemoteData class and not use them for LocalAccess, since anyways I am using object composition and Adapter pattern rather than extending the data class to use its methods for Local and Remote access. Would appreciate it if you could point out issues I am overlooking here, in this approach
Now, my concern is about using DataAdapter. It is a nice idea to use it but isn't DataAdapter will give another decoupling layer that no need to use it???
Originally posted by Andrew Monkhouse: Hi Max, However since FBNS requires the add() and delete() methods to be available to networked clients (regardless of whether our client uses them), then as a bare minimum you would have to have a major comment in your javadoc saying that this method is not going to work in multi user mode. And my question would then be - why are we making a method available that we know won't work? IMHO it is better to spend the time fixing the issues. Regards, Andrew
I have fully implemented add()....
I have not implemented the file level code in the Data class itself.
but am unsure of my approach here since sun instructions are rather vague
Although I am a bit doubtfull about not using lock unlock functionality in the local mode !
In locks I define a lock with a record number and a timestamp and is stored in the hashtable with a key
Also I am planning to build a lock expiration mechanism but am unsure as to how ?
Using WeakHashMap doesn't sound like a simple idea.
I am trying to build a lazy expiration where a lock expires if not used for a given time interval
It is not required by my assignment but I dont know how can you implement a locking solution without a lock expiration strategy, and I am just going for the best that I can think of.
Another thing I am not very comfortable about is using Combo Boxes for search fields. In commercial solutions search fields seldom use combo boxes
If the JTable is already displaying all records when the applications starts, why do we need to use combo boxes for search. | http://www.coderanch.com/t/184014/java-developer-SCJD/certification/Mark-Spritzler-Max-Andrew-Monkhouse | CC-MAIN-2015-48 | refinedweb | 1,330 | 66.98 |
Catalyst::Plugin::AutoCRUD - Instant AJAX web front-end for DBIx::Class
version 2.122460
You have a database, and wish to have a basic web interface supporting Create, Retrieve, Update, Delete and Search, with little effort. This module is able to create such interfaces on the fly. They are a bit whizzy and all Web 2.0-ish.
If you already have a Catalyst app with DBIx::Class models configured:
use Catalyst qw(AutoCRUD); # <-- add the plugin name here in MyApp.pm
Now load your app in a web browser, but add
/autocrud to the URL path.
Alternatively, to connect to an external database if you have the DBIX::Class schema available, use the
ConfigLoader plugin with the following config:
<Model::AutoCRUD::DBIC> schema_class My::Database::Schema connect_info dbi:Pg:dbname=mydbname;host=mydbhost.example.com; connect_info username connect_info password <connect_info> AutoCommit 1 </connect_info> </Model::AutoCRUD::DBIC>
If you don't have the DBIx::Class schema available, just omit the
schema_class option (and have DBIx::Class::Schema::Loader installed).
This module contains an application which will automatically construct a web interface for a database on the fly. The web interface supports Create, Retrieve, Update, Delete and Search operations.
The interface is not written to static files on your system, and uses AJAX to act upon the database without reloading your web page (much like other Web 2.0 applications, for example Google Mail).
Almost all the information required by the plugin is retrieved from the DBIx::Class ORM frontend to your database, which it is expected that you have already set up (although see "USAGE", below). This means that any change in database schema ought to be reflected immediately in the web interface after a page refresh.
This mode is for when you have written your Catalyst application, but the Views are catering for the users and as an admin you'd like a more direct, secondary web interface to the database.
package AutoCRUDUser; use Catalyst qw(AutoCRUD); __PACKAGE__->setup; 1;
Adding
Catalyst::Plugin::AutoCRUD as a plugin to your Catalyst application, as above, causes it to scan your existing Models. If any of them are built using Catalyst::Model::DBIC::Schema, they are automatically loaded.
This mode of operation works even if you have more than one database. You will be offered a Home screen to select the database, and then another menu to select the table within that.
Remember that the pages available from this plugin will be located under the
/autocrud path of your application. Use the
basepath option if you want to override this.
DBIx::Class::Schemabased class
In this mode,
Catalyst::Plugin::AutoCRUD is running standalone, in a sense as the Catalyst application itself. Your main application file looks almost the same as in Scenario 1, except you'll need the
ConfigLoader plugin:
package AutoCRUDUser; use Catalyst qw(ConfigLoader AutoCRUD); __PACKAGE__->setup; 1;
For the configuration, you need to tell AutoCRUD which package contains the
DBIx::Class schema, and also provide database connection parameters.
<Model::AutoCRUD::DBIC> schema_class My::Database::Schema connect_info dbi:Pg:dbname=mydbname;host=mydbhost.example.com; connect_info username connect_info password <connect_info> AutoCommit 1 </connect_info> </Model::AutoCRUD::DBIC>
The
Model::AutoCRUD::DBIC section must look (and be named) exactly like that above, except you should of course change the
schema_class value and the values within
connect_info.
Remember that the pages available from this plugin will be located under the
/autocrud path if your application. Use the
basepath option if you want to override this.
DBIx::Classsetup
You will of course need the
DBIx::Class schema to be created and installed on your system. The recommended way to do this quickly is to use the excellent DBIx::Class::Schema::Loader module which connects to your database and writes
DBIx::Class Perl modules for it.
Pick a suitable namespace for your schema, which is not related to this application. For example
DBIC::Database::Foo::Schema for the
Foo database (in the configuration example above we used
My::Database::Schema). Then use the following command-line incantation:
perl -MDBIx::Class::Schema::Loader=make_schema_at,dump_to_dir:. -e \ 'make_schema_at("DBIC::Database::Foo::Schema", { debug => 1, naming => 'current' }, \ ["dbi:Pg:dbname=foodb;host=mydbhost.example.com","user","pass" ])'
This will create a directory (such as
DBIC) which you need to move into your Perl Include path (one of the paths shown at the end of
perl -V).
DBIx::Classschema
If you're in such a hurry that you can't create the
DBIx::Class schema, as shown in the previous section, then
Catalyst::Plugin::AutoCRUD is able to do this on the fly, but it will slow the application's startup just a little.
The application file and configuration are very similar to those in Scenario two, above, except that you omit the
schema_class configuration option because you want AutoCRUD to generate that on the fly (rather than reading an existing one from disk).
package AutoCRUDUser; use Catalyst qw(ConfigLoader AutoCRUD); __PACKAGE__->setup; 1; <Model::AutoCRUD::DBIC> connect_info dbi:Pg:dbname=mydbname;host=mydbhost.example.com; connect_info username connect_info password <connect_info> AutoCommit 1 </connect_info> </Model::AutoCRUD::DBIC>
When AutoCRUD loads it will connect to the database and use the DBIx::Class::Schema::Loader module to reverse engineer its schema. To work properly you'll need the very latest version of that module (at least 0.05, or the most recent development release from CPAN).
The other drawback to this scenario (other than the slower operation) is that you have no ability to customize how foreign, related records are shown. A related record will simply be represented as something approximating the name of the foreign table, the names of the primary keys, and associated values (e.g.
id(5)).
It is essential that you load the Catalyst::Plugin::Unicode::Encoding plugin to ensure proper decoding/encoding of incoming request parameters and the outgoing body response respectively. This is done in your
MyApp.pm:
use Catalyst qw/ -Debug ConfigLoader Unicode::Encoding AutoCRUD /;
Additionally, when connecting to the database, add a flag to the connection parameters, specific to your database engine, that enables Unicode. See the following link for more details:
When the web interface wants to display a column which references another table, you can make things look much better by adding a custom render method to your
DBIx::Class Result Classes (i.e. the class files for each table).
First, the plugin will look for a method called
display_name and use that. Here is an example which could be added to your Result Class files below the line which reads
DO NOT MODIFY THIS OR ANYTHING ABOVE, and in this case returns the data from the
title column:
sub display_name { my $self = shift; return $self->title || ''; }
Failing the existence of a
display_name method, the plugin attempts to stringify the row object. Using stringification is not recommended, although some people like it. Here is an example of a stringification handler:
use overload '""' => sub { my $self = shift; return $self->title || ''; }, fallback => 1;
If all else fails the plugin prints the best hint it can to describe the foreign row. This is something approximating the name of the foreign table, the names of the primary keys, and associated values. It's better than stringifying the object the way Perl does, anyway.
When the plugin creates a web form for adding or editing, it has to choose whether to show a Textfield or Textarea for text-type fields. If you have set a
size option in add_columns() within the Schema, and this is less than or equal to 40, a Textfield is used. Otherwise, if the
size option is larger than 40 or not set, then an auto-expanding, scrollable Textarea is used.
The plugin will handle most tricky names, but you should remember to pass some required extra quoting hints to DBIx::Class when it makes a connection to your database:
# most databases: { quote_char => q{`}, name_sep => q{.} } # SQL Server: { quote_char => [qw/[ ]/], name_sep => q{.} }
For more information see the DBIx::Class::Storage::DBI manual page or ask on the DBIx::Class mail list.
Buried within one of the modules in this application are some filters which are applied to data of certain types as it enters or leaves the database. If you find a particular data type is not being rendered correctly, please drop the author a line at the email address below, explaining what you'd like to see instead.
If you want to use this application as a plugin with another Catalyst system, it should work fine, but you probably want to serve pages under a different path on your web site. To that end, the plugin by default places its pages under a path part of
.../autocrud/. You can change this by adding the following option to your configuration file:
<Plugin::AutoCRUD> basepath admin </Plugin::AutoCRUD>
In the above example, the path
.../admin/ will contain the AutoCRUD application, and all generated links in AutoCRUD will also make use of that path. Remember this is added to the
base of your Cataylst application which, depending on your web server configuration, might also have a leading path.
To have the links based at the root of your application (which was the default behaviour of
CatalystX::ListFramework::Builder, set this variable to an empty string in your configuration:
<Plugin::AutoCRUD> basepath "" </Plugin::AutoCRUD>
The plugin will use copies of the ExtJS libraries hosted in the CacheFly content delivery network out there on the Internet. Under some circumstances you'll want to use your own hosted copy, for instance if you are serving HTTPS (because browsers will warn about mixed HTTP and HTTPS content).
In which case, you'll need to download the ExtJS Javascript Library (version 2.2+ recommended), from this web page:.
Install it to your web server in a location that it is able to serve as static content. Make a note of the path used in a URL to retrieve this content, as it will be needed in the application configuration file, like so:
<Plugin::AutoCRUD> extjs2 /static/javascript/extjs-2 </Plugin::AutoCRUD>
Use the
extjs2 option as shown above to specify the URL path to the libraries. This will be used in the templates in some way like this:
<script type="text/javascript" src="[% c.config.extjs2 %]/ext-all.js" />
The default HTML
charset used by this module is
utf-8. If you wish to override this, then set the
html_charset parameter, as below:
<Plugin::AutoCRUD> html_charset iso-8859-1 </Plugin::AutoCRUD>
All table views will default to the full-featured ExtJS based frontend. If you would prefer to see a simple read-only non-JavaScript interface, then append
/browse to your URL.
This simpler frontend uses HTTP GET only, supports paging and sorting, and will obey any column filtering and renaming as set in your "SITES CONFIGURATION" file.
The whole site is built from Perl Template Toolkit templates, and it is possible to override these shipped templates with your own files. This goes for both general files (CSS, top-level TT wrapper) as well as the site files mentioned in the next section.
To add these override paths, include the following directive in your configuration file:
<Plugin::AutoCRUD> tt_path /path/to/my/local/templates </Plugin::AutoCRUD>
This
tt_path directive can be included multiple times to set a list of override paths, which will be processed in the order given.
Within the specified directory you should mirror the file structure where the overridden templates have come from, including the frontend name. For example:
extjs2 extjs2/wrapper extjs2/wrapper/footer.tt skinny skinny/wrapper skinny/wrapper/footer.tt
If you want to override any of the CSS used in the app, copy the
head.tt template from whichever
site you are using, edit, and install in a local
tt_path set with this directive.
It's possible to have multiple views of the source data, tailored in various ways. For example you might choose to hide some tables, or columns within tables, rename headings of columns, or disable updates or deletes.
This is all achieved through the
sites configuration. Altering the default site simply allows for control of column naming, hiding, etc. Creating a new site allows you to present alternate configurations of the same source data.
When using this plugin out of the box you're already running within the default site, which unsurprisingly is called
default. To override settings in this, create the following configuration stub, and fill it in with any of the options listed below:
<Plugin::AutoCRUD> <sites> <default> # override settings here </default> </sites> </Plugin::AutoCRUD>
In general, when you apply a setting to something at a higher level (say, a database), it percolates down to the child sections (i.e. the tables). For example, setting
delete_allowed no on a database will prevent records from any table within that from being deleted.
Some of the options are global for a site, others apply to the database or table within it. To specify an option for one or the other, use the database and table names as they appear in the URL path:
<Plugin::AutoCRUD> <sites> <default> # global settings for the site, here <mydb> # override settings here <sometable> # and/or override settings here </sometable </mydb> </default> </sites> </Plugin::AutoCRUD>
This can be applied to either a database or a table; if applied to a database it percolates to all the tables, unless the table has a different setting.
The default is to allow updates to be made to existing records. Set this to a value of
no to prevent this operation from being permitted. Widgets will also be removed from the user interface so as not to confuse users.
<Plugin::AutoCRUD> <sites> <default> update_allowed no </default> </sites> </Plugin::AutoCRUD>
This can be applied to either a database or a table; if applied to a database it percolates to all the tables, unless the table has a different setting.
The default is to allow new records to be created. Set this to a value of
no to prevent this operation from being allowed. Widgets will also be removed from the user interface so as not to confuse users.
<Plugin::AutoCRUD> <sites> <default> create_allowed no </default> </sites> </Plugin::AutoCRUD>
This can be applied to either a database or a table; if applied to a database it percolates to all the tables, unless the table has a different setting.
The default is to allow deletions of records in the tables. Set this to a value of
no to prevent deletions from being allowed. Widgets will also be removed from the user interface so as not to confuse users.
<Plugin::AutoCRUD> <sites> <default> delete_allowed no </default> </sites> </Plugin::AutoCRUD>
This option achieves two purposes. First, you can re-order the set of columns as they are displayed to the user. Second, by omitting columns from this list you can hide them from the main table views.
Provide a list of the column names (as the data source knows them) to this setting. This option must appear at the table level of your site config hierarchy. In
Config::General format, this would look something like:
<Plugin::AutoCRUD> <sites> <default> <mydb> <thetable> columns id columns title columns length </thetable> </mydb> </default> </sites> </Plugin::AutoCRUD>
Any columns existing in the table, but not mentioned there, will not be displayed in the main table. They'll still appear in the record edit form, as some fields are required by the database schema so cannot be hidden. Columns will be displayed in the same order that you list them in the configuration.
You can alter the title given to any column in the user interface, by providing a hash mapping of column names (as the data source knows them) to titles you wish displayed to the user. This option must appear at the table level of your site config hierarchy. In
Config::General format, this would look something like:
<Plugin::AutoCRUD> <sites> <default> <mydb> <thetable> <headings> id Key title Name length Time </headings> </thetable> </mydb> </default> </sites> </Plugin::AutoCRUD>
Any columns not included in the hash mapping will use the default title (i.e. what the plugin works out for itself). To hide a column from view, use the
columns option, described above.
If you don't want a database to be offered to the user, or likewise a particular table, then set this option to
yes. By default, all databases and tables are shown in the user interface.
<Plugin::AutoCRUD> <sites> <default> <mydb> <secrettable> hidden yes </secrettable> </mydb> </default> </sites> </Plugin::AutoCRUD>
This can be applied to either a database or table; if applied to a database it overrides all child tables, even if a table has a different setting.
With this option you can swap out the set of templates used to generate the web front-end, and completely change its look and feel.
Currently you have two choices: either
extjs2 which is the default and provides the standard full-featured ExtJS2 frontend, or
skinny which is a read-only non-JavaScript alternative supporting listing, paging and sorting only.
Set the frontend in your site config at its top level. Note that you cannot set the frontend on a per-database or per-table basis, only per-site:
<Plugin::AutoCRUD> <sites> <default> frontend skinny </default> </sites> </Plugin::AutoCRUD>
Be aware that setting the frontend to
skinny does not restrict create or update access to your database via the AJAX API. For that, you still should set the
*_allowed options listed above, as required.
You can create a new site by adding it to the
sites section of your configuration:
<Plugin::AutoCRUD> <sites> <mysite> # local settings here </mysite> </sites> </Plugin::AutoCRUD>
You'll notice that a non-default site is active because the path in your URLs changes to a more RPC-like verbose form, mentioning the site, database and table:
from this: .../autocrud/mydb/thetable # (i.e. site == default) to this: .../autocrud/site/mysite/schema/mydb/source/thetable
So let's say you've created a dumbed down site for your users which is read-only (i.e.
update_allowed no and
delete_allowed no), and called the site
simplesite in your configuration. You need to give the following URL to users:
.../autocrud/site/simplesite
You could also then place an access control on this path part in your web server (e.g. Apache) which is different from the default site itself.
If you want to run an instant demo of this module, with minimal configuration, then a simple application for that is shipped with this distribution. For this to work, you must have:
Go to the
examples/sql/ directory of this distribution and run the
bootstrap_sqlite.pl perl script. This will create an SQLite file.
Now change to the
examples/demo/ directory and start the demo application like so:
demo> perl ./server.pl
Visit in your browser as instructed at the end of the output from this command.
To use your own database rather than the SQLite demo, edit
examples/demo/demo.conf so that it contains the correct
dsn, username, and password for your database. Upon restarting the application you should see your own data source instead.
An alternate application exists which demonstrates use of the
display_name method on a DBIx::Class Row, to give row entries "friendly names". Follow all the instructions above but instead run the following server script:
demo> perl ./server_with_display_name.pl
Finally, the kitchen sink of other features supported by this module are demonstrated in a separate application. This contains many tables, each of which highlights one or more aspects of a relational database backend being rendered in AutoCRUD.
Follow all the instructions above, but instead run the following server script:
demo> perl ./server_other_features.pl
See Catalyst::Plugin::AutoCRUD::Manual::Troubleshooting.
See Catalyst::Plugin::AutoCRUD::Manual::Limitations.
CatalystX::CRUD and CatalystX::CRUD:YUI are two distributions which allow you to create something similar but with full customization, and the ability to add more features. So, you trade effort for flexibility and power.@cpan.org>
This software is copyright (c) 2012 by Oliver Gorwits.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. | http://search.cpan.org/~oliver/Catalyst-Plugin-AutoCRUD-2.122460/lib/Catalyst/Plugin/AutoCRUD.pm | CC-MAIN-2014-35 | refinedweb | 3,355 | 50.97 |
Hi, I’m having issues with exporting a sketch that has a movie in it from the Video library from processing. When I run the sketch, it runs fine and the video plays. But, when I export the sketch to an exe and try to run it, it displays nothing (just a blank screen).
import java.*; import processing.sound.*; import processing.video.*; Movie movie; void setup() { size(1000,1000,P2D); movie = new Movie(this, "keyboard-cat.mov"); // movie.width = width; // movie.height = height; movie.loop(); movie.volume(0.1); //music volume } void draw() { background(0); image(movie, 0, 0, width, height); } void movieEvent(Movie movie) { movie.read(); }
Yes the movie file is in the data folder which is located in the same folder as the sketch.
Help would be much appreciated!
Here’s a link to download the code + export if you need
You might need to import the library via sketch → import library → Video by the processing foundation | https://discourse.processing.org/t/exporting-sketch-with-movie-is-blank/27983/5 | CC-MAIN-2022-33 | refinedweb | 159 | 74.29 |
* Guillem Jover | On Fri, 2006-11-17 at 12:32:18 -0800, Tollef Fog Heen wrote: | > They are just strings. But, as I wrote, they are just one example; we | > would probably want to associate uploads and bzr branches as well. The | > LP syntax could be LP: #123 for closing bugs, LP: spec $specname for | > associating with a spec, and LP: bzr $product/$branchname (or something | > similar) for associating bzr branches with an upload. | | Are those the only ones you'd need for now? Could the recently "proposed" | Vcs field help with the bzr branches or does this refer to upstream | branches only? Given that we (Canonical and Ubuntu) is firmly committed to bzr, our interest in other RCS-es is not likely to be a concern. Note that being able to associate an upload and a branch is not the same as saying they come from the same place. As an example, we might have a changelog entry reading: * Integrate 8d branch for better support of 8D graphics cards.. LP: bzr xorg/8d-graphics-support The upload would then come from an integration branch, but we want to create the association automatically. | > > What about "Implements: foo" (or similar), a proper regex would have | > > to be defined, but you get the idea. | > | > Specs generally aren't implemented by a single upload, so it would be | > Spec-related or something like that. | | Could you explain how would you use that information to associate | things? And is there some way you would mark that spec is fully | implemented? The association would be done in launchpad, so it would not be of concern for dpkg itself. (We would probably have the upload processor associate the relevant meta-data with the upload based on the .changes file and then make sure the meta-data was kept together with the other meta-data we keep for an upload.) I don't think we would ever want to mark spec progress in a changelog; we discussed it, but it was rejected since progress often isn't the result of a single upload, but rather a series of uploads of multiple packages. I'll be happy to have this discussion further, but I think a discussion on how Ubuntu's procedures work is getting sufficiently off-topic for debian-project. :-) (Another reason for wanting bug closure but not spec progress tracking in a changelog is bugs are closed frequently; often multiple per uploads. A person probably has somewhere in the range of 5-10 specs for a six month release cycle; changing the status of those by way of a web UI isn't a big overhead.) | > I would rather just have a namespace allocation and derivatives can do | > whatever they want within their namespace, but to a certain degree I see | > why this is problematic and it seems you are unhappy with that? | | I'd prefer if we came up with something that is general and does not | need everyone around to implement their own solution. Granted some of | the work will still be specific per distribution, for example the | launchpad or bugzilla hooks into dak, but such solution would minimize | them, and most of the changes could be merged upstream w/o problem. A problem I see with trying to make this general is we end up with a lot of typing. The Closes syntax in changelogs is wonderful because it's so simple. If people had to write Closes: debian/#1234 (as an example), it'd be less readable, more writing and I think less liked as a whole. Inside a project, bug numbers are going to be unique; there's only one Freedesktop.org bug #1234, there's just one Debian bug #1234 and there's just one Launchpad bug #1234, so for me it makes a lot more sense to have a way to refer to bug #1234 in the context of a project and then treat that as a bug closing there. It's not like reopening bugs is that common anyway, so using a web/email UI for that is perfectly reasonable. If the syntax ended up being Closes: debian/#1234, Closes: Launchpad/#1234, Closes: Freedesktop.org/#1234, the «closes» bit is redundant and we could just as easily use Debian: #1234, LP: #1234 and fd.o: #1234 (We can argue over whether abbreviations should be allowed or not. I think they should, overlaps should be uncommon enough that we don't really need to care deeply.) -- Tollef Fog Heen ,''`. UNIX is user friendly, it's just picky about who its friends are : :' : `. `' `- | http://lists.debian.org/debian-project/2006/12/msg00012.html | CC-MAIN-2013-20 | refinedweb | 763 | 67.49 |
How to Build Smart Garden with Raspberry Pi using Moisture sensor
In this tutorial, I will utilize the excellent features of the Raspberry Pi to create a moisture sensor for the plant pots. You can monitor sensors locally with the led or remotely over the internet, and receive a daily e-mail if the Moisture is below a specified level.
Along the way I will:-
- Connect using breadboard and read the value of the analog sensor via SPI
- The sensor is well formatted in the console
- Raspberry Pi send email with sensor read
- Easily monitor sensors and some history books on the web
Today, I will show you how to use the humidity sensor and raspberry pi to send you an e-mail notification when your plant needs watering! If you’ve forgotten your houseplant, it’s really useful, but for people with very green fingers, you can, of course, double it!
The sensor board itself has both analog and digital outputs. The analog output gives a variable voltage reading so that you can estimate the soil’s water content (use some math!). When the soil moisture content exceeds a certain value, the digital output gives you a simple “on” or “off.” This value can be set or calibrated using an adjustable onboard potentiometer. In this case, we just want to know “yes, the factory has enough water” or “no, the factory needs watering!” So we will use the digital output. So in this we can set up the notification of the sensor.
Collect the Hardware to Build Smart Garden with Raspberry Pi
- Raspberry Pi (any mode is ok, we’ll use a zero for this demo)
- Humidity Sensor
- Paste probes together (we will use a factory)
Let’s connect the probe to the sensor. Just connect the two pins on the probe to the sensor side with only 2 pins. It does not matter which direction the wire is going to.
Now let’s connect the sensor to Raspberry Pi.
VCC -> 3v3 (pin 1)
GND -> GND (Pin 9)
D0 -> GPIO 17 (Pin 11)
Now that everything is connected, we can open Raspberry Pi. Without writing any code, we can test to see our humidity sensor working. When power is applied, you should see the power light is on (4 pins down and the power light is on the right).
If the detection light is off, you can adjust the potentiometer on the sensor so that you can change the detection threshold (this applies only to the digital output signal)
Now we can see that our sensor is working and it’s time to calibrate for your specific use.
In this example, we want to monitor the moisture content of our plant pots. So let’s set the checkpoint to a level so that if it falls we’ll be notified that our pots are too dry and need watering. The plants here are a little dry on the side, but right now, if it gets dry it needs to be watered.
I need to adjust the potentiometer until the test light is on. Once I reach the checkpoint, I will stop turning the potentiometer. In this way, the detection light goes off when the humidity
the level is reduced by a small amount.
This is the sensor now calibrated, now it’s time to write some code with the sensor’s digital output!
If you want to do this directly on the Raspberry Pi, you can clone git like this:
Run Python code
#!/usr/bin/python
# Start by importing the libraries we want to use
import RPi.GPIO as GPIO # This is the GPIO library we need to use the GPIO pins on the Raspberry Pi
import smtplib # This is the SMTP library we need to send the email notification
import time # This is the time library, we need this so we can use the sleep function
# Define some variables to be used later on in our script
# You might not need the username and password variable depends if you are using a provider or if you have your raspberry pi setup to send emails
# If you have setup your raspberry pi to send emails, then you will probably want to use ‘localhost’ for your smtp_host
smtp_username = “enter_username_here” # This is the username used to login to your SMTP provider
smtp_password = “enter_password_here” # This is the password used to login to your SMTP provider
smtp_host = “enter_host_here” # This is the host of the SMTP provider
smtp_port = 25 # This is the port that your SMTP provider uses
smtp_sender = “sender@email.com” # This is the FROM email address
smtp_receivers = [‘receiver@email.com’] # This is the TO email address
# The next two variables use triple quotes, these allow us to preserve the line breaks in the string.
# This is the message that will be sent when NO moisture is detected
message_dead = “””From: Sender Name <sender@email.com>
To: Receiver Name <receiver@email.com>
Subject: Moisture Sensor Notification
Warning, no moisture detected! Plant death imminent!!! :'(
“””
# This is the message that will be sent when moisture IS detected again
message_alive = “””From: Sender Name <sender@email.com>
To: Receiver Name <receiver@email.com>
Subject: Moisture Sensor Notification
Panic over! Plant has water again 🙂
“””
# This is our send email function
def sendEmail(smtp_message):
try:
smtpObj = smtplib.SMTP(smtp_host, smtp_port)
smtpObj.login(smtp_username, smtp_password) # If you don’t need to login to your smtp provider, simply remove this line
smtpObj.sendmail(smtp_sender, smtp_receivers, smtp_message)
print “Successfully sent email”
except smtplib.SMTPException:
print “Error: unable to send email”
# This is our callback function, this function will be called every time there is a change on the specified GPIO channel, in this example we are using 17
def callback(channel):
if GPIO.input(channel):
print “LED off”
sendEmail(message_dead)
else:
print “LED on”
sendEmail(message_alive)
# Set our GPIO numbering to BCM
GPIO.setmode(GPIO.BCM)
# Define the GPIO pin that we have our digital output from our sensor connected to
channel = 17
# Set the GPIO pin to an input
GPIO.setup(channel, GPIO.IN)
# This line tells our script to keep an eye on our gpio pin and let us know when the pin goes HIGH or LOW
GPIO.add_event_detect(channel, GPIO.BOTH, bouncetime=300)
# This line asigns a function to the GPIO pin so that when the above line tells us there is a change on the pin, run this function
GPIO.add_event_callback(channel, callback)
# This is an infinte loop to keep our script running
while True:
# This line simply tells our script to wait 0.1 of a second, this is so the script doesnt hog all of the CPU
time.sleep(0.1)
Now we need to make some changes in the script, so open the script in the editor:
Nano moisture.py
Read the comments in the script and edit the various variables that have been defined.
To run the script, simply run the following command from the directory where the script is located:
sudo python moisture.py
So, if you have set all the variables and your potentiometer is set to the correct value, your raspberry pie will now be emailed to you when your plant’s water is dry!
To test, simply set your potentiometer to high and check your inbox!
You may also like to read these articles
How to Connect your Raspberry Pi and Arduino Together
How to set up a Raspberry Pi Plex server
How to Build DIY Heart Rate Monitor with Arduino
Hope my article “How to Build Smart Garden with Raspberry Pi using Moisture sensor” helps you to Build Smart Garden with Raspberry Pi. If you have any query, feel free to comment. | https://technicalustad.com/build-smart-garden-with-raspberry-pi/ | CC-MAIN-2018-09 | refinedweb | 1,272 | 66.47 |
Exit Print View
#include <sys/dditypes.h>
#include <sys/sunddi.h>
int ddi_get_eventcookie(dev_info_t *dip, char *name,
ddi_eventcookie_t *event_cookiep);
Solaris DDI specific (Solaris DDI).
Child device node requesting the cookie.
NULL-terminated string containing the name of the event.
Pointer to cookie where event cookie will be returned.
The ddi_get_eventcookie() function queries the device tree for a cookie matching the given event name and returns a reference to that cookie. The search is performed by a calling up the device tree hierarchy until the request is satisfied by a bus nexus driver, or the
top of the dev_info tree is reached.
The cookie returned by this function can be used to register a callback handler, unregister a callback handler, or post an event.
Cookie handle is returned.
Request was not serviceable by any nexus driver in the driver's ancestral device tree hierarchy.
The ddi_get_eventcookie() function can be called from user and kernel contexts only.
See attributes(5) for a description of the following attributes:
attributes(5), ddi_add_event_handler(9F), ddi_remove_event_handler(9F)
Writing Device Drivers for Oracle Solaris 11.2 | http://docs.oracle.com/cd/E36784_01/html/E36886/ddi-get-eventcookie-9f.html | CC-MAIN-2016-40 | refinedweb | 179 | 50.02 |
Function Documentation
◆ get_model_text()
Definition at line 16 of file transport8.py.
16def get_model_text():
def get_model_text()
Definition: warehouse.py:18
◆ scen_solve()
Definition at line 66 of file transport8.py.
References scen_solve.
66def scen_solve(workspace, checkpoint, bmult_list, list_lock, io_lock):
74 # instantiate the GAMSModelInstance and pass a model definition and GAMSModifier to declare bmult mutable
79 # dynamically get a bmult value from the queue instead of passing it to the different threads at creation time
def scen_solve(workspace, checkpoint, bmult_list, list_lock, io_lock)
Definition: transport8.py:66
Variable Documentation
◆ args
◆ bmult_list
Definition at line 109 of file transport8.py.
◆ checkpoint
Definition at line 107 of file transport8.py.
◆ cp
Definition at line 103 of file transport8.py.
◆ io_lock
Definition at line 113 of file transport8.py.
◆ list_lock
Definition at line 112 of file transport8.py.
◆ nr_workers
Definition at line 116 of file transport8.py.
◆ scen_solve
Definition at line 119 of file transport8.py.
Referenced by scen_solve().
◆ t8
Definition at line 106 of file transport8.py.
◆ target
◆ threads
Definition at line 117 of file transport8.py.
◆ ws
Definition at line 99 of file transport8.py. | https://www.gams.com/latest/docs/apis/examples_python/namespacetransport8.html | CC-MAIN-2019-39 | refinedweb | 180 | 52.05 |
sound capabilities of modern computer hardware are usually much more advanced than the dumb terminals of yesterday, and typical users expect their computers to make sounds that are prettier than the coarse console bell. Java programs can do this by loading and playing a file of audio data with the java.applet.AudioClip interface. As the package name implies, the AudioClip interface was originally intended only for applets. Since Java 1.0, applets have been able to call their getAudioClip( ) instance method to read an audio file over the network. In Java 1.2, however, the static java.applet.Applet.newAudioClip( ) method was added to allow any application to read audio data from any URL (including local file: URLs). This method and the AudioClip interface make it very easy to play arbitrary sounds from your programs, as demonstrated by Example 17-2.
AudioClip
getAudioClip( )
java.applet.Applet.newAudioClip( )
file
Invoke PlaySound with the URL of a sound file as its sole argument. If you are using a local file, be sure to prefix the filename with the file: protocol. The types of sound files supported depend on the Java implementation. Sun's default implementation supports .wav, .aiff, and .au files for sampled sound, .mid files for MIDI, and even .rmf files for the MIDI-related, proprietary "Rich Music Format" defined by Beatnik. [1]
PlaySound
When you run the PlaySound class of Example 17-2, you may notice that the program never exits. Like AWT applications, programs that use Java's sound capabilities start a background thread and do not automatically exit when the main( ) method returns. [2] To make PlaySound better behaved, we need to explicitly call System.exit( ) when the AudioClip has finished playing. But this highlights one of the shortcomings of the AudioClip interface: it allows you to play( ), stop( ), or loop( ) a sound, but it provides no way to track the progress of the sound or find out when it has finished playing. To achieve that level of control over the playback of sound, we need to use the JavaSound API, which we'll consider in the next section.
main( )
System.exit( )
play( )
stop( )
loop( )
Example 17-2. PlaySound.java
package je3.sound;
/**
* Play a sound file from the network using the java.applet.Applet API.
*/
public class PlaySound {
public static void main(String[ ] args)
throws java.net.MalformedURLException
{
java.applet.AudioClip clip =
java.applet.Applet.newAudioClip(new java.net.URL(args[0]));
clip.play( );
}
}
Before you can test PlaySound, you'll need some sound files to play. You probably have lots of sampled audio files sitting around on your computer: they are often bundled with operating systems and applications. Look for files with .wav, .au, or .aif extensions. You may not have MIDI files on your computer, but many MIDI hobbyists make files available for download on the Internet; a quick Internet search will locate many samples you can use.
The JavaSound web page is also worth a visit (). This page includes links to an interesting JavaSound demo application that includes source code and its own set of sampled audio and MIDI files. Also of interest here are free downloadable MIDI soundbank files, which may give you richer MIDI sound. 2014, O’Reilly Media, Inc.
(707) 827-7019
(800) 889-8969
All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners. | http://www.onjava.com/pub/a/onjava/excerpt/jenut3_ch17/?page=last&x-order=date | CC-MAIN-2014-42 | refinedweb | 561 | 57.57 |
The first part of the Flatiron self paced program is all about Ruby. I knew absolutely nothing about Ruby and still struggle with it a bit. Part of the curriculum is all about making a Tic Tac Toe game. I can't remember how many of the labs are centered around Tic Tac Toe, but I definitely got sick of the game. I really struggled with parts of this challenge. I wanted to write about how I worked through the code as I think it really helps to explain to myself why this works.
First, I had to create the board by defining how I wanted the board to be visually displayed. Also creating a board array that assigned indexes to the squares on the board.
def display_board(board) puts " #{board[0]} | #{board[1]} | #{board[2]} " puts "-----------" puts " #{board[3]} | #{board[4]} | #{board[5]} " puts "-----------" puts " #{board[6]} | #{board[7]} | #{board[8]} " end
Then we had to get input from the user that would indicate where the player wanted to move. The player would indicate 1-9 where they wanted to move on the board. Since arrays start their index at 0, I had to minus one from whatever the player input to work correctly.
def input_to_index(user_input) user_input.to_i - 1 end
Next was creating the the rules for the move and how turns would work. I had to check if the move would be valid. What makes a move valid? Well it must be associated with one of the indexes on the board, so say the player input 15, well that isn't valid since there is no space 15 on the board. Also, the spaces can't be occupied already. The thing I loved about this section was the use of && (and), || (or), and the bang (!) operator. Also gets.strip is such a cool method of getting user input and making sure there isn't any excess on it like an additional line.
def move(board, index, current_player) board[index] = current_player end def valid_move?(board, index) index.between?(0,8) && !position_taken?(board, index) end def turn(board) puts "Please enter 1-9:" input = gets.strip index = input_to_index(input) if valid_move?(board, index) move(board, index, current_player(board)) display_board(board) else turn(board) end end
Now it was time to essentially figure out when the game was over. This included making a counter to indicate the total number of turns, only when a space was taken. The counter would go up, only if a valid input was put in. This was also helpful to use to determine is it was player X's turn or player O's turn. This was my first time using a single line syntax for a conditional.
def turn_count(board) count =0 board.each do |occupied| if occupied == "X" || occupied == "O" count += 1 end end count end def current_player(board) turn_count(board) % 2 == 0 ? "X" : "O" end def position_taken?(board, index) !(board[index].nil? || board[index] == " ") end
The last section was to define how someone won the game. This was probably the most challenging section for me. I found myself constantly trying to call local variables outside of the method. First I had to define the winning combinations by noting all the indexes that would indicate three in a row on the board. The indexes also had to be filled with either three X's or three O's. How about if all the spots were filled but no one won? Well that needed to be accounted for too. This was a great chance to use the .all? function. If it is all full and not won, then it must be a draw. The game is over if the board is won, a draw, or full. While all of this code is very simple, mostly just one line, it was a challenge to think so abstractly about how to achieve all of these conditions.
WIN_COMBINATIONS =[[0,1,2], [3,4,5], [6,7,8], [0,3,6], [1,4,7], [2,5,8], [0,4,8], [2,4,6]] def won?(board) WIN_COMBINATIONS.each do |win_combination| win_index_1 = win_combination[0] win_index_2 = win_combination[1] win_index_3 = win_combination[2] if board[win_index_1] == "X" && board[win_index_2] == "X" && board[win_index_3] == "X" return win_combination elsif board[win_index_1] == "O" && board[win_index_2] == "O" && board[win_index_3] == "O" return win_combination end end return false end def full?(board) board.all? { |token| token== "X" || token=="O"} end def draw?(board) full?(board) && !won?(board) end def over?(board) won?(board) || draw?(board) || full?(board) end def winner(board) if won?(board) return board[won?(board)[0]] end end def play(board) until over?(board) do turn(board) end if won?(board) puts "Congratulations #{winner(board)}!" else draw?(board) puts "Cat's Game!" end end
One great take away from this lab was writing out instructions for myself. Essentially writing the steps I needed to take to achieve the goal. I found when I didn't write out the steps, I would become lost so quickly. I also learned the magic that is pry so I wasn't coding in the dark anymore. I also liked writing my thought process afterwards so I better understood how each piece of code worked. I also found myself going into a dark place when I got stuck and thinking I would never succeed. However, by writing this blog, it proves that I survived and conquered. There are so many ways to achieve this same outcome, which is so fascinating to me.
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/dotnotation/tic-tac-toe-58jn | CC-MAIN-2021-31 | refinedweb | 913 | 74.59 |
17 September 2008 12:45 [Source: ICIS news]
By Hilde Ovrebekk
LONDON (ICIS news)--The FTSE 100 index of UK corporate stocks recovered slightly on Wednesday but performance from oil and chemicals companies was mixed.
At 10:16 GMT, the FTSE 100 had risen 1.37% to 5,094.70, after falling to a three-year low on Tuesday.
Shares in ?xml:namespace>
However, shares in BP fell a further 3.76% after closing down 3.15% at 476.00 pence during the previous day’s trading.
The Dow Jones Eurostoxx 50 index also recovered to 3,112.72 points by 10:05 GMT after dropping 3.1% by the close on Tuesday.
Energy major Total's shares recovered slightly to €42.53, while BASF and Air Liquide stocks continued to fall. The former’s shares fell a further 1% on Wednesday to €34.85, while the latter was down another 1.2% to €81.81.
A key
“Whatever the product, it is dire,” the distributor said.
A European buyer of monopropylene glycol (MPG) said demand had already been slow this year due to the economic slowdown, and did not rule out that the current finanical crisis could have further effects on demand.
One MPG distributor said it was "an unsure market and customers are buying hand-to-mouth", adding that this was the case for other chemicals products as well.
However, another distributor said MPG demand was stable and had not seen an immediate negative impact from the current economic events.
Crude futures rose more than $3/bbl on the news of the $85bn (€60.35bn)
October NYMEX crude was up $2.62/bbl at 11:00 GMT, while October Brent was at $91.65/bbl, up $2.43/bbl from the previous close.
In the financial services sector, shares in HBOS, owners of the
The company’s shares plunged more than 30% on Tuesday and slumped by another 50% when the FTSE 100 opened.
Barclays Capital, the investment banking division of Barclays Bank, said it had agreed to acquire substantially all of Lehman Brothers’ North American businesses and operating assets, including 10,000 employees, for a consideration consisting of assumed liabilities, $250m in cash and certain other terms and conditions.
Barclays also said it was in talks about similar takeovers of Lehamn's operations outside North America, but added that this was still early in the negotiation stages.
($1 = €0.71)
Jane Massingham and Heidi Fin | http://www.icis.com/Articles/2008/09/17/9156561/europe-chems-shares-mixed-as-ftse-100-recovers.html | CC-MAIN-2015-11 | refinedweb | 408 | 66.94 |
Listening for Checkbox Clicks in User Data
On 22/08/2018 at 14:13, xxxxxxxx wrote:
Hello,
Is it possible, with a Python tag or otherwise, to listen for the click of an object's User Data checkbox? I have gotten this to work with the User Data button in the past. I'm looking to run some scripts when the user checks a "Use IK" checkbox.
I tried overriding the "message" method in my Python tag, but I couldn't find any of the attributes for the data object that gets passed as a parameter. In my web searches, it seemed that attributes such as 'descid' only existed for certain types of message data.
Any clarification on how to set this up (and decipher the message data) would be very helpful. Thank you.
On 23/08/2018 at 02:03, xxxxxxxx wrote:
Hi BlastFrame !
Actually there is a way, but is not recommended and pretty experimental, so if strange behavior occurs, we can't do anything. In the same time it's the only solution.
Whith that said we get AddEventNotification, which allow to receive message from another object.
Here is the code of the python tag
import c4d def message(msg_type, data) : if msg_type == c4d.MSG_NOTIFY_EVENT: event_data = data['event_data'] if event_data['msg_id'] == c4d.MSG_DESCRIPTION_POSTSETPARAMETER: desc_id = event_data['msg_data']['descid'] if desc_id[1].id == 1: print op.GetObject()[c4d.ID_USERDATA,1] def main() : obj = op.GetObject() # Check if we already listen for message if not obj.FindEventNotification(doc, op, c4d.NOTIFY_EVENT_MESSAGE) : obj.AddEventNotification(op, c4d.NOTIFY_EVENT_MESSAGE, 0, c4d.BaseContainer())
If you have any questions, pelase let me know.
Cheers,
Maxime.
On 24/08/2018 at 15:02, xxxxxxxx wrote:
This works! Thank you, Maxime.
It may be worth calling out that this line's id attribute is the ID assigned when creating the User Data:
if desc_id[1].id == 1:
It took me a little to figure this out.
Thank you, again! | https://plugincafe.maxon.net/topic/10925/14381_listening-for-checkbox-clicks-in-user-data/ | CC-MAIN-2020-05 | refinedweb | 322 | 59.5 |
Eventlet is Awesome!
Eventlet is an asynchronous networking library for Python which uses coroutines to allow you to write non-blocking code without needing to perform the normal mental gymnastics that usually go along with asynchronous programming. There are a bunch of async/event-driven networking framworks (eventmachine, node.js, tornado, and twisted to name a few), and their defining characteristic is that they use select/epolll/kqueue/etc. to do I/O asynchronously. Asynchronous I/O is cool because when done correctly it allows your server to handle a much greater number of concurrent clients than the one-OS-thread-per-connection approach. Furthermore, since you're using co-operative rather than preemptive multitasking, you can safely update shared data structures without locks, which makes it a lot easier to write correct code.
Anyway, eventlet is pretty cool. If you like Python and you're interested in async programming, you should check it out. After all, anything that reduces the incidence of Heisenbugs is worth a look ;)
Proxymachine in 55 Lines
Since they're so specialized, playing with this kind of library requires a specific kind of project. Consequently, I decided to put together a "Hello Word!" version of Proxymachine. Proxymachine is a proxy built on eventmachine that lets you configure it's routing logic using Ruby. Don't get me wrong, Proxymachine is awesome and way more production ready than this. That being said, it's still friggin' cool that I could put togther a pale imitation of Proxymachine in less than 100 lines thanks to eventlet:
import functools import eventlet CHUNK_SIZE = 32384 class Router(object): def route(self, addr, data): raise NotImplemented def merge(*dicts): result = {} for d in dicts: result.update(d) return result def forward(source, dest): while True: d = source.recv(CHUNK_SIZE) if d == '': break dest.sendall(d) def route(router, client, addr): blocks = [] while True: block = client.recv(CHUNK_SIZE) if block == '': raise Exception('Failed to route request: "{0}"'.format("".join(blocks))) blocks.append(block) route = router.route(addr, "".join(blocks)) if route is not None: print "Forwarded connection from {0} to {1}".format(addr, route) server = eventlet.connect(route) for block in blocks: server.sendall(block) eventlet.spawn_n(forward, server, client) forward(client, server) return def start(router, **kwargs): defaults = { "listen": ('localhost', 8080), } config = merge(defaults, kwargs) print "Listening on:", config['listen'] eventlet.serve( eventlet.listen(config['listen']), functools.partial(route, router()) )
Roulette: A router in ten lines
The logic here is laughable, route inbound connections to either localhost:9998 or localhost:9999 depending on whether the remote port is divisible by two, but the point is that the routing logic could be anything. We're writing Python here. We could look stuff up in a database, or check the phase of the moon or y'know, do something useful.
import roulette class Router(roulette.Router): def route(self, addr, data): if addr[1] % 2 == 0: return ("localhost", 9999) return ("localhost", 9998) roulette.start(Router, listen = ("localhost", 80) ) | http://aaron.maenpaa.ca/blog/entries/2011/02/01/eventlet_is_awesome/ | CC-MAIN-2017-13 | refinedweb | 495 | 57.06 |
Introduction/Overview
Very often in projects involving multiple users with complex system integration, it happens that due to some reason BO (Business Object) instance is locked by integration or business user. This causes the update failure.
In case you would like to know the business object instance locking status then you can utilize SAP provided standard Reuse Library “BOAction”. Thanks to Jürgen Ravnik for giving the tip.
Note: Reuse Library “BOAction” comes as part of “AP.PlatinumEngineering” namespace which is, in general, not officially supported by SAP. In most cases, they are very well equipped to support key business features like Sending Email, Getting the code list value based on data type and more. But unfortunately, they are not bound to SAP Contractual Support SLAs.
For more details please refer:
Pre-Requisite
- SAP Cloud Application Studio
- SAP Cloud for Customer system access with PDI / SDK access
- Basic overview about ABSL Script, BO Action and Events.
Use Case:
The use case I am defining here is entirely hypothetical and could be easily achieved by standard features.
My requirement is, from the custom BO I would like to validate if opportunity instance associated with custom BO is locked or not. If the opportunity instance is locked by some other user then raise an error message and avoid save of the Custom BO instance.
Implementation:
Enough talking, let’s see this in action!
Create a custom BO with fields and most importantly message definition. Since I want to pass the exact same message which is being returned by SAP Reuse Library hence I kept message definition quite generic.
BO Definition for Message Text Definition:
import AP.Common.GDT as apCommonGDT; import AP.PDI.bo; [ChangeHistory] businessobject SK_PlayGround raises MSG_Error { message MSG_Error text "&1": String; [AlternativeKey] element ID: ID; element StartDate: Date; element EndDate: Date; element ContactTime: Time; element Note: MEDIUM_Name; element OpportunityUUID: UUID; **** **** **** }
We want to show the error message as soon as a user saves the custom BO instance. hence I have created ABSL script for Validation-OnSave of Root node for my Custom BO
ABSL Code for reference
import ABSL; import Common.DataTypes; import AP.PlatinumEngineering; var uuid: UUID; uuid =Library::UUID.ParseFromString(this.OpportunityUUID.content.ToString());// Opp ID 12450 var lock = BOAction.CheckLock("Opportunity","","Root",uuid); var messages = lock.MessageTypeItem; var validationFailed : Indicator = false; if(!lock.IsInitial() && messages.Count() > 0){ foreach(var message in messages) { if(message.MessageSeverityText == "E" && message.MessageID.content.ReplaceRegex("[[:space:]]","") == "AP_ESI_COMMON/101") { validationFailed = true; MSG_Error.Create(message.MessageType, message.Text.content); break; } } } return !validationFailed;
Testing
Go to generated UIs for your custom BO and create a new instance of your custom Business Object. Fill in the mandatory field and press SAVE. At this moment a call will be fired to the backend SDK (You can also set a breakpoint in the ABSL script to simulate the scenario) and an error will be raised and instance save will be rejected.
Conclusion
Using “BOAction” we can not just validate if the Instance is locked but can also check if an action is allowed (IsActionAllowed) and many more.
Further, It’s worth noting that AP.PlatinumEngineering namespace, in general, offers many such Reuse Libraries which are neither documented anywhere nor have been discussed. So it’s worth spending some time exploring what these libraries can do and share them with the whole community.
Hope you like it and let me know your feedback in the comment section.
PS: This was my first attempt to write a blog and now I know how “difficult” it is to write down but at the same time how insightful it could be when you actually write it down and find out things which you could have simply ignored in general.
Thanks & Regards
Saurabh Kabra
Hey Saurabh, great start mate! Keep going!
About Platinum namespace… My POV is that it needs to be documented or should not exist at all. There are so many usefull feautures/libraries there. But without an ability to debug them or read their code, it might be just a time bomb if you’re not certain what this or that library is doing.
I’m also wondering how it works from legal perspective. Having doubts there is any explicit statement regarding this particular namespace in customer’s contract. And not sure that the mentioned KBA is good enough to reduce the support scope and liability.
Thanks Andrei!
Reg documentation of these Platinum Engineering based libraries, I also feel the same that they should be documented. In fact, I raised it as an Idea some 2-3 years back (then idea forum) but it was rejected.
However, They have really helped me in the past for my project work hence I tend to use them occasionally wherever needed. | https://blogs.sap.com/2020/07/02/how-to-check-if-bo-instance-is-locked-in-sap-cloud-for-customer-pdi-sdk/ | CC-MAIN-2020-40 | refinedweb | 788 | 55.44 |
QMCGROUP(3) Library Functions Manual QMCGROUP(3)
QmcGroup - container representing a single fetch group of metrics from multiple sources
#include <QmcGroup.h> CC ... -lqmc -lpcp
A QmcGroup object is a container for contexts and metrics that should be fetched at the same time. The class manages the QmcContext(3) objects, timezones and bounds for every context created with QmcGroup::use and QmcGroup::addMetric.
~QmcGroup(); Destructor which destroys all metrics and contexts created by this group. QmcGroup(bool restrictArchives = false); Construct a new fetch group. restrictArchives will restrict the creating of multiple archive contexts created from the same host.
The default context of the group is defined as the first context created with QmcGroup::use before the first call to QmcGroup::addMetric. If no context is created before the first metric is added, the localhost is used as the default context. Therefore, if any metrics specifications contain archive sources, an archive source must have been created with QmcGroup::use to avoid an error for mixing context types. uint_t numContexts() const; The number of valid contexts created in this group. QmcContext const& context(uint_t index) const Return a handle to a context. QmcContext& context(uint_t index); Return a modifiable handle to a context. int mode() const; Return the type of context, either PM_CONTEXT_LOCAL, PM_CONTEXT_HOST or PM_CONTEXT_ARCHIVE. QmcContext* which() const; Return a handle to the current context of this group. This does not call pmUseContext(3) so it may not be the current PMAPI(3) context. uint_t whichIndex() const The index to the current group context. int use(int type, char const* source); Use the context of type from source. If a context to this source already exists in this group, that context will become the current PMAPI(3) context. Otherwise a new context will be created. The result is the PMAPI(3) context handle for the QmcGroup::context or a PMAPI(3) error code if the context failed. bool defaultDefined() const; Returns true if a default context has been determined. int useDefault(); Use the default context. If a default context has not been created, the context to the local host will be attempted. A result less than zero indicates that the method failed with the PMAPI(3) error encoded in the result. void createLocalContext(); Create and use a context to the local host. A result less than zero indicates that the method failed with the PMAPI(3) error encoded in the result.
These addMetric methods should be used to create new metrics as the QmcMetric constructors are private. These methods will always return a pointer to a QmcMetric object, however the QmcMetric::status() field should be checked to ensure the metric is valid. QmcMetric* addMetric(char const* str, double theScale = 0.0, bool active = false); Add the metric str to the group, with a scaling factor of scale. If active is set the metric will use only active instances (see QmcMetric(3)). QmcMetric* addMetric(pmMetricSpec* theMetric, double theScale = 0.0, bool active); Add the metric theMetric to the group, with a scaling factor of scale. If active is set the metric will use only active instances (see QmcMetric(3)). int fetch(bool update = true); Fetch all the metrics in all the contexts in this group. If update is equal to true, all counter metrics will be automatically converted to rates (see QmcMetric(3)). int setArchiveMode(int mode, const struct timeval *when, int interval); Set the mode and time to access all archive contexts in this group. See pmSetmode(3) for more details.
These methods assist in the management of multiple timezones and help to control the current timezone. enum TimeZoneFlag { localTZ, userTZ, groupTZ, unknownTZ }; Enumeration used to describe the origin of the default timezone. localTZ, userTZ and groupTZ indicate that the timezone was set with QmcGroup::useLocalTZ, QmcGroup::useTZ(QString const&) and QmcGroup::useTZ() respectively. unknownTZ indicates that a timezone has not been set. userTZ indicates that the timezone was int useTZ(); Use the timezone of the current group context as the default. int useTZ(const QString &tz); Add and use tz as the default timezone of this group. int useLocalTZ(); Use the timezone of the localhost as the default for this group. void defaultTZ(QString &label, QString &tz); Return the label and tz string of the default timezone of this group. TimeZoneFlag defaultTZ() const Return the origin of the default timezone. int useDefaultTZ(); Set the timezone to be the default timezone as defined in a previous call to QmcGroup::useTZ or QmcGroup::useLocalTZ. struct timeval const& logStart() const; Return the earliest starting time of any archives in this group. Assumes that QmcGroup::updateBounds has been called. struct timeval const& logEnd() const; Return the latest finish time of any archives in this group. Assumes that QmcGroup::updateBounds has been called. void updateBounds(); Determine the earliest start and latest finish times of all archives in this group. int sendTimezones(); Send the current timezones to kmtime(3).
void dump(ostream &os); Dump state information about this group to os.
PMAPI(3), QMC(3), QmcContext(3), QmcMetric(3), pmflush(3), pmprintf(3) and pmSetModeGROUP(3)
Pages that refer to this page: QMC(3), QmcContext(3), QmcSource(3) | https://www.man7.org/linux/man-pages/man3/QmcGroup.3.html | CC-MAIN-2021-04 | refinedweb | 852 | 56.96 |
Count Min Sketch
Get FREE domain for 1st year and build your brand new site
A count min sketch data structure is a probabilistic data sturcture (a data structure that provides an approximate solution) that keeps track of frequency. In other words, the data structure keeps track of how many times a specific object has appeared.
Why use it?
The most common solution to the problem of keeping track of frequency is using a hash map.
If you don't know what a hash map is, I suggest you read up about it using OpenGenus's Hash Map article: Hash Map
However, there are two main issues using a hash map for this problem.
- Collision issue
- Often times a hash map will have to dynamically grow it's number of buckets/internal storage to hold all the data.
- This means that there is a chance for a collision to happen and this would mean equality checks would be needed to find the key. Thus the performance is slower.
- Memory issue
- Another issue is that a hash map can be memory intensive and could be a problem if you want to not use too much memory. When you have over a million unique objects then that could become quite memory intensive as keeping a reference takes requires memory along with the contents within the object itself takes memory.
So how does count min sketch solve the problems of a hash map?
A count min sketch data structure is an alternative approach that can save memory while keeping high performance at the cost of accuracy.
This image shows a representation of what the data structure tries to do, which is creating rows of width (w) number of buckets along an associated hash function for each row. The number of rows are determined by the number of hash functions, so technically yes you could only have 1 hash function but the more you have the more likely you will be more accurate (as long as the buckets are reasonable). And the width of each row (number of buckets) is up to the testing. There is no magic number but you can guarantee good results if there are more buckets than possible values but that defeats the purpose of saving memory, so you should aim for a number less than the total number of values through testing different widths.
A count min sketch uses the idea of a hash function and buckets to keep count but uses multiple hash functions and rows of buckets to figure out the count. Using a count min sketch will always result in an overestimate of the true count, so you will never get an underestimate of the true count.
- For example: a number may have only appeared 5 times but due to collisions it may be 10 times. Or perhaps a number never appeared but will have a count greater than 0 due to the hash values corresponding to a bucket with a positive count value.
The idea behind this is that even if a collision happens, there are other rows to check and hope that there weren't many collisions either. With enough buckets and enough high quality hash functions then an exact value could potentially be found but there is a chance for an overestimate.
The count min sketch doesn not need to store a reference of the object nor keep the object around for later, it only needs to compute the hash value as the column index and update the count for that hash function row.
What's important to take note is that the more high quality hash functions you have and sufficient enough buckets then the better the accuracy will be. However, you also to play around with the number of buckets, too little buckets will cause collisions but too many buckets will cause a waste of space.
Now that you the fundamental idea behind this data structure, it's important to know the standard functions that come with this data structure.
Functions
The two major functionalities that a count min sketch implement are:
put/update and get/estimate
The two functions can have different names but these are the common names that appear in different code implementations. The follow functions are written in java.
These two functions should be included in the implemention section to be fully functional.
Put / Update
public void update(T item, int count) { for (int row = 0; row < depth; row++) { frequencyMatrix[row][boundHashCode(hashers[row].hash(item))] += count; } }
The code fragment above is a method that takes the item that you want to give an associated count to. A count min data structure typically has multiple rows, so you would iterate through each row and use the hash function to find which bucket the count will be added to. There is a possibility that the hash function creates a value that is greater than the length of the row, so the hash code is bounded to the value of 0 to row length - 1.
Get / Estimate
public int estimate(T item) { int count = Integer.MAX_VALUE; for (int row = 0; row < depth; row++) { count = Math.min(count, frequencyMatrix[row][boundHashCode(hashers[row].hash(item))]); } return count; }
Now because this is probabalistic and there is a chance for collision, we have to go through each row and find the corresponding bucket index that item would appear in and find the minimum amount. The reason why we want the minimum is because we can guarantee the position where it should be but not the value if there are collisions on a few rows. As the implementation of add will simple handle collisions by adding onto the current value at that bucket position, it's safe to check the other rows as it's extremely unlikely for two objects of different values to have the same hash values (assuming it's a good hash function) for each row.
Possible Implementation
The following is a possible implementation of how a count min sketch using java.
import java.util.HashMap; import java.util.Random; public class CountMinSketch<T> { public static interface Hasher<T> { public int hash(T obj); } private int depth; private int width; private Hasher<T>[] hashers; private int[][] frequencyMatrix; @SafeVarargs public CountMinSketch(int width, Hasher<T>... hashers) { this.width = width; this.hashers = hashers; depth = hashers.length; frequencyMatrix = new int[depth][width]; } public void update(T item, int count) { //implementation was shown in the section above } public int estimate(T item) { //implementation was shown in the section above } //Keep the hash code within 0 and length of row - 1. private int boundHashCode(int hashCode) { return hashCode % width; } public static void main(String[] args) { //Using functional programming to create these. CountMinSketch.Hasher<Integer> hasher1 = number -> number; CountMinSketch.Hasher<Integer> hasher2 = number -> { String strForm = String.valueOf(number); int hashVal = 0; for (int i = 0; i < strForm.length(); i++) { hashVal = strForm.charAt(i) + (31 * hashVal); } return hashVal; }; CountMinSketch.Hasher<Integer> hasher3 = number -> { number ^= (number << 13); number ^= (number >> 17); number ^= (number << 5); return Math.abs(number); }; CountMinSketch.Hasher<Integer> hasher4 = number -> String.valueOf(number).hashCode(); int numberOfBuckets = 16; CountMinSketch<Integer> cms = new CountMinSketch<>(numberOfBuckets, hasher1, hasher2, hasher3, hasher4); Random rand = new Random(); //Using a hashmap to keep track of the real count HashMap<Integer, Integer> freqCount = new HashMap<>(); int maxIncrement = 10; int maxNumber = 1000; int iterations = 50; //add 50 random numbers with a count of 1-10 for (int i = 0; i < iterations; i++) { int increment = rand.nextInt(maxIncrement) + 1; int number = rand.nextInt(maxNumber); freqCount.compute(number, (k, v) -> v == null ? increment : v + increment); cms.update(number, increment); } //Print out all the numbers that were added with their real and estimated count. for (Integer key : freqCount.keySet()) { System.out.println("For key: " + key + "\t real count: " + freqCount.get(key) + "\t estimated count: " + cms.estimate(key)); } } }
Some important things to take note:
- The update and estimate function implementations are in the sections above.
- I've created a generic hasher interface and count min sketch data structure to allow flexbility in what values to hash. Though if you were working with strings or some primitive data type then you could simply change the types to suit what you want.
- Inside the main function I've created 4 hash functions that deal with integers (they are not the best hash functions but can work) using functional programming.
- Try to play around with the number of buckets, iterations, increment amount, and max number to get a better observations of the effects.
- Also try removing a few of the hash functions and you will notice that the accuracy will decrease with less rows.
- Lastly, there are other implementations out there that you can find that take into consideration of error.
Pros and Cons
Pros
- Very simple and easy to implement.
- Can minimize memory greatly and avoid equality checks.
- Customizable data structure to provide your own hash functions and the number of buckets.
- Can guarantee the value will be either be exact or an overestimate.
Cons
- A good chance the count is not exact, will probably be overestimate.
- Such as a value never appeared once but will have a positive frequency count.
- Requires really good hash functions.
- Requires testing around to get the benefits of the data structure.
Other Applications
Besides being used for an alternative method of keeping track of a relative frequency count, there are other uses this data structure may be used in.
- Possibly for a safe password picking mechanism.
- passwords that are popular could be declined.
- passwords that are extremely rare could have a high amount due to collisions which could also be declined.
- This would mean passwords may need to be varied for this application.
- In NLP (natural language processing), keeping frequency count on a large amount of data such as pairs / triplets can be reduced by using this data structure.
- Heavily useful in queries too such as avoiding to go through the entire table of data and use approximate values to speed up queries.
4) There are other applications such as: heavy hitters, approximate page rank, distinc count, K-mer counting in computational biology, and etc. | https://iq.opengenus.org/count-min-sketch/ | CC-MAIN-2021-43 | refinedweb | 1,680 | 51.99 |
Thank you for your reply, the hernia appatently is very small and was to do when it was born apparently umblical. I have spoken to our vet and they can confirm they we confident when they vet checked it there were no issues. The new owners have advised it was going to be £210.00 and they want 50% towards it. We feel very disappointed as we spent alot of time get the right owners and bringing the puppies up in our own home as part of our family. As you would get attached and that why we have no issue in providing a complete refund and keeping her. Do we know if they will have the operation for it or keep the money.
Thank you Andrew,
First, let me say that as I am veterinary surgeon (as you have posted to the veterinary category) and not a lawyer, I will give you my opinion regarding the situation and generally what legal ramifications there are. If you wish afterwards, just let me know and I will forward your question to the legal experts to aid you.
Now it is unfortunate that your vet may not have caught this subtle umbilical hernia. These are genetically passed from parents to pup, but if it small and reducible on palpation (meaning it pops back in and likely why it was missed) then it usually will cause no health issue for the dog. This means we wouldn't jump into surgery in a wee pup this age (especially if this wasn't a massive hernia that she was displacing her guts into --which I suspect everyone would have noticed). Instead as this is a female puppy,we'd just plan to repair the hernia when the dog was in for spaying. At that time, it would just mean making a slightly bigger incision at the midline to incorporate the hernia and closing the deficit afterwards.
The reason I asked about this “contribution” is because when we spay female dogs in my practice (and pretty much all the practice I have worked in here in the UK) we don’t tend to charge anything more for a small hernia repair done at the time of spaying. It is literally a few more minutes and a few extra stitches. I suppose if the practice they frequent are sticklers, then perhaps they could charge a token fee for the additional time (ie £30 or so). As you can see, a £210 fee is a bit odd for a small hernia that isn't causing issue and therefore doesn't need to be addressed right away. Sure, if it was massive, thus couldn't wait until spaying, and needed a separate operation now the cost would be justified. But for a small hernia that was so small that her first vet didn't catch it, this doesn't make sense. And I have to admit, I would question this. Therefore, I would strongly suggest that you consider getting your vet to ring their vet to get some answers to this fishy situation.
Otherwise, I have looked into the laws that would surround this issue. In this case, poor wee Titch would be considered a "defective" item (since dogs are considered to be property in the legal world). Therefore, under Section 13 of the Sale of Goods Act 1979, we'd have to appreciate that they do have some consumer rights after buying Titch. And even having had a health check, if the pup is found within 6 months to be defective you would be liable to either refund the purchase price for her return (as you offered) or to pay to have the puppy's defect fixed. (reference). And I do have to note that they can choose to push you to cover "reasonable" costs (or take you to small claims court) rather then go for full refund since SOGA 1979 doesn't require them to return her but only that they have the right to do so if they wish.
Overall, this situation sounds a wee bit dodgy as it is. Therefore, the first step is to ring your vet, let them know the situation and the contact details for their vet. They will need to ring them and discuss the state of this pup. This will ensure they are not exaggerating the situation and trying anything on. Depending on the outcome of that, you can again offer a full refund for her return. But if they are not keen to oblige, then they are taking a puppy "as is" then I would suggest getting a quote for umbilical hernia repair from your vet to compare (ie done at the time of spay and without). This will give you further idea of what they can reasonable request and what would be reasonable indicated at this time based on hernia.
Otherwise, just let me know if you do want me to pass your query on to the legal experts or if you want to first touch base with your own vet.! : ) | http://www.justanswer.co.uk/veterinary/8az55-hello-not-breeder-two-kc-registered.html | CC-MAIN-2017-04 | refinedweb | 843 | 72.8 |
The complex menu has caused issues for the chain
These moves can modify your mood
Join the NASDAQ Community today and get free, instant access to portfolios, stock ratings, real-time alerts, and more!
After a
turbulent week
based mostly on the ongoing debt drama emanating from across the
pond, stock futures are headed higher this morning, as Greece
appears to have reached a fiscal resolution just in time for the
Feb. 15 deadline
. With little in the way of domestic economic and earnings news on
today's agenda, Wall Street is cheering the austerity measures
approved yesterday by the Greek Parliament, although the resolution
comes amid violent backlash by its citizens. Against this backdrop,
all three major market indexes are looking to jump right out of the
gate.
In earnings news, Regeneron Pharmaceuticals, Inc. (REGN -
102.08) reported a fourth-quarter loss of $53.4 million, or 58
cents per share, compared to last year's loss of $14.6 million, or
17 cents per share. Excluding items, the per-share loss arrived at
37 cents. Revenue for the quarter fell to $123 million, versus its
year-ago revenue of $133.7 million. The bottom-line results came in
better than expected, as analysts, on average, were calling for a
loss of 60 cents per share on $131 million in sales. REGN is up
more than 11% ahead of the bell.
Elsewhere, KVH Industries (KVHI - 9.90) recorded a
fourth-quarter profit of $1.6 million, or 11 cents per share, a
significant increase over its year-ago profit of $200,000, or 2
cents per share. Revenue was also on the rise, jumping 18% to $31.9
million. The results fell in line with analysts' expectations.
Looking ahead, the electronics concern is calling for a
first-quarter loss of 3 cents to 9 cents per share on revenue
ranging between $25.5 million and $28.5 million. Analysts,
meanwhile, are projecting earnings of 6 cents per share on $29.5
million in sales.
Earnings Preview
Today's earnings docket will also feature reports from Alexander
& Baldwin (
ALEX
), Amicus Therapeutics (
FOLD
), AsiaInfo Linkage (
ASIA
), Gen-Probe (
GPRO
), Health Management Associates (
HMA
), Lender Processing Services (LPS), Limelight Networks (LLNW),
Nordic American Tanker (NAT), Rackspace Hosting (RAX), Seattle
Genetics (SGEN), and Skilled Healthcare Group (SKH). Keep your
browser at
SchaeffersResearch.com
for more news as it breaks.
Economic Calendar
The economic calendar kicks off Tuesday with retail sales
figures and business inventories, along with the latest data on
import and export prices. The Empire State manufacturing index hits
the Street on Wednesday. Also on the day's docket are the NAHB's
housing market index, industrial production and capacity
utilization, weekly crude inventories, and the minutes from the
latest meeting of the Federal Open Market Committee ,287,092 call contracts traded on Friday, compared to
986,033 put contracts. The resultant single-session put/call ratio
arrived at 0.77, while the 21-day moving average was 0.59.
Overseas Trading
Stocks in Asia ended mostly higher today, after legislators in
Athens ratified a critical austerity bill. The news alleviated
investors' worst fears about a potential Greek debt default, which
provided support for exporters -- including Mazda and Mitsubishi in
Tokyo, along with Kia and Hyundai in Seoul. As a result, traders
were able to shrug off a disappointing gross domestic product (GDP)
report for Japan, with the quake-stricken country's economy
shrinking by a steeper-than-forecast 0.6% during the fourth
quarter. Elsewhere, property stocks pulled back in China, pressured
by reports that the city of Wuhu in Anhui is halting its
home-subsidy program. By the close, South Korea's Kospi and Japan's
Nikkei gained 0.6% apiece, while Hong Kong's Hang Seng added 0.5%,
and China's Shanghai Composite slipped 0.01%.
The major European benchmarks are in the black at midday,
despite increasingly violent protests across Greece following the
parliamentary approval of unpopular austerity measures. With the
cash-strapped country one step closer to dodging a disorderly debt
default, buyers have tiptoed cautiously back into equities. The
bulls are also responding to some significant M&A developments,
as Vodafone confirmed that it's pondering a bid for telecom titan
Cable & Wireless Worldwide. At last check, the French CAC 40 is
up 0.7%, the German DAX is 0.8% higher, and London's FTSE 100 has
climbed 1.1%.
Currencies and Commodities
The U.S. dollar index is pointed lower this morning, with the
greenback last seen down 0.5% at $78.76. Crude oil, on the other
hand, is edging closer to the psychologically significant century
mark, with the front-month contract up 0.9% at $99.58 per barrel.
Set to
shake off last week's loss
, gold futures are 0.4% higher at $1,731.30? | http://www.nasdaq.com/article/stocks-set-to-jump-as-greece-approves-austerity-measures-cm120279 | CC-MAIN-2014-52 | refinedweb | 806 | 56.66 |
Can't figure out how to do this.
Switch Statement: set char variable penaltyKick value to L, R, or C
public class Switch {
public static void main(String[] args) {
**char penaltyKick = 'L' 'R' 'C';** switch (penaltyKick) { case 'L': System.out.println("Messi shoots to the left and scores!"); break; case 'R': System.out.println("Messi shoots to the right and misses the goal!"); break; case 'C': System.out.println("Messi shoots down the center, but the keeper blocks it!"); break; default: System.out.println("Messi is in position...");
If I am not mistaken this is Java and you posted it in the python section.
You would have better luck over there.
Oh ok my bad. thank you. | https://discuss.codecademy.com/t/switch-statement-set-char-variable-penaltykick-value-to-l-r-or-c/9598 | CC-MAIN-2018-26 | refinedweb | 117 | 71.21 |
/>
A while ago I wrote a post on estimating Pi using a variety of methods. As a sort of follow-up I will now write a post on estimating e, or Euler's Number, in Python.
Let's dive straight into the code which lives in a single file called e.py. (At least you can't accuse me of using overly long filenames). You can download the source code as a zip or clone/download the Github repository.
Source Code Links
This is the first part of the source code.
e.py (part 1)
import math E15DP = "2.718281828459045" RED = "\x1B[31;1m" GREEN = "\x1B[32;1m" WHITE = "\x1B[37;1m" RESET = "\x1B[0m" def main(): print("-----------------") print("| codedrome.com |") print("| Estimatinge |") print("-----------------\n") expansion_1() #expansion_2() #newtons_series() #brothers_formula() def print_as_text(e): """ Takes a value for e and prints it below a definitive value, with matching digits in green and non-matching digits in red """ e_string = "{:1.15f}".format(e) print("Definitive: " + WHITE + E15DP + RESET) print("Estimated: ", end="") for i in range(0, len(e_string)): if e_string[i] == E15DP[i]: print(GREEN + e_string[i] + RESET, end="") else: print(RED + e_string[i] + RESET, end="") print("\n")
Near the top of the file I have defined a variable called E15DP which is a string containing e to 15 decimal places. This will be used as a definitive value to compare with values we calculate. The name is all upper case to denote that it should be considered as a constant.
Next we have a few colour constants to output text to the console in various colours.
The main function consists of four function calls to estimate e using various methods. All but the first are commented out for now as we'll run them one at a time.
The print_as_text function takes a value for e and converts it to a string with 15 decimal places. It then prints it after the definitive value, with correct digits in green and incorrect ones in red.
e.py (part 2)
def expansion_1(): print(WHITE + "First expansion\nn = 100000\n---------------" + RESET) n = 100000 e = (1 + 1/n)**n print_as_text(e)
The first function to calculate e is expansion_1 which implements this formula...
First Expansion
e = (1 + 1/n)n
...with a value of n = 100000
We now have enough to run with this command.
Running the program
python3.7 e.py
The output is
Program Output
----------------- | codedrome.com | | Estimating e | ----------------- First expansion n = 100000 --------------- Definitive: 2.718281828459045 Estimated: 2.718268237192297
As you can see this gives us 4dp of accuracy; you might like to experiment with different values of n. Now let's try another expansion to implement this formula with n = 0.000000001.
Second Expansion
e = (1 + n)1/n
e.py (part 3)
def expansion_2(): print(WHITE + "Second expansion\nn = 0.000000001\n----------------" + RESET) n = 0.000000001 e = (1 + n)**(1/n) print_as_text(e)
Comment out expansion_1() in main, uncomment expansion_2() and run the program again. This is the output.
Program Output
----------------- | codedrome.com | | Estimating e | ----------------- Second expansion n = 0.000000001 ---------------- Definitive: 2.718281828459045 Estimated: 2.718282052011560
This gives us 5dp of accuracy but again you can try different values of n..)
e.py (part 4)
def newtons_series(): print(WHITE + "Newton's Series\nn = 0 to 12\n---------------" + RESET) e = 0 for n in range(0, 13): e += (1 / math.factorial(n)) print_as_text(e)
Uncomment newtons_series() in main and run the program.
Program Output
----------------- | codedrome.com | | Estimating e | ----------------- Newton's Series n = 0 to 12 --------------- Definitive: 2.718281828459045 Estimated: 2.718281828286169
Sir Isaac gives us 9dp of accurancy with a maximum value of n = 13.
As with π people are still working on ways to calculate e and I'll finish off with this impressive recently discovered method, Brothers Formula, devised as recently as 2004./>
As with Newton's Series this is the sum of a infinite series although rather more complex.
e.py (part 5)
def brothers_formula(): print(WHITE + "Brothers Formula\nn = 0 to 8\n----------------" + RESET) e = 0 for n in range(0, 9): e += (2*n + 2) / (math.factorial(2*n + 1)) print_as_text(e)
The extra complexity pays off though: we only need to count to 8 to get 15dp (possibly more) of accuracy.
Program Output
----------------- | codedrome.com | | Estimating e | ----------------- Brothers Formula n = 0 to 8 ---------------- Definitive: 2.718281828459045 Estimated: 2.718281828459045 | https://www.codedrome.com/estimating-e-in-python/ | CC-MAIN-2021-31 | refinedweb | 722 | 66.94 |
Fabulous Adventures In Coding
Eric Lippert is a principal developer on the C# compiler team. Learn more about Eric.
OK, let's finish up this year and this series. We have an algorithm that can compute what cells in the zero octant are in view to a viewer at the origin when given a function that determines whether a given cell is opaque or transparent. It marks the visible points by calling an action with the visible cells. We would like that to work in any octant, and for the viewer at any point, not just the origin.
We can solve the "viewer at any point" problem by imposing a coordinate transformation on the "what is opaque?" function and the "cell is visible" action. Suppose the algorithm wishes to know whether the cell (3, 1) is opaque. And suppose the viewer is not at the origin, but rather at (5, 6). The algorithm is actually asking whether the cell at (3 + 5, 6 + 1) is opaque. Similarly, if that cell is determined to be visible then the transformed cell is the one that is visible from (5, 6). We can easily transform one delegate into another:
private static Func<int, int, T> TranslateOrigin<T>(Func<int, int, T> f, int x, int y){ return (a, b) => f(a + x, b + y);}
private static Action<int, int> TranslateOrigin(Action<int, int> f, int x, int y){ return (a, b) => f(a + x, b + y);}
Similarly we can perform an octant transformation; if you want to map a point in octant one into a point in octant zero, just swap its (x,y) coordinates! Every octant has a simple transformation that reflects it into octant zero:
private static Func<int, int, T> TranslateOctant<T>(Func<int, int, T> f, int octant){ switch (octant) { default: return f; case 1: return (x, y) => f(y, x); case 2: return (x, y) => f(-y, x); case 3: return (x, y) => f(-x, y); case 4: return (x, y) => f(-x, -y); case 5: return (x, y) => f(-y, -x); case 6: return (x, y) => f(y, -x); case 7: return (x, y) => f(x, -y); }}
(And similarly for actions.)
Now that we have these simple transformation functions we can finally implement the code that calls our octant-zero-field-of-view algorithm:
public static void ComputeFieldOfViewWithShadowCasting( int x, int y, int radius, Func<int, int, bool> isOpaque, Action<int, int> setFoV){ Func<int, int, bool> opaque = TranslateOrigin(isOpaque, x, y); Action<int, int> fov = TranslateOrigin(setFoV, x, y);
for (int octant = 0; octant < 8; ++octant) { ComputeFieldOfViewInOctantZero( TranslateOctant(opaque, octant), TranslateOctant(fov, octant), radius); }}
Pretty slick, eh?
One minor downside of this algorithm is that it computes the visibility of the points along the axes and major diagonals twice; however, the number of such cells grows worst case linearly with radius. The algorithm as a whole is worst case quadradic in the radius, so the extra linear cost is probably irrelevant.
I've posted the project that builds the Silverlight control from the first episode here, if you want to get a complete working example.
Happy New Year everyone and we'll see you in 2012 for more Fabulous Adventures in Coding!
"Suppose the algorithm wishes to know whether the cell (3, 1) is opaque. And suppose the viewer is not at the origin, but rather at (5, 6). The algorithm is actually asking whether the cell at (3 + 5, 6 + 1)"
I think there is a typo there. I guess the algorithm should be asking if cell at (3 - 5, 1 - 6) is visible.
Very nice series by the way!
Freakin' awesome man. Thanks a lot for doing this. This is definitely going into some of my future games :)
No need for the TranslateOctant() switch. Create an 8 entry array of the value to multiply X and Y by. entry 1 is (1,1), entry 2 is (1,-1), ...
Much faster than the switch.
@Tom: ...except that doesn't account for the cases whee X & Y are swapped. You could create an array of 2x2 matritices representing two linear equations (X & Y) each with two inputs (X & Y). That might still be faster than the switch, but short branches that stay within the L1 cache are very fast, so a single branch might be faster than 4 array lookups. Only profiling would tell if it's worht the trouble.
As to which is easier to understand, I think that depends on your background. People familiar with graphics would no doubt say that the matrix multiply method is clearer, while other would likely chose the switch as clearer.
@Federico: At first, I thought the same. However notice that he only does the correction the other way around. The algorithm always asks for opacity (or sets visibility) in its local coordinates. The isOpaque and setFoV methods then translate them to global ones.
Wow, great articles! Very good.. Thank you for the contribution! | http://blogs.msdn.com/b/ericlippert/archive/2011/12/29/shadowcasting-in-c-part-six.aspx | CC-MAIN-2014-42 | refinedweb | 825 | 60.95 |
- Stackato
- Developer Tools
- Languages
- Support
, September 9, 2010
We (ActiveState) released Komodo 6.0 RC 1 today and we want your feedback. If you are using beta 3 from about a month ago you can update via Komodo's Help > Check for Updates... menu entry. release candidate:
Read some of the details below and tell us how you really feel:
What's New in RC 1?
- Project changes
- projects are now accessible through the places panel
- project tools are now merged into the main toolbox
- codeintel and fast open utilize the open project
- Menu changes
- the Komodo menus have been tweaked
- provides better functionality grouping and consistency
- Side panes
- left pane now uses a vertical tab layout
- customize the layout through the Appearance prefs
- Places
- fixed a number of CPU performance issues
- International keyboard and IME improvements
- Stability improvements on all platforms
What's New in Komodo 6?
A lot. HTML 5 autocomplete. CSS 3 autocomplete. Full Python 3 support (debugging, syntax checking, autocomplete, code browsing). A new Database Explorer tool for quickly exploring SQL databases (SQLite out of the box and extensions for MySQL and Oracle, with plans for Postgres). A new project system called "Places" that adds a file system browser (local and remote). New publishing support for syncing a directory with a remote machine. Additions to Komodo's Hyperlinks for quickly navigating to file references. Added support for PHP, Perl, Ruby and JavaScript regular expression debugging with Komodo's excellent Rx tool. See the Komodo 6.0 Features post for a full outline.
Your feedback please
We're hoping that a Komodo 6 final release is not too far away now. We'd love to have your input on how we can polish and improve on the above work and how we can make Komodo IDE and Komodo Edit tools that help you get your coding work done faster. Please subscribe to this blog or follow @activestate on twitter for coming posts that go into detail on what's coming and new in Komodo 6. And please give us your feedback. Enjoy.
Trackback URL for this post:
Category: announcementsSHARE THIS:
23 comments for Komodo 6.0 RC 1: More Stable, Places UI Love
Submitted by Karol Orzeł (not verified) on Sat, 2010-09-11 07:09.
Submitted by Trent Mick (not verified) on Mon, 2010-09-13 14:07.
Permalink
There is "Edit > Find in Current Project..." in the main menu (where it was before, I believe). As well, the Ctrl+Shift+F (Cmd+Shift+F on the Mac) keybinding for "Find in Files..." will remember its search scope: i.e. will remember to find "in current project" if you set the scope to that. You can also use "right-click > Find..." on the any of the directory nodes in places (including the root node). I suppose a "Find in Project..." entry in the "Projects" subpanel cog menu might be justified.
Submitted by Karol Orzeł (not verified) on Tue, 2010-09-14 06:19.
Permalink
Thanks for the reply Trent.
Thanks for the reply Trent.
I believe Search field I have been speaking about has kinda different functionality. To see what i mean:
It was pretty useful and to be honest i really miss this feature...
Submitted by Trent Mick (not verified) on Tue, 2010-09-14 14:54.
Permalink
Ah, the Places filter box. Unfortunately, no, we won't have this for 6.0 final release. Perhaps we can revisit for a 6.1 release. The problem was with potential large perf problems if the tree was large. Think about running a filter to find all files match "foo" when the Places root dir is '/'.
One thought we had was to use the fastopen dialog (File | Open | Fast Open...), put into a mode to search recursively only under the current Places root dir. That would help with the use case of trying to find files matching a certain filter string. The fastopen dialog as is wouldn't visually show a *hierarchy* though.
It would be helpful to know people's particular use cases for this (and any) feature to help us impl features.
Submitted by Trent Mick (not verified) on Mon, 2010-09-13 23:50.
Permalink
No explicit Jython support currently, unfortunately. However, Komodo does have a mechanism where by autocomplete for 'java' imports in Jython would be possible. Komodo's Code Intelligence system has allows one to add "API Catalogs". These are XML files that define an API and are used to provide autocomplete and calltips. For example Komodo ships with a pywin32.cix (CIX == Code Intelligence Xml) that defines the Python PyWin32 API -- i.e. it tells Komodo what to do for autocomplete when it sees "import win32api" et al.
Someone could write a jython.cix that was a Python API catalog for the "java" namespace. Presumably this would be generated by a script. I'm not a Jython developer so I don't know how difficult that would be, but I presume that it would be fairly straightforward using Jython introspection. A good start (if you or any reader is interested) is to look at pywin32.cix in a Komodo install and at the RelaxNG schema (with comments) for the CIX format (...) and to ask questions on the Komodo forums ().
The limitation here is that an API catalog is static: i.e. this would only cover java APIs that are known ahead of time and not java imports that are, say, only on the end user machine. If the generator script were open source, then users could run that over their Java libs to generate API catalogs for those.
The full solution would be an extension to Komodo's codeintel system's scanning to support scanning Java class libraries in the fly. That would require a motivated hacker to dig into the codeintel system in openkomodo ().
However, getting a good jython.cix I think would be a great 80% solution.
Submitted by Florent V. (not verified) on Wed, 2010-09-15 12:38.
Permalink
Hi, thanks for your work.
Hi, thanks for your work.
I was wondering: is the OSX theme in this RC definitive (barring fixing bugs/obvious glitches), or are there plans to improve it?
This may be harsh, but in my opinion the current look and feel on OSX lacks refinement and only gets the native look halfway done. (I do understand UI nitpicking may not be a priority for both Komodo developers and users.)
I may switch to OSX full-time in the next few months, so I'm looking at other editors, since the cross-platform advantage of Komodo won't be a factor then (I currently use Komodo Edit 5 on both Ubuntu and OSX). And unless I fall in love with the competition, I'd love to look at theme building and try to achieve a great native look. (I think somebody did this for Win7.)
So: is the current OSX theme final? Should I wait for improvements before digging in?
Submitted by Evan (not verified) on Mon, 2010-09-20 16:10.
Submitted by Igor (not verified) on Sun, 2010-09-26 18:42.
Permalink
Hi,
Hi,
I am using Komodo Edit, because IDE cannot show multiple projects open at sidebar tree.
Can 6.0 show multiple open projects and their files in Places, or it is still monoteistic in it's IDE incarnation? I do not want to close currently open project & its files, like IDE 5.x does, just to apply a quick fix to other project's file and commit it.
E.g. wanted to see all available files from multiple projects (like in Edit), but with full project-support (VCS, debugging), like in IDE. Is it possible in this version?
Submitted by Billy (not verified) on Mon, 2010-09-27 12:12.
Submitted by Trent Mick (not verified) on Mon, 2010-10-04 17:11.
Permalink
The final release will be out very very soon.
You should be able to right click on a project and select "Remove from Recent Projects" in the "Projects" subpanel of the Places sidebar. If not, then that change was after the RC1 build. Check out a nightly here:
| http://www.activestate.com/blog/2010/09/komodo-60-rc-1 | CC-MAIN-2013-20 | refinedweb | 1,364 | 74.79 |
Back to: Java Tutorials For Beginners and Professionals
The enumeration in Java with Examples
In this article, I am going to discuss Enumeration in Java with Examples. Please read our previous article, where we discussed Swings in Java applications with Examples.
The enumeration in Java:
An Enumeration may be a list of named constants. Java Enumeration appears almost like enumerations in other languages. In C++, enumerations are simple lists of named integer constants but in Java, an enumeration defines a category type.
An enumeration is made using the enum keyword. Once you’ve got defined an enumeration, you’ll create a variable of that type. However all enumerations define a category type, we can instantiate an enum using new. Instead, you declare and use an enumeration variable in much an equivalent way as you are doing one among the primitive types. An enumeration value also can be used to control a switch statement. Of Course, all of the case statements must use constants from an equivalent enum as that employed by the switch expression. Java compiler will create .class for each class or interface or enum available within the program.
Points to Remember:
- Internally enum is created as a final that extends a predefined abstract class called ENUM.
- All constants are public static final.
- Every constant holds an object of the corresponding enum.
- Java enum is very efficient compared to C Language enum because in C Language enum contains only constants but in java, an enum can contain constants along with other codes like constructors, methods, etc.
Methods in Enumeration in Java:
values():
It returns an array that contains a list of the enumeration constants.
Declaration : public static enum-type[] values(); Here, enum-type is the type of the enumeration
valueOf(String str):
It returns the enumeration constant whose value corresponds to the string passed in str.
Declaration : public static enum-type valueOf(String str); Here, enum-type is the type of enumeration constants.
Sample Program to demonstrate the use of values() and valuesOf() methods
enum Fruit { Apple, Mango, Pineapple, Banana, Orange } public class EnumDemo1 { public static void main (String[]args) { Fruit fr; System.out.println ("Here are all Fruits constants:"); //Use values() Fruit allfruits[] = Fruit.values (); for (Fruit f:allfruits) System.out.println (f); System.out.println (); //Use valueOf() fr = Fruit.valueOf ("Mango"); System.out.println ("fr contains: " + fr); } }
Output:
Rules for Enumeration
- An enum can be empty.
- An enum can contain only constants
- An enum can contain constants along with other codes.
- An enum cannot contain only other code without constants.
- Terminating constants by; is optional when there is no other code is not available otherwise of other code is available then terminating constants by; is mandatory.
- When we write other code-writing constants must be the first line.
- We can also write the main() method in the enum.
Example:
enum Day{ SUN, MON, TUE, WED, THU, FRI, SAT; public static void main(String args[]) { System.out.println("This is enum"); } }
8. enum types cannot be instantiated that is, the object cannot be created for enum.
Example : Day d = new Day(); //It is invalid
9. We can write a constructor inside the enum which can be accessed by creating a constant in the enum.
Example:
enum Day{ SUN, MON, TUE, WED, THU, FRI, SAT; Day() { System.out.println("This is constructor"); } }
10. We can write instance variables and instance methods in the enum.
Sample Program to demonstrate the use of an enum Constructor, instance variable, and method
enum Fruit { Apple (10), Mango (9), Pineapple (12), Banana (15), Orange (8); private int price; //Constructor Fruit (int p) { price = p; } int getPrice () { return price; } } public class EnumDemo2 { public static void main (String[]args) { Fruit fr; // Display price of Mango System.out.println ("Mango costs: " + Fruit.Mango.getPrice () + "cents. \n"); //Display all fruits and prices System.out.println ("All fruits prices:"); for (Fruit f:Fruit.values ()) System.out.println (f + " costs " + f.getPrice () + " cents. "); } }
Output:
Note: Up to java 1.4 version we can pass byte/short/char/int datatype values into switch case. But in 1.5 version onwards we can pass primitive datatypes byte/ short/ char/ int and its corresponding Wrapper Classes like Byte/ Short/ Character/ Integer. From Java 1.5 onwards we can also pass a String into the Switch case as an argument. In Java 1.7 we can also pass a String into the Switch case as an argument.
Sample Program Enumeration in Java
enum Day { SUN, MON, TUE, WED, THU, FRI, SAT; } public class EnumDemo3 { public static void main (String[]args) { Day d = Day.valueOf ("TUE"); switch (d) { case SUN: System.out.println ("It is a Jolly Day"); break; case MON: System.out.println ("First Working Day"); break; case TUE: System.out.println ("Second Working Day"); break; case WED: System.out.println ("Third Working Day"); break; case THU: System.out.println ("Fourth Working Day"); break; case FRI: System.out.println ("Fifth Working Day"); break; case SAT: System.out.println ("Shopping Day"); break; } } }
Output: Second Working Day
Note:- An enum cannot extend a class or enum, because an enum is internally created as a final class that extends a predefined enum class. We cannot create a class that extends any enum because internally enum is created as a final class and we cannot extend a final class. An enum can implement any number of interfaces because internally enum is created as a final class and a class can extend a class and also implement any number of interfaces.
Here, in this article, I try to explain Enumeration in Java with Examples and I hope you enjoy this Enumeration in Java with Examples article. | https://dotnettutorials.net/lesson/enumeration-in-java/ | CC-MAIN-2022-27 | refinedweb | 937 | 57.87 |
The topic you requested is included in another documentation set. For convenience, it's displayed below. Choose Switch to see the topic in its original location.
operator<
Visual Studio .NET 2003
Tests if the object on the left side of the operator is less than the object on the right side.
Parameters
- _Left
- An object of type vector.
- _Right
- An object of type vector.
Return Value
true if the vector on the left side of the operator is less than the vector on the right side of the operator; otherwise false.
Example
// vector_op_lt.cpp // compile with: /EHsc #include <vector> #include <iostream> int main( ) { using namespace std; vector <int> v1, v2; v1.push_back( 1 ); v1.push_back( 2 ); v1.push_back( 4 ); v2.push_back( 1 ); v2.push_back( 3 ); if ( v1 < v2 ) cout << "Vector v1 is less than vector v2." << endl; else cout << "Vector v1 is not less than vector v2." << endl; }
Output
See Also
<vector> Members | vector::operator< Sample
Show: | https://msdn.microsoft.com/en-us/library/vstudio/691s080z(v=vs.71).aspx | CC-MAIN-2015-32 | refinedweb | 157 | 70.5 |
Code
The complete source: here
Intro
Starting to play with graph algorithms on the GPU (who wants to wait for nvGraph from NVIDIA, right?) As one of the precursor problems – sorting large arrays of non-negative integers came up. Radix sort is a simple effective algorithm quite suitable for GPU implementation.
Least Significant Digit Radix Sort
The idea is simple. We take a set A of fixed size elements with lexicographical ordering defined. We represent each element in a numeric system of a given radix (e.g., 2). We then scan each element from right to left and group the elements based on the digit within the current scan window, preserving the original order of the elements. The algorithm is described here in more detail.
Here is a the pseudocode for radix = 2 (easy to extend for any radix):
k = sizeof(int) n = array.length for i = 0 to k all_0s = [] all_1s = [] for j = 0 to n if bit(i, array[j]) == 0 then all_0s.add(array[j]) else all_1s.add(array[j]) array = all_0s + all_1s
GPU Implementation
This is a poster problem for the “split” GPU primitive. Split is what <code>GroupBy</code> is in LINQ (or <code>GROUP BY</code> in TSQL) where a sequence of entries are split (grouped) into a number of categories. The code above applies split k times to an array of length n, each time the categories are are “0” and “1”, and splitting/grouping are done based on the current digit of an entry.
The particular case of splitting into just two categories is easy to implement on the GPU. Chapter 39 of GPU Gems describes such implementation (search for “radix sort” on the page).
The algorithm described there shows how to implement the pseudocode above by computing the position of each member of the array in the new merged array (line 10 above) without an intermediary of accumulating lists. The new position of each member of the array is computed based on the exclusive scan where the value of scanned[i] = scanned[i – 1] + 1 when array[i-1] has 0 in the “current” position. (scanned[0] = 0). Thus, by the end of the scan, we know where in the new array the “1” category starts (it’s scanned[n – 1] + is0(array[n – 1]) – total length of the “0” category, and the new address of each member of the array is computed from the scan: for the “0” category – it is simply the value of scanned (each element of scanned is only increased when a 0 bit is encountered), and start_of_1 + (i – scanned[i]) for each member of the “1” category: its position in the original array minus the number of “0” category members up to this point, offset by the start of the “1” category.
The algorithm has two parts: sorting each block as described above and then merging the results. In our implementation we skip the second step, since Alea.CUDA
DeviceScanModule does the merge for us (inefficiently so at each iteration, but makes for simpler, more intuitive code).
Coding with Alea.CUDA
The great thing about Alea.CUDA libray v2 is the Alea.CUDA.Unbound namespace that implements all kinds of scans (and reducers, since a reducer is just a scan that throws away almost all of its results).
let sort (arr : int []) = let len = arr.Length if len = 0 then [||] else let gridSize = divup arr.Length blockSize let lp = LaunchParam(gridSize, blockSize) // reducer to find the maximum number & get the number of iterations // from it. use reduceModule = new DeviceReduceModule<int>(target, <@ max @>) use reducer = reduceModule.Create(len) use scanModule = new DeviceScanModule<int>(target, <@ (+) @>) use scanner = scanModule.Create(len) use dArr = worker.Malloc(arr) use dBits = worker.Malloc(len) use numFalses = worker.Malloc(len) use dArrTemp = worker.Malloc(len) // Number of iterations = bit count of the maximum number let numIter = reducer.Reduce(dArr.Ptr, len) |> getBitCount let getArr i = if i &&& 1 = 0 then dArr else dArrTemp let getOutArr i = getArr (i + 1) for i = 0 to numIter - 1 do // compute significant bits worker.Launch <@ getNthSignificantReversedBit @> lp (getArr i).Ptr i len dBits.Ptr // scan the bits to compute starting positions further down scanner.ExclusiveScan(dBits.Ptr, numFalses.Ptr, 0, len) // scatter worker.Launch <@ scatter @> lp (getArr i).Ptr len numFalses.Ptr dBits.Ptr (getOutArr i).Ptr (getOutArr (numIter - 1)).Gather()))
This is basically it. One small optimization - instead of looping 32 times (length of int in F#), we figure out the bit count the largest element and iterate fewer times.
Helper functions/kernels are straightforward enough:
let worker = Worker.Default let target = GPUModuleTarget.Worker worker let blockSize = 512 [<Kernel; ReflectedDefinition>] let getNthSignificantReversedBit (arr : deviceptr<int>) (n : int) (len : int) (revBits : deviceptr<int>) = let idx = blockIdx.x * blockDim.x + threadIdx.x if idx < len then revBits.[idx] <- ((arr.[idx] >>> n &&& 1) ^^^ 1) [<Kernel; ReflectedDefinition>] let scatter (arr : deviceptr<int>) (len: int) (falsesScan : deviceptr<int>) (revBits : deviceptr<int>) (out : deviceptr<int>) = let idx = blockIdx.x * blockDim.x + threadIdx.x if idx < len then let totalFalses = falsesScan.[len - 1] + revBits.[len - 1] // when the bit is equal to 1 - it will be offset by the scan value + totalFalses // if it's equal to 0 - just the scan value contains the right address let addr = if revBits.[idx] = 1 then falsesScan.[idx] else totalFalses + idx - falsesScan.[idx] out.[addr] <- arr.[idx] let getBitCount n = let rec getNextPowerOfTwoRec n acc = if n = 0 then acc else getNextPowerOfTwoRec (n >>> 1) (acc + 1) getNextPowerOfTwoRec n 0
Generating Data and Testing with FsCheck
I find
FsCheck quite handy for quick testing and generating data with minimal setup in FSharp scripts:
let genNonNeg = Arb.generate<int> |> Gen.filter ((<=) 0) type Marker = static member arbNonNeg = genNonNeg |> Arb.fromGen static member ``Sorting Correctly`` arr = sort arr = Array.sort arr Arb.registerByType(typeof<Marker>) Check.QuickAll(typeof<Marker>)
Just a quick note: the first line may seem weird at first glance: looks like the filter condition is dropping non-positive numbers. But a more attentive second glance reveals that it is actually filtering out negative numbers (currying, prefix notation should help convince anyone in doubt).
Large sets test.
I have performance tested this on large datasets (10, 000, 000 - 300, 000, 000) integers, any more and I'm out of memory on my 6 Gb Titan GPU. The chart below goes up to 100,000,000.
I have generated data for these tests on the GPU as well, using Alea.CUDA random generators, since FsCheck generator above is awfully slow when it comes to large enough arrays.
The code comes almost verbatim from this blog post:))
except I'm converting the generated array to integers in the [0; n] range. Since my radix sort work complexity is O(k * n) (step complexity O(k)) where
k = bitCount(max input) I figured this would give an adequate (kinda) comparison with the native F#
Array.orderBy .
Optimization
It is clear from the above chart, that there is a lot of latency in our GPU implementation - the cost we incur for the coding nicety: by using
DeviceScanModule and
DeviceReduceModule we bracket the merges we would otherwise have to do by hand. Hence the first possible optimization: bite the bullet, do it right, with a single merge at the end of the process and perform a block-by-block sort with
BlockScanModule
One thought on “GPU Split & Sort With Alea.CUDA” | https://viralfsharp.com/2016/06/11/gpu-split-sort-with-alea-cuda/ | CC-MAIN-2019-35 | refinedweb | 1,229 | 55.74 |
Allen Downey
This notebook contains a solution to a problem I posed in my Bayesian statistics class: time between goals is exponential with parameter $\lambda$, the goal scoring rate. In this case we are given as data the inter-arrival time of the first two goals, 11 minutes and 12 minutes. We can define a new class that inherits from
thinkbayes2.Suite and provides an appropriate
Likelihood function:
import thinkbayes2 class Soccer(thinkbayes2.Suite): """Represents hypotheses about goal-scoring rates.""" def Likelihood(self, data, hypo): """Computes the likelihood of the data under the hypothesis. hypo: goal rate in goals per game data: interarrival time in minutes """ x = data lam = hypo / 90 like = thinkbayes2.EvalExponentialPdf(x, lam) return like
Likelihood computes the likelihood of
data given
hypo, where
data is an observed time between goals in minutes, and
hypo is a hypothetical goal-scoring rate in goals per game.
After converting
hypo to goals per minute, we can compute the likelihood of the data by evaluating the exponential probability density function (PDF). The result is a density, and therefore not a true probability. But the result from
Likelihood only needs to be proportional to the probability of the data; it doesn't have to be a probability.
Now we can get back to Step 1.
Before the game starts, what should we believe about Germany's goal-scoring rate against Brazil? We could use previous tournament results to construct the prior,(134) # fake data chosen by trial and error to yield the observed prior mean thinkplot.Pdf(suite) suite.Mean()
1.3441732095365195
Now that we have a prior, we can update with the time of the first goal, 11 minutes.
suite.Update(11) # time until first goal is 11 minutes thinkplot.Pdf(suite) suite.Mean()
1.8620612271278361
After the first goal, the posterior mean rate is almost 1.9 goals per game.
Now we update with the second goal:
suite.Update(12) # time between first and second goals is 12 minutes thinkplot.Pdf(suite) suite.Mean()
2.2929790004763997
After the second goal, the posterior mean goal rate is 2.3 goals per game.
Now on to Step 3.
If we knew the actual goal scoring rate, $\lambda$, we could predict how many goals Germany would score in the remaining $t = 90-23$ minutes. The distribution of goals would be Poisson with parameter $\lambda t$.
We don't actually know $\lambda$, but we can use the posterior distribution of $\lambda$ to generate a predictive distribution for the number of additional goals.
def PredRemaining(suite, rem_time): """Plots the predictive distribution for additional number of goals. suite: posterior distribution of lam in goals per game rem_time: remaining time in the game in minutes """ metapmf = thinkbayes2.Pmf() for lam, prob in suite.Items(): lt = lam * rem_time / 90 pred = thinkbayes2.MakePoissonPmf(lt, 15) metapmf[pred] = prob thinkplot.Pdf(pred, color='gray', alpha=0.3, linewidth=0.5) mix = thinkbayes2.MakeMixture(metapmf) return mix mix = PredRemaining(suite, 90-23)
PredRemaining takes the posterior distribution of $\lambda$ and the remaining game time in minutes (I'm ignoring so-called "injury time").
It loops through the hypotheses in
suite, computes the predictive distribution of additional goals for each hypothesis, and assembles a "meta-Pmf" which is a Pmf that maps from each predictive distribution to its probability. The figure shows each of the distributions in the meta-Pmf.
Finally,
PredRemaining uses
MakeMixture to compute the mixture of the distributions. Here's what the predictive distribution looks like.
thinkplot.Hist(mix) thinkplot.Config(xlim=[-0.5, 10.5])
After the first two goals, the most likely outcome is that Germany will score once more, but there is a substantial chance of scoring 0 or 2--4 additional goals.
Now we can answer the original questions: what is the chance of scoring 5 or more additional goals:
mix.ProbGreater(4)
0.057274188144370755
After the first two goals, there was only a 6% chance of scoring 5 more times. And the expected number of additional goals was only 1.7.
mix.Mean()
1.7069804402488897
That's the end of this example. But for completeness (and if you are curious), here is the code for
MakeMixture:.Incr(x, p1 * p2) return mix | https://nbviewer.jupyter.org/github/AllenDowney/ThinkBayes2/blob/master/examples/soccer_soln.ipynb | CC-MAIN-2021-10 | refinedweb | 699 | 57.37 |
Voronoi diagram (VD) could be a very colorful geometric figure, and we gonna generate and plot random version of it
both in JavaScript and in R.
According to Wikipedia [1]:
You would be surprised how many "real world" and theoretical applications are there [2-4].
In [2] it was stressed that there are "literally hundreds of different algorithms for constructing various types of Voronoi diagrams".
So, let's talk about algorithms in general and applied to generating/plotting of VD in particular.
There are a lot of suggestions, e.g., to use D3.js library in JavaScript [5] or use this or that package in R.
You can find a lot of algorithms realizes in C, C++, C# and VB, and actual samples of code even here, on CP [6-10].
Almost all of them overfilled with math, have a "monster" size of code [6,7], etc. The one in [9] is definitely not simple, and I'm not
sure if the one in [8] is fast. But I know for sure: generating/plotting in JavaScript is faster than in R on my computer, i.e.:
In R language, you could be confused totally, because
one of the R's great powers is its unlimited number of packages, virtually thousands of them.
For any applications, big or small, you can find a package. In case of VD there are many packages,
e.g.: deldir, alphahull, dismo, ggplot, ggplot2, tripack, CGAL, etc. Not to mention all linked packages
you need to install too. Do you need random colors? Again, find a few packages more...
In fact, I've found only black-and-white VDs built in JavaScript and R.
I was lucky to find what I need instead of advised tools: The Algorithm.
Here is a long story in short. I was reading description of "the simplest" algorithm in [2], and I've stopped right at the beginning.
I thought: "Interesting, but too time consuming..." After this I stumble at [6,7] and thought: "WOW, it will take too much time even to understand it..." Later I've "scanned" scripts in [5] and have found a small, compact code
in Python. My thoughts: "It looks like it is fake, too simple... It can't work!? Test it!" So, in 5 min I've created the first very simple prototype,
and it worked!
Find a page almost like the initial prototype page in a downloaded source file.
Take a look at this prototype page script code below. This version is even a little bit expanded if compare it with
an initial prototype page.
function Metric(x,y) {return (x*x + y*y)}
// Generating and plotting Voronoi diagram (simple version).
function pVoronoiD() {
var cvs=document.getElementById("cvsId");
var ctx=cvs.getContext("2d");
var n=7, w=640, x=y=d=dm=j=0, w1=w-2;
// Arrays for 7 predefined sites (located on canvas) and colors
X=[20,30,60,80,100,120,140];
Y=[200,100,300,400,200,400,340];
C=["red","orange","yellow","green","blue","navy","violet"];
ctx.fillStyle="white"; ctx.fillRect(0,0,w,w);
// Plotting sites and applying linked color.
for(y=0; y<w1; y++) {
for(x=0; x<w1; x++) {
dm=w1*w1 + w1*w1; j=-1;
for(var i=0; i<n; i++) {
d=Metric(X[i]-x,Y[i]-y)
if(d<dm) {dm=d; j=i;}
}//fend i
ctx.fillStyle=C[j]; ctx.fillRect(x,y,1,1);
}//fend x
}//fend y
// Plotting site points (in black). Note: these 7 lines were added later! LOL
// Also, there was no title, h3 and comment tags. No JS comments too. Only 21 lines.
// I've re-formatted it later to look like now (and match prime page code).
ctx.fillStyle="black";
for(var i=0; i<n; i++) {
ctx.fillRect(X[i],Y[i],3,3);
}
}
All "magic" of VD generation is happening inside triple "for" loop (see above).
In short, it works like following:
VD generated by this simple script is shown below.
Remember:
Additionally, a few custom helper functions can simplified code, and they can be used for any other applications,
and even can be "translated" to other language, e.g., R language in our case.
Now, VD generation/plotting is available in JavaScript and R producing beautiful colorful diagrams with any reasonable amount of sites.
It means that, in fact, two "VD generators" are available for CP users.
Let's take a closer look at the JavaScript:
// Helper Functions
// HF#1 Like in PARI/GP: return random number 0..max-1
function randgp(max) {return Math.floor(Math.random()*max)}
// HF#2 Random hex color returned as "#RRGGBB"
function randhclr() {
return "#"+
("00"+randgp(256).toString(16)).slice(-2)+
("00"+randgp(256).toString(16)).slice(-2)+
("00"+randgp(256).toString(16)).slice(-2)
}
// HF#3 Return the value of a metric: Euclidean, Manhattan or Minkovski
function Metric(x,y,mt) {
if(mt==1) {return Math.sqrt(x*x + y*y)}
if(mt==2) {return Math.abs(x) + Math.abs(y)}
if(mt==3) {return(Math.pow(Math.pow(Math.abs(x),3) + Math.pow(Math.abs(y),3),0.33333))}
}
// Generating and plotting Voronoi diagram.
function pVoronoiD() {
var cvs=document.getElementById("cvsId");
var ctx=cvs.getContext("2d");
var w=cvs.width, h=cvs.height;
var x=y=d=dm=j=0, w1=w-2, h1=h-2;
var n=document.getElementById("sites").value;
var mt=document.getElementById("mt").value;
// Building 3 arrays for requested n random sites (located on canvas)
// and linked to them random colors.
var X=new Array(n), Y=new Array(n), C=new Array(n);
ctx.fillStyle="white"; ctx.fillRect(0,0,w,h);
for(var i=0; i<n; i++) {
X[i]=randgp(w1); Y[i]=randgp(h1); C[i]=randhclr();
}
// Plotting sites and applying selected mt metric and linked colors.
for(y=0; y<h1; y++) {
for(x=0; x<w1; x++) {
dm=Metric(h1,w1,mt); j=-1;
for(var i=0; i<n; i++) {
d=Metric(X[i]-x,Y[i]-y,mt)
if(d<dm) {dm=d; j=i;}
}//fend i
ctx.fillStyle=C[j]; ctx.fillRect(x,y,1,1);
}//fend x
}//fend y
// Plotting site points (in black).
ctx.fillStyle="black";
for(var i=0; i<n; i++) {
ctx.fillRect(X[i],Y[i],3,3);
}
}
As you can see, 3 helper functions are very small and self-explanatory. Note: the first two
are from VOE.js described in [6] and used "as is".
Prime generating and plotting function pVoronoiD() has 3 simple steps:
In fact, all prime fragments of the code are the same both here and in the prototype script, e.g., 3 arrays,
triple "for" loop, plotting "site points".
See 4 plotted in JavaScript Voronoi diagrams for yourself in the figure below. Metrics applied are as
following: Euclidean, Manhattan, Minkovski and again Euclidean.
As usually, right-clicking any VD image you can save it as png-file, for example. Original dimensions are 640 x 640 pixels,
i.e. 4.6 times bigger than presented here..
Functions in R script look like "twins" of similar JavaScript functions, because
they were "translated" from JavaScript. See for yourself:
## VORDgenerator.R
## Voronoi Diagram Generator
# Helper Functions
## HF#1 Random hex color returned as "#RRGGBB".
randHclr <- function() {
m=255;r=g=b=0;
r <- sample(0:m, 1, replace=TRUE);
g <- sample(0:m, 1, replace=TRUE);
b <- sample(0:m, 1, replace=TRUE);
return(rgb(r,g,b,maxColorValue=m));
}
## HF#2 Return the value of a metric: Euclidean, Manhattan or Minkovski
Metric <- function(x, y, mt) {
if(mt==1) {return(sqrt(x*x + y*y))}
if(mt==2) {return(abs(x) + abs(y))}
if(mt==3) {return((abs(x)^3 + abs(y)^3)^0.33333)}
}
## Generating and plotting Voronoi diagram.
## ns - number of sites, fn - file name, ttl - plot title.
## mt - type of metric: 1 - Euclidean, 2 - Manhattan, 3 - Minkovski.
pVoronoiD <- function(ns, fn="", ttl="",mt=1) {
cat(" *** START VD:", date(), "\n");
if(mt<1||mt>3) {mt=1}; mts=""; if(mt>1) {mts=paste0(", mt - ",mt)};
m=640; i=j=k=m1=m-2; x=y=d=dm=0;
if(fn=="") {pf=paste0("VDR", mt, ns, ".png")} else {pf=paste0(fn, ".png")};
if(ttl=="") {ttl=paste0("Voronoi diagram, sites - ", ns, mts)};
cat(" *** Plot file -", pf, "title:", ttl, "\n");
plot(NA, xlim=c(0,m), ylim=c(0,m), xlab="", ylab="", main=ttl);
# Building 3 arrays for requested n random sites (located on canvas)
# and linked to them random colors.
X=numeric(ns); Y=numeric(ns); C=numeric(ns);
for(i in 1:ns) {
X[i]=sample(0:m1, 1, replace=TRUE);
Y[i]=sample(0:m1, 1, replace=TRUE);
C[i]=randHclr();
}
# Plotting sites and applying selected mt metric and linked colors.
for(i in 0:m1) {
for(j in 0:m1) {
dm=Metric(m1,m1,mt); k=-1;
for(n in 1:ns) {
d=Metric(X[n]-j,Y[n]-i, mt);
if(d<dm) {dm=d; k=n;}
}
clr=C[k]; segments(j, i, j, i, col=clr);
}
}
# Plotting site points (in black).
points(X, Y, pch = 19, col = "black", bg = "white")
dev.copy(png, filename=pf, width=m, height=m);
dev.off(); graphics.off();
cat(" *** END VD:",date(),"\n");
}
## Executing:
#pVoronoiD(10) ## Euclidean metric
#pVoronoiD(10,"","",2) ## Manhattan metric
#pVoronoiD(10,"","",3) ## Minkovski metric
#pVoronoiD(150) ## Euclidean metric
As you can see, all functions in R having the same functionality, the same steps as in JavaScript, because they were adopted
to R native operators and functions. That's it!
See 4 plotted in R Voronoi diagrams for yourself in the figure below.
Two Voronoi diagram generators were presented: HTML/JavaScript page and R script, and this is a whole source code for this project.
It should be stressed, that cited in the beginning VD definition from [1] is only the one of many possible, especially if VD is built
to show something like temperature, or water, or even something more abstract.
Also metric could be not related to any distance, but acting as a "love meter" or "mood meter", for example.
Anyway, in many researches randomly selected sites and colors should be not random, but predefined.
Even sequence of "points" could be changed to fit the plotting algorithm. To determine how you should re-arrange sites for proper plotting watch
animation of a plotting process.
If your computer is not a super fast one, you can watch "animation" in "R Graphics" sub-window of the "RGui" window.
It means that scripts should be modified.
Fortunately, these scripts are small and clear, so, it won't be difficult to modify them.
Note: "real world" or theoretical applications are out of scope of this article.
Presented algorithm is reliable for sure, just because it works already in 7 languages [5].
another question would be reasonable: "Does it fit your application task?"
To find the answer: implement it in your language and test it! There is no other way.
Finally, using "VD Generators" as a template, any person can start any kind of research suitable for
demonstrating results as a colorful Voronoi diagram.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Voevudko A. E., Ph.D. wrote:First of all, there is no "speed of the algorithms" in the article, but only "Speed of generating/plotting".
So, if I understand it correctly, this is a rhetorical, philosophical question from you. Same as "To be or not to be?!".
Voevudko A. E., Ph.D. wrote:If "monster" size code is written in user's fave language, and it fits nicely user's application task, -
then it would be very useful and time saving to use such code. And again, applications and speed test are out of my interests.
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/1189287/Generating-Random-Voronoi-Diagrams | CC-MAIN-2019-04 | refinedweb | 1,997 | 66.33 |
I thought several times about this question but I'd always forget to ask. So how can I write a C/C++ routine which checks what key I pressed so the result is returned immediately after the key has been pressed ? How to accomplish that in Windows ? How to accomplish that in Linux ? What headers I should include if I use Dev-C++ and Code::Blocks ?
I try this:
#include <iostream>
#include <conio.h>
int main(void)
{
using namespace std ;
cout << "Press a key" << endl ;
char key ;
getch() >> key ;
cout << "\nYou have pressed: " << key << endl ;
return 0 ;
}
You can use the
getch() function from
<conio.h>, for example:
int c; c = getch();
There is also a
GetAsyncKeyState function: you might find useful. | https://codedump.io/share/sxvSFln4RmQq/1/how-to-write-a-c-routine-which-checks-which-key-i39ve-pressed-on-a-keyboard-so-i-can-handle-each-key-press-separately | CC-MAIN-2018-09 | refinedweb | 122 | 82.24 |
Learn more about Scribd Membership
Discover everything Scribd has to offer, including books and audiobooks from major publishers.
November 5, 2006CopyrightBCU SDK DocumentationCopyright (C) 2005-2006 Martin Kögler <mkoegler@auto.tuwien.ac.at> You can redistribute and/or modify this document under the terms of the GNU Gen-eral Public License as published by the Free Software Foundation; either version 2 ofthe License, or (at your option) any later version. This document is distributed in the hope that it will be useful, but WITHOUT ANYWARRANTY; without even the implied warranty of MERCHANTABILITY or FIT-NESS FOR A PARTICULAR PURPOSE. See the GNU General Public License formore details. You should have received a copy of the GNU General Public License along with thisdocument; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite330, Boston, MA 02111-1307 USA Where the GNU General Public License mentions ”source code”, these LaTeX files oranother editable file format, if it became the preferred format of modification, shall bereferred to. Any not ”source code” form derived from this ”source code” are regardedas ”binary form”. Modified versions, if the modification is beyond correcting errors or reformatting, mustbe marked as such.
AcknowledgmentsThis text is based on my diploma thesis. Wolfgang Kastner and Georg Neugschwandtner, Automations Systems Group, Tech-nical University Vienna, have helped me a lot. Georg Neugschwandtner is contributing to this document by proofreading and revisingchanges.
34AbstractThe European Installation Bus (EIB) is a field bus system for home and building au-tomation. Bus Coupling Units (BCUs) provide a standardized platform for embeddedbus devices. They can be programmed and configured via the bus. BCUs are basedon the Freescale (Motorola) M68HC05 microcontroller family and provide a few tensof bytes of RAM and less than 1 KB of EEPROM. A common integration tool (calledETS) is used for the planning and installation of EIB systems. Several problems exist for non commercial development projects. Although a freeSDK for the BCU 1 is available, there is no free C compiler. Additionally, only certifiedprograms can be processed by ETS. ETS as well as standard libraries for PC based busaccess are only available for Windows. During the course of the present project, a set of free tools for developing programsfor BCU 1 and BCU 2 (twisted pair version) as well as loading them into the respectiveBCU were created. A RAD (Rapid Application Development) like approach is used forprogramming. Properties and event handlers of the used objects are described using aspecial specification language. Necessary code elements are written in C (inline assembleris also supported). An interface to an integration tool is also available. A multi-user and network-capable Linux daemon to access the EIB was developed,which provides access to the transport layer as well as complex device managementfunctions. Different interfaces for bus access are supported (PEI 16, FT 1.2, EIBnet/IPRouting + Tunneling and TPUART). The tool chain is based on the GNU tool chain. The hardware limitations of the targetsystem were a key point of the porting activities. It is described how GCC was portedto an accumulator architecture with only two 8 bit registers – but with 16 bit addressspace – and only one call stack. Small code size is a primary requirement. Therefore, integers of 3, 5, 6 and 7 bytes aresupported for situations, where a two byte integer is too small, but 4 or 8 byte integertypes would be unnecessarily large. As the architecture uses variable length instructionformats, a mechanism which selects the smallest variant was implemented into the linker. A mechanism is shown which makes GCC distribute variables over non contiguoussegments automatically. This feature is required by the BCU 2 architecture. Addition-ally, transparent access to the EEPROM was added. Its concept is related to ISO/IECTR 18037 named address spaces.
56KurzfassungDer European Installation Bus (EIB) ist ein Feldbus für die Heim- und Gebäudeauto-mation. Die Standardplattform für Embedded-Teilnehmer bilden Bus Coupling Units(BCUs). Diese können über den Bus mit Anwendungen und Konfiguration versehenwerden. BCUs basieren auf der Freescale (Motorola) M68HC05 Mikrocontroller-Familie.Sie bieten einige dutzend Bytes RAM und weniger als 1 KB EEPROM an. Für diePlanung und Installation von EIB-Systemen steht eine einheitliche Integrationssoftware(ETS) zur Verfügung. Für nicht-kommerzielle Entwicklungen stellen sich mehrere Probleme. Zwar existiertfür die BCU 1 ein freies SDK, es steht aber kein freier C-Compiler zur Verfügung. Au-ßerdem können standardmäßig verfügbare ETS-Versionen nur zertifizierte Anwendungenverarbeiten. Darüber hinaus sind ETS und Standardbibliotheken für den PC-basiertenBuszugriff nur für Windows verfügbar. Im Rahmen der vorliegenden Arbeit wurde ein Set von frei verfügbaren Tools geschaf-fen, das sowohl die Entwicklung von Programmen für BCU 1 und BCU 2 (Twisted-Pair-Version) als auch deren Übertragung auf die entsprechende BCU ermöglicht. Fürdie Programmierung wird eine RAD (Rapid Application Development)-ähnliche Vorge-hensweise unterstützt. Eigenschaften und Eventhandler der benutzten Objekte werdenmittels einer eigenen Spezifikationssprache festgelegt. Nötige Codeelemente werden in C(mit Unterstützung für Inline Assembler) ergänzt. Eine Schnittstelle für ein Integrati-onswerkzeug wird ebenfalls bereitgestellt. Für den Zugriff auf EIB wurde ein Multi-User- und netzwerk-fähiger Linux-Daemonentwickelt, der einerseits direkten Zugriff auf die Transportschicht, andererseits auchkomplexe Gerätemanagementfunktionen anbietet. Für den Buszugriff werden verschie-dene Schnittstellen unterstützt (PEI 16, FT 1.2, EIBnet/IP Routing + Tunneling undTPUART). Das Entwicklungswerkzeug basiert auf der GNU Toolchain. Bei der Portierung standendie Hardwarebeschränkungen der Zielplattform im Mittelpunkt. Es wird gezeigt, wieGCC auf eine Akkumulator-Architektur mit nur zwei 8-Bit Registern – aber 16 BitAddressraum – und reinem Call-Stack portiert werden kann. Geringe Codegröße ist eine zentrale Anforderung. Daher werden Integervariablen mitGrößen von 3, 5, und 7 Bytes für Situationen unterstützt, in denen der Wertebereicheines 2-Byte-Integer nicht ausreicht, 4- oder 8-Byte-Typen aber zu groß sind. Da dieArchitektur verschieden lange Instruktionsformate bereitstellt, wurde auch in den Linkerein Mechanismus implementiert, der das kleinstmögliche Format wählt. Es wird ein Mechanismus gezeigt, der es erlaubt, mit GCC automatisch Variablenauf nicht zusammenhängende Segmente zu verteilen. Diese Möglichkeit ist aufgrundder speziellen Anforderungen der BCU 2 notwendig. Weiters wird transparenter Zugriffauf das EEPROM bereitgestellt. Der dabei gewählte Mechanismus orientiert sich amKonzept der “named address spaces” aus ISO/IEC TR 18037.
78Contents
1. Introduction 19 1.1. The European Installation Bus . . . . . . . . . . . . . . . . . . . . . . . 19 1.2. The GNU project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 1.3. Goal of the present project . . . . . . . . . . . . . . . . . . . . . . . . . . 20 1.4. Features and limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 1.4.1. Licence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 1.5. Place of the BCU SDK in the development and deployment work flow . . 22 1.5.1. Development work flow . . . . . . . . . . . . . . . . . . . . . . . . 22 1.5.2. Deployment work flow . . . . . . . . . . . . . . . . . . . . . . . . 25 1.6. Course of the project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 1.7. Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 1.8. Structure of the document . . . . . . . . . . . . . . . . . . . . . . . . . . 26
I. M68HC05 272. M68HC05 architecture 29 2.1. Register . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 2.2. Addressing modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 2.3. Instruction set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3. GNU utilities 35 3.1. Overview of the GNU utilities . . . . . . . . . . . . . . . . . . . . . . . . 35 3.2. Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 3.3. Opcode library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 3.4. Bfd library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 3.4.1. Relaxation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 3.5. Binutils . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 3.6. GNU assembler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 3.6.1. Assembler syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 3.7. GNU linker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 3.8. Sim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 3.9. GNU debugger . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 3.10. Newlib . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 3.11. Libgloss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
9Contents
4. GCC 47 4.1. Structure of GCC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 4.2. RTL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 4.3. Machine description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 4.3.1. Normal named instruction . . . . . . . . . . . . . . . . . . . . . . 51 4.3.2. Normal anonymous instruction . . . . . . . . . . . . . . . . . . . 52 4.3.3. Definition of an expander . . . . . . . . . . . . . . . . . . . . . . 52 4.3.4. Definition of constants . . . . . . . . . . . . . . . . . . . . . . . . 53 4.3.5. Definition of attributes . . . . . . . . . . . . . . . . . . . . . . . . 54 4.3.6. Definition of a combination of instruction and splitter . . . . . . . 54 4.3.7. Peephole optimization . . . . . . . . . . . . . . . . . . . . . . . . 55 4.4. Libgcc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 4.5. Target description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 4.6. Overview of the M68HC05 port . . . . . . . . . . . . . . . . . . . . . . . 56 4.7. Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 4.7.1. Type layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 4.7.2. Register . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 4.7.3. Register classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 4.7.4. Pointer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 4.7.5. Calling convention . . . . . . . . . . . . . . . . . . . . . . . . . . 60 4.7.6. Stack frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60 4.7.7. Frame pointer elimination . . . . . . . . . . . . . . . . . . . . . . 60 4.7.8. Sections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60 4.7.9. Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 4.7.10. Operands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 4.7.11. RTL split helper functions . . . . . . . . . . . . . . . . . . . . . . 63 4.7.12. RTL patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64 4.7.13. Predicates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 4.7.14. Cost functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 4.7.15. The eeprom attribute . . . . . . . . . . . . . . . . . . . . . . . . . 66 4.7.16. The loram attribute . . . . . . . . . . . . . . . . . . . . . . . . . 67
II. BCU/EIB 695. BCU operating system 71 5.1. Modes of communication . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 5.2. BCU 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 5.2.1. Accessing the PEI . . . . . . . . . . . . . . . . . . . . . . . . . . 73 5.2.2. Timer Subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 5.2.3. BCU 1 API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74 5.3. BCU2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79 5.3.1. BCU 2 API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
10 Contents
6. BCU SDK 83 6.1. Common files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83 6.2. XML related programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83 6.3. Build system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84 6.4. Configuration file parser . . . . . . . . . . . . . . . . . . . . . . . . . . . 88 6.5. Bcugen1 and bcugen2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 6.6. Overview of the generated code . . . . . . . . . . . . . . . . . . . . . . . 90 6.7. Memory layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
11Contents
10.Usage/Examples 149 10.1. Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149 10.1.1. Installation in a home directory . . . . . . . . . . . . . . . . . . . 149 10.1.2. Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149 10.1.3. Getting the source . . . . . . . . . . . . . . . . . . . . . . . . . . 150 10.1.4. Installing GCC . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150 10.1.5. Installing pthsem . . . . . . . . . . . . . . . . . . . . . . . . . . . 150 10.1.6. Installing the BCU SDK . . . . . . . . . . . . . . . . . . . . . . . 151 10.1.7. Granting EIB access to normal users . . . . . . . . . . . . . . . . 151 10.1.8. Development version . . . . . . . . . . . . . . . . . . . . . . . . . 152 10.1.9. Building install packages . . . . . . . . . . . . . . . . . . . . . . . 152 10.2. Using eibd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153 10.2.1. Command line interface . . . . . . . . . . . . . . . . . . . . . . . 153 10.2.2. USB backend . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154 10.2.3. EIBnet/IP server . . . . . . . . . . . . . . . . . . . . . . . . . . . 154 10.2.4. Example programs . . . . . . . . . . . . . . . . . . . . . . . . . . 155 10.2.5. Usage examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157 10.2.6. eibd utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
12 Contents
B. Tables 173 B.1. Available DP Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173 B.2. Available property IDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
13Contents
14List of Figures 1.1. BCU SDK work flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 1.2. BCU SDK data flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
15List of Figures
16List of Tables 4.1. Type sizes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 4.2. Register classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 4.3. Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
17List of Tables
181. Introduction1.1. The European Installation BusThe European Installation Bus EIB is a home and building automation bus system. Itis optimized for low-speed control applications like lighting and blinds control. EIB devices can be configured remotely via the network. The EIB protocol follows acollapsed OSI architecture with layers 1, 2, 3, 4 and 7 implemented. Different transmis-sion media are available. EIB was absorbed in the KNX specification ([KNX]), which is maintained by KonnexAssociation. References to the KNX Specification ([KNX]) in this document will be ofthe form [KNX] Document number -section number. For planning an EIB installation and configuring the individual devices, a special MSWindows based software, called ETS, is used. This software is maintained by EIBA (EIBAssociation). It will handle every device which has passed compliance certification. ForPC based EIB nodes, a Windows library for EIB access is available, which is also usedby ETS. BCUs (Bus Coupling Units) are standardized, generic platforms for embedded EIBdevices. They include the entire physical layer network interface, power supply and amicrocontroller with an implementation of the EIB protocol stack stored in the ROM. Asprocessor core, the Freescale (Motorola) M68HC05 architecture is used.1 The applicationprogram can be downloaded into the EEPROM via the bus. Currently, two BCU families exist: the older BCU 1 and the new BCU 2 family. Withinboth, there are different revisions which can be distinguished by the mask version. Forthis project, only the mask versions 1.2 and 2.0 have been used. BCUs are available for EIB twisted-pair (TP) and power-line (PL) media. WithinKNX, these are referred to as TP1 and PL110. In this project, only the twisted pairmedium (KNX TP1) is supported. This medium consists of two dedicated low voltagewires and supports bus powered devices. Every BCU includes an asynchronous serial interface which provides access to the EIBprotocol stack. The BCU 1 supports a protocol with RTS/CTS handshake (“PEI 16”),the BCU 2 additionally a FT1.2 based protocol (for details, see [KNX] 3/6/2-6.3 and3/6/2-6.4). An alternate way to access an EIB TP network is the Siemens TPUART IC. This ICimplements OSI layer 1 and parts of layer 2 only, instead of the entire stack as BCUsdo. 1 One rare variant uses the M68HC11. It is however only available without housing and thus referred to as BIM (Bus Interface Module).
191. Introduction
Finally, EIBnet/IP allows to access EIB via IP based networks (see [KNX] 3/8). Itprovides tunneling of EIB frames in both a point-to-point mode (referred to as Tunnel-ing) and a point-to-multipoint mode (referred to as Routing).
• It includes its own specification language to describe the configuration of the BCU environment. Its concept is RAD-like, requiring the programmer to specify prop- erties and event handlers only.
20 1.4. Features and limitations
• To access the bus, FT1.2, the BCU 1 kernel driver, the KNX USB protocol (ac- cording to [KNX] AN037, only EMI1 and EMI2), the TPUART kernel and user mode driver and EIBnet/IP Routing + Tunneling are supported. Additionally (more or less working) experimental access to a BCU 1 without a kernel driver is supported. Either way, all management tasks are supported as well. Further details (including a description of these interfaces) can be found in Chapter 7. • It can act as a limited EIBnet/IP server. • It provides an API to provide EIB access in other programs. Several utility pro- grams, which also illustrate the use of this API, are included. • It includes a standard bus monitor, which optionally can decode EIB frames. Ad- ditionally, a special monitor mode (called vBusmonitor ) even allows some traffic to be traced without switching to bus monitor mode. • The programs are compiled after all configuration settings are known to reduce image size. The following limitations are present: • No data exchange with the ETS is possible. • No graphical interface has been written. Input files can be edited with any text editor. The BCU SDK programs can be invoked from the command line or from a makefile. • The compiler output will in most cases be larger than well optimized, hand written assembler code. • It is not compatible with the original, commercial BCU SDK. • Only C code (with inline assembler) is supported. • If the bus is accessed via a BCU 1 or BCU 2, this BCU is inaccessible to the BCU SDK (using the local mode is not supported).
1.4.1. LicenceAs the used tool chain is released under open source licences, the new parts of the SDKshould also be freely available as open source. As there are many definitions of free and open source software, the Debian definition offreedom, which is described in the Debian Free Software Guidelines (DFSG) ([DFSG]),was used. According to these guidelines, a BSD style licence ([BSD]) or the GPL ([GPL])are reasonable choices. Finally, the whole project was released under the GPL. The eibd client library (seeChapter 7) and the libraries for the BCU contain a linkage exception (like libgcc) so thatthey may be used for non-open source software. Modified versions of a XML Schemadefinition and XML DTDs must use a different namespace and version number.
211. Introduction
2 This can be either a binary image or preprocessed code. In the BCU SDK, it contains a slightly modified collection of all program sources.
22 project engineer
Integration Tool planning and installation direct bus access application configuration information description for development
BCU config build.ai image build.img eibd for development
2324 meta data meta data processing 1. Introduction
Integration Tool
program text
skeleton generator application configuration information configuration description generator description for development
build.ai build.img 1.6. Course of the project
251. Introduction
threading library. Because the existing synchronization primitives did not provide thefeatures needed, semaphore support was added. A patched version is distributed underthe name pthsem ([PTHSEM]). Finally, the image building utility was rewritten as wellto support most features of the BCU 2.
• Another useful project would be the creation of a graphical front end for the BCU SDK.
• There are still some features missing, which could be added to the BCU SDK (e.g. user PEI handler or user application callbacks).
• Additionally, the BCU kernel drivers need to be improved, especially the timing behavior of the BCU 1 driver.
• The GCC offers lots of optimization possibilities. Also, the automatic generation of bit operations is still missing.
• The back ends of eibd for BCU 1 and BCU 2 do not yet support all features the BCU provides (e.g. additional group addresses).
• The TPUART kernel driver and back end could be extended to deliver more tele- grams in the vBusmonitor mode.3
• Based on the current GDB release, a BCU simulator could be created which sup- ports the debugging of BCU applications.
M68HC05 This part describes the M68HC05 architecture and covers key issues con- cerning the porting process of the GNU utilities.
EIB/BCU This part gives an overview of the internals of the BCU SDK.
Using the BCU SDK This part contains information concerning the use of the BCU SDK, including installation, operation and file formats.
3 This extension has already been included in the TPUART user mode driver.
26 Part I.
M68HC05
272. M68HC05 architectureThe Freescale (formerly Motorola) M68HC05 is a family of 8 bit, low cost microcon-trollers. They are based on an accumulator architecture with their IO registers mappedat the start of the memory. The members of the family differ in the amount of RAM,ROM and EEPROM as well as the different IO interfaces available. The models MC68HC05B6 and MC68HC705BE12 are used in BCU 1 andBCU 2, respectively. While the MC68HC05B6 is a generally available model, theMC68HC705BE12 contains KNX specific on-chip peripherals and is only used in KNX.As the peripherals are managed by the BCU operating system, only the processor coreis described. The processor core is a von Neumann architecture with a linear 16 bit address space.The different memory types are mapped at different addresses. For read accesses, thereis no difference for all memory types. The opcodes have a length of one byte with null to two bytes of address information.There are no alignment constraints for instructions as well as for data.
2.1. RegisterA The 8 bit accumulator, which is used in nearly every arithmetic operation. It is used as a source operand as well as the destination.
X The 8 bit index register. It can be used as temporary storage, as operand for the multiplication, as well as the only way to access data at memory locations which are not fixed.
PC The 16 bit instruction pointer. Internally, some models fix the three high order bits to 0.
SP The stack pointer. Its value cannot be retrieved or set by any instruction. Only a reset of its value to the startup value is available. The high order bits are fixed, so that it can only store values between 0xC0 and 0xFF. An interrupt allocates 5 bytes, a normal subroutine call uses 2 bytes on the stack.
292. M68HC05 architecture
The flags can be only checked by conditional jump operations. Nearly all opera- tions change the condition code register.
DIR, 8 bit address All logic and arithmetic operations, which support memory ad- dresses, support this addressing mode. Here, after the opcode, a memory address within the first 256 bytes follows.
EXT, 16 bit address A limited set of operations also supports 16 bit addresses. Here, the opcode is followed by the 16 bit address in big endian format.
IX, indexed without offset The content of the X register is used as memory address within the first 256 bytes. Here only the opcode is present. In the Motorola assembler, such an addressing mode is written as ,X.
IX1, indexed with 8 bit offset All logic and arithmetic operations which support mem- ory addresses support this addressing mode. After the opcode, an 8 bit offset follows. As address, the content of the X register plus the offset is used. In the Motorola assembler, such an addressing mode is written as offset, X.
IX2, indexed with 16 bit offset A limited set of operations also supports a 16 bit offset as well as this addressing mode. As address, the content of the X register plus the offset is used. In memory, after the opcode, the 16 bit offset is stored in big endian format. In the Motorola assembler, such an addressing mode is written as offset, X.
IMM, immediate The 8 bit parameter of the operation is stored directly after the opcode. In the Motorola assembler syntax, an immediate value is prefixed by a #.
REL, PC relative This addressing mode is used for conditional jumps. Here, a signed offset relative to the end of the jump instruction is stored as an 8 bit value after the opcode.
8 bit address + PC relative Some jumps need a data address as well as a target ad- dress. Here, the data address and then the 8 bit PC relative address are stored after the opcode.
30 2.3. Instruction set
• Instructions which support IMM, DIR, EXT, IX, IX1 and IX2:
• The following operations use one operand as source and destination. They support the modes DIR, IX1 and IX. Additionally, there is one variant which uses the A register, whose name has an A appended, as well as a variant for the X register, with an X appended.
CLR stores 0. INC increments the operand. DEC decrements the operand. NEG performs binary negation. COM calculates the two’s complement. ROL rotates left through carry.
312. M68HC05 architecture
BCLR clear a bit of the operand. In the Motorola assembler syntax, the bit number is written as the first parameter. BSET set a bit of the operand. In the Motorola assembler syntax, the bit number is written as the first parameter. BRCLR branch if a bit is cleared. In the Motorola assembler syntax, the bit number is written as the first parameter. As a third parameter, the PC relative target address follows. BRSET branch if a bit is set. In the Motorola assembler syntax, the bit number is written as the first parameter. As a third parameter, the PC relative target address follows.
• The following conditional branches with the REL addressing mode are supported:
32 2.3. Instruction set
• AND, CLR, DEC, EOR, INC, LDA, LDX, ORA, STA and STX modify N and Z
• ASL, ASR, COM, LSL, LSR, NEG, ROL, ROR, SUB and SBC modify N, Z and C
332. M68HC05 architecture
343. GNU utilitiesFor the BCU SDK an assembler, compiler and linker were needed. As there was no freetool chain available, a new one needed to be created. First ideas of writing a compiler which directly generates binary code from scratchwere abandoned, because writing a complete C parser with type checking would havebeen too much work for the scope of this project. Therefore, the decision was to port an existing C compiler to the M68HC05 architec-ture. In addition to GCC, other free compilers were searched for. Possible candidateswere anyc ([ANYC]) (for the compiler front end) or SDCC ([SDCC]), which supportsdifferent architectures and has some optimizations. Compared to GCC, SDCC is muchsimpler and has less features. Finally, GCC was selected, because the front end and optimizations part has definitelyproven to work, as it is the standard compiler on most free operating systems. In additionthe core parts of GCC are maintained by a large community. As GCC was selected, itwas clear to use the binutils as assembler and linker. Porting GCC to the MC68HC05 architecture caused some problems, which were solvedwith various tricks. At the moment, the GCC port generates usable code, but there ismuch room left for target specific optimizations. If SDCC had been the first choice, thestructure would have been much simpler, but some parts, like the automatic removal ofunused static variables, would have had to be implemented from scratch. Large parts of the porting activity consists of finding the right code in other archi-tectures and copying it into the new one. In many situations, the code needs smalladaptations, but only a very small part is totally new code.
bfd The bfd library is used to create, read and write executables and object files in various formats.
353. GNU utilities
gas The GNU assembler, which can support a wide range of architectures.
binutils Various tools to work with executables and object files, like strip, ar and ranlib. The distribution package binutils also includes gas and ld.
sim Normally distributed as a part of gdb. It contains a simulator for some architectures.
gdb The GNU debugger. It also supports remote debugging as well as debugging of programs running in the simulator.
GCC The GNU C compiler (or GNU compiler collection). It is presented in detail in Chapter 4.
libstdc++ The GNU C++ runtime library. As it is too big, it is not used here.
libgloss Part of newlib. It should contain the platform specific interfaces to an operating system for newlib.
For this project bfd, opcode, gas, ld, binutils and GCC were ported to the M68HC05architecture. Since finding all errors in GCC only by reviewing the output is impossible,a simulator for the M68HC05 architecture based on the M68HC11 version of sim wascreated. Because the regression tests for GCC need dejagnu and a C runtime library, dejagnu,libgloss and newlib were also ported. As finding bugs in the GCC output without adebugger turned out to be very difficult, a working gdb based on the simulator wascreated. Functions which were too difficult to implement and are not really needed wereleft out (see section 3.9). The rest of this chapter covers the important points of the port of the GNU utilities(except GCC).
3.2. ConfigurationGNU programs as well as other software use autoconf to adapt the software for a specifictarget. If a software has to be compiled, first configure must be run, which determineswhat kind of operating system is installed, which header files, libraries and compilersare present, to which location the program should be installed, and other settings. configure describes machines by a triplet, e.g. i686-pc-linux-gnu. Its first part is theprocessor, then the machine type follows and finally comes the operating system. Up to three triplets are used:
36 3.3. Opcode library
target The machine, for which libraries should be built or for which the program should generate output.
Under normal conditions, the correct values are guessed automatically. In the case ofa cross tool chain, as the m68hc05 port, the target has to be specified. As the name of the processor m68hc05 was chosen, so that running configure withthe parameter –target=m68hc05 creates the correct Makefile for the m68hc05 port. Internally, the triplet m68hc05-unknown-bcu is used. As the operating system, bcu isused because the whole port was designed with the features and limits of the BCU inmind.
M68HC05_INS("adc",M68HC05_IMM,0xa9)
Every instruction is represented by the definition M68HC05 INS. The application pro-gram, which uses this list, can convert it to the needed form by defining M68HC05 INSand including the list after that. The first parameter contains the opcode name as astring, the second the supported addressing mode and the third the opcode number. The only change to the original instruction set is that for bit operations (BSET,BCLR, BRSET, BRCLR), the bit number is appended after a period. Therefore suchinstructions have the syntax bset.1 test instead of bset 1,test. All names are written inlower case. Then the files for the m68hc05 architecture were added and hooked into the generalcode. Finally, a small function, which reads one byte, searches the opcode in a table,determines its addressing mode, reads the parameters and prints everything in a humanreadable format, was written.
373. GNU utilities
M68HC05 architecture, there was only the definition of the architecture number, butnot for relocations. So they had to be defined from scratch. The implemented set includes:
R M68HC05 NONE A relocation, which does nothing. This is needed to turn a relo- cation off.
R M68HC05 PCREL 8 stores a PC relative address as 8 bit value, as used for relative branches.
R M68HC05 RELAX GROUP this relocation does not change the code. It only marks, that an expanded branch starts.
R M68HC05 SECTION OFFSET8 stores the offset in the current section of the ad- dress as 8 bit value.
R M68HC05 SECTION OFFSET16 stores the offset in the current section of the ad- dress as 16 bit value.
38 3.4. Bfd library
As the 16 bit addresses are stored in big endian format, big endian is used as defaultformat. Bfd uses other names for the relocations for internal purposes. Commonly used re-locations have default names, architecture specific relocations must be added to thislist. For managing this, a clever way is used: A file contains a list of their names andtheir documentation. Out of this list, the C definitions are generated by running makeheaders in the bfd directory. In addition to the list of relocations, bfd keeps a list of the architectures and targetvectors. An architecture describes the name, byte and word size as well as subtypes. Asan example, for i386, a CPU with 64 bit extension is represented as a subtype. The target vector contains the list of concrete functions to handle object files. Formost parts, the default values are used. Only a few are overwritten. The important parts are:
3.4.1. RelaxationRelaxation (in the context of bfd) means that the size of code sections is shrunk at linktime by replacing instructions with other instructions. For various instructions, the M68HC05 architecture offers multiple variants with dif-ferent instruction lengths, but otherwise same function. Often, the final value of asymbol is still unknown at assembler time, so the largest variant must be chosen. If theaddress of the jump target is unknown or too far away, relative jumps are automaticallynegated and an absolute jump to the final location is added. As the generation of smallcode is a key requirement for the tool chain, relaxation is really needed. For the relaxation, three relocations have been defined. The code is based on theM68HC11 port ([GNU11]). It checks every relocation, if it is one of these three specialrelocations. The location of the opcode is known from the relocation type. If the parameter of theinstruction can fit in a smaller variant, the opcode and the type of the relaxation arechanged and the free bytes are deleted. For expanded jumps, the process is reversed, ifpossible. This is repeated, until no more conversions are possible. The code which is processing the opcodes takes advantage of their schema, so onlya few rules are needed. For example, every opcode whose upper 4 bits are 0xC can beconverted to 0xB if the address fits in 1 byte. 1 This is the same function as bfd normally performs, but uses the relocation function for this archi- tecture.
393. GNU utilities
There are two special rules: JMP is turned into BRA and JSR into BSR, if possible.In assembler code, only JMP and JSR should be used, because BRA will be expandedas a conditional jump and BSR will not be expanded. The relaxation will generate theoptimal code. The delete byte routine is simpler than the M68HC11 variant, because everythingwhich uses a symbol or a PC relative address is stored as a relocation. So it is onlynecessary to adjust the values of the relocations. Bfd ensures that the correct value willbe stored. For the M68HC11, the routine needs to check every relative jump and adjustit manually. The heavy use of this technique (typically about 13 of the assembler code size canbe eliminated for GCC code) causes the stabs debugging line symbols to lose theirsynchronization with the code. This makes source level debugging nearly impossible. So a routine which adjusts the stab information was written. As it turned out thatusing DWARF2 was better suited as debugging information, DWARF2 become the de-fault debugging information for GCC. Therefore the stabs adjustment code is not testedvery well. It is kept, because stabs is kept as an alternative for objdump and similarutilities which do not support DWARF2.
3.5. BinutilsThe binutils directory contains various programs to process object files such as ar, stripand strings. As they use bfd, no porting activity was needed. Only within the readelfprogram, which can decode elf headers, the new relocations were added.
1. Each line is parsed and the temporary assembler code is stored in a fragment. If the length of an opcode cannot be determined at this time, the free space for the longest possible code is left and the next opcode is stored in a new fragment. For references to symbols, a fixup is created. This is the same as a relocation, only that it can store a complex assembler expression.
2. The assembler asks the back end for the size of each variable fragment and adjusts the symbols according to it.
40 3.6. GNU assembler
3. The assembler asks the back end if the size of a fragment has changed because of the new symbol values. This is repeated until no further change occurs.
4. With the complete symbol information, the back end converts the variable frag- ments to fixed length and creates more fixups, if necessary.
5. In the output pass, fixups with constant values are embedded in the code. Then the code is stored. For the remaining fixups, relocations are stored in the object file.
• An expression parser wrapper, which can also parse the target specific functions, like lo8.
• A function which divides the assembler instruction into its parts, finds the corre- sponding opcodes, parses the parameters, checks everything and emits the corre- sponding fragment and fixups.
• A size estimation function, which returns the worst case size of a fragment.
• A conversion function, which converts a variable fragment into a fixed length frag- ment.
• Two functions to support the target specific functions, like lo8, for data storage pseudo operations.
offset8 stores the offset of the expression result relatively to its section start.
413. GNU utilities
offset16 stores the offset of the expression result relatively to its section start.
Functions ending with 8 produce an 8 bit result and may only be used in places where8 bit values can be stored; 16 bit functions may only be used at places where a 16 bitvalue can be stored. It is not possible to store a lo8 in a 16 bit word. The syntax of the assembler operations is not the Motorola syntax:
• For bit operations, the bit number is appended after a period to the instruction name.
Conditional jumps are automatically expanded if they do not fit a PC relative reloca-tion or the target is unknown. BSR, BRA and BRN should not be used (use JSR, JMP and NOP instead). In section names ending in !!!!!, this part is replaced by a unique number for theassembler run. This is used to put symbols in different sections to aid the sectionmovement code of the linker. Some examples:
.section test.!!!!!test:lda %X,2lda %Xsta testlda $2lda $0x10brset.0 test1, testadd $offset8(test).byte lo8(test).hword offset16(test)
42 3.8. Sim
For the M68HC05 target, a generic linker script is not possible. The default scriptworks for the simulator. BCU specific scripts were not placed in the ld distributionbecause they depend closely on the BCU runtime environment. The script is basically a default elf linker script. It creates an additional section named.data.lo for storing the pseudo register of GCC at an address lower than 0x100. The only C code added to ld is the one necessary for section movement. This codecan be easily dropped. It is needed for the BCU SDK to support the distribution ofvariables over different data segments. The section movement code is activated by the ––move-sections command line switchof ld. Its operation is controlled by a number of SYSLIB directives. A clean solution would have been to introduce a new syntax for it in the parser.However, this would have the disadvantage that merging the patches from upstreamwould become more complicated. Therefore an unused hook, named SYSLIB, was used. A command has the followingsyntax:
SYSLIB([from-section]:[to-section]:[current-symbol]:[maximum-symbol])
For each SYSLIB directive, the section movement code performs the following (beforethe relaxation):
• The values of the current and maximum size symbols are evaluated.
• If the current size is smaller than the maximum size, the processing of this directive is finished.
• A section of the from-section is selected (at the moment the biggest section is selected).
A better implementation of this code, which examines all directives at the same time,would generate better results. As the typical variable size used in BCU programs willprobably be around 2 bytes and therefore mostly 1–2 byte sections will be created, theresults should not be too bad.
3.8. SimGDB contains a collection of simulators for different architectures, which can be eitherused as stand alone programs or inside gdb. They consist of a common core and target specific supplements. The core provides aframework to load programs, register and access virtual devices as well as some virtualdevices like RAM. Additionally, it is possible to configure the number and parametersof virtual devices with special command line parameters.
433. GNU utilities
m68hc05-sim is based on the m68hc11 port. Because it is intended for regression tests,the simulation is not complete: • All opcodes, except STOP, WAIT, SWI, RTI, BIH, BIL are simulated. The cycles counter is also implemented.
• If an ELF executable is loaded, the simulator starts at the entry point of the ELF file, otherwise at 0xFFFE.
2 reads a byte from stdin and stores it in register A. If an error occurs, the C flag is not set, otherwise it is set. The core new part is the instruction interpreter. It uses the opcode list and generatesthe interpretation function. This function is a huge switch statement. For each opcodea case is generated. For each case, all addresses are generated and the parameters arefetched, according to the addressing mode. Then, the C statements to execute theinstruction are generated according to the instruction name.
44 3.10. Newlib
For each stack frame, a data structure is created which contains the value or addressof all saved registers. With this information, it is possible to evaluate expressions in thecontext of one of the callers of the current function. Most ports extract the necessary information by scanning the prologue of the function.For the m68hc05 this is not done because the prologue generated by GCC is generatedas high level RTL code, which is converted into low level RTL code and passed toan optimizer. This makes the recognition of the prologue code very difficult and thedetermining the actions of the code is even more complicated. Because the prologue isnot analyzed, evaluations in non-top stack frames might give wrong results. There is no easy solution for this problem, since future optimizations might move theprologue code to another location. For example, the save of a callee save register canbe moved just before its first usage. Likewise, the content of the SP register needs nochange, if no called function uses the stack. As the calling convention of GCC is supposed to be specific to a certain compilerversion and therefore maybe incompatible between different versions, calling functionsout of gdb is not supported. A usage example:m68hc05-gdb testprog(gdb) target sim(gdb) load testprog(gdb) break main(gdb) runNote that the bss-section is not cleared and the data-section is not reloaded if run isexecuted a second time in a gdb process.
3.10. NewlibThe portable C runtime library newlib needs no big changes. Only the endianess of them68hc05 architecture is declared in a header file and the compiler options are adjustedfor the generation of small sized code.
3.11. LibglossLibgloss is a library which contains the glue code between newlib and the target operatingsystem. An advantage of this concept is that for a slightly modified target (e.g. anothersyscall entry), only a small part has to be changed. For the m68hc05 port, libgloss allocates the virtual register for GCC, contains thestartup code and wraps the call to the emulated operating system of the simulator (read,write, exit). For other POSIX compatible functions, which are expected by newlib, adummy implementation is included. If a program for the simulator is linked, the command line options -lsim -lc -lsim mustbe added because of the circular dependencies between libgloss and newlib.
453. GNU utilities
464. GCCPorting GCC [GCC, GCCINT] to the M68HC05 architecture turned out to be a com-plicated task. GCC is designed to work with architectures with many registers, stackand unrestricted memory access. The M68HC05 has only two registers, the stack isunusable for user programs and has no 16 bit pointer. Thus, the missing features needto be emulated. While designing the GCC port, the constraints of a BCU were used as the designdriver. It can be used for any member of the M68HC05 microcontroller family, if enoughmemory for the stack and virtual registers is available in appropriate memory regions.
GENERIC For representation, GCC defines the C type tree. It is generated by the front end and passed to the new RTL independent optimizer. The details of this representation are only relevant for the creation of a front end.
RTL (Register Transfer Language) This representation is used to store the program in a more target specific form. For representation, GCC defines the C type rtx. The syntax used to display RTL statements is similar to LISP. RTL uses registers (and memory, if necessary) to store values.
1. The file is parsed by the language front end, the syntax and semantics are checked and a GENERIC representation is created.
474. GCC
7. More optimizations and machine dependent passes are done on the RTL.
4.2. RTLA function is represented as a list of RTL instructions. Examples are:insn A normal instruction
(use X) indicates that X is used. Related calculations must not be removed by opti- mization.
(parallel list) states that each instruction in the list is calculated at the same time.
(call function args) represents a function call. If the return value is used, it is used as VAL in a set pattern.
48 4.2. RTL
Each expression has a mode, which describes the size and type of the expression.Important ones are:
QI A 1 byte integer
HI A 2 byte integer
SI A 4 byte integer
DI A 8 byte integer
(const double:MODE ...) Depending on the mode, this is either a floating point con- stant or a very long integer constant.
(subreg:MODE expr byte) Extracts a part of size MODE of expr starting at byte.
(if then else COND THEN ELSE) returns depending on COND THEN or ELSE. COND can be e.g. ne, ltu and eq, which takes two expressions.
(sign extend:MODE expr) Extends expr to the size of MODE, keeping the sign bit.
494. GCC
(zero extend:MODE expr) Extends expr to the size of MODE by filling up the high order bits with 0. Additionally there are arithmetic and logic operations like plus, minus, mult, ior, and,compare and ashift, which take two expressions. Unary operations like neg or not alsoexist. Examples of RTL statements are1 : • Compare the content of register 42 with 0 and set the condition code register.
• Jump to label 33, if the condition code register fulfills the condition lower or equal.
50 4.3. Machine description
• Add the constant 20 to register 28 and store the result in register 27.
• State that the content of register 9 (which is called RegB and in this case not a pseudo register) is no longer valid.
• Call the function eestore HI. As a side effect the contents of register 12 and 10 are used.
514. GCC
Define an instruction pattern named jump. The first element is the RTL pattern. Thematch operand indicates a variable part. In most cases, it has a mode appended whichmust match the mode of the variable part. Then the number of the operand follows,which is used for referencing. The next element can contain the name of a predicatewhich the operand must match as a string. The last element contains constraints for thereload pass. If an operand must be exactly the same as another operand, match dup isused instead of match operand. If the pattern of the insn matches, the second element ofthe insn is evaluated. If it is true, the pattern matches. This expression can be empty.Then the output follows. This can either be a string or some C statements enclosed in {and }. In the output string, % modifiernumber (e.g. %o0 or %0 ) stands for the textualrepresentation of the operand with the specified number. The modifier is passed to theoutput function for operands. %% turns into %. As a last optional part, the attributes of the instruction can be modified. Because thename does not start with *, an expander is also generated.
In this example, the attribute cc is set to compare. The instruction specifies threealternatives in the match operand part. The @ at the output string states that each ofthe following lines is the output pattern for an alternative.
52 4.3. Machine description
emit_insn(gen_eestoreHI(operands[0],tmp)); DONE; } if (memaddr_operand(operand0, HImode) && memaddr_operand(operand1, HImode)) { rtx tmp=gen_reg_rtx(Pmode); emit_insn(gen_movhi(tmp,operands[1])); operands[1] = tmp; }})
This defines an expander for movhi. An expander generates a function (in that case calledgen movhi ), which takes all operands and returns the RTL. With that approach, eachtarget has the possibility to return non standard RTL expressions for specific situations. As for define insn, the first element is the RTL pattern, then a condition can follow.The match operand has the last element empty, because an expander cannot match aninstruction and therefore is not used in the reload pass. Then some C code (either as string or in enclosed in { }) follows, which can alter theRTL. In the array operands the matched operands are stored. Additionally they areaccessible as operand number. This example (out of the M68HC05 port) was chosen because it shows some ways toalter the result:
• If operand0 and operand1 match the predicate memaddr operand, the content of operand1 is moved to a temporary register and operand 1 is replaced with it. In this case the default pattern will be appended, but with a different operand.
534. GCC
Define constants does what one would expect. It defines that e.g. REG A can be usedinstead of 0.
Defines an attribute named cc, which can be one of the values listed in the secondargument.
This pattern is a combination of a define insn and a split pattern. The first, second,third and last argument are the same as for a define insn. The fourth argument specifies a condition which must be fulfilled before the instructioncan be split into other instructions. A split is only done at certain passes of the compiler. The next argument specifies the output pattern of the split. match dup indicates theplaces where an operand should be inserted. Then a preparation statement follows,which can modify existing operands or calculate new ones.
54 4.4. Libgcc
4.4. LibgccFor functions which GCC needs, but the target does not support directly, functions in theGCC library must be created. The default set consists of, among others, a floating pointemulator, functions to calculate with 64 bit integers and exception handling functions.This library is automatically linked to a program when it is linked by GCC.
• defines for the compiler and flags to pass to the linker and assembler
• calling convention
• handling of trampolines (memory structures for calling nested functions via point- ers)
• instruction costs
554. GCC
56 4.6. Overview of the M68HC05 port
stack is done relative to the stack pointer. After the register allocation/reload pass inthe machine specific reorganization pass, the RTL is converted into the low level RTL. In this representation, each RTL insn represents a real instruction. An addition oftwo 4 byte values is converted into the 12 statements needed plus all necessary reloadsand clobber statements of the X register. Over this representation the optimizer is runagain, which eliminates all unnecessary load and store operations. GCC expects pointers which can cover the whole address space. With the M68HC05architecture, the problem occurs that only an 8 bit index register is available. So thestore, load and call operations with 16 bit pointers have to be emulated. The firstsolution was to use a table. The upper 8 bits of the pointer were used to jump to astatement in this table which loads the lower 8 bits into the X register. The statementthen executes the operation with the hard coded pointer, which equals the upper partof the pointer. An advantage of this approach is that none (if the load operation isduplicated for each possible pointer location) or only up to three bytes of RAM areused. The disadvantage is that a lot of code is needed. The current solution works with self modifying code. In the RAM, four bytes (plusone to save data for the store operation) are reserved. If a pointer operation happens,the opcode of the corresponding instruction with the IX2 addressing mode is stored atthe first byte, then the content of the pointer and finally a RTS. A jump to the RAMregion starts the operation. If multi byte values are accessed with a pointer, the offset relative to the pointer isstored in the X register, so that no extra address calculations are needed. The GCC floating point simulator for single and double precision values is compiledinto libgcc. However, many of its functions need too much memory, which causes themto fail on a BCU. Some functions even need more stack size than the M68HC05 GCCport supports. Multiplications and divisions not supported by the hardware are also emulated. Insome situations GCC will expand an unsupported multiplication to a combination ofsupported instructions. A feature that was completely left out is setjmp/longjmp, as it cannot be implementedin an efficient way. Already saving the virtual registers would need a lot of memory. Themain problem however is the call stack, for which the stack pointer is inaccessible. Animplementation of the setjmp function would need to issue a call (or a sequence of calls)from known locations and then search these values in the call stack area. The addressof the first known address plus 2 would be the call stack pointer value before the call. longjmp would need to save the used stack location (either the current stack pointercould be determined by the previous method or the whole possible stack would have tobe saved), then the stack pointer needs to be reset. After that recursive calls are madeuntil the saved stack pointer value is reached. Finally the content of the stack can berestored. As a consequence, an exception handling system is also impossible to implement,because it requires mechanisms similar to setjmp/longjmp. Another known limitation is that ignored overflows are possible even for signed com-pares. This is because internally a subtraction is made. As the M68HC05 has no overflow
574. GCC
detection mechanism and its simulation would enlarge the code, it was decided to ignoresuch overflows. If GCC optimizes the compare in another way, a correct behavior ispossible. As an experimental option, –mint8 is present. It changes the size of the type int to8 bits, but keeps the size of the type short of 16 bit length. This is a clear violation ofthe C standard and may cause faults in GCC. It is a workaround to stop the automaticpromotion of 8 bit values to 16 bit values. The main problem, where the promotion cannot be prevented, are 8 bit return values.As this promotion is not done for library functions automatically, it would be impossibleto write some of them in C. To solve the problem, the register normally holding the lowbyte of the return value is used instead of the normal return register if an 8 bit value isreturned by a library function. A better solution would be to disable the promotion ofthe return value in start function, which makes GCC incompatible with the C standard. To use the non-standard integer sizes, define a new type with the specific mode:typedef signed int sint7 __attribute__ ((mode (EI)));typedef unsigned int uint3 __attribute__ ((mode (AI)));which defines a 7 byte integer type named sint7 and a 3 byte unsigned integer typenamed uint3.
4.7. Details4.7.1. Type layoutAll types are stored in big endian format. All values are stored packed with no additionalalignment bytes. For the type sizes see Table 4.1.
4.7.2. RegisterThe general virtual general purpose 8 bit registers are called RegB, RegC, RegD, RegE,RegF, RegG, RegH, RegI, RegJ, RegK, RegL, RegM and RegN.
58 4.7. Details
The name of the hardware registers are A and X. They (and therefore their registerclasses) should never be used as input or output registers in any user assembler state-ment. This is because the compiler does not know that these registers are implicitlyused by the high level RTL while the register allocator runs. If a value is needed in oneof these registers, tell the compiler to put the value in a general purpose register, anddo the load/store from/to them in the user assembler code. All hardware registers maybe clobbered by a function call. The data stack pointer is called SP, the frame pointer FP. Besides QI mode, theseregisters can also be accessed in HI mode, although only one byte is reserved in memoryfor them. The HI references are eliminated in the low level RTL.
4.7.4. PointerFor pointers, only addresses in a register in the register class POINTER REGS aresupported. Internally, they are implemented as GCC base registers. The GCC index
594. GCC
4.7.8. SectionsCode is placed in the .text section, read-only data in the .rodata section. Each global or static variable is placed in a section which starts with .bss. (initializedwith 0) or .data. and ends with a unique number. Internally the section name ends with!!!!!, which is replaced with a unique number by the assembler. For common symbols, the ELF COMMON section is not used, because multiple COM-MON sections are not possible. Instead they are allocated as normal variables in a bsssection.
60 4.7. Details
4.7.9. ConstraintsAll supported constraints are listed in Table 4.3. Constraints are used by the reload pass to determine if a variable fits a certain location.If an operand in the RTL is specified, it contains a constraint string, possibly withmultiple alternatives, separated by a comma. Each constraint can include some modifiers. The important ones are:& for early clobber, which means that this operand will be (partially) written before all source operands are read. Therefore it may not overlap with another operand.? means that this alternative is more expensive and should be avoided.= means that a value is written to this operand. If an instruction has multiple operands, they must have the same number of alterna-tives. The reload pass examines the first alternative of all operands, then the second,and so on. Finally, the cheapest one is selected and pseudo registers are either bound toa real register or to a stack location. This process is repeated until a sufficient solutionis found. A GCC port can define its own constraints. Apart from a general constraint, which iseither fulfilled or not, address and memory constraints are possible. A memory constraintwill also match, if the operand can be placed in memory. For the M68HC05 port, some constraints exist as normal constraints as well as memoryconstraints (e.g. A and B ). This is caused by the fact that a general memory operandis not supported in many situations. In these situations, the normal constraint is used. If the stack is also supported, the U constraint is added. The R constraint is alsoadded, because otherwise GCC will fail in some situations. The problem is that GCCkeeps open the decision whether a pseudo register should be put on the stack in somecases. Therefore the stack constraint does not match. As the other constraints do notallow a value to be put on the stack (they are not memory constraints), GCC concludesthat the value must be in a register. If it runs out of registers, the compilation fails. The R constraint causes that, in such a case, the constraint matches if the value canbe put on the stack only while the reload is in progress. At the end of the reload, thevalue is either put in a register, which causes another constraint to match, or is put onthe stack, which causes the U constraint to match. In situations, where pointers are possible (Q constraint), memory constraints are used,as this is a cleaner solution.
4.7.10. OperandsOperands in the assembler templates are specified as % modifiernumber (e.g. %o0 ),where number stands for the number of the operand. This port defines two modifiers:o Print only the offset for a memory location at x+offset or x (where x can be register SP, FP or X).
6162 4. GCC
constraint purpose 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 the same as operand with the this number a, b, c, d, e, f, h, q, t, u, v, w, x, y, z register classes, see Table 4.2 i immediate value m any memory operand r register of GENERAL REGS A fixed memory location (memory constraint) B fixed memory location C loram memory location D lowlevel loram pointer (X or X+constant) F immediate floating point value G immediate double with value 0 M immediate between 0 and 0xff N immediate value 1 Q valid base pointer expression (memory constraint) R accept registers which can be reloaded on the stack in the reload process
The function is implemented in print operand, which supports the following constructs:
• an integer constant
• the upper or lower part of a double floating point value, encoded as a hexadecimal number
• an address
• genXLoad takes an operand and determines if it makes use of the stack. If this is true, a load of the X register with the stack (or frame) pointer is returned, else a NOP.
How an addition of a long value is split is shown in the following. For each byte(processing from the low to the high byte), the following output is produced:
• genXLoad(operand[1],byte)
• LDA of genSUBREG(operand[1],byte)
634. GCC
• genXClobber(operand[1],byte)
• genXLoad(operand[2],byte)
• ADD/ADC of genSUBREG(operand[2],byte)
• genXClobber(operand[2],byte)
• genXLoad(operand[0],byte)
• STA of genSUBREG(operand[0],byte)
• genXClobber(operand[0],byte)
The NOPs are eliminated immediately after the split. The resulting code may containredundant reloads of the X register. They are eliminated by a GCC optimization pass,if optimizations are turned on.
gensplit move defines a splitter, which turns a move operation into a sequence of move operations for each byte.
gensplit binop nocc defines a splitter which performs a binary operation on each byte of the operands. It is intended for operations which do not need to pass information in the carry flag.
gensplit binop cc defines a splitter which performs a binary operation on each byte of the operands. It is intended for operations which must pass information in the carry flag.
gensplit shift splits a shift/rotate operation in byte-wise shift and rotate operations.
64 4.7. Details
asm op outputs the definition of a low level instruction which uses A as a source and the destination operand.
expand neg outputs a splitter which turns a negation into byte-wise operations.
expand eestore outputs an expander which creates a library call to store a value in the EEPROM.
expand move creates an expander which checks if the destination operand is in the EEPROM (eeprom attribute present). If this is true, an EEPROM store operation is created. Otherwise a normal move pattern is generated.
4.7.13. PredicatesThe port specific predicates match:
654. GCC
Figure 4.1.: Comparison of ISO/IEC TR 18037 named address spaces with m68hc05-gccaddress spaces
The semantic is similar to the named address spaces of [CEXT], but uses two GCCattributes instead of one directive for each named address space. Using define statementsfor attributes, it can be used like in [CEXT], with the difference that the user must selectif the variant for the target of a pointer or for a variable is to be used. This attributedoes not care if a variable is actually allocated in an EEPROM section nor does it checkif a pointer actually points to the EEPROM. Implementing this function was not trivial. The first attempt was to make minimalchanges to the GCC core. Only the eeprom attribute was used for all purposes. Mak-
66 4.7. Details
ing the attribute known to GCC was simple, in fact it only has to be added to theattribute list. Then all internal GCC code infers the correct type for many tree basedrepresentations, keeping the eeprom attribute, if it is necessary. A drawback was that this approach was not sufficient for structures (in conjunctionwith pointers). The attribute needed to be propagated to all elements of a type. Imple-menting this propagation was not trivial and caused a lot of problems. Because GCC by default offers no way to implement back end functions on tree level,this must be done at RTL level. If the type points to a fixed location, GCC stores thetype in the memory reference. In that case, a MEM-expression has only to be checkedfor the eeprom attribute. If this is present, the store operation is replaced by a functioncall. All other functions except the move instructions reject eeprom types, so only themove expanders have to be updated. The problem was the pointer arithmetic. In such a case it is possible that GCC splitsa pointer expression in many RTL statements while generating the RTL. Finally, thepointer is stored in a pseudo register, but without any type information. Therefore suchreferences were missed. Thus, the code which stored the expression in some memoryreferences was changed to always store the expression. After a regression test, thischange seems to be compatible with the GCC core. Yet, it caused some assertions inthe debug print functions to fail. This was fixed by printing the root node of the tree inthese cases. After presenting this solution on the GCC mailing list and some discussion, a newsolution was written. Instead of storing the type (or the expression) in the RTL,MEM REF FLAGS were introduced. If a tree expression is associated with a MEMRTL expression, the back end calculates the flags in a target hook. The eeprom operandpredicate only needs to check if a bit is set in the MEM REF FLAGS. Instead of propagating the eeprom attribute, a named address space attribute(MEM AREA) was added to the tree type. For pointers, it stores the address space ofthe pointer target. For variables, it stores the address space where they are located.This attribute is propagated in expressions which can potentially refer to memory.Much of the propagation is done automatically, while a tree expression is created. Onlyfor dereferencing pointers more work is necessary. Additional target hooks for named address space compatibility checking as well asaddress space merging were added. In the back end code, the two C attributes wereintroduced, because GCC does not support to use one attribute for both purposes. Named address spaces are also of interest for other targets, like the AVR microcon-troller family.
674. GCC
Pointers to the low RAM are automatically converted into normal pointers, if neces-sary. A normal pointer may only be converted to a low RAM pointer (with an explicitconversion), if it really points to the appropriate memory region (0x000 – 0x0FF). This attribute does not yet produce optimal code.
68Part II.
BCU/EIB
695. BCU operating systemA BCU is a standardized device which is connected to the EIB bus and can load anapplication program. It contains the physical external interface (PEI ), where applicationmodules can be connected to. As a second variant, BIMs exist, which only contain all electronic parts on a small cir-cuit board (without housing). They offer the same possibilities as a BCU, but necessaryelements like the programming button have to be connected separately. Concerning thesoftware interface, a BIM is identical to a BCU with the same mask version. The maindifference is that since BIMs are integrated into the housing of the final product, thePEI interface is typically not accessible. In the remainder of the text, the term BCUwill also refer to compatible BIMs. Although a BIM based on a M68HC11 is available, most BCUs (and BIMs) use theM68HC05 processor core. They have some RAM and EEPROM integrated. The EIBsystem software is contained in the ROM. In the EEPROM, the application program isloaded. Each EIB device has two elements (apart from the bus connector): A programmingmode button to turn programming mode on and off and a LED, which shows if thedevice currently is in programming mode.
5.2. BCU 1Different mask versions of the BCU 1 exist. This section will cover the version 1.2.Other mask versions share the same principles, but offer more or less features. This
715. BCU operating system
section will in most cases cover the BCU SDK interface rather than the plain assemblerview of the interface. The RAM and memory mapped IO is located between 0x000 and 0x100. For a userprogram, a block of 18 bytes is available. Additionally, the RAM contains the registersRegB – RegN and the system state at 0x060. If the application program is running, thislocation should contain a value of 0x2E. If a problem occurs (e.g. a checksum error), itcan contain a different value. A stopped application program can be started by writing0x2E to this location. For the BCU SDK, there is no need to access any RAM location directly. If port Aor port C are accessed directly, bit set/clear operations must be done using an inlineassembler statement at the location 0x00 and 0x02. If only the standard PEI interfaceis used in a program, no direct port access is necessary. The EEPROM has a size of 256 bytes and starts at 0x100. The first 22 bytes containa header which describes the program and contains all entry points. Parts of the BCUEEPROM can be protected by a checksum. If a checksum error is detected, the userprogram is halted. This is controlled via the header. A header byte of particular interest for the developer is 0x10D, the RunError. Thisvalue can be read during runtime. If an error condition happens, a bit is set to 0.Interpreting this value, some error conditions (e.g. a stack overflow, see [BCU1, BCU2])can be detected. Another important value of the header is the expected PEI type.The user program only runs when the expected PEI type matches the PEI type of thecurrently connected application module. At the moment, 20 PEI types are defined (onlyuseful values are listed in Table 5.1). The header also describes:
72 5.2. BCU 1
735. BCU operating system
74 5.2. BCU 1
• typedef struct { signed short value; bool error; } U_map_Result;
• typedef struct { uchar pointer;
755. BCU operating system
bool error; } S_xxShift_Result;
• typedef struct { uchar octet; bool error; } U_SerialShift_Result;
• typedef struct { bool expired; uchar time; } TM_GetFlg_Result;
76 5.2. BCU 1
• typedef struct { uchar_loptr pointer; bool valid; } AllocBuf_Result;
• typedef struct { uchar_loptr pointer; bool found; } PopBuf_Result;
• typedef struct { unsigned short product; bool overflow; } U_Mul_Result;
775. BCU operating system
multiplies two unsigned 16 bit values. If you do not check for an overflow, use the NE variant. Do not expect that using this function has a specific effect on the resulting code size compared to a normal implementation in C.
• typedef struct { unsigned short quotient; unsigned short remainder; bool error; } U_Div_Result;
78 5.3. BCU2
5.3. BCU2A BCU 2 provides the same features as the BCU 1. Therefore, the content of Section5.2 is also valid for the BCU 2 if not otherwise noted. The main new features are:
• Access Control
• A new message system, which provides queues to send messages to specific layers
• Support for the FT1.2 Protocol and the PEI type 10, which supports a user PEI handler
The BCU 2 has over 850 bytes of EEPROM available for the user application. Addi-tionally a new RAM section is added, where 24 bytes are available for the user applica-tion. The BCU 2 provides 4 protection levels. Properties as well as memory regions haveaccess levels. An access is only permitted if the current access level is lower or same asthe access level of the object. Connections which are not authenticated use access level3. The memory management of the BCU 2 has changed. A downloading tool mustallocate a section of memory before it can be read or written. During allocation, theaccess levels as well as the presence of a checksum are specified. Properties provide a clean interface to access internals of an EIB device. They areused in a BCU 2 for two purposes:
• They are used for the loading of applications as well as to query information about a BCU and change its state.
• An application program can also create its own properties to provide access to its state. Either a variable or an array can be exported by a property. Otherwise a handler function can be used.
795. BCU operating system
In that case, the handler function is called every time when the property is read or written. The handler must do the processing of the incoming EIB message and return the response message. The BCU 2 offers 4 builtin objects. When loading an application, the address table,association table and application program object are unloaded. Then, for each objecta memory region is allocated and filled with data. More allocation of memory regionsmay follow. Then the pointers for this object are set and its state is changed to loaded. As many pointers are now set using properties, they can store 16 bit values. Thereforemany things need no longer be in the region between 0x100 and 0x1FF, while othervalues, like the telegram rate limit, must still be in these memory locations. In a BCU 2, each OSI layer is a task, which has its own message queue. Using theBCU 2 API functions, it is possible to send messages to specific tasks. In a BCU 2, a system timer can also be used to send a message when it expires.Additionally, it can be used as a periodic message timer, which periodically sends amessage to the user application. The PEI Type 10 is added. For this PEI Type, the user can write his own PEI handler. The BCU 2 provides an application call back, where the application can provide itsown mechanisms to handle EIB telegrams. The default handler can be called from theapplication call back, so that only specific cases must be handled by the application.
80 5.3. BCU2
• typedef struct { bool newstate; uchar stateok; } FT12_GetStatus_Result;
815. BCU operating system
826. BCU SDKThe BCU SDK consists of the XML1 DTD2 and Schema files for the data exchange(in the xml directory), which are covered in Chapter 9; helper programs to extract andembed the program ID in XML files; eibd (in the eibd directory, see Chapter 7); BCUheaders and libraries; a generation tool for all necessary glue code and tables; and somebuild scripts. To access a XML file, the BCU SDK uses the libxml2 tree interface. With it, thecontent of a XML file can be loaded into a tree like structure. An XML tree can also bewritten to a file.
836. BCU SDK
84 6.3. Build system
bcugen1
cpp gcc as
c.inc p.o p1.o
c.c BCU Header Libraries
ar gcc
linker script conf ld
embedprogid elf linker script
ld ai.xml
application information
load
test image
856. BCU SDK
extractprogid
ar
config
bcugen2
c.h p1.s c.inc
as
p1.o
c.c BCU Header Libraries
gcc
linker script ld
c.o
elf linker script
ld
load
image
86 6.3. Build system
bcugen3
cpp as
c.inc p1.o
c.c BCU Header Libraries
gcc
linker script ld
c.o
elf linker script
ld
load
image
876. BCU SDK
ATTRIB_IDENT(Name)ATTRIB_FLOAT(Time)
CI_OBJECT(Debounce)
END_OBJECT The declaration of an object starts with OBJECT and ends with END OBJECT.Between these, all attributes are listed:PRIVATE VAR adds a normal variable to the object.ATTRIB STRING defines an attribute which stores a string.ATTRIB IDENT defines an attribute which stores an identifier.ATTRIB INT defines an attribute which stores an integer.ATTRIB BOOL defines an attribute which stores a boolean.ATTRIB FLOAT defines an attribute which stores a float.ATTRIB ARRAY OBJECT defines an attribute, which can store many objects of the specified type. The object must be declared using this specification before it can be used in this directive.ATTRIB ENUM defines an attribute which stores its value as an enumeration. In the configuration file, an identifier is used. To map between these, two mapping functions must be provided.ATTRIB INT MAP defines an attribute which stores an integer. In the configuration file, an identifier may be used besides an integer. This identifier is converted using a mapping function.
88 6.5. Bcugen1 and bcugen2
ATTRIB FLOAT MAP defines an attribute which stores a floating point value. In the configuration file, an identifier may be used besides a floating point value. This identifier is converted using a mapping function.
ATTRIB ENUM MAP defines an attribute which stores an array of name/value pairs.
ATTRIB EXPR defines an attribute which stores an expression suitable for InvisibleIf.
CI OBJECT marks the start of the attributes belonging to the device configuration.These are used in the CI blocks as well as in the configuration description. Using different defines, the list of objects and attributes is included at different loca-tions. Using this mechanism, the parsers, scanner, class definitions, initialization codeand output code are generated out of a single list. For mapping lists, the name/value pairs of one type are stored in one file. Eachpair is written as MAP( Value,Name). They are included with different definitions ofMAP to generate all mapping functions for one type out of one list (sometimes even thedefinitions of an enumeration). All attributes, except the object array, include a line number variable. This is au-tomatically set when a value is read by the parser. The writer only exports variableswhich have the line number set. Using this mechanism, no reserved values for the at-tribute variable itself are needed. Identifiers and strings are stored in the class String.For arrays, the template class Array is used. As the BCU SDK uses two formats (BCU information and the data exchange formatbetween build.ai and build.img) which are very similar but have small differences, allattributes which are not appropriate in one of the two parsers are hidden using prepro-cessor commands.
896. BCU SDK
• In the output of bcugen1 nothing is deactivated, so this is the worst case for the size estimation.
• bcugen1 outputs only empty address and association tables of a selectable size.
• bcugen1 puts parameters in a different file and adds the location of the parameters to the image.
The image created from the output of bcugen1 is more like a BCU program as it isused by the ETS. As a proof of concept program, imageedit was written (included in theBCU SDK ), which takes the output image of build.ai and a configuration description.It then stores the selected values into the image. This program only supports group objects, parameters (except FloatParameter ) andthe access control keys. As the images of bcugen2 provide better optimization possibili-ties, its development was not continued. bcugen3 compared to bcugen2 and bcugen1 :
2. The device configuration is read from the CI blocks instead of the XML file.
90 6.7. Memory layout
916. BCU SDK
0x000 IO Space
ROM
0x050 Low RAM RegB − RegN reserved 0x0CE .ram .bss .data stack 0xE0 call stack
0x100 Header EEPROM Addresstable Association table Group Objects Init Code Code Timer .loconst read only copy of .data .eeprom .parameter
0x1FF checksum
92 6.7. Memory layout
0x000 IO Space
ROM
0x050 Low RAM RegB − RegN reserved 0x0CE .ram .bss .data stack 0xE0 call stack
0x100 Header EEPROM Addresstable .eeprom Timer .loconst <=0X1FF Group Objects Init Code Properties Code read only copy of .data copy of .data.hi
Association table
0x4DF .parameter
0x900 0x972 .bss.hi High RAM .data.hi stack 0x98A
0x9D0
936. BCU SDK
947. EIB bus access7.1. OverviewTo access the EIB bus, a daemon (called eibd ) for Linux systems was developed. Itprovides a simple EIB protocol stack as well as some management functions. A bigadvantage of using a daemon is, that the applications can use a high level API. Another isthat multiple clients, even on different computers, can connect to the bus simultaneously.A disadvantage of this concept is that some future extensions will need a modificationof the daemon. The daemon supports different ways to access the EIB bus: • PEI10 protocol (FT1.2 protocol subset) • PEI16 protocol (using the BCU 1 kernel driver) • TPUART (using the TPUART kernel driver) • EIBnet/IP Routing • EIBnet/IP Tunneling • TPUART user mode driver • PEI16 user mode driver (not really usable) • KNX USB protocol (only EMI1 and EMI2) The daemon consists of a front end which accepts connections from applications overTCP/IP or Unix domain sockets, a protocol and management core and some back ends,which are the interface to the medium access devices. The daemon is intended for the TP1 medium and uses the TP1 frame format as itsinternal representation. It does not support TP1 polling. Support for the extendedframe format is present, but it requires support in the back end part for the selectedbus access mechanism. Of the EMI based back ends, only CEMI frames support dataareas which are large enough (therefore, only EIBnet/IP is possible). Both TPUARTback ends support sending extended data frames. The kernel level driver based back endsupports decoding such data frames, if they are delivered by the kernel driver (whichdoes not support them at the moment). In the user mode TPUART back end, thedecoding of extended data frames is unimplemented. Because using the bus monitor mode prevents the sending of frames, a best effort busmonitor, called vBusmonitor, was introduced. This feature can be activated at any time.
957. EIB bus access
If eibd runs in bus monitor mode, a vBusmonitor client will get the normal bus monitorservices. Otherwise all telegrams which eibd receives will be delivered. Theoretically,there need not be a difference in the services between the two modes. For the currentback ends, at least all ACKs are lost. Most back ends also only deliver frames to or fromthe bus access device. eibd has no security mechanisms. If the operating system allows a connection, eibdwill provide its services. Because dynamic buffer management is used, buffer overflowsare not very likely to happen. The best security level can be achieved, if the daemononly listens on the Unix domain socket and the access to this socket is restricted to acertain group. Additionally, it should not be run as root. The user account which runseibd then needs access privileges for the appropriate device node. For EIBnet/IP, noprivileges are needed.
7.2. Architecture
The whole of eibd is based on the GNU pth user mode threading library ([PTH]). GNU pth only supports non preemptive threading. Also, only one thread is runningat a time. This makes locking superfluous in many situations and therefore the programcode is simpler. However, it may be a performance problem for some application tasks.For eibd, this is not an issue, because nearly all the time, the tasks are waiting for anevent. Additionally, it provides a powerful event management, which supports waiting untilone of a set of different events has occurred. For system calls which can block, pthprovides wrappers which will switch to another task if the system call blocks. As anadditional parameter, the pth versions of the system calls accept a list of events whichwill abort the system call if they occur before the system call is completed. For inter-thread communication FIFO queues and semaphores are used. Thesemaphore support for pth was written as part of eibd and is available as a patch([PTHSEM]). Using pthreads was also considered, but the need for more locking to avoid raceconditions as well as the missing support of a waiting construct for multiple eventsdiscouraged its use. For storing arrays, the template class Array was written; for the use of strings, theclass String. For the packing and unpacking of frames of the different layers, the classesAPDU (Layer 7), TPDU (Layer 4) and LPDU (Layer 2) were introduced. Each subclassof them represents a specific type and implements associated functions. The back end to be used can be selected over an URL. The first part selects the backend, then a colon and back end specific data follow.
96 Client
Client Connection
Management Management
Layer 3
EIBnet/IP USB
KNX IP Router TPUART BCU 2 BCU 1 USB Interface EIB network 7.2. Architecture
977. EIB bus access
7.3.1. EMI2The EMI2 interface is implemented in the EMI2Layer2Interface class. It supports thebus monitor mode. In vBusmonitor mode, all outgoing frames are delivered. Incomingframes are filtered as described below. For normal communication, the same restrictionsapply. At the moment, this back end accepts listen requests for any group address. Toactually receive telegrams on a group address, it must be in the BCU address table, orthe address table length needs to be set to 0 before eibd is started (using the bcuaddrtabcommand). Telegrams with an individual address as destination are delivered only whenthis address matches the individual address of the BCU 2. Changing the address table length involves changing a byte in the BCU EEPROM.The problem with this solution is, that an EEPROM has a limited number of writecycles and that in the case of a crash of eibd the original value would not be restored.Therefore, this change is not made automatically within eibd.
FT1.2
7.3.2. EMI1The EMI1 interface is implemented in the class EMI1Layer2Interface. Regarding busmonitor mode and frame filtering, it has the same features and limitations as the EMI2back end described above. The PEI16 protocol used to transmit EMI1 messages from and to the BCU 1 is highlytiming sensitive. RTS/CTS handshaking is done for every character and a message mustbe transmitted within 130 ms. This complicates implementation on the PC side.
98 7.3. Back ends
997. EIB bus access
version. Therefore all limitations of the EMI1 and EMI2 backends hold depending onthe EMI version used. The URL of this backend is usb:[bus[:device[:config[:interface]]]]. The values of bus,device, config and interface can be determined using findknxusb. Currently only EMI1 and EMI2 are supported. cEMI is not implemented, as no devicesupporting this feature is available for testing.
100 7.3. Back ends
In the class TPUARTSerialLayer2Driver, a complete user mode driver for the TPUARTis implemented. It has the same addressing capabilities as the kernel driver, but thevBusmonitor mode delivers all frames. Although a kernel module is naturally in a betterposition to handle the timing requirements of the TPUART communication protocolproperly, the user-mode solution is attractive due to its higher flexibility and reducedinstallation hassle. In tests, the user mode driver performed satisfactorily on a ratherheavily loaded Pentium-4/1.8 GHz. When a frame is received from the TPUART, an acknowledgement request has to bereturned within a short time after the destination address has been received. When it isnot returned in time, the remote station will repeat its transmission on the EIB networkup to three times. On a reasonably recent workstation PC, it is possible to acknowledgeat least one of these transmit attempts by using low latency mode. Due to a bug in thelow latency mode implementation, running this back end on a Linux 2.6 kernel with aversion lower than 2.6.11 will crash the computer (see section 7.3.2). Because no history of recently received frames is kept, all repeated frames are dis-carded. With this strategy, it is possible to lose a frame if the first send attempt iscorrupted. For recognizing frame starts, a dual strategy is implemented. It assumes that a framestarts at the first byte received. If the byte sequence starting at this byte is not a correctframe, or if an expected byte is not received after an expected timeout, the head of thereceive buffer is discarded and a new receive attempt is made with the new head. There is no problem regarding the transmission of frames. The URL for this backend is tpuarts:/dev/ttySx, where /dev/ttySx has to be replaced with the correct serialinterface.
1017. EIB bus access
7.4. CoreThe core of the driver is organized in layers. The definition of the layers is inspired bytheir definitions in the KNX specification ([KNX]), but adapted to fulfill all requirements.
7.4.1. Layer 3The class Layer3 is the main dispatcher and the interface to the back end. It decideswhen to enter bus monitor mode. Each higher layer task registers at the Layer 3 and states what kind of packets andaddresses it is interested in. The group address 0/0/0 and the individual address 0.0.0have a special meaning. Listening on group address 0/0/0 means that all group com-munication packets from the back end should be delivered. For getting the broadcastpackets, which use this address on the bus, a special call back is implemented. Listening for frames with the source address 0.0.0 means, that all packets to a specificindividual address of the back end should be delivered. Listening for frames with thedestination address 0.0.0 means, that all packets to the default individual address of theback end should be delivered. The mapping of the address 0.0.0 to the default address isdone transparently in both directions. In fact, higher layers cannot even ever determinethe real EIB address of the back end.
7.4.2. Layer 4This layer provides a communication endpoint to applications for one specific Layer 4service. The services are:
• The class T Group implements an endpoint for group communication with a spe- cific group address. For sending, an APDU is passed (as a character array). Of a received group telegram, the APDU as well as the source will be returned.
102 7.5. Layer 7
• The class T Connection implements the client of a connection between two de- vices. It implements the necessary state engine and opens the connection when instantiated. The connection will be closed when the class gets destroyed. If the connection gets closed by the remote device, an empty APDU will be transmitted to the higher layers. The higher layers must take care, that the connection is not idle too long. This will cause the remote device to close the connection. It is possible to use multiple connections if they have different remote targets. This fact is not checked by eibd. If multiple connections are made to one device, they will interfere and in most cases all these connections will get closed. Such a race condition can also happen if a connection is closed and immediately reopened, because the T Disconnect may not have been sent on the EIB bus yet when the new connection is opened. In normal operation, APDUs are exchanged with the higher layers.
7.5. Layer 7The call of management relevant Layer 7 functions is implemented in the classesLayer7 Connection and Layer7 Broadcast. Layer7 Broadcast can send a A IndividualAddress Write or can collect all correspond-ing responses after sending a A IndividualAddress Read. Layer7 Connection provides functions to send a connection oriented request andparse the result. Functions are, for example, A Memory Read, A Memory Write,A ADC Read . . . . Additionally there are some functions prefixed with X , which domore complicated tasks:
X Memory Write Block writes a block of memory (no size limit) and verifies it.
1037. EIB bus access
104 7.7. EIBD front end
7.7.1. ProtocolThe protocol is quite simple. The client connects to the eibd daemon over TCP/IP orUnix domain sockets. Then it sends its requests and receives all responses. To free theconnection, the connection is simply closed using the means provided by the operatingsystem. The relevant functions in the library are:
EIBSocketLocal connects to eibd over the Unix socket passed in the parameter.
Normally a connection is switched to a certain mode and after that only certain typesof requests can be processed. Multi byte values are passed in the big-endian format. Every packet starts with a two byte length field of the data, counting from after thesecond byte of the packet. Then two bytes determine the purpose of the packet (thiswill be called the type in the following). The possible values of this field are defined ineibtypes.h. If a request is unsuccessful, a packet with the type EIB INVALID REQUEST or amore specific response is returned by eibd. Therefore, if the result type is not in therange of the expected types, an error should be returned.
1057. EIB bus access
content is needed, an application can use the normal services and decode the packetsitself. The relevant functions are:
Layer 4 connectionsA layer 4 connection is opened by a packet of 5 bytes with one of the following types:
• EIB OPEN T GROUP opens a T Data Group connection, the group address is transmitted in bytes 2–3. Byte 4 is 0, if the connection is only used to send data, else it is 0xff.
• EIB OPEN T TPDU opens a raw Layer 4 connection. The local address is trans- mitted in bytes 2–3 (0 means the default address).
• EIB OPEN GROUPCON opens a group socket, which can be used to send and receive group telegrams for any group address. Receiving telegrams over this mech- anism does not generate Layer 2 ACKs6 . Byte 4 is 0, if the connection is only used to send data, else it is 0xff.
The data are exchanged in packets of the type EIB APDU PACKET. For some types,two bytes with the EIB address are inserted before the data. A raw connection transmitsan EIB address in both directions, group and broadcast connections transmit addressesonly from the EIB daemon. A group socket transmits packets of the type EIB GROUP PACKET. For sendinggroup telegrams, the destination address is put in bytes 2–3 followed by the APDU. For 6 see Section 7.4.2, class GroupSocket for details
106 7.7. EIBD front end
received telegrams, bytes 2–3 contain the source address, followed by the destinationaddress in bytes 4–5 and the APDU. Note that a T Connection is automatically closed if there is no traffic for some seconds.The close event is indicated by receiving an empty APDU. The relevant functions are:
EIBGetAPDU Src receives an APDU with source address over a T Data Broadcast or T Data Group connection.
EIBGetGroup Src receives a TPDU and a source and destination address over a group socket.
Connectionless functionsThere are some management functions which can only be executed on connections whereno mode switch has been executed. After that, the connection remains in the same state.The functions are:
• Switch programming mode: a 5 byte packet of the type EIB PROG MODE is sent to the daemon. Bytes 2–3 contain the address of the EIB device, byte 5 the function code:
1077. EIB bus access
If the request is successful, a packet of the same type is returned. If the state of the programming mode flag is requested, it is returned in the third byte.
• To read the mask version, a packet of the type EIB MASK VERSION, with the address in bytes 2–3, is sent. If successful, a packet of the same type with the mask version in bytes 2–3 is returned. This function is implemented in EIB M GetMaskVersion.
• EIB LOAD IMAGE loads an image into a BCU. The request consists of an image in the BCU SDK image format stored in a packet of the type EIB LOAD IMAGE. As a result a packet of the same type will be returned. In the bytes 2–3, the load result is returned. The list of possible values is defined in the type BCU LOAD RESULT. The function is implemented in EIB LoadImage.
Management connectionA management connection is opened by sending a packet of the typeEIB MC CONNECTION with the address of the EIB device in bytes 2–3 tothe daemon. If the request is successful, a packet of the same type is returned. Thisfunction is implemented in EIB MC Connect. After opening, various management functions can be called. If a management con-nection is idle for some seconds, it is automatically closed. This has the result that allfuture calls will fail. The different functions are:
108 7.7. EIBD front end
EIB MC PROG MODE This basically does the same as EIB PROG MODE, only the request packet is different. Here the address of the device is not transmitted. Instead, the function code is transmitted at byte 3 (instead of byte 5). The return packet has the same structure, but the type is EIB MC PROG MODE. The functions are implemented in EIB MC Progmode Toggle, EIB MC Progmode On, EIB MC Progmode Off and EIB MC Progmode Status.EIB MC MASK VERSION An empty packet of this type is sent to the daemon and a packet of the same type with the mask version in bytes 2–3 is returned. It is implemented in EIB MC GetMaskVersion.EIB MC PEI TYPE to read the PEI type, a packet with this type is sent. If the request is successful, a packet of the same type with the PEI type in bytes 2–3 is returned. It is implemented in EIB MC GetPEIType.EIB MC ADC READ A packet with the ADC channel in byte 2 and the count in byte 3 is sent. A successful result has the same type and the ADC value in bytes 2–3. It is implemented in EIB MC ReadADC.EIB MC PROP READ Byte 2 of the request contains the object index, byte 3 the property ID, bytes 4–5 the start offset and byte 6 the element count. A success- ful result has the same type and contains the data read. It is implemented in EIB MC PropertyRead.EIB MC READ A packet of this type with the address in bytes 2–3 and the length in bytes 4–5 is sent. The result is contained in a packet of the same type. It is implemented in EIB MC Read.EIB MC PROP WRITE Byte 2 of the request contains the object index, byte 3 the property ID, bytes 4–5 the start offset and byte 6 the element count. Then the data to be written follows. A successful write is acknowledged by a packet of the same type with the content of the response packet. It is implemented in EIB MC PropertyWrite.EIB MC WRITE Bytes 2–3 contain the address, bytes 4–5 the length. The data to be written follows. A successful write is acknowledged by a packet of the same type. It is implemented in EIB MC Write. Other error codes are: EIB ERROR VERIFY After sending the write request, a read request for the same memory address is sent for verification purposes. If this error code is returned, the response differs from the data in the write request. EIB PROCESSING ERROR An unspecified processing error occurred.EIB MC PROP DESC A packet of this type with the object index in byte 2 and the property ID in byte 3 is sent to the server. The property type is returned in byte 2, the element count in bytes 3–4 and the access level in byte 5. It is implemented in EIB MC PropertyDesc.
1097. EIB bus access
EIB MC AUTHORIZE A packet of this type with the key in bytes 2–5 is sent to the server. The level returned by the device is delivered in byte 3 of a packet of the same type. It is implemented in EIB MC Authorize.
EIB MC KEY WRITE Bytes 2–5 contain the key and byte 6 the level of the key write request. A successful request is acknowledged by a request of the same type. It is implemented in EIB MC SetKey.
EIB MC PROP SCAN A request of this type is sent to eibd to get a list of all prop- erties. If the request is successful, a packet of the same type is returned. For each property, 6 bytes are returned. Byte 0 contains the object index, byte 1 the prop- erty ID, byte 2 the property type. Bytes 3–4 contain the object type if the property ID is 1 and the property type is 4. Otherwise, they contain the element count. Byte 5 contains the access level. It is implemented in EIB MC PropertyScan.
• Close := EIBClose closes an EIBD connection 7 Note that this is only the connection between client and eibd (think of it as a session). It should not be confused with T Connection, which encapsulates a reliable connection between devices on the EIB network. 8 | means or and has the highest precedence. * means any count of occurences. Function calls are set emphasized.
110 7.7. EIBD front end
• StatelessFuncs := StatelessFunc *
1117. EIB bus access
112 7.7. EIBD front end
1137. EIB bus access
114 Part III.
1158. Input format8.1. BCU configurationThe BCU configuration is defined in a text file:
• Different tokens are separated by white space (space, newline and tabulator). In some situations, white space is not necessary (e.g. between an identifier and a semicolon).
• Strings are used as in C. They can even consist of different parts, which are auto- matically concatenated. C escape sequences are supported.
• Numbers can be floating point numbers (in C format) or integer numbers, either decimal or hexadecimal. A hexadecimal number is prefixed with 0x.
• Each entry consists of a keyword and its value. The entry is ended with a semicolon.
The root element is a block called Device. All other blocks are nested within thisblock.
1178. Input format
Some blocks may contain a block named CI. This block is optional. During a devel-opment build (via build.dev ), it can be used to supply the values which are normallyspecified in the configuration description (see Section 9.4). They must meet the lim-itations listed in Section 9.5. All rules for a normal build apply; e.g. a unreferencedProperty is disabled even if Disabled is not set to true in the associated CI block. During a normal build, the contents of the CI block are ignored. However, it is stillchecked for syntax errors. Attributes containing syntax errors must thus be deleted orcorrected before the build can succeed.
BCU Mandatory, selects the used BCU. Supported values are bcu12, bcu20 and bcu21.
Model Optional identifier to select a different feature set. Available choices are:
SyncRate Optional integer, contains the raw value as specified in [BCU1] and [BCU2].
Title Mandatory string, contains the short description of the application program.
AddInfo Optional string, contains additional information text about the application.
118 8.1. BCU configuration
Category Optional string, contains the hierarchical function class, e.g. Application Modules / Push Button Sensor / Two Fold.
Test Addr Count Optional integer, number of group addresses used in the test compile (for size estimation in build.ai ).
Test Assoc Count Optional integer, number of associations used in the test compile (for size estimation in build.ai ).
include An array of strings which contains the name of all used C files.
CPOL Optional boolean, set CPOL (clock phase for serial synchronous interface).
CPHA Optional boolean, set CPHA (clock phase for serial synchronous interface).
Key Optional, contains a list of Name-Value pairs (e.g. 1 = 0xFFFF ) which contain the access keys for specific access levels.
InstallKey Optional, device key of the device before downloading (e.g. 0xFFFF ).
1198. Input format
PhysicalAddress Mandatory, contains the individual address of the device to which the program is to be downloaded. 2 The format is $x.y.z. Additionally a 16 bit hex value must be also understood (e.g. 0xFFFF ).
• FunctionalBlock
• IntParameter
• FloatParameter
• ListParameter
• StringParameter
• GroupObject
• Object
• Debounce
• Timer
• PollingMaster
• PollingSlave
ProfileID Mandatory, contains the unsigned integer number describing the object type of the functional block as specified in [KNX] 3/7/3-2.2.
AddInfo Optional, additional textual information about the functional block as a string.
A functional block without any interfaces or only containing interfaces which do notreference anything is left out of the application information. 2 The assignment of individual addresses has to be done separately from the download process.
120 8.1. BCU configuration
DPType Mandatory, contains the DP Type ([KNX] 3/7/3-5) of the interface as floating point value. The DP Types, which can be accessed by name, are listed in Section B.1.
GroupTitle Optional, specifies the title of the group to which the interface belongs as string. This attribute is intended to provide the name of a property page, on which an integration tool should display the interface if the functional block is unknown to the integration tool.
• exprb := expri is true, if the integer value is not 0. • exprb := ’(’ exprb ’)’ • exprb := ’ !’ exprb NOT • exprb := exprb ’&&’ exprb AND • exprb := exprb ’||’ exprb OR • exprb := ident ’IN’ ’(’ ident [ ’,’ ident ] * ’)’ The first ident must be the name of a ListParameter. All following idents must be the name of an element of the ListParameter. The expression is true, if the name of the current selected value of the ListParameter is in the ident list. • exprb := exprs ’==’ exprs | exprs ’! =’ exprs | exprs ’<’ exprs | exprs ’<=’ exprs | exprs ’>=’ exprs | exprs ’>’ exprs returns true, if the a bytewise compare of the two strings fulfils the conditions.
1218. Input format
• exprb := expri ’==’ expri | expri ’! =’ expri | expri ’<’ expri | expri ’<=’ expri | expri ’>=’ expri | expri ’>’ expri returns true, if the a byte wise compare of the two integer values fulfils the conditions. • exprb := exprf ’==’ exprf | exprf ’! =’ exprf | exprf ’<’ exprf | exprf ’<=’ exprf | exprf ’>=’ exprf | exprf ’>’ exprf returns true, if the a bytewise compare of the two floating point values fulfils the conditions. • exprs := ’(’ exprs ’)’ • exprs := C-style string • exprs := ident returns the current value of the StringParameter with the name ident. • expri := any valid integer constant or constant integer expression • expri := ident returns the current value of the IntParameter with the name ident. • expri := expri ’+’ expri | expri ’-’ expri | expri ’%’ expri | expri ’*’ expri | expri ’/’ expri | ’-’ expri • exprf := ’(’ exprf ’)’ • exprf := any valid constant floating point expression • exprf := ident returns the current value of the FloatParameter with the name ident. • exprf := exprf ’+’ exprf | exprf ’-’ exprf | exprf ’*’ exprf | exprf ’/’ exprf | ’-’ exprf • exprf := expri • exprf := ’(’ exprf ’)’ The precedence and meaning of operators are the same as in C.Reference An array of identifiers, which are the names of a GroupObject, PollingMaster, PollingSlave, Property or a kind of parameter. If one of these objects is not referenced by an interface, it is left out of the application information. Additionally, most functions of these objects are removed, so that they consume less space. Their interface is not changed.
122 8.1. BCU configuration
Default Mandatory, contains the integer number which the integration tool should pre- select.
Unit Optional, contains a string which represents the unit in which the value is mea- sured.
Precision Optional, contains an integer value which describes the size of the smallest interval whose bounds the final BCU application will consider as separate values.
Increment Optional, contains an integer value which is the default increment value which the integration tool should offer if up/down buttons are shown.
Value Mandatory, contains the parameter value to use for the compile process.
Default Mandatory, contains the floating point number which the integration tool should preselect.
Precision Optional, contains a floating point value which describes the size of the small- est interval whose bounds the final BCU application will consider as separate val- ues.
1238. Input format
Increment Optional, contains a floating point value which is the default increment value which the integration should offer if up/down buttons are shown.
Value Mandatory, contains the name of the selected entry to use for the compile process.
MaxLength Mandatory integer, contains the maximum string length the application supports.
RegExp Optional string, regular expression which the string must match. The format of the regular expression is the same as used in XML Schema ([XML2], Appendix F).
124 8.1. BCU configuration
Default Mandatory, contains the value as a string which the integration tool should preselect.
AddInfo Optional, additional textual information about the group object as a string.
Type Mandatory, contains the type. The supported types are listed in Table 8.1.
Sending Optional boolean value, which is true if the application can transmit values via this group object. Generates the function Name transmit(), which initiates the transmission of the group object value.
1258. Input format
Receiving Optional boolean value, which is true if the application can receive values via this group object.
Reading Optional boolean value, which is true, if the application can send A GroupValue Read telegrams via this group object. Generates the function Name readrequest(), which sends a read request. If the answer is received, the on update handler is called, if present. If both Reading and Sending are enabled, the function Name clear() is generated. It is not possible to call Name transmit() before the answer of an outstanding Name readrequest() is received. Therefore Name clear() is available to cancel an open read request, so that a normal transmit is possible again.
StateBased Mandatory boolean value, which is true if the group object contains state information (which means that a A GroupValue Read operation will return a meaningful value).
on update Optional, contains the name of the functions which will be called when the value of the group object changes. It automatically activates Receiving.
eeprom Optional boolean value, which is true if the value of the group object should be allocated in the EEPROM. Transparent EEPROM access is automatically ac- tivated. The default is false. The CI block has the following attributes:Priority Optional, contains one of the values low, normal, urgent or system specifying the priority of the messages transmitted via this group object.
SendAddress Optional, contains the group address to send A GroupValue Write tele- grams to (format $x/y/z or $x/y or 0xFFFF ). May only be present if Sending is true. If not present, sending is disabled.
126 8.1. BCU configuration
eeprom Operation boolean, which is true if the variable should be located in the EEP- ROM. Transparent EEPROM access is activated.
handler The name of a function which is used to handle a custom property type.
Disable Optional boolean value; if it is true, the property should be deactivated. The default value is false.
1278. Input format
128 8.1. BCU configuration.
typedef struct{ bool write; uint1 ptr;} PropertyRequest;
typedef struct{ bool error; uint1 ptr; uint1 length;} PropertyResult;
The handler function takes a variable of the type PropertyRequest as its only parameterand returns a variable of the type PropertyResult. If no handler is set, but MaxArrayLength is greater than one, a structure is allocatedunder the name given in Name. It contains the element count, which is the numberof used elements. The other element is named elements, which is an array of MaxAr-rayLength elements of the type selected by Type. In any other case, a normal variable with the type selected by Type is allocated underthe name given in Name.
1298. Input format
130 8.1. BCU configuration
UserTimer allocates a user timer. Resolution must contain a value for the user timers from Table 8.3. on expire can contain the name of a function which is called periodically while the timer is expired. Under the value of Name, an 8 bit variable is available which contains the current value. Name get() returns true if the timer is expired. Name set(uint1 value) loads the timer with the new value (between 0 and 127).
EnableUserTimer provides the same interface as a UserTimer. The difference is, that the on expire function is only called once, if the time expires (and needs more memory). Name del() cancels a running timer. If the timer is running, can be check by Name enable, which is true as long as the timer is running.
SystemTimer only allocates a system timer. This timer can only be accessed by the BCU API.
MessageTimer only works on a BCU 2. Resolution must contain a value for the BCU 2 message timer out of Table 8.3. Name del() stops the timer. Name set(uint1 value) loads the timer with the new value.
MessageCyclicTimer only works on a BCU 2. Resolution must contain a value for the BCU 2 cyclic message timer out of Table 8.3. Name del() stops the timer. Name set(uint1 param) starts the timer (param is put into the periodic message).
The count of user timers is only limited by the memory available. For the user, twosystem timers are available. If the debouncer is used, only one of them is available.
1318. Input format
Name The name which is used as the base name for all functions which are related to the polling master interface.
AddInfo Optional, additional textual information about the polling master interface as a string.
PollingAddress Mandatory, contains the polling address which the master uses, e.g. 0x1232.
PollingCount Mandatory, contains the number of the highest polling slot used for this address (numbering starts with 1).3
Name The name which is used as the base name for all functions which are related to the polling slave interface.
AddInfo Optional, additional textual information about the polling slave interface as a string.
PollingAddress Optional, contains the polling address which the slave listens to, e.g. 0x1232.
PollingSlot Optional, Contains the number of the polling slot to send the polling result (numbering starts with 0).4
132 8.2. C files
8.2. C filesThe C files can contain any C code which is allowed by GCC. To avoid problems andcreate the smallest code, follow these guidelines:
• The preprocessor is run over all C sources in an extra pass to combine them into one file. At this time, the BCU header files are not included. Therefore you can only refer to your own defines. Some functions of the BCU SDK may be implemented as defines. Since they are handled specially and may be implemented differently in future versions, they only should be used in the specified way.
• Make all your functions and global variables static. This allows GCC to optimize the code better or even remove variables and functions. For event handlers, do not specify static, as this is already done in the header files if necessary. Adding the static attribute to an event handler may cause the program to break.
• The GCC floating point emulator is linked if necessary. However, try to avoid the use of floating point values, as it needs lots of memory. The floating point types have the standard names float and double.
• The BCU SDK includes newlib as runtime library by default. Operating system independent functions like strcpy should work, but need a lot of space. They are used by including the header file, as in any C program. Functions which perform I/O, need an operating system call, or do dynamic mem- ory management (except alloca), are not supported. Newlib functions are docu- mented in [NEW2, NEW3].
• Integer types from 1 to 8 bytes are available. A bigger type needs more code to handle and uses more RAM. The unsigned types are called uint1 to uint8, the signed types sint1 to sint8.
• Normal variables are placed in a section where enough free space is available. If the variable must be in a certain memory region (e.g. because the BCU operating system expects it there), the following modifiers can be added to the variable declaration:
RAM SECTION place it in low RAM at an address lower than 0x100. Such a variable is initialized with zero, even if an initializer is added. LOW CONST SECTION place a constant in the EEPROM between 0x100 and 0x1ff. You may not change the value of such a variable, as this will cause a checksum mismatch which will stop the application program. If possible, add the const qualifier.
1338. Input format
EEPROM SECTION place it in the EEPROM between 0x100 and 0x1ff. The value of such a variable may be changed.
An example:
int x EEPROM_SECTION;
• To guarantee that variables are allocated in a certain order, place them in a struc- ture.
• The present version of the GCC port only accesses byte blocks, as it does not uses bit operations. If you need to access only a single bit, this must be done with inline assembler statements.
• In some cases, the use of local variables results in larger code, because stack han- dling code is needed. Otherwise the memory locations of global variables cannot be reused for different functions.
• The BCU SDK supports transparent access to the EEPROM, even using pointers. EEPROM write access is only supported for variables which are allocated with the EEPROM SECTION modifier. If the address of any other memory location is loaded into a transparent EEPROM access pointer and the pointer is dereferenced, anything can happen (even the EEPROM content may be changed in a way that the program will be destroyed). To use this feature, add EEPROM ATTRIB to the variable declaration (e.g. int x EEPROM SECTION EEPROM ATTRIB;). If you use a pointer to an EEP- ROM variable, add EEPROM PTR ATTRIB to the pointer definition (int EEP- ROM PTR ATTRIB* ptr;). If the pointer variable itself is located in the EEP- ROM, EEPROM ATTRIB is necessary. Be sure to use the right attribute. If an attribute cannot be added, GCC issues a warning. It cannot give any feedback for a wrong attribute of a pointer variable. Mixing the use of pointers to the EEPROM and normal memory can produce errors and warnings.
134 8.3. API functions
• If you are doing multiplications or divisions, cast the operands to unsigned, as the signed functions are larger. On the BCU 1, the use of the API functions for multiplication and division can be beneficial in some situations, as it avoids the additional code for the GCC multiplication and division functions.
• void reset_watchdog(); Resets the watchdog counter in longer calculations.
1358. Input format
1369. File format for data exchange with integration toolsConfiguring an EIB system can be a complex task. Different, more or less intelligent,tools for solving this problem have been implemented. To aid the interworking betweenthe BCU SDK and these tools, a clear interface is needed. The most important existing solution is the ETS format, which is not publicly docu-mented (and uses ZIP files encrypted with a password unknown to the public). Projectslike [BASYS] have found workarounds to read this file and have partially decoded thefile format. As the password of the ETS format is unknown, writing such files is impossible.Using a similar file format (e.g. the same as the ETS format, but not encrypted) wasnot considered useful, because defining a new format gives the possibility to include newfeatures. The primary goals of the new format are:
• Fully specified and freely available, so that it is easy to integrate into applications
The format is based on XML and provides means to create custom formats based onit. The format provides support for nearly all constructs which can be implemented ona BCU, even in finer detail than supported by the current BCU system software. An important point is that the format hides how an image is finalized and by whichmeans this is done. The ETS uses image patching, which supports only small modifica-tions (e.g. change of a byte). Because modern compilers create better code when theyhave more information, compiling the image at download time with all settings from theintegration tool leads to smaller images in some cases.1 The format groups everything into functional blocks, which are a kind of high leveldescription of the application. The organizational process of how functional blocks andtheir interfaces are defined is beyond the scope of this document. Some are defined in 1 This is especially important because the GCC port produces larger code than a good assembler programmer.
1379. File format for data exchange with integration tools
the KNX standard [KNX]. If no appropriate functional block exists, the applicationprogrammer may (and will have to) create his own. The interfaces of the functional blocks contain all necessary information concerningthe format of the data types and how to interpret them. The objects which actuallyimplement the interface only contain a basic type, which in most cases does not representmore than the size of the data type.
3. Some application descriptions are collected, their meta data are unified and turned into a product catalog for an integration tool. Additionally, language translations are included in this step.
6. The integration tool sends the necessary configuration for each device to the down- load part as a configuration description.
7. The download part of the BCU SDK creates a downloadable image and writes it into the BCU memory.
The format and creation of the product catalog, as well as how the integration toolworks, are out of scope of this document. For the translations, two alternatives exist:
• After an application description has been created by the BCU SDK, translations into other languages are put in XML tags using the lang attribute. The big disadvantage is that, for new translations, the whole file has to be changed.
• Use the GNU gettext system. After creating the application information, all texts are extracted. The resulting file is passed to a translator, who creates the transla- tions. Then the translations for one language are bundled in catalogs, which are used by integration tools to replace the text. This system is used by nearly all Linux applications and has proven to work. Additionally, the automatic reuse of translations is possible and there is no need for releasing translations and applications at the same time.
138 9.2. Basic definitions
How the code is passed between the BCU SDK and the download tool is not importantfor the integration tool. Two alternatives are possible:
• The code is stored in another file (using an internal file format) and imported in the download tool.
• The code is embedded as a large text block in the application description (again, using an internal format).
1399. File format for data exchange with integration tools
MaskVersion Mandatory, contains the mask version of the device the application is designed for as a hexadecimal number string. Additionally, this can be used by the integration tool to determine if certain features are available in the target BCU.
Category Optional, contains the hierarchical function class, e.g.: Application Modules / Push Button Sensor / Two Fold as a string.
9.3.2. InterfaceFor each interface, the functional block contains a node with the name Interface. Thisnode also has a unique ID in the attribute id. It contains the following nodes:
GroupTitle Optional, specifies the title of the group to which the interface belongs. This attribute is intended to provide the name of a property page on which an integration tool should display the interface, if the functional block is unknown to the integration tool.
140 9.3. Application information
1419. File format for data exchange with integration tools
• exprf := ident returns the current value of the FloatParameter with the ID ident. • exprf := exprf ’+’ exprf | exprf ’-’ exprf | exprf ’*’ exprf | exprf ’/’ exprf | ’-’ exprf • exprf := expri • exprf := ’(’ exprf ’)’
The link to the actual objects is made by one or more references with the elementReference. It has no content, only the attribute idref. idref contains the ID of a groupobject, property, polling object or parameter, which the interface represents. Then the definitions of the properties, parameters, polling objects and group objectsfollow.
GroupType Mandatory, contains the basic type number of the group object as specified in [KNX] 3/5/1-4.3.2.1.2.1.
Sending Mandatory boolean value, which is true, if the application can send values (i.e., transmit A GroupValue Write requests) via this group object.
Receiving Mandatory boolean value, which is true, if the application can receive values via this group object. This includes both A GroupValue Write indications and A GroupValue Read responses.
Reading Mandatory boolean value, which is true, if the application can send A GroupValue Read requests via this group object.2
StateBased Mandatory boolean value, which is true if the group object contains state information, i.e., if an A GroupValue Read request will return a useful answer (cf. the StateBased keyword in the BCU configuration, Section 8.1). 2 Although it will usually be reasonable that Receiving is enabled together with this option, it is not mandatory.
142 9.3. Application information
9.3.4. PropertiesA group object is defined by the node Property and contains a unique ID in the attributeid. It contains the following elements:
AddInfo Optional, additional textual information about the polling master interface.
Here, no type information is given because the basic type is only one byte. The actualinterpretation is defined by the DPType in the interface.3
AddInfo Optional, additional textual information about the polling slave interface. 3 At the moment, there is no DPType for this. When functional blocks with polling will be introduced, a DPT needs to be allocated.
1439. File format for data exchange with integration tools
9.3.7. ParameterAll parameters are grouped together in the node Parameter. For each parameter, thereis a child node which carries its unique ID in the attribute id. Possible parameters are:
ListParameter Represents a selection of one element out of a list. It has the following elements:
IntParameter This parameter represents an integer value. It has the following elements:
FloatParamter This parameter represents a floating point value. It has the following elements:
144 9.4. Configuration description
Unit Optional, contains some text which represents the unit for measuring the value. Precision Optional, contains a floating point value which describes the size of the smallest interval whose bounds the final BCU application will consider as separate values. Increment Optional, contains a floating point value which is the default increment value the integration tool should offer if up/down buttons are shown.
1459. File format for data exchange with integration tools
9.4.2. PropertyA property is configured with the node Property and contains the unique ID in theattribute id. It contains the following elements:Disable Optional boolean value; if it is true, the property should be deactivated. The default value is false.
146 9.5. Limitations
PollingAddress Mandatory, contains the polling address which the master uses.
PollingCount Mandatory, contains the number of the highest polling slot used for this address (numbering starts with 1).
PollingAddress Mandatory, contains the polling address which the slave listens to.
PollingSlot Contains the number of the polling slot to send the polling result (number- ing starts with 0).
9.4.5. ParameterAll parameters are grouped together in the node Parameter. For each parameter, thereis a child node, which has its unique ID stored in the attribute id (with the same id asin the application information). Possible parameters are:
ListParameter Contains the ID of the selected list element in the node Value.
IntParameter contains the selected integer value in the node Value. It must meet all constraints defined in the application information.
FloatParameter contains the selected floating point value in the node Value. It must meet all constraints defined in the application information.
StringParameter contains the selected string value in the node Value. It must meet all constraints defined in the application information.
9.5. LimitationsAs a BCU has only limited capabilities, an integration tool must know that the followinglimitations exist:
• All access control features will not work for the BCU 1.
• If different access levels are specified for properties with the same object index, the highest one will be used for the BCU 2.
1479. File format for data exchange with integration tools
• The sending address for group objects is implicitly added to the other address lists used, if it is not present there (for BCU 1 and BCU 2).
• The union of the different listening addresses for a group object will be used, if they are not the same (for BCU 1 and BCU 2).
• For BCU 1 and BCU 2, SendAddress and ReadRequestAddress must have the same value, if both are set.
14810. Usage/Examples10.1. InstallationThe BCU SDK can either be installed as root in a global location, or in the homedirectory of a user. In the latter case, no root access is needed. Loading the EIB kerneldrivers requires root access. The rights to use all eibd back ends can be granted to anormal user. Up to 2 GB of disk space will be used during compilation.
10.1.2. PrerequisitesThe BCU SDK needs the following software to be installed: • A reasonably recent Linux system (e.g. Fedora Core 2 or Debian Sarge) • GCC and G++ in version 3.3 or higher • libxml2 (plus development files) 2.6.16 or higher • flex (Debian users must install flex-old, if it is present, instead of flex) • bison • GNU make • xsltproc (part of libxslt) • xmllint (part of libxml2 sources) If a tool is too old, it is possible to install a newer version in the home directory ofthe user.
14910. Usage/Examples
• pthsem sources
After the process has finished, m68hc05-gcc should be available for execution.
./configuremakemake install
Note: If pthsem is installed as root, the above puts the libraries in /usr/local/lib. Insome distributions, /usr/local/lib is not part of the library path. In that case, you haveto take one of the following measures to allow the BCU SDK installation to continue:
cp /usr/local/lib/libpthsem* /usr/lib
2. Add /usr/local/lib to LD LIBRARY PATH in each shell instance you use to build (or run) programs depending on pthsem:
export LD_LIBARY_PATH=/usr/local/lib 1 Starting with version 0.0.1, only one big tarball exists.
150 10.1. Installation
If the TPUART or bcu1 driver is used, load the driver as root and run
The interface file used by the USB backend is generated dynamically. It changes itsname with every reconnect of the USB cable or reboot. To determine the name for thecurrent session, use findknxusb (as discussed in Section 10.2.2) to determine the USBbus number and device number of the KNX interface. The interface file has the name/proc/bus/usb/ bus-number/ device-number. Both numbers have to be expanded to 3digits with leading zeros. Issue chmod 666 filename to grant access to all users. 2 Be aware that corrupting /etc/ld.so.conf can break programs, especially desktop environments.
15110. Usage/Examples
cg-clone
cg-update
cg-export TARFILE
Before the bcusdk repository can be built, the following commands must be issued:4
aclocalautoheaderautomake -a --foreignautoconf
To build a tar file of the bcusdk, you must issue a configure command followed by
make dist
For the build process, the same order must be followed as for a normal installation (firstpthsem build and install, then m68hc05-gnu and finally the bcusdk). To build a Debian package, first extract the tar file and then issue:
152 10.2. Using eibd
ip:[multicast addr[:port]] connects via an EIBnet/IP router using the EIBnet/IP Rout- ing protocol (its routing tables must be set up correctly)
bcu1s:/dev/ttySx connects to a BCU 1 over the experimental user mode driver. For more details, see 7.3.2.
-d, –daemon[=FILE] Start the program as daemon. The output will be written to FILE, if the argument is present.
-e, –eibaddr=EIBADDR Set our own EIB-address to EIBADDR (default 0.0.1) for drivers which need such an address.
15310. Usage/Examples
The TPUART and EIBnet/IP Routing back ends need an EIB address, which mustbe specified by the -e option. For better security, only pass the -u option and avoidusing -i. If you are using eibd with the ft12 back end using /dev/ttyS0, run the following in aterminal:
eibd -u ft12:/dev/ttyS0
If you experience any problems, pass -t1023 as option, which will print all debugginginformation. The program can be ended by pressing Ctrl+C.
The first part is the bus number, the second the device number. Beware that this listmay contain devices which are not KNX interfaces. Please use the product name toverify that the correct device is used by eibd. If eibd interacts with another USB device,it could harm the device. The address of an USB device consists of four parts separated by colons. This addressis passed with the prefix usb: as URL to eibd. The last parts can be omitted if only onedevice matches the specified first parts. Otherwise, eibd will randomly select one of thematching devices. If only one device is found at all, the address can be left empty.
154 10.2. Using eibd
IP address (or DNS name). The supported service protocols are enabled using otheroptions (e.g. pass the parameter -TRDS to eibd to enable all functions). The options are:
-D, –Discovery enable the EIBnet/IP server to answer discovery and description re- quests(SEARCH, DESCRIPTION).
Enabling this server prohibits the normal bus monitor mode (vBusmonitor is stillworking). All other eibd functions are not affected. Eibd does not provide the ability touse filter tables for routing. This means that telegram loops can easily occur when it isused in parallel with another EIBnet/IP router on the same line. The EIBnet/IP server front end of eibd performs a kind of network address translationon individual addresses. In outgoing frames, 0.0.0 is replaced with the individual addressof the bus access device. Likewise, incoming frames with the individual address of thebus access device as destination have this destination address replaced with 0.0.0.6 Also,0.0.0 is returned as the KNX individual address assigned to the Tunnelling connectionin the connection response data block. Source addresses other than 0.0.0 can be used for outgoing frames, but incomingframes addressed to individual addresses other than that of the bus access device aresuppressed.7 Therefore, 0.0.0 should be used as the local individual address by eibd-EIBnet/IP client applications. Tunneling clients which use the address returned in theconnection response will do so automatically.
15510. Usage/Examples
groupwrite Send a group write telegram to a group address (for values with more than 6 bit width)
groupswrite Send a group write telegram to a group address (for values with less than 6 bit width)
groupread Send a group read telegram to a group address. The result is not captured by this tool. It has to be monitored by a bus monitor.
grouplisten Displays all received frames with a particular (destination) group address.
156 10.2. Using eibd
15710. Usage/Examples
158 10.4. Developing BCU applications
15910. Usage/Examples
build.ai bcu.config
for building the application information is used. As the configuration description con-tains the program text (program ID) from the application description, the image is builtwith
There are two useful utilities, embedprogid and extractprogid, to extract and embedprogram IDs in application informations and configuration descriptions.
cond.config
Device { PEIType 0 ; BCU bcu12 ; // use bcu20 f o r a BCU 2 . 0 Title ” Conditional negation ”;
FunctionalBlock { Title ” Conditional negation ”; ProfileID 10000; Interface { R e f e r e n c e { send } ; A b b r e v i a t i o n send ; DPType DPT Bool ; // same as 1 . 0 0 2 }; Interface { Reference { recv }; Abbreviation recv ; DPType DPT Bool ; }; Interface {
160 10.6. Example program
R e f e r e n c e { cond } ; A b b r e v i a t i o n cond ; DPType DPT Bool ; }; };
GroupObject { Name send ; Type UINT1 ; Sending t r u e ; T i t l e ” Output ” ; S tateBased t r u e ; };
GroupObject { Name r e c v ; Type UINT1 ; on update s e n d u p d a t e ; T i t l e ” Input ” ; S tateBased t r u e ; };
GroupObject { Name cond ; Type UINT1 ; Receiving true ; T i t l e ” Condition ” ; S tateBased t r u e ; };
i n c l u d e { ” cond . c ” } ;};cond.c
16110. Usage/Examples
<?xml v e r s i o n =”1.0”?><D e v i c e C o n f i g > <ProgramID>xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx </ProgramID> <P h y s i c a l A d d r e s s >1.3.1 </ P h y s i c a l A d d r e s s > <GroupObject i d=”i d 0”> <P r i o r i t y >low</ P r i o r i t y > <SendAddress >0/0/1</SendAddress> </GroupObject> <GroupObject i d=”i d 2”> <P r i o r i t y >low</ P r i o r i t y > <ReceiveAddress > <GroupAddr>0/0/5</GroupAddr> </ReceiveAddress > </GroupObject> <GroupObject i d=”i d 4”> <P r i o r i t y >low</ P r i o r i t y > <ReceiveAddress > <GroupAddr>0/0/7</GroupAddr> </ReceiveAddress > </GroupObject></D e v i c e Con f ig >
Description
The program will be loaded on the device 1.3.1. It uses 0/0/1 as group address for theoutput, 0/0/5 for the input and 0/0/7 for the condition. Each GroupObject command creates a group object, as well a variable to receivevalues. The send object has set Sending to true, so that a transmit function namedobject name transmit is generated. If a value is to be sent, the variable is changed andthe transmit function is called. The recv object contains the on update statement, which causes the given function tobe called when a value for this group object is received. Setting Receiving to true for cond causes a group object to receive new values in itsassociated variable, but does not cause any action in that case. As a special feature, group objects (and other objects) can be declared in any numberwithout necessarily having impact on the code size. Objects which are to be visible inthe application information must be referenced in an Interface within a FunctionalBlock.Otherwise, they will not consume space in the binary image. Also, objects which aredisabled or not referenced in the configuration description will not increase the imagesize. However, this automatic removal is not possible when some parts of the object areused in the C code which cannot be removed by GCC.
162 10.6. Example program
timer.config
Device { BCU bcu20 ; PEIType 0 ; Title ” Cyclic switching ”;
CI { P h y s i c a l A d d r e s s $1 . 3 . 2 ; } ;
FunctionalBlock { T i t l e ” Timer ” ; ProfileID 10001;
Interface { DPType 1 . 0 0 0 ; R e f e r e n c e { out } ; A b b r e v i a t i o n out ; }; Interface { DPType 5 . 0 0 0 ; R e f e r e n c e { time , ctime } ; Abbreviation period ; }; };
GroupObject { Name out ; Type UINT1 ; T i t l e ” Output ” ; S tateBased t r u e ; Sending t r u e ;
CI { SendAddress $0 / 0 / 9 ; } ; };
Timer { Name timeout ; Type UserTimer ;
16310. Usage/Examples
R e s o l u t i o n RES 1066ms ; o n e x p i r e send ; };
I n t P ar amete r { T i t l e ” Period ” ; Name time ; MinValue 1 ; MaxValue 1 2 7 ; Default 10;
CI { Value 2 0 ; } ; };
Object { Name prop ; ObjectType 1 0 0 ;
P r o pe rt y { Name ctime ; PropertyID 1 0 0 ; Type PDT UNSIGNED CHAR ; T i t l e ” Transmit p e r i o d ” ; }; };};timer.c
void i n i t ( ){ c ti m e=time ; t i m e o u t s e t ( ctime ) ;}v o i d send ( ){ out =! out ; out transmit ( ) ; t i m e o u t s e t ( ctime ) ;}timer.ci
164 10.6. Example program
<?xml v e r s i o n =”1.0”?><D e v i c e Con f ig > <ProgramID>YYYYYYYYYYYYYYYYYYYY</ProgramID> <P h y s i c a l A d d r e s s >1.3.2 </ P h y s i c a l A d d r e s s > <GroupObject i d=”i d 4”> <P r i o r i t y >low</ P r i o r i t y > <SendAddress >0/0/9</SendAddress> </GroupObject> <Parameter> <I n tParamete r i d=”i d 1”> <Value >20</Value> </IntParameter> </Parameter></D e v i c e Con f ig >DescriptionThe program will be loaded into the device 1.3.2. It uses group address 0/0/9 asdestination. An integer parameter named ctime is defined, which accepts values between 1 and127. Additionally, a property named ctimer is defined. The on init clause causes init to be executed. This function copies the initial valuefrom time to ctime and starts the timer timeout. This timer is defined as a user timer. When it expires, on expire causes that send is executed. This function inverts thecurrent state of the group object, transmits its state and starts the timer again. timer.config also include the CI blocks for build.dev with the same configuration asspecified in timer.ci.
16510. Usage/Examples
166Part IV.
Appendix
167A. Image formatThe image format is a binary format. All multi byte values are stored in big endianformat. The image is divided into streams. A valid image starts with a 10 byte header. After this header, it may only includewell formed streams as described in this chapter, without any space between them. Itis possible to include new stream types, if a new stream identifier number is chosen forthem. The header has the following content:
A.1. StreamsIn general, a stream which is specified in the following section must have the describedformat and may not be extended.
A.1.2. L CODEThis stream contains the memory image of the BCU program. For BCU 1.2 and 2.x,this is the content of the EEPROM starting at address 0x100.
169A. Image format
170 A.1. Streams
171A. Image format
172B. TablesB.1. Available DP TypesBCU SDK knows the following DPT types by name. This list is based on the proposedlist of 2004-02-12.
173B. Tables
174 B.1. Available DP Types
175B. Tables
176 B.1. Available DP Types
177B. Tables
178 B.1. Available DP Types
179B. Tables
180 B.2. Available property IDs
Property ID Name 1 PID OBJECT TYPE 2 PID OBJECT NAME 3 PID SEMAPHOR 4 PID GROUP OBJECT REFERENCE 5 PID LOAD STATE CONTROL 6 PID RUN STATE CONTROL 7 PID TABLE REFERENCE 8 PID SERVICE CONTROL 9 PID FIRMWARE REVISION 10 PID SERVICES SUPPORTED 11 PID SERIAL NUMBER 12 PID MANUFACTURER ID 13 PID PROGRAM VERSION 14 PID DEVICE CONTROL 15 PID ORDER INFO 16 PID PEI TYPE 17 PID PORT CONFIGURATION 18 PID POLL GROUP SETTINGS 19 PID MANUFACTURER DATA 20 PID ENABLE 21 PID DESCRIPTION 22 PID FILE 23 PID TABLE 24 PID ENROL 25 PID VERSION 26 PID GROUP OBJECT LINK 27 PID MCB TABLE 28 PID ERROR CODE 51 PID ROUTING COUNT 52 PID MAX RETRY COUNT 53 PID ERROR FLAGS 54 PID PROGMODE 55 PID PRODUCT ID 56 PID MAX APDULENGTH 57 PID SUBNET ADDR 58 PID DEVICE ADDR
181B. Tables
Property ID Name 59 PID PB CONFIG 60 PID ADDR REPORT 61 PID ADDR CHECK 62 PID OBJECT VALUE 63 PID OBJECTLINK 64 PID APPLICATION 65 PID PARAMETER 66 PID OBJECTADDRESS 67 PID PSU TYPE 68 PID PSU STATUS 69 PID PSU ENABLE 70 PID DOMAIN ADDRESS 71 PID IO LIST 72 PID MGT DESCRIPTOR 01 73 PID PL110 PARAM 74 PID RF REPEAT COUNTER 75 PID RECEIVE BLOCK TABLE 76 PID RANDOM PAUSE TABLE 77 PID RECEIVE BLOCK NR 78 PID HARDWARE TYPE 79 PID RETRANSMITTER NUMBER 80 PID SERIAL NR TABLE 81 PID BIBATMASTER ADDRESS 101 PID CHANNEL 01 PARAM 102 PID CHANNEL 02 PARAM 103 PID CHANNEL 03 PARAM 104 PID CHANNEL 04 PARAM 105 PID CHANNEL 05 PARAM 106 PID CHANNEL 06 PARAM 107 PID CHANNEL 07 PARAM 108 PID CHANNEL 08 PARAM 109 PID CHANNEL 09 PARAM 110 PID CHANNEL 10 PARAM 111 PID CHANNEL 11 PARAM 112 PID CHANNEL 12 PARAM 113 PID CHANNEL 13 PARAM 114 PID CHANNEL 14 PARAM 115 PID CHANNEL 15 PARAM 116 PID CHANNEL 16 PARAM 117 PID CHANNEL 17 PARAM 118 PID CHANNEL 18 PARAM
182 B.2. Available property IDs
Property ID Name119 PID CHANNEL 19 PARAM120 PID CHANNEL 20 PARAM121 PID CHANNEL 21 PARAM122 PID CHANNEL 22 PARAM123 PID CHANNEL 23 PARAM124 PID CHANNEL 24 PARAM125 PID CHANNEL 25 PARAM126 PID CHANNEL 26 PARAM127 PID CHANNEL 27 PARAM128 PID CHANNEL 28 PARAM129 PID CHANNEL 29 PARAM130 PID CHANNEL 30 PARAM131 PID CHANNEL 31 PARAM132 PID CHANNEL 32 PARAM51 PID EXT FRAMEFORMAT52 PID ADDRTAB153 PID GROUP RESPONSER TABLE51 PID PARAM REFERENCE51 PID MEDIUM TYPE52 PID COMM MODE53 PID MEDIUM AVAILABILITY54 PID ADD INFO TYPES55 PID TIME BASE56 PID TRANSP ENABLE51 PID GRPOBJTABLE52 PID EXT GRPOBJREFERENCE51 PID POLLING STATE52 PID POLLING SLAVE ADDR53 PID POLL CYCLE51 PID AR TYPE REPORT51 PID PROJECT INSTALLATION ID52 PID KNX INDIVIDUAL ADDRESS53 PID ADDITIONAL INDIVIDUAL ADDRESSES54 PID CURRENT IP ASSIGNMENT METHOD55 PID IP ASSIGNMENT METHOD56 PID IP CAPABILITIES57 PID CURRENT IP ADDRESS58 PID CURRENT SUBNET MASK59 PID CURRENT DEFAULT GATEWAY60 PID IP ADDRESS61 PID SUBNET MASK
183B. Tables
Property ID Name 62 PID DEFAULT GATEWAY 63 PID DHCP BOOTP SERVER 64 PID MAC ADDRESS 65 PID SYSTEM SETUP MULTICAST ADDRESS 66 PID ROUTING MULTICAST ADDRESS 67 PID TTL 68 PID EIBNETIP DEVICE CAPABILITIES 69 PID EIBNETIP DEVICE STATE 70 PID EIBNETIP ROUTING CAPABILITIES 71 PID PRIORITY FIFO ENABLED 72 PID QUEUE OVERFLOW TO IP 73 PID QUEUE OVERFLOW TO KNX 74 PID MSG TRANSMIT TO IP 75 PID MSG TRANSMIT TO KNX 76 PID FRIENDLY NAME
184C. Source documentationC.1. eibclient.h File Reference#include "sys/cdefs.h" #include "stdint.h" #include "eibloadresult.h"
Defines • #define EIBGetTPDU EIBGetAPDU Src Receive a TPDU with source address.
Typedefs • typedef EIBConnection EIBConnection type represents a connection to eibd
Functions • EIBConnection ∗ EIBSocketURL (const char ∗url) Opens a connection to eibd.
185C. Source documentation
186 C.1. eibclient.h File Reference
187C. Source documentation
188 C.1. eibclient.h File Reference
• int EIB MC Read (EIBConnection ∗con, uint16 t addr, int len, uint8 t ∗buf) Read BAU memory (over a management connection).
189C. Source documentation
• int EIB MC Read async (EIBConnection ∗con, uint16 t addr, int len, uint8 t ∗buf) Read BAU memory (over a management connection) - asynchronous.
• int EIB MC Write (EIBConnection ∗con, uint16 t addr, int len, const uint8 t ∗buf) Write BAU memory (over a management connection).
• int EIB MC Write async (EIBConnection ∗con, uint16 t addr, int len, const uint8 t ∗buf) Write BAU memory (over a management connection) - asynchronous.
190 C.1. eibclient.h File Reference
191C. Source documentation
• int EIB LoadImage async (EIBConnection ∗con, const uint8 t ∗image, int len) Loads a BCU SDK program image (over a management connection) - asynchronous.
192 C.1. eibclient.h File Reference
Returns: received length or -1 if error
Parameters: con eibd connection image pointer to image len legth of the image
Returns: result
int EIB LoadImage async (EIBConnection ∗ con, const uint8 t ∗ image, int len)Loads a BCU SDK program image (over a management connection) - asynchronous.
Returns: 0 if started, -1 if error
193C. Source documentation
Parameters: con eibd connection dest address of EIB device
Returns: -1 if error, else mask version
Returns: 0 if successful, -1 if error
194 C.1. eibclient.h File Reference
Parameters: con eibd connection dest address of EIB device
Returns: 0 if not in programming mode, -1 if error, else programming mode
195C. Source documentation
196 C.1. eibclient.h File Reference
Parameters: con eibd connection dest new address of EIB device
Returns: -1 if error, 0 if successful
Parameters: con eibd connection key key
Returns: -1 if error, else access level
197C. Source documentation
198 C.1. eibclient.h File Reference
Parameters: con eibd connectionReturns: 0 if started, -1 if error
Parameters: con eibd connectionReturns: 0 if successful, -1 if error
199C. Source documentation
Parameters: con eibd connectionReturns: 0 if not in programming mode, -1 if error, else programming mode
Parameters: con eibd connection obj object index
200 C.1. eibclient.h File Reference
property property ID type pointer to store type max nr of elem pointer to store element count access pointer to access level
Returns: -1 if error, else 0
Parameters: con eibd connection obj object index property property ID type pointer to store type max nr of elem pointer to store element count access pointer to access level
Parameters: con eibd connection obj object index property property ID start start element nr of elem number of elements max len buffer size buf bufferReturns: -1 if error, else read length
201C. Source documentation
Parameters: con eibd connection obj object index property property ID start start element nr of elem number of elements max len buffer size buf buffer
Parameters: con eibd connection maxlen buffer size buf buffer
Returns: number of used bytes in the buffer or -1 if error
202 C.1. eibclient.h File Reference
Parameters: con eibd connection obj object index property property ID start start element nr of elem number of elements len buffer size buf buffer max len length of the result buffer res buffer for the result
Returns: -1 if error, else length of the returned result
203C. Source documentation
int EIB MC Read (EIBConnection ∗ con, uint16 t addr, int len, uint8 t ∗ buf)Read BAU memory (over a management connection).
Parameters: con eibd connection addr memory address len size to read buf buffer
Returns: -1 if error, else read length
int EIB MC Read async (EIBConnection ∗ con, uint16 t addr, int len, uint8 t ∗buf)Read BAU memory (over a management connection) - asynchronous.
Parameters: con eibd connection channel ADC channel count repeat count val pointer to store result
Returns: 0, if successful or -1 if error
204 C.1. eibclient.h File Reference
Parameters: con eibd connection channel ADC channel count repeat count val pointer to store result
Parameters: con eibd connection level level to set key key
int EIB MC SetKey async (EIBConnection ∗ con, uint8 t key[4], uint8 t level)Sets a key (over a management connection) - asynchronous.
int EIB MC Write (EIBConnection ∗ con, uint16 t addr, int len, const uint8 t ∗buf)Write BAU memory (over a management connection).
Parameters: con eibd connection
205C. Source documentation
int EIB MC Write async (EIBConnection ∗ con, uint16 t addr, int len, constuint8 t ∗ buf)Write BAU memory (over a management connection) - asynchronous.Parameters: con eibd connection addr Memory address len size to read buf bufferReturns: 0 if started, -1 if error
206 C.1. eibclient.h File Reference
Returns: return value, as returned by the synchronous function call
Parameters: con eibd connection maxlen buffer size buf buffer
Parameters: con eibd connection maxlen buffer size buf buffer src pointer, where the source address should be stored
207C. Source documentation
208 C.1. eibclient.h File Reference
209C. Source documentation
Parameters: con eibd connection write only if not null, no packets from the bus will be delivered
Parameters: con eibd connection dest destination address
int EIBOpenT Group (EIBConnection ∗ con, eibaddr t dest, int write only)Opens a connection of type T Group.
Parameters: con eibd connection dest group address write only if not null, no packets from the bus will be delivered
210 C.1. eibclient.h File Reference
int EIBOpenT Individual (EIBConnection ∗ con, eibaddr t dest, int write only)Opens a connection of type T Individual.Parameters: con eibd connection dest destination address write only if not null, no packets from the bus will be deliveredReturns: 0 if successful, -1 if error
211C. Source documentation
212 C.1. eibclient.h File Reference
213C. Source documentation
Parameters: host hostname running eibd port portnumber
Returns: connection handle or NULL
214BibliographyThe entire GNU tool chain documentation is distributed with the respective programsources. Different versions are available at. However, thebest information source is the version distributed with the sources you are using. Onlinereferences are valid as of 2005-05-05.
[BFD] Steve Chamberlain and others, libbfd – The Binary File Descriptor Library. Available as part of the binutils sources
[BFDINT] Ian Lance Taylor and others, BFD Internals. Available as part of the binu- tils sources
215Bibliography
[GAS] Dean Elsner, Jay Fenlason and others, Using as – The GNU assembler. Available as part of the binutils sources
[GCC] Free Software Foundation, Using the GNU Compiler Collection (GCC). Available as part of the GCC sources
[GDB] Richard Stallman, Roland Pesch and others, Debugging with GDB. Avail- able as part of the GDB sources
[GDBINT] John Gilmore and others, Using GDB – A guide to the internals of the GNU debugger. Available as part of the GDB sources
[LD] Steve Chamberlain, Ian Lance Taylor and others, Using ld – The GNU linker. Available as part of the binutils sources
[LDINT] Per Bothner, Steve Chamberlain and others, A guide to the internals of the GNU linker. Available as part of the binutils sources
[NEW1] Rob Savoye and others, Porting The GNU Tools To Embedded Systems. Available as part of the newlib sources
[NEW2] Steve Chamberlain and others, The Red Hat newlib C Library. Available as part of the newlib sources
[NEW3] Steve Chamberlain and others, The Red Hat newlib Math Library. Avail- able as part of the newlib sources
216 Bibliography
217
Much more than documents.
Discover everything Scribd has to offer, including books and audiobooks from major publishers.Cancel anytime. | https://www.scribd.com/document/392990693/KNX-Architecture-April-2003 | CC-MAIN-2020-16 | refinedweb | 24,952 | 56.35 |
The TooN library is a set of C++ header files which provide basic numerics facilities:
It provides classes for statically- (known at compile time) and dynamically- (unknown at compile time) sized vectors and matrices and it can delegate advanced functions (like large SVD or multiplication of large matrices) to LAPACK and BLAS (this means you will need libblas and liblapack).
The library makes substantial internal use of templates to achieve run-time speed efficiency whilst retaining a clear programming syntax.
Why use this library?
This section is arranged as a FAQ. Most answers include code fragments. Assume
using namespace TooN;.
To get the code from cvs use:
cvs -z3 -d:pserver:anoncvs@cvs.savannah.nongnu.org:/cvsroot/toon co TooN
The home page for the library with a version of this documentation is at:
The code will work as-is, and comes with a default configuration, which should work on any system.
On a unix system,
./configure && make install will install TooN to the correct place. Note there is no code to be compiled, but the configure script performs some basic checks.
On non-unix systems, e.g. Windows and embedded systems, you may wish to configure the library manually. See Manual configuration.
To begin, just in include the right file:
#include <TooN/TooN.h>
Everything lives in the
TooN namespace.
Then, make sure the directory containing TooN is in your compiler's search path. If you use any decompositions, you will need to link against LAPACK, BLAS and any required support libraries. On a modern unix system, linking against LAPACK will do this automatically.
If you get errors compiling code that uses TooN, look for the macro TOON_TYPEOF in the messages. Most likely the file
internal/config.hh is clobbered. Open it and remove all the defines present there.
Also see Manual configuration for more details on configuring TooN, and Functions using LAPACK, if you want to use LAPACK and BLAS. Define the macro in
internal/config.hh.
Vectors can be statically sized or dynamically sized.
Vector<3> v1; //Create a static sized vector of size 3 Vector<> v2(4); //Create a dynamically sized vector of size 4 Vector<Dynamic> v2(4); //Create a dynamically sized vector of size 4
See also Can I have a precision other than double?.
Matrices can be statically sized or dynamically sized.
Matrix<3> m; //A 3x3 matrix (statically sized) Matrix<3,2> m; //A 3x2 matrix (statically sized) Matrix<> m(5,6); //A 5x6 matrix (dynamically sized) Matrix<3,Dynamic> m(3,6); //A 3x6 matrix with a dynamic number of columns and static number of rows. Matrix<Dynamic,2> m(3,2); //A 2x3 matrix with a dynamic number of rows and static number of columns.
See also Can I have a precision other than double?.
To write a function taking a local copy of a vector:
template<int Size> void func(Vector<Size> v);
To write a function taking any type of vector by reference:
template<int Size, typename Precision, typename Base> void func(const Vector<Size, Precision, Base>& v);
See also Can I have a precision other than double?, How do I write generic code? and Why don't functions work in place?
Slices are strange types. If you want to write a function which uniformly accepts
const whole objects as well as slices, you need to template on the precision.
Note that constness in C++ is tricky (see What is wrong with constness?). If you write the function to accept
Vector<3, double, B>& , then you will not be able to pass it slices from
const Vectors. If, however you write it to accept
Vector<3, const double, B>& , then the only way to pass in a
Vector<3> is to use the
.as_slice() method.
See also How do I write generic code?
In TooN, the behaviour of a Vector or Matrix is controlled by the third template parameter. With one parameter, it owns the data, with another parameter, it is a slice. A static sized object uses the variable:
double my_data[3];
to hold the data. A slice object uses:
double* my_data;
When a Vector is made
const, C++ inserts
const in to those types. The
const it inserts it top level, so these become (respectively):
const double my_data[3]; double * const my_data;
Now the types behave very differently. In the first case
my_data[0] is immutable. In the second case,
my_data is immutable, but
my_data[0] is mutable.
Therefore a slice
const Vector behaves like an immutable pointer to mutable data. TooN attempts to make
const objects behave as much like pointers to immutable data as possible.
The semantics that TooN tries to enforce can be bypassed with sufficient steps:
//Make v look immutable template<class P, class B> void fake_immutable(const Vector<2, P, B>& v) { Vector<2, P, B> nonconst_v(v); nonconst_v[0] = 0; //Effectively mutate v } void bar() { Vector<3> v; ... fake_immutable(v.slice<0,2>()); //Now v is mutated }
See also How do I write a function taking a vector?
Assignments are performed using
=. See also Why does assigning mismatched dynamic vectors fail?.
These operators apply to vectors or matrices and scalars. The operator is applied to every element with the scalar.
*=, /=, *, /
Vector and vectors or matrices and matrices:
+, -, +=, -=
Dot product:
Vector * Vector
Matrix multiply:
Matrix * Matrix
Matrix multiplying a column vector:
Matrix * Vector
Row vector multiplying a matrix:
Vector * Matrix
3x3 Vector cross product:
Vector<3> ^ Vector<3>
All the functions listed below return slices. The slices are simply references to the original data and can be used as lvalues.
Getting the transpose of a matrix:
Matrix.T()
Accessing elements:
Vector[i] //get element i Matrix(i,j) //get element i,j Matrix[i] //get row i as a vector Matrix[i][j] //get element i,j
Turning vectors in to matrices:
Vector.as_row() //vector as a 1xN matrix Vector.as_col() //vector as a Nx1 matrix
Slicing with a start position and size:
Vector.slice<Start, Length>(); //Static slice Vector.slice(start, length); //Dynamic slice Matrix.slice<RowStart, ColStart, NumRows, NumCols>(); //Static slice Matrix.slice(rowstart, colstart, numrows, numcols); //Dynamic slice
Slicing diagonals:
Matrix.diagonal_slice(); //Get the leading diagonal as a vector. Vector.as_diagonal(); //Represent a Vector as a DiagonalMatrix
Like other features of TooN, mixed static/dynamic slicing is allowed. For example:
See also What are slices?
Vectors and matrices start off uninitialized (filled with random garbage). They can be easily filled with zeros, or ones (see also TooN::Ones):
Vector<3> v = Zeros; Matrix<3> m = Zeros Vector<> v2 = Zeros(2); //Note in they dynamic case, the size must be specified Matrix<> m2 = Zeros(2,2); //Note in they dynamic case, the size must be specified
Vectors can be filled with makeVector:
Vector<> v = makeVector(2,3,4,5,6);
Matrices can be initialized to the identity matrix:
note that you need to specify the size in the dynamic case.
Matrices can be filled from data in row-major order:
A less general, but visually more pleasing syntax can also be used:
Note that underfilling is a run-time check, since it can not be detected at compile time.
They can also be initialized with data from another source. See also I have a pointer to a bunch of data. How do I turn it in to a vector/matrix without copying?.
Addition to every element is not an elementary operation in the same way as multiplication by a scalar. It is supported throught the Ones object:
It is supported the same way on Matrix and slices.
Vectors are not generic containers, and dynamic vectors have been designed to have the same semantics as static vectors where possible. Therefore trying to assign a vector of length 2 to a vector of length 3 is an error, so it fails. See also How do I resize a dynamic vector/matrix?
As C++ does not yet support move semantics, you can only safely store static and resizable Vectors in STL containers.
Do you really want to? If you do, then you have to declare it:
Vector<Resizable> v; v.resize(3); v = makeVector(1, 2, 3); v = makeVector(1, 2); //resize v = Ones(5); //resize v = Zeros; // no resize
The policy behind the design of TooN is that it is a linear algebra library, not a generic container library, so resizable Vectors are only created on request. They provide fewer guarantees than other vectors, so errors are likely to be more subtle and harder to track down. One of the main purposes is to be able to store Dynamic vectors of various sizes in STL containers.
Assigning vectors of mismatched sizes will cause an automatic resize. Likewise assigning from entities like Ones with a size specified will cause a resize. Assigning from an entities like Ones with no size specified will not cause a resize.
They can also be resized with an explicit call to .resize(). Resizing is efficient since it is implemented internally with
std::vector. Note that upon resize, existing data elements are retained but new data elements are uninitialized.
Currently, resizable matrices are unimplemented. If you want a resizable matrix, you may consider using a
std::vector, and accessing it as a TooN object when appropriate. See I have a pointer to a bunch of data. How do I turn it in to a vector/matrix without copying?. Also, the speed and complexity of resizable matrices depends on the memory layout, so you may wish to use column major matrices as opposed to the default row major layout.
By default, everything which is checked at compile time in the static case is checked at run-time in the dynamic case (with some additions). Checks can be disabled with various macros. Note that the optimizer will usually remove run-time checks on static objects if the test passes.
Bounds are not checked by default. Bounds checking can be enabled by defining the macro
TOON_CHECK_BOUNDS. None of these macros change the interface, so debugging code can be freely mixed with optimized code.
The debugging checks can be disabled by defining either of the following macros:
TOON_NDEBUG
NDEBUG
Additionally, individual checks can be disabled with the following macros:
TOON_NDEBUG_MISMATCH
TOON_NDEBUG_SLICE
TOON_NDEBUG_SIZE
TOON_NDEBUG_FILL
TOON_NDEBUG_FILL
Errors are manifested to a call to
std::abort().
TooN does not initialize data in a Vector or Matrix. For debugging purposes the following macros can be defined:
TOON_INITIALIZE_QNANor
TOON_INITIALIZE_NANSets every element of newly defined Vectors or Matrixs to quiet NaN, if it exists, and 0 otherwise. Your code will not compile if you have made a Vector or Matrix of a type which cannot be constructed from a number.
TOON_INITIALIZE_SNANSets every element of newly defined Vectors or Matrixs to signalling NaN, if it exists, and 0 otherwise.
TOON_INITIALIZE_VALSets every element of newly defined Vectors or Matrixs to the expansion of this macro.
TOON_INITIALIZE_RANDOMFills up newly defined Vectors and Matrixs with random bytes, to trigger non repeatable behaviour. The random number generator is automatically seeded with a granularity of 1 second. Your code will not compile if you have a Vector or Matrix of a non-POD type.
Slices are references to data belonging to another vector or matrix. Modifying the data in a slice modifies the original object. Likewise, if the original object changes, the change will be reflected in the slice. Slices can be used as lvalues. For example:
Matrix<3> m = Identity; m.slice<0,0,2,2>() *= 3; //Multiply the top-left 2x2 submatrix of m by 3. m[2] /=10; //Divide the third row of M by 10. m.T()[2] +=2; //Add 2 to every element of the second column of M. m[1].slice<1,2>() = makeVector(3,4); //Set m_1,1 to 3 and m_1,2 to 4 m[0][0]=6;
Slices are usually strange types. See How do I write a function taking a vector?
See also
Yes!
Vector<3, float> v; //Static sized vector of floats Vector<Dynamic, float> v(4); //Dynamic sized vector of floats Vector<Dynamic, std::complex<double> > v(4); //Dynamic sized vector of complex numbers
Likewise for matrix. By default, TooN supports all builtin types and std::complex. Using custom types requires some work. If the custom type understands +,-,*,/ with builtin types, then specialize TooN::IsField on the types.
If the type only understands +,-,*,/ with itself, then specialize TooN::Field on the type.
Note that this is required so that TooN can follow the C++ promotion rules. The result of multiplying a
Matrix<double> by a
Vector<float> is a
Vector<double>.
Each vector has a
SliceBase type indicating the type of a slice.
They can be slightly tricky to use:
Vector<2, double, Vector<4>::SliceBase> sliceof(Vector<4>& v) { return v.slice<1,2>(); } template<int S, class P, class B> Vector<2, P, Vector<S, P, B>::SliceBase> sliceof(Vector<S, P, B>& v) { return v.template slice<1,2>(); } template<int S, class P, class B> const Vector<2, const P, typename Vector<S, P, B>::ConstSliceBase > foo(const Vector<S, P, B>& v) { return v.template slice<1,2>(); }
You use the decomposition objects (see below), for example to solve Ax=b:
Matrix<3> A; A[0]=makeVector(1,2,3); A[1]=makeVector(3,2,1); A[2]=makeVector(1,0,1); Vector<3> b = makeVector (2,3,4); // solve Ax=b using LU LU<3> luA(A); Vector<3> x1 = luA.backsub(b); // solve Ax=b using SVD SVD<3> svdA(A); Vector<3> x2 = svdA.backsub(b);
Similarly for the other decomposition objects
For general size matrices (not necessarily square) there are: LU , SVD and gauss_jordan()
For square symmetric matrices there are: SymEigen and Cholesky
If all you want to do is solve a single Ax=b then you may want gaussian_elimination()
Consider the function:
void func(Vector<3>& v);
It can accept a
Vector<3> by reference, and operate on it in place. A
Vector<3> is a type which allocates memory on the stack. A slice merely references memory, and is a subtly different type. To write a function taking any kind of vector (including slices) you can write:
template<class Base> void func(Vector<3, double, Base>& v);
A slice is a temporary object, and according to the rules of C++, you can't pass a temporary to a function as a non-const reference. TooN provides the
.ref() method to escape from this restriction, by returning a reference as a non-temporary. You would then have to write:
Vector<4> v; ... func(v.slice<0,3>().ref());
to get func to accept the slice.
You may also wish to consider writing functions that do not modify structures in place. The
unit function of TooN computes a unit vector given an input vector. In the following context, the code:
produces exactly the same compiler output as the hypothetical
Normalize(v) which operates in place (for static vectors). Consult the ChangeLog entries dated ``Wed 25 Mar, 2009 20:18:16'' and ``Wed 1 Apr, 2009 16:48:45'' for further discussion.
Yes!
Matrix<3, 3, double, ColMajor> m; //3x3 Column major matrix
To create a vector use:
double d[]={1,2,3,4}; Vector<4,double,Reference> v1(d); Vector<Dynamic,double,Reference> v2(d,4);
Or, a functional form can be used:
double d[]={1,2,3,4}; wrapVector<4>(d); //Returns a Vector<4> wrapVector<4,double>(d); //Returns a Vector<4> wrapVector(d,3); //Return a Vector<Dynamic> of size 3 wrapVector<Double>(d,3); //Return a Vector<Dynamic> of size 3
To crate a matrix use
double d[]={1,2,3,4,5,6}; Matrix<2,3,double,Reference::RowMajor> m1(d); Matrix<2,3,double,Reference::ColMajor> m2(d); Matrix<Dynamic, Dynamic, double, Reference::RowMajor> m3(d, 2, 3); Matrix<Dynamic, 3, double, Reference::RowMajor> m4(d, 2, 3); // note two size arguments are required for semi-dynamic matrices
See also wrapVector() and wrapMatrix().
The constructors for TooN objects are very permissive in that they accept run-time size arguments for statically sized objects, and then discard the values, This allows you to easily write generic code which works for both static and dynamic inputs.
Here is a function which mixes up a vector with a random matrix:
template<int Size, class Precision, class Base> Vector<Size, Precision> mixup(const Vector<Size, Precision, Base>& v) { //Create a square matrix, of the same size as v. If v is of dynamic //size, then Size == Dynamic, and so Matrix will also be dynamic. In //this case, TooN will use the constructor arguments to select the //matrix size. If Size is a real size, then TooN will simply ighore //the constructor values. Matrix<Size, Size, Precision> m(v.size(), v.size()); //Fill the matrix with random values that sum up to 1. Precision sum=0; for(int i=0; i < v.size(); i++) for(int j=0; j < v.size(); j++) sum += (m[i][j] = rand()); m/= sum; return m * v; }
Writing functions which safely accept multiple objects requires assertions on the sizes since they may be either static or dynamic. TooN's built in size check will fail at compile time if mismatched static sizes are given, and at run-time if mismatched dynamic sizes are given:
template<int S1, class B1, int S2, class B2> void func_of_2_vectors(const Vector<S1, double, B1>& v1, const Vector<S2, double, B2>& v2) { //Ensure that vectors are the same size SizeMismatch<S1, S2>::test(v1.num_rows(), v2.num_rows()); }
For issues relating to constness, see and
Create two vectors and work out their inner (dot), outer and cross products
// Initialise the vectors Vector<3> a = makeVector(3,5,0); Vector<3> b = makeVector(4,1,3); // Now work out the products double dot = a*b; // Dot product Matrix<3,3> outer = a.as_col() * b.as_row(); // Outer product Vector<3> cross = a ^ b; // Cross product cout << "a:" << endl << a << endl; cout << "b:" << endl << b << endl; cout << "Outer:" << endl << outer << endl; cout << "Cross:" << endl << cross << endl;
Create a vector and a matrix and multiply the two together
// Initialise a vector Vector<3> v = makeVector(1,2,3); // Initialise a matrix Matrix<2,3> M(d); M[0] = makeVector(2,4,5); M[1] = makeVector(6,8,9); // Now perform calculations Vector<2> v2 = M*v; // OK - answer is a static 2D vector Vector<> v3 = M*v; // OK - vector is determined to be 2D at runtime Vector<> v4 = v*M; // Compile error - dimensions of matrix and vector incompatible
One aspect that makes this library efficient is that when you declare a 3-vector, all you get are 3 doubles - there's no metadata. So
sizeof(Vector<3>) is 24. This means that when you write
Vector<3> v; the data for
v is allocated on the stack and hence
new/
delete (
malloc/
free) overhead is avoided. However, for large vectors and matrices, this would be a Bad Thing since
Vector<1000000> v; would result in an object of 8 megabytes being allocated on the stack and potentially overflowing it. TooN gets around that problem by having a cutoff at which statically sized vectors are allocated on the heap. This is completely transparent to the programmer, the objects' behaviour is unchanged and you still get the type safety offered by statically sized vectors and matrices. The cutoff size at which the library changes the representation is defined in
TooN.h as the
const int TooN::Internal::max_bytes_on_stack=1000;.
When you apply the subscript operator to a
Matrix<3,3> and the function simply returns a vector which points to the the apropriate hunk of memory as a reference (i.e. it basically does no work apart from moving around a pointer). This avoids copying and also allows the resulting vector to be used as an l-value. Similarly the transpose operation applied to a matrix returns a matrix which referes to the same memory but with the opposite layout which also means the transpose can be used as an l-value so
M1 = M2.T(); and
M1.T() = M2; do exactly the same thing.
Warning: This also means that
M = M.T(); does the wrong thing. However, since .T() essentially costs nothing, it should be very rare that you need to do this.
These are implemented in the obvious way using metadata with the rule that the object that allocated on the heap also deallocates. Other objects may reference the data (e.g. when you subscript a matrix and get a vector).
When you write
v1 = M * v2; a naive implementation will compute
M * v2 and store the result in a temporary object. It will then copy this temporary object into
v1. A method often advanced to avoid this is to have
M * v2 simply return an special object
O which contains references to
M and
v2. When the compiler then resolves
v1 = O, the special object computes
M*v2 directly into
v1. This approach is often called lazy evaluation and the special objects lazy vectors or lazy matrices. Stroustrup (The C++ programming language Chapter 22) refers to them as composition closure objects or compositors.
The killer is this: What if v1 is just another name for v2? i.e. you write something like
v = M * v;. In this case the semantics have been broken because the values of
v are being overwritten as the computation progresses and then the remainder of the computation is using the new values. In this library
v1 in the expression could equally well alias part of
M, thus you can't even solve the problem by having a clever check for aliasing between
v1 and
v2. This aliasing problem means that the only time the compiler can assume it's safe to omit the temporary is when
v1 is being constructed (and thus cannot alias anything else) i.e.
Vector<3> v1 = M * v2;.
TooN provides this optimisation by providing the compiler with the opportunity to use a return value optimisation. It does this by making
M * v2 call a special constructor for
Vector<3> with
M and
v2 as arguments. Since nothing is happening between the construction of the temporary and the copy construction of
v1 from the temporary (which is then destroyed), the compiler is permitted to optimise the construction of the return value directly into
v1.
Because a naive implemenation of this strategy would result in the vector and matrix classes having a very large number of constructors, these classes are provided with template constructors that take a standard form. The code that does this, declared in the header of class
Vector is:
template <class Op> inline Vector(const Operator<Op>& op) : Base::template VLayout<Size, Precision> (op) { op.eval(*this); }
This documentation is generated from a cleaned-up version of the interface, hiding the implementation that allows all of the magic to work. If you want to know more and can understand idioms like:
template<int, typename, int, typename> struct GenericVBase; template<int, typename> struct VectorAlloc; struct VBase { template<int Size, class Precision> struct VLayout : public GenericVBase<Size, Precision, 1, VectorAlloc<Size, Precision> > { ... }; }; template <int Size, class Precision, class Base=VBase> class Vector: public Base::template VLayout<Size, Precision> { ... };
then take a look at the source code ...
Configuration is controlled by
internal/config.hh. If this file is empty then the default configuration will be used and TooN will work. There are several options.
TooN needs a mechanism to determine the type of the result of an expression. One of the following macros can be defined to control the behaviour:
TOON_TYPEOF_DECLTYPE
TOON_TYPEOF_TYPEOF
typeofextension. Only works with GCC and will fail with -pedantic
TOON_TYPEOF___TYPEOF__
__typeof__extension. Only works with GCC and will work with -pedantic
TOON_TYPEOF_BOOST
TOON_TYPEOF_BUILTIN
std::complex<float>and
std::complex<double>.
Under Win32, the builtin typeof needs to be used. Comment out all the TOON_TYPEOF_ defines to use it.
Some functions use internal implementations for small sizes and may switch over to LAPACK for larger sizes. In all cases, an equivalent method is used in terms of accuracy (eg Gaussian elimination versus LU decomposition). If the following macro is defined:
TOON_USE_LAPACKthen LAPACK will be used for large systems, where optional. The individual functions are:
TOON_DETERMINANT_LAPACK
Note that these macros do not affect classes that are currently only wrappers around LAPACK. | http://docs.ros.org/diamondback/api/libtoon/html/index.html | CC-MAIN-2017-30 | refinedweb | 4,043 | 54.12 |
#include <stdio.h> #include <unistd.h> #include <sys/ioctl.h> #include <termios.h> int main( int argc, char **argv ){ struct winsize size; ioctl( 0, TIOCSWINSZ, (char *) &size ); printf( "Rows: %u\nCols: %u\n", size.ws_row, size.ws_col ); return 0; }
I was using Vim to code this program, and I had Vim suspended when I compiled it. When I ran it, I got ridiculously high values (in the thousands) for both terminal dimensions, and when I started Vim again, there was a massive overload of.. something. I don't know what happened, but my terminal window was all static, and the Activity Monitor showed the memory usage of Vim as 1 GB and climbing. I had to force quit the Terminal.
How can I do this without causing a memory overload (or whatever it was)? I don't really care that much about ioctl(), mostly I just want to know how to get the correct terminal dimensions. | http://forum.codecall.net/topic/66159-ioctl-messes-up-the-terminal/ | crawl-003 | refinedweb | 156 | 73.68 |
Pandas how to get a cell value and update it
Accessing a single value or setting up the value of single row is sometime required when we doesn’t want to create a new Dataframe for just updating that single cell value. There are indexing and slicing methods available but to access a single cell values there are Pandas in-built functions at and iat.
Since indexing with [] must handle a lot of cases (single-label access, slicing, boolean indexing, etc.), it has a bit of overhead in order to figure out what you’re asking for. If you only want to access a scalar value, the fastest way is to use the at and iat methods, which are implemented on all of the data structures.
Similarly to loc, at provides label based scalar lookups, while, iat provides integer based lookups analogously to iloc
Found a very Good explanation in one of the StackOverflow Answers which I wanted to Quote here:
There are two primary ways that pandas makes selections from a DataFrame.
By Label By Integer Location
There are three primary indexers for pandas. We have the indexing operator itself (the brackets
[]),
.loc, and
.iloc. Let’s summarize them:
[] - Primarily selects subsets of columns, but can select rows as well. Cannot simultaneously select rows and columns.
.loc - selects subsets of rows and columns by label only
.iloc - selects subsets of rows and columns by integer location only
Never used
.at or
.iat as they add no additional functionality and with just a small performance increase. I would discourage their use unless you have a very time-sensitive application. Regardless, we have their summary:
.at selects a single scalar value in the DataFrame by label only
.iat selects a single scalar value in the DataFrame by integer location only
In addition to selection by label and integer location, boolean selection also known as boolean indexing exists.
Dataframe cell value by Column Label
at - Access a single value for a row/column label pair Use at if you only need to get or set a single value in a DataFrame or Series.
Let’s create a Dataframe first
import pandas as pd df = pd.DataFrame([[30, 20, 'Hello'], [None, 50, 'foo'], [10, 30, 'poo']], columns=['A', 'B', 'C']) df
Let’s access cell value of (2,1) i.e index 2 and Column B
df.at[2,'B'] 30
Value 30 is the output when you execute the above line of code
Now let’s update the only NaN value in this dataframe to 50 , which is located at cell 1,1 i,e Index 1 and Column A
df.at[1,'A']=50 df
So you have seen how we have updated the cell value without actually creating a new Dataframe here
Let’s see how do you access the cell value using loc and at
df.loc[1].B OR df.loc[1].at['B'] Output: 50
Dataframe cell value by Integer position
From the above dataframe, Let’s access the cell value of 1,2 i.e Index 1 and Column 2 i.e Col C
iat - Access a single value for a row/column pair by integer position. Use iat if you only need to get or set a single value in a DataFrame or Series.
df.iat[1, 2] Ouput foo
Let’s setup the cell value with the integer position, So we will update the same cell value with NaN i.e. cell(1,0)
df.iat[1, 0] = 100
Select rows in a MultiIndex Dataframe
Pandas xs Extract a particular cross section from a Series/DataFrame. This method takes a key argument to select data at a particular level of a MultiIndex.
Let’s create a multiindex dataframe first
#xs import itertools import pandas as pd import numpy as np a = ('A', 'B') i = (0, 1, 2) b = (True, False) idx = pd.MultiIndex.from_tuples(list(itertools.product(a, i, b)), names=('Alpha', 'Int', 'Bool')) df = pd.DataFrame(np.random.randn(len(idx), 7), index=idx, columns=('I', 'II', 'III', 'IV', 'V', 'VI', 'VII'))
Access Alpha = B
df.xs(('B',), level='Alpha')
Access Alpha = ‘B’ and Bool == False
df.xs(('B', False), level=('Alpha', 'Bool'))
Access Alpha = ‘B’ and Bool == False and Column III
df.xs(('B', False), level=('Alpha', 'Bool'))['III']
Conclusion. at Works very similar to loc for scalar indexers. Cannot operate on array indexers.Advantage over loc is that this is faster. Similarly, iat Works similarly to iloc but both of them only selects a single scalar value. Further to this you can read this blog on how to update the row and column values based on conditions. | https://kanoki.org/2019/04/12/pandas-how-to-get-a-cell-value-and-update-it/ | CC-MAIN-2022-33 | refinedweb | 776 | 62.98 |
For the upcoming two weeks we are going to team up and create a working machine! Sounds like huge challenge since I never thought that I’m going to construct the working machine in my life!
I teamed up with: Coral, Carolina and Eve.
Coral created a website where we’ve merged our documentations:
She also created a cool video which is showing the process of creating:
We are four people in the group so it won’t get messy each one was mainly focusing on one task, but still taking part in consulting and supporting others.
For me the task will be: take care about electronics (connections between the boards, motors and the battery) + programming.
Schedule and tasks:
We have two weeks. First one is ment to be mostly for working on mechanical desig, second - for machine design. As I’ve mentioned before we are four people, each one will be focusing on main task and supporting the others. But becouse some tasks can be executed paralelly, we will treat this weekly division (mechanicel/machine design) quite fluently, trying to use the time in the best way.
We are not planning the days exactly but more making to-do list with tasks:
Organsing:
Designing:
Electronics:
Assigning the main tasks:
Dorota: electronics and programming Carolina: design and electronics Coral: website, video, design Eva: design, lasercutting
For the mechanical design I’ve took part in first phase which was reaserch and brainstorming. Based on a inspiration from FabLab Ajaccio machine design I’ve also created one of the repeatable elements in Fusion 360.
I exported it as a 2D Drawing and retoch (joined the broken lines in Illustrator)
I’ve understood pretty fast that it’s going to be very messy if we all four are going to work on design parts…they have to match each other so will be better to make them in one file. Also with the use of one type of programm (me and Coral are using Fusion 360, Carolina and Eve - Rhino).
I’ve passed the files to Carolina and jumped to my electronics&programming challenge.
In the meantime, after girls tested and lasercutted the pieces, we’ve assembled the structure to see how everything looks like together. We were also able to test manually the machanism.
Electronics & programming
We are going to use Nadia Peek and James Coleman framework (pygestalt) for modular electronics (Gestalt Nodes).
First I wanted to understand the connections between boards, motors, fab usb and battery. We are going to create a cabled network, where components are:
The below schematics shows the connections in a simple way.
About the FAB NET:
Generally Fabnet is an RS-485 bus () for serial communication system. Multiple receivers can be connected to such a network in a linear, multidrop bus. In our case we are going to connect two motors to drive two axises.
Informations about FabNet:
Motor
We are going to use two
stepper motors, each for driving one axis:
Battery
We are going to use battery which have output
12 Volts and 2.1 Ampere. Battery is going to transform the
alternating current (AC) into
direct current (DC) that we need to run our machine. We will get 220 Volts of alternating current from the socket and transform it to 12 Volts of direct current.
For that I had to connect 4 cables to the battery: two coming from the plug with AC (brown, blue), and two getting out to the board with DC (green,red).
Pay attention:
NEVER HOT PLUG! ( boards should never be powered while connecting). We power the boards at the very end, forgetting this and connecting while powered will most likely destroy the Gestalt Node..
Now we have to connect pins from 8-pin header from FAB NET USB with pins from 10-pin header from geatalt node. Since the components are having different amount of pins we are going to use the cable which at the end is divided (so one can connect the pins in unconventional order).
Pay attention:
The FTDI cables order is different than ina regular FTDI cable. I guess it’s becouse it creates better composition with traces.
Here for reminding order and pinout of regular FTDI, order of colors of traditional rainbow cable:
We are using ready FABNET USB that we have in FabLab Barcelona, though if one would like to mill on its own, here are the traces:
PROGRAMMING
GESTALT NODE FIRMWARE:
Credits for Kris Rijnieks for writting a helpful README file in the GitHub repository of the firmware! The things got clearer after reading it.
One has to burn the firmware on the Gestalt nodes to work further with axis movement . Firmware is available on GitHub under this link:.
You have to download it and go into terminal to the folder which contains makefile.
To burn the firmware on the gestalt board connect all the necessary equipement:
Run the
Makefile (by running command
make in the terminal) for burning makefile on the board and setting up fuses.
PROGRAMMING THE BOARD/S:
We are going to program the boards in
Python, with the use of
pygestalt library which contains series of examples to control machines.
To properly install the library follow the
README file inside the linked pygestalt git repository.
Once we have the library installed we can run the example codes:
In both of the files one important thing is to
change the usb-serial port which enables serial communication.
In the terminal you can list usb ports by typing
ls /dev/cu. Now you have to copy the port name and place it in
portName=
Now we can run the code by using command:
python single_node.py
IMPORTANT: to run the example you have to get to the folder which contains it: pygestalt-master/examples/machines/htmaa
For controlling the axises we are using this line:
moves = [[0,10],[0,20],[20,20],[20,0],[0,0]]
where you specify the x and y position [x,y] of each step. The code above will give us the drawing of the square.
The triangle will be:
moves = [[0,20],[20,0],[0,0]]
We were also experimenting with circle drawing. My coding (and math) skills are very basic, so when it appeared that drawing a circle is much more complicated…I’ve got a hand from our instructor Xavi!
For that one would need a
formula which will generate the points and put them in the place of coordinates. We have to specify some additional variables like pi, radius, degrees.
We’ve got the result of circle but with too many points, so the motor was getting a bit crazy and was shaking all the machine. If one would like to polish the code one have to reduce the amount of coordinates.
Python
I’m new to Python, here I’m writting down some code tipps that I’ve gathered during the Python tutorial:
sudo pip install pySerial
from pySerial import *
import pySerial
python myscript.py
installing the git repository directlyy from terminal: git+git://github.com/nadya/pygestalt.git
or download the .zip package, navigate to the folder and: sudo easy_install pygestalt
or run the setup file: sudo python ./setup.py install
INSPIRATIONS:
Nadja:
Xavi:
Francesca Perona:
FabNet:
Files made/modified during this assignment:
triangles.dxf
pygestalt with examples of single_node.py and xy_plotter.py | http://fab.academany.org/2018/labs/barcelona/students/dorota-orlof/assignments/assignment14/ | CC-MAIN-2019-13 | refinedweb | 1,225 | 59.94 |
Ok, so we've converted the InChI library to JavaScript. There are two ways to go from here, either call it directly from JavaScript or write a C function to call it and convert that to JavaScript (and then call that). It might seem that the second plan is more work, but it actually makes things easier as calling these JavaScriptified C functions is a bit tricky especially if we need to pass anything beyond basic C types.
The following code uses the InChI API to read in an InChI, and list the atoms and bonds in a molecule:
#include <stdio.h>
#include <string.h>
#include "inchi_api.h"
int InChI_to_Struct(char* inchi)
{
inchi_InputINCHI inp;
inchi_OutputStruct out;
int i, j, retval;
inp.szInChI = inchi;
memset(&out, 0, sizeof(out));
retval = GetStructFromINCHI(&inp, &out);
printf("number of atoms: %d\n" , out.num_atoms);
for(i=0;i<out.num_atoms;++i)
{
inchi_Atom* piat = &out.atom[i];
printf("Atom %d: %s\n", i, piat->elname);
for(j=0;j<piat->num_bonds;++j)
{
printf("Bond from %d to %d of type %d\n", i, piat->neighbor[j], piat->bond_type[j]);
}
}
FreeStructFromINCHI( &out );
return retval;
}
int test_InChI_to_Struct()
{
int retval;
char inchi [] = "InChI=1S/CHClO/c2-1-3/h1H";
retval = myInChI_to_Struct(inchi);
return retval;
}
I saved the above code along with the InChI library's own C files in inchi_dll/mycode.c, and added it in the two appropriate places in the Makefile so that the compilation as described in Part II created two extra functions in inchi.js.
To test at the command line, you need to edit the run() method to call InChI_to_Struct, and then call the run() method itself. When you do this, you will find that _strtod is not implemented (so you need to add an implementation - I just pass the call to _strtol) and that there is a call to some clock-related functions (I make this just return 0 - to sort this out properly you would need to look at the original C code and figure out what this function is used for in this context). So, here it is in action if I call run("InChI=1/S/CHClO/c2-1-3/h1H"):
user@ubuntu:~/Tools/inchidemo$ ~/Tools/v8-repo/d8 inchi.js
number of atoms: 3
Atom 0: C
Bond from 0 to 1 of type 1
Bond from 0 to 2 of type 2
Atom 1: Cl
Bond from 1 to 0 of type 1
Atom 2: O
Bond from 2 to 0 of type 2
Once tested, you can make a webpage that incorporates it. Using Chrome, check out the InChI JavaScript demo.
So...does it work? Well, almost. For some simple InChIs it works perfectly. For others, it returns an error. There are a couple of ways of tracking down the problem but, you know, I have to draw the line somewhere so I'll leave that as an exercise for the reader. Also, the page needs to be refreshed after each InChI, so there's something wrong there with the way I've set it up. The file size is currently too big, but that can be reduced by leaving out unnecessary functions (for example) as well as by using the techniques discussed in the previous post. Perhaps the biggest problem is that the code maxes out the stack space on Firefox/Spidermonkey - this can probably only be addressed by discussion with the emscripten author and changes to the InChI source code.
So that's where I'll leave it for now. I'm very impressed with how well this works - the whole idea is really quite amazing and I didn't expect to get this far, especially with such a complex piece of code. I'll leave the interested reader with a few questions: can you track down all the problems and sort them out?, what other C/C++ libraries could usefully be converted to JavaScript?, and what other languages can be generated from LLVM bytecode?
Supporting info: Various versions of the InChI JavaScript code: vanilla, for running at command-line, ready for webpage, and finally minified.
Acknowledgement: Thanks to kripken, the main emscripten author, for the rapid fix to my reported bug.
1 comment:
Mmmm... impressive! | http://baoilleach.blogspot.com/2011/05/almost-translate-inchi-code-into_24.html | CC-MAIN-2015-32 | refinedweb | 702 | 67.79 |
Get an api up and running quickly
Project Description)
Requests are translated from the left bit to the right bit of the path (so for, **params): pass def POST(self, **params): pass")
Post requests are also merged with the **params on the controller method, with the POST params taking precedence:
POST /foo?param1=GET1¶m2=GET2 body: param1=POST1¶m3=val3 -> prefix.Foo.POST(param1="POST1", param2="GET2", param3="val3")
Fun with parameters
The endpoints.decorators module gives you some handy decorators to make parameter handling and error checking easier:
from endpoints import Controller from endpoints.decorators import param class Foo(Controller): @param('param1', default="some val") @param('param2', choices=['one', 'two']) def GET(self, **params): pass
For the most part, the param decorator tries to act like Python’s built-in argparse.add_argument() method.
There is also a get_param decorator when you just want to make sure a query param exists and don’t care about post params and a post_param when you only care about posted parameters. There is also a require_params decorator that is a quick way to just make sure certain."
The CorsMixin will handle all the OPTION requests, and setting all the headers, so you don’t have to worry about them (unless you want to).()
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/endpoints/0.8.18/ | CC-MAIN-2018-17 | refinedweb | 236 | 58.82 |
This article will cover how you can use visualization libraries and software to determine runtime complexities for different algorithms. We will cover the basic usage of
matplotlib for visualization of 2d plots and
numpy for calculating lines of best fit, and then go over how these libraries can be used to determine their runtime complexity either through "guesstimation" or by comparing the plots of their runtimes to that of known functions (i.e.
y=x,
y=2^n).
If you are interested in downloading the code featured in this article, please visit this repository on my GitHub (chroline/visualizingRuntimes).
What is runtime complexity?
Runtime complexity, more specifically runtime complexity analysis, is a measurement of how “fast” an algorithm can run as the amount of operations it requires increases. Before we dive into visualizing the runtimes of different algorithms, let’s look at a couple of basic examples to explain the concept.
Take the following
add function. It accepts two parameters,
a and
b, and performs addition on
a and
b.
def add(a, b): return a + b
When given any two parameters (1 and 2, 2 and 3, 29347 and 93648), the amount of operations does not change. Therefore, we say the algorithm runs in constant, or
O(1), time.
However, now consider the following
permutations function. It accepts one main parameter,
string, and prints all of the permutations of that string.
def permutations(string, step = 0): if step == len(string): print "".join(string) for i in range(step, len(string)): string_copy = [c for c in string] string_copy[step], string_copy[i] =string_copy[i], string_copy[step] permutations(string_copy, step + 1)
As you could imagine, this function would take much longer than the previous
add function; in fact, this function would run in what is called factorial time, represented as
O(n!). That is because as the amount of characters in
string increases, the amount of operations required to find all the permutations increases factorially.
When comparing the runtimes of two functions visually, you would notice a stark contrast in the graphs they produce. For the
add function, you would observe a flat line, as the input of the function does not affect the amount of operations required by the function. However, for the
permutations function, you would observe a line that drastically spikes upwards (the slope of the line would approach infinity) because the amount of operations increases factorially as the size of the input increases.
Now that we have covered the basics of runtime complexity analysis, we can begin the process of writing code to visualize the runtimes of different algorithms.
Initialization
Before running any visualizations, we must first import the necessary libraries and initialize them.
matplotlibis a library that will create and display the graphs
numpyis a library that consists of numerous mathematical utility functions
timeitis a library that we will use to time how long each call to the algorithm takes
mathis the basic Python math library
randomis the basic Python randomization library
import matplotlib.pyplot as plt import numpy as np import timeit import math import random plt.rcParams['figure.figsize'] = [10, 6] # set size of plot
Demos
Below are some code segments that demonstrate how to use
matplotlib and
numpy.
sum function
The Python
sum function calculates the sum of all elements of a provided
Iterable. This function implements an algorithm with a
O(n) runtime complexity.
To test this, we will use the
linspace method from the
numpy library to generate an iterable with 50 evenly-spaced values ranging from 10 to 10,000. The graph, although not a perfectly straight plot, illustrates this.
ns = np.linspace(10, 10_000, 50, dtype=int) ts = [timeit.timeit('sum(range({}))'.format(n), number=100) for n in ns] plt.plot(ns, ts, 'or')
We can add a line of best fit (using a 4th-degree function) to further exemplify this. The graph will never be a perfectly straight-line, but it should come close.
degree = 4 coeffs = np.polyfit(ns, ts, degree) p = np.poly1d(coeffs) plt.plot(ns, ts,'or') plt.plot(ns, [p(n)for nin ns],'-b')
List Indexing
Retrieving an item from a list (list indexing) runs with
O(1) runtime complexity, which means that the amount of items in the list does not affect how long the algorithm takes to run. How is this represented in a graph?
lst = list(range(1_000_000)) ns = np.linspace(0, len(lst), 1000, endpoint=False, dtype=int) ts = [timeit.timeit('_ = lst[{}]'.format(n), globals=globals(), number=10000) for nin ns] plt.plot(ns, ts,'or') degree = 4 coeffs = np.polyfit(ns, ts, degree) p = np.poly1d(coeffs) plt.plot(ns, [p(n)for nin ns],'-b')
Algorithms
Now we will look at the graphs produced by the following algorithms:
- linear search
- binary search
- insertion sort
Linear Search
Linear search has a runtime complexity of
O(n), which will be represented by an approximately straight line.
Red plots demonstrate searching for an element in a shuffled, blue plots demonstrate searching for an element that is not present in the list.
The line of best fit for the red plots will generally be lesser than that of the blue plots because searching for an element that is not present in the list requires iterating through the entire list.
## searches for an item in a list def contains(lst, x): for y in lst: if x == y: return True return False ns = np.linspace(10, 10_000, 100, dtype=int) # red plots ts = [timeit.timeit('contains(lst, 0)', setup='lst=list(range({})); random.shuffle(lst)'.format(n), globals=globals(), number=100) for n in ns] plt.plot(ns, ts, 'or') # line of best fit for red plots degree = 4 coeffs = np.polyfit(ns, ts, degree) p = np.poly1d(coeffs) plt.plot(ns, [p(n) for n in ns], '-r') # blue plots ts = [timeit.timeit('contains(lst, -1)', setup='lst=list(range({}))'.format(n), globals=globals(), number=100) for n in ns] plt.plot(ns, ts, 'ob') # line of best fit for blue plots degree = 4 coeffs = np.polyfit(ns, ts, degree) p = np.poly1d(coeffs) plt.plot(ns, [p(n) for n in ns], '-b')
Binary Search
Binary search runs with
O(log n) runtime complexity.
# searches for an item in a list using linear search def contains(lst, x): lo = 0 hi = len(lst)-1 while lo <= hi: mid = (lo + hi) // 2 if x < lst[mid]: hi = mid - 1 elif x > lst[mid]: lo = mid + 1 else: return True else: return False ns = np.linspace(10, 10_000, 100, dtype=int) ts = [timeit.timeit('contains(lst, 0)', setup='lst=list(range({}))'.format(n), globals=globals(), number=1000) for n in ns] plt.plot(ns, ts, 'or') degree = 4 coeffs = np.polyfit(ns, ts, degree) p = np.poly1d(coeffs) plt.plot(ns, [p(n) for n in ns], '-b')
Above is what the graph should look like in a near-perfect simulation. As you can see, there are some some outliers, and in a perfect simulation the line of best fit would be identical to a logarithmic function.
Insertion Sort
What runtime complexity does insertion sort have? We can use the plots of its runtime and compare those plots against the graphs of known runtimes to determine which is the closest match.
def insertion_sort(lst): for i in range(1, len(lst)): for j in range(i, 0, -1): if lst[j-1] > lst[j]: lst[j-1], lst[j] = lst[j], lst[j-1] else: break # 15 values ns = np.linspace(100, 2000, 15, dtype=int) ts = [timeit.timeit('insertion_sort(l], '-r')
Now, we can compare that graph with graphs of different runtimes to ultimately determine which is most similar and which runtime complexity insertion sort has.
Red plots represent
O(log n), blue plots represent
O(n^2), green plots represent
O(2^n).
# y = log x # vertically stretched 1000x ns = range(1, 100) ts = [math.log(n, 2) * 1000 for n in ns] plt.plot(ns, ts, 'or'); # y = x^2 ns = range(1, 100) ts = [(n*n) for n in ns] plt.plot(ns, ts, 'ob'); # y = 2^x # vertically stretched 20x # horizontally compressed 1x ns = range(1, 10) ts = [math.pow(2, n)*20 for n in ns] plt.plot(ns, ts, 'og');
Based on these graphs, it is safe to assume that insertion sort runs in
O(n^2) time.
Mystery function runtime analysis
Can we use visualization of the runtimes of mystery functions to guesstimate their runtime complexities?
Mystery function #1
def f(l: list, val): # l is a python list with n items d = {} for i in range(len(l)): d[l[i]] = i return d[val] ns = range(5, 2000) ts = [timeit.timeit('f(lst, {})'.format(n], '-b')
Without even comparing this graph to the graphs of the possible runtimes, we can already safely assume that this function operates in
O(n) runtime.
Mystery function #2
def g(l): # l is a python list of integers of length n pairs = [ (i,j) for i in range(len(l)) for j in range(len(l)) if i < j ] result = [] for (i,j) in pairs: if l[i] == l[j]: result.append((i,j)) return result ns = range(5, 200) ts = [timeit.timeit('g(lst)', setup='lst=list(range({}))'.format(n), globals=globals(), number=1) for n in ns] plt.plot(ns, ts, 'or') degree = 4 coeffs = np.polyfit(ns, ts, degree) p = np.poly1d(coeffs) plt.plot(ns, [p(n) for n in ns], '-b')
This graph looks very similar to the one for insertion sort, so we can determine that this function has a runtime complexity of
O(n^2).
Mystery function #3
def h(n): if n <= 1: return n else: return h(n-1) + h(n-2) ns = range(5, 30) ts = [timeit.timeit('h({})'.format(n), globals=globals(), number=1) for n in ns] plt.plot(ns, ts, 'or')
This function is more ambiguous than the previous two. It is evident that its runtime must be greater than
O(n) as the slope increases as
n increases. Let's consider the graphs of runtimes
O(n^2) in red and
O(2^n) in blue.
# y = n^2 # vertically stretched 20x ns = range(5, 30) ts = [n*n*20000 for n in ns] plt.plot(ns, ts, 'or') # y = 2^n # vertically compressed 50x ns = range(5, 30) ts = [math.pow(2, n)/50 for n in ns] plt.plot(ns, ts, 'ob')
The graph of the runtime of mystery function #3 more closely resembles the blue plots, so therefore the runtime complexity of mystery function #3 is
O(2^n).
Conclusion
Using these visualization libraries, we are able to determine the runtime complexities of functions and algorithms by comparing them to plots/graphs of known runtimes (i.e. comparing plots of insertion sort runtime against
y=n^2). In addition to determining runtime complexities, this methodology can be used to compare the speeds of different algorithms against each other. With only a few lines of code, you can quickly see the speed at which your chosen algorithms will run with large sets of data!
Resources
- Pyplot tutorial - Matplotlib 3.4.2 documentation
- numpy.polyfit - NumPy v1.20 Manual
- Python Program to Display Fibonacci Sequence Using Recursion (programiz.com)
- How to find all possible permutations of a given string in Python? (tutorialspoint.com)
- 8 time complexities that every programmer should know | Adrian Mejia Blog
Discussion (6)
Unfortunately the section on runtime complexity has a mistake.
Constant runtime is O(1) ... O(n) is linear runtime. Summing the elements of an array would be linear, since you have to access each element of the array one time.
Great catch, I made the update right away! The add function given would run in O(1) time, not O(n) time.
Cool!
Thanks for reading!
Excellent tutorial!
Thanks so much! | https://dev.to/chroline/visualizing-algorithm-runtimes-in-python-f92 | CC-MAIN-2021-39 | refinedweb | 1,978 | 64.51 |
#include <sys/libkern.h>
#include <sys/random.h>
The arc4rand() function will return very good quality random numbers, ignored and zero is returned. The buffer is filled with no more than count bytes. It is strongly advised that read_random() is not used; instead use arc4rand() unless it is necessary to know that no entropy has been returned.
The read_random_uio() function behaves identically to read(2) on /dev/random. The uio argument points to a buffer where random data should be stored. This function only returns data if the random device is seeded. It blocks if unseeded, except when the nonblock argument is true.
All the bits returned by random(), arc4rand(), read_random(), and read_random_uio() are usable. For example, 'random()&01' will produce a random binary value.
The arc4random() is a convenience function which calls arc4rand() to return a 32 bit pseudo-random integer.
The arc4rand() function uses the RC4 algorithm to generate successive pseudo-random bytes. The arc4random() function uses arc4rand() to generate pseudo-random numbers in the range from 0 to 232-1.
The read_random() function returns the number of bytes placed in buffer.
read_random_uio() returns zero when successful, otherwise an error code is returned.
Please direct any comments about this manual page service to Ben Bullock. Privacy policy. | https://nxmnpg.lemoda.net/9/arc4rand | CC-MAIN-2019-51 | refinedweb | 209 | 50.43 |
I have a side project were I’m using Spring Boot, Liquibase and Postgres.
I have the following sequence of tests:
test1(); test2(); test3(); test4();
In those four tests I’m creating the same entity. As I’m not removing the records from the table after each test case, I’m getting the following exception:
org.springframework.dao.DataIntegrityViolationException
I want to solve this problem with the following constraints:
- I don’t want to use the
@repositoryto clean the database.
- I don’t want to kill the database and create it on each test case because I’m using TestContainers and doing that would increase the time it takes to complete the tests.
In short: How can I remove the records from one or more tables after each test case without 1) using the
@repository of each entity and 2) killing and starting the database container on each test case?
The simplest way I found to do this was the following:
- Inject a JdbcTemplate instance
@Autowired private JdbcTemplate jdbcTemplate;
- Use the class JdbcTestUtils to delete the records from the tables you need to.
JdbcTestUtils.deleteFromTables(jdbcTemplate, "table1", "table2", "table3");
- Call this line in the method annotated with
@Afteror
@AfterEachin your test class:
@AfterEach void tearDown() throws DatabaseException { JdbcTestUtils.deleteFromTables(jdbcTemplate, "table1", "table2", "table3"); }
I found this approach in this blog post:
Easy Integration Testing With Testcontainers
###
Annotate your test class with
@DataJpaTest. From the documentation:
By default, tests annotated with @DataJpaTest are transactional and roll back at the end of each test. They also use an embedded in-memory database (replacing any explicit or usually auto-configured DataSource).
For example using Junit4:
@RunWith(SpringRunner.class) @DataJpaTest public class MyTest { //... }
Using Junit5:
@DataJpaTest public class MyTest { //... }
###
You could use @Transactional on your test methods. That way, each test method will run inside its own transaction bracket and will be rolled back before the next test method will run.
Of course, this only works if you are not doing anything weird with manual transaction management, and it is reliant on some Spring Boot autoconfiguration magic, so it may not be possible in every use case, but it is generally a highly performant and very simple approach to isolating test cases.
###
i think this is the most effecient way for postgreSQL. You can make same thing for other db. Just find how to restart tables sequence and execute it
@Autowired private JdbcTemplate jdbcTemplate; @AfterEach public void execute() { jdbcTemplate.execute("TRUNCATE TABLE users" ); jdbcTemplate.execute("ALTER SEQUENCE users_id_seq RESTART"); }
###
My personal preference would be:
private static final String CLEAN_TABLES_SQL[] = { "delete from table1", "delete from table2", "delete from table3" }; @After public void tearDown() { for (String query : CLEAN_TABLES_SQL) { getJdbcTemplate().execute(query); } }
To be able to adopt this approach, you would need to
extend the class with DaoSupport, and set the DataSource in the constructor.
public class Test extends NamedParameterJdbcDaoSupport public Test(DataSource dataSource) { setDataSource(dataSource); } | https://exceptionshub.com/how-to-clean-database-tables-after-each-integration-test-when-using-spring-boot-and-liquibase.html | CC-MAIN-2022-05 | refinedweb | 479 | 52.29 |
SGI compatible version?Posted Wednesday, 27 June, 2012 - 13:26 by yunharla in
Hi guys,
I recently started porting some of my GL helpers to VC++ (adding ifdefs for .net compilers...), so i can write code for all the different IDEs without rewriting huge chunks. By doing so i
realised that the OpenTK version im using incompatible with the SGI headers, so i was wondering if and which version could help here? i mean, if there aint one i have to translate the whole
header to opentk code....
Re: SGI compatible version?
Maybe the Compatibility.Tao namespace is what you're looking for?
Re: SGI compatible version?
well you see the problem is that neither TAO nor OpenTK seem to work correctly with c++.
For example i use the following code i just get some random junk
but when i use a c# callback that converts the pixels pointer to a byte[] and then calls Gl.TexImage2D every works....
not to mention that some function that take floats as arguments take doubles....
and since the code is a 1:1 of android and ios i must assume that something is definitly wrong here....
Re: SGI compatible version?
some function that take floats as arguments take doubles
If you really should find anything like that, please be make a bug report.
You cannot port C++ code directly to C#, the garbage collector must be considered.
1. One important factor here is that memory allocated in C++ will remain at it's position until deleted, and the code you are porting very likely relies on that. Unless you use the fixed statement or GCHandle & Co. the garbage collector in .Net may compact memory alignment or delete any object that isn't referenced anymore.
2. The OpenGL commands issued by the CPU are queued first and later processed by the GPU at a different pace. This means code like this may work 99 times but fail the 100th with an Access Violation.
...because the garbage collector was faster than GL.Foo.
3. On topic Vertex Arrays, please read | http://www.opentk.com/node/3042 | CC-MAIN-2015-11 | refinedweb | 345 | 74.79 |
Mastering Props And PropTypes In React
Do props and PropTypes confuse you? You’re not alone. I’m going to guide you through everything about props and PropTypes. They can make your life significantly easier when developing React apps. This tutorial will introduce you to the details about props, passing and accessing props, and passing information to any component using props.
Building React applications involves breaking down the UI into several components, which implies that we will need to pass data from one component to another. Props are an important mechanism for passing information between React components, and we’re going to look into them in great detail. This article would be incomplete without looking into PropTypes, because they ensure that components use the correct data type and pass the right data.
It’s always a good practice to validate the data we get as props by using PropTypes. You will also learn about integrating PropTypes in React, typechecking with PropTypes, and using defaultProps. At the end of this tutorial, you will understand how to use props and PropTypes effectively. It is important that you already have basic knowledge of how React works.
Understanding Props
React allows us to pass information to components using things called props (short for properties). Because React comprises several components, props make it possible to share the same data across the components that need them. It makes use of one-directional data flow (parent-to-child components). However, with a callback function, it’s possible to pass props back from a child to a parent component.
These data can come in different forms: numbers, strings, arrays, functions, objects, etc. We can pass props to any component, just as we can declare attributes in any HTML tag. Take a look at the code below:
<PostList posts={postsList} />
In this snippet, we are passing a prop named
posts to a component named
PostList. This prop has a value of
{postsList}. Let’s break down how to access and pass data.
Passing and Accessing Props
To make this tutorial interesting, let’s create an application that shows a list of users’ names and posts. The app demo is shown below:
See the Pen [Passing and Accessing Props]() by David Adeneye.
The app comprises collections of components: an
App component, a
PostList component, and a
Post component.
The list of posts will require data such as the
content and the
name of the user. We can construct the data like so:
const postsList = [ { id: 1, content: "The world will be out of the pandemic soon", user: "Lola Lilly", }, { id: 2, content: "I'm really exited I'm getting married soon", user: "Rebecca Smith", }, { id: 3, content: "What is your take on this pandemic", user: "John Doe", }, { id: 4, content: "Is the world really coming to an end", user: "David Mark", }, ];
After this, we need the
App component to pull the data, Here is the basic structure of that component:
const App = () => { return ( <div> <PostList posts={postsList} /> </div> ); };
Here, we are passing an array of posts as a prop to the
PostList (which we’ll create in a bit). The parent component,
PostList, will access the data in
postsList, which will be passed as
posts props to the child component (
Post). If you’ll remember, our app comprises three components, which we’ll create as we proceed.
Let’s create the
PostList:
class PostList extends React.Component { render() { return ( <React.Fragment> <h1>Latest Users Posts</h1> <ul> {this.props.posts.map((post) => { return ( <li key={post.id}> <Post {...post} /> </li> ); })} </ul> </React.Fragment> ); } }
The
PostList component will receive
posts as its prop. It will then loop through the
posts prop,
this.props.posts, to return each posted item as a
Post component (which we will model later). Also, note the use of the
key in the snippet above. For those new to React, a key is a unique identifier assigned to each item in our list, enabling us to distinguish between items. In this case, the key is the
id of each post. There’s no chance of two items having the same
id, so it’s a good piece of data to use for this purpose.
Meanwhile, the remaining properties are passed as props to the
Post component (
<Post {...post} /> ).
So, let’s create the
Post component and make use of the props in it:
const Post = (props) => { return ( <div> <h2>{props.content}</h2> <h4>username: {props.user}</h4> </div> ); };
We are constructing the
Post component as a functional component, rather than defining it as a class component like we did for the
PostList component. I did this to show you how to access props in a functional component, compared to how we access them in a class component with
this.props. Because this a functional component, we can access the values using
props.
We’ve learned now how to pass and access props, and also how to pass information from one component to the other. Let’s consider now how props work with functions.
Passing Functions Via Props
In the preceding section, we passed an array of data as props from one component to another. But what if we are working with functions instead? React allows us to pass functions between components. This comes in handy when we want to trigger a state change in a parent component from its child component. Props are supposed to be immutable; you should not attempt to change the value of a prop. You have to do that in the component that passes it down, which is the parent component.
Let’s create a simple demo app that listens to a click event and changes the state of the app. To change the state of the app in a different component, we have to pass down our function to the component whose state needs to change. In this way, we will have a function in our child component that is able to change state.
Sounds a bit complex? I have created a simple React application that changes state with the click of a button and renders a piece of welcome information:
See the Pen [Passing Function via Props in React]() by David Adeneye.
In the demo above, we have two components. One is the
App component, which is the parent component that contains the app’s state and the function to set the state. The
ChildComponent will be the child in this scenario, and its task is to render the welcome information when the state changes.
Let’s break this down into code:
class App extends React.Component { constructor(props) { super(props); this.state = { isShow: true, }; } toggleShow = () => { this.setState((state) => ({ isShow: !state.isShow })); }; render() { return ( <div> <ChildComponent isShow={this.state.isShow} clickMe={this.toggleShow} /> </div> ); } }
Notice that we’ve set our state to
true, and the method to change the state is created in the
App component. In the
render() function, we pass the state of the app, as the prop
isShow, to the
ChildComponent component. We also pass the
toggleShow() function as a prop named
clickMe.
We will use this in the
ChildComponent which looks like this:
class ChildComponent extends React.Component { clickMe = () => { this.props.clickMe(); }; render() { const greeting = "Welcome to React Props"; return ( <div style={{ textAlign: "center", marginTop: "8rem" }}> {this.props.isShow ? ( <h1 style={{ color: "green", fontSize: "4rem" }}>{greeting}</h1> ) : null} <button onClick={this.clickMe}> <h3>click Me</h3> </button> </div> ); } }
The most important thing above is that the
App component passes down a function as a prop to the
ChildComponent. The function
clickMe() is used for the click handler in the
ChildComponent, whereas the
ChildComponent doesn’t know the logic of the function — it only triggers the function when the button gets clicked. The state is changed when the function is called, and once the state has changed, the state is passed down as a prop again. All affected components, like the child in our case, will render again.
We have to pass the state of the app,
isShow, as a prop to the
ChildComponent, because without it, we cannot write the logic above to display
greeting when the state is updated.
Now that we’ve looked at functions, let’s turn to validation. It’s always a good practice to validate the data we get through props by using PropTypes. Let’s dive into that now.
What Are PropTypes In React?
PropTypes are a mechanism to ensure that components use the correct data type and pass the right data, and that components use the right type of props, and that receiving components receive the right type of props.
We can think of it like a puppy being delivered to a pet store. The pet store doesn’t want pigs, lions, frogs, or geckos — it wants puppies. PropTypes ensure that the correct data type (puppy) is delivered to the pet store, and not some other kind of animal.
In the section above, we saw how to pass information to any component using props. We passed props directly as an attribute to the component, and we also passed props from outside of the component and used them in that component. But we didn’t check what type of values we are getting in our component through props or that everything still works.
It’s totally upon us whether to validate the data we get in a component through props. But in a complex application, it is always a good practice to validate that data.
Using PropTypes
To make use of PropTypes, we have to add the package as a dependency to our application through npm or Yarn, by running the following code in the command line. For npm:
npm install --save prop-types
And for Yarn:
yarn add prop-types
To use PropTypes, we first need to import PropTypes from the prop-types package:
import PropTypes from 'prop-types';
Let’s use ProTypes in our app that lists users’ posts. Here is how we will use it for the
Post component:
Post.proptypes = { id: PropTypes.number, content: PropTypes.string, user: PropTypes.string }
Here,
PropTypes.string and
PropTypes.number are prop validators that can be used to make sure that the props received are of the right type. In the code above, we’re declaring
id to be a number, while
content and
user are to be strings.
Also, PropTypes are useful in catching bugs. And we can enforce passing props by using
isRequired:
Post.proptypes = { id: PropTypes.number.isRequired, content: PropTypes.string.isRequired, user: PropTypes.string.isRequired }
PropTypes have a lot of validators. Here are some of the most common ones:
Component.proptypes = { stringProp: PropTypes.string, // The prop should be a string numberProp: PropTypes.number, // The prop should be a number anyProp: PropTypes.any, // The prop can be of any data type booleanProp: PropTypes.bool, // The prop should be a function functionProp: PropTypes.func // The prop should be a function arrayProp: PropTypes.array // The prop should be an array }
More types are available, which you can check in React’s documentation].
Default Props
If we want to pass some default information to our components using props, React allows us to do so with something called
defaultProps. In cases where PropTypes are optional (that is, they are not using
isRequired), we can set
defaultProps. Default props ensure that props have a value, in case nothing gets passed. Here is an example:
Class Profile extends React.Component{ // Specifies the default values for props static defaultProps = { name: 'Stranger' }; // Renders "Welcome, Stranger": render() { return <h2> Welcome, {this.props.name}<h2> } }
Here,
defaultProps will be used to ensure that
this.props.name has a value, in case it is not specified by the parent component. If no name is passed to the class
Profile, then it will have the default property
Stranger to fall back on. This prevents any error when no prop is passed. I advise you always to use
defaultProps for every optional PropType.
Conclusion
I hope you’ve enjoyed working through this tutorial. Hopefully, it has shown you how important props and propTypes are to building React applications, because without them, we wouldn’t be able to pass data between components when interactions happen. They are a core part of the component-driven and state-management architecture that React is designed around.
PropTypes are a bonus for ensuring that components use the correct data type and pass the right data, and that components use the right type of props, and that receiving components receive the right type of props.
If you have any questions, you can leave them in the comments section below, and I’ll be happy to answer every one and work through any issues with you.
References
- “Thinking in React”, React Docs
- “List and Keys”, React Docs
- “Typechecking With PropTypes”, React Docs
- “How to Pass Props to Components in React”, Robin Wieruch
| https://www.smashingmagazine.com/2020/08/mastering-props-proptypes-react/ | CC-MAIN-2020-40 | refinedweb | 2,133 | 64.3 |
thugkillaMember
Content count79
Joined
Last visited
Community Reputation138 Neutral
About thugkilla
- RankMember
SDL Animation?
thugkilla posted a topic in For BeginnersHello everyone, I'am using SDL to work on a game. I'am not expected to finish it I just want to use it to learn from practice. Ok so I have arrive to the point where I have to use animation. I'am kind of stuck on how I should do it.The Sprite sheet I'am using is 128 by 192 and its a basic 4 by 4 sprite sheet for walking having up,down,left,and right with backround color black. Any ideas on how to do it?
What Programming Books do you Have?
thugkilla posted a topic in GDNet LoungeHello again everyone i decided to make a thread on what programming books you have and how you would rate them. Here are mine(big list for me). -C by Example(3.5/5) Good source if you are trying to learn C for beggininers in my opinion. -Multiplayer Game Programming.(4/5) Very good book ,good examples , very good writing style makes me want to keep reading. -OpenGl SuperBible 3rd Addition(5/5) Very good book one of my favorites ,sadly I cant find time to read it =( -Focus on SDL-(5/5)- Probable the best SDL Reference I have seen it doesn't go deeply into the game programming itself such as animation and all , but none the less is one of my favorites. -Beginning C# Game Programming (3/5)Just bought this today so far so good. -Game Programming All in one 2nd edition(5/5) Even though I don't use Allegro anymore I still find this an excellant source this book showed me how a game was made and how it works. -Beggining C++ Game Programming (5/5) Perfecet beginning C++ source. -C++ Primer Plus (4/5) Its a good Problem Solver not something you would want to reading as a book.
I need proggramming tutorials that actually teach me
thugkilla replied to xanados's topic in For BeginnersIf I was you I would narrow it down to this. Even this is a whole lot. If you knew all this you would be a programmer God in my opionin. By the way just memorizing functions from a book and reciting them isn't enough you need to be able to put them into action. Mess with them alittle, add your own features. Java C# C++ DirectX 10 3D Studio Max Linux Networking OpenGL D3D [redundant, same as Direct3D] STL AI (Developement)
- Just slips out I guess. Some questions about C# does it have any other libraries except Direct X managed? C# does it have network functions and all integrated in? Is there any other Compiler Besides the Microsoft Compilers something more simple? Basically thats all.
- Thanks for the information guys I have decided to return this book and get one on C#. For C++ I'am a pretty good programmer I have been at it for maybe 2 years or so. I consider myself a novice for C++.
Visual Basic keep going or stop?
thugkilla posted a topic in For BeginnersHello everyone recently I have bought a programming book on Visual Basics 6 called "Visual Basics 6 For Dummies". I was wondering should I keep going or return the book and buy something else ? Basically these are my decisions Should I just keep going on with C++ and get into OpenGl Because I'am familar with SDL as well or should I spread my variety and learn Visual Basics?
- Thanks guys that was an excellant explanation.I understand now.=) @Vodka:I don't really have a source code right now.I'am just trying to get the point of memory allocation.
- Yes,I'am talking about new and delete.I just don't get why they are used and when during a game.
Memory Allocation?
thugkilla posted a topic in For BeginnersOk,Heres the thing I get how to use memory allocation,but I don't understand when and why I would use it in a game.An example would be helpful or a tutorial
OpenGl with glut or SDL?
thugkilla posted a topic in For BeginnersHi guys again I know C++ pretty well and I know SDL okay.I got a book on OpenGl and I want to start it soon and I'am thinking should I use Glut or SDL with OpenGL?I want some opinons to help me make up my mind.
thugkilla posted a topic in Your Announcements Check it out our site tell us what you think and visit da forum if you are bored
More SDL problems....
thugkilla replied to thugkilla's topic in For BeginnersThanks man.
More SDL problems....
thugkilla posted a topic in For Beginners#include <SDL/SDL.h> #include <cstdlib> const int screenwidth=640; const int screenheight=480; SDL_Surface* Displaysurface=NULL; SDL_Surface* aliensurface=NULL; SDL_Rect srect,drect; SDL_Event move,quit; int speed;; speed=5; Uint32 color; color=SDL_MapRGB(aliensurface->format,255,0,255); SDL_SetColorKey(aliensurface,SDL_SRCCOLORKEY,color); Uint8* keys; SDL_Event event; keys = SDL_GetKeyState(NULL); for(;;) { SDL_BlitSurface(aliensurface,&srect,Displaysurface,&drect); SDL_UpdateRect(Displaysurface,0,0,0,0); while (SDL_PollEvent(&event) ){ if ( event.type == SDL_QUIT ) { return 1; } if ( event.key.keysym.sym == SDLK_ESCAPE ) { return 1; } if(event.key.keysym.sym==SDLK_DOWN &&event.type==SDL_KEYDOWN) drect.y+=speed;} } return 0; } I cant get da alien to move while da button is down and it is the down button.It movez once and stops.
Moving my Alien around
thugkilla posted a topic in For BeginnersThere are no errors ,but I cant figure out how to move my alien around here is my source #include "SDL/SDL.h"; #include <cstdlib>; const int screenwidth=640; const int screenheight=480; SDL_Surface* Displaysurface=NULL; SDL_Surface* aliensurface=NULL; SDL_Rect srect,drect; SDL_Event move,quit;; Uint32 color; color=SDL_MapRGB(aliensurface->format,255,0,255); SDL_SetColorKey(aliensurface,SDL_SRCCOLORKEY,color); for(;;) { SDL_BlitSurface(aliensurface,&srect,Displaysurface,&drect); SDL_UpdateRect(Displaysurface,0,0,0,0); if(SDL_PollEvent(&move)==SDLK_DOWN) { --drect.y; } } return 0; }
error in SDL project(I haven't programmed in a long time).
thugkilla replied to thugkilla's topic in For Beginners#include SDL/SDL.h; #include "stats.h" int main() { SDL_Init(SDL_INIT_VIDEO); return 0; } #ifndef STATS_H_ #define STATS_H_ struct stats { int x,y; int health; enum mode { dead, alive, shootin }; }; #endif I get these errors still. Compiler: Default compiler Building Makefile: "C:\Dev-Cpp\Makefile.win" Executing make... make.exe -f "C:\Dev-Cpp\Makefile.win" all g++.exe Untitled1.o Project1_private.res -o "Project1.exe" -L"C:/Dev-Cpp/lib" -mwindows -lmingw32 -lSDLmain -lSDL C:/Dev-Cpp/lib/libSDLmain.a(SDL_win32_main.o)(.text+0x39c): In function `console_main': /home/hercules/public_cvs/SDL12/src/main/win32/SDL_win32_main.c:249: undefined reference to `SDL_main' collect2: ld returned 1 exit status make.exe: *** [Project1.exe] Error 1 Execution terminated | https://www.gamedev.net/profile/87203-thugkilla/ | CC-MAIN-2018-05 | refinedweb | 1,141 | 57.57 |
I am new to programming, so I made myself the challenge to create Pong, and so I did. Now I want to share it with a couple of friends, so I decided to try using pyinstaller (have tried cx_Freeze).
In this Pong game I have 3 sound effects, located in the folder "sfx". So I've looked into including files using pyinstaller, so my .spec file says:
added_files = [
('E:\Game Development Stuff\Python 3\Games\Pong\sfx\hitOutline.ogg', 'sfx'),
('E:\Game Development Stuff\Python 3\Games\Pong\sfx\hitPaddle.ogg', 'sfx'),
('E:\Game Development Stuff\Python 3\Games\Pong\sfx/score.ogg', 'sfx')
]
a = Analysis(['pong.py'],
pathex=['E:\\Game Development Stuff\\Python 3\\Games\\Pong'],
binaries=None,
datas=added_files,
and in the Pong program itself, I use this code to get the path:
def resource_path(relative):
if hasattr(sys, "_MEIPASS"):
return os.path.join(sys._MEIPASS, relative)
return os.path.join(relative)
fileDir = os.path.dirname(os.path.realpath('__file__'))
hitPaddle = resource_path(os.path.join(fileDir, "sfx", "hitPaddle.ogg"))
hitOutline = resource_path(os.path.join(fileDir, "sfx", "hitOutline.ogg"))
score = resource_path(os.path.join(fileDir, "sfx", "score.ogg"))
hitPaddleSound=pygame.mixer.Sound(hitPaddle)
hitOutlineSound=pygame.mixer.Sound(hitOutline)
scoreSound=pygame.mixer.Sound(score)
So I make the exe file using pyinstaller (with the command pyinstaller pong.spec)
but when I open the pong.exe file the command window says:
Traceback "<string>", Unable to open file 'E:\\Game Development Stuff\\Python 3\\Games\\Pong\\dist\\pong\\sfx\\hitPaddle.ogg'
but in that exact same path is hitPaddle.ogg
It seems to me that pygame isn't able to found it for some weird reason?
Thanks
Sisoma Gmo Munden
I believe the issue is in this line. You are not refrencing the files correctly. You wrote:
hitPaddle = resource_path(os.path.join(fileDir, "sfx", "hitPaddle.ogg"))
Instead you should of just:
hitpaddle = resource_path("sfx\hitPaddle.ogg")
This is because when you added the files in the spec file, you stated that they should be in "root\sfx" folder. When the .exe is run in onefile mode, all files are actually located in a temp folder called MEIXXXX, with XXXX being some integers. When you run the .exe, if you open this folder you should be able to see your files there. | http://m.dlxedu.com/m/askdetail/3/045334d85dcef9f915350a6d6668c1a8.html | CC-MAIN-2018-22 | refinedweb | 379 | 62.14 |
Many computational problems facing financial engineers and the scientific community today require massive amounts of computing power. There are several ways to deliver this computational performance. Symmetric multiprocessing systems or message-passing interfaces have been the standards for meeting high performance computing needs. However, using parallel web services, which are based on standard protocols and run on commodity x86-based servers, may be a simpler and less expensive way to deliver the computing horsepower needed.
This article demonstrates the power of asynchronous web services by explaining how to set up parallel web services with .NET 2.0 to tackle a Monte Carlo simulation that can approximate the value of Pi to several decimal places. Monte Carlo simulations use randomly generated numbers to evaluate complex problems that are not easily solved with more traditional mathematical methods. Monte Carlo methods are used for modeling a wide range of challenging financial problems and physical systems in scientific research.
The area of a circle is Pi*r^2, where r represents the circle's radius. To estimate the value of Pi, consider the quarter of the circle in the first quadrant, the area prescribed by (Pi*r^2)/4. See Figure below. The square in which the quarter circle is inscribed has an area of r^2. By setting the radius equal to 1, the quarter circle's area is Pi/4 and the area of the square is simply 1. The ratio of the area of the quarter circle to the area of the square is (Pi/4)/1 = Pi/4.
Consequently, Pi equals 4 times the ratio of the area of the quarter circle to the area of the square. A Monte Carlo simulation can be used to estimate this ratio and therefore approximate the value of Pi. The ratio can be estimated by randomly choosing points (x,y) within the square and keeping track of the number of those points that fall within the quarter circle - those where Sqrt(x^2+y^2)<=1 - versus the total number of points tried. Once a good estimate of the ratio is established, Pi can be estimated by multiplying the ratio by 4. The quality of the Pi estimate is dependent on the number of points used. The more points used, the better the estimate of Pi. Depending on the degree of accuracy required, millions or billions of points may be tried. Distributing billions of point calculations across multiple servers running Monte Carlo web services will parallelize the process and generate accurate results quickly.
A web service is essentially a function or method that is available to other machines over a network. Because web services use standardized interfaces such as the Web Services Description Language (WSDL), SOAP, XML and HTTP that are integrated with development tools like Visual Studio 2005, much of the work required to take advantage of web services is already done. Calling a remote web service from an application is only slightly more difficult than calling a local function.
Calling web services or functions across a network may involve a significant amount of latency. Good parallel web service architectures will minimize the frequency of the calls across the network relative to the amount of processing done by each call. This is usually achieved by designing the main computational loop into the web service and designing the client application to specify the number of iterations needed when calling the web service. This type of design will keep network traffic low relative to the amount of processing performed by the cluster of servers.
In the Microsoft environment, web services effectively run on top of IIS. This means that multiple calls to a web service will automatically be multi-threaded at the server. Another benefit of web services is that they are easily clustered by simply deploying the web service on many different servers. Consequently, applications that use web services can parallel-process by spreading the computing requirement across many threads on many CPUs. This is particularly beneficial to "embarrassingly parallel" workloads, which run many parallel computations with essentially no dependency between those computations. Most Monte Carlo simulations fall into the embarrassingly parallel category.
Server Load Balancers make parallel web services even easier. A server load balancer (SLB) distributes multiple client requests across many different servers. By using a server load balancer in front of a cluster of servers, an application can point at the load balancer and treat it like a single server. If additional servers are needed to meet the computational requirements of the client application, they can be added behind the SLB without any modifications to the web service or the client application. SLBs have become extremely sophisticated. However, to parallelize web services, only the most basic features are needed. A "round-robin" distribution with no session affinity will work nicely for most Monte Carlo simulations. More sophisticated SLBs can monitor the network or CPU utilization of each server in the cluster and pass the next request to the least utilized server. Depending on the type of problem being solved, these features can be extremely useful, but are usually not required.
When configuring an SLB for a web service cluster, make sure that session affinity and session persistence are turned off. Session affinity and persistence route traffic from a client system to the same server every time. This is required if the SLB sits in front of servers that are handling web shopping cart applications, but it is exactly the opposite of what is needed for a parallel cluster of web services.
Client applications can also take advantage of a parallel web service architecture without a server load balancer. The calling application simply handles the distribution of requests across the pool of servers itself. However, Windows Server 2003 ships with Network Load Balancing, which effectively balances up to 32 servers without a physical SLB. There are also open source SLBs like "Balancer," which can turn a Linux-based server into a cheap SLB for a compute farm.
The MonteLoop web service shown below generates random points and checks each point to see whether it falls in the quarter circle. MonteLoop requires the arguments Tries and Seed. Tries represents the number of random points to generate. Seed is the seed number for the random number generator. When using random number generators on multiple systems or in multiple threads, it is important to make sure that each instantiation of the random class uses a different seed. By default, the .NET random class uses the system clock to generate the seed. When calls are spread across multiple threads or multiple systems, it is possible to generate identical random sequences using system clocks. Consequently, seeds should be specified by the calling application. MonteLoop returns the object class Monte, which consists of InCircle, the number of points that fell inside the quarter circle, and Tries, the total number of points tested. As described above, these two outputs create the ratio needed to calculate Pi. Technically, Tries need not be returned since the calling application is already aware of the number of tries requested. However, to fully demonstrate the use of web services for Monte Carlo simulations, it is important to show how classes can be returned.
MonteLoop
Tries
Seed
Monte
InCircle();
}
public class Monte
{
public int InCircle = 0;
public int Tries = 0;
}
[WebMethod]
public Monte MonteLoop(int Tries, int Seed)
{
Monte Values = new Monte();
Random Rnd = new Random(Seed);
double x, y;
Values.InCircle = 0;
for (int i = 1; i < Tries; i++)
{
x = Rnd.NextDouble();
y = Rnd.NextDouble();
if (Math.Sqrt(x * x + y * y) <= 1) Values.InCircle++;
//Optimization Note: Sqrt is not needed 1^2=1
}
Values.Tries = Tries;
return (Values);
}
}
To create the MonteLoop web service in Visual Studio 2005, choose File, New Web Site, and ASP.NET Web Service. Make sure that the selected language is Visual C#. Edit the File System name so that it points to the correct directory and ends with a meaningful name. Then click OK. Copy the Monte class and the MonteLoop shown above into the public class Service. If the workstation is configured with IIS and .NET 2.0, test the web service without debugging by pressing ctrl+F5. Internet Explorer will open, showing the service description. Click on the MonteLoop link. Enter 4000 in Tries and 12345 in Seed and then click Invoke. The XML results will be shown in a new window. InCircle should be 3142 and Tries should be 4000.
Next, publish the web service by clicking on Build and then Publish Web Site. Now click the "…" icon under target location. To test on the localhost, choose the Local IIS icon, select "Default Web Site," and click the add directory icon. Enter the name of the new directory and click "Open" then "OK". Test to make sure everything worked correctly by opening Internet Explorer and entering{directory_name}/service.asmx.
An easy way to push the web service to multiple servers is simply to copy the files in the localhost directory to the appropriate servers' IIS directories, typically "C:\Inetpub\wwwroot\{directory name}", and then specify that those directories contain applications under IIS. To specify that a directory contains an application, open the Application Server control panel, click on IIS Manager, find the new directory that contains the web service, right click that directory, select Properties in the Directory Tab, click the Create button, and click OK. To test the web service on the server, simply open IE and enter http://{server IP address}/{directory name}/service.asmx.
With Visual C# 2005 and .NET 2.0, asynchronous web service calls are slightly different than in prior versions. With .NET 2.0, the web service proxy uses methods ending in Async and Completed to make asynchronous calls. In order to use these asynchronous calls, a handler for the Completed event must be registered using the CompletedEventHandler. An asynchronous web service call begins with the Async method. Once the web service has finished execution, the Completed event fires, causing the specified event handler to execute. The specified event handler receives the web service results as event arguments in the second parameter. The next code sample demonstrates how this works.
Async
Completed
CompletedEventHandler
In order to take advantage of the parallel nature of web services, the client application must be able to asynchronously call the web service many times before receiving any results. If the web service sits behind a server load balancer or if the calling application is going to make more than two calls at a time to a particular server -- which is typical when using multicore or hyperthreaded processors -- then the application must specify that more than 2 connections are allowed from the client to the same server address. W3C mandates that only 2 connections are allowed from a client machine to the same address. Visual Studio and .NET enforce this by default. To override the default, set max connections in the configuration section of the app.config file. One strategy is to set the maximum number of connections equal to the total number of CPUs or Hyper-Threaded CPUs available. However, a common rule of thumb is to set the maximum number of connections to 12 times the number of CPUs across which the web service is distributed. Add the following to the app.config file within the <configuration />tag:
<system.net>
<connectionManagement>
<add address="*" maxconnection="8"/>
</connectionManagement>
</system.net>
The client application consists of two major parts: 1) the class created to call the web service asynchronously and 2) the main function. The class consists of two methods: Start and DoneCallBack.
main
Start
DoneCallBack
The Start method configures the web service, sets up the event handler, calls the web service and increments the counter ActiveWebServiceCounter to track the number of called web services that have not yet returned results. The DoneCallBack method is called when the web service Complete event fires. This method adds the results of each individually called web service to the total number of points in the quarter circle and the total number of points tried. DoneCallBack then displays the current value of Pi and the total number of points tried so far. Showing the results as the web service calls return notifies the user that the program is still running and gives a feeling of the current level of precision. DoneCallBack also decrements the ActiveWebServiceCounter. The main portion of the program will check this counter to determine if all of the web service calls have finished running before reporting the final results.
ActiveWebServiceCounter
The main function of the program instantiates an object of the Cluster class for each web service call that is going to be made. It generates a list of random numbers that will be used to seed the random number generators of the web service calls, and then starts a loop to fire off the web service calls. Even though the calls will be asynchronous, the loop will be blocked by the number of maxconnections specified in the app.config file. Consequently, once the number of requests exceeds maxconnections, the application will wait for a web service to complete before making another web service call. After the loop finishes making the predetermined number of web service calls, the program will wait for all of the web services to complete. ActiveWebServiceCounter decrements each time a web service call completes. When ActiveWebServiceCounter reaches zero, all of the web services are done and the totals have been updated. The program then outputs the final value of Pi and the total number of Tries.
Cluster
maxconnections
For testing purposes, the program tracks the starting time, ending time and calculates the total elapsed time required for the Monte Carlo simulation. Note that PiConsoleApp is designed to use remote Web References. Pointing PiConsoleApp at the localhost may generate unexpected results. An unlimited number of connections are allowed to the localhost, regardless of the maxconnection setting. The local system can quickly be swamped by the main loop.
using System;
using System.Collections.Generic;
using System.Text;
namespace PiConsoleApp
{
class Program
{
const int TriesPerWSCall = 10000000;
const int NumWSCalls = 100;
//TriesPerWSCall * NumWSCalls = Total Number of Points
private static int ActiveWebServiceCounter = 0;
public static long ttl_incirc, ttl_tries;
class WSAsyncCall
{
public void DoneCallBack(Object source,
WSReference.MonteLoopCompletedEventArgs e)
//Create Web Reference named "WSReference"
{
WSReference.Monte data = new WSReference.Monte();
data = e.Result;
ttl_incirc += data.InCircle;
ttl_tries += data.Tries;
Console.WriteLine(" Pi = {0:N9} T = {1:N0}",
(double)ttl_incirc / (double)ttl_tries * 4.0, ttl_tries);
ActiveWebServiceCounter--;
}
public void Start(int seed)
{
WSReference.Service ser = new
PiConsoleApp.WSReference.Service();
ser.MonteLoopCompleted += new
WSReference.MonteLoopCompletedEventHandler(DoneCallBack);
ActiveWebServiceCounter++;
ser.MonteLoopAsync(TriesPerWSCall, seed);
}
}
static void Main(string[] args)
{
DateTime startTime = DateTime.Now;
Console.WriteLine("Starting Time: " + startTime);
WSAsyncCall[] WS = new WSAsyncCall[NumWSCalls + 1];
Random seed = new Random();
for (int i = 1; i <= NumWSCalls; i++)
{
WS[i] = new WSAsyncCall();
WS[i].Start(seed.Next());
}
while (ActiveWebServiceCounter > 0) ;
DateTime stopTime = DateTime.Now;
Console.WriteLine("Finish Time: " + stopTime);
TimeSpan duration = stopTime - startTime;
Console.WriteLine("Elapsed Time: " + duration);
Console.WriteLine();
Console.WriteLine(" Pi = {0:N9} T = {1:N0}",
(double)ttl_incirc / (double)ttl_tries * 4.0, ttl_tries);
Console.Read();
}
}
}
To create the PiConsoleApp application, choose File and then New Project. Again ensure that Visual C# is selected.Choose the Console Application icon and enter the name of the application. The name entered will be used as the namespace. Next, copy the WSAsyncCall class, the Main function and the variables from above into the new application under the Program class. Create a Web Reference to the server or servers running the PiMonte web service by clicking the Project drop down and selecting Add Web Reference. In the URL text box, enter the complete URL of the web service, i.e. http://{server_IP_address}/{directory_name}/service.asmx. Remember to NOT use localhost. If the web service cluster is running behind a Server Load Balancer, enter the virtual server name or IP address followed by any server directories and the service name with the .asmx extension. To use the source code unaltered, enter "WSReference" as the name of the Web Reference and click Add Reference. Visual Studio does not always handle renaming references or namespaces correctly, so try to be consistent from the beginning. Next, edit the app.config file's maxconnection tag as described above. At this point, the application should be ready to run.
PiConsoleApp
WSAsyncCall
Main
Program
PiMonte
"WSReference"
maxconnection
In the sample code above, one Web Reference was created by pointing at the IP address of a server load balancer that distributes the requests to multiple servers. If you do not have a load balancer or do not wish to configure the Windows 2003 Network Load Balancer, you can still use web services to parallelize an application. By creating a web reference to each server and then creating a class like the WSAsyncCall class for each reference, you can distribute the web service calls across multiple servers in the main loop. Simply round-robin the web server requests through the number of servers available.
The performance of this parallel web service-based application was tested using two dual core 2.8GHz Xeon processor-based servers running Microsoft Windows Server 2003 R2. The network was a 100Mb network with multiple hops between the client application and the servers. Each test consisted of 16 billion Monte Carlo points.
A server running a single threaded version of the Monte Carlo simulation required 34 minutes and 34 seconds to complete. Two servers using the Network Load Balancing feature included with Server 2003 running parallel web services typically required 4 minutes and 30 seconds. The same two servers using an external Server Load Balancer typically required only 3 minutes and 50 seconds to complete 16 billion Monte Carlo tries. The performance improvement of parallelizing this type of application is astounding.
Running 7 Monte Carlo simulations of 16 Billion tries each, the values of Pi worked out between 3.1415728 and 3.141623 with an average of 3.14159295. The actual value of Pi is 3.14159265. The Monte Carlo result is not bad!
By distributing a Monte Carlo simulation across multiple servers and multiple threads using web services, performance can be greatly improved. Because web services are basically written like traditional functions, they can be easily parallelized without hand-coding a multi-threaded application, custom writing a message passing interface or using other high performance computing management software. Web services provide a relatively simple and straightforward method of distributing parallel problems across multiple compute. | https://www.codeproject.com/Articles/15540/Monte-Carlo-Simulation-Using-Parallel-Asynchronous?msg=1664241 | CC-MAIN-2018-05 | refinedweb | 3,070 | 55.44 |
Hi,
I often use enumerations in code that I want to display as text on the screen. For example the state of an object recorded in an enum like
public enum StageType { PreRelease = 0, Proposed = 1, Open = 2, Closed = 3 }
So if I get the matching value out of the database and I want to render a textual representation of the enum to the UI I might do:
lblStatus.Text = ((StageType) iStage).ToString(); // where iStage is an int value representing the enum.
So this is great apart from the first item PreRelease will be shown without a space.
Step in my amazing class which will parse the string and add a space before a capital so “PreRelease” becomes “Pre Release”
This way I can use any Camel case enum string and have it render to the UI nicely.
public static class StringUtil { /// /// Returns a string with a space after a capital /// /// /// public static string SpaceOnCapital(string word) { StringBuilder sb = new StringBuilder(); foreach (char c in word) { if (sb.Length == 0) { sb.Append(c); } else { // not the first character // ch if (IsCapital(c)) { sb.AppendFormat("{0}", c); } else { sb.Append(c); } } } return sb.ToString(); } public static bool IsCapital(char c) { int ascii = (int)c; return (ascii >= 65 && ascii <= 90); } }
Cheers | https://ntsblog.homedev.com.au/index.php/2010/09/17/stringutil-class-space-before-a-capital-letter/ | CC-MAIN-2020-24 | refinedweb | 209 | 71.85 |
The Windows 8 app I’m currently working on requires me to send some data over a wireless network using UDP. This is usually done by using the Socket class contained in the System.Net.Sockets namespace. However, this isn’t supported in Windows RT, which replaces the behavior of Socket with several equivalent classes. The class used for UDP is DatagramSocket. While the code using DatagramSocket is relatively short and simple, there are some intricacies involved in getting it to function properly, so in this post I’ll make things simpler by providing a working example.
At a conceptual level, the steps required to open a UDP connection and send data are very simple:
1. Create a local socket.
2. Connect to a remote host.
3. Send data to the remote host.
The code itself is short, so let’s break it down line-by-line. Bonus points if you can guess what I’m working on from the IP address and port numbers!
Creating the local socket
The first step is creating the socket, in this case a DatagramSocket:
DatagramSocket udpSocket = new DatagramSocket();
One advantage of DatagramSocket is that I don’t have to specify what kind of protocol I want to use, because it sends exclusively UDP packets.
Next I need to bind the socket to a local port, which specifies which port you want the UDP packets to originate from. This is done by calling the DatagramSocket.BindServiceNameAsync() method. In my case, I’ll be using the port number 5556:
await udpSocket.BindServiceNameAsync(“5556”);
It’s as simple as passing a string containing the port number. I use the await keyword here because BindServiceNameAsync() is an asynchronous method. For more documentation on the await keyword and Windows RT Tasks, check the resources at the end of the article.
Connecting to a remote host
The second step is to open a connection to a remote host. In this example I want to connect to the IP address 192.168.1.1, but the DatagramSocket.ConnectAsync() method takes an instance of the HostName class as an argument. So first I need to create a new HostName to represent the remote host:
HostName remoteHost = new HostName(“192.168.1.1”);
Again, the argument I’m passing is a string containing the IP address or URL I want to connect to. I can then open the connection by calling ConnectAsync and passing the HostName I just created and the port I want to send data to on the remote host:
await udpSocket.ConnectAsync(remoteHost, “5556”);
Sending data to the remote host
Once the connection is established, it’s time to send data. The DatagramSocket class doesn’t contain any methods for sending packets, so I need to use the aptly-named DataWriter class. The first step of sending data is to create a new DataWriter, passing it the OutputStream property of my DatagramSocket, udpSocket:
DataWriter udpWriter = new DataWriter(udpSocket.OutputStream);
Now, say I want to send the string “juncti juvant” to the remote host. DataWriter doesn’t have any single method to handle this operation; sending data is a two part process. First I need to write the data to the OutputStream:
udpWriter.WriteString(“juncti juvant”);
Check out the documentation on the DataWriter class to see what other kinds of data you can write.
The data isn’t heading to the remote host yet; now I need to actually make the DataWriter send a UDP packet. To do this I call the DataWriter.StoreAsync() method:
await udpWriter.StoreAsync();
And with that, a UDP packet containing the payload “juncti juvant” is on its way to the remote host!
My combined code to send a UDP packet consists of the following:
// Create a new socket and bind it to a local port DatagramSocket udpSocket = new DatagramSocket(); await udpSocket.BindServiceNameAsync(“5556”); // Open a connection to a remote host HostName remoteHost = new HostName(“192.168.1.1”); await udpSocket.ConnectAsync(remoteHost, “5556”); // Send a data string to the remote host in a UDP packet DataWriter udpWriter = new DataWriter(udpSocket.OutputStream); udpWriter.WriteString(“juncti juvant”); await udpWriter.StoreAsync();
Resources
DatagramSocket:
HostName:
DataWriter:
await keyword and Tasks:
Just to make sure I understand, is this class available from the Windows RT environment?
Yes, it's available in the RT environment!
How do know what exceptions could be thrown, for example if the address were in use when: BindServiceNameAsync()
As far as I know, that information isn't listed in any central place. The code example at the bottom of the following link will give you an idea of how to deal with that method throwing exceptions msdn.microsoft.com/…/jj635238.aspx | https://blogs.msdn.microsoft.com/trycatchfinally/2012/09/06/udp-and-windows-8-apps/ | CC-MAIN-2017-13 | refinedweb | 775 | 55.24 |
Doing some practice problems for a class:
In this recitation you will be developing some C functions that will help get you started on the next assignment.
You will be using the 8-bit floating point format given in Figure 2.34 of the text[Computer Systems 2nd Edition, Bryant and O'Hallaron], with one sign bit, 4 exp bits and 3 frac bits. In the problems below, we will use an unsigned char to hold the value.
Read the implementation notes before starting.
1. Write a program that takes a single command line parameter.
The parameter is a number represented either in decimal or hexadecimal.
Use strtol with second parameter NULL and base equal to 0 to convert this to a long.
Print out the value both in decimal and hexadecimal.
//ok, so my code for the first part is:
Code:#include <stdio.h> #include <unistd.h> #include <stdlib.h> long int strtol(const char *nptr, char **endptr, int base); int main(int argc, const car *argv[]){ long x = 0; if (argc != 2){ fprintf(stderr, "Usage: %s <number>\n", argv[0]); return (-1); } x = strtol(argv[1], NULL, 0); printf("int: %d\thexadecimal: 0x%x\n", (int)x, (int)x); return(1); }
2. Write the function:
int getSign(unsigned char x);
that returns 1 if the sign bit is 0 and -1 if the sign bit is 1.
Add a statement to your main program to test this using the command line parameter converted to an unsigned char.
My question is:
How do I go about reading the most significant bit of the unsigned char?
Is there some way to read the bits out or should i just convert it to base 2, or maybe i just have to see if its greater than 127 as a regular long or int?
if so, then
Code:int getSign(unsigned char x){ if ((long)x > 127){ return (-1); } return (1); }
that should work, no?
notes state: Most of the functions in Part 1 (except for getValue) can be written with 1, 2, or 3 lines of code in the body of the function. You can create a double value of infinity by dividing 1 or -1 by 0.0, but if the denominator is a constant, the compiler might complain.
You can create a double value of NaN by calculating 0.0/0.0, but again the denominator should not be a constant.
sorry if these seem like weird/stupid questions. it's been a while since i used C and i really need to grasp all this computer arithmetic stuff soon... test next monday and it's going to be mostly bit manipulation and etc | http://cboard.cprogramming.com/c-programming/130315-find-msb-unsigned-char.html | CC-MAIN-2015-27 | refinedweb | 445 | 72.87 |
I want to do a Doughnut/Donut chart on JavaFX and searching I came to this example: Can PieChart from JavaFX be displayed as a doughnut?
I Works really nice, but since I'm using FXML to make my GUI, I can't use this example. First, I tried to add the DoughtnutChart.java class as a @FXML var in the controller class of the panel where I want to insert it, but launched errors.
Then, searched in Google to make the DoughnutChart a custom component, but all the examples are based on Panes. Also, If I try to import my donu.jar to SceneBuilder, the window to select a component is empty.
So, my question is: How do I implement this Doughnut Chart on JavaFX when my GUI is made on FXML?
Thanks a lot.
It's hard to tell what the cause of your error is without seeing the FXML and the error message.
I got this to work pretty easily: the one thing to be aware of is that the
FXMLLoader instantiates classes by invoking the no-argument constructor. If it can't find one, it tries to use a builder class as a back-up plan. So the one modification you need to make to @jewelsea's DoughnutChart implementation is to add a no-argument constructor. (You could also define a
DoughnutClassBuilder, but that's a lot more work, and doesn't get you any extra benefit.) So I did this:
package doughnut ; // imports as before... public class DoughnutChart extends PieChart { private final Circle innerCircle; public DoughnutChart() { this(FXCollections.observableArrayList()); } // everything else as before... }
Then the following FXML:
<?xml version="1.0" encoding="UTF-8"?> <?import javafx.scene.layout.StackPane?> <?import doughnut.DoughnutChart?> <StackPane xmlns: <DoughnutChart fx: </StackPane>
with the controller SampleController.java:
package doughnut; import javafx.fxml.FXML; import javafx.scene.chart.PieChart; public class SampleController { @FXML private PieChart doughnutChart ; public void initialize() { doughnutChart.getData().addAll( new PieChart.Data("Grapefruit", 13), new PieChart.Data("Oranges", 25), new PieChart.Data("Plums", 10), new PieChart.Data("Pears", 22), new PieChart.Data("Apples", 30)); } }
and the application class
package doughnut; import javafx.application.Application; import javafx.fxml.FXMLLoader; import javafx.scene.Scene; import javafx.scene.layout.StackPane; import javafx.stage.Stage; public class Main extends Application { @Override public void start(Stage primaryStage) throws Exception { StackPane root = (StackPane)FXMLLoader.load(getClass().getResource("DoughnutChartDemo.fxml")); Scene scene = new Scene(root,400,400); primaryStage.setScene(scene); primaryStage.show(); } public static void main(String[] args) { launch(args); } }
work exactly as expected. | https://codedump.io/share/48Ce4jJfU4FY/1/javafx-donut-chart-in-fxml | CC-MAIN-2017-13 | refinedweb | 419 | 52.76 |
CodePlexProject Hosting for Open Source Software
I am trying to serialize an generic list like the following:
[JsonObject]
class Test {
[JsonProperty(TypeNameHandling = TypeNameHandling.All)]
public List<Base> Listing { get; set; }
}
public interface Base { }
[JsonObject]
public class Derived1 : Base {
}
Test t = new Test();t.Listing = new List<Base>{new Derived1());
When serializing an instance of Test class, it stores the type info of Base (instead of Derived1) in the Json string; thus deserialization will fail. I searched the forum and found that the Dictionary<string,object> will work. But anything like Dictionary<string,Base>
will not work during deserialization. Do I need to use JsonConvert to custom convert my classes, or is there an easy way to do it? I tried all combinations of TypeNameHandling and none works.
Forgot to mention this is on .NET 4.0 with Json.NET 4.0 release 1.
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | https://json.codeplex.com/discussions/253644 | CC-MAIN-2016-44 | refinedweb | 179 | 67.96 |
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
On Mon, May 5, 2014 at 6:01 PM, Jeff Law <law@redhat.com> wrote: > On 05/05/14 02:22, Richard Biener wrote: >>> >>> >>>? > > But this approach is going to be inconsistent with Andrew's work, right? > ISTM we'd end up with something like... > > So statements would be "gimple" > types would be "gimple_type" > expressions would be "gimple_expr" Well, I hope that Andrew doesn't do without a namespace (and I still don't believe in what he tries to achieve without laying proper ground-work throughout the compiler). With a namespace gimple we can use gimple::stmt. > It's a bit of bikeshedding, but I'd prefer "gimple_stmt". If you really > feel strongly about it, I'll go along without objection, but it seems wrong > to me. Less typing. But yes, it's bikeshedding. Still gimple_stmt is redundant information in almost all context. 'stmt' was opposed to elsewhere so we settle on 'gimple' (which is already existing). IMHO not changing things is the best bikeshedding argument. >> >>>.... Agreed on that, btw. But switch_ can't be the answer either. Maybe swidch (similar do klass) or swjdch. Or swtch. I like swtch the best (similar to stmt). Richard. > Jeff | https://gcc.gnu.org/ml/gcc-patches/2014-05/msg00230.html | CC-MAIN-2019-18 | refinedweb | 216 | 77.43 |
HOUSTON (ICIS)--Here is Monday’s end of day ?xml:namespace>
CRUDE: Mar WTI: $96.43/bbl, down $1.06; Mar Brent: $106.04/bbl, down 36 cents
NYMEX WTI crude futures finished down, tracking a sell-off in the stock market in response to released data showing slowdown in manufacturing and in factory activity in the US and in China.
RBOB: Mar $2.6069/gal, down 2.45 cents
Reformulated blendstock for oxygen blending (RBOB) gasoline futures for the March contract settled lower on Monday, its first day as the prompt month. RBOB followed crude oil futures lower, as a weak stock market and worries over slowing global economic growth pressured commodities prices.
NATURAL GAS: Mar $4.905/MMBtu, down 3.8 cents
The March contract closed the first trading session of the new week down for a third consecutive day as the market continued to readjust following the volatility that took contracts to four-year highs last week. However, unsettled weather forecasts should keep trade activity liquid over the coming weeks.
ETHANE: lower at 34.00 cents/gal
Ethane spot prices traded lower during the day, tracking weak natural gas futures.
AROMATICS: mixed xylene down at $3.57-3.64/gal, toluene wider at $3.64-3.72/gal
Prompt mixed xylene (MX) spot prices were discussed at $3.57-3.64/gal FOB (free on board) on Monday, sources said. The range was down from $3.64-3.66/gal FOB the previous session. Meanwhile, prompt toluene spot prices were discussed at $3.64-3.72/gal FOB, wider from $3.67-3.70/gal FOB on Friday.
OLEFINS: Feb ethylene offered lower at 54.5 cents/lb, Feb PGP done lower at 69 cents/lb
US February ethylene offer levels fell to 54.5 cents/lb from 55.0 cents/lb against no fresh bids. US February polymer-grade propylene (PGP) traded lower at 69.0 cents/lb, down from the previous reported trade at 71.5 cents/lb.
For more pricing intelligence please visit | http://www.icis.com/resources/news/2014/02/03/9749868/evening-snapshot-americas-markets-summary/ | CC-MAIN-2016-26 | refinedweb | 339 | 70.6 |
I don’t like C#, but the free version of Visual Studio only lets you use the interface builder in C#/.net programs, so here we are. My goal was to cheat and write the interesting parts of the program in C/C++, compiling to a .dll, and calling it from C#.
This has turned out to be an ordeal.
I have a great handle on calling functions in a .dll from Python. The CTypes module is amazing, and incredibly well documented. C# has a module (service?) in the System.Runtime.InteropServices namespace and the Platform Invoke service for “consuming un-managed code”, but it has been a real pain getting it to work.
I’ll admit that 80% of the problem is that I’m still fairly new to C#, but there is no shortage of information online for vanilla C# programming. Interfacing to a .dll seems to be uncommon enough that it’s hard to find exactly what I’m looking for when I run into a use-case that hasn’t been discussed much.
Here’s what I’m getting at. Let’s say I have a function in my .dll as such:
INTEROPSTRUCTTEST_API int GetInteger() { return 42; }
Here’s what the function in C# would look like for interfacing with GetInteger() in the .dll (which I put in a class called InteropStructTestDLLWrapper):
[DllImport("InteropStructTest.dll")] public extern static int GetInteger();
And here’s how to call that in C#:
int int_from_dll = 0; int_from_dll = InteropStructTestDLLWrapper.GetInteger();
This behaves exactly as you would expect – it returns the number 42. Things start getting weird when pointers are involved.
Function in .dll:
INTEROPSTRUCTTEST_API void PassIntegerPointer(int *i) { *i = 27; }
Function for calling the .dll function in C#:
[DllImport(InteropStructTest.dll, CallingConvention = CallingConvention.Cdecl)] public extern static void PassIntegerPointer([In, Out]ref int i);
Calling the function in C#:
InteropStructTestDLLWrapper.PassIntegerPointer(ref int_from_dll);
Now you have to deal with [In, Out] and ref, and CallingConvention.Cdecl. Much of this was guess-and-check to get working using information I gleaned from dozens of StackOverflow posts and other blogs. Things start getting extra weird when you want to pass a pointer to a struct or array of structs.
I decided it was best to just start making a sample .dll and .cs program that demonstrated a clear use-case for passing various data types to and from a .dll. Something that I could reference and add to as I learn. So far it has returning integers, passing a pointer to integer to be modified, passing a pointer to a struct to be modified, and passing an array of structs to be modified (which was super hard to find anything online about).
Right now it has examples of all the things I’ll have to do in small side-project I’m working on. I’ll flesh it out as needed.
Hopefully I’ll save someone some time. I’m embarrassed at how much time I burned getting this far! | https://chrisheydrick.com/2014/12/19/brief-example-of-calling-dll-functions-in-c/ | CC-MAIN-2017-34 | refinedweb | 497 | 66.94 |
It's not the same without you
Join the community to find out what other Atlassian users are discussing, debating and creating.
I've been scouring the interwebs (and Atlassian Answers) trying to find an answer to this question and either I'm looking in the wrong place or such an answer doesn't exist. If you are are general user in a JIRA instance, is there a way through the UI, to determine who are the project admins of a project?
We have a small group of "spare time" system admins trying to support over 1K+ users. We get a lot of requests for, "please grant me permissions to project X". We would prefer to have people go directly to the project admins of a project, both because we don't have the bandwidth to handle such requests and we don't know if these people should have such access on a given project in the first place. We would much prefer they go directly to the project admins for such request. However, unless they have a way to find out who they are, we end up having to look it up for them.
Hi Jason,
You can achieve this by using Script Runner Web Panel with a code similar to the one below. This will add a panel to every issue in the right section which will show a list of users which are in the project Administrators role.
This sample code is tested with web panel location atl.jira.view.issue.right.context.
import com.atlassian.jira.component.ComponentAccessor import com.atlassian.jira.config.properties.APKeys; import com.atlassian.jira.issue.Issue import com.atlassian.jira.security.roles.ProjectRoleManager; def issue = context.issue as Issue def roleManager = ComponentAccessor.getComponent(ProjectRoleManager) def adminsRole = roleManager.getProjectRole("Administrators") def actors = roleManager.getProjectRoleActors(adminsRole, issue.projectObject) def users = actors.collect({ it.applicationUsers }).flatten() def baseUrl = ComponentAccessor.getApplicationProperties().getString(APKeys.JIRA_BASEURL) def$it.displayName</a> </li>/$ } writer.write("<ul>$html</ul>")
here's a preview of the result:
2016-06-10_0935.png
Although this doesn't directly answer your immediate problem it may be a solution for your organization. If you find that your JIRA admins are being bombarded with these types of requests, is it an option to use JIRA itself to build a workflow to manage the request?
In other words, I want to be added to a project, so I create a JIRA issue e.g a Project Access Request. With the magic of custom fields and post-functions we can probably work out a mechanism to route the request to the appropriate project owner.
I've built this for others so let me know if you are interested and I can share the details.
There is no way to find it out from the UI. A simple add-on can expose the details but it is not there by default.
It might make sense to list out the project owners in your company intranet (or confluence if you have it), although it comes with the extra burden of keeping the list. | https://community.atlassian.com/t5/Jira-questions/How-can-a-general-user-find-out-who-are-the-project-admins-of-a/qaq-p/376847 | CC-MAIN-2018-17 | refinedweb | 513 | 56.66 |
Installation¶
BlenderNEURON installation is a simple two part process which enables communication between NEURON and Blender.
For Part 1, you can choose either the HOC (NEURON default) or Python versions. Both have identical functionality (HOC version has a wrapper that allows loading the library by opening a .hoc file).
HOC GUI Installation¶
Install this part if you generally use NEURON HOC interface (NEURON default).
Part 1: HOC NEURON Client
- If you haven’t already, install NEURON
- Download the latest HOC version (e.g. blender_neuron_HOC_xxx.zip)
- Extract the zip file and open the BlenderNEURON.hoc file using NEURON. You should see the BlenderNEURON interface.
Python Library Installation (Optional)¶
Install this part if you generally use NEURON+Python interface. The library will work if you use the
-python flag to launch NEURON or you have compiled NEURON Python module and can from neuron import h from Python console.
Part 1: Python NEURON Client
- If you haven’t already, install NEURON
- Using the same Python environment as NEURON, use the
pip install blenderneuroncommand to install the BlenderNEURON client library
- Start NEURON+Python, and type
from blenderneuron.quick import bnto load the BlenderNEURON interface
Blender Addon Installation¶
- Part 2: Blender Addon Server
- If you haven’t already, install and open Blender
- Download the BlenderNEURON addon (e.g. blender_neuron_addon_xxx.zip). Note: On MacOS, Safari browser may automatically extract the zip file, make sure you use the unextracted .zip file in the next step.
- In Blender, click File > User Preferences > Add-ons (tab) > Install Add-on From File > Point to the addon .zip file
- Tick the checkbox next to ‘Import-Export: NEURON Blender Interface’ to load the addon. Then click “Save User Settings”.
- If you see a NEURON tab on the left side of the screen, the addon has been loaded successfully.
Test NEURON-Blender Connectivity¶
Once you have the NEURON HOC/Python module and Blender Add-on activated, check whether NEURON can communicate with Blender with the following steps:
- From NEURON, at the bottom of the BlenderNEURON window, click “Test Connection” button
- If you have Blender running, with the add-on installed and loaded (checkbox checked), the connection status should say “Ready”. It means that NEURON can succesfully communicate with Blender.
| http://blenderneuron.org/docs/installation.html | CC-MAIN-2019-18 | refinedweb | 366 | 53.81 |
Pre.
Since its initial release, Preact’s maintainers have published several versions to address issues and add features. In October, Preact X rolled out with several updates designed to solve common pain points and improve existing features.
Let’s go over some of the recent changes and discuss how they can help us develop better applications using PreactJS.
N.B., this tutorial assumes a basic understanding of PreactJS or ReactJS. To learn more about Preact, read the library’s official guide.
New capabilities and improvements in Preact X
Preact’s maintainers have added major improvements to support many of the latest React features. Let’s review some of the most interesting new capabilities.
Fragments
Fragments let you group lists of children without adding extra nodes to the DOM because they are not rendered to the DOM. You can employ this feature where you would normally use a wrapper
div. It’s most useful when working with lists, tables, or CSS flexbox.
Consider the following markup:
class Table extends Component { render() { return ( <table> <tr> <Columns /> </tr> </table> ); } } class Columns extends Component { render() { return ( <div> <td>One</td> <td>Two</td> </div> ); } }
The rendered result will be invalid HTML because the wrapper
div from the
Columns component is rendered inside the
<tr> in the
Table component.
With fragments, you can render outputs on the DOM without adding any extra elements.
class Columns extends Component { render() { return ( <> <td>One</td> <td>Two</td> </> ); } }
Now, the output will be valid HTML, because no extra
div is added to the DOM. Fragments can be written in two ways:
BY : import { Fragment, render } from 'preact'; function TodoItems() { return ( <Fragment> <li>A</li> <li>B</li> <li>C</li> </Fragment> ) } or function TodoItems() { return ( <> <li>A</li> <li>B</li> <li>C</li> </> ) }
To learn more, read the Components article in the official Preact X guide.
Hooks
Hooks are an alternative to the class-based component API. Hooks allow you to compose state and stateful logic and easily reuse them between components. Preact X offers a lot of hooks out of the box as well as the ability to create custom hooks. You can import hooks from
preact/hooks or
preact/compat.
import {useState, useCallback} from 'preact/hooks'; or import {useState, useCallback} from 'preact/compat'; function Counter() { const [value, setValue] = useState(0); const increment = useCallback(() => setValue(value + 1), [value]); return ( <div> Counter: {value} <button onClick={increment}>Increment</button> </div> ); }
The above code is a counter component that increments in value when clicked. It utilizes the
useState and
useCallback hooks provided in the Preact X API. As shown, the code is also the same as you would write in React.
N.B., hooks are optional and can be used alongside class components.
componentDidCatch
Preact X includes an update to the
componentDidCatch lifecycle method, which is called after your component renders. This allows you to handle any errors that happen during rendering, including errors that happen in a lifecycle hook but excluding any asynchronously thrown errors, such as after a
fetch() call. When an error is caught, you can use this lifecycle to react to any errors and display a nice error message or any other fallback content.
class Catcher extends Component { state = { errored: false } componentDidCatch(error) { this.setState({ errored: true }); } render(props, state) { if (state.errored) { return <p>Something went badly wrong</p>; } return props.children; } }
In the above code, we call the
componentDidCatch(), which is invoked as soon as the component is rendered. If an error is caught, you can update your component to let users know an error occurred and log entries to logging services.
This ensures a much cleaner codebase and an even easier error tracking. The official guide has more information about
componentDidCatch().
createContext
Context provides a way to pass data through the component tree without having to pass props down manually at every level. Although context is not new to Preact, the legacy API
getChildContext() is known to have issues when delivering updates deeper down the virtual DOM tree.
A context object is created via the
createContext(initialValue) function. It returns a
Provider component that is used to set the context value and a
Consumer one that retrieves the value from the context.
import {useContext} from 'preact/compat'; const Theme = createContext('light'); function DisplayTheme() { const theme = useContext(Theme); return <p>Active theme: {theme}</p>; } // ...later function App() { return ( <Theme.Provider <OtherComponent> <DisplayTheme /> </OtherComponent> </Theme.Provider> ) }
Changes to Preact core
Previously,
preact-compat was included as a separate package. It is now included in the same package as Preact itself; there’s nothing extra to install to use libraries from the React ecosystem.
// Preact 8.x import React from "preact-compat"; // Preact X import React from "preact/compat";
Preact X also now directly supports CSS custom properties for styling Preact components. The Preact team specifically made sure to include several popular packages in the testing process to guarantee full support for them.
Conclusion
In this tutorial, we explored some features introduced in Preact X. To see a concrete list of all the changes and learn more about the new releases, be sure to check out the Preact release page on GitHub.
What is your favorite new feature or API? Feel free to share your thoughts in the. | https://blog.logrocket.com/whats-new-in-preact-x/ | CC-MAIN-2022-05 | refinedweb | 873 | 53.92 |
OCaml’s static typing allows it to detect many problems at compile-time. Still, some bugs slip though. In this post, I go over each discovered bug that made it into a Git commit and try to work out why it happened and whether it could have been prevented.
Note: As this post is entirely about bugs, it may appear rather negative. So let me say first that, overall, I’ve been very impressed with the reliability of the OCaml code: I’d have expected to find more bugs than this in 27,806 lines of new code!
Table of Contents
- Methodology
- Core OCaml issues
- Lwt-related bugs
- Curl-related bugs
- GTK bugs
- Logic errors
- Python bugs
- Summary
( This post is part of a series in which I am converting 0install from Python to OCaml, learning OCaml as I go. The code is at GitHub/0install. )
Methodology
I’ve gone back through the Git commit log and selected all the ones that say they fix a bug in the comment, starting from when I merged the initial OCaml code to master (on 2013-07-03). It’s possible that I sneakily fixed some bugs while making other changes, but this should cover most of them. Any bug that made it into a released version of 0install should certainly have its own commit because it would have to go on the release branch. I also included a few “near misses” (bugs I spotted before committing, but which I could plausibly have missed).
In a number of cases, I wrote and committed the new OCaml code first, and then ported the Python unit-tests in a later commit and discovered the bug that way (so some of these could never have made it into an actual release). Compile-time bugs have been ignored (e.g. code that didn’t compile on older versions of OCaml); I’m only interested in run-time errors here.
I’ve classified each bug as follows:
- Inexperience
- This bug was caused by my inexperience with OCaml. Making proper use of OCaml’s features would avoid this class of bug entirely.
- Third-party
- Caused by a bug in a library I was using (and hopefully now fixed). Could only have been discovered by testing.
- Poor API
- This bug was my fault, but could have been avoided if the library had a better API.
- Warning needed
- My fault, but the compiler could have detected the problem and issued a warning.
- Testing only
- I only know how to find such bugs through testing. Similar bugs could happen again.
Core OCaml issues
Note: I’m grouping the bugs by the library the code was interacting with, regardless of whether that library was at fault. This section is for bugs that occurred when just using OCaml itself and its standard library.
Out-of-range integers
Everything seemed to be working nicely on my Arch system, but the first run on a Debian VM gave this unhelpful error:
Failure "int_of_string"
I was trying to store a Unix timestamp in an
int. Unlike Python, OCaml’s integers are limited precision, having 1 bit less than the machine word size. The 32-bit VM only had 31-bit integers and the time value was out of range for it.
This was entirely my fault. OCaml always uses floats to represent times, and they work fine. I was just converting to ints to be consistent with the Python code.
However, the error message is very poor. I replaced all calls to
int_of_string with my own version, which at least displays the number it was trying to convert. This should make debugging any similar problems easier in future.
Type: Inexperience
Fails to start on Windows
Windows kept complaining that my program didn’t exist and to check that the path was correct, even when I was double-clicking on the executable in Explorer! Turns out, Windows refuses to run binaries with certain character sequences in their names (“instal” being one such sequence). See Vista doesn’t start application called “install” w/o being elevated.
Solution: you have to embed an XML “manifest” in Windows binaries to avoid this behaviour. Would be nice if OCaml did that automatically for you.
Type: Third-party
Spawning a daemon fails on Windows
Windows doesn’t have
fork, so the usual double-fork trick doesn’t work.
Solution: Use
create_process on Windows.
Would be nice if OCaml grouped all the POSIX-only functions together and made you check which platform you were on. Then you’d know when you were using platform-specific functions. e.g.
Type: Poor API
0install select ignores
--refresh
I forget to handle the
Refresh case for the “select” command.
Different commands need to handle different subsets of the options. I was using a plain variant (enum) type and throwing an exception if I got an option I wasn’t expecting:
(Note: Several people have asked why I used a default match case here. It’s needed because there are many options that don’t apply to the “select” command. The option parser makes sure that each sub-command’s handler function is only called with options it claims to support.)
Solution: I switched from plain variants to polymorphic variants and removed the default case. Now, the type-checker verifies at compile-time that each subcommand handles all its options:
See Option Handling With OCaml Polymorphic Variants for a write-up of that.
Type: Inexperience
Not found errors
When printing diagnostics about a failed solve, we check each interface to see if it has a replacement that it conflicts with. e.g. the new “Java” interface replaces (and conflicts with) the old “Java 6” interface. But if the conflicting interface wasn’t used in the solve, we’d crash with:
Exception: Not_found
I use a lot of maps with strings as the keys. I therefore created a
StringMap module in my
common.ml file like this:
StringMap.find raises
Not_found if the key isn’t found, which is never what you want. These exceptions are awkward to deal with and it’s easy to forget to handle them.
A nice solution is to replace the definition with:
This redefines the
find method to return an option type. Now you can’t do a
StringMap.find without the compiler forcing you to consider the case of the key not being there.
Would be nice if the OCaml standard library did this. Perhaps providing a
Map.get function with the new behaviour and deprecating
Map.find?
Type: Poor API
Octal value
I used
0700 instead of
0o700 to set a file mode. Would be nice if OCaml warned about decimals that start with 0, as Python 3 does.
Type: Warning needed
Parsing a path as a URL
This didn’t actually get committed, but it’s interesting anyway. Downloaded feeds are signed with GPG keys, which are trusted only for their particular domains. At one point, I used
Trust.domain_from_url feed.url to get the domain. It was defined as:
However, feeds come in different types: there are remote feeds with URLs, and local feeds with local paths (there are also virtual “distribution” feeds representing the output from the distribution’s package manager).
I was trying to get the trust domain for all feeds, not just remote ones where it makes sense.
Once again, the solution was to use polymorphic variants. The three different types of feed get three different constructors. A method (such as
domain_from_url) that only works on remote feeds is declared as:
Then, it’s impossible to call it without first ensuring you’ve got a remote feed URL.
This change also improves the type-safety of many other parts of the code (e.g. you can’t try to download a local feed now either), and uncovered another bug: you couldn’t use the GUI to set the stability rating for a distribution-provided implementation, because one of the functions used only worked for non-distribution feeds.
Type: Inexperience (x2)
Interrupted waitpid
The
Unix.waitpid function can raise
EINTR if the system call is interrupted, although the documentation doesn’t mention this. It would be nice if OCaml would automatically restart the call in this case (as Python’s
subprocess module does).
Type: Poor API
HTTP redirects with data cause corrupted downloads
We download to a temporary file. If we get an HTTP redirect, we truncate the file and try the new address. However,
ftruncate doesn’t reset the position in the file. So, if the redirecting site sent any data in its reply, you’d get that many zeroes at the start of the download. As with
waitpid, OCaml’s behaviour is standard POSIX, but not mentioned in the OCaml documentation.
Solution:
seek_out ch 0.
Also, I updated the test server used in the unit-tests to send back some data when doing a redirect.
Type: Testing only
Setting wrong mtime
Unix.utimes is supposed to set the mtime and atime of a file to the given values. However, the behaviour is a little odd:
- When 1 <= time < infinity, it sets it to the requested time.
- When 0 <= time < 1 however, it sets it to the current time instead.
That’s a problem for us, because we often use “0” as the time for files which don’t have a timestamp, and the time is part of the secure hashes we calculate.
Solution: I wrote a C function to allow setting the time to whatever value you like.
This bug didn’t make it into a commit because I hit it while writing a test script (I was trying to reset a timestamp file to time zero), and the unit-tests would have caught it if not, but it’s still a poor API. Not only does it fail to use a variant type to handle different cases, but it chooses a magic value that’s a valid input!
Or, rather than using a variant type for these two cases, it could just drop the magic current time feature completely - it’s easy enough to read the current time and pass it explicitly if you need it. That would make the code clearer too.
(note: the documentation does say “A time of 0.0 is interpreted as the current time”, but it’s easy to forget this if it wasn’t relevant the first time you read the docs)
Type: Poor API
Strict sequences
This didn’t make it into a commit, but it’s interesting anyway. Simplified version of the problem:
This prints:
START END
Why is there no warning? You might expect OCaml would infer the type of
fn as
unit -> unit and then complain that the function we pass has the wrong type (
unit -> string -> unit).
In fact, although OCaml warns if you ignore the result of a function that doesn’t return unit, it’s not actually an error. So it actually infers the type of
fn as (
unit -> 'a), and it compiles fine.
Solution: always compile with
-strict-sequence (or put
true: strict_sequence in your
_tags file)
Type: Inexperience
Lwt-related bugs
Lwt process fails with empty string
When spawning a process, the Lwt docs say you can pass an empty string as the binary to run and it will search for the first argument in
$PATH for you. However, that behaviour was added only in Lwt 2.4 and using the older version in Debian it failed at runtime with a confusing error.
Probably I should have been reading the old version of the docs (which the web-site helpfully lets you do).
I’m classifying this as a poor API because it was caused by using
"" as a magic value, rather than defining a new constructor function.
Type: Poor API
EPIPE on Windows
On Windows, we couldn’t read the output of GnuPG. This was due to a bug in Lwt, which they quickly fixed:
Lwt_io.read fails on Windows with EPIPE
Type: Third-party
Deleting temporary files
We download various files to a temporary directory. In some cases, they weren’t being deleted afterwards.
Solution: the downloader now takes a mandatory Lwt switch and deletes the file when the switch is turned off. Callers just have to wrap the download call in a
try ... finally block, like this:
To make this completely foolproof, you’d need something like the linear types from Rust or ATS, but this is good enough for me.
Type: Inexperience
Race shutting down test HTTP server
Some of the unit tests run a simple HTTP server. When the test is over, they use
Lwt.cancel to kill it. However, it appears that this call is unreliable: depending on exactly what the server is doing at the time it might just ignore it and continue.
Solution: we both cancel the task and set a boolean flag, which we test just before calling
accept. If we’re in
accept at the time of the cancel, the thread will abort correctly. If it’s anywhere else, it may continue handling the current request, but will quit as soon as it finishes and checks the flag.
Would perhaps be nice if Lwt remembered that an attempt was made to cancel the thread during a non-cancellable operation, and killed it at the next opportunity.
Type: Poor API
A related race occurred if we spawned a child process while handling an HTTP request, because the child would inherit the client socket and it would never get closed.
Solution: Use
Lwt_unix.set_close_on_exec connection as soon as the connection is accepted.
Note that both these hacks should be race-free, because Lwt is cooperatively multi-threaded (e.g. we can’t spawn a subprocess between accepting a connection and marking it
close_on_exec). I think.
Ideally, when spawning a child process you’d specify the file descriptors you wanted it to inherit explicitly (Go does this, but really it needs to be at the POSIX level).
Type: Testing only
(although these bugs only occurred in the unit-tests, I’m including them because they could just as easily appear in the main code)
Reactive event handler gets garbage collected
The OCaml D-BUS bindings use functional reactive programming to report property values. The idea is that you get an object representing the continuously-varying value of the property, rather than a particular sample of it. Then you can handle the signal as a whole (for example, you can get the “progress” signal from a PackageKit transaction and pass it to a GUI progress monitor widget, so that the widget always shows the current progress). You can build up chains of signal processors. For example, you might transform a “bytes downloaded” signal into a “percentage complete” one.
The technique seems to come from Haskell. Being purely functional, it’s always safe to garbage collect a signal if no-one is holding a reference to it.
However, OCaml is not purely functional. You might easily want to evaluate a signal handler for its side-effects.
I created a handler to monitor the transaction status signal to see when it was finished, and attached the resulting signal to a
Lwt_switch.
My plan was that the switch would keep it alive until it fired.
That didn’t work, because there was a subtle circular reference in the whole scheme, and OCaml would sometimes garbage-collect the handler and the switch. Then the process would ignore the finished event and appear to hang. I asked on StackOverflow and got some suggestions:
The solution seems to be to keep references to all active signals in a global variable. Rather messy.
Type: Testing only
Stuck reading output from subprocess
When using the
Lwt_glib module to integrate with the GTK mainloop, the
HUP response from
poll is ignored. This means that it will call
poll endlessly in a tight loop.
Patch
Type: Third-party
Downloads never complete
When using
Lwt_glib, downloads may never complete. This is because OCaml, like Python, has a global lock and
Lwt_glib fails to release it when calling
poll. Therefore, no other thread (such as the download thread) can make progress while the main thread is waiting (e.g. for the download thread to finish).
Patch
Type: Third-party
Event loop doesn’t work on OS X
Lwt_glib passes
-1000 to
poll to mean “no timeout”. This works on Linux, but not on BSD-type systems.
Patch
Type: Third-party
Curl-related bugs
Failing to reset Curl connections
For efficiency, Curl encourages the reuse of connections. However, I forgot to reset some parameters (max file size and expected modification time). If the next call didn’t use them, it would reuse the old values and possibly fail.
Newer versions of ocurl have a
reset function, which avoids these problems.
Type: Poor API
Error with cancelled downloads
Downloads were sometimes failing with this confusing error:
easy handled already used in multi handle
It happened when reusing connections (which Curl encourages, for efficiency). There was no direct way to cancel a download, so I handled cancellation by closing the channel the download was writing to. Then, next time some data arrived, my write callback would fail to write the new data and throw an exception, aborting the download. It turned out that this was leaving the connection in an invalid state.
Solution: return 0 from the handler instead of throwing an exception.
Ideally, ocurl should catch exceptions from callbacks and allow the C code to clean up properly. Now fixed.
Type: Third-party
GTK bugs
Sorted treeview iter mix-up
(I caught this before committing it, but it’s a nasty bug that could easily be missed. It was present for a while in the original Python version.)
The cache explorer dialog allows you to delete implementations from the cache by selecting an item and pressing the Delete button. It also allows you to sort the table by clicking on the column headings. However, if you sort the table and then delete something, it deletes the wrong thing!
To make a sortable table (which is just a special case of a tree to GTK), you first create an underlying (unsorted) list model, then wrap it with a sorted model, then pass that to the display widget (GtkTreeView), like so:
To do things with the model, you pass it a GtkTreeIter, which says which item you want to act on, e.g.
The trouble is, sorted and unsorted GtkTreeIters both have the same type, so you can easily pass an iterator of the sorted model as an argument to the unsorted model. Then it will act on the wrong item. If the view isn’t sorted then everything works fine, so you might not notice the problem while testing.
Solution: I created a new module for unsorted lists. The implementation (
unsorted_list.ml) just proxies calls to the real code:
However, the interface (
unsorted_list.mli) makes the types
t (the model) and
iter (its
GtkTreeIters) abstract, so that code outside of the module isn’t allowed to know their real types:
Now it’s impossible to mix up sorted and unsorted types:
It’s still possible to mix up iterators in some cases (e.g. between two different instances of a sorted model), but that’s a much less likely mistake to make.
Another way to solve the problem would be to bundle the owning model with each iterator, but that would be a big change to how the underlying GTK library works.
And ATS could solve this easily using its dependant types, by declaring the iterator type as
iter(m) (“iterator of model at address m”), linking models to iterators in the type system.
Type: Poor API
Crashes with GtkIconView
As with GtkTreeView, you can make a sorted GtkIconView with a pair of models. For some reason, clearing the underlying model didn’t clear the sorted version, and repopulating it corrupted memory:
Program received signal SIGSEGV, Segmentation fault.
Solution: since there is no UI to let the user change the sort column, I just removed the sorted model and sorted the underlying model myself in OCaml. I guess this is probably a GTK bug.
Type: Third-party
A second crashing bug with GtkIconView is caused by a bug in lablgtk.
The C wrapper
get_path_at_pos returns a
tree_path option (
None if you clicked in a blank area), but the OCaml declaration says it returns a plain
tree_path.
Solution: use
Obj.magic to do an unchecked cast to the correct type:
(reported as segfault due to GtkIconView type mismatch)
Two interesting things about this bug:
- Even the low-level GTK bindings are presumably not generated automatically. If they were, this kind of mismatch surely couldn’t happen.
- An OCaml optional pointer doesn’t have the same representation as a non-optional pointer. If it did, the code wouldn’t crash. This suggests OCaml is being a bit inefficient about option types, which is disappointing.
Type: Third-party
Logic errors
The remaining bugs aren’t very interesting, but I’ve included them for completeness:
- Failed to handle ambiguous short options (e.g.
-r)
- Missing tab completion for
0install add
0install runignores
--guioption
- Infinite loop handling recursive dependencies
- Allow absolute local paths in local implementations
- Default command for
--sourceshould be
compile, not
run
- Allow
machine:nullin JSON response
- Typo: scripts start
#!not
!#
- Report an error if we need to confirm keys during a headless background update
- Wrong attribute name in overrides XML file
- Allow executing selections with no command but an absolute main
- Handle local-path on
<group>elements
- Wrong attribute name when saving user overrides
- Typo:
"https://"not
"https://'"
- Don’t try to use GUI in
--dry-runmode
- Support old version of selections XML format
- Support old selections without a
from-feedattribute
- Don’t send an empty GetDetails request to PackageKit
- Cope with dpkg-query returning a non-zero exit status
- Detect Python even if python-gobject is missing
- Race when cancelling downloads
Type: Testing only (x21)
Python bugs
Just for interest, here are the Python bugs discovered over the same period (it doesn’t make sense to compare bug counts, because these are bugs in mature code, often several years old, not just-written new code).
I think these would be impossible or unlikely in OCaml (the problem would be detected at compile time):
- Error setting machine type for Cygwin packages (type error)
- When loading selections from a file, convert
last-check-mtimeattribute to an int (type error)
UnicodeErrorextracting or generating a manifest for archives with non-ASCII file names (no encoding of file names in OCaml :-)
- Crash when specifying bold text (PyGTK API change)
- Broken clipboard handling in cache explorer (PyGTK API change)
- Broken filtering in cache explorer (PyGTK API change)
- Broken cache explorer menu when using a filter (model mix up; could avoid with abstract types, as above)
- Fails running an app when the master feed is no longer cached (would be forced to handle None case in OCaml)
These would likely still be bugs in OCaml:
- Fix mtime check in
selections.get_unavailable_selectionsfor native packages
- Always return False for native packages in
needs_downloadif
include_packagesis False
- Always use the prefix “xml” for the XML namespace
- Don’t abort solving just because a local feed is missing
- Escape tooltips in cache explorer
- 32-bit size limit in cache explorer
This was a third-party bug:
- Workaround for PyGTK garbage collection bug
Summary
Despite the newness of the code, the bug-rate has been surprisingly low so far. Of the (detected) bugs that did make it past the compiler, about a sixth were due to bugs in third-party libraries, another sixth could have been avoided with better third-party APIs and a sixth we due to my inexperience with OCaml. For the remaining half, more testing is still the only way I can see to find such bugs.
It’s a shame that OCaml seems to have no system for deprecating old APIs. This means that poor API choices made years ago are still causing trouble today. It would be good if OCaml could flag functions as being there for historical reasons only, and issue a compiler warning if you used them. I do, however, like the fact that they stay around - breaking existing code (as Python 3 did) is not the solution either!
Two of the bugs (“Deleting temporary files” and “Reactive event handler gets garbage collected”) could have been avoided if OCaml had linear types, but I have reasonable solutions to both. The XML / JSON handling bugs could have been avoided by using proper schemas, but such schemas didn’t exist (my fault).
Overall, I’m pretty happy with the bug rate so far. No doubt more bugs will be discovered as the new code makes its way into the distributions and gains more users, but I think this code will be easier to maintain than the Python code, and much less likely to break silently due to changes in the language or third-party libraries. | https://roscidus.com/blog/blog/2014/01/07/ocaml-the-bugs-so-far/ | CC-MAIN-2019-22 | refinedweb | 4,158 | 59.94 |
Hello Everybody!
Hope your fine!
I am using Daz only for a while. I have clicked though the whole install process quickly before, but now I would like to read the terms a bit better. Where can I find them again?
Also, I would like to know how and when I can use the Daz avatars/genesis actors that I made in other software and final artwork. I would like to find a way to model some garments and show them on a custom avatar, but I would need to know if I am allowed. Can I make animation and film or whatever and just sell the film for example, with a daz genesis actor in it?
Could somebody send me a link to some valid information about this?
Thank you,
kind regards,
marjolein
The only one that is available at this time is the Gaming EULA which is different than the General EULA.
ARTCollaborations Store on DAZ3D ARTCollaborations on Facebook
*****Down On The Corner*****
HI lijlijlijntje
The End User Licence Agreement is part of every installer for every product sold here,
(it’s that part you have to agree to, before you can install the software, or any products you purchase in the store.
You can run any product installer, and read that agreement,.
then cancel the installation process. by choosing.. ( I do not accept the agreement ) (see pic)
In simple terms, you can use the products to create 2D images and animations, and those images and animations are YOUR work.
You can sell Your work,.(the images you create using the software and the models you’ve purchased licences to use)
You cannot redistribute the 3D models, or Software, or break apart any of the models or software, and you cannot try to sell the models or software as your own.
As with all things legal,.. you should read the agreement “fully” for each product, as the terms may be different for Software, and Models.
Hope it helps
Oops : I’m too slow at typing ..
There have been some recent changes to the EULA agreement that have not been included in some of the older content installers. You can find the most current EULA here:
Link removed
Thanks!
I know, but I forgot most that was in, just wanted to double check!
Great, thanks!!!!
I’ve removed the link from jasmine’s post as it is not the correct one - that is the EULA that goes with the Game Developer Licenses. The on-line versions of the regular content and application licenses have been lost in the site upgrade, this has been reported to DAZ.
DAZ Studio Frequently Asked Questions
Index of free DAZ Studio scripts and plugins list
Ok, so there is going to be a version online in the future. When will it be back up? Can you post that link here than? thank you in any case!
I’ve no idea when the EULAs will return (there are at least two, one for content and one for applications). I would hope soon. | http://www.daz3d.com/forums/viewthread/1842/ | CC-MAIN-2015-27 | refinedweb | 507 | 70.94 |
Hi all,
How can I get the lines of Text that is showing in the game, not in the inspector window, as u see in the picture:
myText = GetComponent<Text>();
string[] lines = myText.text.Split('\n');
try{print(lines[0]);}catch{}
try{print(lines[1]);}catch{}
//real output :
//New Text
//What I expect:
//New
//Text
Honestly without some trial and error I'm not sure if I can provide the correct solution.
What I can tell you, though, is that I've used a StringBuilder in the past to populate the contents of a Text field. When I used stringBuilder.AppendLine(someText) multiple times, it did correctly populate the contents of the Text field on separate lines which indicates that the line endings do somehow come across, at least internally. Being able to expose that internal string, however, is a different story. Perhaps the "useRichText" boolean may change the getter of ".text". Have you tried setting this to false and then getting the text?
Answer by sarahnorthway
·
Dec 31, 2016 at 02:40 AM
Text.cachedTextGenerator has the info you need. It won't show correct line splitting until the next frame, but you can force it to update early by calling Canvas.ForceUpdateCanvases().
myText.text = "this is a very extremely super duper incredibly unbelievably very long sentence";
Canvas.ForceUpdateCanvases();
for (int i = 0; i < myText.cachedTextGenerator.lines.Count; i++) {
int startIndex = myText.cachedTextGenerator.lines[i].startCharIdx;
int endIndex = (i == myText.cachedTextGenerator.lines.Count - 1) ? myText.text.Length
: myText.cachedTextGenerator.lines[i + 1].startCharIdx;
int length = endIndex - startIndex;
Debug.Log(myText.text.Substring(startIndex, length));
}
Thank you very much! The way you get the endIndex is so clever! (And t?x:y conditional operator is so usefull!)
Answer by juicyz
·
Sep 27, 2016 at 07:03 PM
You need to first get the GameObject that it is associated with. Then you need to get the text component of it. Then once you have the Text component, you can use .text to get/set the text.
To get the Gameobject, you can do something like GameObject.Find(); //can take in a tag or game object name.
Then a .GetComponent(); to get the text component
Then .text.
// You also need the correct imports, think it's only these two.
// Might have a text one too, cant remember. You can figure that out
using UnityEngine;
using UnityEngine.UI;
GameObject textGO = GameObject.Find("Text");
Text textInfo = textGO.GetComponent<Text>();
string[] lines = textInfo.text.Split('\n');
try{print(lines[0]);}catch{}
try{print(lines[1]);}catch{}
For reference:
This is actually attached to the text object, and the "myText" is the Text component that is attached to the gameobject. my problem is the myText.text returns the value that has been entered in the inspector, but I want what player see in the game window.
I will edit my code in first post.
anyway thanks for the reply
ohhh I see your problem... haha, that's funny.
So the text field is "New Text" as said in the inspector. The split isn't working because the text is "New Text". The reason the text on the screen is different is because the text box isn't big enough to support the string "New Text" on one line so it wraps it. There is no '\n'
You could split it on ' ' (space) instead but I don't think that is what you want?
string[] lines = myText.text.Split(' ');
Will give you:
New
Text
No , that's not what I want, I need more general solution.
There must be a way to see how many characters can a line hold in UI/Text, then we can split the words and count the length of every word and see by adding which word the line length goes over that maximum length. then we can consider it one line.
but I can't find such a property for UI/Text.
Answer by KeraStudios
·
Oct 22, 2017 at 05:53 PM
text = myText.GetComponent(); lines = text.cachedTextGenerator.line.
Change alpha color of specific letter in a Text UI with C#
2
Answers
Multiple Rich Text Not Working
1
Answer
Selecting a part of string in input field
1
Answer
Remove the & at the end of concatenated string
0
Answers
Changing just the name of person in UI Text
0
Answers | https://answers.unity.com/questions/1249043/get-lines-of-a-ui-text.html | CC-MAIN-2019-43 | refinedweb | 720 | 66.94 |
The current work-in-progress version of GraphGrow has three components that communicate via OSC over network. The visuals are rendered in OpenGL using texture array feedback, this process is graphgrow-video. The transformations are controlled by graphgrow-iface, with the familiar nodes and links graphical user interface. The interface runs on Linux using GLFW, and I'm working on an Android port for my tablet using GLFM. The component I'll be talking about in this post is the graphgrow-audio engine, which makes sounds using an audio feedback delay network with the same topology as the visual feedback network. Specifically, I'll be writing up my notes on what I did to make it around 2x as CPU efficient, while still making nice sounds.
First up, I tried gprof, but after following the instructions I only got an empty profile. My guess is that it doesn't like JACK doing the audio processing in a separate realtime thread. So I switched to perf:
perf record ./graphgrow-audio perf report
Here's the first few lines of the first report, consider it a baseline:
Overhead Command Shared Object Symbol 34.41% graphgrow-audio graphgrow-audio [.] graphgrow::operator() 18.11% graphgrow-audio libm-2.24.so [.] expm1f 14.27% graphgrow-audio graphgrow-audio [.] audiocb 8.69% graphgrow-audio libm-2.24.so [.] sincos 8.34% graphgrow-audio libm-2.24.so [.] __logf_finite 4.47% graphgrow-audio libm-2.24.so [.] tanhf
That was after I already made some algorithmic improvements: I had 32 delay lines, all of the same length, with 64 delay line readers in two groups of 32, each group reading at the same offset every sample. This meant there was a lot of duplicated work calculating the delay line interpolation coefficients. I factored out the computation of the delay coefficients into another struct, which could be calculated 1x per sample instead of 32x per sample. Then the delay readers are passed the coefficients, instead of computing them themselves.
Looking at what to optimize, the calls to expm1f() seem to be a big target. Looking through the code I saw that I had 32 dynamic range compressors, each doing RMS to dB (and back) conversions every sample, which means a lot of log and exp. My compressor had a ratio of 1/8, so I replaced the gain logic by a version that worked in RMS with 3x sqrt instead of 1x log + 1x exp per sample:
index a6ba512..588d098 100644 --- a/graphgrow3/audio/graphgrow-audio.cc +++ b/graphgrow3/audio/graphgrow-audio.cc @@ -549,18 +549,25 @@ struct compress sample factor; hip hi; lop lo1, lo2; + sample thresrms; compress(sample db) : threshold(db) , factor(0.25f / dbtorms((100.0f - db) * 0.125f + db)) , hi(5.0f), lo1(10.0f), lo2(15.0f) + , thresrms(dbtorms(threshold)) { }; signal operator()(const signal &audio) { signal rms = lo2(0.01f + sqrt(lo1(sqr(hi(audio))))); +#if 0 signal db = rmstodb(rms); db = db > threshold ? threshold + (db - threshold) * 0.125f : threshold; signal gain = factor * dbtorms(db); +#else + signal rms2 = rms > thresrms ? thresrms * root8(rms / thresrms) : thresrms; + signal gain = factor * rms2; +#endif return tanh(audio / rms * gain); }; };
This seemed to work, the new perf output was:
Overhead Command Shared Object Symbol 38.89% graphgrow-audio graphgrow-audio [.] graphgrow::operator() 22.11% graphgrow-audio libm-2.24.so [.] expm1f 10.78% graphgrow-audio libm-2.24.so [.] sincos 5.76% graphgrow-audio libm-2.24.so [.] tanhf
The numbers are higher, but this is actually an improvement, because if graphgrow::operator() goes from 34% to 39%, everything else has gone from 66% to 61%, and I didn't touch graphgrow::operator(). Now, there are still some large amounts of expm1f(), but none of my code calls that, so I made a guess: perhaps tanhf() calls expm1f() internally? My compressor used tanh() for soft-clipping, so I tried simply removing the tanh() call and seeing if the audio would explode or not. In my short test, the audio was stable, and CPU usage was greatly reduced:
Overhead Command Shared Object Symbol 60.53% graphgrow-audio graphgrow-audio [.] graphgrow::operator() 17.62% graphgrow-audio libm-2.24.so [.] sincos 11.51% graphgrow-audio graphgrow-audio [.] audiocb
The next big target is that sincos() using 18% of the CPU. The lack of 'f' suffix tells me this is being computed in double precision, and the only place in the code that was doing double precision maths was the resonant filter biquad implementation. The calculation of the coefficients used sin() and cos(), at double precision, so I swapped them out for single precision polynomial approximations (9th order, I blogged about them before). The approximation is roughly accurate (only a bit or two out) for float (24bits) which should be enough: it's only to control the angle of the poles, and a few cents (or more, I didn't check) error isn't so much to worry about in my context. Another big speed improvement:
Overhead Command Shared Object Symbol 85.48% graphgrow-audio graphgrow-audio [.] graphgrow::operator() 11.45% graphgrow-audio graphgrow-audio [.] audiocb 1.22% graphgrow-audio libc-2.24.so [.] __isinff 0.64% graphgrow-audio libc-2.24.so [.] __isnanf 0.41% graphgrow-audio graphgrow-audio [.] graphgrow::graphgrow
perf has a mode that annotates the assembly and source with hot instructions, looking at that let me see that the resonator was using double precision sqrt() when calculating the gain when single precision sqrtf() would be enough:
Overhead Command Shared Object Symbol 88.70% graphgrow-audio graphgrow-audio [.] graphgrow::operator() 8.49% graphgrow-audio graphgrow-audio [.] audiocb 1.24% graphgrow-audio libc-2.24.so [.] __isinff 0.65% graphgrow-audio libc-2.24.so [.] __isnanf
Replacing costly double precision calculations with cheaper single precision calculations was fun, so I thought about how to refactor the resonator coefficient calculations some more. One part that definitely needed high precision was the calculation of 'r = 1 - t' with t near 0. But I saw some other code was effectively calculating '1 - r', which I could replace with 't', and make it single precision. Again, some code was doing '1 - c * c' with c the cosine of a value near 0 (so 'c' is near 1 and there is catastrophic cancellation), using basic trigonometry this can be replaced by 's * s' with s the sine of the value. However, I kept the final recursive filter in double precision, because I had bad experiences with single precision recursive filters in Pure-data (vcf~ had strongly frequency-dependent ring time, porting to double precision fixed it).
Overhead Command Shared Object Symbol 87.86% graphgrow-audio graphgrow-audio [.] graphgrow::operator() 9.18% graphgrow-audio graphgrow-audio [.] audiocb 1.37% graphgrow-audio libc-2.24.so [.] __isinff 0.68% graphgrow-audio libc-2.24.so [.] __isnanf
The audiocb reponds to OSC from the user interface process, it took so much time because it was busy-looping waiting for the JACK processing to be idle, which was rare because I still hadn't got the CPU load down to something that could run XRUN-free in realtime at this point. I made it stop that, at the cost of increasing the likelihood of the race condition when storing the data from OSC:
Overhead Command Shared Object Symbol 96.87% graphgrow-audio graphgrow-audio [.] graphgrow::operator() 1.49% graphgrow-audio libc-2.24.so [.] __isinff 0.80% graphgrow-audio libc-2.24.so [.] __isnanf
Still not running in realtime, I took drastic action, to compute the resonator filter coefficients only every 64 samples instead of every 1 sample, and linearly interpolate the 3 values (1x float for gain, 2x double for feedback coefficients). This is not a really nice way to do it from a theoretical standpoint, but it's way more efficient. I also check for NaN or Infinity only at the end of each block of 64 samples (if that happens I replace the whole block with zeroes/silence), which is also a bit of a hack - exploding filters sound bad whatever you do to mitigate it, but I haven't managed to make it explode very often.
Success: now it was using 60% of the CPU, comfortably running in real time with no XRUNs. So I added in a (not very accurate, but C2 continuous) rational approximation of tanh() to the compressor that I found on musicdsp (via Pd-list):
signal tanh(const signal &x) { signal x2 = x * x; return x < -3.0f ? -1.0f : x > 3.0f ? 1.0f : x * (27.0f + x2) / (27.0f + 9.0f * x2); }
CPU usage increased to 72% (I have been doing all these tests with the CPU frequency scaling governor set to performance mode so that figures are comparable). I tried g++-8 instead of g++-6, CPU usage reduced to 68%. I tried clang++-3.8 and clang++-6.0, which involved some invasive changes to replace 'c?a:b' (over vectors) with a 'select(c,a,b)' template function, but CPU usage was over 100% with both versions. So I stuck with g++-8.
The last thing I did was an algorithmic improvement: I was doing 4x the necessary amount of work in one place. Each of 4 "rules" gets fed through 4 "edges" per rule, and each edge pitchshifted its rule down an octave. By shifting (ahem) the pitchshifting from being internal to the edge to being internal to the rule, I saved 15% CPU (relative) 10% CPU (absolute), now there are only 8 pitchshifting delay lines instead of 32.
In conclusion, today I brought down the CPU usage of graphgrow-audio down from "way too high to run in realtime", to "60% of 1 core", benchmarking on an Intel Atom N455 1.66GHz netbook running Linux in 32bit (i686) mode. A side channel may be lurking, as the CPU usage (in htop) of graphgrow-audio goes up to 80% temporarily while I regenerate my blog static site... | https://mathr.co.uk/blog/2018-09-17_optimizing_audio_dsp_code.html | CC-MAIN-2020-34 | refinedweb | 1,653 | 55.95 |
By Kari Sherrodd, Senior Manager, Citizenship and Public Affairs:
[View:]
.
The Springonline training Framework has changed into a broadly used framework for era of business Java programs due to the countless advantages that it gives. Yet, there are some facets of operating with the Spring Framework that can be enhanced through the use of the Spring IDE plugin that is contained with Oracle Business Pack for Eclipse.
1 of the challenges that programmers new to the Spring online training Framework see as they learn about the theoretical model is working and comprehending with the Extensible Markup Language-established Spring setup files. These characteristics contain a “Layout” perspective of Spring XML configuration files, Spring component and aspect finish, and detection and telling of any noncompliant entries in the Spring context files.
The following two pictures reveal how to pick a Spring undertaking from the New Job Wizard and how to add special details on the Spring undertaking when creating it.“> spring
Spring Framework is a Java platform that provides comprehensive infrastructure support for developing Java applications. Spring handles the infrastructure so you can focus on your application.
Springonline.
Spring who have 10 plus years of experiance in our training instutution give to the student be
In Springtime online training. Java it is extremely common to make an ObjectFactory or ApplicationContext from an exterior XML settings file This performance is additionally given in Spring.NET. Yet, in Dot Net the System.Configuration namespace offers support for controlling application configuration details. The features in this namespace really depends on the option of particularly called files: Net.“>java spring online training
is the name of your executable. Included in the compilation procedure, in case you own a file-name App.config in the basis of your task, the compiler will rename the file to exe file.config and put it in the run time executable folder.
These program setup files are XML-BASED and include settings sections that may be referenced by title to regain custom settings objects. As a way to tell the .NET settings system just how to generate a personalized settings item from any of these sections, an execution of the interface, IConfigurationSectionHandler, must be enrolled. Spring.NET supplies two executions, one to make an IApplicationContext from a portion and still another to configure the circumstance with object definitions within an section. The section is quite expressive and strong. It gives complete support for finding all IResource via hierarchical circumstances and Uri syntax without cryptography or utilizing more verbose XML as could be needed in the present model of Springtime.Java
I just saw this competetion on 5th March at 6:30 local time and I wanted to enter my plans about I want to cause change in the world but the deadline was past. Do I still have a chance to enter the challenge? If yes when ?
By Elisa Willman, senior manager, Citizenship and Public Affairs
We lined up a great panel of judges
By Kari Sherrodd, senior manager, Citizenship and Public Affairs
After reviewing impressive entries
Hi sir,
How to send my .net app to certified authority of microsoft
Is any competition is live now.
I’m eagerly waiting to send my app to microsoft
Please let me know the process
My email id is balamurali5a3@hotmail.com
krishnabala342@gmail.com
ThanQ
balu
Pingback from Microsoft announces 2014 YouthSpark Challenge for Change finalists | The Fire Hose
Its a great thing introuduced by microsoft. I am sure, it will help many entrepreuners like me. I am going to reblog this in my company blog page. | https://blogs.technet.microsoft.com/microsoftupblog/2014/02/11/microsoft-youthspark-challenge-for-change-contest-is-back/ | CC-MAIN-2017-22 | refinedweb | 595 | 53.1 |
Components with Angular 6 CLI
First of all, navigate to
src/app/ open the
app.component.html file and delete the content of it. The website should be blank. As you can see there are some files the CLI has created for us. It's the
app.component.ts file, this is the main component file from our app. A
app.component.css stylesheet, a
app.component.html view file and a
app.component.spec.ts file for testing purposes and we will talk about it later. Open the
app.component.ts and you will see these files are registered in the Component already!.
Create a component with Angular 6 CLI
Now let's use the Angular CLI to create a new component. This is straight forward, type
ng generate component post-list. Your output should look something like this
CREATE src/app/post-list/post-list.component.css (0 bytes) CREATE src/app/post-list/post-list.component.html (28 bytes) CREATE src/app/post-list/post-list.component.spec.ts (643 bytes) CREATE src/app/post-list/post-list.component.ts (280 bytes) UPDATE src/app/app.module.ts (406 bytes)
As you can see, the CLI takes care of everything, it created a directory posts-list, a stylesheet file, the view file, the test file, registered these files in
post-list.component.ts and registered the component in the
app.module.ts file. For now just ignore the
app.module.ts we will talk about it later.
If you open
post-list.component.ts you will see the
templateUrl and the
styleUrls properties referencing the automatically generated files.
@Component({ selector: 'app-post-list', templateUrl: './post-list.component.html', styleUrls: ['./post-list.component.css'] })
As you may have noticed there is also a property
selector this is the identifier for the created component which we can reference from any other component. So let's do this by opening the
app.component.html file and add following tag attribute.
<app-post-list></app-post-list>
Check your website and you should see post-list-works!. Great you have created your first component and used it inside another component!
Create a list of posts with Angular 6
Navigate to the
post-list.component.ts file and remove the inheritance of
OnInit to have a minimal class and a clean start. See the code below.
import { Component} from '@angular/core'; @Component({ selector: 'app-post-list', templateUrl: './post-list.component.html', styleUrls: ['./post-list.component.css'] }) export class PostListComponent {}
Let's create a mocked list of posts. Add the following code inside class.
posts = [ { title: 'Great jobs at arconsis IT-Solutions' }, { title: 'Join our team at arconsis IT-Solutions' }, { title: 'At arconsis we work with newest technologies' }, ];
Now open
post-list.component.html and add following code inside it.
<div * {{post.title}} </div>
What is this? you may ask yourself. It's a foreach loop inside a .html file. How is this possible? Angular comes with a own template syntax which provides some built-in directives to do such things. Here you can find a good explanation from the official site. Check out the site and you will see a list of post titles! BUT there is a disadvantage with this approach. What if we want to create a list of old posts like an archive? We would have to create a new component with the same logic.
There is better way, let's create a more generic component in the next section Create a generic component.
Feel free to leave your feedback or suggestions here. You can find me at Twitter or simply send me an email to tomislav.eric@arconsis.com. | https://arconsis.de/unternehmen/blog/components-with-angular-6-cli | CC-MAIN-2019-18 | refinedweb | 611 | 61.43 |
PEP 484 -- Type Hints
Contents
- Abstract
- Rationale and Goals
- The meaning of annotations
- Type Definition Syntax
- Acceptable type hints
- Using None
- Type aliases
- Callable
- Generics
- User-defined generic types
- Scoping rules for type variables
- Instantiating generic classes and type erasure
- Arbitrary generic types as base classes
- Abstract generic types
- Type variables with an upper bound
- Covariance and contravariance
- The numeric tower
- Forward references
- Union types
- Support for singleton types in unions
- The Any type
- The NoReturn type
- The type of class objects
- Annotating instance and class methods
- Version and platform checking
- Runtime or type checking?
- Arbitrary argument lists and default argument values
- Positional-only arguments
- Annotating generator functions and coroutines
- Compatibility with other uses of function annotations
- Type comments
- Casts
- NewType helper function
- Stub Files
- Exceptions
- The typing Module
- Suggested syntax for Python 2.7 and straddling code
- Rejected Alternatives
- PEP Development Process
- Acknowledgements
- References
Abstract
PEP 3107 introduced syntax for function annotations, but the semantics were deliberately left undefined. There has now been enough 3rd party usage for static type analysis that the community would benefit from a standard vocabulary and baseline tools within the standard library.
This PEP introduces a provisional module to provide these standard definitions and tools, along with some conventions for situations where annotations are not available.
Note that this PEP still explicitly does NOT prevent other uses of annotations, nor does it require (or forbid) any particular processing of annotations, even when they conform to this specification. It simply enables better coordination, as PEP 333 did for web frameworks.
For example, here is a simple function whose argument and return type are declared in the annotations:
def greeting(name: str) -> str: return 'Hello ' + name. (While it would of course be possible for individual users to employ a similar checker at run time for Design By Contract enforcement or JIT optimization, those tools are not yet as mature.)
The proposal is strongly inspired by mypy [mypy] . For example, the type "sequence of integers" can be written as Sequence[int] . The square brackets mean that no new syntax needs to be added to the language. The example here uses a custom type Sequence , imported from a pure-Python module typing . The Sequence[int] notation works at runtime by implementing __getitem__() in the metaclass (but its significance is primarily to an offline type checker).
The type system supports unions, generic types, and a special type named Any which is consistent with (i.e. assignable to and from) all types. This latter feature is taken from the idea of gradual typing. Gradual typing and the full type system are explained in PEP 483 .
Other approaches from which we have borrowed or to which ours can be compared and contrasted are described in PEP 482 .
Rationale and Goals
PEP 3107 added support for arbitrary annotations on parts of a function definition. Although no meaning was assigned to annotations then, there has always been an implicit goal to use them for type hinting [gvr-artima] , which is listed as the first possible use case in said PEP.
This PEP aims to provide a standard syntax for type annotations, opening up Python code to easier static analysis and refactoring, potential runtime type checking, and (perhaps, in some contexts) code generation utilizing type information.
Of these goals, static analysis is the most important. This includes support for off-line type checkers such as mypy, as well as providing a standard notation that can be used by IDEs for code completion and refactoring.
Non-goals
While the proposed typing module will contain some building blocks for runtime type checking -- in particular the get_type_hints() function -- third party packages would have to be developed to implement specific runtime type checking functionality, for example using decorators or metaclasses. Using type hints for performance optimizations is left as an exercise for the reader.
It should also be emphasized that Python will remain a dynamically typed language, and the authors have no desire to ever make type hints mandatory, even by convention.
The meaning of annotations
Any function without annotations should be treated as having the most general type possible, or ignored, by any type checker. Functions with the @no_type_check decorator should be treated as having no annotations.
It is recommended but not required that checked functions have annotations for all arguments and the return type. For a checked function, the default annotation for arguments and for the return type is Any . An exception is the first argument of instance and class methods. If it is not annotated, then it is assumed to have the type of the containing class for instance methods, and a type object type corresponding to the containing class object for class methods. For example, in class A the first argument of an instance method has the implicit type A . In a class method, the precise type of the first argument cannot be represented using the available type notation.
(Note that the return type of __init__ ought to be annotated with -> None . The reason for this is subtle. If __init__ assumed a return annotation of -> None , would that mean that an argument-less, un-annotated __init__ method should still be type-checked? Rather than leaving this ambiguous or introducing an exception to the exception, we simply say that __init__ ought to have a return annotation; the default behavior is thus the same as for other methods.)
A type checker is expected to check the body of a checked function for consistency with the given annotations. The annotations may also used to check correctness of calls appearing in other checked functions.
Type checkers are expected to attempt to infer as much information as necessary. The minimum requirement is to handle the builtin decorators @property , @staticmethod and @classmethod .
Type Definition Syntax
The syntax leverages PEP 3107 -style annotations with a number of extensions described in sections below. In its basic form, type hinting is used by filling function annotation slots with classes:
def greeting(name: str) -> str: return 'Hello ' + name
This states that the expected type of the name argument is str . Analogically, the expected return type is str .
Expressions whose type is a subtype of a specific argument type are also accepted for that argument.
Acceptable type hints
Type hints may be built-in classes (including those defined in standard library or third-party extension modules), abstract base classes, types available in the types module, and user-defined classes (including those defined in the standard library or third-party modules).
While annotations are normally the best format for type hints, there are times when it is more appropriate to represent them by a special comment, or in a separately distributed stub file. (See below for examples.)
Annotations must be valid expressions that evaluate without raising exceptions at the time the function is defined (but see below for forward references).
Annotations should be kept simple or static analysis tools may not be able to interpret the values. For example, dynamically computed types are unlikely to be understood. (This is an intentionally somewhat vague requirement, specific inclusions and exclusions may be added to future versions of this PEP as warranted by the discussion.)
In addition to the above, the following special constructs defined below may be used: None , Any , Union , Tuple , Callable , all ABCs and stand-ins for concrete classes exported from typing (e.g. Sequence and Dict ), type variables, and type aliases.
All newly introduced names used to support features described in following sections (such as Any and Union ) are available in the typing module.
Using None
When used in a type hint, the expression None is considered equivalent to type(None) .
Type aliases
Type aliases are defined by simple variable assignments:
Url = str def retry(url: Url, retry_count: int) -> None: ...
Note that we recommend capitalizing alias names, since they represent user-defined types, which (like user-defined classes) are typically spelled that way.
Type aliases may be as complex as type hints in annotations -- anything that is acceptable as a type hint is acceptable in a type alias:
from typing import TypeVar, Iterable, Tuple T = TypeVar('T', int, float, complex) Vector = Iterable[Tuple[T, T]] def inproduct(v: Vector[T]) -> T: return sum(x*y for x, y in v) def dilate(v: Vector[T], scale: T) -> Vector[T]: return ((x * scale, y * scale) for x, y in v) vec = [] # type: Vector[float]
This is equivalent to:
from typing import TypeVar, Iterable, Tuple T = TypeVar('T', int, float, complex) def inproduct(v: Iterable[Tuple[T, T]]) -> T: return sum(x*y for x, y in v) def dilate(v: Iterable[Tuple[T, T]], scale: T) -> Iterable[Tuple[T, T]]: return ((x * scale, y * scale) for x, y in v) vec = [] # type: Iterable[Tuple[float, float]]
Callable
Frameworks expecting callback functions of specific signatures might be type hinted using Callable[[Arg1Type, Arg2Type], ReturnType] . Examples: (three dots) for the list of arguments:
def partial(func: Callable[..., str], *args) -> Callable[..., str]: # Body
Note that there are no square brackets around the ellipsis. The arguments of the callback are completely unconstrained in this case (and keyword arguments are acceptable).
Since using callbacks with keyword arguments is not perceived as a common use case, there is currently no support for specifying keyword arguments with Callable . Similarly, there is no support for specifying callback signatures with a variable number of argument of a specific type.
Because typing.Callable does double-duty as a replacement for collections.abc.Callable , isinstance(x, typing.Callable) is implemented by deferring to isinstance(x, collections.abc.Callable) . However, isinstance(x, typing.Callable[...]) is not supported.
Generics
Since type information about objects kept in containers cannot be statically inferred in a generic way, abstract base classes have been extended to support subscription to denote expected types for container elements. Example:
from typing import Mapping, Set def notify_by_email(employees: Set[Employee], overrides: Mapping[str, str]) -> None: ...
Generics can be parameterized by using a new factory available in typing called TypeVar . Example:
from typing import Sequence, TypeVar T = TypeVar('T') # Declare type variable def first(l: Sequence[T]) -> T: # Generic function return l[0]
In this case the contract is that the returned value is consistent with the elements held by the collection.
A TypeVar() expression must always directly be assigned to a variable (it should not be used as part of a larger expression). The argument to TypeVar() must be a string equal to the variable name to which it is assigned. Type variables must not be redefined.
TypeVar supports constraining parametric types to a fixed set of possible types (note: those types cannot be parametrized by type variables). For example, we can define a type variable that ranges over just str and bytes . By default, a type variable ranges over all possible types. Example of constraining a type variable:
from typing import TypeVar AnyStr = TypeVar('AnyStr', str, bytes) def concat(x: AnyStr, y: AnyStr) -> AnyStr: return x + y
The function concat can be called with either two str arguments or two bytes arguments, but not with a mix of str and bytes arguments.
There should be at least two constraints, if any; specifying a single constraint is disallowed.
Subtypes of types constrained by a type variable should be treated as their respective explicitly listed base types in the context of the type variable. Consider this example:
class MyStr(str): ... x = concat(MyStr('apple'), MyStr('pie'))
The call is valid but the type variable AnyStr will be set to str and not MyStr . In effect, the inferred type of the return value assigned to x will also be str .
Additionally, Any is a valid value for every type variable. Consider the following:
def count_truthy(elements: List[Any]) -> int: return sum(1 for elem in elements if element)
This is equivalent to omitting the generic notation and just saying elements: List .
User-defined generic types
You can include a Generic base class to define a user-defined class as generic. Example:
from typing import TypeVar, Generic))
Generic[T] as a base class defines that the class LoggedVar takes a single type parameter T . This also makes T valid as a type within the class body.
The Generic base class uses a metaclass that defines _. This is valid:
from typing import TypeVar, Generic ... T = TypeVar('T') S = TypeVar('S') class Pair(Generic[T, S]): ...
Each type variable argument to Generic must be distinct. This is thus invalid:
from typing import TypeVar, Generic ... T = TypeVar('T') class Pair(Generic[T, T]): # INVALID ...
The Generic[T] base class is redundant in simple cases where you subclass some other generic class and specify type variables for its parameters:
from typing import TypeVar, Iterator T = TypeVar('T') class MyIter(Iterator[T]): ...
That class definition is equivalent to:
class MyIter(Iterator[T], Generic[T]): ...
You can use multiple inheritance with Generic :
from typing import TypeVar, Generic, Sized, Iterable, Container, Tuple T = TypeVar('T') class LinkedList(Sized, Generic[T]): ... K = TypeVar('K') V = TypeVar('V') class MyMapping(Iterable[Tuple[K, V]], Container[Tuple[K, V]], Generic[K, V]): ...
Subclassing a generic class without specifying type parameters assumes Any for each position. In the following example, MyIterable is not generic but implicitly inherits from Iterable[Any] :
from typing import Iterable class MyIterable(Iterable): # Same as Iterable[Any] ...
Generic metaclasses are not supported.
Scoping rules for type variables
Type variables follow normal name resolution rules. However, there are some special cases in the static typechecking context:
A type variable used in a generic function could be inferred to be equal to different types in the same code block. Example:
from typing import TypeVar, Generic T = TypeVar('T') def fun_1(x: T) -> T: ... # T here def fun_2(x: T) -> T: ... # and here could be different fun_1(1) # This is OK, T is inferred to be int fun_2('a') # This is also OK, now T is str
A type variable used in a method of a generic class that coincides with one of the variables that parameterize this class is always bound to that variable. Example:
from typing import TypeVar, Generic T = TypeVar('T') class MyClass(Generic[T]): def meth_1(self, x: T) -> T: ... # T here def meth_2(self, x: T) -> T: ... # and here are always the same a = MyClass() # type: MyClass[int] a.meth_1(1) # OK a.meth_2('a') # This is an error!
A type variable used in a method that does not match any of the variables that parameterize the class makes this method a generic function in that variable:
T = TypeVar('T') S = TypeVar('S') class Foo(Generic[T]): def method(self, x: T, y: S) -> S: ... x = Foo() # type: Foo[int] y = x.method(0, "abc") # inferred type of y is str
Unbound type variables should not appear in the bodies of generic functions, or in the class bodies apart from method definitions:
T = TypeVar('T') S = TypeVar('S') def a_fun(x: T) -> None: # this is OK y = [] # type: List[T] # but below is an error! y = [] # type: List[S] class Bar(Generic[T]): # this is also an error an_attr = [] # type: List[S] def do_something(x: S) -> S: # this is OK though ...
A generic class definition that appears inside a generic function should not use type variables that parameterize the generic function:
from typing import List def a_fun(x: T) -> None: # This is OK a_list = [] # type: List[T] ... # This is however illegal class MyGeneric(Generic[T]): ...
A generic class nested in another generic class cannot use same type variables. The scope of the type variables of the outer class doesn't cover the inner one:
T = TypeVar('T') S = TypeVar('S') class Outer(Generic[T]): class Bad(Iterable[T]): # Error ... class AlsoBad: x = None # type: List[T] # Also an error class Inner(Iterable[S]): # OK ... attr = None # type: Inner[T] # Also OK
Instantiating generic classes and type erasure
User-defined generic classes can be instantiated. Suppose we write a Node class inheriting from Generic[T] :
from typing import TypeVar, Generic T = TypeVar('T') class Node(Generic[T]): ...
To create Node instances you call Node() just as for a regular class. At runtime the type (class) of the instance will be Node . But what type does it have to the type checker? The answer depends on how much information is available in the call. If the constructor ( __init__ or __new__ ) uses T in its signature, and a corresponding argument value is passed, the type of the corresponding argument(s) is substituted. Otherwise, Any is assumed. Example:
from typing import TypeVar, Generic T = TypeVar('T') class Node(Generic[T]): def __init__(self, label: T = None) -> None: ... x = None # Type: T x = Node('') # Inferred type is Node[str] y = Node(0) # Inferred type is Node[int] z = Node() # Inferred type is Node[Any]
In case the inferred type uses [Any] but the intended type is more specific, you can use a type comment (see below) to force the type of the variable, e.g.:
# (continued from previous example) a = Node() # type: Node[int] b = Node() # type: Node[str]
Alternatively, you can instantiate a specific concrete type, e.g.:
# (continued from previous example) p = Node[int]() q = Node[str]() r = Node[int]('') # Error s = Node[str](0) # Error
Note that the runtime type (class) of p and q is still just Node -- Node[int] and Node[str] are distinguishable class objects, but the runtime class of the objects created by instantiating them doesn't record the distinction. This behavior is called "type erasure"; it is common practice in languages with generics (e.g. Java, TypeScript).
Using generic classes (parameterized or not) to access attributes will result in type check failure. Outside the class definition body, a class attribute cannot be assigned, and can only be looked up by accessing it through the class instance that does not have same-named instance attribute:
# (continued from previous example) Node[int].x = 1 # Error Node[int].x # Error Node.x = 1 # Error Node.x # Error type(p).x # Error p.x # Ok (evaluates to None) Node[int]().x # Ok (evaluates to None) p.x = 1 # Ok, but assigning to instance attribute
Generic versions of abstract collections like Mapping or Sequence and generic versions of built-in classes -- List , Dict , Set , and FrozenSet -- cannot be instantiated. However, concrete user-defined subclasses thereof and generic versions of concrete collections can be instantiated:
data = DefaultDict[int, bytes]()
Note that one should not confuse static types and runtime classes. The type is still erased in this case and the above expression is just a shorthand for:
data = collections.defaultdict() # type: DefaultDict[int, bytes]
It is not recommended to use the subscripted class (e.g. Node[int] ) directly in an expression -- using a type alias (e.g. IntNode = Node[int] ) instead is preferred. (First, creating the subscripted class, e.g. Node[int] , has a runtime cost. Second, using a type alias is more readable.)
Arbitrary generic types as base classes
Generic[T] is only valid as a base class -- it's not a proper type. However, user-defined generic types such as LinkedList[T] from the above example and built-in generic types and ABCs such as List[T] and Iterable[T] are valid both as types and as base classes. For example, we can define a subclass of Dict that specializes type arguments:
from typing import Dict, List, Optional class Node: ... class SymbolTable(Dict[str, List[Node]]): def push(self, name: str, node: Node) -> None: self.setdefault(name, []).append(node) def pop(self, name: str) -> Node: return self[name].pop() def lookup(self, name: str) -> Optional[Node]: nodes = self.get(name) if nodes: return nodes[-1] return None
SymbolTable is a subclass of dict and a subtype of Dict[str, List[Node]] .
If a generic base class has a type variable as a type argument, this makes the defined class generic. For example, we can define a generic LinkedList class that is iterable and a container:
from typing import TypeVar, Iterable, Container T = TypeVar('T') class LinkedList(Iterable[T], Container[T]): ...
Now LinkedList[int] is a valid type. Note that we can use T multiple times in the base class list, as long as we don't use the same type variable T multiple times within Generic[...] .
Also consider the following example:
from typing import TypeVar, Mapping T = TypeVar('T') class MyDict(Mapping[str, T]): ...
In this case MyDict has a single parameter, T.
Abstract generic types
The metaclass used by Generic is a subclass of abc.ABCMeta . A generic class can be an ABC by including abstract methods or properties, and generic classes can also have ABCs as base classes without a metaclass conflict.
Type variables with an upper bound
A type variable may specify an upper bound using bound=<type> (note: <type> itself cannot be parametrized by type variables). This means that an actual type substituted (explicitly or implicitly) for the type variable must be a subtype of the boundary type. A common example is the definition of a Comparable type that works well enough to catch the most common errors:
from typing import TypeVar class Comparable(metaclass=ABCMeta): @abstractmethod def __lt__(self, other: Any) -> bool: ... ... # __gt__ etc. as well CT = TypeVar('CT', bound=Comparable) def min(x: CT, y: CT) -> CT: if x < y: return x else: return y min(1, 2) # ok, return type int min('x', 'y') # ok, return type str
(Note that this is not ideal -- for example min('x', 1) is invalid at runtime but a type checker would simply infer the return type Comparable . Unfortunately, addressing this would require introducing a much more powerful and also much more complicated concept, F-bounded polymorphism. We may revisit this in the future.)
An upper bound cannot be combined with type constraints (as in used AnyStr , see the example earlier); type constraints cause the inferred type to be _exactly_ one of the constraint types, while an upper bound just requires that the actual type is a subtype of the boundary type.
Covariance and contravariance
Consider a class Employee with a subclass Manager . Now suppose we have a function with an argument annotated with List[Employee] . Should we be allowed to call this function with a variable of type List[Manager] as its argument? Many people would answer "yes, of course" without even considering the consequences. But unless we know more about the function, a type checker should reject such a call: the function might append an Employee instance to the list, which would violate the variable's type in the caller.
It turns out such an argument acts contravariantly , whereas the intuitive answer (which is correct in case the function doesn't mutate its argument!) requires the argument to act covariantly . A longer introduction to these concepts can be found on Wikipedia [wiki-variance] and in PEP 483 ; here we just show how to control a type checker's behavior.
By default generic types are considered invariant in all type variables, which means that values for variables annotated with types like List[Employee] must exactly match the type annotation -- no subclasses or superclasses of the type parameter (in this example Employee ) are allowed.
To facilitate the declaration of container types where covariant or contravariant type checking is acceptable, type variables accept keyword arguments covariant=True or contravariant=True . At most one of these may be passed. Generic types defined with such variables are considered covariant or contravariant in the corresponding variable. By convention, it is recommended to use names ending in _co for type variables defined with covariant=True and names ending in _contra for that defined with contravariant=True .
A typical example involves defining an immutable (or read-only) container class:
from typing import TypeVar, Generic, Iterable, Iterator T_co = TypeVar('T_co', covariant=True) class ImmutableList(Generic[T_co]): def __init__(self, items: Iterable[T_co]) -> None: ... def __iter__(self) -> Iterator[T_co]: ... ... class Employee: ... class Manager(Employee): ... def dump_employees(emps: ImmutableList[Employee]) -> None: for emp in emps: ... mgrs = ImmutableList([Manager()]) # type: ImmutableList[Manager] dump_employees(mgrs) # OK
The read-only collection classes in typing are all declared covariant in their type variable (e.g. Mapping and Sequence ). The mutable collection classes (e.g. MutableMapping and MutableSequence ) are declared invariant. The one example of a contravariant type is the Generator type, which is contravariant in the send() argument type (see below).
Note: Covariance or contravariance is not a property of a type variable, but a property of a generic class defined using this variable. Variance is only applicable to generic types; generic functions do not have this property. The latter should be defined using only type variables without covariant or contravariant keyword arguments. For example, the following example is fine:
from typing import TypeVar class Employee: ... class Manager(Employee): ... E = TypeVar('E', bound=Employee) def dump_employee(e: E) -> None: ... dump_employee(Manager()) # OK
while the following is prohibited:
B_co = TypeVar('B_co', covariant=True) def bad_func(x: B_co) -> B_co: # Flagged as error by a type checker ...
The numeric tower
PEP 3141 defines Python's numeric tower, and the stdlib module numbers implements the corresponding ABCs ( Number , Complex , Real , Rational and Integral ). There are some issues with these ABCs, but the built-in concrete numeric classes complex , float and int are ubiquitous (especially the latter two :-).
Rather than requiring that users write import numbers and then use numbers.Float etc., this PEP proposes a straightforward shortcut that is almost as effective: when an argument is annotated as having type float , an argument of type int is acceptable; similar, for an argument annotated as having type complex , arguments of type float or int are acceptable. This does not handle classes implementing the corresponding ABCs or the fractions.Fraction class, but we believe those use cases are exceedingly rare.
Forward references
When a type hint contains names that have not been defined yet, that definition may be expressed as a string literal, to be resolved later.
A situation where this occurs commonly is the definition of a container class, where the class being defined occurs in the signature of some of the methods. For example, the following code (the start of a simple binary tree implementation) does not work:
class Tree: def __init__(self, left: Tree, right: Tree): self.left = left self.right = right
To address this, we write:
class Tree: def __init__(self, left: 'Tree', right: 'Tree'): self.left = left self.right = right
The string literal should contain a valid Python expression (i.e., compile(lit, '', 'eval') should be a valid code object) and it should evaluate without errors once the module has been fully loaded. The local and global namespace in which it is evaluated should be the same namespaces in which default arguments to the same function would be evaluated.
Moreover, the expression should be parseable as a valid type hint, i.e., it is constrained by the rules from the section Acceptable type hints above.
It is allowable to use string literals as part of a type hint, for example:
class Tree: ... def leaves(self) -> List['Tree']: ...
A common use for forward references is when e.g. Django models are needed in the signatures. Typically, each model is in a separate file, and has methods that arguments whose type involves other models. Because of the way circular imports work in Python, it is often not possible to import all the needed models directly:
# File models/a.py from models.b import B class A(Model): def foo(self, b: B): ... # File models/b.py from models.a import A class B(Model): def bar(self, a: A): ... # File main.py from models.a import A from models.b import B
Assuming main is imported first, this will fail with an ImportError at the line from models.a import A in models/b.py, which is being imported from models/a.py before a has defined class A. The solution is to switch to module-only imports and reference the models by their _module_._class_ name:
# File models/a.py from models import b class A(Model): def foo(self, b: 'b.B'): ... # File models/b.py from models import a class B(Model): def bar(self, a: 'a.A'): ... # File main.py from models.a import A from models.b import B
Union types
Since accepting a small, limited set of expected types for a single argument is common, there is a new special factory called Union . Example:
from typing import Union def handle_employees(e: Union[Employee, Sequence[Employee]]) -> None: if isinstance(e, Employee): e = [e] ...
A type factored by Union[T1, T2, ...] is a supertype of all types T1 , T2 , etc., so that a value that is a member of one of these types is acceptable for an argument annotated by Union[T1, T2, ...] .
One common case of union types are optional types. By default, None is an invalid value for any type, unless a default value of None has been provided in the function definition. Examples:
def handle_employee(e: Union[Employee, None]) -> None: ...
As a shorthand for Union[T1, None] you can write Optional[T1] ; for example, the above is equivalent to:
from typing import Optional def handle_employee(e: Optional[Employee]) -> None: ...
An optional type is also automatically assumed when the default value is None , for example:
def handle_employee(e: Employee = None): ...
This is equivalent to:
def handle_employee(e: Optional[Employee] = None) -> None: ...
Support for singleton types in unions
A singleton instance is frequently used to mark some special condition, in particular in situations where None is also a valid value for a variable. Example:
_empty = object() def func(x=_empty): if x is _empty: # default argument value return 0 elif x is None: # argument was provided and it's None return 1 else: return x * 2
To allow precise typing in such situations, the user should use the Union type in conjunction with the enum.Enum class provided by the standard library, so that type errors can be caught statically:
from typing import Union from enum import Enum class Empty(Enum): token = 0 _empty = Empty.token def func(x: Union[int, None, Empty] = _empty) -> int: boom = x * 42 # This fails type check if x is _empty: return 0 elif x is None: return 1 else: # At this point typechecker knows that x can only have type int return x * 2
Since the subclasses of Enum cannot be further subclassed, the type of variable x can be statically inferred in all branches of the above example. The same approach is applicable if more than one singleton object is needed: one can use an enumeration that has more than one value:
class Reason(Enum): timeout = 1 error = 2 def process(response: Union[str, Reason] = '') -> str: if response is Reason.timeout: return 'TIMEOUT' elif response is Reason.error: return 'ERROR' else: # response can be only str, all other possible values exhausted return 'PROCESSED: ' + response
The Any type
A special kind of type is Any . Every type is consistent with Any . It can be considered a type that has all values and all methods. Note that Any and builtin type object are completely different.
When the type of a value is object , the type checker will reject almost all operations on it, and assigning it to a variable (or using it as a return value) of a more specialized type is a type error. On the other hand, when a value has type Any , the type checker will allow all operations on it, and a value of type Any can be assigned to a variable (or used as a return value) of a more constrained type.
A function parameter without an annotation is assumed to be annotated with Any . If a generic type is used without specifying type parameters, they assumed to be Any :
from typing import Mapping def use_map(m: Mapping) -> None: # Same as Mapping[Any, Any] ...
This rule also applies to Tuple , in annotation context it is equivalent to Tuple[Any, ...] and, in turn, to tuple . As well, a bare Callable in an annotation is equivalent to Callable[[...], Any] and, in turn, to collections.abc.Callable :
from typing import Tuple, List, Callable def check_args(args: Tuple) -> bool: ... check_args(()) # OK check_args((42, 'abc')) # Also OK check_args(3.14) # Flagged as error by a type checker # A list of arbitrary callables is accepted by this function def apply_callbacks(cbs: List[Callable]) -> None: ...
The NoReturn type
The typing module provides a special type NoReturn to annotate functions that never return normally. For example, a function that unconditionally raises an exception:
from typing import NoReturn def stop() -> NoReturn: raise RuntimeError('no way')
The NoReturn annotation is used for functions such as sys.exit . Static type checkers will ensure that functions annotated as returning NoReturn truly never return, either implicitly or explicitly:
import sys from typing import NoReturn def f(x: int) -> NoReturn: # Error, f(0) implicitly returns None if x != 0: sys.exit(1)
The checkers will also recognize that the code after calls to such functions is unreachable and will behave accordingly:
# continue from first example def g(x: int) -> int: if x > 0: return x stop() return 'whatever works' # Error might be not reported by some checkers # that ignore errors in unreachable blocks
The NoReturn type is only valid as a return annotation of functions, and considered an error if it appears in other positions:
from typing import List, NoReturn # All of the following are errors def bad1(x: NoReturn) -> int: ... bad2 = None # type: NoReturn def bad3() -> List[NoReturn]: ...
The type of class objects
Sometimes you want to talk about class objects, in particular class objects that inherit from a given class. This can be spelled as Type[C] where C is a class. To clarify: while C (when used as an annotation) refers to instances of class C , Type[C] refers to subclasses of C . (This is a similar distinction as between object and type .)
For example, suppose we have the following classes:
class User: ... # Abstract base for User classes class BasicUser(User): ... class ProUser(User): ... class TeamUser(User): ...
And suppose we have a function that creates an instance of one of these classes if you pass it a class object:
def new_user(user_class): user = user_class() # (Here we could write the user object to a database) return user
Without Type[] the best we could do to annotate new_user() would be:
def new_user(user_class: type) -> User: ...
However using Type[] and a type variable with an upper bound we can do much better:
U = TypeVar('U', bound=User) def new_user(user_class: Type[U]) -> U: ...
Now when we call new_user() with a specific subclass of User a type checker will infer the correct type of the result:
joe = new_user(BasicUser) # Inferred type is BasicUser
The value corresponding to Type[C] must be an actual class object that's a subtype of C , not a special form. IOW, in the above example calling e.g. new_user(Union[BasicUser, ProUser]) is rejected by the type checker (in addition to failing at runtime because you can't instantiate a union).
Note that it is legal to use a union of classes as the parameter for Type[] , as in:
def new_non_team_user(user_class: Type[Union[BasicUser, ProUser]]): user = new_user(user_class) ...
However the actual argument passed in at runtime must still be a concrete class object, e.g. in the above example:
new_non_team_user(ProUser) # OK new_non_team_user(TeamUser) # Disallowed by type checker
Type[Any] is also supported (see below for its meaning).
Type[T] where T is a type variable is allowed when annotating the first argument of a class method (see the relevant section).
Any other special constructs like Tuple or Callable are not allowed as an argument to Type .
There are some concerns with this feature: for example when new_user() calls user_class() this implies that all subclasses of User must support this in their constructor signature. However this is not unique to Type[] : class methods have similar concerns. A type checker ought to flag violations of such assumptions, but by default constructor calls that match the constructor signature in the indicated base class ( User in the example above) should be allowed. A program containing a complex or extensible class hierarchy might also handle this by using a factory class method. A future revision of this PEP may introduce better ways of dealing with these concerns.
When Type is parameterized it requires exactly one parameter. Plain Type without brackets is equivalent to Type[Any] and this in turn is equivalent to type (the root of Python's metaclass hierarchy). This equivalence also motivates the name, Type , as opposed to alternatives like Class or SubType , which were proposed while this feature was under discussion; this is similar to the relationship between e.g. List and list .
Regarding the behavior of Type[Any] (or Type or type ), accessing attributes of a variable with this type only provides attributes and methods defined by type (for example, __repr__() and __mro__ ). Such a variable can be called with arbitrary arguments, and the return type is Any .
Type is covariant in its parameter, because Type[Derived] is a subtype of Type[Base] :
def new_pro_user(pro_user_class: Type[ProUser]): user = new_user(pro_user_class) # OK ...
Annotating instance and class methods
In most cases the first argument of class and instance methods does not need to be annotated, and it is assumed to have the type of the containing class for instance methods, and a type object type corresponding to the containing class object for class methods. In addition, the first argument in an instance method can be annotated with a type variable. In this case the return type may use the same type variable, thus making that method a generic function. For example:
T = TypeVar('T', bound='Copyable') class Copyable: def copy(self: T) -> T: # return a copy of self class C(Copyable): ... c = C() c2 = c.copy() # type here should be C
The same applies to class methods using Type[] in an annotation of the first argument:
T = TypeVar('T', bound='C') class C: @classmethod def factory(cls: Type[T]) -> T: # make a new instance of cls class D(C): ... d = D.factory() # type here should be D
Note that some type checkers may apply restrictions on this use, such as requiring an appropriate upper bound for the type variable used (see examples).
Version and platform checking
Type checkers are expected to understand simple version and platform checks, e.g.:
import sys if sys.version_info[0] >= 3: # Python 3 specific definitions else: # Python 2 specific definitions if sys.platform == 'win32': # Windows specific definitions else: # Posix specific definitions
Don't expect a checker to understand obfuscations like "".join(reversed(sys.platform)) == "xunil" .
Runtime or type checking?
Sometimes there's code that must be seen by a type checker (or other static analysis tools) but should not be executed. For such situations the typing module defines a constant, TYPE_CHECKING , that is considered True during type checking (or other static analysis) but False at runtime. Example:
import typing if typing.TYPE_CHECKING: import expensive_mod def a_func(arg: 'expensive_mod.SomeClass') -> None: a_var = arg # type: expensive_mod.SomeClass ...
(Note that the type annotation must be enclosed in quotes, making it a "forward reference", to hide the expensive_mod reference from the interpreter runtime. In the # type comment no quotes are needed.)
This approach may also be useful to handle import cycles.
Arbitrary argument lists and default argument values
Arbitrary argument lists can as well be type annotated, so that the definition:
def foo(*args: str, **kwds: int): ...
is acceptable and it means that, e.g., all of the following represent function calls with valid types of arguments:
foo('a', 'b', 'c') foo(x=1, y=2) foo('', z=0)
In the body of function foo , the type of variable args is deduced as Tuple[str, ...] and the type of variable kwds is Dict[str, int] .
In stubs it may be useful to declare an argument as having a default without specifying the actual default value. For example:
def foo(x: AnyStr, y: AnyStr = ...) -> AnyStr: ...
What should the default value look like? Any of the options "" , b"" or None fails to satisfy the type constraint (actually, None will modify the type to become Optional[AnyStr] ).
In such cases the default value may be specified as a literal ellipsis, i.e. the above example is literally what you would write.
Positional-only arguments
Some functions are designed to take their arguments only positionally, and expect their callers never to use the argument's name to provide that argument by keyword. All arguments with names beginning with __ are assumed to be positional-only:
def quux(__x: int) -> None: ... quux(3) # This call is fine. quux(__x=3) # This call is an error.
Annotating generator functions and coroutines
The return type of generator functions can be annotated by the generic type Generator[yield_type, send_type, return_type] provided by typing.py module:
def echo_round() -> Generator[int, float, str]: res = yield while res: res = yield round(res) return 'OK'
Coroutines introduced in PEP 492 are annotated with the same syntax as ordinary functions. However, the return type annotation corresponds to the type of await expression, not to the coroutine type:
async def spam(ignored: int) -> str: return 'spam' async def foo() -> None: bar = await spam(42) # type: str
The typing.py module provides a generic version of ABC collections.abc.Coroutine to specify awaitables that also support send() and throw() methods. The variance and order of type variables correspond to those of Generator , namely Coroutine[T_co, T_contra, V_co] , for example:
from typing import List, Coroutine c = None # type: Coroutine[List[str], str, int] ... x = c.send('hi') # type: List[str] async def bar(): -> None: x = await c # type: int
The module also provides generic ABCs Awaitable , AsyncIterable , and AsyncIterator for situations where more precise types cannot be specified:
def op() -> typing.Awaitable[str]: if cond: return spam(42) else: return asyncio.Future(...)
Compatibility with other uses of function annotations
A number of existing or potential use cases for function annotations exist, which are incompatible with type hinting. These may confuse a static type checker. However, since type hinting annotations have no runtime behavior (other than evaluation of the annotation expression and storing annotations in the __annotations__ attribute of the function object), this does not make the program incorrect -- it just may cause a type checker to emit spurious warnings or errors.
To mark portions of the program that should not be covered by type hinting, you can use one or more of the following:
- a # type: ignore comment;
- a @no_type_check decorator on a class or function;
- a custom class or function decorator marked with @no_type_check_decorator .
For more details see later sections.
In order for maximal compatibility with offline type checking it may eventually be a good idea to change interfaces that rely on annotations to switch to a different mechanism, for example a decorator. In Python 3.5 there is no pressure to do this, however. See also the longer discussion under Rejected alternatives below.
Type comments
No first-class syntax support for explicitly marking variables as being of a specific type is added by this PEP. To help with type inference in complex cases, a comment of the following format may be used:
x = [] # type: List[Employee] x, y, z = [], [], [] # type: List[int], List[int], List[str] x, y, z = [], [], [] # type: (List[int], List[int], List[str]) a, b, *c = range(5) # type: float, float, List[float] x = [ 1, 2, ] # type: List[int]
Type comments should be put on the last line of the statement that contains the variable definition. They can also be placed on with statements and for statements, right after the colon.
Examples of type comments on with and for statements:
with frobnicate() as foo: # type: int # Here foo is an int ... for x, y in points: # type: float, float # Here x and y are floats ...
In stubs it may be useful to declare the existence of a variable without giving it an initial value. This can be done using a literal ellipsis:
from typing import IO stream = ... # type: IO[str]
In non-stub code, there is a similar special case:
from typing import IO stream = None # type: IO[str]
Type checkers should not complain about this (despite the value None not matching the given type), nor should they change the inferred type to Optional[...] (despite the rule that does this for annotated arguments with a default value of None ). The assumption here is that other code will ensure that the variable is given a value of the proper type, and all uses can assume that the variable has the given type.
The # type: ignore comment should be put on the line that the error refers to:
import http.client errors = { 'not_found': http.client.NOT_FOUND # type: ignore }
A # type: ignore comment on a line by itself is equivalent to adding an inline # type: ignore to each line until the end of the current indented block. At top indentation level this has effect of disabling type checking until the end of file.
If type hinting proves useful in general, a syntax for typing variables may be provided in a future Python version.
Casts
Occasionally the type checker may need a different kind of hint: the programmer may know that an expression is of a more constrained type than a type checker may be able to infer. For example:
from typing import List, cast def find_first_str(a: List[object]) -> str: index = next(i for i, x in enumerate(a) if isinstance(x, str)) # We only get here if there's at least one string in a return cast(str, a[index])
Some type checkers may not be able to infer that the type of a[index] is str and only infer object or Any , but we know that (if the code gets to that point) it must be a string. The cast(t, x) call tells the type checker that we are confident that the type of x is t . At runtime a cast always returns the expression unchanged -- it does not check the type, and it does not convert or coerce the value.
Casts differ from type comments (see the previous section). When using a type comment, the type checker should still verify that the inferred type is consistent with the stated type. When using a cast, the type checker should blindly believe the programmer. Also, casts can be used in expressions, while type comments only apply to assignments.
NewType helper function
There are also situations where a programmer might want to avoid logical errors by creating simple classes. For example:
class UserId(int): pass get_by_user_id(user_id: UserId): ...
However, this approach introduces a runtime overhead. To avoid this, typing.py provides a helper function NewType that creates simple unique types with almost zero runtime overhead. For a static type checker Derived = NewType('Derived', Base) is roughly equivalent to a definition:
class Derived(Base): def __init__(self, _x: Base) -> None: ...
While at runtime, NewType('Derived', Base) returns a dummy function that simply returns its argument. Type checkers require explicit casts from int where UserId is expected, while implicitly casting from UserId where int is expected. Examples:
UserId = NewType('UserId', int) def name_by_id(user_id: UserId) -> str: ... UserId('user') # Fails type check name_by_id(42) # Fails type check name_by_id(UserId(42)) # OK num = UserId(5) + 1 # type: int
NewType accepts exactly two arguments: a name for the new unique type, and a base class. The latter should be a proper class, i.e., not a type construct like Union , etc. The function returned by NewType accepts only one argument; this is equivalent to supporting only one constructor accepting an instance of the base class (see above). Example:.
Stub Files
Stub files are files containing type hints that are only for use by the type checker, not at runtime. There are several use cases for stub files:
- Extension modules
- Third-party modules whose authors have not yet added type hints
- Standard library modules for which type hints have not yet been written
- Modules that must be compatible with Python 2 and 3
- Modules that use annotations for other purposes
Stub files have the same syntax as regular Python modules. There is one feature of the typing module that is different in stub files: the @overload decorator described below.
The type checker should only check function signatures in stub files; It is recommended that function bodies in stub files just be a single ellipsis ( ... ).
The type checker should have a configurable search path for stub files. If a stub file is found the type checker should not read the corresponding "real" module.
While stub files are syntactically valid Python modules, they use the .pyi extension to make it possible to maintain stub files in the same directory as the corresponding real module. This also reinforces the notion that no runtime behavior should be expected of stub files.
Additional notes on stub files:
Modules and variables imported into the stub are not considered exported from the stub unless the import uses the import ... as ... form or the equivalent from ... import ... as ... form.
However, as an exception to the previous bullet, all objects imported into a stub using from ... import * are considered exported. (This makes it easier to re-export all objects from a given module that may vary by Python version.)
Stub files may be incomplete. To make type checkers aware of this, the file can contain the following code:
def __getattr__(name) -> Any: ...
Any identifier not defined in the stub is therefore assumed to be of type Any .
Function/method overloading
The @overload decorator allows describing functions and methods that support multiple different combinations of argument types. This pattern is used frequently in builtin modules and types. For example, the __getitem__() method of the bytes type can be described as follows:
from typing import overload class bytes: ... @overload def __getitem__(self, i: int) -> int: ... @overload def __getitem__(self, s: slice) -> bytes: ...
This description is more precise than would be possible using unions (which cannot express the relationship between the argument and return types):
from typing import Union class bytes: ... def __getitem__(self, a: Union[int, slice]) -> Union[int, bytes]: ...
Another example where @overload comes in handy is the type of the builtin map() function, which takes a different number of arguments depending on the type of the callable:
from typing import Callable, Iterable, Iterator, Tuple, TypeVar, overload T1 = TypeVar('T1') T2 = TypeVar('T2) S = TypeVar('S') @overload def map(func: Callable[[T1], S], iter1: Iterable[T1]) -> Iterator[S]: ... @overload def map(func: Callable[[T1, T2], S], iter1: Iterable[T1], iter2: Iterable[T2]) -> Iterator[S]: ... # ... and we could add more items to support more than two iterables
Note that we could also easily add items to support map(None, ...) :
@overload def map(func: None, iter1: Iterable[T1]) -> Iterable[T1]: ... @overload def map(func: None, iter1: Iterable[T1], iter2: Iterable[T2]) -> Iterable[Tuple[T1, T2]]: ...
Uses of the @overload decorator as shown above are suitable for stub files. In regular modules, a series of @overload -decorated definitions must be followed by exactly one non- @overload -decorated definition (for the same function/method). The @overload -decorated definitions are for the benefit of the type checker only, since they will be overwritten by the non- @overload -decorated definition, while the latter is used at runtime but should be ignored by a type checker. At runtime, calling a @overload -decorated function directly will raise NotImplementedError . Here's an example of a non-stub overload that can't easily be expressed using a union or a type variable:
@overload def utf8(value: None) -> None: pass @overload def utf8(value: bytes) -> bytes: pass @overload def utf8(value: unicode) -> bytes: pass def utf8(value): <actual implementation>
NOTE: While it would be possible to provide a multiple dispatch implementation using this syntax, its implementation would require using sys._getframe() , which is frowned upon. Also, designing and implementing an efficient multiple dispatch mechanism is hard, which is why previous attempts were abandoned in favor of functools.singledispatch() . (See PEP 443 , especially its section "Alternative approaches".) In the future we may come up with a satisfactory multiple dispatch design, but we don't want such a design to be constrained by the overloading syntax defined for type hints in stub files. It is also possible that both features will develop independent from each other (since overloading in the type checker has different use cases and requirements than multiple dispatch at runtime -- e.g. the latter is unlikely to support generic types).
A constrained TypeVar type can often be used instead of using the @overload decorator. For example, the definitions of concat1 and concat2 in this stub file are equivalent:
from typing import TypeVar AnyStr = TypeVar('AnyStr', str, bytes) def concat1(x: AnyStr, y: AnyStr) -> AnyStr: ... @overload def concat2(x: str, y: str) -> str: ... @overload def concat2(x: bytes, y: bytes) -> bytes: ...
Some functions, such as map or bytes.__getitem__ above, can't be represented precisely using type variables. However, unlike @overload , type variables can also be used outside stub files. We recommend that @overload is only used in cases where a type variable is not sufficient, due to its special stub-only status.
Another important difference between type variables such as AnyStr and using @overload is that the prior can also be used to define constraints for generic class type parameters. For example, the type parameter of the generic class typing.IO is constrained (only IO[str] , IO[bytes] and IO[Any] are valid):
class IO(Generic[AnyStr]): ...
Storing and distributing stub files
The easiest form of stub file storage and distribution is to put them alongside Python modules in the same directory. This makes them easy to find by both programmers and the tools. However, since package maintainers are free not to add type hinting to their packages, third-party stubs installable by pip from PyPI are also supported. In this case we have to consider three issues: naming, versioning, installation path.
This PEP does not provide a recommendation on a naming scheme that should be used for third-party stub file packages. Discoverability will hopefully be based on package popularity, like with Django packages for example.
Third-party stubs have to be versioned using the lowest version of the source package that is compatible. Example: FooPackage has versions 1.0, 1.1, 1.2, 1.3, 2.0, 2.1, 2.2. There are API changes in versions 1.1, 2.0 and 2.2. The stub file package maintainer is free to release stubs for all versions but at least 1.0, 1.1, 2.0 and 2.2 are needed to enable the end user type check all versions. This is because the user knows that the closest lower or equal version of stubs is compatible. In the provided example, for FooPackage 1.3 the user would choose stubs version 1.1.
Note that if the user decides to use the "latest" available source package, using the "latest" stub files should generally also work if they're updated often.
Third-party stub packages can use any location for stub storage. Type checkers should search for them using PYTHONPATH. A default fallback directory that is always checked is shared/typehints/python3.5/ (or 3.6, etc.). Since there can only be one package installed for a given Python version per environment, no additional versioning is performed under that directory (just like bare directory installs by pip in site-packages). Stub file package authors might use the following snippet in setup.py :
... data_files=[ ( 'shared/typehints/python{}.{}'.format(*sys.version_info[:2]), pathlib.Path(SRC_PATH).glob('**/*.pyi'), ), ], ...
The Typeshed Repo
There is a shared repository where useful stubs are being collected [typeshed] . Note that stubs for a given package will not be included here without the explicit consent of the package owner. Further policies regarding the stubs collected here will be decided at a later time, after discussion on python-dev, and reported in the typeshed repo's README.
Exceptions
No syntax for listing explicitly raised exceptions is proposed. Currently the only known use case for this feature is documentational, in which case the recommendation is to put this information in a docstring.
The typing Module
To open the usage of static type checking to Python 3.5 as well as older versions, a uniform namespace is required. For this purpose, a new module in the standard library is introduced called typing .
It defines the fundamental building blocks for constructing types (e.g. Any ), types representing generic variants of builtin collections (e.g. List ), types representing generic collection ABCs (e.g. Sequence ), and a small collection of convenience definitions.
Note that special type constructs, such as Any , Union , and type variables defined using TypeVar are only supported in the type annotation context, and Generic may only be used as a base class. All of these (except for unparameterized generics) will raise TypeError if appear in isinstance or issubclass .
Fundamental building blocks:
- Any, used as def get(key: str) -> Any: ...
- Union, used as Union[Type1, Type2, Type3]
- Callable, used as Callable[[Arg1Type, Arg2Type], ReturnType]
- Tuple, used by listing the element types, for example Tuple[int, int, str] . The empty tuple can be typed as Tuple[()] . Arbitrary-length homogeneous tuples can be expressed using one type and ellipsis, for example Tuple[int, ...] . (The ... here are part of the syntax, a literal ellipsis.)
- TypeVar, used as X = TypeVar('X', Type1, Type2, Type3) or simply Y = TypeVar('Y') (see above for more details)
- Generic, used to create user-defined generic classes
- Type, used to annotate class objects
Generic variants of builtin collections:
- Dict, used as Dict[key_type, value_type]
- DefaultDict, used as DefaultDict[key_type, value_type] , a generic variant of collections.defaultdict
- List, used as List[element_type]
- Set, used as Set[element_type] . See remark for AbstractSet below.
- FrozenSet, used as FrozenSet[element_type]
Note: Dict , DefaultDict , List , Set and FrozenSet are mainly useful for annotating return values. For arguments, prefer the abstract collection types defined below, e.g. Mapping , Sequence or AbstractSet .
Generic variants of container ABCs (and a few non-containers):
- Awaitable
- AsyncIterable
- AsyncIterator
- ByteString
- Callable (see above, listed here for completeness)
- Collection
- Container
- ContextManager
- Coroutine
- Generator, used as Generator[yield_type, send_type, return_type] . This represents the return value of generator functions. It is a subtype of Iterable and it has additional type variables for the type accepted by the send() method (it is contravariant in this variable -- a generator that accepts sending it Employee instance is valid in a context where a generator is required that accepts sending it Manager instances) and the return type of the generator.
- Hashable (not generic, but present for completeness)
- ItemsView
- Iterable
- Iterator
- KeysView
- Mapping
- MappingView
- MutableMapping
- MutableSequence
- MutableSet
- Sequence
- Set, renamed to AbstractSet . This name change was required because Set in the typing module means set() with generics.
- Sized (not generic, but present for completeness)
- ValuesView
A few one-off types are defined that test for single special methods (similar to Hashable or Sized ):
- Reversible, to test for __reversed__
- SupportsAbs, to test for __abs__
- SupportsComplex, to test for __complex__
- SupportsFloat, to test for __float__
- SupportsInt, to test for __int__
- SupportsRound, to test for __round__
- SupportsBytes, to test for __bytes__
Convenience definitions:
- Optional, defined by Optional[t] == Union[t, type(None)]
- AnyStr, defined as TypeVar('AnyStr', str, bytes)
- Text, a simple alias for str in Python 3, for unicode in Python 2
- NamedTuple, used as NamedTuple(type_name, [(field_name, field_type), ...]) and equivalent to collections.namedtuple(type_name, [field_name, ...]) . This is useful to declare the types of the fields of a named tuple type.
- NewType, used to create unique types with little runtime overhead UserId = NewType('UserId', int)
- cast(), described earlier
- @no_type_check, a decorator to disable type checking per class or function (see below)
- @no_type_check_decorator, a decorator to create your own decorators with the same meaning as @no_type_check (see below)
- @overload, described earlier
- get_type_hints(), a utility function to retrieve the type hints from a function or method. Given a function or method object, it returns a dict with the same format as __annotations__ , but evaluating forward references (which are given as string literals) as expressions in the context of the original function or method definition.
- TYPE_CHECKING, False at runtime but True to type checkers
Types available in the typing.io submodule:
- IO (generic over AnyStr )
- BinaryIO (a simple subtype of IO[bytes] )
- TextIO (a simple subtype of IO[str] )
Types available in the typing.re submodule:
- Match and Pattern, types of re.match() and re.compile() results (generic over AnyStr )
Suggested syntax for Python 2.7 and straddling code
Some tools may want to support type annotations in code that must be compatible with Python 2.7. For this purpose this PEP has a suggested (but not mandatory) extension where function annotations are placed in a # type: comment. Such a comment must be placed immediately following the function header (before the docstring). An example: the following Python 3 code:
def embezzle(self, account: str, funds: int = 1000000, *fake_receipts: str) -> None: """Embezzle funds from account using fake receipts.""" <code goes here>
is equivalent to the following:
def embezzle(self, account, funds=1000000, *fake_receipts): # type: (str, int, *str) -> None """Embezzle funds from account using fake receipts.""" <code goes here>
Note that for methods, no type is needed for self .
For an argument-less method it would look like this:
def load_cache(self): # type: () -> bool <code>
Sometimes you want to specify the return type for a function or method without (yet) specifying the argument types. To support this explicitly, the argument list may be replaced with an ellipsis. Example:
def send_email(address, sender, cc, bcc, subject, body): # type: (...) -> bool """Send an email message. Return True if successful.""" <code>
Sometimes you have a long list of parameters and specifying their types in a single # type: comment would be awkward. To this end you may list the arguments one per line and add a # type: comment per line after an argument's associated comma, if any. To specify the return type use the ellipsis syntax. Specifying the return type is not mandatory and not every argument needs to be given a type. A line with a # type: comment should contain exactly one argument. The type comment for the last argument (if any) should precede the close parenthesis. Example:
def send_email(address, # type: Union[str, List[str]] sender, # type: str cc, # type: Optional[List[str]] bcc, # type: Optional[List[str]] subject='', body=None # type: List[str] ): # type: (...) -> bool """Send an email message. Return True if successful.""" <code>
Notes:
Tools that support this syntax should support it regardless of the Python version being checked. This is necessary in order to support code that straddles Python 2 and Python 3.
It is not allowed for an argument or return value to have both a type annotation and a type comment.
When using the short form (e.g. # type: (str, int) -> None ) every argument must be accounted for, except the first argument of instance and class methods (those are usually omitted, but it's allowed to include them).
The return type is mandatory for the short form. If in Python 3 you would omit some argument or the return type, the Python 2 notation should use Any .
When using the short form, for *args and **kwds , put 1 or 2 stars in front of the corresponding type annotation. (As with Python 3 annotations, the annotation here denotes the type of the individual argument values, not of the tuple/dict that you receive as the special argument value args or kwds .)
Like other type comments, any names used in the annotations must be imported or defined by the module containing the annotation.
When using the short form, the entire annotation must be one line.
The short form may also occur on the same line as the close parenthesis, e.g.:
def add(a, b): # type: (int, int) -> int return a + b
Misplaced type comments will be flagged as errors by a type checker. If necessary, such comments could be commented twice. For example:
def f(): '''Docstring''' # type: () -> None # Error! def g(): '''Docstring''' # # type: () -> None # This is OK
When checking Python 2.7 code, type checkers should treat the int and long types as equivalent. For parameters typed as unicode or Text , arguments of type str should be acceptable.
Rejected Alternatives
During discussion of earlier drafts of this PEP, various objections were raised and alternatives were proposed. We discuss some of these here and explain why we reject them.
Several main objections were raised.
Which brackets for generic type parameters?
Most people are familiar with the use of angular brackets (e.g. List<int> ) in languages like C++, Java, C# and Swift to express the parametrization of generic types. The problem with these is that they are really hard to parse, especially for a simple-minded parser like Python. In most languages the ambiguities are usually dealt with by only allowing angular brackets in specific syntactic positions, where general expressions aren't allowed. (And also by using very powerful parsing techniques that can backtrack over an arbitrary section of code.)
But in Python, we'd like type expressions to be (syntactically) the same as other expressions, so that we can use e.g. variable assignment to create type aliases. Consider this simple type expression:
List<int>
From the Python parser's perspective, the expression begins with the same four tokens (NAME, LESS, NAME, GREATER) as a chained comparison:
a < b > c # I.e., (a < b) and (b > c)
We can even make up an example that could be parsed both ways:
a < b > [ c ]
Assuming we had angular brackets in the language, this could be interpreted as either of the following two:
(a<b>)[c] # I.e., (a<b>).__getitem__(c) a < b > ([c]) # I.e., (a < b) and (b > [c])
It would surely be possible to come up with a rule to disambiguate such cases, but to most users the rules would feel arbitrary and complex. It would also require us to dramatically change the CPython parser (and every other parser for Python). It should be noted that Python's current parser is intentionally "dumb" -- a simple grammar is easier for users to reason about.
For all these reasons, square brackets (e.g. List[int] ) are (and have long been) the preferred syntax for generic type parameters. They can be implemented by defining the __getitem__() method on the metaclass, and no new syntax is required at all. This option works in all recent versions of Python (starting with Python 2.2). Python is not alone in this syntactic choice -- generic classes in Scala also use square brackets.
What about existing uses of annotations?
One line of argument points out that PEP 3107 explicitly supports the use of arbitrary expressions in function annotations. The new proposal is then considered incompatible with the specification of PEP 3107 .
Our response to this is that, first of all, the current proposal does not introduce any direct incompatibilities, so programs using annotations in Python 3.4 will still work correctly and without prejudice in Python 3.5.
We do hope that type hints will eventually become the sole use for annotations, but this will require additional discussion and a deprecation period after the initial roll-out of the typing module with Python 3.5. The current PEP will have provisional status (see PEP 411 ) until Python 3.6 is released. The fastest conceivable scheme would introduce silent deprecation of non-type-hint annotations in 3.6, full deprecation in 3.7, and declare type hints as the only allowed use of annotations in Python 3.8. This should give authors of packages that use annotations plenty of time to devise another approach, even if type hints become an overnight success.
Another possible outcome would be that type hints will eventually become the default meaning for annotations, but that there will always remain an option to disable them. For this purpose the current proposal defines a decorator @no_type_check which disables the default interpretation of annotations as type hints in a given class or function. It also defines a meta-decorator @no_type_check_decorator which can be used to decorate a decorator (!), causing annotations in any function or class decorated with the latter to be ignored by the type checker.
There are also # type: ignore comments, and static checkers should support configuration options to disable type checking in selected packages.
Despite all these options, proposals have been circulated to allow type hints and other forms of annotations to coexist for individual arguments. One proposal suggests that if an annotation for a given argument is a dictionary literal, each key represents a different form of annotation, and the key 'type' would be use for type hints. The problem with this idea and its variants is that the notation becomes very "noisy" and hard to read. Also, in most cases where existing libraries use annotations, there would be little need to combine them with type hints. So the simpler approach of selectively disabling type hints appears sufficient.
The problem of forward declarations
The current proposal is admittedly sub-optimal when type hints must contain forward references. Python requires all names to be defined by the time they are used. Apart from circular imports this is rarely a problem: "use" here means "look up at runtime", and with most "forward" references there is no problem in ensuring that a name is defined before the function using it is called.
The problem with type hints is that annotations (per PEP 3107 , and similar to default values) are evaluated at the time a function is defined, and thus any names used in an annotation must be already defined when the function is being defined. A common scenario is a class definition whose methods need to reference the class itself in their annotations. (More general, it can also occur with mutually recursive classes.) This is natural for container types, for example:
class Node: """Binary tree node.""" def __init__(self, left: Node, right: Node): self.left = left self.right = right
As written this will not work, because of the peculiarity in Python that class names become defined once the entire body of the class has been executed. Our solution, which isn't particularly elegant, but gets the job done, is to allow using string literals in annotations. Most of the time you won't have to use this though -- most uses of type hints are expected to reference builtin types or types defined in other modules.
A counterproposal would change the semantics of type hints so they aren't evaluated at runtime at all (after all, type checking happens off-line, so why would type hints need to be evaluated at runtime at all). This of course would run afoul of backwards compatibility, since the Python interpreter doesn't actually know whether a particular annotation is meant to be a type hint or something else.
A compromise is possible where a __future__ import could enable turning all annotations in a given module into string literals, as follows:
from __future__ import annotations class ImSet: def add(self, a: ImSet) -> List[ImSet]: ... assert ImSet.add.__annotations__ == {'a': 'ImSet', 'return': 'List[ImSet]'}
Such a __future__ import statement may be proposed in a separate PEP.
The double colon
A few creative souls have tried to invent solutions for this problem. For example, it was proposed to use a double colon ( :: ) for type hints, solving two problems at once: disambiguating between type hints and other annotations, and changing the semantics to preclude runtime evaluation. There are several things wrong with this idea, however.
- It's ugly. The single colon in Python has many uses, and all of them look familiar because they resemble the use of the colon in English text. This is a general rule of thumb by which Python abides for most forms of punctuation; the exceptions are typically well known from other programming languages. But this use of :: is unheard of in English, and in other languages (e.g. C++) it is used as a scoping operator, which is a very different beast. In contrast, the single colon for type hints reads naturally -- and no wonder, since it was carefully designed for this purpose (the idea long predates PEP 3107 [gvr-artima] ). It is also used in the same fashion in other languages from Pascal to Swift.
- What would you do for return type annotations?
- It's actually a feature that type hints are evaluated at runtime.
- Making type hints available at runtime allows runtime type checkers to be built on top of type hints.
- It catches mistakes even when the type checker is not run. Since it is a separate program, users may choose not to run it (or even install it), but might still want to use type hints as a concise form of documentation. Broken type hints are no use even for documentation.
- Because it's new syntax, using the double colon for type hints would limit them to code that works with Python 3.5 only. By using existing syntax, the current proposal can easily work for older versions of Python 3. (And in fact mypy supports Python 3.2 and newer.)
- If type hints become successful we may well decide to add new syntax in the future to declare the type for variables, for example var age: int = 42 . If we were to use a double colon for argument type hints, for consistency we'd have to use the same convention for future syntax, perpetuating the ugliness.
Other forms of new syntax
A few other forms of alternative syntax have been proposed, e.g. the introduction of a where keyword [roberge] , and Cobra-inspired requires clauses. But these all share a problem with the double colon: they won't work for earlier versions of Python 3. The same would apply to a new __future__ import.
Other backwards compatible conventions
The ideas put forward include:
- A decorator, e.g. @typehints(name=str, returns=str) . This could work, but it's pretty verbose (an extra line, and the argument names must be repeated), and a far cry in elegance from the PEP 3107 notation.
- Stub files. We do want stub files, but they are primarily useful for adding type hints to existing code that doesn't lend itself to adding type hints, e.g. 3rd party packages, code that needs to support both Python 2 and Python 3, and especially extension modules. For most situations, having the annotations in line with the function definitions makes them much more useful.
- Docstrings. There is an existing convention for docstrings, based on the Sphinx notation ( :type arg1: description ). This is pretty verbose (an extra line per parameter), and not very elegant. We could also make up something new, but the annotation syntax is hard to beat (because it was designed for this very purpose).
It's also been proposed to simply wait another release. But what problem would that solve? It would just be procrastination.
PEP Development Process
A live draft for this PEP lives on GitHub [github] . There is also an issue tracker [issues] , where much of the technical discussion takes place.
The draft on GitHub is updated regularly in small increments. The official PEPS repo [ peps ] is (usually) only updated when a new draft is posted to python-dev.
Acknowledgements
This document could not be completed without valuable input, encouragement and advice from Jim Baker, Jeremy Siek, Michael Matson Vitousek, Andrey Vlasovskikh, Radomir Dopieralski, Peter Ludemann, and the BDFL-Delegate, Mark Shannon.
Influences include existing languages, libraries and frameworks mentioned in PEP 482 . Many thanks to their creators, in alphabetical order: Stefan Behnel, William Edwards, Greg Ewing, Larry Hastings, Anders Hejlsberg, Alok Menghrajani, Travis E. Oliphant, Joe Pamer, Raoul-Gabriel Urma, and Julien Verlaguet. | https://www.python.org/dev/peps/pep-0484/ | CC-MAIN-2017-22 | refinedweb | 12,562 | 53.1 |
#include <hallo.h> Bernd Eckenfels wrote on Tue Jan 01, 2002 um 07:21:17PM: > On Tue, Jan 01, 2002 at 12:54:46PM -0500, Joey Hess wrote: > > There is a cent symbol: ¢ (compose c |) > > It's available in the standard fixed font for X. > > a related note, when will the Euro be part of fixed? :) > > Anybody knows anything about it? Huch, "apt-get install xfonts-base-transcoded" and you have fixed fonts with latin15 charset. And visit: Gruss/Regards, Eduard. -- I have found little that is good about human beings. In my experience most of them are trash. -- Sigmund Freud | https://lists.debian.org/debian-devel/2002/01/msg00031.html | CC-MAIN-2018-17 | refinedweb | 101 | 83.66 |
Hello, Using encode/decode from Binary seems to permamently increase my memory consumption by 60x fold. I am wonder if I am doing something wrong, or if this is an issue with Binary. If I run the following program, it uses sensible amounts of memory (1MB) (note that the bin and list' thunks won't actully be evaluated): import Data.Binary main :: IO () main = let list = [1..1000000] :: [Int] bin = encode list list' = decode bin :: [Int] in putStrLn (show . length $ takeWhile (< 10000000) list) >> getLine >> return () /tmp $ ghc --make -O2 Bin.hs -o bin /tmp $ ./bin +RTS -s /tmp/bin +RTS -s 1000000 68,308,156 bytes allocated in the heap 6,700 bytes copied during GC 18,032 bytes maximum residency (1 sample(s)) 22,476 bytes maximum slop 1 MB total memory in use (0 MB lost due to fragmentation) Generation 0: 130 collections, 0 parallel, 0.00s, 0.00s elapsed Generation 1: 1 collections, 0 parallel, 0.00s, 0.00s elapsed INIT time 0.00s ( 0.00s elapsed) MUT time 0.05s ( 0.92s elapsed) GC time 0.00s ( 0.00s elapsed) EXIT time 0.00s ( 0.00s elapsed) Total time 0.05s ( 0.92s elapsed) %GC time 0.0% (0.1% elapsed) Alloc rate 1,313,542,603 bytes per MUT second Productivity 100.0% of total user, 5.7% of total elapsed According to top: VIRT RSS SHR 3880 1548 804 Now, if I change *list* in the last line to *list'* so that the encode/decode stuff actually happens: /tmp $ ./bin +RTS -s /tmp/bin +RTS -s 1000000 617,573,932 bytes allocated in the heap 262,281,412 bytes copied during GC 20,035,672 bytes maximum residency (10 sample(s)) 2,187,296 bytes maximum slop 63 MB total memory in use (0 MB lost due to fragmentation) Generation 0: 1151 collections, 0 parallel, 0.47s, 0.48s elapsed Generation 1: 10 collections, 0 parallel, 0.36s, 0.40s elapsed INIT time 0.00s ( 0.00s elapsed) MUT time 0.47s ( 20.32s elapsed) GC time 0.84s ( 0.88s elapsed) EXIT time 0.00s ( 0.00s elapsed) Total time 1.30s ( 21.19s elapsed) %GC time 64.1% (4.1% elapsed) Alloc rate 1,319,520,653 bytes per MUT second Productivity 35.9% of total user, 2.2% of total elapsed And top reports: VIRT RSS SHR 67368 64m 896 63 times as much total memory in use. And, this is while the program is waiting around at 'getLine' after it is 'done' with the data. I am using GHC 6.10.4 on GNU/Linux. Thanks! - jeremy | http://www.haskell.org/pipermail/haskell-cafe/2009-July/064779.html | CC-MAIN-2014-41 | refinedweb | 437 | 76.72 |
US7117246B2 - Electronic mail system with methodology providing distributed message store - Google PatentsElectronic mail system with methodology providing distributed message store Download PDF
Info
- Publication number
- US7117246B2US7117246B2 US09/735,130 US73513000A US7117246B2 US 7117246 B2 US7117246 B2 US 7117246B2 US 73513000 A US73513000 A US 73513000A US 7117246 B2 US7117246 B2 US 7117246B2
- Authority
- US
- United States
- Prior art keywords
- data
- portion
- system
- claims the benefit of priority from, and is related to, the following commonly-owned U.S. provisional application: application Ser. No. 60/184,212, filed Feb. 22, 2000.) systems and, more particularly, to improved methodology for distributed processing and storage of e-mail messages.
Today, electronic mail or “e-mail” is a pervasive, if not the most predominant, form of electronic communication.
A typical e-mail delivery process is as follows. In the following scenario, Larry sends e-mail to Martha at her e-mail address: martha@example.org. Martha's Internet Service Provider (ISP) uses an MTA, such as provided by Sendmail® for NT, available from Sendmail, Inc. of Emeryville, Calif. (With a lower case “s,” “sendmail” refers to Sendmail's MTA, which is one component of the Sendmail® for NT product.)
- 1. Larry composes the message and chooses Send in Microsoft Outlook Express (a “mail user agent” or MUA). The e-mail message itself specifies one or more intended recipients (i.e., destination e-mail addresses), a subject heading, and a message body; optionally, the message may specify accompanying attachments.
- 2. Microsoft Outlook Express queries a DNS server for the IP address of the host providing e-mail service for the destination address. The DNS server, which is a computer connected to the Internet running software that translates domain names, returns the IP address, 127.118.10.3, of the mail server for Martha's domain, example.org.
- 3. Microsoft Outlook Express opens an SMTP connection to the mail server running sendmail at Martha's ISP. The message is transmitted to the sendmail service using the SMTP protocol.
- 4. sendmail delivers Larry's message for Martha to the local delivery agent. It appends the message to Martha's mailbox. By default, the message is stored in:
- C:\Program Files\Sendmail\Spool\martha.
- 5. Martha has her computer dial into her ISP.
- 6. Martha chooses Check Mail in Eudora, an MUA.
- 7. Eudora opens a POP (Post Office Protocol version 3, defined in RFC1725) connection with the POP3 server at Martha's ISP. Eudora downloads Martha's new messages, including the message from Larry.
- 8. Martha reads Larry's message.
The MTA, which is responsible for queuing up messages and arranging for their distribution, is the workhorse component of electronic mail systems. The MTA “listens” for incoming e-mail messages on the SMTP port, which is generally port 25. When an e-mail message is detected, it handles the message according to configuration settings, that is, the settings chosen by the system administrator, in accordance with relevant standards such as the Internet Engineering Task Force's Request For Comment documents (RFCs). Typically, the mail server or MTA must temporarily store incoming and outgoing messages in a queue, the “mail queue”, before attempting delivery. Actual queue size is highly dependent on one's system resources and daily volumes.
MTAs, such as the commercially-available Sendmail® MTA, perform three key mail transport functions:
- Routes mail across the Internet to a gateway of a different network or “domain” (since many domains can and do exist in a single network)
- Relays mail to another MTA (e.g., 12 b) on a different subnet within the same network
- Transfers mail from one host or server to another on the same network subnet
To perform these functions, it accepts messages from other MTAs or MUAs, parses addresses to identify recipients and domains, resolves address aliases, fixes addressing problems, copies mail to and from a queue on its hard disk, tries to process long and hard-to-deliver messages, and notifies the sender when a particular task cannot be successfully completed. The MTA does not store messages (apart from its queue) or help users access messages. It relies on other mail system components, such as message delivery agents, message stores, and mail user agents (MUAs), to perform these tasks. These additional components can belong to any number of proprietary or shareware products (e.g., POP or IMAP servers, Microsoft Exchange, IBM Lotus Notes, Netscape, cc:Mail servers, or the like). Because of its central role in the e-mail systems, however, the MTA often serves as the “glue” that makes everything appear to work together seamlessly.
For further description of e-mail systems, see e.g., Sendmail® for NT User Guide, Part Number DOC-SMN-300-WNT-MAN-0999, available from Sendmail, Inc. of Emeryville, Calif., the disclosure of which is hereby incorporated by reference. Further description of the basic architecture and operation of e-mail systems is available in the technical and trade literature; see e.g., the following RFC (Request For Comments) documents:
currently available via the Internet (e.g., at), the disclosures of which are hereby incorporated by reference. RFCs are numbered Internet informational documents and standards widely followed by commercial software and freeware in the Internet and UNIX communities. they are adopted as standards.
Traditional electronic mail (e-mail) systems today are based on monolithic, single-machine configurations, such as a single computer having multiple hard disks. E-mail services are simplest to configure and maintain on a single machine. Such a service, by definition, has a single point of failure. At the same time, however, fairly sophisticated multi-computer hardware is increasingly available. For instance, it is possible to connect together multiple UNIX machines, each running a POP daemon (background process), connected together via a high-speed (e.g., gigabit) network to other computers that, in turn, are connected to disk farms. Despite those advances in computer hardware, there has been little effort today to implement an e-mail system in a distributed fashion—that is, employing a set of machines with a set of disks that cooperate over a network.
Traditional systems have limits in their robustness due to their non-distributed nature. First, traditional systems have difficulty scaling. In a single-machine implementation, scaling the service to meet increased demand involves purchasing faster hardware. This solution has its limits, however. There is usually a strong correlation between e-mail server workload and the importance of 24×7×365 availability. A single server, however large and fast, still presents a single point of failure. At some point on the scalability curve, it becomes impossible or cost-prohibitive to buy a computer capable of handling additional workload. Accordingly, the present-day monolithic systems provide little in the way of scalability or fault tolerance, nor are such systems able to benefit from increased performance afforded by distributed hardware.
As another problem, traditional systems cannot add or remove resources on-the-fly. For single-server environments, additional capacity comes in the form of adding CPUs, RAM, and/or disk resources. Most hardware must be taken out of service during these upgrades, which usually must be done late at night when workloads are light. Eventually, the next upgrade must come in the form of a complete replacement or “forklift upgrade.” Adding additional computing resources in a multi-server environment can be much more difficult. As an example, in “active/standby” clusters (usually just a pair of machines), specification of the standby machine is easy: buy a machine just like the “active” one. These pairs have 50% of their equipment standing idle, however, waiting for the “active” server to fail. Some models allow both servers to be active simultaneously: in the event one fails, the other takes over responsibility for both workloads. While resource utilization rates are much higher in such “active/active” schemes, they are much more difficult to plan for. Each machine must now remain at least 50% idle during peak workload; in the event of failure, the surviving machine must have enough resources to handle two machines' worth of work.
Traditional systems also cannot guarantee immediate and consistent data replication. The problem of consistent data replication is so difficult that most commercial systems do not even try to replicate e-mail message data across servers. Instead, most gain their data redundancy via disk storage using either RAID-1 (disk mirroring) or RAID-4/5 (parity-based redundancy). In the event that a server's CPU, memory, disk controller, or other hardware fail, the data on those redundant disks are unavailable until the hardware can be repaired. To overcome this limitation, a small handful of setups may use dual-ported SCSI or Fibre Channel disks shared with a hot-standby machine or use remote database journaling (again for hot-standby mode). However, some industries, particularly where government regulations and industry-standard practices are the main driving forces, have strong requirements for data availability, redundancy, and/or archiving.
Ideally, one could implement an e-mail system in a distributed manner, with resistance against single points of failure. Here, the information stored on any one computer is not instrumental to continue normal operation of the system. Thus, for example, if one of the servers were to fail, the system would continue to function normally despite that failure. Distributed e-mail systems have been slow in coming, due to the difficulty of ensuring efficient, fault-tolerant operation in such systems. Currently, “semi-distributed” implementations exist, though. For example, the Intermail system (by Software.com of Santa Barbara, Calif.) employs a back-end database for storing mail information. Although Intermail employs multiple machines, it is not truly distributed. Instead, the machines are employed more as proxy servers, rather than as control logic for a central message store. With that approach, however, such a system cannot store information redundantly in an efficient manner.
What is needed is a distributed e-mail system that is maximally redundant, yet is resource-efficient. Moreover, such a system should provide fault-tolerant operation, thereby guaranteeing system reliability. The present invention fulfills this and other needs.. Counterbalancing both of these approaches is the desire to read data only once, as it is computationally more expensive, and therefore undesirable, to read data twice (or more times), in addition to writing data twice versus three times.) and then write out the relatively small metadata 2f+1 times (e.g., three times for the above example)..
- API: Abbreviation for “application program interface,” a set of routines, protocols, and tools for building software applications.
- BIND: Short for Berkeley Internet Name Domain, a Domain Name Server (DNS). BIND is designed for UNIX systems based on BSD, the version of UNIX developed at the University of California's Berkeley campus.
- daemon: A process that runs in the background and performs a specified operation at predefined times or in response to certain events. The term daemon is a UNIX term, though many other operating systems provide support for daemons, though they're sometimes called other names.
- DNS: Short for Domain Name System (or Service), an Internet service that translates domain names into IP addresses. Because domain names are alphabetic, they are easier to remember. The Internet, however, is really based on IP addresses. Every time one uses a domain name, therefore, a DNS service must translate the name into the corresponding IP address.
- Gigabit Ethernet: The newest version of Ethernet, which supports data transfer rates of 1 Gigabit (1,000 megabits) per second. The first Gigabit Ethernet standard (802.3z) was ratified by the IEEE 802.3 Committee in 1998.
- HTTP: Short for HyperText Transfer Protocol, the underlying protocol used by the World Wide Web. HTTP defines how messages are formatted and transmitted, and what actions Web servers and browsers should take in response to various commands.
- IMAP: Short for Internet Message Access Protocol, a protocol for retrieving e-mail messages. The latest version, IMAP4, is similar to POP3 but supports some additional features. For example, with IMAP4, a user can search through his or her e-mail messages for keywords while the messages are still on the mail server, and then choose which messages to download to his or her machine. Like POP, IMAP uses SMTP for communication between the e-mail client and server.
- inode: A data structure that contain information about files in UNIX file systems, including important information on files such as user and group ownership, access mode (read, write, execute permissions), and type. Each file has an inode and is identified by an inode number (i-number) in the file system where it resides.
- “Law of Large Numbers”: A theorem by Jakob Bernoulli. Stated informally, “In any chance event, when the event happens repeatedly, the statistics will tend to prove the probabilities.”
- Lock: To make a file or other object inaccessible. File locking, for instance, is a critical component of all multi-user computer systems, including local-area networks. When users share files, the operating system must ensure that two or more users do not attempt to modify the same file simultaneously. It does this by locking the file as soon as the first user opens it. All subsequent users may read the file, but they cannot write to it until the first user is finished.
- MD5: An algorithm created by Ronald Rivest that is used to create digital signatures. It is intended for use with 32-bit machines and is safer than the MD4 algorithm. MD5 is a one-way hash function, also called a message digest. When using a one-way hash function, one can compare a calculated message digest against the message digest that is decrypted with a public key to verify that the message has not been tampered with.
- MIME:. There are many predefined MIME types, such as GIF graphics files and PostScript files.
- POP: Short for Post Office Protocol, a protocol used to retrieve e-mail from a mail server. Most e-mail applications (e-mail clients) use the POP protocol, although some can use the newer IMAP (Internet Message Access Protocol). There are two versions of POP. The first, called POP2, became a standard in the mid-1980's and required SMTP to send messages. The newer version, POP3, can be used with or without SMTP.
-.
The following description will focus on the presently-preferred embodiment of the present invention, which is implemented in a server-side application operating in an Internet-connected environment running under a network operating system, such as Microsoft® Windows 2000 running on an IBM-compatible PC. The present invention, however, is not limited to any particular one application or any particular environment. Instead, those skilled in the art will find that the system and methods of the present invention may be advantageously embodied on a variety of different platforms, including Linux, BeOS, Solaris, UNIX, NextStep, and the like. Therefore, the description of the exemplary embodiments which follows is for purposes of illustration and not limitation.
Computer-based Implementation
A. Basic System Hardware (e.g., for Desktop and Server Computers)
The present invention may be implemented on a conventional or general-purpose computer system, such as an IBM-compatible personal computer (PC) or server computer.
CPU 201 comprises a processor of the Intel Pentium® family of microprocessors. However, any other suitable microprocessor or microcomputer may be utilized for implementing the present invention. The CPU 201 communicates with other components of the system via a bi-directional system bus (including any necessary input/output 202 serves as the working memory for the CPU 201. In a typical configuration, RAM of sixteen 216 into the main (RAM) memory 202, for execution by the CPU 201. During operation of the program logic, the system 200 accepts user input from a keyboard 206 and pointing device 208, as well as speech-based input from a voice recognition system (not 20 shown). The keyboard 206 permits selection of application programs, entry of keyboard-based input or data, and selection and manipulation of individual data objects displayed on the display device 205. Likewise, the pointing device 208, such as a mouse, track ball, pen device, or the like, permits selection and manipulation of objects on the display device 205. In this manner, these input devices support manual user input for any process running on the system.
The system 200 displays text and/or graphic images and other data on the display device 205. Display device 205 is driven by the video adapter 204, which is interposed between the display device 205 and the system 200. The video adapter 204, 200, may be obtained from the printer 207, or other output device. Printer 207) 211 connected to a network (e.g., Ethernet network), an RS-232 serial port, a Universal Serial Bus (USB) interface, or the like. Devices that will be commonly connected locally to the comm interface 210 Sun Solaris workstations, which are available from Sun Microsystems of Mountain View, Calif.
The above-described system 200 of
B. Basic System Software
Illustrated in) and the system BIOS microcode 330 (i.e., ROM-based microcode), particularly when interfacing with peripheral devices. OS 310 can be provided by a conventional operating system, such as Microsoft® Windows 9x, by Microsoft® Windows NT, or by Microsoft® Windows 2000, all available from Microsoft Corporation of Redmond, Wash. Alternatively, OS 310 can also be an alternative operating system, such as Sun Solaris available from Sun Microsystems of Palo Alto, Calif., or Linux OS (available from several vendors, including the Red Hat distribution of Linux from Red Hat, Inc. of Durham, N.C.).
Distributed Message Processing and Storage
A. Overall Design Approach. In this way, one compromises between the need for absolute data integrity in the face of an arbitrary failure in any single data instance and a minimization of the read/write effort needed to achieve confidence in the data integrity itself. in the message's lifetime) and then write out the relatively small metadata 2f+1 times (e.g., three times for the above example each time the message's state in the appropriate folder is modified)..
B. System Design
1. General
In a preferred embodiment, a two-tier distributed architecture 400 is adopted, as shown in
2. Access Servers
The Access Servers 410 are responsible for communicating with clients over the Internet via standard protocols, for example, POP, SMTP, IMAP4, and the like. They are stateless, relying on redundant storage across a set of back-end Data Servers. The Access Servers are exposed to the public Internet, which is shown at 460 in
The Access Servers run programs such as sendmail (i.e., Sendmail, Inc.'s MTA), POP servers, and/or other programs to provide services to clients on the Internet 460 (e.g., HTTP-based servers for sending/receiving e-mail messages with World Wide Web browser software). Each of these programs is linked with a Client Library. The Client Library communicates, via an internal API (Application Programming Interface), with a Client Daemon. The Client Library and API provide a convenient interface between the application processes, which are not aware of the Data Network 430, and the Client Daemon, which is aware of the Data Network and the Data Servers 420. Typically, one Client Daemon runs on each Access Server, though multiple Client Daemons may be run on a single Access Server. The Client Daemon, based on the set of actions to be performed, selects the appropriate Data Servers to communicate with.
Applications communicate with the Client Daemon using an appropriate interprocess communication method (e.g., TCP/IP, shared memory). The Client Daemon communicates via the Data Network to the Data Server. The Data Protocol is used to communicate between the Access Server and a process running on the Data Server called the Server Daemon. The Client Daemon makes decisions about which Data Servers will store the data requested by the applications. The Server Daemon is the process that listens on the Data Network for Client Daemon requests. The Server Daemon interacts with the file system that is built-in to the underlying operating system of the Data Server on which it runs. The file system, which can be any high-quality file system, provides the storage representation for the message store in a preferred embodiment. For the best mode of operation, it is assumed that the Data Server will implement a high performance file system. Examples of suitable file systems include WAFL (by Network Appliance of Sunnyvale, Calif.), XFS (by SGI of Mountain View, Calif.), and VxFS (by Veritas Software Corporation of Mountain View, Calif.).
3. Data Servers
Data Servers (420 of
Data Servers store message, folder, mailbox, and queue data. In addition, file servers can maintain event log data and possibly the authorization/authentication database (if there is no external authentication service). All data are held in files using the underlying operating system's file system(s).
A single Data Server is non-redundant. Replicated copies of data are spread across different Data Servers to protect against failures that cause the loss of an entire Data Server (i.e., a unit of failure). Each Data Server then holds only a single copy of any data item. Indeed, the Data Servers are entirely unaware of the redundancy, if any, being managed by the Access Servers. However, as an optimization, Data Servers may employ RAID techniques individually to increase disk bandwidth and guard against disk failures in order to make each Data Server more reliable. Even when the Data Servers use RAID, multiple copies are still made across different Data Servers to guard against the remaining hardware single points of failure: the Data Server itself and its RAID disk controller(s).
4. Redundant Data Network
In embodiments where the system will be run in a redundant mode of operation, a redundant, high-speed Data Network (e.g., as shown at 430 in
Each Data Server has two network interfaces. Here, each network interface has a separate IP address on each Data Network; IP aliasing or other methods are not used to make the interfaces appear identical at ISO network model layer three. The Client Daemons on the Access Servers know both IP addresses for each Data Server. In the event of a failure of one network, each Client Daemon attempts to reestablish connections over the other network.
While there are two high speed network interfaces on each server attached to separate physical and logical networks, there is only one Command Network. If the Command Server cannot communicate with Access and Data Servers over this network, it will fall back to using one of the Data Networks to communicate with the server while the Data Network may be used for passing administrative traffic if the Command Network is down, the Command Network is not used for passing e-mail message traffic, even if both Data Networks are unavailable.
5. Command Server
The Command Server (440 of
The Command Server is the focal point of monitoring, command, and control. All other machines listen to the Command Server for configuration data and commands to change operational modes. All machines report operational statistics to the Command Server.
The Command Server is not critical for nominal operations. The Command Server has no operational role—none of the other machines ever depend on input from the Command Server. The system can operate, even tolerate faults, without it. Therefore, the Command Server does not have to be redundant, though it may be implemented as a cluster, if desired. This simplifies the overall system design since redundancy need not be implemented through the command protocols.
An independent Command Network (450 of
6. Advantages of Architecture
The two-tiered architectural design meets several important design criteria for security, performance, and reliability. First, the network design limits which machines are exposed to the public Internet. Only the Access Servers are exposed to the public Internet, and they should be configured accordingly. The Access Servers do not trust any machine outside of the cluster. The Data Servers are not connected to the public Internet and, therefore, are not subject to many types of attacks since packets from the public Internet cannot reach them.
In this two-tiered architecture, the amount of computing resources applied to applications handling customer requests and storing data can be varied independently. In contrast to other prior art systems, the system's two-tiered architecture allows the system to better adapt to its environment. For example, a corporate e-mail server using IMAP would have more persistent storage per capita than an ISP using POP. Accordingly, the system's cluster for the corporate customer can have a greater ratio of Data Servers to Access Servers, or perhaps Data Servers with a greater amount of disk space. A cluster can be changed on-the-fly by adding or dropping machines of one tier or another or by adding additional disk space to one or more Data Servers. By contrast, changing these aspects in a conventional system may require reconfiguring or replacing each of the single-tier machines.
C. Redundancy Approaches
1. Replication: Quorum Voting
“Quorum voting” guarantees fault-tolerance properties, which can establish a correct result even in the event of any single-unit failure. Writing 2f+1 copies tolerates f failures. In the nominal case, quorum voting requires that any datum be written, in its entirety, on a set of machines. A successful read must read the same data from more than half of that same set of machines.
This arrangement provides the ability to tolerate f failures with 2f+1 machines. Nominally, one employs triple-redundancy for single-failure tolerance. This approach can scale down to a single copy, which is not fault-tolerant, or scale up to five-way replication to tolerate two simultaneous faults, and so on. Note that, in the currently-preferred embodiment, this fault tolerance approach is employed for the Data Servers only.
Repair is possible when damage is detected. If, at any time, a failure occurs on one node, there will still be enough operational nodes to form a quorum. However, the failed node may later re-enter service with a data set that is different than the others. Once a failed Data Server has returned to service, a redundant read operation that retrieves data from multiple Data Servers may return conflicting answers. Therefore, in accordance with quorum voting, enough extra reads are performed from remaining Data Servers storing the same data element to gain a majority view. When the difference is noticed, it can be repaired by replacing the data on the (formerly) failed node with the consensus of the quorum. It is important to use a concurrency mechanism, such as a “lock,” to exclude writes to other copies of the datum while the repair is in progress. The same mechanism can also be used to populate a new file server with data, as described below.
Some recovery is possible in a worst-case failure. In the event of a failure so extensive that there are not enough functional servers to make a quorum, or if the operational servers contain data so divergent such that consensus is not possible, it still might be possible to salvage some of the data. By making conservative assumptions to preserve data, it may be possible to reconstruct a badly-degraded mailbox as the union of whatever messages may be held on any of the file servers. This may cause previously-expunged messages to reappear, but this is preferable to losing data. Regarding attributes of the messages at this point, the most conservative option is to mark them with the attributes of newly-arrived messages, thus encouraging users to take a look at them. This is the currently-preferred method employed.
2. Concurrency: Local Lock Quorums and Voting
A large, distributed e-mail system may have thousands of data operations proceeding in parallel upon different message delivery queues and mailbox folders. Most of these operations do not operate on the same objects simultaneously, but this behavior is not guaranteed. There must be a way to control concurrent operations on any queue, queue item, mailbox, folder, or message. Locks, or more properly, locks with time limits (also known as “leases”), provide this control.
“Quorum locking” establishes a global consensus through local locks. A “local lock” is one granted by an individual Data Server, which enforces the lock without coordination with any other server. On occasion, multiple independent processes may want to lock the same global object at the same time. A “global lock” is a series of local locks taken with separate protocol operations to different Data Servers until a majority of possible local locks is held. The various operations described later may require either kind of lock. Just as a majority share establishes a fault-tolerant consensus for the data stored on the Data Servers using Quorum voting, taking a majority of individual locks on the file servers establishes a global lock. It is impossible for more than one Access Server to hold a majority of locks. If two Access Servers simultaneously attempt to grab a global lock, the one which cannot grab a majority will relinquish its locks to the victor. Once a process obtains a majority of local locks for a datum and thus obtains a global lock for the datum, that process may create, modify, or delete any copy of the datum, regardless of whether that copy is protected by a local lock.
Lock ordering and exponential back-off avoids deadlock. There is a danger of two or more Access Servers getting into a situation where each of them has grabbed enough locks so that no one can attain a majority. The traditional solution to this “dining philosophers” problem is to establish an order in which locks must be taken. This reduces the problem to a race for the first lock, which is resolved locally. In the first pass, the problem is solved in the same way. Any set of Access Servers attempting to globally lock the same datum will attempt to grab the necessary locks in the same order to minimize the possibility of lock contention. Of course, one is faced with the complication that different Access Servers may have different opinions of which file servers are available, so one Access Server may think that the first lock in the locking order is unavailable, and then will attempt to grab the second lock. The resulting race for the remaining locks could end with neither getting a majority. This condition is handled by the following approach:
- (1) An attempt to grab a local lock will either succeed or fail because the lock is already taken.
- (2) If, before a majority is taken, an Access Server fails to take a lock, it will then quickly scan the remaining locks to see if they are taken. If it detects that it cannot obtain a majority, it releases its locks taken so far and waits for a short, random time period.
- (3) After a majority is taken, an Access Server will attempt to take one more, for fault tolerance. In its attempts to attain majority-plus-one, it will make a single retry after a failure or expiration. After it attains majority-plus-one, it will try to take the remaining locks, but will not re-attempt after failures or expirations.
- (4) On successive retries, the random range of the waiting period grows progressively longer.
The Access Server will attempt to take all of the locks if possible. However, if one of the Data Servers does not reply promptly to lock requests, the Access Server may be forced to wait for it. Since the Access Server may be holding some locks already, it will then block other Access Servers waiting for those locks, causing a cascading bottleneck. Therefore, lock timeouts should be as short as possible. Additionally, one may dynamically monitor network latency conditions so that timeouts can be set according to observed response times rather than anticipated worst-case estimates, which may either be too long or too short. With this scheme, a deadlock situation is unlikely to originate, and if it does, the random waits, exponential back-off, and retries, will eventually resolve itself.
3. Determining Replica Locations
In order for the quorum voting system to work correctly, all Access Servers must agree which set of Data Servers hold replicas of a particular datum. This set is referred to as the location set for the datum.
A mapping algorithm eliminates the need for a directory service. One way to map entity names to location sets is to maintain a name server running on a separate machine. This centralizes the administration and control of the mapping but introduces a single point of failure.
Instead of a directory service, the invention uses a hashing algorithm to determine an object's location set. For an entity name, one computes a hash value h, say, using MD5. A location set of size m out of n Data Servers can then be determined by a formula, such as {h mod n, (h+1) mod n, . . . , (h+m−1) mod n}. This hashing algorithm eliminates the need to maintain a list of all queues or mailboxes or even to keep statistics on their distribution. Even a relatively simplistic hashing function will serve to spread the objects evenly enough across all of the Data Servers.
A global “View” is controlled by the Command Server. The only global state that needs to be controlled is the number of Data Servers used as the modulus n in the hashing formula above. If different Access Servers have different values of n, they will inconsistently lock, read, and update the replicas. Therefore, the value of this number is distributed by the Command Server. Changing the value of this number involves a two-phase protocol, which is described below. The list of all operational Data Servers, as decreed by the Command Server, represents the View.
4. Reconciliation of Data
The system is robust against a failure to store a replica of a datum, whether from a momentary outage of a network component, temporary failure of a process on the Data Server, or even a complete crash of a Data Server. In any of these cases, the data stored on the failed machine can drift out of sync with the others. Though the system can function indefinitely without correcting the problem, it cannot tolerate another fault without risking data loss. It is important, therefore, that the damage be repaired. “Reconciliation” is the process of restoring the state of data on a failed server to its original, fault-tolerant state.
Data can and should preferably be refreshed on-the-fly. Access Servers performing read operations can notice inconsistencies in replicated data via the quorum voting algorithm. Normally, two reads are made out of the three replicas. If there is an inconsistency between the data returned, then a third read is used to establish a majority. The minority data may then be corrected on-the-fly. The correcting write is not a critical operation: if it fails, the system is no worse off than before. Therefore, there is no reason to make the initial request wait for this reconciliation attempt to complete. Quorum writes always reestablish consistency, since they will overwrite inconsistent data.
If there is reason to suspect that a Data Server may contain inconsistent data, then the Access Servers can be asked to always include that machine in their two-out-of-three choice for reads, so that it is brought in step with the quorum more quickly. On the other hand, to avoid overwhelming a new machine, they can also be instructed to tend to avoid the same server. A dynamic load-balancing scheme can vary between these alternatives to ensure that inconsistencies are discovered as quickly as possible but without impacting the quality of service. Optionally, a weighting scheme could be passed by the Command Server to each Access Server and updated on a regular basis.
The most extreme case of reconciliation is when the data content of a Data Server is completely lost, whether through disk failure or complete machine failure. In this case, every read on the replacement machine will be inconsistent, not just on those data items updated during the outage.
In order to update a particular Data Server, it is tempting to import data from other Data Servers in order to update the particular Data Server as much as possible before placing it into service. However, since it is undesirable to halt service while the data is being transferred, there will be concurrent transactions on the live Data Servers during the update. As a result, the data copied over potentially will become stale as soon as it is written to the new disk, and becomes staler with each passing moment. Unless some kind of snapshot is made or synchronization steps taken, one also runs the risk of a copy being made from a moving target, which may result in the copied image not even being well-formed as a data set. The copied image may then be repaired by a second offline update pass on the data, but transactions still continue. After some time, the data may be close enough to risk a limited interruption in service to fully get caught up and resume with the new data.
However, it is also possible to place the new machine into service with no warmup, begin to fill in data on-demand as transactions happen, using load-balancing to avoid overloading the new machine, and start a background process to update all data that is touched on-the-fly. Since this scheme does no unnecessary work and only uses mechanisms which have to be developed for other purposes as well, it is the preferred reconciliation method.
A background sweeper ensures that all data are eventually repaired. In addition to repairing data on-demand, the Command Server can ask one or more Access Servers to actively check for corruption of data by progressively scanning all data on the Data Servers and ensuring consistency among their replicas. Such a scan can run at a low priority to ensure that it does not degrade the performance of the system. During busy times, it may be shut off entirely until capacity is available to run without negatively impacting service. For reconciliation of a particular Data Server, a sweep can be modified so that it only checks consistency of data that have at least one replica on the Data Server in question. Once this sweep is done, fault tolerance has been restored.
The sweeper process can save network bandwidth, at the expense of a negligible risk, by asking the Data Servers to compute checksums or hashes of data instead of transmitting the actual data. The hash or checksum must be based upon a standard presentation of the data, so care must be taken if an upgraded Data Server uses a new low-level file format to hold the data. The hash value should be large enough to make hash collisions very unlikely, given the volume of data on the system.
5. Migration of Data
The migration of data involves the rebalancing of data once a new Data Server has been added to the system (not merely replacing a failed server) or when a Data Server is removed permanently. This changes the distribution of the replicas of data. The challenge is to keep the system fault-tolerant while data is migrated from old to new locations.
As with reconciliation, data migration should happen on-the-fly to ensure consistency, maintain fault tolerance, and avoid an interruption of service. Again, as with reconciliation, one establishes a way for the migration to happen while still satisfying current requests. Data migration happens “on demand”, that is, as data elements are accessed through normal use they are migrated as they are accessed. At the same time or after some time has passed, a “sweeper” process is started which migrates non-accessed data. Finally, after all the data is suspected to have been migrated, a consistency checker is run to be sure that this is the case, at which point the migration is finished.
Migration is completed in two phases. This two-phase migration establishes a “before” and “after” location set for each data item. This is established by the Data Server, and this information is pushed to each Access Server. Since the Access Servers do not communicate among themselves, at any one time it is possible for each Access Server to perceive a different set of Data Servers as currently being accessible via the Data Network(s). Further, there is no way to assure that all Access Servers receive the information about the changed set of Data Servers simultaneously, hence the need for a multi-phase protocol.
In Phase 1 of this protocol, each Access Server is updated about the “before” and “after” location sets of each data. Of course, a list of these locations is not provided, but two hashing algorithms are provided to act on each data element such that one can determine on which Data Server they are supposed to reside in each location set. During migration, data modification takes place in the following way: First, the Access Server obtains a lock in the “after” location, then it obtains a lock in the “before” location. After both locks have been obtained, the Access Server checks for data existence in the “after” location. If it does not find it there, then the data item is updated in the old location only.
In Phase 2, it is now only necessary to obtain locks in the “after” locations. Any data found in the “after” location will be updated in place. Any data found in the “before” location will be moved to the “after” location. Note that under this scheme, a lock is placed on a data item in the “after” location whether it exists there or not.
If in Phase 1 an Access Server discovers that data has moved to its “after” location (by checking there after it has locked the data, but before it performs an update in the “before” location), then it immediately proceeds to Phase 2.
Since the Command Server establishes the transition from one phase of the migration to another, it can make sure that no two Access Servers have an incompatible view of the system. The Command Server will inform each Access Server that it should now be in Phase 1, but it will not give the go-ahead to transition to Phase 2 until each Access Server has acknowledged that it is operating in Phase 1. Once it has received this notification, the Command Server will give instructions to transition to Phase 2. At any time after the Access Servers have responded that they are in Phase 2, the Command Server may instruct an Access Server to begin forcibly locking and migrating all unaccessed data from the “before” to “after” locations. Once this has been accomplished, the Command Server will instruct an Access Server to sweep over all the data to make sure everything has been migrated. Once this has been determined to be the case, all the Access Servers can be informed that the “before” View is no longer relevant.
At any time, some Access Servers may be in one phase while others are in an immediately-preceding or following phase without damage to the data integrity of the system. However, it cannot be the case that some Access Servers are in each of four different states, pre-Phase 1, Phase 1, Phase 2, post-Phase 2. Since the Command Server controls the state change, it can make certain that this does not occur before every Access Server is ready.
During migration, all operations must satisfy the redundancy requirements of both Views during Phase 2. Because of the quorum algorithm, a single fault can be tolerated during this period, even if the replica sets of the two Views overlap. Note that it is possible for the replica locations of the “before” and “after” Views to partially or totally overlap, even certain if the number of Data Servers is small or Consistent Hashing is used to determine the two Views. In this case, replicas on the overlapping Data Servers need not be moved at all. This is a desirable property.
During migration, the two Views are not symmetric—the “after” View must be regarded as a tentative View that one is trying to establish, and certain kinds of errors in it should be corrected without triggering any warnings. For example, when data are referenced for the first time during Phase 2, all read attempts in the “after” View will fail.
Reconciliation may also occur during a migration. There are no problems with handling the on-the-fly repair aspects of reconciliation during a migration. The usual reconciliation algorithm repairs data in both the new and the old Views. The only complication is the need to run a sweep of the entire data set both for reconciliation and migration safety.
D. Immutable Messages
1. Stable Storage
“Stable storage” is a way to store immutable data and tolerate f faults with f+1 redundancy. For an introduction to stable storage, see e.g., Lampson, B. W., Ethernet, pup and violet, Distributed systems: Architecture and Implementation, No. 105, Lecture Notes in Computer Science, pages 265–273, Berlin: Springer-Verlag, 1981, the disclosure of which is hereby incorporated by reference. The basic approach is to store f+1 copies of the data and read any available copy. The data may never be modified, therefore any of the copies will be up-to-date and correct. This approach presumes that an attempt to retrieve a copy from a failed server may generate an error or may not return at all (i.e., server timeout) but will never return incorrect data. To protect against data corruption, the data may be augmented with a checksum if it is not provided by the underlying media. Although it is easy to incorporate such a checksum, the design presented below assumes that it is not present.
It is important that failures during initialization of stable storage do not result in the creation of incomplete copies of the data, lest a later read operation encounter this incomplete object. Since only one read request is issued, the discrepancy with other data may never be noticed. This problem can be avoided by any of several techniques.
If the Data Servers are storing stable storage data in files in its underlying file system(s), there are two options. If that file system supports an atomic file creation operation, the problem is taken care of already. If not, the file can be written using a temporary file name, then renamed to its permanent file name: the semantics of the UNIX file rename( ) system call provide the required atomicity. Temporary file names use a well-known naming convention, e.g., a “.tmp” suffix, to clearly identify their temporary nature. Any temporary file last modified more than a few hours ago is clearly the leftover of a failed or interrupted stable storage file creation attempt and can be removed with impunity.
If the Data Servers are implemented using a DBMS (database management system) or using some other method which provides transaction-oriented semantics, then the Data Servers can rely on that mechanism to implement stable storage.
The stable storage files are named with an identifier, or a handle, which can be input to the hashing algorithm to return a set of Data Servers. For a single-failure tolerant, one only needs f+1=2 Data Servers to reliably store the data. However, in order to ensure that one can store the data with fault-tolerant redundancy even if a Data Server is temporarily unavailable, a set of 2f+1=3 Data Servers is instead employed, choosing f+1 out of that set to actually store the replicas. Therefore, even if one Data Server is down, every new message that enters the system will still be stored twice.
For a reliable read of stable storage, the entity name is consistently hashed to get a set of 2f+1 Data Servers. One of these Data Servers is chosen arbitrarily (for this invention the best mode is sequentially), and a read attempt is made. If the stable storage datum is not available on that Data Server, the others in the set are tried one-by-one until the datum is found. If the data are found on any of the servers, the information read will be correct, because the system would not have created a reference to a data instance if it did not know that that data instance was completely intact. If the data are not found on any of the servers, then more failures have occurred than the cluster's fault-tolerant configuration can handle.
2. Message Anatomy, Queueing, and Delivery
Data is placed into stable storage only when it can no longer change. In a sendmail system, once an SMTP message is successfully queued, its body will never again be modified. There are cases where the MTA may be configured to perform data format conversions of the message body, e.g., 7-bit- to 8-bit-significant format or Quoted-Printable (RFC2045, section 6.7) to 8-bit-significant format. In these cases, the data is converted before commitment to stable storage. If another conversion is required later, the conversion is done after the data is read from stable storage. In no case is the data rewritten after commitment to stable storage.
Headers, however, are not delivered as they are received. Most message headers are stored in stable storage. Those headers which require modification after commitment to stable storage are constructed dynamically from that message's metadata. By providing ways for these processes to persistently store their metadata via other means, one can treat message headers as immutable once the message is written into some mailbox but not as it sits in the queue.
3. Reference Counting
A message header or body can be arbitrarily shared through delivery to multiple recipients or IMAP COPY operations. The system needs to be able to delete a stable storage object once, and only when all references to it have been deleted. There are two general approaches to storage reclamation of this sort: garbage collection and reference counting. In a system with more complicated references between objects, garbage collection has some compelling advantages. Modem garbage collectors can be real-time, distributed, cache-conscious, and very efficient. However, such collectors are complex, especially considering the large scale, fault-tolerant environments that this e-mail system will be deployed in. A general-purpose garbage collector would be overkill in such an environment.
Instead, in the currently-preferred embodiment, reference counting is used to reclaim stable storage. Associated with each stable storage item is a count of the number of queue or folder references to it. The count starts at one when the stable storage object is created. When the count reaches zero, the storage space used by the object is reclaimed safely, since there are no references to the object, and no new references can be created.
Since the stable storage object and the reference(s) to it will, in general, be on different Data Servers, one has an atomicity problem with manipulating the reference count: one cannot create or delete the message reference and modify the reference count simultaneously. Even a two-phase commit protocol cannot prevent this, as one is not guaranteed that crashed nodes will eventually restart. One is either faced with the anomaly of the reference count being too high, which can cause a storage leak, or the reference count being too low, which can cause dangling pointers. Because dangling pointers cause data loss, storage leaks are a lesser problem to the system (as long as there is some way of correcting them). Therefore, the reference count operations are ordered such that failures will never leave a dangling pointer.
There are two methods of validating reference counts to detect storage leaks: scanning all references and recomputing the counts, or keeping backreferences from the stable storage to all references for it. Scanning all references and recomputing the reference counts is tantamount to garbage collection and is subject to the problems of scale and fault tolerance identified above. As long as the number of references to a stable storage object are kept to a reasonable number, backreferences represent the currently-preferred approach. If a stable storage object would require too many references (and thus also backreferences) to be “reasonable”, another copy of that object is made, and the references are split between the copies. For example, if a single message is delivered to 10,000 local recipients, but the implementation can only handle 4,096 references per stable storage object, then each of the message's stable storage objects can be copied twice, and the 10,000 references can be divided evenly among each of the three copies of each object.
4. Reconciliation of Stable Storage Errors
The failure of a Data Server may cause the number of replicas of any given stable storage object to drop below the f+1 copy threshold. Discovery and repair of these errors is straightforward.
When the system attempts to retrieve a copy of a stable storage object, the object's location set of 2f+1 Data Servers is first determined. A Data Server is chosen from that location set in an attempt to retrieve the object. Each set is tried in any order, typically sequentially in the absence of any specific reasons (e.g., loading or availability) that a particular Data Server should be skipped. Once the object is found and acted upon, it is not necessary to contact any other Data Servers.
The reconciliation algorithm, due to the immutable nature of stable storage objects, is simple: continue searching the object's location set until a copy of the object is found. Then create copies of that object on surviving Data Servers, atomically, in the location set until the number of available copies rises to f+1. If a failed Data Server later returns back into service, then more than f+1 copies of the object will be available, but because all copies of a stable storage object are by definition immutable, the message store remains consistent. If desired the extraneous copies can be safely deleted by periodic sweeper processes, but that does not affect the running system.
E. Data Storage and Data Structures
1. Single-Instance Message Store
The system implements what is commonly called a “single-instance message store”. In a mail server using a more traditional mailbox format (such as the UNIX mailbox format) for message storage, a message with several local recipients will be stored several different times, once per recipient. While easy to implement, this replication consumes a fair amount of unnecessary disk space. MIME attachments of multi-megabyte files are becoming increasingly common. Even though disk space is relatively cheap today, system managers prefer not to spend money on disk storage when the multiple message copies do not increase fault tolerance. Thus, in accordance with the present invention, the system writes the contents of each message only once (not including copies made for redundancy's sake); pointers to the message text location are inserted in each recipient's mailbox. The chief reason for implementing this in the system is not for the space it saves but reducing the I/O bandwith required to deliver messages. A single-instance message store facilitates this by writing message bodies to disk only once.
2. Extensive Metadata Use
The system stores messages with a model similar to a UNIX file system. The headers and body of a message are stored on the Data Servers in files; these files act like UNIX file system “inodes” and their associated storage blocks. Metadata files, one per mail folder, contain the references which describe where each message data file is located, and thus resemble UNIX file system directory files. As with UNIX inodes, the header files and body files use link reference counts in order to determine when the storage space they occupy can be deallocated.
The analogy falls short when considering the backreference information in stable storage object. UNIX file systems do not include backreference information in the inode. UNIX file system consistency checking programs (usually called “fsck”) instead scan all forward references in directories and recomputes all reference counts. This method, as mentioned earlier, is impractical for a distributed system of this scale; thus backreferences are used.
3. Consistent Hashing
Consistent hashing is an algorithm that permits expansion or contraction of the Data Server pool while migrating a minimal number of mailboxes, queues, and messages. It is described in Kanger, Lehman, Leighton, Levine, Lewin, and Panigrahy, Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web, Massachusetts Institute of Technology, 1997, the disclosure of which is hereby incorporated by reference.
Consistent hashing maps data objects (e.g., strings) to a finite set of buckets. Unlike other hashing algorithms, when a new bucket is added to the pool, consistent hashing will only occasionally map an object from an old bucket into the new bucket. It will never remap an object from one old bucket to another old bucket.
The algorithm consists of two stages. In the first stage, the data objects are mapped onto the half-closed [0,1) unit interval. Any conventional hashing function may be used, but it must provide a very uniform distribution across its range. In the second stage, buckets are mapped onto sets of points on the unit interval. The hash of an object is mapped to the nearest bucket on the unit interval (wrapping around at the ends of the interval).
As buckets are added to the hashing scheme, the points they map to on the unit interval become the nearest bucket for some objects. For the majority of objects, an old bucket point is closer and remains the closer. Therefore, the majority of objects will continue to map to their old buckets. Similarly, when a bucket is removed there is a minimum of disturbance.
Assuming a uniform distribution of objects on the unit interval, the uniformity of the bucket mapping depends on how uniformly the buckets are mapped onto [0,1). Ideally, each bucket's “catch basin” on the unit interval will have equal area. A reasonable way to do this is to choose an upper limit B on the number of buckets and have each bucket map onto log(B) random, pseudorandom, or evenly-spaced points on [0,1).
The invention requires a slight variation of the algorithm: it uses consistent hashing to choose the 2f+1 Data Servers to store replicated metadata objects and f+1 Data Servers to store stable storage objects. This is done by choosing the closest 2f+1 buckets (or f+1 buckets) to the object's hash, computed with a hashing algorithm, such as MD5. Each bucket is assigned a Data Server. If two buckets, which are nearest an object's hash value, are assigned to the same Data Server, one bucket is ignored and the next closest bucket is chosen. This process repeats until 2f+1 (or f+1) unique Data Servers have been selected.
This algorithm ensures minimum disruption of the bucket set as buckets are added or deleted. For any object hash, the bucket set changes by, at most, the number of buckets added. If a conservative approach is taken, and only a single Data Server is added or removed each time, the bucket set can change by, at most, one bucket. Therefore, at most, only one copy of a redundantly-stored object must be moved in a Data Migration.
4. Access Servers Determine Which Data Servers Store Objects
The Access Servers determine where all files are stored. They use handles, or opaque objects of fixed size, to determine which Data Servers may store copies of any given data object. Handles are generated by one of two methods, depending on the type of object. The Command Server distributes the cluster's current View, which includes bucket-to-interval and bucket-to-Data Server mappings for use with the Consistent Hashing Algorithm, to each Access Server.
Given an object's handle, the Access Server uses the Consistent Hashing Algorithm to determine the set of Data Servers which may store the object. Given an object's handle and a small amount of additional information about the object (to assist with handle collisions), a Data Server can then uniquely locate that object within its file system(s) or determine that the object does not exist.
A situation which cannot occur in a UNIX file system but can occur in a system cluster is an attempt to allocate the same inode. A UNIX file system keeps paranoid control over inode allocation. The distributed nature of a cluster in the system of the present invention makes such control difficult. It is possible for a newly-generated handle to collide with a previously-written file or, worse yet, be used in a race (probably with another Access Server) with the creation of another file using the same handle. The Data Servers therefore attempt to detect these collisions. If a collision is detected, the Access Server will simply generate a new handle and try again.
It may be desirable to influence the PRNG with the cluster's workload distribution information. Typically, the “Law of Large Numbers” will create a fair distribution of messages across all Data Servers. Similarly, it is expected that operational workloads will be fairly evenly distributed across all Data Servers. However, if particular Data Servers have uneven workload, one may be able to weight the PRNG increase or reduce the resource burden on individual Data Servers.
5. Handle Generation
Two different methods are used to generate handles for a data object. The methods for metadata objects and for stable storage objects are discussed below.
All metadata objects have names that can be derived from the type of object. Each message delivery queue is assigned a name by system administrators, e.g., “as5-default-queue”. Each mailbox is named by the username and domain of its owner, e.g., “bob@sendmail.com” or “bob@example.com”. Each folder is named by its folder name as well as the mailbox which contains it, e.g., “INBOX” of “bob@example.com” or “letters/grandma” of “bob@example.com”. The mailbox name and folder name are concatenated together, separated by a colon.
The handle of a metadata object is the result of an MD5 checksum of its name, e.g., MD5(“as5-default-queue”), MD5(“bob:example.com”), or MD5(“bob:example.com:INBOX”). This handle input to the Consistent Hashing Algorithm to determine the set of 2f+1 Data Servers which store copies of this metadata object.
There is a very small, but non-zero, probability that the handle of two distinct metadata objects may be equal. It is the Data Server's responsibility to store the un-hashed name of each object in order to detect accidental collisions. All Data Protocol operations upon metadata objects identify the object by type (i.e., queue, mailbox, or folder) and un-hashed name.
Stable storage objects are accessed in a fundamentally different manner than metadata objects. To continue with the UNIX file system analogy, a UNIX file cannot normally be operated upon without first knowing its name. System calls such as rename( ) and unlink( ) operate require only a file name(s). System calls such as open( ) return a file descriptor which is used in subsequent operations, such as read( ) and write( ). The operating system uses the file name to determine which file system stores the file and the inode of the file's contents. The (file system, inode number) tuple is state maintained with the file descriptor state; for each descriptor-based operation, e.g., read( ), the operating system utilizes the (file system, inode number) tuple to actually operate upon the proper file.
The handles for stable storage objects are generated in a different manner because they are accessed in a different manner: as with UNIX file system inodes, they cannot normally be accessed without first consulting one or more metadata objects to determine the stable storage object's handle (which is stored in metadata). Therefore the system has much greater latitude in selecting handles for stable storage objects. Stable storage objects need not be stored at well-known locations, i.e., well-known names.
Two methods can be used to generate handles for stable storage objects. The first is a pseudo-random number generator (PRNG). The only requirements for the PRNG are that it has the same range and size as the hash used for metadata object handles (i.e., 128 bits for an MD5 hash) and that it has an even distribution across the entire range. The other method utilizes system information to generate a handle guaranteed to be unique throughout the lifetime of the cluster. This method combines the current wall-clock time, Access Server hostname, UNIX process ID number, process thread ID number, and an incremented counter to create the handle.
The latter method has the advantage that there is no chance, by definition, that two identical handles can be generated. It is immune to several subtle race conditions generated by the former method and thus is the preferred implementation.
6. Ordering of Operations Upon Stable Storage and Metadata
The currently-preferred embodiment does not provide any mechanism to perform multiple data-changing operations atomically, i.e., in a transaction-oriented or “all-or-none” manner, across Data Servers. In order for the stable storage object reference counting mechanism to function properly, it is important that certain orders of operation be strictly adhered to. These orders of operation cannot ensure that the reference counts of stable storage objects are always correct, but they can ensure that all stable storage objects' reference counts, if incorrect, are always too high.
The invention uses the following orders of operation to assist with the maintenance of cluster-wide message-store integrity:
- (1) All f+1 replicas of a stable storage object must be successfully committed to persistent storage before any of the 2f+1 replicas of a metadata reference pointing to that object may be created.
- (2) The reference counts of all f+1 replicas of a stable storage object must be incremented before any of the 2f+1 replicas of another metadata reference pointing to that object may be created.
- (3) All 2f+1 replicas of a metadata reference pointing to a stable storage object must be deleted before the reference count of any of the stable storage object's f+1 replicas may be decremented.
Due to the close resemblance between the UNIX file system's inodes and directories and the invention's stable storage objects and their metadata, it is not surprising that the orders of operations to prevent reference counts from falling below their correct value are the same. The Berkeley Fast File System (FFS) is a widely-used file system by several major variations of UNIX. See, e.g., Marshall K McKusick et al., The Design and Implementation of the 4.4BSD Operating System, Addison Wesley, 1996, particularly at Chapter 8, page 284, for a discussion of the synchronous order of operations used by FFS to help maintain file system integrity, the disclosure of which is hereby incorporated by reference.
7. Actual Implementation in a File System-Based Environment
(a) General
Whereas an RDBMS or other database system can be used in the invention, the UNIX file system, together with the well-defined semantics of several UNIX I/O system calls, provide the support required to implement f+1-distributed stable storage and 2f+1-distributed metadata.
(b) Stable Storage Object Structure
The contents of a stable storage object are, by definition, immutable. However, there are attributes of a stable storage object which must change: the reference count and the list of backreferences. These mutable attributes must be maintained in a manner which maintains message integrity despite having only f+1 replicas to tolerate f faults.
The UNIX file system stores the contents of a file and its associated attributes (e.g., last modification timestamp, access list) in an “index node” or “inode”. The inode does not, however, include the file's name. The file's name is stored independently within the file system's name space. The name space may contain several names which all refer to the same inode and thus to the same file. See the above-mentioned McKusick et al. reference at p. 251,
Each inode has a reference count, which keeps track of the number of names within the name space that refer to the inode. When a new name for the file is added to the name space (e.g., by the creat( ) or link( ) system calls), the inode's reference count is incremented. Conversely, when a name is removed from the name space (e.g., by the unlink( ) system call), the corresponding inode's reference count is decremented. If an inode's reference count drops to zero, all storage resources utilized by the inode are freed by the file system. See the above-mentioned McKusick et al. reference for a thorough discussion of the UNIX file system.
Three UNIX file system-related system calls are guaranteed to operate in an atomic manner:
- (1) The rename( ) system call, used to change the name of a file and/or move it to a different subdirectory;
- (2) the link( ) system call, which creates an alternative name for the same file in a different place within the file system's name space; and
- (3) The unlink( ) system call, which removes one of (the possibly many) names of a file from the file system's name space.
These three system calls, in concert with the orders of operation described above, are sufficient to maintain message integrity within the invention's message store.
The UNIX file system provides an atomic file creation operation, but the guarantee is limited to the allocation of the inode and a single insertion into the file system's name space. It does not guarantee that data can also be simultaneously written to the newly-allocated inode.
For example, assume the MTA needs to queue a 100-megabyte incoming e-mail message. A Client Daemon instructs f+1 Data Servers to create a stable storage object to store the body of the message. Each Data Server creates a standard UNIX file within one of its file systems to store the body. As the message data is received by the MTA, it is sent to each Data Server, which appends it to the file. However, after 95% of the message data has been received, one of the f+1 Data Servers crashes. The UNIX file system does not provide a mechanism to remove the incomplete file when the Data Server reboots.
At a glance, the orders of operation used by the invention do not require it: metadata references to a new stable storage object cannot be created until all replicas of the object have been successfully created. The Client Daemon is aware that only f of the object's replicas were created successfully. The Client Daemon may create another replica of the object on another Data Server (i.e., message queueing and delivery proceeds successfully), or it may pass the error back to the MTA (i.e., message queueing fails, higher-level protocols are used to retransmit the e-mail message at a later time). Neither case results in message store corruption.
However, in the event that a Data Server crashes with a large storage file (e.g., 95-megabyte file) which is unaccounted for, a resource “leak” results, thus wasting storage space. Without checking that object's backreferences, there is no way for the system to tell if the object is legitimately linked into the message store or if it should be removed and its storage space reclaimed. In order to safely reclaim such lost space, stable storage objects are created using the following algorithm:
- 1. A new file is opened using a temporary file name. The naming convention used specifically identifies the file as being temporary in nature, e.g., by using a “.tmp” suffix.
- 2. The object's data is appended to the temporary file as it becomes available. Each time data is appended to the file, its inode's last modification timestamp is updated.
- 3. If there are long periods of time when no new data for the object is available, the file's last modification timestamp is updated as an indicator that the temporary file is still in use.
- 4. When all of the object's data have been received and have been successfully written to persistent storage (i.e., using the fsync( ) system call), the file is renamed using a naming convention separate from the temporary file naming convention.
- 5. Periodically the Data Server's file systems are swept for all files named with the temporary file naming convention. Any such file with a last modification timestamp older than a configurable timeout value, e.g., 1 day, is a temporary file that has been abandoned and can be deleted to reuse its storage resources.
This algorithm does not rely on checking the object's backreferences, which is a much more time- and I/O-consuming activity. It does not directly address the stable storage object's reference count or backreferences attributes. However, a simple modification of the algorithm can safely create all of an object's required attributes. The change is simple: use a naming convention for the “final” file name which encodes the object's backreference attribute into the file name. Nothing need be done explicitly for maintaining the reference count attribute: the UNIX file system automatically does it.
This file naming convention uses the object's handle to create the directory pathname used for the “final” name. The handle is converted to an ASCII hexadecimal representation, e.g., 0xD41D8CD98F00B204E9800998ECF8427E. The backreference attribute depends on whether the backreference is for a message delivery queue control file or for a block of message text. For the former, the message queue name and queue control identifier are concatenated, using a colon as a separator, e.g., “as5-default-queue:CAA57074”. For the latter, the user name, domain name, and folder name are concatenated, using colons as separators, e.g., “bob@example.com:f:INBOX”. Finally, the complete file name is created by concatenating the backreference type (e.g., “q” for queue control file, “H” for message header, etc.), the hexadecimal handle, and the encoded backreference, e.g., “q:D41D8CD98F00B204E9800998ECF8427E:as5-default-queue:CAA57074” or “S:D41D8CD98F00B204E9800998ECF8427E:bob@example.com:INBOX”.
UNIX file systems, which few exceptions, can provide acceptable I/O performance with more than a few thousand files in any particular directory.
Typical file systems provide diminishing performance characteristics as large numbers of file system entities (files, directories, etc.) are stored in a single directory. In this scheme, all stable storage object files could be placed in the same directory, but the resulting performance could be disastrous. Therefore it is advantageous to take advantage of the UNIX file system's hierarchical nature to distribute stable storage objects' files evenly across a large number of directories and subdirectories to avoid the performance penalty.
The evenly-distributed methods used for generating stable storage object handles becomes significant here: portions of the handle itself can be used to generate the intermediate subdirectory names. For example, to create three intermediate subdirectory levels, each consisting of 256 members, the first 3 bytes of the handle are converted to an ASCII hexadecimal representation and separated by the directory separator, the forward slash: “/”. For example, the handle 0xD41D8CD98F00B204E9800998ECF8427E would yield “D4/1D/8C” as an intermediate subdirectory prefix.
The final step of the stable storage object file naming algorithm is the path prefix. This prefix is simply a constant and can be set at the system administrator's discretion. For the purposes of discussion, the prefix “/smm/data” will be used as a path name for all data references in all examples. Under the “/smm/data” tree subdirectories named “messages”, “queue”, and “mailboxes” will be used to represent the location of immutable message components, queue metadata, and mailbox metadata respectively.
To illustrate the stable storage object file naming algorithm, assume a stable storage object, storing message text, has the handle 0xD41D8CD98F00B204E9800998ECF8427E and that it is to be referenced by a metadata pointer in a queue control file “CAA57074” in the message delivery queue “as5-default-queue”. The full pathname to the file storing the object is “/smm/data/messages/messages/D4/1D/8C/S:D41D8CD98F00B204E9800998ECF8427E:as5-default-queue:CAA57074”.
The format of a stable storage object file 500, as shown in
As described earlier, each name for the stable storage object file within the UNIX file system's namespace represents a backreference to a metadata association with the file. The sum total of these links is stored in the UNIX file system inode reference count.
Most UNIX file systems limit the size of an inode's reference count and thus the number of names a file may have. The Berkeley FFS, for example, stores the inode reference count in a 16-bit value, which creates a limit of 65,535 links. If a stable storage object requires more links than the file system can support, another copy of that object is made, and the references are split between the copies. For example, if a single message is to be delivered to 100,000 local recipients, but the underlying file system can only support 65,535 links per inode, then an additional copy of each of the message's stable storage object(s) can be made, and the 100,000 references can be divided evenly among both copies of each object.
(c) Metadata Structure
Both message folders and message delivery queues store 2f+1 metadata replicas of each metadata object. However, the I/O access patterns of the two types of metadata are quite different. Accordingly, they are stored in different manners within the UNIX file system.
(1) Message Folder Metadata
A message folder metadata file contains all information required for mailbox maintenance, including the handles used to retrieve the text of the messages stored in the folder. A mailbox is defined as the set of IMAP folders owned by a single user. The default folder to which the LDA delivers new messages is called an INBOX. Each POP user's mailbox is their IMAP INBOX folder. The system makes no distinction between the two, and a user's POP mailbox and IMAP INBOX may be referred to interchangeably for the discussion which follows.
As illustrated in
(2) Message Delivery Queue Metadata
Message delivery queue metadata is accessed quite differently than message folder metadata. The reasons lie with implementation of the sendmail MTA and the best mode to date for integrating the invention with the pre-existing sendmail code base.
Each message, as it is received by sendmail, is assigned a queue control identifier. That identifier is used to name the queue control file for the message, which is stored in a central UNIX directory. The queue control file contains all data relevant to each recipient's delivery status. Once the message has been delivered successfully to all recipients (or if delivery to a recipient(s) fails with a permanent error), the queue control file is deleted from the queue directory, which signifies that the MTA has finished processing the message. At any time the MTA is attempting to deliver a message, it obtains an exclusive lock on the corresponding queue control file to prevent other MTA processes from attempting delivery of that message. As the delivery state of a message changes (e.g., message delivered to 10 recipients successfully, 8 recipients remaining), the MTA will periodically replace the queue control file with another containing new state information; if the MTA crashes, the newer state in the queue control file will avoid re-delivery to recipients who have already received a copy of the message.
The historical mechanism used by sendmail to create and update its queue control files resembles the invention's stable storage objects in two important ways. First, a queue control file either exists in the queue control file directory (often referred to as “the queue”) or it does not. Second, queue control files are not updated “in place”: rather the new state is written to a temporary file, and then the new temporary file is renamed to the same name as the old queue control file. The UNIX file system provides atomic semantics in this case: the old queue control file is unlinked (and deleted, since its reference count is now zero, because additional links for queue control files are never created) and the new one assumes the name of the old one.
In a redundant environment, a sendmail process, the Client Daemon, or an entire Access Server may crash after some of the copies of the queue control file have been edited yet some have not. Queue control files cannot be stored using the stable storage algorithm, instead quorum voting must be used. This is because a queue control file may be replaced by another with the same name but with different contents. Since these cannot be told apart, if a process fails while substituting some, but not all, of a set of queue control files, doing a single read would not provide enough information to know if these operations have succeeded or failed. All queue control files are stored three times and read using quorum voting to guarantee their integrity.
The internal structure of a queue control file is, simply, the format of the sendmail queue control file (also known as the “qf” file) historically used by sendmail. It is described fully in Chapter 23 of sendmail, Second Edition by Costales and Allman, O'Reilly, 1997, the disclosure of which is hereby incorporated by reference. Only minor changes to its format:
- (1) The I code letter, the df file inode, no longer has any meaning and is no longer used.
- (2) A new code letter is added to store the handle, starting byte offset, and length of the message's body.
- (3) A new code letter is added to store the handle, starting byte offset, and length of the message's headers. Unlike the body, the queue control file may refer to several sets of headers for different classes of recipients. Therefore this new code letter would be used in a manner similar to the C code letter (i.e., the controlling user), possibly appearing multiple times.
The only significant difference with prior sendmail implementations is that the invention requires that all queue control files be written 2f+1 times. This provides fault tolerance via quorum voting: the consensus determines if the control file exists and, if so, its contents.
(d) Metadata File Naming Convention
One of the properties of metadata is that it provides the name space in which all objects in the system, metadata and stable storage objects alike, are locatable and retrievable. The lookup process for any object must start with a metadata object with a well-known name. The well-known name for a message delivery queue is assigned to the queue by the system's administrators, e.g., “as5-default-queue”. The well-known name for a mailbox is the user's name and domain name, e.g., “bob@example.com”. A message folder is named relative to its mailbox name.
Metadata file naming convention is very similar to the one described for stable storage objects. The chief difference is that the object's handle is generated by a hash, for example, using the MD5 algorithm, of the well-known name. The use of MD5 specifically is not recommended, as it is more computationally expensive than necessary for these purposes, but it will be used here as an example. In its best mode of operation, the use of any of several hash functions is recommended such as those described by Knuth in The Art of Computer Programming, Vol. 3, 2nd ed., pp. 513ff, which is hereby incorporated by reference.
One can calculate the path name of the metadata for a mailbox. For example, assume one wants to determine the full path for the mailbox of user “bob” within the domain “example.com”. First, the handle may be calculated: MD5(“bob@example.com”)=0xF1E2D1F542273BA0A30751482519C18C. Using the same intermediate subdirectory scheme, the “mailboxes” secondary identifier, and top-level prefix, the full pathname is determined to be “/smm/data/mailboxes/F1/E2/D1/m:bob@example.com”, where the type prefix “m” denotes a mailbox. This pathname is a directory which stores the metadata for the mailbox's folders, as described below.
The IMAP protocol provides a hierarchical namespace for naming folders. This hierarchy maps nicely onto the UNIX file system's hierarchy. The only difference is that the IMAP server can specify the hierarchy delimiter character used within the folder hierarchy; the UNIX file system must use the forward slash, “/”. The invention's IMAP server uses the colon, “:”, as the hierarchy delimiter.
The invention stores a mailbox's folder metadata object files within the UNIX file system hierarchy named by the mailbox's pathname (as described above). These metadata object files use a type prefix of “f” to denote folder metadata. If the folder's name contains the folder hierarchy delimiter, a UNIX file system subdirectory hierarchy is created to mirror the folder hierarchy. For example, the full pathnames for folders named “INBOX”, “letters:danny”, “letters:kirsten”, and “letters:dad:funny” belonging to the user “bob” within the domain “example.com”, utilizing the directory prefix schemes as all previous examples, are “/smm/data/mailboxes/F1/E2/D1/m:bob@example.com/f:INBOX”, “/smm/data/mailboxes/F1/E2/D1/m:bob@example.com/letters/f:danny”, “/smm/data/mailboxes/F1/E2/D1/m:bob@example.com/letters/f:kirsten”, and “/smm/data/mailboxes/F1/E2/D1/m:bob@example.com/letters/dad/f:funny”, respectively.
F. Detailed Operations Examples
For the purposes of this illustration, the following assumptions are made:
- (1) The current system View consists of 8 Data Servers, named DS0 through DS7.
- (2) The system is operating in a mode to withstand a single Data Server failure without data loss, i.e., the value of f, for the purposes of storing 2f+1 metadata replicas and f+1 stable storage object replicas, is 1.
- (3) The queue name assigned by the MTA or Client Daemon to the incoming message is “as5-default-queue”. The handle for this queue is MD5(“as5-default-queue”)=0x2F3436B2AB4813E7DB3885DFA2BA1579. The Consistent Hashing Algorithm determines that, in the current system View and fault tolerance level, this handle's Data Server location set is {DS1, DS2, DS3}.
- (4) The queue identifier assigned by the MTA to the incoming message is “CAA57074”.
- (5) The handle derived to store the message's body is 0xD41D8CD98F00B204E9800998ECF8427E. The handle and the current system View, by the Consistent Hashing Algorithm, determines that the handle's Data Server location set is {DS0, DS3, DS4}. Recall that despite the three Data Servers listed here, the stable storage item will be stored only on the first two in an f+1 redundant situation. It will be stored on the third Data Server in this list, DS4, if either DS0 or DS3 is not available at the time the Client Daemon goes to write this message body.
- (6) Data Servers store metadata and stable storage objects in an intermediate subdirectory hierarchy that uses three levels of 256 subdirectories each. The first 24 bits of the handle are used to name these subdirectories, or 8 bits per subdirectory. At each level, the 8-bit value is converted to an ASCII hexadecimal to name each subdirectory, i.e., at each level the subdirectories are named “00” through “FF”.
- (7) The message's headers, at queueing time, are stored within the message queue control file. This has been sendmail's queueing method for years; it gives the MTA great flexibility in how it rewrites the message's headers. While the MTA could have written the headers into the stable storage object used to store the message's body, or it could have written the headers into a separate stable storage object, for the purposes of this example, sendmail follows its historical behavior.
- (8) No errors occur during the processing of the message. If an error occurs, e.g., a queue control file with the same name already exists or a Data Server in an object's location set is down, the algorithm descriptions elsewhere in this document describe what steps, if any, must be taken to continue processing.
1. EXAMPLE
Message Queueing
With reference to
First, the MTA receives an SMTP connection from a peer server 701. The MTA assigns a message delivery queue and queue control identifier (called the “queue ID”) to the as-yet unknown message 702. The Client Daemon locks a message queue control file named by the queue ID on the 2f+1 Data Servers, then stores the message in the delivery queue 703 which verifies that a queue control file by that name does not already exist. Specifically, on each of the Data Servers, a lock is successfully granted for “CAA57074” in the queue “as5-default-queue”. It then verifies that the file /smm/data/queue/2F/34/36/Q:as5-default-queue/q:CAA57074 does not already exist.
When the SMTP DATA command is received, the MTA stores the message envelope and headers in its memory 704, and then selects a handle for storing the message's text. The Client Daemon selects two of the three eligible Data Servers in the handle's storage set to store the message's text in a stable storage object, DS0 and DS3, and asks them to prepare to create stable storage object replicas using that handle. If either is not up, DS4 will be queried to replace the unavailable Data Server. If more than one does not respond, the operation will be aborted 705.
From each Data Server's perspective, the file used to store a stable storage object with the handle 0xD41D8CD98F00B204E9800998ECF8427E is /smm/data/messages/D4/1D/8C/S:D41D8CD98F00B204E9800998ECF8427E:as5-default-queue:CAA57074. It verifies that that file does not already exist, and then opens the temporary file /smm/data/messages/D4/1D/8C/S:D41D8CD98F00B204E9800998ECF8427E.tmp to store the message text. In the file naming convention, the “.tmp” suffix denotes a temporary file.
As the body of the message is received, the data are passed in blocks to DS0 and DS3 714 and each Data Server appends these blocks to its respective temporary file. The data is copied in discrete blocks to avoid requiring the MTA, Client Daemon, or Data Server to buffer the entire message in memory at once.
When the SMTP DATA command is finished, the Access Server sends the final data block to the Client Daemon and tells it to finalize the stable storage object 706. Each Data Server flushes any pending data to persistent storage, e.g., disk and/or non-volatile RAM, the temporary file to its permanent name, e.g., /smm/data/messages/D4/1D/8C/S:D41D8CD98F00B204E9800998ECF8427E.tmp is renamed to: /smm/data/messages/D4/1D/8C/S:D41D8CD98F00B204E9800998ECF8427E:as5-default-queue:CAA57074.
Now that the message body is safely written, the MTA creates the message queue control file for this message 707. It writes the message queue control file to the Client Daemon, which creates replicas of it in the proper locations. A temporary file is created, e.g., /smm/data/queue/2F/34/36/Q:as5-default-queue/q:CAA57074.tmp. When all the message queue control file data have been received and flushed to persistent storage, the temporary file is renamed to /smm/data/queue/2F/34/36/Q:as5-default-queue/q:CAA57074 on 2f+1 servers.
At this point, the lock on the message queue control file is relinquished, and the message queueing phase is completed. The MTA ACKs the receipt of the message by returning the SMTP message: “250 Message accepted for delivery” to the originating MTA and the SMTP session terminates, as indicated at step 708. The message is safely queued and awaits further processing for delivery.
2. EXAMPLE
Scanning a Message Delivery Queue for Idle Jobs
All of the assumptions made in the previous example are also made for this example, as well as the following:
- It is assumed that only one queue, named “as5-default-queue”, is currently assigned to the MTA. If the MTA was assigned multiple message delivery queues, the same process described below would occur on each queue in parallel or sequentially.
First, a timer or other event within the MTA triggers a periodic scan of its message queue directories 801, as illustrated on
Then the MTA enters a loop where it locks each queue control file 802, attempts to read the contents of that file, and stores its contents in local memory. For illustration purposes, it is assumed there exists one message queue control file whose ID is “CAA57074”. If the Client Daemon cannot obtain a global lock 804, it assumes that some other MTA process is processing that control file, and it continues with the next control file at the top of the loop. If successfully locked 805, the MTA reads the contents of the control file 806: the Client Daemon issues read requests for the message queue control file “CAA57074” in the message delivery queue “as5-default-queue” to AS1, AS2, and AS3. Each Data Server, in turn, retrieves the contents of the file /smm/data/2F/34/36/Q:as5-default-queue/q:CAA57074. The Client Daemon uses quorum voting to determine the file's contents and returns the consensus data to the MTA and a delivery attempt may be made 808.
3. EXAMPLE
Message Delivery to a Non-local Recipient
In addition to the assumptions made in the previous examples, the following assumptions are made for the purposes of illustration:
- (1) The message is addressed to one non-local recipient, <user@domain.com>, and that this recipient's message can be delivered via SMTP to a remote SMTP server responsible (called the “mail exchanger”) for domain.com's e-mail.
- (2) The queue ID assigned to this message is “CAA57074” (the same as used in the previous example).
First, the message's queue ID must be known 901, as illustrated in
Transmission of the message to the remote host begins 903. The envelope information is exchanged according to the SMTP protocol. The MTA then sends the SMTP “DATA” command and begins sending the message's text to the remote server 904. The message's headers are already known, because they are also stored in the message queue control file. The contents of the message body, however, are not yet known. The message queue control file stores the handle H=0xD41D8CD98F00B204E9800998ECF8427E, starting byte offset O, and length L of the stable storage object which stores the message body.
The MTA asks the Client Daemon for a data stream for handle H, beginning at byte O for L bytes, as referenced by queue ID “CAA57074” in the message delivery queue “as5-default-queue” 905. The Client Daemon calculates the location set for the handle and chooses one of the Data Servers to read it. The Data Server determines that the path for H would be /smm/data/messages/D4/1D/8C/S:D41D8CD98F00B204E9800998ECF8427E:as5-default-queue:CAA57074. If the Data Server cannot read the file because it does not exist, the Client Daemon chooses another Data Server from the location set. If the file does exist, the Data Server begins streaming the data to the Client Daemon, which forwards the data stream to the MTA, which forwards it to the remote SMTP server.
When the SMTP DATA command has finished and the remote SMTP server has returned a 250 status code (i.e., the message has been received and stored), the SMTP session is terminated 906. Because there are no more recipients for this message, the message queue control file is deleted and unlocked 907. The Client Daemon asks each Data Server in the message queue control file's location to delete “CAA57074” in the “as5-default-queue”; each Data Server in turn deletes the file /smm/data/queue/2F/34/36/Q:as5-default-queue/q:CAA57074 and unlocks it. The MTA then deletes the stable storage object H, as referenced by queue ID “CAA5707438 in the message delivery queue “as5-default-queue” 908. Each Data Server then deletes the file /smm/data/messages/D4/1D/8C/S:D41D8CD98F00B204E9800998ECF8427E:as5-default-queue:CAA57074. If the file system's namespace contains no other links to that file, the file system will automatically reclaim the storage space used by it.
4. EXAMPLE
Message Delivery to Local Recipients
In addition to the assumptions made in the previous examples, the following assumptions are made for the purposes of illustration:
- (1) The MTA has already fully expanded the message's recipient list: it is addressed to one non-local recipient, <user@domain.com>, and two local recipients, <bob@example.com> and <carla@example.com>.
- (2) The handle for “bob@example.com” is 0x9181DEAB4E17620D3524AED08C1D099B. The handle for “carla@example.com” is 0xEFB0B76CCC1873F6C874CC85D6D0C165. Below, they are called H1 and H2, respectively.
- (3) The handle for the message's body stable storage object is 0xD41D8CD98F00B204E9800998ECF8427E (called Hb below).
First, the message's queue ID must be known, the message queue control file be locked, and the control file's contents be known 1001, as shown in
Next, an extended LMTP session is started with the LDA (Local Delivery Agent) 1002. An example session is shown below.
First, the MTA and the LDA exchange envelope information as defined by the protocol (explained fully in RFC2033) 1003. The MTA and LDA use their own extension to the LMTP protocol, the X-BODYREF command, to pass the handle for the message body through to the LDA., at 1004. This is enough information for the LDA to locate this data. Then when it comes time to deliver the message during the “DATA” phase of the LMTP session, the header is generated from the contents of the already-read queue control file and passed to the LDA.
The LDA creates a temporary header file in the same location as its corresponding body file at /smm/data/messages/D4/1D/8C/H:D41D8CD98F00B204E9800998ECF8427E.tmp.
Once the headers have been written out and synced to stable storage, the body file is linked to its new name 1005: /smm/data/messages/D4/1D/8C/S:D41D8CD98F00B204E9800998ECF8427E:bob@example .com:f:INBOX.
The header file is now renamed to its permanent name 1006: /smm/data/messages/D4/1D/8C/H:D41D8CD98F00B204E9800998ECF8427E:bob@example .com:INBOX.
Then, for each recipient's default folder, the LDA adds metadata references to the message in stable storage 1007 as well all other metadata components shown in previously-described
Now that the MTA has finished work, for now, on this queue item, it's time to update the message queue control file with new status data 1009. If the message were delivered to all recipients, the message queue control file would be deleted and the queue's reference to the stable storage object storing the message body would be removed 1010. However, there is still one recipient that has not had a successful delivery. Therefore the MTA tells the Client Daemon to replace the old message queue control file with a new one, writes the new control file, and closes it. The Client Daemon instructs each of the three Data Servers in the queue's location set to update the queue control file in “as5-default-queue” called “CAA57074”. The same process described in example #1 is used, with one exception: when the temporary file is renamed to /smm/data/queue/2F/34/36/Q:as5-default-queue/q:CAA57074, the old file “q:CAA5707438 is replaced as part of the operation.
5. EXAMPLE
Message Retrieval
- In addition to the assumptions made in the previous examples, the following assumptions are made for the purposes of illustration:
The owner of the mailbox “bob@example.com” has authenticated properly with an IMAP server and wishes to retrieve a message from the folder “INBOX”.
The message retrieval process is done in two phases. First, the contents of the folder must be determined from the folder's metadata. Second, the actual message text is retrieved from stable storage.
First, the IMAP server determines the handle for the mailbox “bob@example.com” 1101, as shown in
Now that the IMAP server knows that the folder contains a message, it can retrieve it when the user requests it. The IMAP server uses the previously-obtained folder metadata to get the handle, starting byte offset, and length of the header and body portions of the message 1104. The IMAP server asks the Client Daemon to retrieve the contents of the header stable storage object's handle at proper offset for the proper length 1105. The Client Daemon calculates the location set for the handle and choses one of the three Data Servers to stream the data to the requesting Access Server. If that Data Server does not have a copy of /smm/data/messages/7A/07/E8/H:7A07E8E4EBD01B8568DD6446F72513BC:bob @example.com:f:INBOX, then another Data Server is chosen. The data is forwarded by the Client Daemon to the IMAP server which then forwards it to the user.
The same process used to retrieve the message's header is used to retrieve the message's body 1105. The body may be stored in a different stable storage object; in this example, it is: its handle is 0xD41D8CD98F00B204E9800998ECF8427E. Otherwise, the process is identical.
6. EXAMPLE
Deleting a Message From a Folder
In addition to the assumptions made in the previous examples, the following assumptions are made for the purposes of illustration:
- (1) The owner of the “carla@example.com” mailbox wishes to delete a message from the “INBOX” folder. Recall that the handle for this mailbox is 0xEFB0B76CCC1873F6C874CC85D6D0C165.
- (2) The contents of the folder have already been determined, using the same process described in example #5 above. Assume the message to be deleted is the one described in example #4 above: recall that the header's stable storage object handle is 0x7A07E8E4EBD01B8568DD6446F72513BC and the body's stable storage object handle is 0xD41D8CD98F00B204E9800998ECF8427E.
First, the IMAP server asks the Client Daemon to remove the metadata for this message from the “INBOX” folder 901. For each of the three Data Servers, the Client Daemon requests that the folder be locked, the metadata for the message be removed, and the folder unlocked; each Data Server in parallel locks the folder, removes the metadata for the message from /smm/data/mailboxes/EF/B0/B6/m:carla@example.com/f:INBOX, and unlocks the folder. Like all edits of mutable data objects, it performs the edit by copying the file to the same name with a ”.tmp” extension, editing the file there, then atomically renaming it to its original name.
Then the IMAP server asks the Client Daemon to remove the backreferences from the message's header stable storage objects and thus decrement its reference count 902. For stable storage object, the Client Daemon asks the Data Servers in the object's location set to remove the backreference for the mailbox “carla@example.com” folder “INBOX” from the object's handle. For this example, this results in the following unlink( ) system call on each Data Server which might store this data object:
The same process is done for the message body's stable storage object 903. In this case, this results in the following unlink( ) system call on each Data Server:
F. The Reference Count Sweeper
- 1. Introduction
In an ideal world, the reference counting scheme should be all that is required to maintain the system's message store integrity and to manage the disk storage resources of all Data Servers. The world is of course far from ideal. Most UNIX file systems require a program such as fsck to check the consistency of the file system because certain file system operations cannot be performed atomically or within the context of “database transaction” semantics (with abort, roll-back/roll-forward, and logging capabilities). The problem of maintaining complete consistency within a system cluster is even more challenging than in a UNIX file system: at least the UNIX file system is maintained by only one computer. The files, links, reference counts, and backreferences maintained by a system cluster are scattered across a number of Data Servers.
Today, there exists a multitude of garbage collection techniques developed over the years, though they employ only a few major algorithms. The following describes a “mark and sweep” algorithm to correct the reference counts of header files and body files stored in a system cluster. The sweeper process relies on an important invariant to avoid data loss:
For every stable storage object F, r(F) ≧R(F), where r(F) is the on-disk reference count for F and R(F) is The True and Correct Reference Count for F.
2. The Sweeper Algorithm
This sweeper's algorithm does not correct all stable storage object reference count inaccuracies. Instead, its purpose is to find stable storage objects with non-zero reference counts which ought to be zero. Any objects with a True and Correct Reference Count of zero will be unlinked. A stable storage object that has a non-zero reference count which is too high does not hurt since, as long as it is non-zero, the system will not delete it. Thus, the system loses nothing by implementing this scheme even though it does not fix all incorrect reference counts.
At a given point in time, either automatic or administrator-initiated, the Command Server declares a reference count sweep S to begin at time T. Each Data Server receives the directive and starts a sweep of all of its metadata files. The process is divided into the following phases.
(a) Phase I: The Metadata Sweep
The entire metadata directory hierarchy is traversed recursively like the UNIX “find” utility does. Sequentially, each folder metadata file encountered is opened and parsed. For each unexpunged message in the folder, the system sends a reference message to the primary, secondary, and tertiary Data Servers that may be storing that message's header file and body file. The reference message contains the stable storage object type, handle, and the message delivery queue or mailbox+folder name. This phase of the sweeper need not attempt to verify that the reference is valid, as messages sent to the wrong Data Server will be detected and ignored in later phases.
Unlike a “find” operation, the recursive directory sweep should search subdirectories in a deterministic manner, e.g., by sorting subdirectory entries before processing them. This allows periodic checkpoints of the sweep to be done easily. If the Data Server crashes during the sweep, it can resume where it left off. The checkpoint frequency need not be very frequent: the algorithm does not care if a folder is swept multiple times. Nor does the algorithm care if folders are added, removed, or renamed after T but before the sweep process gets around to that folder. There are a couple of exceptions to this rule:
- 1. The algorithm cares about a race condition which would move a folder from an unswept portion to an already-swept portion of the Data Server's file system. The solution is to have the process attempting the folder move send reference messages for each message in the to-be-moved folder, regardless of whether or not the folder has been swept and regardless of whether the folder is being moved to a place in the file system which has already been swept. Once all the reference messages have been sent, the folder may be renamed.
- 2. The algorithm cares if a message is moved or copied from an unswept folder to a folder which has already been swept. The solution is to have the process of moving or copying the message send a reference message to each possible Data Server, regardless of the current state of the hierarchy sweep. Once the reference messages have been sent, the message may be copied.
- Both of these concerns are addressed by the Command Server. When a sweep is in progress, the Access Servers can guarantee that both of these actions can be handled in a safe manner, allowing nothing to escape being swept eventually.
- Once a Data Server finishes Phase I, it notifies the Command Server that it has finished Phase I of the sweep S.
(b) Phase II: Message Sorting
All of the reference messages are collected by Data Servers. When the Command Server has determined that all Data Servers have finished Phase I of the sweep S, the Command Server informs all Data Servers to begin Phase II. This phase involves processing and sorting all of the reference messages sent to it. The processing step involves converting the handle and folder/queue names to local pathnames. The sorting step involves creating a single large dictionary-order sort of all the paths. During sorting, all duplicates are removed from the final list.
Once a Data Server finishes Phase II, it notifies the Command Server that it has finished Phase II of the sweep S.
(c) Phase III: Sweeping the Header Files and Body Files
The Command Server informs the Data Server when it may begin Phase III of sweep S. This phase involves having each Data Server traverse its stable storage object storage hierarchy in the same dictionary sort order as the sort used in Phase II. The object list generated in this phase is compared to the list generated by Phase II. See table below for an example.
A differential algorithm is used to compare the two lists. If a stable storage object is found in Phase III that has no corresponding entry in the Phase II list, the object's creation timestamp is checked. If that timestamp is greater than T (the time at which sweep S began), the object is ignored. If the timestamp of the object is less than T, its reference count ought to be zero and it can be safely removed at any time.
Situations where an object reference in the Phase II list does not have a corresponding entry in the Phase III list are not problematic, due to the Reference Count Invariant. This situation means the object was swept in Phase I but legitimately deleted before Phase III got around to checking this directory. It could also mean that the object has not been deleted but rather that this Data Server is one which does not store a copy of the stable storage object.
Claims (40)
Priority Applications (2)
Applications Claiming Priority (1)
Publications (2)
Family
ID=26879923
Family Applications (1)
Country Status (1)
Cited By (97)
Families Citing this family (89)
- 2000
- 2000-12-12 US US09/735,130 patent/US7117246B2/en active Active | https://patents.google.com/patent/US7117246B2/en | CC-MAIN-2019-26 | refinedweb | 18,541 | 51.48 |
This message was posted using Email2Forum by msabir.
This message was posted using Email2Forum by msabir.
Hi Chris,
The following example shows typical code needed to export a report to a DOC file using Aspose.Words for JasperReports. More examples can be found in the demo reports included in the product download.
import com.aspose.words.jasperreports.*;
AWDocExporter exporter = new AWDocExporter();
File sourceFile = new File(fileName);
JasperPrint jasperPrint = (JasperPrint)JRLoader.loadObject(sourceFile);
exporter.setParameter(JRExporterParameter.JASPER_PRINT, jasperPrint);
File destFile = new File(sourceFile.getParent(), jasperPrint.getName() + ".doc");
exporter.setParameter(JRExporterParameter.OUTPUT_FILE_NAME, destFile.toString());
exporter.exportReport();
You can easily convert the doc to pdf format using Aspose.Words.
You need Aspose.Words for JasperReports.
Regards,
Imran Rafique
Hi Chris,
You can download the latest version from following link for testing.
The component will function as normal except for an evaluation limitation.
Regards,
Imran Rafique
Thank you
Hi
Hi Chris,
Thank you for request. You can convert the doc to pdf format using following code:
// Open document.
Document doc = new Document("C:\\Temp\\in.doc");
// Accept all changes.
doc.acceptAllRevisions();
// Save document to PDF.
doc.saveToPdf("C:\\Temp\\out.pdf");
Regards,
Imran Rafique
awesome thanks
com.aspose.words.Document
i have the Aspose.words.jdk16.jar in my classpath... do i need anything else?
Chris
Hello
Thanks for your inquiry. You should just add imports like com.aspose.words and a reference to JAR.
import com.aspose.words.*;
import com.aspose.words.DocumentBuilder;
Best regards,
Could there be an error in the latest Aspose.Words package?
java.lang.IncompatibleClassChangeError
i googled it and it says it might be an issue with the client jar
ill try the version before
Chris
Hi
Thanks for your request. The problem might be caused because there are two different versions of Aspose.Words for Jasper Reports. Have you tried to clear caches?
Also, do you use only Aspose.Words for Jasper Reports in your application or you also use Aspose.Words for Java? Maybe there is a conflict between these products. If so, I suppose you can try wrapping one Aspose.Words JAR into your own JAR and expose only methods, which you use. Maybe this could help you to resolve the problem.
Best regards,
i am trying to put both products… aspose.word and aspose words for jasper reports in my project… same thing happens too if i download an earlier version of apsose.words
Hi Chris,
Thanks for your request. You can try wrapping one Aspose.Words JAR into your own JAR and expose only methods, which you use. Maybe this could help you to resolve the problem.
Best regards, | https://forum.aspose.com/t/jasper-to-doc-and-doc-to-pdf/57377 | CC-MAIN-2021-21 | refinedweb | 435 | 53.78 |
PyQt5: Very slow QSortFilterProxyModel(not customized)
How do you optimize the performance of a QSortFilterProxyModel based on the example code below?
Example: In the example below I extract 200 records from the database, load them in a QSortFilterProxyModel & finally display them via a sort-able QTableView. Once the user click on a column header it takes about 20 seconds for 200 records to sort(?!) (If I sort the same amount of records in excel it takes less than a second)
from PyQt5 import QtCore, QtWidgets, QtSql from models import _databaseConnection class FormController(QtWidgets.QMainWindow): def __init__(self): super(FormController, self).__init__() #connect to db _databaseConnection.connect() #setup model modelSource = QtSql.QSqlQueryModel() modelSource.setQuery("""SELECT TOP 200 Entity.Name, Entity.Surname FROM Entity""") #setup proxy modelProxy = QtCore.QSortFilterProxyModel() modelProxy.setSourceModel(modelSource) #setup ui self.resize(900, 700) tableView = QtWidgets.QTableView() tableView.setSortingEnabled(True) tableView.setModel(modelProxy) self.setCentralWidget(tableView) if __name__ == "__main__": import sys app = QtWidgets.QApplication(sys.argv) f = FormController() f.showNormal() sys.exit(app.exec_())
Edit1:
One work around to speed up the sorting is to simply re-query the database and request a sorted result set. The drawback however would be increased data usage for mobile users...
Okay why are you sorting them in code rather than via the Database engine? As it is designed (or can be designed) to do that extremely efficiently
Conversely if you do not know what sort order you will be using then instead of sorting them build a sortable index based on the the things you might need to resort the list on this reduces the amount of data the program has to manipulate and/or you can create these indexes (in the background) while presenting the current list to the user.
You right, I can do the initial/default sorting when loading the query as you suggested, but I still want the users to customize the sorting by clicking on the column headers should he/she wish to do so. Also I thought this is what QSortFilterProxyModel was designed for...
How do you go about building a simple sortable index in qt? Are you referring to caching the data in something like a 2d-list and sorting that instead then afterwards updating the view? (Sorry, I did some searching on the net and could not find any reference to sortable indexes in qt)
Well yeah a sort of 2'D list but de-referenced perhaps it all greatly depends on the data you are working with. Just keep in mind that computers love numbers and tolerate everything else by viewing them as special numbers. So anytime you can reduce what you are doing to just numbers (including sorting) you are going to see improvements in speed -- as there is no overhead necessary that would come along with any "special" number.
Still even if you did not de-reference it you could build an Ordered Dictionary with the sorted column as Key and the Key to the Dictionary the table data is stored in being the value. Aka {SortedKey : IndexKey} then all you have to do is iterate through your SortedDict.keys() and reference the FullData[IndexKey] to get the data you need to display.
Further you could, as I stated pass this off to say a worker thread to implement, give it the full list and have it create the various sorted dictionaries, while you are displaying the current one - then while waiting on the User to do something -- the worker thread takes that initial data and creates the various sorted dictionaries that you might need to implement for what the user might want to sort on (or what you allow them to sort on) implemented in most likely to least likely order perhaps.
Of course had the Sort Filter Proxy been quick enough this would not have been necessary but when it comes to speed sometimes simple things can make a world of difference. Lastly I am not saying this is the best answer to this situation it is just an answer. You might find a better one but if not at least you have something to move forward with.
Heck I just learned today that the documentation on how to use a QThread is all wrong and you should not sub-class the dang thing -- can you say back to the design board.
Seems
Yes that is generally the trade off -- if you want more speed on the client you have to use more memory (which btw is fairly cheap these days or so they tell me) -- however if you want to use less memory it generally means the program will run slower -- and this trade off has been around for ages ... where it used to be memory was in short supply so speed suffered but now we can speed things up dramatically by just using more memory -- note there were programming tricks and back-flips you could perform in order to speed up code when memory was tight but most of these rather complex methodologies have given way to simply throwing more memory at it. So good luck and always K.I.S.S. it (Keep It Simple and Smart) | https://forum.qt.io/topic/106291/pyqt5-very-slow-qsortfilterproxymodel-not-customized/6 | CC-MAIN-2019-39 | refinedweb | 862 | 57 |
Data mining part three-text classification
Text classification generally includes 8 steps. Data exploration and analysis-"data extraction-" text preprocessing-"word segmentation-" removal of stop words-"text vectorized representation-" classifier-"model evaluation. Important python libraries include numpy (array), pandas (for processing Structured data), matplotlib (drawing word cloud for easy visual representation), sklearn (providing a large library of classification and clustering algorithms).
1. Data exploration and analysis
(1) Obtain a large number of unprocessed documents, and mark the type of documents.
(2) Assign a unique Id to each document, and replace the classification categories previously marked with text with discrete numbers. For example, the classification is marked as ['normal message', 'spam message'], and its discrete representation is [0,1].
(3) Read the documents into an array with Id, document content, and tags as columns and the number of samples as rows. The form is: [[Id1, content1, label1], …, [Id_n, content_n, label_n]]
Code example:
import pandas as pd
data = pd.read_csv (csvfile name, header = None) # read csv file, not column names
data.columns = ['Id', 'Content', 'Label']
Some methods to get data in 1.1DataFrame:
- data.loc [] # Get the specified row and column data by string index. For example:
data.loc [0: 2, 'content'] # Get the data in the content column of rows 0,1,2, [Note]: 0: 2 gets 0,1,2 rows, this is the same as general slicing Not the same
data.loc [[0,2], ['content', 'label']] # Specify rows and columns by list
- data.iloc [] # By numerical indexing, the usage is exactly the same as the array
data ['label'] # Get the data of the label column, the result is a one-dimensional array
data [['content', 'label']] # The result is content, all data in the label column
1.2 Count the frequency of different labels and draw a pie chart
data ['label']. value_counts () # Get the number of occurrences of each label in the label column of data, and the result is returned as a series
1.2.1 Draw a pie chart
num = data ['label']. value_counts ()
import matplotlib.pyplot as plt
plt.figure (figsize = (3,3)) # set the canvas to a 3 * 3 square
plt.pie (num, labels = ['normal', 'junk']) # draw a pie chart, num is a series, series is an indexed array, similar to the use of a dictionary.
plt.show ()
2. Data extraction
When the proportion of different labels is unbalanced, stratified sampling is needed. For example, 0 labels appear 72,000 times and 1 label appears 8000 times. At this time, the problem of model laziness will occur.
data_normal = data.loc [data ['label'] == 1] .sample (1000, random_state = 123) #Select 1000 random samples from all the data with label 1
data_bad = data.loc [data ['label'] == 0] .sample (1000, random_state = 123) #Select 1000 random samples from all data with label 0
data_new = pd.contat ([data_normal, data_bad], axis = 0) # The default row is spliced, so axis does not need to be written
3. Text preprocessing
As shown in the following figure, the content item contains xxx, some special coded characters, and punctuation marks such as commas and periods. These things are meaningless characters and need to be deleted.
To delete these special non-Chinese characters, regular expressions are required. Regular expressions are an essential knowledge point in crawlers. They are a conditional expression. The rules specified by the constructed conditional expression are in a specified string. The search matches the sentences that match the rules.
import re
afterDeleteSpecialWord = data_new ['content']. apply (lambda x: re.sub ('[^ \ u4E00- \ u9FD5] +', '', string))
Here apply means to execute this anonymous function x for each element in the series array (ie the content string of the document), where string is the parameter passed in, and re.sub means that the regular expression specified before "[^ \ u4E00- \ u9FD5] + 'matches strings (that is, non-Chinese special characters) with' 'instead. Here's the regular expression '[^ \ u4E00- \ u9FD5] +':
[] Is an atomic list, ^ means non, \ u4E00- \ u9FD5 Chinese character regular expression, plus ^ before it means non-Chinese characters, [] + means that the characters in this atom list can be matched one or more times. There are many online resources for the usage of specific regular expressions, which are not explained in detail here.
After processing, the punctuation marks and special characters are gone, as shown below:
4. Segmentation, remove stop words
The first step is to segment the content in the previous content. The element in the content column after the segmentation is a list. For example, the element in the previous content column 'I came to the School of Computer Science and Technology of Tsinghua University in Beijing.' 'I', 'Come', 'Beijing', 'Tsinghua University', 'Computer', 'College']
The second step is to remove stop words, first load the stop word file, which stores N stop words, and then remove the word segmentation results in the stop word list in the first step.
code show as below:
import jieba # thesaurus
with open ('stopList.txt', 'r') as f:
stop = f.read () # The result obtained is a large string with special characters such as newlines in it
stop = stop.split () # split by spaces and newlines to get a list of stop words
stop = [''] + stop # Since there was no space in the previous stop, and the space is a stop word, add the space back
jieba.load_userdic (path) # load the user-defined dictionary in the specified path path
after_segement = afterDeleteSpecialWord.apply (jieba.lcut) # segmentation
data_after = after_segement.apply (lambda x: [i for i in x if i not in stop]) # remove stop words
4.1 Draw Word Cloud
Drawing a word cloud is an intuitive image representation of word frequency in text classification. It is presented in the form of an image. Words with high frequencies have larger fonts, less frequent, and smaller fonts.
import matplotlib.pyplot as plt # drawing tools
from wordcloud import WordCloud # word cloud tool library
import itertools # Compress two-dimensional data into one-dimensional data
pic = plt.imread (picturePath) # Here picturePath is the path of the specific picture, which is not specified here. This line of code is a background picture that loads the artboard
'' '
wc = WordCloud (font_path = r'C: \ Windows \ Fonts \ font name ', background_color =' white ', mask = pic) # Generate a word cloud object, the fonts in the windows system are stored in the C folder of the Windows folder Fonts folder. Because the statistics here are all Chinese, do not choose English fonts, but select Chinese fonts, right-click, properties, as shown in the figure, for the specific font name '' '
num = pd.Series (list (itertools.chain (* list (data_after)))). value_counts () # statistics word frequency
wc.fit_words (num) # put the counted word frequency in
plt.imshow (wc)
plt.show ()
Vectorized representation of text
The meaning of text vectorized representation is: Because what we currently get is a segmentation result, which is Chinese, and the computer cannot directly use Chinese as input data for classification, it must be represented by numbers. Then how to use a digitalized document What about vector representation? This is text vectorization.
Commonly used vectorized representations are bag-of-words model, word frequency, TF-IDF, and context-embedded word embeddings.
The bag-of-words model means that words appearing in a document are set to 1 and other words in the thesaurus are set to 0, regardless of the number of occurrences. A document can be represented as an N-dimensional 0,1 vector. The size of N depends on the size of the thesaurus.
Word frequency: Based on the bag-of-words model, consider the number of times a word appears, instead of 1 whenever it appears.
TF-IDF: Consider the word frequency of a word and the inverse document frequency. The inverse document frequency refers to the rarity of the word in all documents. The fewer the number of documents in which the word appears in all documents, the more rare the word is The higher the frequency, the higher the inverse document frequency. The inverse document frequency = log (number of all documents / (number of documents in which the word appears + 1)), and TF-IDF = TF * IDF.
There is CountVectorizer in the feature_extraction.text package in sklearn. TfidfVectorizer can directly do text vectorization, one is based on word frequency, and the other is TF-IDF.
tmp = data_after.apply (lambda x: '' .join (x)) # Since the vectorization tool developed by Google only supports counting by spaces, the words stored in the previous list need to be converted into a large space separated by spaces. String.
cv = CountVectorizer (). fit (tmp) # load dictionary and text data that needs to be vectorized
vector_data = cv.transform (tmp) # vectorization, the result is an iterator
vector_array = vector_data.toarray () # turn the iterator into an array
Text Categorization
The next steps are exactly the same as general machine learning classification problems, so I won't introduce more. You have obtained a vector_array of structured data and the corresponding labels, which can be used for training, testing, model evaluation, etc. with various training models of sklearn. | http://www.itworkman.com/76308.html | CC-MAIN-2022-21 | refinedweb | 1,470 | 55.54 |
Arduino + Servo + openCV Tutorial [openFrameworks]
.
Many of you are probably familiar with at least the basics of the oF and Arduino projects, but for anyone not familiar here’s the briefest of brief explanations: oF is a C++ library for creative coding that helps artists and designers quickly create powerful applications for Windows, Linux, or OSX. Arduino is both a library and a microprocessor platform for prototyping and tinkering with microprocessors and embedded computing that simplifies the process of connecting and programming a fantastic range of components and electronic devices. Connecting the two of them allows you to create applications that not only use processing-heavy libraries like the OpenCV library for Computer Vision (more on that later) but also connect to and communicate with electronics out in the physical world. In this tutorial we’ll be using motion tracking in an openFrameworks application to control servo motors connected to an Arduino board.
Arduino
The Arduino is a few different things: an IDE, a library for embedded computing, and a microprocessor and that can be a little confusing because people often use the terms interchangeably but after a little experience you’ll see how they all work together.
For this tutorial you’ll need a few things:
1 x Arduino-compatible device
1 x Trossen Servokit
1 x USB cable
1 x Breadboard and wires to connect the Servos to the Arduino
You’ll also need to have the Arduino IDE installed on your computer as well as an IDE and the openFrameworks libraries to work with openFrameworks. You can get the Arduino IDE here and oF here
Serial Communication
Communication between an Arduino board and an openFrameworks application is conducted through the Serial port. The most common setup looks like this: an Arduino gets connected to a computer via a USB cable. The Arduino application starts listening for data at a certain baud rate (read: speed) and the openFrameworks starts listening on a certain port (since a computer has multiple) and at a certain baud rate. Voila, your Arduino is talking to your computer.
Arduino and servo-motors
A servo is a type of motor but with a few added advantages. Most important:the pulse of a signal to determine how far to rotate within its range. The most common range for a servo motor is 180 degrees but there are other types as well and the servo can move from any point in that range to any other point clockwise or counterclockwise, making it a powerful tool for controlling anything that needs a controllable, repeatable movement. Robots of all kinds, puppets, factory machinery of all kinds, all make extensive use of servos to function.
To connect a servo motor to the Arduino controller, you need to provide the servo with power from the 5V pin on the Arduino controller, to ground the servo, and to send commands to the servo PWM port.). You should be aware that there is no standard for the exact relationship between pulse width and position; different servos may need slightly longer or shorter pulses.
Servo motors have three wires: power, ground, and signal, as Figure 11-8.
Now, let’s take a look at the code involved:
[sourcecode language="cpp"]
#include <Servo.h>
[/sourcecode]
Here are the two instances of the Servo object that will control each servo motors:
[sourcecode language="cpp"]
Servo vert;
Servo horz;
byte xy[2]; // this array is what the Serial data will be written to
[/sourcecode]
To start controlling the servo, simply call the attach() method of the Servo object:
[sourcecode language="cpp"]
void setup() {
horz.attach(3);
vert.attach(5);
Serial.begin(9600); // start the communication between the oF app and the Arduino
}
[/sourcecode]
In the loop(), simply check for any data on the Serial port, and if anything has arrived, use it to set the positions of the servos with the write() method.
[sourcecode language="cpp"]
void loop() {
if(Serial.available() > 1) {
xy[0] = Serial.read();
xy[1] = Serial.read();
// now that we have the xy we can go ahead
// and write them to the serial
float ratio = 180.0/255.0;
horz.write(xy[0] * ratio);
vert.write(xy[1] * ratio);
delay(20); // make sure we give it a second to move
}
}
[/sourcecode]
openFrameworks and OpenCV
As mentioned earlier, openFrameworks is a C++ library for creative coding and that means that in order to get started working with it, you’ll need a way to compile your application for your computer and operating system. If you take a look at openframeworks.cc/downloads you’ll find a download for your operating system and chosen IDE as well as instructions on how to get started. For this tutorial, you’ll want to get the FAT download. That is, the download that contains all the additional addons, including the OpenCV library that we’ll be using. If you run into any problems the community on the forums at openframeworks.cc/forums are incredibly helpful. You might want to take a moment to get set up if you’re not already.
A team of developers at Intel headed by Gary Bradski started developing OpenCV in 1999. In 2007, Intel released version 1.0 as an open source project. Since it’s an open source library, it’s free to download, and you can use and modify it for your own purposes. Some of the core features that it provides are an interface to a camera, object detection and tracking, and face and gesture recognition. The library itself is written in C and is a little tricky to get started using right away without another library or frame- work to interface with it. Luckily, Stefan Hechenberger and Zachary Lieberman developed the ofxOpenCV add-on for openFrameworks to help anyone working with oF use OpenCV easily in their projects.
Movement tracking in OpenCV is done through what is commonly called a blob, which is a data object that represents a contiguous area of a frame. Blobs can be all sorts of different things: hands, faces, a flashlight, a ball. Blobs are usually identified by the contrast caused by the difference between adjoining pixels. A group of adjoining pixels that have the same relationship to another group of adjoining pixels, such as an area of darker pixels that is bordered by an area of whiter pixels, usually indicates a specific object. To do that in your application you retrieve an image from the camera and pass it to an object called a contourFinder that looks for contours and then tries to create blobs from those. The image below shows an image passed to a ContourFinder instance and the detected contours:
Now we’re ready to look at some code. I’m only going to highlight certain areas of the code, since everything else is available in the download for this tutorial. Like the Arduino application every oF applicaiton has a setup() method that is called when the it starts up. In our case, we want to set up the camera that we’ll be using to find blobs and start the communication between the Arduino and oF application:
[sourcecode language="cpp"]
void servoApp::setup() {
threshold = 60;
bLearnBakground = true;
[/sourcecode]
The enumerateDevices() method prints out all of the devices attached to your computer and allows you to see what your oF application thinks is attached. This is great way to avoid typos in the device name, which can be a little unfriendly at times.
[sourcecode language="cpp"]
serial.enumerateDevices();
//serial.setup("COM4"); // windows will look something like this.
serial.setup("/dev/cu.usbserial-A9007KHo",9600); // mac osx looks like this
//serial.setup("/dev/ttyUSB0", 9600); //linux looks like this
[/sourcecode]
In the final video I’m using two cameras: one to read to do motion tracking and one attached to my servo kit to show the movement of the servos. That’s why you see vidGrabber and vidGrabber2. If you’re not using two cameras, feel free to comment all the code for vidGrabber2 as it isn’t doing anything vital to motion tracking or communicating with the Arduino.
[sourcecode language="cpp"]
vidGrabber.setDeviceID(5); // this isn’t necessary if you’re only using one camera
vidGrabber.initGrabber( 320, 240 );
colorImg.allocate( 320, 240 );
grayImg.allocate( 320, 240 );
bgImg.allocate( 320, 240 );
// here’s the setup for the second camera
vidGrabber2.setDeviceID(4); // this isn’t necessary if you’re only using one camera
vidGrabber2.initGrabber(320, 240, true);
}
[/sourcecode]
Like the Arduino applications’ loop() method, an oF application has an update() method that is called every frame. This is where you want to do any data processing or reading from or writing to peripheral devices like cameras or Arduinos. The grabFrame() method of the videoGrabber is what tells the underlying operating system to get a new frame from the camera and store it in memory so that it can be analyzed and displayed.
[sourcecode]
void servoApp::update() {
vidGrabber.grabFrame();
vidGrabber2.grabFrame();
int i, len, largest;
[/sourcecode]
Usually you’ll want to check whether the frame is new since the application might be operating at a slightly different speed than the device driver for your camera. That’s another thing that oF makes very easy, with the aptly named isFrameNew() method. Another thing worth pointing out is that the images are all converted to grayscale before they’re passed to the contourFinder. This is because grayscale images are inherently smaller than color images, black/white requiring less data to represent a pixel than Red/Green/Blue.
[sourcecode language="cpp"]
if( vidGrabber.isFrameNew() ) {
colorImg = vidGrabber.getPixels();
colorImg.mirror( false, true );
grayImg = colorImg;
if( bLearnBakground ) {
bgImg = grayImg;
bLearnBakground = false;
}
grayImg.absDiff( bgImg );
grayImg.blur( 11 );
grayImg.threshold( threshold );
[/sourcecode]
Now, after converting the image to greyscale it gets passed to the contourFinder to process the image and see whether there are any blobs. The parameters for the findContours() method are:
image – the image to process
minSize – the smallest size of a blob
maxSize – the largest size of a blob
numberOfBlobs – the number of blobs to look for
findHoles – whether to look for holes in the blobs or not
[sourcecode language="cpp"]
contourFinder.findContours( grayImg, 50, 20000, 10, false );
largest = -1;
i = 0;
[/sourcecode]
Now that the contours have been processed we can try to figure out which is the largest since that’s probably the one we’re interested in.
[sourcecode language="cpp"]
if(contourFinder.blobs.size() > 0) {
int len = contourFinder.blobs.size() – 1;
while(i<len){
if(largest == -1) {
largest = i;
} else if(contourFinder.blobs.at(i).area > contourFinder.blobs.at(largest).area) {
largest = i;
}
i++;
}
[/sourcecode]
This is important: we don’t want to send data to the serial port on every frame because the Arduino runs much more slowly than the oF app. The ofGetFrameNum method gives the number of frames that the application has been running. The modulo operator (%) returns the remaineder of the number on the left when divided by the number on the right. In this case, the net effect of this if() statement is to only write data to the serial once every 4 frames, or about 15 frames a second, which is about as much as the Arduino can process.
[sourcecode language="cpp"]
if(ofGetFrameNum() % 4 == 0) {
if(largest != -1) {
serial.writeByte(contourFinder.blobs.at(largest).centroid.y/240 * 255);
serial.writeByte(contourFinder.blobs.at(largest).centroid.x/320 * 255);
}
}
}
[/sourcecode]
The rest of the code is fairly straightforward and includes comments to help you understand what’s going on.
You can download the code for both the Arduino and oF apps at:
This short video shows how the servos are connected to the Arduino board and how you might use the code:
Joshua Noble is a writer, designer, and programmer based in Portland, Oregon and New York City. He’s the author of, most recently, Programming Interactivity and the forthcoming book Research for Living.
Posted on: 28/06/2010
Posted in: Featured, openFrameworks, Tutorials
Post tags: arduino c++ howto joshuanoble learning openFrameworks servo tutorial
-
- Marcus Åslund
- joshua noble
- Sermad
- Electronics Is Fun
- Sharonvella401
- Ouazar Yahia
- wofty
- Abishek Hariharan
- Jonathan Mercieca
- Franck BOUYON
- Abishek Hariharan
- Valentin_dr
- Jonathan
- Jonathan
- Conoco75 | http://www.creativeapplications.net/tutorials/arduino-servo-opencv-tutorial-openframeworks/ | CC-MAIN-2013-48 | refinedweb | 2,023 | 51.07 |
//Program checks employment status, if part time, NO VACATION, if full time, program goes into if/else nest and asks for number of years working at company. More than five years,full time employee granted 3weeks of vacation. Five years or less, full time employee is granted 2weeks of vacation.
#include <iostream> using namespace std; char empTime=' '; int years=0; int main() { cout<<"Employment status? (F=fulltime, P=parttime)"; cin>>empTime; cin.ignore(100,'\n'); [B]//(1)[/B] empTime=toupper(empTime); if(empTime=='F') { cout<<"Years working at company XYZ?"; cin>>years; if(years>5) { cout<<"Vacation weeks = 3"; } else if(years<=5 && years>=0){ [B]//(2)[/B] cout<<"Vacation weeks = 2"; } else{cout<<"Invalid input";} } else if(empTime=='P'){ cout<<"Vacation weeks = 0"; } else{cout<<"Invalid input";} cin.get(); }
(1) I put ignore function in b/c I thought clearing the buffer may help..but I don't think char data type needs clearing..does it?
MAIN PROBLEM:
(2)This is where I suspect the problem is...when I run the program, I input 'F', and then when the screen displays number of years I punch in a bunch of random charcters ex "dhsdkhfebs" [enter] --> and right away it displays "vacation weeks = 2", but I was hoping it would go to "invalid input".
When I put a negative number in..it goes to "Invalid input"...but not charcters..
Any help would be appreciated. Please note that I am only a week in learning c++ and am using a book from 2003, so if my syntax is either too formal, off, retarded in anyway please correct me. Also if I am using unneeded functions or anything else that's bloated please tell me, better I fix bad habits now than later.
P.S.
_btw I use cin.get(); so user has to press [enter] before the program returns 0.
_using codeblocks as my compiler.
Edited by WaltP: Added CODE Tags | https://www.daniweb.com/programming/software-development/threads/282209/basic-c-program-not-working-right | CC-MAIN-2018-39 | refinedweb | 320 | 66.74 |
First of all, hi everyone!
Im a C++ newbie and i'd like to get a good grasp of the concepts soon.
I'm currently working on this
#include <iostream> using namespace std; int main() { int i, j, lines; cout << "This Program creates Christmas Trees." << endl; do { cout << "Please enter a number between 1 and 30" << endl; cin >> lines; } while (lines < 1 || lines > 30); for(j = 1; j <= lines*2; j=j+2) { for(i=j; i <= 80 ; i=i+2) { cout << " "; } for(i = 1; i <= j; i++) { cout << "*"; } cout << endl; } system ("pause"); return 0; }
I understand everything except for the most important parts-- the for loops.
My textbook does a really poor job explaining them, and i am confused as what the int i, j mean
and how the loops work together.
This code asks the user to input a number between 1 and 30 and it creates a perfect triangle of asterisks (or a "Tree").
My problem is that like I said, I don't understand how the tree is done, or why you need the equations in the loops.
I'd greatly appreciate if someone could help me better understand these concepts!
Thanks a lot =] | https://www.daniweb.com/programming/software-development/threads/344382/hi-need-some-help-here | CC-MAIN-2017-43 | refinedweb | 198 | 75.95 |
Add a (non-static) method to the Person class that adds..."insert string or integer change"...to Person object. The name of this method should be birthday()/phd(). The return type is void. The method has no parameters.
I am getting the following error after I have finished coding and not sure what to adjust as I have moved stuff around to no avail. Any advice? Below is the Error recieved for lines 16 and 17.
PersonCommand.java:24: error: non-static method phd() cannot be referenced from a static context
PersonCommand.java:24: error: non-static method birthday() cannot be referenced from a static context
import javax.swing.JOptionPane; public class PersonCommand { public static void main(String[ ] args) { int age1 = Integer.parseInt(args[1]); //creates one person objects Person person1 = new Person(args[0], age1); //Prints the first JPane with relative data fields String display1 = person1.toString(); JOptionPane.showMessageDialog(null, display1); //Calls phd and birthday method Person.birthday();//Calls birthday method Person.phd();//Calls phd method JOptionPane.showMessageDialog(null, display1); }//end of class } /*Stores the name and age of the person*/ class Person{ //data fields that store each object's data private String name; private int age; /** Constructor - Used To Create EAch Object & Initialize DAta Fields. * @param n1 is the Persons name * @param a1 is the Persons age*/ public Person(String n1, int a1){ name = n1; age = a1; } //Used to Display The Data Stored In EAch Object's DAta Field. public String toString(){ String personString = name + " is " + age + " years old."; return personString; } //Method used to display age public void phd(){ String n1 = "Dr. " + name; return; } //Method used to display age public void birthday(){ Integer a1 = age + 1; return; } } | http://www.dreamincode.net/forums/topic/294835-error-non-static-method-cannot-be-referenced-from-static-context/page__p__1718878 | CC-MAIN-2013-20 | refinedweb | 279 | 58.79 |
{- | Control.Pipeline ( -- * Pipeline Pipeline, newPipeline, send, call, close, isClosed ) where import Control.Monad.Throw (onException) import Control.Monad.Error import Control.Concurrent (ThreadId, forkIO, killThread) import GHC.Conc (ThreadStatus(..), threadStatus) import Control.Monad.MVar import Control.Concurrent.Chan import Network.Abstract (IOE) import qualified Network.Abstract as C -- * Pipeline -- | Thread-safe and pipelined connection data Pipeline i o = Pipeline { vConn :: MVar (C.Connection i o), -- ^ Mutex on handle, so only one thread at a time can write to it responseQueue :: Chan (MVar (Either IOError o)), -- ^ Queue of threads waiting for responses. Every time a response arrive we pop the next thread and give it the response. listenThread :: ThreadId } -- | Create new Pipeline on given connection. You should 'close' pipeline when finished, which will also close connection. If pipeline is not closed but eventually garbage collected, it will be closed along with connection. newPipeline :: (MonadIO m) => C.Connection i o -> m (Pipeline i o) newPipeline conn = liftIO $ do vConn <- newMVar conn responseQueue <- newChan rec let pipe = Pipeline{..} listenThread <- forkIO (listen pipe) addMVarFinalizer vConn $ do killThread listenThread C.close conn return pipe close :: (MonadIO m) => Pipeline i o -> m () -- | Close pipe and underlying connection close Pipeline{..} = liftIO $ do killThread listenThread C.close =<< readMVar vConn isClosed :: (MonadIO m) => Pipeline i o -> m Bool isClosed Pipeline{listenThread} = lift conn <- readMVar vConn forever $ do e <- runErrorT $ C.receive conn var <- readChan responseQueue putMVar var e case e of Left err -> C.close conn >> ioError err -- close and stop looping Right _ -> return () send :: Pipeline i o -> i -> IOE () -- ^ Send message to destination; the destination must not response (otherwise future 'call's will get these responses instead of their own). -- Throw IOError and close pipeline if send fails send p@Pipeline{..} message = withMVar vConn (flip C.send message) `onException` \(_ :: IOError) -> close p call :: Pipeline i o -> i -> IOE (IOE o) -- ^Conn doCall `onException` \(_ :: IOError) -> close p where doCall conn = do C.send conn message var <- newEmptyMVar liftIO $ writeChan responseQueue var return $ ErrorT (readMVar var) -- return. -} | http://hackage.haskell.org/package/mongoDB-0.9.1/docs/src/Control-Pipeline.html | CC-MAIN-2017-30 | refinedweb | 330 | 59.6 |
IRC log of tagmem on 2006-03-03
Timestamps are in UTC.
13:00:18 [RRSAgent]
RRSAgent has joined #tagmem
13:00:18 [RRSAgent]
logging to
13:00:41 [timbl]
Vincent: I had a discussion with Mark Baker, and confirmed that we would be happy for him to join a teleconf to help us address the issue he has over End Point References. let us invite him March 21 or 28.
13:00:43 [Ed]
Meeting: Tag F2f 3, March 2006
13:00:51 [Ed]
Chair: Vincent
13:00:57 [Ed]
Scribe: Timbl
13:01:06 [timbl]
ACTION Vincent: Negotiate date of his attendance with Make Baker
13:01:15 [timbl]
Topic: XMLVersioning-41
13:02:46 [timbl]
David: I thought I would step back a bit.
13:03:03 [noah]
noah has joined #tagmem
13:03:06 [DanC_lap]
(pointer to "part one"?)
13:03:16 [timbl]
... We are looking at 2 documents: part1, which tlaks about versioning XML, talking about design decisions. Terminology section, motivation, stragegy.
13:03:29 [Vincent]
Vincent has joined #tagmem
13:03:30 [timbl]
... Part 2 has XMLSchema sepcific stuff.
13:03:55 [DanC_lap]
->
Ext/Vers terminology with generic/xml split
13:03:59 [tvraman]
tvraman has joined #tagmem
13:04:21 [timbl]
... This has ben out there for a year or so. After discussion wih Noah etc, and thing abiout how these thinga are actually done, iot seems there is a more interesting story to be told about language versioning.
13:04:48 [timbl]
... Talking about different parts of the system which may operate on XML; and also the system above XML.
13:05:27 [noah]
Do we have links to Dave's most current drafts on XML Versioning? We should probably include those in the minutes. I think he prepared some just before the Cambridge December 2004 F2F.
13:05:51 [timbl]
... I was using hReview over the weekend. Found that these people have names of things like FN for full name and I wondered how they are going to deal with versioning.
13:06:05 [timbl]
... Are we going to have several layers on the onion? I would like 2 more l;ayers:
13:06:23 [noah]
Ah...I think Dave's latest drafts are at
, unless I've missed a newer one.
13:06:29 [DanC_lap]
q+ to noodle on the idea that the end-game for ext+vers is Web Arch Vol 2: Language Evolution and the Self-Describing Web: extensibility, versioning, composition [and software installation]
13:07:22 [DanC_lap]
the issues list agrees with you, noah.,
13:07:36 [timbl]
....]
13:08:43 [timbl]
2) Talk about language evolution in general. We have a story about Markup exten'y, but I'd like also to talk about for example µFormats. I don't know whether to do CSS eveolution and URI evolutoin as well... maybe too far togo.
13:09:14 [DanC_lap]
(css versioning is a permathread, internally, in discussion of validator.w3.org )
13:09:49 [timbl]
...'?
13:10:08 [timbl]
@@@ Link to Daves's blg, someone?
13:10:27 [Vincent]
ack danc
13:10:27 [Zakim]
DanC_lap, you wanted to noodle on the idea that the end-game for ext+vers is Web Arch Vol 2: Language Evolution and the Self-Describing Web: extensibility, versioning, composition
13:10:30 [Zakim]
... [and software installation]
13:10:38 [timbl]
... So let us discuss whether this is a workable method of looking at this.
13:10:43 [noah]
q+ to ask whether I'm looking at the latest draft
13:11:09 [timbl]
DanC: What is the endgame? For many things, it is WebArch Volume 2 ... though who onws how we slice up the picees in the meantime.
13:12:04 [timbl]
DanC: Re CSS versioning, we have a CSS versioning, there is a problem that there is no version indication in CSS and the validator can't work out what it should be.
13:12:37 [DanC_lap]
q+ to ask VQ to take an action to write up "lack of CSS versioning syntax is trouble" or "on versioning in CSS"
13:12:49 [Norm]
q+ to observe the XSLT case
13:12:50 [timbl]
Ramin: One of the things which has happened in W3C is that every slightly new version of a document gets a new namespace, even for a new namespace. I think tyhis is a mistake.
13:12:59 [DanC_lap]
s/Ramin/Raman/
13:13:39 [timbl]
If you have to namespaces, you have ns1:section and ns2:section the spec of section has not changed at all, but it has a totally new name.
13:13:46 [DanC_lap]
q+ to relay a concern about SMIL versioning
13:14:10 [timbl]
... I conclude this is too chaotic.
13:14:50 [timbl]
...(... Multimodal working group for example? Maybe voice browser?)
13:15:21 [timbl]
... In XForms, we had an experimental namespace but we are thinking of graduating the working bits t the original namespace.
13:15:24 [dorchard]
dorchard has joined #tagmem
13:15:31 [Norm]
The XSL and XML Query WGs used this technique for evolving versions of several XPath 2/XQuery namespaces
13:15:43 [timbl]
... Look at SVG. SVG 1.0, 1.1, 1.2 and tiny each have different namespaces!
13:15:49 [DanC_lap]
q+ to say that I'm not persuaded that XHTML 2 merits a separate REC-level namespace from XHTML 1
13:16:02 [timbl]
... Graceful failover is really important.
13:16:11 [timbl]
q?
13:16:47 [timbl]
Dave: I have been trying to get the design decisions clearer in the document.
13:17:10 [DanC_lap]
q+ to argue that people making new languages shouldn't be obliged to give their versioning strategy because they don't have sufficient experience yet
13:17:47 [Vincent]
ack noah
13:17:47 [Zakim]
noah, you wanted to ask whether I'm looking at the latest draft
13:17:51 [timbl]
Raman: Make the same mistakes consistently over W3C at least ;-)
13:18:44 [DanC_lap]
(whee! life without CVS)
13:18:50 [JacekK]
JacekK has joined #tagmem
13:19:23 [timbl]
Noah: Where is the latest draft? My link above is to the latest version I found.
13:19:31 [noah]
13:19:36 [timbl]
VQ: That is linked from the findings page.
13:21:57 [timbl]
[discussion about the fact that the latest version is not the one in CVS and linked from the Findings page, Agenda, etc.]
13:22:51 [DanC_lap]
->
Ext/Vers terminology with generic/xml split
13:23:05 [timbl]
q?
13:23:05 [Vincent]
ack danc
13:23:06 [Zakim]
DanC_lap, you wanted to ask VQ to take an action to write up "lack of CSS versioning syntax is trouble" or "on versioning in CSS" and to relay a concern about SMIL versioning and
13:23:08 [DanC_lap]
ack danc
13:23:11 [Zakim]
... to say that I'm not persuaded that XHTML 2 merits a separate REC-level namespace from XHTML 1 and to argue that people making new languages shouldn't be obliged to give their
13:23:15 [Zakim]
... versioning strategy because they don't have sufficient experience yet
13:23:41 [timbl]
DanC: I'm not persuaded that XHTML 2 merits a separate REC-level namespace from XHTML 1 and to argue that people making new languages shouldn't be obliged to give their
13:24:14 [timbl]
ACTION Vincent: Write to www-tag about CSS versioning being a problem "levels"
13:24:33 [noah]
q?
13:24:54 [timbl]
DanC: Someone told me that the 3 smil profiles each have their own namespaces and this will be a disaster. Does the finding say that is a problem:
13:25:20 [timbl]
Dave: I didn't come up with a hard rule for making a decision.
13:26:01 [timbl]
ACTION DanC: Look at the document and see if it is good for informing on this SMIL problem of multiple namespaces.
13:26:31 [timbl]
DanC: I am not sure whether versioning tehcnology in the technology can be done without the xperience of having done veriosn 1 and 2 in version3
13:26:55 [timbl]
David: You actually make decisions about versioning in your langauge whgether you like it or not
13:28:41 [dorchard]
q+
13:28:52 [timbl]
13:28:56 [timbl]
q+
13:28:58 [Norm]
ack Norm
13:28:58 [Zakim]
Norm, you wanted to observe the XSLT case
13:29:03 [Vincent]
ack norm
13:29:11 [noah]
q+ To discuss namespaces a bit
13:29:22 [timbl]
Norm: Xslt2.0 uses the same NS as XSLT1.0, and so does docbook
13:29:53 [timbl]
... There is a whole matrix of what happens with version n document being processed by version m agent.
13:30:08 [timbl]
DanC: Please do CR test cases for that
13:30:17 [timbl]
Norm: [nods with a smile]
13:30:27 [Vincent]
ack dorchard
13:30:41 [timbl]
Raman: Can we do examples?
13:31:39 [DanC_lap]
->
8.1 Version Strategy: all components in new namespace(s) for each version (#1)
13:32:59 [noah]
q?
13:33:19 [Vincent]
ack timbl
13:33:52 [DanC_lap]
(at home, I don't ask "where does this spatula belong?" becuase it might not have an established home yet. I ask "where would you look for this?" which pretty reliably produces an answer. ;-)
13:33:54 [noah]
TBL: We could now look at where people have made different namespaces.
13:34:06 [noah]
TBL: We could see whether this has caused problems for software.
13:34:41 [noah]
TBL: We could also look(?) whether there is generic metadata to express that one namespace is "compatible" with Edinburgh sense with each other?
13:34:46 [noah]
DC: I've done some of that/
13:34:53 [noah]
TBL: In RDF?
13:35:06 [noah]
DC: Yes, for 3 standard W3C ???? (types of compatibility?)
13:35:14 [noah]
DO: We should incorporate in finding?
13:36:10 [timbl]
Tim: [missed]
13:37:11 [DanC_lap]
tv+ tvraman
13:37:15 [DanC_lap]
q+ tvraman
13:37:19 [Vincent]
ack noah
13:37:19 [Zakim]
noah, you wanted to discuss namespaces a bit
13:37:30 [timbl]
Tim: Should we look at generic dispatch code which can tranform an XML doc from one new namepsace into an old one using metadat, and then feed it into an old version client?
13:38:12 [tvraman]
q+ to add that today browsers kick off an impl based on namespace, and this was especially true of IE in switching between html and xhtml. But that shouldn't be the default pattern for handling multiple versions of a language
13:38:19 [timbl]
Noah: It is easy where there is a clear mapping between betwen different mappings. Not sure there is an easy 89/20 situation.
13:38:37 [timbl]
Tim: I was only talking about total back-compatibility.
13:38:44 [DanC_lap]
re RDF vocabulary for "backward compatible", see
which defines #stableSpecification etc. that formalize the 3 options in
13:38:54 [timbl]
Noah: Namespaces are for preventing name clashes.
13:39:42 [timbl]
... in XML Schema, we debated how many namespaces, and felt one namespace per document might make sense. Or should it be one namespace per language?
13:39:46 [dorchard]
q+ to respond that schema did make some decisions about multi-ns wrt extensibility..
13:40:04 [timbl]
... When you start doing one namespace per version, you end up renaming things which have not changed.
13:41:08 [timbl]
... When people write an XPath for all the paras in a document, and the document may have one of many namespaces for paragraph, the XPath becomes a total pain.
13:41:10
13:41:47 [timbl]
... Sometimes you decide something really changes its semantics, so you want to search only a new one.
13:42:10 [timbl]
... You can image a certain XPAth processor which reads teh metadata, and is smart about that.
13:43:54 [Vincent]
ack tvraman
13:43:54 [Zakim]
tvraman, you wanted to add that today browsers kick off an impl based on namespace, and this was especially true of IE in switching between html and xhtml. But that shouldn't be
13:43:58 [Zakim]
... the default pattern for handling multiple versions of a language
13:44:18 [timbl]
Dave: UBL does that: change the NS all the time, but have XPaths which are huge and recommended to do anything with UBL.
13:45:53 [timbl]
Raman: Yu can use the namespace to disptach a different code, but the two versions may in fact share code. Maybe the sofwatre instalation problem should look at the namespace, and also the far-feature stuff from DOM3.
13:46:19 [Vincent]
ack dorchard
13:46:19 [Zakim]
dorchard, you wanted to respond that schema did make some decisions about multi-ns wrt extensibility..
13:46:20 [timbl]
DanC: I hate DOM3 has-feature with a purple passion
13:46:23 [timbl]
q+
13:46:26 [DanC_lap]
s/far-feature/has-feature/
13:47:10 [DanC_lap]
(it could be that has-feature was fixed to use URIs. anybody know? but even so, I think it's the same anti-pattern as has_attr in python.)
13:47:43 [DanC_lap]
. 8.2 Version Strategy: all new components in new namespace(s) for each compatible version (#2)
13:47:52 [DanC_lap]
in
13:48:17 [timbl].
13:48:46 [timbl]
... The issue is that you can't write a Schema for this.
13:49:33 [timbl]
... You wantto be able to say that the binding must occur here, whatever its version.
13:49:37 [timbl]
... Couldn
13:49:47 [timbl]
... 't use substitution groups
13:50:31 [timbl]
... There is flexibility in eth Schema design, including adding names to a namespace and new versions is really important design center.
13:51:22 [noah]
q?
13:51:43 [timbl]
... Taking one extreme, you could have every name in each namespace! What would a verison of the language be?
13:51:53 [timbl]
.... i worry about going to far that way.
13:51:55 [Vincent]
ack timbl
13:53:26 [timbl]
Vincent: You wanted to get back to the issue of terminology....
13:53:41 [timbl]
Noah: We could tell a story which goes beyond XML.
13:53:50 [timbl]
q+
13:54:30 [timbl]
q+ to suggest limiting the effort we put into the XML layer in that there are limited things on can do at that level, and suggest that future parts of it deal with CDF and RDF and SOAP versioning.
13:56:03 [timbl]
Noah: There is a story which is independent of XML, and it becomes useful in the XML level. When people do unmarked up content inside the XML have versioning issues. too.
13:56:12 [timbl]
DanC: Encourage you to write down the generic story.
13:56:52 [timbl]
Noah: Dave thought that the XML people would not keep reading stuff which was generic.
13:57:25 [timbl]
Dave: I don't want many views of the same thing from many angled, and too much abstraction.
13:58:08 [timbl]
DanC: Makes sense for bits about text strings to be there but not the first bit and ignored by the XML person whi an XML problem.
13:58:38 [timbl]
q+
13:59:15 [timbl]
Noah: people are asking me to do this -- or I can do metadataInURI
13:59:21 [Vincent]
ack timbl
13:59:21 [Zakim]
timbl, you wanted to suggest limiting the effort we put into the XML layer in that there are limited things on can do at that level, and suggest that future parts of it deal with
13:59:24 [Zakim]
... CDF and RDF and SOAP versioning. and to
14:00:02 [DanC_lap]
(managing one's todo list for the TAG is something I have certainly not solved to my own satisfaction.)
14:02:29 [DanC_lap]
(I will repeat that I like the way dave and I have found a machine-readable formal artifact, i.e. the violet uml diagrams.)
14:02:41 [noah]
TBL: Maybe it will fall out of the terminology, but I think telling a general story has value.
14:02:55 [noah]
TBL: Dave, are you thinking at focussing on XML-specific or not?
14:03:09 [DanC_lap]
->
Ext/Vers terminology with generic/xml split
14:03:23 [noah]
DO: Noah didn't characterize my latest story as well as he should. I've moved toward including more general stuff.
14:03:33 [noah]
DO: But be careful, the 2004 version was almost all XML.
14:05:17 [dorchard]
14:07:20 [timbl]
q+
14:07:40 [noah]
q?
14:07:48 [timbl]
Dave: We should do several but not too many case studies of differenet languages.
14:08:23 [timbl]
Vincent: CSS is diff from XMl -- not a metalangauge, and under the control of one working group. It ios comparable with HTML.
14:08:39 [timbl]
Noah: They have a versionming strategy
14:08:46 [Vincent]
ack timbl
14:09:09 [DanC_lap]
(DO, do you distinguish betwee "ignore the whole element" vs "ignore the tags" ? css has an interesting design there.)
14:09:24 [tvraman]
q+ to add: ass CSS starts getting used to say interesting things on the right hand side of the equal sign, versioning becomes an issue --- microformats being a case in point.
14:09:34 [noah]
TBL: When you use XML for something specific like CDF, or HTML, you can tell stories about things like must-ignore.
14:09:47 [noah]
TBL: You can and should describe that whole branch of the tree.
14:10:20 [noah]
TBL: They are all in one form or another presentations languages, and you can say a lot about them.
14:10:31 [noah]
TBL: RDF is another branch you can say a lot about.
14:10:40 [noah]
DC: You've said that before, why are you saying it now?
14:10:54 [noah]
TBL: Because we're discussing the form of the document. How much general stuff, early, etc.
14:11:38 [noah]
TBL: I'm suggesting the document start totally generic, then move on to XML, then two branches, one for presentation languages, and one for RDF.
14:12:04 [noah]
TBL: Now having said that's a logical order, I note that some people have said that will be hard to read, and maybe we need a different organization.
14:12:12 [noah]
q?
14:13:34 [noah]
TBL: You have examples where you talk about names "consisting of" pieces. That's not an appropriate explanation in the case of RDF. RDF doesn't have a fundamental notion of "consists of". It's not tree-like.
14:14:08 [timbl]
timbl has joined #tagmem
14:14:30 [noah]
DO: I say the name language has the constraints that a name "consists of" the piece parts.
14:14:48 [noah]
TBL: right, that handles any context-free grammar
14:14:58 [noah]
DO: I think consists of can be mapped to things like RDF
14:15:11 [timbl]
Tim: But when you extend RDF systems you don't change the CFG.
14:16:12 [Norm]
q?
14:17:22 [noah]
q+ to say, the story about the terminology needs to be general
14:20:58 [Vincent]
ack noah
14:20:58 [Zakim]
noah, you wanted to say, the story about the terminology needs to be general
14:21:18 [Vincent]
ack tvraman
14:21:18 [Zakim]
tvraman, you wanted to add: ass CSS starts getting used to say interesting things on the right hand side of the equal sign, versioning becomes an issue --- microformats being a
14:21:22 [Zakim]
... case in point.
14:21:28 .)
14:22:50 [dorchard]
(I wasn't sure whether the suggestion was to have a CDF/SemWeb/someotherthing/someotherotherthing terminology sections..)
14:22:58 [timbl]?
14:23:17 [timbl]
... What is the true content? The source or the presentation?
14:23:47 [timbl]
___________________________________________________
14:25:05 [timbl]
Decision to move the final session in a high granularity silicon-based environment.
14:26:56 [timbl]
Tim: What happened to the math in the whiteboard in Edinbugh?
14:27:24 [Ed]
Ed has left #tagmem
14:27:40 [timbl]
Back compatability being the interpretation of a document being a subset of the]intent of the author
14:27:46 [timbl]
or something.
14:29:29 [DanC_lap]
(fyi, timbl,
)
14:29:36 [DanC_lap]
er...
14:29:59 [DanC_lap]
(rather:
Using RDF and OWL to model language evolution )
16:21:19 [DanC_lap]
RRSAgent, draft minutes
16:21:19 [RRSAgent]
I have made the request to generate
DanC_lap
16:21:50 [DanC_lap]
RRSAgent, make logs world-access
16:22:14 [timbl]
timbl has joined #tagmem
16:28:52 [Zakim]
Zakim has left #tagmem
16:55:20 [noah]
noah has joined #tagmem | http://www.w3.org/2006/03/03-tagmem-irc | CC-MAIN-2013-48 | refinedweb | 3,489 | 70.53 |
Difference Between C# and JavaScript
C# is a general-purpose, object-oriented programming language. It is intended for a simple, modern and general-purpose language. It has been designed to build software ranging from small functions to large operating systems. It is also a multi-paradigm language that is strong typed, imperative, declarative, functional, and component-oriented, whereas JavaScript is a high-level programming language. It is mostly used in web browsers. Along with HTML and CSS, JavaScript is the foundation of the world wide web. It makes the interaction between client and server possible.).
What is JavaScript?
JavaScript is easy to learn a programming language. JavaScript follows ECMAScript standards along with some of its own additional features that are not present in the ECMAScript standard. JavaScript is a scripting language that was first introduced in 1995 by Netscape.
Initially, JavaScript was used as a client-side programming language. Gradually with the enhancement of the language, more new functionalities were added to extend its support towards server-side scripting, PDF software, and word processing. Today JavaScript is fairly popular and widely used scripting language alongside CSS and HTML to create interactive and beautiful websites.
What is C#?
When Microsoft took the .NET initiative in around 2000, it introduced C# approved by the European Computer Manufacturers Association (ECMA) and International Standards Organization (ISO). The hash symbol ‘#’ in C# is commonly referred to as the word ‘SHARP’. C# is an object-oriented programming language that comes fully integrated with the Visual Studio IDE. The coding structure of C# closely resembles Java. C# requires compilation and hence can be compiled in a variety of platforms. C# is also a part of Microsoft’s .NET framework.
Head to Head Comparison Between C# and JavaScript (Infographics)
Below is the top 8 difference between C# and JavaScript:
Key Differences Between C# and JavaSript
Now that we discussed most of the essential features of C# vs JavaScript languages, in this section we can talk about some of the other difference
- JavaScript has so many tutorials, documents, and help available that it is easy to learn.
- C# is so complex and vast, it may frighten the learn at first sight.
- The developer community and peer network for both JavaScript vs C# language are strong but in a hindsight, it seems C# has a better peer group among windows developers.
- Nowadays, as nobody can get away with learning just a single language, it does not matter which one you start with. Having the knowledge of both JavaScript vs C# will only be beneficial in the long run.
- Generally, one needs to write so many lines of code in C# like Java to get things done which is not the case in JavaScript.
- The language syntax of C# is more consistent than plain JavaScript.
- One good thing about JavaScript is that it is still evolving, newer things build in other languages also started coming into JavaScript.
- Now that TypeScript is evolving, it is worth learning. TypeScript brings many missing key features to JavaScript which was not there in vanilla implementation.
- JavaScript has thousands of free libraries available and strong community support while C# is very limited as it is primarily windows based.
Examples
Below are the topmost examples between C# and JavaScript
->
- The below example prints our all-time favorite string Hello World using C#.
C#
C# using System;
namespace HelloWorldApplication
{
class HelloWorld
{
static void Main(string[] args)
{
/* my first program in C# */
Console.WriteLine("Hello World");
Console.ReadKey();
}
}
}
->
- The below example shows how to use read and write files using FileStream class in C#.
C#();
}
}
}
- From the above examples, we can simply copy paste the JavaScript example codes, paste them into a text file and change the extension of the file to .html. This will enable us to execute the codes. For running the C# examples we can either use the C# IDE, i.e. Microsoft Visual Studio or use the command line to compile C# codes.
- Like we discussed earlier, for both the above examples, JavaScript executes in any browser. But C# is more of Server Side programming on Windows server.
- JavaScript is weakly typed while C# is strongly typed. From the above examples, we see the use of classes and types in C# while there are no type definitions for JavaScript.
C# and JavaScript Comparison Table
Let’s look at the top Comparison between C# and JavaScript.
Conclusion
Basically,. JavaScript has a growing community and is continuously updating with new features.
C# is an object-oriented programming language that is developed by Microsoft and the project is head by Anders Hejlsberg. The C# codes are easy to learn if we have basic knowledge of Java or C++ programming languages. The latest version of C# is 15.7.2 and is used alongside Microsoft Visual Studio 2017.
Based on organizational requirements, a majority of client-side work is done in JavaScript. Most of the websites that we browse use JavaScript. Though C# has its own pros comparatively JavaScript is more popular and we can find expert developers easily. C# is also popular but kind of outdated in terms of usage.
Recommended Articles
This has been a guide to the top difference between C# and JavaScript. Here we also discuss the head to head differences, key differences along with infographics and comparison table. You may also have a look at the following C# vs JavaScript articles to learn more – | https://www.educba.com/c-sharp-vs-javascript/?source=leftnav | CC-MAIN-2021-04 | refinedweb | 900 | 64.51 |
System:
AIX 7100-04-01-1543
xlccmp.16.1.0 16.1.0.1 C F XL C compiler
during configure rpath ability is detected as working:
Checking linker accepts ['-Wl,-R,.'] : yes
Checking for rpath library support : yes
build reports:
ld: 0706-027 The -R libdir flag is ignored.
and rpath is not set in binaries
ld supports -R only if option -bsvr4 is set, otherwise option -R is ignored
But why does configure wrongly detect the abilities?
The check 'Checking linker accepts ['-Wl,-R,.']' succeeds because ld doesn't gives an error on -R (only warning). That is ok.
The check 'Checking for rpath library support' succeeds because it is broken to my understanding. On any platform I would say.
buildtools/wafsamba/samba_waf18.py:
lib_node = bld.srcnode.make_node('libdir/liblc1.c')
lib_node.parent.mkdir()
lib_node.write('int lib_func(void) { return 42; }\n', 'w')
main_node = bld.srcnode.make_node('main.c')
main_node.write('int main(void) {return !(lib_func() == 42);}', 'w')
linkflags = []
if version_script:')
o = bld(features='c cprogram', source=main_node, target='prog1', uselib_local='lib1')
if rpath:
o.rpath = [lib_node.parent.abspath()]
def run_app(self):
args = conf.SAMBA_CROSS_ARGS(msg=msg)
env = dict(os.environ)
env['LD_LIBRARY_PATH'] = self.inputs[0].parent.abspath() + os.pathsep + env.get('LD_LIBRARY_PATH', '')
self.generator.bld.cmd_and_log([self.inputs[0].abspath()] + args, env=env)
o.post()
bld(rule=run_app, source=o.link_task.outputs[0])
it creates:
libdir/liblc1.c
main.c
compiles to:
libdir/liblc1.c.1.o
main.c.2.o
creates lib:
liblib1.so
creates binary:
prog1
And now checks if prog1 get the lib loaded. But the lib is located in the same directory as the binary! So this succeeds without a correct set rpath in the binary!
To fix I use the following patch which creates the lib in libdir/ (o.rpath setting should be made better, I'm not familar with WAF...)
--- buildtools/wafsamba/samba_waf18.py.orig 2019-04-25 13:45:49.000000000 +0200
+++ buildtools/wafsamba/samba_waf18.py 2019-04-25 15:05:52.000000000 +0200
@@ -212,10 +212,10 @@')
+ bld(features='c cshlib', source=lib_node, target='libdir/lib1', linkflags=linkflags, name='lib1')
o = bld(features='c cprogram', source=main_node, target='prog1', uselib_local='lib1')
if rpath:
- o.rpath = [lib_node.parent.abspath()]
+ o.rpath = [lib_node.parent.parent.abspath() + '/testbuild/default/libdir']
def run_app(self):
args = conf.SAMBA_CROSS_ARGS(msg=msg)
env = dict(os.environ)
Can you please send us a DCO and create a full GIT patch and submit it as a merge request?
The test idea looks reasonable, it would be good to get this upstream.
Thanks!
Sorry, I don't understand your wiki.
Do I need to become a team member for this?
(In reply to Bert Jahn from comment #2)
No, just join gitlab.com and let me know your username. I can then let you in to that repo allowing you to schedule a full test and prepare a merge request.
Finally, make sure to send in your DCO and sign off your commits:
I sent the DCO.
My username at git lab is: bert.jahn
I have created a merge request.
Branch is bert.jahn-rpath
the merge request broke the rpath handling on the build machine. I also don't understand what you are doing in your proposed patch and for what reason. Can you explain the changes that you propose here?
The changes that I propose here are explained in this bug report.
I don't see why this change fails in one of CI builds because the changes do no change the way it will build:
Checking compiler accepts ['-Werror'] : yes
Checking linker accepts ['-Wl,-rpath,.'] : yes
Checking for rpath library support : yes
Is it possible master branched from was not clean?
as discussed in - this patch doesn't actually do what it's expected to do.
From my experience the lxc compiler also generally has the problem that "-qhalt=w" does not halt on all warnings while we do have to expect it to throw an error for tests.
Bert closed the merge request already, I'll close this bug report accordingly. Thanks for your efferts anyway, if you find a good solution, feel free to reopen this bug again. | https://bugzilla.samba.org/show_bug.cgi?id=13933 | CC-MAIN-2021-17 | refinedweb | 695 | 60.82 |
Rails & Vue.js Trello Clone - Part 2 Discussion
Wow a lot to process nice. Will keep a eye on this as I like to move to more frontend stuff. Thank you Chris
I followed along step by step - but I am not getting the 200 status response. When I check your code on github, it looks like you've added quite a bit more than what was covered in the 2 videos. Everything is just null for me when I click the button. Any direction on where to troubleshoot that?
Hi Chris,
Any plans on creating some content on Vus.js Rails Steps forms?
I have a monster of a form I would like to use this for.
Dan
Hi Chris,
I had to set 'this' reference externally to use it in Success callback. But you didn't do that in the video I wonder how that works for you ? Please check the below code snippet SS ....
I used a thick arrow function which does that for you. You'll see my success function is: "success: (data) => {" and you use "success: function()".
Hi Chris. I'm getting an error at 9:33, when I try to add a card.
```Uncaught ReferenceError: Rails is not defined at VueComponent.submitMessages (app.vue:31)```
Can you help me solving this?
I was able to fixe Rails.ajax by updating my gemfile to run Rails 5.1 or above and by requiring rails-ujs in application.js
I had the same error and to fix it, I've added "import Rails from '@rails/ujs'"; into the script part in app.vue likewise :import Rails from '@rails/ujs'; export default { ... }
*As a side note, I have required rails-ujs in the application.js file as well and still can't seem to get it to submit any data.
if (!(typeof options.beforeSend === "function" ? options.beforeSend(xhr, options) : void 0)) {
return false;
}
if (xhr.readyState === XMLHttpRequest.OPENED) {
return xhr.send(options.data);
}
so it couldnt get to send request because when you dont specify "beforeSend" it always return from function.
One possible way is to add:
Rails.ajax({
url: "/cards",
type: "POST",
data: data,
dataType: "json",
beforeSend: function() { return true },
.....
But i dont like t very much, you can install rails-ujs as module or use ajax of jquery and it should work.
const Rails = require('rails-ujs'); Rails.start();
I'm not sure if it will carry over into the other components, but this is working for me now.
that works up until you have more than one component. Then it'll complain rails-ujs is already loaded.
My work around is to add
import Rails from "rails-ujs"
in each script/component.
This is still problematic with Rails 6.0.0rc1
On Rails 6 you can import rails-ujs like that :
import Rails from '@rails/ujs';
between script tags in your app.vue | https://gorails.com/forum/0-rails-vue-js-trello-clone-rails-vue-js-trello-clone-part-2 | CC-MAIN-2021-04 | refinedweb | 479 | 76.42 |
SDK Documentation: Super Users
Why Send User Data?
Sending user data to us will annotate your users’ sessions, giving you more insight into who is using your app (Super Users) and who is seeing which in-app UI flows (Release Notes, Onboarding, etc.).
You can add as little or as much information as you’d like. We recommend you at least provide a
userId(something that you, or your server provides you, that identifies your user). For your convenience, you may add the user’s name and/or email address, if that makes it easier for you to visualize the data.
IMPORTANT: Make sure providing the user information to a trusted 3rd party like us is covered in your Privacy Policy.
When A User Logs In
Add this code when the user logs in, or when a logged-in user starts the app. This will associate your user&rsqou;s information with their session here.
#import <AppToolkit/AppToolkit.h> ... // When the user finishes logging in, or is already logged in on start [[AppToolkit sharedInstance] setUserIdentifier:@"qxb49bd" email:@"bob@loblaw.org" name:@"Bob Loblaw"]; ...
import AppToolkit ... // When the user finishes logging in, or is already logged in on start AppToolkit.sharedInstance().setUserIdentifier("qxb49bd", email: "bob@loblaw.org", name: "Bob Loblaw") ...
When A User Logs Out
In order to clear the AppToolkit session of the user’s information, add this code when the user logs out.
#import <AppToolkit/AppToolkit.h> ... [[AppToolkit sharedInstance] setUserIdentifier:nil email:nil name:nil];
import AppToolkit ... AppToolkit.sharedInstance().setUserIdentifier(nil, email: nil, name: nil)
Accessing Super Status
Once you have configured Super Users in your Dashboard, you can check inside the app whether the current user qualifies as super or not.
if (ATKAppUserIsSuper()) { // User is a Super User, so you can perform different tasks for that user. [self enableAwesomeFeature]; }
if (ATKAppUserIsSuper()) { // User is a Super User, so you can perform different tasks for that user. enableAwesomeFeature(); } | https://apptoolkit.io/sdk/super-users/ | CC-MAIN-2019-04 | refinedweb | 318 | 55.13 |
Code Rewrite - Yes
Tuesday, 25. September 2007, 01:44:15
So should I backpeddle? Should I recant my statement about rewrites? I mean, I've seen my fair share of bad rewrites. No, I think rewrites are fucking fantastic. If your software is in its first incarnation, I say a rewrite is a good thing. But it's a good thing only as it relates to your code base. You'll have to decide if it's actually worth it. Is the time and money spent justified? That's another discussion. I want to talk about the code itself.
I've seen bad rewrites, but have NEVER, not once been involved in a bad one. Frankly, it's because I rewrite things when the original is so bad, that it can no longer be improved upon. Sounds like a contradiction, but any programmer has seen code so bad that any modification will result in a reduction in functionality no matter what you do.
The problems that arise in a rewrite are many. First, funding is a problem. I don't want to talk about this too much, but when you do a rewrite, managers and higher ups feel like money is going down the drain. The pressure on programmers makes them uncomfortable and this is the #1 reason why software fails. Programmers don't feel compelled to test, do trial runs, write unit test or do any of the things that are worthwhile because they don't want the finger pointed at them for wasting more time and money. If you have this kind of scenario, get out now. JUST ONE NEGATIVE THOUGHT WILL FAIL THE PROJECT! ONE! I'm not joking. The absolute worst thing a project can have is one single negative thought. After that, it snowballs and game over.
You can and will have obstacles. But everyone should always think that it's possible to succeed. The moment you start to doubt this, the project is over. Obstacles need not be negative. A setback is just that, a setback. You can keep going. A negative thought is one where you move backwards. Where you don't think you can advance, ever. Those are the ones that kill projects.
Every single rewrite that succeeds is WAY superior to the original. Not by a little, but by incredible amounts. That sounds like an obvious statement. Of course, if a rewrite fails, it'll suck. And a successful one will rock. Well, here's the secret to a successful rewrite. You need the original programmers doing the rewrite. If they're gone, kiss the rewrite goodbye. It's as simple as that. If you use new programmers, they will hit the same problems as before, only this time, they won't be tested and won't work.
There's only one kind of situation where I've seen a proper rewrite happen successfully with new programmers. It's where all inputs, transformations and outputs are documented and all special cases are likewise written down in documents. With this amount of documentation, you need to sort it. So thus begins the organisation and the new programmers are forced to think about how this all fits together and also to think about a framework. Another thing. These new programmers CANNOT have access to the old code. They cannot see it, look at it, poke it, or even touch the back of a screen where it is displayed, even if the person is blindfolded. The reason is what I just said. You don't want the new software to be a recreation. Recreations mean a reproduction of not only its features, but of its problems. So you'll just end up with a duplicate of the original, just without all the bug fixes.
Another way to make a rewrite successful is to do it incrementally. In my experience, these have the most chance of success and are the easiest to implement. What you do is start with the original software, but basically butcher it. You rip out everything you don't need. Only leave the core and rewrite the protocol that all the modules in your system will use. Write conversion routines that can translate back and forth between the two protocols. This means you can no longer call a module directly. You'll be amazed at the modularity you'll get just by doing this. It'll force you to rethink your implementation. Then start replacing the modules that you need for the new core to work properly. Then start rewriting the core systems. Some modules may be merged, changed locations or have a different set of functionality. Just as long as the overall operation remains the same. I once took a Java server and made all modules callable by sockets. I rewrote the modules in C++ and eventually left certain parts in its own process. I could then run different parts on different machines when this wasn't a requirement. What it did was give me more flexibility later on.
The biggest problem I've seen with software is that it's monolithic. There are no languages out there that are pluggable. Functions are the anti-thesis of modularity. Ironically, it's being incorrectly promoted as the best tool for this. Unless you're writing a library, no such luck. Most software start out small. You don't need plugins and a framework. You just need a few features. But at this point, all functions call all others. There's no way to insert more code without modifying the existing source. That's where rewrites should concentrate their efforts. If you rewrite just for the sake of a rewrite, don't bother.
And here's the weird part. If you want to do a rewrite, you should want to rewrite the core systems. In short, you want a new framework. You can't write a new framework in languages that force you to use theirs. This includes ALL languages that have an interpreter or VM. C, C++, Pascal, etc. are still the best tools to do a rewrite. It simply makes no sense to rewrite something on top of the same framework. What are you getting? Nothing.
Think of successful rewrites. One example is Linux. Linus used Minix until he had enough pieces in place that he no longer needed it. uTorrent is also a good example of a successful rewrite. Again, there's a framework on which to build on. It may not be obvious, but there are plenty of existing P2P files being transferred right now.
Rewrites are absolutely fantastic. But you have to know what you're doing. If you get someone that has never done one and knows nothing about the original software, then good luck. You'll need it. Rewrites are in the realm of optimizations. They make your software work better. Most programmers have been told not to even look in that direction. It's not that rewrites are bad. It's that it's not a practice that is taught in schools, Universities or Colleges. And there's the bigger point. The main point. The crucial point why people would rather avoid doing a rewrite. It's that normally success is just getting something to work. With a rewrite, the bar is set SO much higher. So high in fact, that you have a concrete measure to compare with. All of a sudden, there's accountability. That's the real reason you shouldn't do one.
Rewrites are actually very easy. Everything is already there. You have MORE data and test cases at your disposal than you could dream for. The reality is that it exposes just how good a programmer you really are and if you can do what you say you can. And about surprises and undocumented features or side-effects. I don't want to hear it. That's your job as a programmer to assess these things. That's how every other field works. There are risks. It's your job to ascertain them. If you decide to start one, you best make sure you have everything you need. Just like any job, there are some that are too risky. But saying that you should avoid all rewrites is a cop out.
After the initial rewrite, you should actually continue rewrites. While you can think up many reasons to do a rewrite and probably more for not doing one, it comes down to this. You rewrite code because you're stuck modifying existing source code. What you want is a system where you NEVER touch the original code except for bug fixes. You don't touch it for adding functionality. You don't touch it to add a module. You don't touch it to refactor. You don't touch it for ANY reason except to fix what should have been working in the first place. If your code doesn't do that, then this is a good reason for a rewrite. It should be only one reason out of two for a rewrite. The other being that the original didn't work.
So this should make it clear what you do with a second rewrite. You don't do a complete rewrite. You basically rip out all the parts that have been added since the last rewrite and make it work in a more consistent way with what's already there, rewriting much of that too if need be. Normally, a second rewrite would only be 20% of the total source tops. A third rewrite is 5% or less. In fact, everything I do is made of rewrites. Every iteration gets better. It is my contention that rewrites are done too late. They wait until they must change everything.
I say yes to rewrites. I rewrite until I no longer need to touch the source code. If I'm touching the source code, then I've done something wrong and it should be rewritten. And if I'm the only one who thinks this way, then so be it. I don't miss not having big balls of mud.
vladas # 25. September 2007, 05:56
On the other hand, I do complete rewrite in case if something (module or algorithm) gets too complicated while I program it. I don't hesitate in this case. I'm just taking another different view on problem and rewriting it from scratch. Right now I do the same with my Memel. I made a first draft system just for proof of concept, I realized concepts and problems, and now I'm capable to do it in more better way.
About the time pressure - you are right. I think that in the meantime only a small amount of programmers (and customers) remembers that programming is an art. It's impossible to do a good artwork under any kind of pressure. That was my phrase about (you want good or cheap?). It's about time. Cheap in this case means 'quick'.
Vorlath # 25. September 2007, 07:18
You also bring up a valid topic. When is refactoring not a rewrite and vice versa. Where's the boundary? What are the differences? When I code, I don't think with these terms in my head so I'm curious where people draw the line.
Anonymous # 25. September 2007, 10:44
Refactoring are supposed to be incremental, so if you scrap the old code and start out fresh, you are not refactoring.
Nice post on an interesting subject!
Hans-Eric Grönlund
vladas # 25. September 2007, 11:29
Anonymous # 25. September 2007, 13:23
Yes, you have a point. I usually define refactoring as being the technique described in Martin Fowlers book. I might very well be wrong (it was a long time ago I read it). Maybe it's the result that defines refactoring, not the technique.
Wikipedia defines refactoring as "modifying without changing its external behavior". This suggests that you should be able to rewrite from scratch and still call it refactoring, as long as the external behavior is not changed.
Interesting. What then should I call the technique?
Anonymous # 25. September 2007, 14:24
Yes! The best that can happen to code is that it is rewritten. Each time around the experience from the previous round helps build a better, more orthogonal, clean & lean, flexible and efficient system.
Anonymous # 25. September 2007, 14:55
And I'd take Joel's opinion on this with a grain of salt. While he does preach that rewriting is bad, or at least very risky, they essentially did a complete rewrite of FogBugz to move to a platform-independent code generator for FB5 (?). He may call it something else, but changing your application to use a generator and spit out both ASP and PHP depending on the target platform is such a significant change, that I think you can only call it a rewrite.
Anonymous # 25. September 2007, 16:53
I had a little problem with your post because first you wrote
"I want to talk about the code itself"
then you go on and talk about managers raising fingers...
I think that was not the best way to write the article, because
as I was not at all installed at the economy-pressure (i code for
fun), I was much more interested in conclusions than that economic
finger pointing :(
About rewrites, I think its better to always keep in mind what kind
of problems arise in the first case. Maybe with growing experience
of the coder(s) in question, you will less likely need a
real rewrites (or just rewrite some components, which are modular
and easy to change) ...
Vorlath # 25. September 2007, 19:14
Experience isn't enough to avoid a possible need for a rewrite. That's what my previous blog entry was about. It talks about a well known paper called Big Ball of Mud and it explains this specific issue (why software most often tends to end up as a big ball of mud) far better than I.
Anonymous # 25. September 2007, 20:00
"You can't write a new framework in languages that force you to use theirs. This includes ALL languages that have an interpreter or VM."
Strongly disagree. With the same argument why dont you want you to rewrite the compiler itself? Interpreters/VMs are the basic underlying machine outside the application domain. Usually they are out of the scope of the problem to solve, except sometimes speed concerns.
Anonymous # 25. September 2007, 20:14
I did a complete rewrite of a C++ project in Lisp.
Not only it is much shorter now, but quite every time I add a new feature, it just works. In the C++ code, there were some very strange bugs, which have never been caught.
I'm really surprised of the speed (no difference to the C++ version), and catching bugs is now much easier and faster, not to mention that they are very rare now.
But all this isn't real news nowadays (at least for open minds), is it?
Vorlath # 25. September 2007, 21:50
Anon: You went from using your own framework to using someone else's (Lisp). That's uh... boring.
Anonymous # 25. September 2007, 21:50
I work in a corporate I.T. department (publishing company) and we don't get to rewrite anything unless it has a business reason to do so. Even if it's the biggest piece of crap ever delivered by one of our consultants or outsourced to someone and brought in house, we don't get to rewrite, or even refactor, ANYTHING unless the business side has a budget for it and only then does the rewrite get approved (and subsequently outsourced to one of our top manager's consulting buddies' firms, but that's a rant for a whole other post).
Anonymous # 25. September 2007, 23:12
Actually, Linus used Minix until he accidently f!@#ed up his Minix partition and was too lazy to fix it.
Anonymous # 26. September 2007, 02:43
I find your ideas about working with a system that uses an interpreter or VM a bit strange. Do you write software than runs on Intel processors? The instruction set for many Intel processors implement the assembly instructions using microcode - the machine code that all programs that execute on the computer is running on an interpreter.
Is this cheating? Do you have to build your own hardware to successfully rewrite your software?
If your software happens to be a web application, you are working within the frameworks of HTML, HTTP, CSS, etc. To really rewrite do you have to throw away those frameworks and replace them with your own (or perhaps just going from a web client to a fat client, then for the next one maybe X or console would be good enough).
What about languages like Haskell that provide both an interpreter and a compiler? Does the existence of an interpret automatically disqualify it? If I write a C++ interpreter will you stop using C++?
Vorlath # 26. September 2007, 03:46
Now, the decision is about how much of a clean break you want to make. If you're doing a rewrite, the cleanest break you can have is by using a system level language.
You can also do a rewrite without writing a new framework. You can extend existing ones and use VM's and whatever else. I just think this is a waste of effort for a rewrite if you're just gonna let other software developers of other languages and platforms decide design decisions of your software anyways that may not be the best fit for your application.
With the web, you can't throw away the framework. The framework is the web. I doubt you can rewrite the Internet. So this is one case (as it is with a lot of hardware) where you want to stay with existing frameworks and be affected by the web's philosophy. But keep in mind that web applications are ONE type of application. There is currently no alternative, but that won't always be so. So when you do a rewrite for this future environment, you will indeed want to ditch HTML, HTTP, CSS, etc. And again, you still won't create a new framework because likely someone else will have designed one. This is called porting. You ditch the old and use whatever is available on the new environment.
What I said is the best way to build a NEW framework is to be in as much control as possible. But not everyone needs a new framework. Many people are ok with a 90% new or 50% new or 20% new framework. Some don't need any new framework at all. Existing products are good enough. So you can build on what's already there. This is why it doesn't matter to many people if the code is native or interpreted. It's not something that will affect the new version of their software.
Now take my project called Project V. This can work with everything from native code to VM's to interpreters. I'm not going to bind my product to existing proprietary products because it makes no sense that Project V can produce native code, but my product isn't itself native code. As for libraries, I can create bindings for anything that currently exists. But my framework is completely new and independant of any outside influence. Not many have my requirements. But if you do, it's good to know about these things. It's also not a bad idea to keep in mind that most of the software we write is conditioned by others who came before us even when we think we're doing a complete rewrite. And now you know what's involved if you ever wanted to not be influenced at all.
vladas # 26. September 2007, 05:25
Every advance in software, I think, happens because of someone's lazyness
Anonymous # 26. September 2007, 07:12
@ Vorlath:
Lisp is not a 'framework', it's more or less at the same basic level as assembler (you don't need C to implement Lisp, and a couple implementations don't use C).
So, Lisp and assembler are both bare-bone (in a sence), and they have more in common than one may think.
vladas # 26. September 2007, 09:40
vladas # 26. September 2007, 13:51
What's about the side effects on CAR and '.' list notation in LISP? And the very internal structure of lists?
Are these bare-bone too?
Vorlath # 26. September 2007, 21:01
What's the opcodes for cdr?
When I program in a system level language, I knew pretty much what the opcodes are going to be. Only exceptions and RTTI I can't figure out, but those are higher level anyways. With C, Pascal, heck even Modula-2, I can tell what's going on.
But again, you're missing the point. It's not about being bare bones. It's about escaping design decisions made by others. If you use Lisp, you still have Lisp at runtime. So you must formulate your code according to Lisp and not according to your specific needs. Hey, if they're close, that's great. But that's not the point. It's the fact that no language can be a perfect fit other than a carefully crafted framework built exactly for your needs. System level languages are the only languages that let you do this. Lisp cannot do this.
Anonymous # 27. September 2007, 15:42
There's a slight difference between rewriting and refactoring: rewrite is refactoring with baggage, while refactoring is a tiny rewrite of the smallest replaceable piece of code.
Most rewrites happen for superficial reasons, like wanting to play with the newest toy (Perl to PHP, PHP to Rails, Rails to Django, etc.), or a messy codebase, inflexible API, etc. But it's dangerous to assume that having a complete version of your product would clarify your vision and simplify a rewrite.
When you rewrite an application, you're starting from zero. Not from 20-30%, but from zero. You know how your interfaces would look like, you know what your application *should* do, but you don't yet know *how* it will do it.
Refactoring, on the other hand, is tiny rewrites applied on tiny portions of code. The point of refactoring is to replace code with better-looking code while maintaining the same functionality. When you apply this principle on small sets (as opposed to complete applications), you needn't worry about features and vision much, because what you're doing is mostly harmless, and humans are perfectly capable of handling small sets of information.
At the end, I could be nitpicking. I feel rewriting is risky. It could be pulled off, but it's not the wisest decision to make.
Anonymous # 27. September 2007, 21:49
What's the opcodes for cdr?
In case you're interested, CDR was originally implemented for the IBM 704 as the following assembler macro:
LXD JLOC,4
CLA 0,4
PDX 0,4
PXD 0,4
I'm unfamiliar with 704 assembler, so I can't comment much on the code, but I'd imagine it pulls bits 4 to 18 from a memory location and puts them in a register. Modern architectures don't use 15 bit addresses and 36 bit words to store their data, so if I were implementing Lisp on, say, a 68k, I'd use two 32 bit words to represent a cons cell. In which case, I believe CDR could be defined in just one opcode:
MOVE.L (A0),D0
Lisp was originally designed to run on a machine that used vacuum tubes, so it should come as no surprise that it's built-in instruction set can be implemented at a low level. I'd imagine C is rather more complex to implement, and further from the system layer than a minimal lisp implementation.
Vorlath # 28. September 2007, 06:53
I'll just recap for those who want to know. That I'm not just saying these things.
First, the code you're showing is for the interpreter. So you're making my point that you're using someone else's framework.
Second, the 68K opcode just moves a longword (32bit) into D0. But you make no mention of how you are organising your data structure. Is the pointer to the item first or second? In any case, the value loaded up would be a pointer and you're loading it into a data register. Not sure what you're doing there. But whatever. It's just more nonsense that we're all too familiar seeing with you.
Third, you say that Lisp was designed to run a machine that used vacuum tubes. Awesome! Again, this is someone else's framework. Where's the user's code? With C, the user's code is exactly what the user specified, not what C wants. In fact, nothing of C remains. Even the calling mechanism is not C's, but rather the machine's ABI. So again, with C, the only thing left is your code. Not so with Lisp. With Lisp, it's always Lisp with your additions. I repeat, it's always the same code at runtime. Someone else's code.
vladas # 28. September 2007, 07:17
Anonymous # 28. September 2007, 11:21
Vorlath, you argue that C is a "system-level" language, and that Lisp is not, but surely that depends on the system the language is being implemented on.
The IBM 704 used memory made up of 36 bit words. Instructions were held in memory using a format of a 3-bit prefix, a 15-bit decrement, a 3-bit tag, and a 15-bit address. CDR was an assembler macro standing for "Content of Decrement Register", whilst CAR stood for "Content of Address Register". In other words, Lisp cons cells could be stored directly in the instruction set.
In my 68k example, I assumed that cons cells were stored as pairs of 32 bit pairs, so CDR would be:
MOVE.L (A0),D0
And CAR:
MOVE.L +(A0),D0
A Lisp applications would probably contain a lot of arbitrary JMPs from one cons cell to the next (though a good compiler could abstract a lot of them out), but apart from that I don't believe it would be inherently different to a compiled C application.
And whilst C may not seem like it makes many assumptions, that's because we're all used to a standard processor architecture. We expect there to be a few registers, one or more stacks, and a linearly addressable memory arranged in groups of bits divisible by 8. This doesn't have to be the case. Instead of linear blocks of memory, you could use chains of linked lists, with each block of memory divided into two address pairs. I believe this style of computer architecture is known as a pointer machine, and trying to get C working with such a system architecture would likely be an uphill battle. You'd have to implement what would amount to a VM in order to get pointer arithmetic to work.
Given this, why assume that Lisp is any different from compiled C? Both can be implemented in VMs. Both can be executed at the hardware level. At least some processor architectures are incompatible with C, and would require an abstraction layer for C to work. Possibly the same is also true of Lisp.
Why the bias against VMs? How is a system architecture implemented in software any different from one implemented in hardware?
Vorlath # 28. September 2007, 13:03
My bias against VM's is this.
Functionality of hardware is X.
Functionality of software is Y.
Y < X ALWAYS!
Clear?
Also, if someone else wrote the core, then you must conform to other people's design decisions. OHOH, if you control the core, then you make the decisions and this is the only way you can keep providing functionality to the higher level parts if you need to. With VM's, you're dependent on Sun or whoever to keep it up to date. And if one day they decide they're not going to let you use feature C anymore, too bad. It's your project, you should have final say.
If you use a VM, then your high level part is basically a VM on top of a VM. How ridiculous is that? And if you're not building another higher level on top, then you're doing it wrong.
I still find it funny that Sun, MS, Google, Adobe and a host of big companies all use exactly the techniques mentioned here, yet there's resistance to this idea. There's a real reason they do this. They are in control. They're not dependent on anyone else for the core functionality. And the stupid thing is that there's nothing a VM or interpreter can do that can't be done at the system level with equal ease. There's no reason for VM's at all. That, more than anything is what confuses the hell out of me as to why people use them.
vladas # 28. September 2007, 16:02
You are right in that C can have, and actually has some other - interpreted implementations. But whatever it be - I can think in C as though it would be Assembler. Can You think in LISP similar way?
If I write a++; in C, I expect similar compiled machine code. But I can't think what code will be for CDR in any of LISP implementations. I even can't be aware of lists memory allocation details in various LISP VMs (I can count 5-7 different of them).
As the final paradox I could tell - LISP is more robust and restrictive in the same time. I'm not against it by the way.
Anonymous # 28. September 2007, 18:19
The performance of a VM is usually less than the underlying hardware, but the functionality does not have to be. What if I ran my OS under an x86 VM like Bochs on an x86 machine? Then my VM would have exactly the same capabilities as my hardware - just an order of magnitude slower.
I'm also curious as to where different hardware architectures fit into this, Vorlath. A company called Symbolics sold some Lisp Machines in the 1980s, which implemented a minimal lisp core at the hardware layer. Using your "Software
Vorlath # 28. September 2007, 22:19
2. I'd need to see the specs of this Lisp machine and I care to look at that as much as I care to shave my next door neighbour's cat.
3. Ever heard of A1 (not the sauce)? MOVEA.L (A0), A1 ; Address registers are made to hold, wait for it... addresses. Yay!
Anonymous # 29. September 2007, 01:52
1. A different kind of VM? Care to be explicit as to where the line is drawn? Clearly you view say, the JVM and Bochs as being two different "kinds" of VM, but what, precisely, is the defining difference? Is it just whether the VM's architecture has previously been created in hardware?
2. Then why have such strong opinions on matters in which you are not in possession of all the facts?
3. A cons cell doesn't have to store an address in the decrement register, which is why I chose to use D0 over A1.
Vorlath # 29. September 2007, 11:44
2. You're talking about one specific case called a Lisp machine. Also, a Lisp machine is tailored toward one specific language. I've actually said very little about this Lisp machine.
3. Your statement has more holes, but I'm ending it because it's about one specific framework. I've already said it (the fact that it's from the interpreter) actually proves my point anyways.
Weave Jester, go read a book or something. Every single time, you spread incorrect notions and can't stay on topic. Besides, what I'm saying isn't outrageous and I think it may have been blown way out of proportions. There's nothing in a VM that can't be provided in a system level language. But there are things in a system level language that can't be provided in a VM. Specifically, the ability to control the core of your software and decide what functionality will be available to the higher level parts of your system. This is braindead obvious. There's nothing new here and all large companies do it this way for a reason. Even your precious VM's do it this way. So before you say anything else, explain to me why the very first JVM wasn't built using the SmallTalk one for example?
Anonymous # 29. September 2007, 13:35
It may be "braindead obvious", but it's also incorrect. A VM may degrade performance, but it doesn't necessarily reduce functionality. The reason the JVM wasn't built on top of another VM was purely a performance consideration. It's not like one couldn't create a working Java bytecode VM in, say, Python, but if one did, it would be depressingly slow.
It occurs to me that perhaps you're talking about functionality from a programmer's perspective, rather than the functionality of the application itself. It's obvious that you could write a Java program that exactly mimics the outputs of a C program, but in the Java environment, the programmer loses the ability to manipulate pointers and so forth, and this may be seen as a restriction of functionality.
However, C does not represent the pinnacle of language functionality. I can do things in a Lisp VM that would be impossible or impractical in raw C. For instance, many Lisp programs are self-modifying, which would not really be workable in C. Different C compilers produce different outputs, and ANSI C can theoretically compiled on many different architectures, so you couldn't just change the assembly of your application at run-time in the same way as you can manipulate S-expressions in Lisp. In order to get self-modifying, cross-platform C code, you'd probably have to implement a VM of your own.
But you'll never fully understand the benefits or disadvantages of something without first trying it, so I'm uncertain why you so strongly cling to notions that are not backed by practical experience. I like VMs because they give me more flexibility, and more functionality. I like introspection, syntax macros and higher level functions, and in order to use them I C, I'd effectively have to create a VM of my own from scratch. The effort involved in getting the functionality of C to the same level as, say, PLT Scheme, seems rather prohibitive, and my home-made VM probably wouldn't be as good as the VMs that scores of programmers have worked on for years.
I mean, if I was really concerned about starting from scratch, I'd begin by etching my own microchips - x86 architecture is really limited in many respects.
Vorlath # 29. September 2007, 14:54
That's my favorite line. You really do produce some gems sometimes.
"I like VMs because they give me more flexibility, and more functionality. I like introspection, syntax macros and higher level functions, and in order to use them I C, I'd effectively have to create a VM of my own from scratch."
Exactly! This is what I'm talking about. This way, you control what features are allowed at the higher level for extensions, plugins and custom high level code. You can bind certain objects directly into your low level framework and all sorts of things like that.
"The effort involved in getting the functionality of C to the same level as, say, PLT Scheme, seems rather prohibitive, and my home-made VM probably wouldn't be as good as the VMs that scores of programmers have worked on for years."
Huh? Why wouldn't you be able to use existing ones as your high level language? There are plenty of scriptable C and C++ type languages. I'm sure there are Lisp style OSS intepreters that would allow you to bind your own low level stuff into. Also, there are plenty of toolkits that allow you to build your own languages with your own action routines.
And about examples that use what I mention, here are a few:
JVM
.NET
SmallTalk (this is actually a prime example)
Lisp interpreter (most of them)
Quake (all of them)
Most 3D games (that allows mods).
Firefox
Opera
IE (I think)
Photoshop
On and on...
Anonymous # 29. September 2007, 17:19
I think we're talking past each other, and we may even be advocating the same thing for once, though I confess I'm still unsure what you mean. All the major VM-based languages I can think of have foreign function interfaces, allowing one to bind native C functions to functions the VM can understand. If I wanted to access feature X of the underlying system, I could define a FFI for it... though, most useful functionality is likely already implemented in that fashion. In truth, I've only ever needed to do it for performance reasons.
I don't advocate using VMs without connecting them to the underlying hardware - that would just be silly, and more than a little pointless. But VMs are useful, so long as they expose all the core APIs that would be needed. The majority of code can be written in the VM, with a minimal compatibility layer connecting it to the underlying system.
I'm starting to think that we just have a different idea of what makes up a "framework"...
Vorlath # 29. September 2007, 17:49
I'll leave you with one last analogy.
If I want an integer, I can use an integer.
If I want a string, I can use a string.
If I want a boolean, I can use a boolean.
If I want a compound structure, I can use a compound structure.
If I want algorithm X to use an integer, I can use algorithm X with an integer.
If I want algorithm X to use a string, I can use algorithm X with a string.
If I want algorithm X to use a boolean, I can use algorithm X with a boolean.
If I want algorithm X to use a compound structure, I can use algorithm X with a compound structure.
This is your argument. The algorithm is an anology for your VM. The tools inside are the basic types. So no matter the algorithm, you can choose the one that most fits your needs. Whether that be Lisp, Java or whatever. But all tools are inside because no matter which one you pick, there they are. So it should just be a matter of preference, right?
If you can just use whatever you need, can you explain to me why anyone would ever create a pattern? Like vector<T>? Or set<T1,T2>
My point is that VM's are just different implementations. Going to system level is a generic framework that allows you to use any implementation you wish for the high level stuff. Even if you only use one particular high level implementation, the choice to use many different kinds of high level implementations at any point in the future is no problem, including custom ones (ie. your own interpreter or scripting language). This allows you to mold everything towards your solution instead of molding your solution to the existing implementations.
If you want to understand my point, you need to understand how patterns are built and why they are built. While most programmers understand how to use patterns, the same cannot be said for building them. From past articles posted online about the topic (not from me), 99% of programmers do not understand this concept. And most of them never will. I'm afraid that you, Weave Jester, are in this majority.
Anonymous # 29. September 2007, 21:52
It took me a while to penetrate your analogy, Vorlath, but after reading through your comment a few times, I think I understand what you're getting at.
In a nutshell, you seem to be arguing that we should tend toward generic tools, such as vector and set , and that implementing a framework in a language like C allows you to use it with a large number of higher level languages.
I guess that's a valid point, but I'd argue that's limiting yourself to the lowest common denominator.
For instance, if I created a framework in Scheme, I'd likely make use of macros quite a bit. This would produce a superior framework, but one that would not be directly compatible with a language that lacks the same capabilities. By specializing my framework for a particular language, I don't have to lower myself to a common level of compatibility.
Vorlath # 29. September 2007, 22:43
If you can understand why you would want to create patterns in a programming language, then you'd see why you would want to create a pattern for your framework one level deeper. However, keep in mind that I'm promoting using your own high level language along with the system level core. And not one already in existance because those VM's and interpreters retain control and don't allow this kind of flexibility. But there are plenty of tools that allow you to do exactly this.
BTW, your example is incorrect in its assumptions, but its wording actually supports my argument, but again I don't think you understand what I'm getting at. What I'm talking about isn't a huge revelation or anything. So don't expect any kind of "aha" moment. More like a "DUH" moment.
There's a bit more to this, but I'll try and keep it to this one point for now.
Anonymous # 29. September 2007, 23:53
But what about language constructs that exist in one language, but not another? If I were to design a certain framework in C, Lisp and Haskell, I'd come up with three different implementations with very little overlap between them. Certain languages rely heavily on constructs that have no direct analogy in others.
For instance, lets say I wanted to create a general framework for solving logic puzzles. If I were using Scheme, I'd probably implement McCarthy's amb operator via a macro and call/cc. If I were using Haskell, I'd implement amb using a monad comprehension. If I were using C, well, I've seen a fairly portable implementation of amb that screws around directly with the instruction stack to get the desired affect.
Anyway, my general point is that implementing the same functionality requires incompatible approaches. Scheme shows off its macros and continuations; Haskell it's advanced type system; C it's ability to screw around directly with the instruction stack. You couldn't use the C approach in Scheme, or the Haskell approach in C. You could try using a FFI to the C implementation, but I wouldn't like to think of the effects of unsafe stack alterations would have on the respective VMs and GCs, and it wouldn't slot into Haskell's type system very well, either.
So in this case, it's better to write the framework (well, library really) from scratch in each language. And this is just a handful of functions we're talking about. Scale it up to a proper, complex framework and you're likely to have even more problems with incompatible programming paradigms.
Vorlath # 30. September 2007, 00:19
Sorry, but I can't explain patterns to you. You cannot get closer to the destination than where you are at now and I cannot do anything to bring you over. That's up to you. You're fixated on this world view you've setup for yourself and are afraid to let it go. Maybe the fact that I'm talking about lower level code is what's throwing you off from thinking in higher level terms, I don't know.
Anonymous # 30. September 2007, 01:26
I'd ask how your definition of "pattern" differs from the norm, but you've already said you can't explain it. I think I need a Vorlath to English dictionary :P
It sounds as if you're talking about abstractness, but perhaps that's just my views tainting my interpretation of your various explanations. I view abstractness and generality as the key to better programming, and when I see the word "patterns", I think of subtle threads of commonality that run through programming solutions. The most general threads, the ones that run through the most solutions, tend to be the hardest to see and conceptualize, but also the most useful.
You probably don't share the same views or definitions, as you seem to steer away from any abstraction I care to mention, and seem to have little very interest in them. You seem to favour languages with relatively limited syntax and type systems, and are uninterested in languages that offer more exotic language constructs. But at the same time, what you talk about seems remarkably similar to advocating greater abstraction and generality. I confess this apparent inconsistency is rather baffling!
Vorlath # 30. September 2007, 03:48
I have little interest in abstractions because I see through them. You seem to soak it in as if it were the gospel truth. Your exotic language constructs are illusions. What you get from them may well be nice, but I steer away from them, not because I don't understand what they are capable of, but because I know their weaknesses. If I use them (and I do sometimes), I will paint myself in a corner in the long run. But I also know where they draw their power.
Functional programming is probably the worst at this. It draws 100% of its power from data flow, yet restricts itself by including imperative concepts (not monads BTW) and then their advocates call it beautiful and say it's the blub of anyone who dares to challenge these notions. Even as it has a quite obvious dichotomy with its "monadic" approaches, both are paradoxically called functional. Only someone pure of faith can ignore the sheer number of inconsistencies.
So it is not I that is shying away from languages with more expressive power. In fact, I'm the one trying to create a more powerful platform. I don't advocate abstractions, but I do support the idea of retaining access to all levels of computing.
Here's an interesting quote from Alan Kay:
Here, he's talking about porting and bragging about how easy it is. If you want more bragging, look up anything about Alan Kay and porting SmallTalk. He repeats this all the time. But why would he convert to C? My goodness, a system level language? And easy porting? Now, I don't care much about porting though it is a nice advantage. But any system level functionality can be added rather easily now. No way it can take longer than porting. Also note how this is different than simple external calls. This extra functionality would come from within. Seamless and specifically tailored to the task at hand. I think it's braindead obvious why it's done this way, but it seems to be a forgotten art in many circles. And forget that it's a VM. It could be any kind of application.
Anonymous # 30. September 2007, 12:55
I don't think we understand the same thing when we talk about "abstractions". An abstraction, in my view, is simply a way of factoring out common behaviour into a more general function. For instance, if a neophyte wanted to find the sum of five numbers, they might hard code it:
x = 1 + 2 + 3 + 4 + 5
A more experienced programmer would write a "sum" function:
sum [] = 0
sum (x:xs) = x + sum xs
A still more experienced programmer would factor out the common behaviour into a fold:
fold _ i [] = i
fold f i (x:xs) = f i (fold x xs)
sum = fold (+) 0
So when I talk about the need for abstraction, I am talking about the desire for a programmer to reduce the amount of repetition in their code. I can't conceive of any programmer who would argue that repeating themselves is good practise, so your objections about abstractions seem more like a misunderstanding to me.
On the subject of monads, you should be away that they are not a concept limited to functional languages. They can be implemented in any language, but tend to be unwieldy without a sophisticated type system and higher level functions. As such, most languages forgo this layer of abstraction.
However, a lot of things you likely use everyday have their roots in monads. Streams in C++ can be thought of as a specialised monad, as can list comprehensions and generator expressions in Python, or LINQ in C# 3.0.
A monad is essentially a way of applying general functionality to an object within a specialised container. It's such a general idea that it applies to a wide range of programming concepts. That's the reason languages like Haskell use monads so much - no because they have some fetish for category theory, but because once you introduce the concept of monads into a language, they tend to apply to a lot of distinct pieces of functionality.
Vorlath # 30. September 2007, 15:59
You can do that without abstractions. It seems you believe that wrappers are enough to provide an abstraction. Unfortunately, you do not want to admit that sometimes, programmers have to break into those abstractions and fix them to provide better functionality, or sometimes create something entirely new. For this to happen, one must understand what is going on even within these wrappers. This is very true of encapsulation. While it does provide a simpler way for its use, it's still the programmer that must maintain this "abstraction".
So no, I don't promote abstractions for the simple reason that it's a lie, but I do promote tools.
Monads are data flow concepts and have NOTHING to do with functions. In fact, this is the limiting factor in getting their full power. See, there are things I understand that you plainly don't want to. It's not that you can't, but you eat up this notion of abstraction wholesale. Do you not see that there's a difference between the normal way functional programming works and the way monads works. If there were no differences, then there'd be no reason for monads since they'd be the same. So explain to me exactly what causes this difference. (Please don't. It's rhetorical because I already know you can't, but you think you can). You seem to want to claim that there is no difference. We all know there is one and this makes it non-functional. Denial of this fact only proves that you wish to retain your imaginary world view.
You go on to say that monads can be applied in a wide variety of languages, yet know nothing of why this is. You believe it to be a "specialised container". Again with the wrapper concept. The "if I can wrap it, I can call it what I want" scenario. The reason monads are powerful is because it's data flow. Once you remove the imperative stranglehold from functional programming (bet you still don't believe that imperative is critical to the definiton of functional programming)... anyways, if you remove imperative, you end up with data flow. By some weird twist, monads can actually restrict itself so much that it reproduces the effects of imperative programming. Only in functional languages would you find such bastardisations of powerful concepts.
You say C++'s streams can be thought of a sort of monad. Again, these are very limited, but streams are exactly in the domain of data flow. I'm pointing this out because it comes from you and you can't deny that streams are a data flow concept. If you do deny this, then know you're living in your make believe world for one more day by your choice, and your choice alone. There is no outside reason other than you why you do not see reality.
BTW, don't you even get frustrated that everything I've told you adds up? That it all fits neatly together? That your explanations are always filled with holes, yet you are continuously try to fill them up? I'm just trying to figure out what would cause someone to keep up the fight for their virtual (aka fake) world for so long. It's admirable, but one that is untenable in the long run (or now for that matter). What is it that is so importatnt to you to retain this imaginary world view?
Anonymous # 30. September 2007, 17:00
Keep in mind that when I talk about abstractions, I'm talking about factoring out repetition in code, not necessarily anything to do with wrappers. All I'm interested in is not writing redundant code.
Regarding monads having nothing to do with functions; you're incorrect. I'll provide a brief example in Python to show you:
def unit(x):
return [x]
def bind(f, xs):
return [f(x) for x in xs]
Those two functions turn the standard list class in Python into a monad. That's it. No fancy functional mojo. No IO black magic. That pair of functions you see above are literally _all_ that's needed to turn the list class into a monad.
So no, your arguments don't add up. They're just nonsensical, or based around fundamental misunderstandings. Claiming that monads have nothing to do with functions is completely wrong, because all monads are implemented using a pair of functions. You might as well claim that objects have nothing to do with classes.
Regarding monads being data flow concepts; some monads certainly are, but some are not. Monads are just an general interface used to apply functions to data inside a container. Sometimes that container represents a flow of data, but other times it's just something mundane like a list, or a nullable type.
Vorlath # 30. September 2007, 18:48
I believe haskell even lets you omit ">>" and ">>=" where the "=" is used when you actually want to use data from the previous operation. In fact, you normally have to define a function to enable this transition of state from one function to the next, but notice that it's not actually there when you use it. No function?!!! WOW! This is what enables lazy lists, map, reduce, monads, arrows and all these other things. These are all dataflow concepts whether you choose to admit this to yourself or not. In Project V, I'm using imperative code to enable dataflow programming because the hardware is based on opcodes. There are no functions (I use a loop) though you can use anything you wish for the implementation.
Again, I've explained why things are the way they are, yet you refuse to see the truth. As long as you persist this personal world view, you are basically putting me up on a pedastal as one who must teach you things before we can even discuss the topic at hand. While I like to introduce topics, it's not my objective in any way to teach people. I do like to help if people are genuinely interested, but you seem more interested in trying to convince me otherwise, or are trying to retain what little shred of your world view is still intact. Please understand that I know more than you on this topic. If you are trying to convince me otherwise, please don't. Not even other functional programmers agree with you.
Anonymous # 30. September 2007, 20:53
Stop digging. You're clearly unfamiliar with the concept of monads, and it is unwise to make sweeping, confident claims about concepts you do not understand. I realise that it's not in your nature to ever admit ignorance about anything, but perhaps you could instead change the subject, or end the discussion, rather than continuing to embarrass yourself with false claims.
A monad is a specific pair of functions associated with a type. Asserting that they have nothing to do with functions is wrong. Continuing to assert this when it's been pointed out to you to be wrong is foolish.
However, I will admit that the IO monad is somewhat the exception. The IO monad is a contrivance to shield the functional environment from the imperative one that exists outside. It's black magic, a necessary evil, and not a particularly good example of a monad. Ignore the IO monad; all the other monads are far more interesting, and few have anything at all to do with data flow.
That said, I retain little hope that you'll look into anything beyond the horizons you've devised for yourself, monads or otherwise. It may be that your own ideas about Project V are revolutionary; but your lack of interest and knowledge about existing programming concepts does not bode well for this. Truly revolutionary ideas are rarely thought up in a vacuum, and by dismissing any idea that isn't your own, or convincing yourself you know all about it already when you actually do not, it's unlikely you'll advance particularly far.
That said, I may be wrong. Project V seems to be continuing, and dataflow programming does not seem an inherently bad idea. I'll check in occasionally and resist the urge to make comments... but I'm not going to hold my breath.
vladas # 30. September 2007, 21:45
From my Algorithm of creative thinking .
Vorlath # 30. September 2007, 21:55
I'll grant you that many people do not accept what I say. This is one of the reasons why this blog exists, so that I can publish any discoveries I have found. And I dislike all programming languages equally. I have nothing to defend, so this puts you in unfamiliar grounds, yet you proceed with arguments that assume I do have something to defend. Every time you come here is because you wish to defend your world view. It's never about discussing the matter at hand. I can say this because you use lawyer-like techniques of using items that seem related, but aren't. I present arguments and instead of disproving what I say, you ignore them or just say otherwise without explaining why. I always explain why I think something is true. Even if I did believe that you knew a little of what you're talking about, you never follow up directly on the topic and instead bring up something else that seems to fit your world view better. When I poke that full of holes, you start over with something else. It's a never ending cycle. Forget any other argument we may have, this alone is reason enough to not believe anything you say.
I could be 100% full of shit, but you don't give yourself the chance to prove me wrong. | http://my.opera.com/Vorlath/blog/2007/09/25/code-rewrite-yes | crawl-002 | refinedweb | 9,965 | 73.27 |
MongoDB Interview Questions
- 1) What is use of capped collection in MongoDB?
- 2) What are Primary and Secondary Replica sets?
- 3) What is splitting in mongodb?
- 4) What do you know about MongoDB?
- 5) Explain what is MongoDB?
- 6) List some important features of MongoDB?
- 7) What is namespace in MongoDB?
- 8) What is BSON in MongoDB?
- 9) What type of DBMS is MongoDB?
- 10) What is the document structure of MongoDB?
- 11) What is replica set in MongoDB?
- 12) What is profiler in MongoDB?
- 13) Write the syntax for creating and droping a collection in MongoDB?
- 14) What is the size limit of a document?
- 15) What is _id Field in MongoDB?
- 16) Explain what is ObjectId in MongoDB?
- 17) Write syntax to create or select a database in MongoDB?
- 18) What is a collection in MongoDB?
- 19) What is use of insertOne and insertMany in MongoDB?
- 20) What is sharding in MongoDB?
- 21) What is writeConcern in MongoDB?
- 22) What is use of upsert in MongoDB?
- 23) Explain what is Mongoose?
- 24) List some alternatives of MongoDB?
- 25) What is 32-bit nuances?
Below are the list of Best MongoDB Interview Questions and Answers
MongoDB is a cross-platform document-oriented database program that is open source and free in nature. It can also be classified as.
The.
When the mongoose is created at the first time the version key is a property set on every document. The value of this key comprises of the internal revision of the document. It is understood that the name of this document is configurable. The default key is __v | https://www.onlineinterviewquestions.com/mongodb-interview-questions/ | CC-MAIN-2020-40 | refinedweb | 268 | 71.31 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.