text
stringlengths 454
608k
| url
stringlengths 17
896
| dump
stringclasses 91
values | source
stringclasses 1
value | word_count
int64 101
114k
| flesch_reading_ease
float64 50
104
|
|---|---|---|---|---|---|
String::erase() function in C++
In this tutorial, we discuss the erase() function in C++, its different forms of syntax, and how it can help you solve string problems quickly in competitive programming.
String::erase() function in C++
The erase() function erases part of a string and shortens its length. It returns a reference to the string (*this).
- To erase all characters in a given string
Syntax :- string_name.erase();
string.erase()
string s; cin>>s; s.erase(); // This will erase all the contents in the given string.
- To erase all characters after a certain position
Syntax :- string_name.erase(index);
All characters from the given index to the end of the string will be deleted.
string.erase(index)
string s; cin>>s; s.erase(2); // It will delete all characters from index 2 to the end
- Deleting a certain number of characters starting at a certain index
Syntax :- string_name.erase(index, value);
It will delete value characters starting at the index given in the parameter.
string.erase(index, value)
string s; cin>>s; s.erase(1,3); // It will delete 3 characters starting at index 1
- Deleting the character at a certain position, given as an iterator. The iterator must point at a character of the string; string.end() is the hypothetical position just past the last character.
Syntax :- string_name.erase(iterator);
string.erase(iterator)
string s; cin>>s; s.erase(s.begin()+value); // deletes the single character at index value
- Deleting characters in a certain range of a string.
Syntax :- string_name.erase(first_iterator, last_iterator);
string.erase(first, last)
string s; cin>>s; s.erase(s.begin()+1, s.end()-3); // deletes the characters in the range [begin()+1, end()-3)
C++ Code implementation of erase() function in different forms
#include <iostream>
using namespace std;
int main() {
    string s = "Codespeedy";
    s.erase();
    cout << "Your string is deleted: " << s << endl;
    s = "Codespeedy";
    s.erase(1);
    cout << "Your string is deleted after character 1: " << s << endl;
    s = "Codespeedy";
    s.erase(s.begin()+3);
    cout << " 3rd index characters from beginning is deleted: " << s << endl;
    s = "Codespeedy";
    s.erase(s.begin()+2, s.end()-2);
    cout << "Characters between 2nd index from starting and 2nd position is deleted: " << s << endl;
}
OUTPUT
Your string is deleted: 
Your string is deleted after character 1: C
 3rd index characters from beginning is deleted: Codspeedy
Characters between 2nd index from starting and 2nd position is deleted: Cody
Thanks For Reading !!
If you have any doubt or suggestion please, comment in the comment section below. Keep Reading and Stay Tuned.
Also read:
|
https://www.codespeedy.com/string-erase-function-in-cpp/
|
CC-MAIN-2020-45
|
refinedweb
| 403
| 57.67
|
Entering a Flaky Context Manager in Python
2020-02-07
Here’s a little Python problem I encountered recently.
I want to open and read a file cautiously.
If open() raises an OSError, for whatever reason, I want to do nothing.
Otherwise, I want to print out all the lines in the file.
If I open the file, I want to always close() it - even if the printing of the lines raises an error.
But if the printing does raise an error, even an OSError, that should be raised.
What’s the most idiomatic way to do this?
If you have experienced similar problems before, you were likely thinking of using try around with open() like so:
try:
    with open("file.txt") as fp:
        for line in fp:
            print(line)
except OSError:
    pass
This works great, apart from the last point: “if the printing does raise an error, even an OSError, that should be raised”.
Right now such an OSError would be caught by the except OSError.
Following my post on limiting try clauses, we want to limit the try to only be around open().
Effectively, we want something like this non-existent, poorly indented syntax:
try:
    with open("foobar") as fp:
except OSError:
    pass
else:
    for line in fp:
        print(line)
It would be neat if it worked, but it doesn’t. So how can we wrap a try around just our open context manager?
Enter the standard library’s contextlib.ExitStack.
It’s a meta context manager, allowing you to enter zero or more context managers, and it will close them later for you.
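To see what ExitStack does on its own, here's a minimal sketch (the tracked context manager is illustrative, not from the post):

```python
from contextlib import ExitStack, contextmanager

events = []

@contextmanager
def tracked(name):
    # Record enter/exit so we can observe ExitStack's behaviour.
    events.append(f"enter {name}")
    yield name
    events.append(f"exit {name}")

with ExitStack() as stack:
    stack.enter_context(tracked("a"))
    stack.enter_context(tracked("b"))
    events.append("body")

# The stack unwinds in reverse order, exactly like nested with blocks:
# events == ["enter a", "enter b", "body", "exit b", "exit a"]
```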
To solve our problem, we want to use with ExitStack() combined with enter_context on our open() call.
We then use a standard try/except/else to catch OSError on the open() but nowhere else.
The full solution looks like:
from contextlib import ExitStack

with ExitStack() as stack:
    try:
        fp = stack.enter_context(open("file.txt", "r"))
    except OSError:
        pass
    else:
        for line in fp:
            print(line)
Fin
Thanks to Tom Grainger for reminding me to use ExitStack() here.
I hope this helps you use context managers betterer,
Adam
Related posts:
- Tuples versus Lists in Python
- Limit Your Try Clauses in Python
- Simplify Your If Statements That Return Booleans
Tags: python
|
https://adamj.eu/tech/2020/02/07/entering-a-flaky-context-manager-in-python/
|
CC-MAIN-2021-49
|
refinedweb
| 394
| 70.43
|
Unanswered: How to format numbers?
Hi
So nobody knows how to format numbers?
So far Sencha has implemented formatting numbers only in Ext JS.
So that leaves you with 2 choices:
- you can copy over the necessary code from Ext JS (license...?)
- you find another solution such as this one
Yes, well, in the end I decided to backport the Ext JS code. I have licenses for both components; in any case the Ext JS code is free and you can check the source in the docs. So here is the function, for anyone who might need it:
Consider that this is inside an object called myObject; you will have to change it to your own thing if you want to use this, as well as the default thousandSeparator, decimalSeparator, and numberFormat. This is just an example.
Code:
myObject: {
    thousandSeparator: '.',
    decimalSeparator: ',',
    numberFormat: '0.000,00/i',
    number: function(v, formatString) {
        if (!formatString) {
            return v;
        }
        v = Ext.Number.from(v, NaN);
        if (isNaN(v)) {
            return '';
        }
        var comma = myObject.thousandSeparator,
            dec = myObject.decimalSeparator,
            i18n = false,
            neg = v < 0,
            hasComma, psplit, fnum, cnum, parr, j, m, n, i;
        v = Math.abs(v);
        // The "/i" suffix allows caller to use a locale-specific formatting string.
        // Clean the format string by removing all but numerals and the decimal separator.
        // Then split the format string into pre and post decimal segments according to *what* the
        // decimal separator is. If they are specifying "/i", they are using the local convention in the format string.
        if (formatString.substr(formatString.length - 2) == '/i') {
            var myI18NFormatCleanRe = I18NFormatCleanRe;
            if (!myI18NFormatCleanRe) {
                myI18NFormatCleanRe = new RegExp('[^\\d\\' + myObject.decimalSeparator + ']', 'g');
            }
            formatString = formatString.substr(0, formatString.length - 2);
            i18n = true;
            hasComma = formatString.indexOf(comma) != -1;
            psplit = formatString.replace(myI18NFormatCleanRe, '').split(dec);
        } else {
            hasComma = formatString.indexOf(',') != -1;
            psplit = formatString.replace(formatCleanRe, '').split('.');
        }
        if (psplit.length > 2) {
            //<debug>
            Ext.Error.raise({
                sourceClass: "Ext.util.Format",
                sourceMethod: "number",
                value: v,
                formatString: formatString,
                msg: "Invalid number format, should have no more than 1 decimal"
            });
            //</debug>
        } else if (psplit.length > 1) {
            v = Ext.Number.toFixed(v, psplit[1].length);
        } else {
            v = Ext.Number.toFixed(v, 0);
        }
        fnum = v.toString();
        psplit = fnum.split('.');
        if (hasComma) {
            cnum = psplit[0];
            parr = [];
            j = cnum.length;
            m = Math.floor(j / 3);
            n = cnum.length % 3 || 3;
            for (i = 0; i < j; i += n) {
                if (i !== 0) {
                    n = 3;
                }
                parr[parr.length] = cnum.substr(i, n);
                m -= 1;
            }
            fnum = parr.join(comma);
            if (psplit[1]) {
                fnum += dec + psplit[1];
            }
        } else {
            if (psplit[1]) {
                fnum = psplit[0] + dec + psplit[1];
            }
        }
        if (neg) {
            /*
             * Edge case. If we have a very small negative number it will get rounded to 0,
             * however the initial check at the top will still report as negative. Replace
             * everything but 1-9 and check if the string is empty to determine a 0 value.
             */
            neg = fnum.replace(/[^1-9]/g, '') !== '';
        }
        return (neg ? '-' : '') + formatString.replace(/[\d,?\.?]+/, fnum);
    }
}
Enjoy it!
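If you only need the thousands-grouping step, it can be sketched standalone without any Ext dependencies (the function name and separator below are just for illustration):

```javascript
// Split the integer part into groups of three, left to right,
// mirroring the grouping loop in the backported function above.
function groupThousands(intStr, sep) {
    var parts = [];
    var n = intStr.length % 3 || 3; // the first group may be shorter
    for (var i = 0; i < intStr.length; i += n) {
        if (i !== 0) {
            n = 3;
        }
        parts.push(intStr.substr(i, n));
    }
    return parts.join(sep);
}

// groupThousands("1234567", ".") → "1.234.567"
```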
BTW, for Sencha guys, is there a good reason not to have included this function in the official library?
There are a few posts related to "missing format functions". Replies I've seen from Sencha have been <roll_eyes>'mobile doesnt need it'.</roll_eyes>
Raise a feature request for Touch asking for Sencha to do what you have done i.e. port the ExtJS code
I have raised a few of these and they have been approved for implementation.
Hey BostonMerlin, I read some topics about it, but without a lot of useful information. Well, it seems that more than two or three of us think this is necessary...
Suzuki, how can I create a change request, in the Discussion forum or something like that?
Cheers!
Touch feature request forum is in the premium forums - I take it you don't have access?
If you don't I'll post the request and reference this thread
I have a premium account, I just never took the time to link it to my forum user. I will do that and request the change.
Thanks for the info and the offering!
|
http://www.sencha.com/forum/showthread.php?249025-How-to-format-numbers&p=912946
|
CC-MAIN-2014-10
|
refinedweb
| 678
| 60.11
|
What are you trying to achieve?:
I have four annuli (hollow circles) on the screen, each in their respective quadrants (no actual sectioning). I want to record the amount of time the participant spends in the correct annuli, then at the end of my experiment have a screen showing the participant’s average time spent in the annuli (might do it as a percent/ratio; doesn’t matter). I’m having troubles figuring out how to get the time.
What did you try to make it work?:
Begin routine:
from psychopy import core
core.clock.reset()
Each frame:
if mouse in [CorrectQuad]:
    RT = core.clock.getTime()
    thisExp.addData('RT in ms', RT*1000)
What specifically went wrong when you tried that?:
AttributeError: module ‘psychopy.core’ has no attribute ‘clock’
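The error comes from `core.clock` not existing as a module attribute; PsychoPy expects you to create your own timer instance with `core.Clock()` and call `reset()`/`getTime()` on that instance. The dwell-time bookkeeping itself is plain Python, sketched here without PsychoPy (all names are illustrative):

```python
import time

class Clock:
    """Minimal stand-in for a PsychoPy-style clock instance."""
    def __init__(self):
        self.reset()

    def reset(self):
        self._t0 = time.monotonic()

    def getTime(self):
        return time.monotonic() - self._t0

# Each frame, add the frame duration to a running total while the
# cursor is inside the correct annulus.
frame_dt = 1 / 60.0                      # assumed frame duration
hits = [True, True, False, True, False]  # per-frame "mouse in annulus" results
dwell = sum(frame_dt for hit in hits if hit)
```

At the end of the experiment, the average is just the accumulated `dwell` divided by the number of trials.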
|
https://discourse.psychopy.org/t/recording-time-of-cursor-in-stimulus-restarting-a-trial-if-afk/14168
|
CC-MAIN-2020-34
|
refinedweb
| 129
| 68.06
|
Le 7 sept. 05, à 21:49, Pier Fumagalli a écrit :
> ...Any other languages that _seriously_ deserve some attention before
> marking the block as "stable" ?
>
> I'm thinking about DTDs using the same Xerces internals, WDYT?
Yes, DTDs are still in widespread use, so it would be cool to have a validating transformer for that.
Just curious, do your validators "just" stop processing when they find
an error, or can they let the XML flow go through with validation
errors added as annotations (inline namespaced XML elements for
example)?
My use-case is for example a content management system, where it would
be cool to be able to accept a document for storage, while noting that
it is not valid.
-Bertrand
|
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200509.mbox/%3Cd4b3bf2aa2ffb8bf5d103742539459ca@apache.org%3E
|
CC-MAIN-2014-52
|
refinedweb
| 121
| 62.78
|
Code formatting, indenting, and commenting
Set, fold, and unfold code blocks
Apply syntax coloring preferences
Indent code blocks
Add comments and comment blocks
Navigate and inspect code
Open code definitions
Use the Outline view
Use Quick Outline view in the editor
Browse and view classes
Show line numbers
Use markers
Navigate markers
Add tasks
Complete and delete tasks
Add and delete bookmarks
Organize import statements
The Flash Builder editors
provide many shortcuts for navigating your code, including folding
and unfolding code blocks, opening the source of code definitions,
and browsing and opening types. Code navigation includes the ability
to select a code element (a reference to a custom component in an
MXML application file, for example) and go to the source of the
code definition, wherever it is located in the project, workspace,
or path.
Multiple line code blocks can be collapsed and expanded to help
you navigate, view, and manage complex code documents. In Flash
Builder, expanding and collapsing multiple-line code statements
is referred to as code folding and unfolding.
As you write code, Flash Builder automatically indents
lines of code to improve readability, adds distinguishing color
to code elements, and provides many commands for quickly formatting
your code as you enter it (adding a block comment, for example).
To change the default formatting, in the Preferences dialog box,
select Flash Builder > MXML Code > Formatting.
You can change the order and grouping of the attributes.
When you paste MXML or ActionScript code into the code editor,
Flash Builder automatically indents the code according to your preferences.
You can also specify indenting for a selected block of code.
To specify the indentation preferences, in the Preferences dialog,
select Flash Builder > Editors. You can specify the
indent type and size.
In the editor, click the fold symbol (-) or the unfold symbol (+) in the editor’s left margin.
Folding a code block hides all but the first line of code. Unfold the code block to make it visible again. Hold the mouse over the unfold (+) symbol to show the entire code block in a tool tip.
By default, code folding is turned on in Flash Builder. To
turn off code folding, open the Preferences dialog and select Flash
Builder > Editors, and then deselect the Enable Code
Folding option.
You can easily adjust syntax coloring preferences.
Open the Preferences dialog and select Flash Builder
> Editors > Syntax Coloring.
For more information, see Syntax coloring.
Default font colors can also be configured on the Text Editors
and Colors and Fonts Preferences pages (see Preferences > General
> Appearance > Colors and Fonts. See also Preferences >
General > Editors > Text Editors).
The
editor automatically formats the lines of your code as you enter
it, improving readability and streamlining code writing. You can
also use the Tab key to manually indent individual lines of code.
When you copy and paste code blocks into Flash Builder, Flash
Builder automatically indents the code according to your indentation
preferences.
If you want to indent a block of code in a single action, you
can use the Shift Right and Shift Left editor commands.
In the editor, select a block of code.
Select Source > Shift Right or Source >
Shift Left.
Press Tab or Shift Tab to indent or unindent blocks of code.
Open the
Preferences dialog and select Flash Builder > Indentation.
Select the indent type (Tabs or Spaces) and specify the IndentSize
and Tab Size.
You
can add or remove comments using options in the Source menu or by
using keyboard shortcuts. You can add the following types of comments:
Source comment for ActionScript (//)
Block comment for ActionScript (/* */)
ASDoc comments for ActionScript (/** */)
Block comment for MXML (<!---->)
CDATA block for MXML (<![CDATA[ ]]>)
In the editor, select one or more lines of ActionScript code.
Press Control+Shift+C (Windows) or Command+Shift+C (Mac OS)
to add, or remove, C-style comments.
Press Control+/ (Windows) or Command+/ (Mac OS)
to add, or remove, C++ style comments.
In the editor, select one or more lines of MXML code.
Control+Shift+C (Windows) or Command+Shift+C (Mac OS) to add a block comment.
Control+Shift+D (Windows) or Command+Shift+D (Mac OS) to add a CDATA block.
With
applications of any complexity, your projects typically contain
many resources and many lines of code. Flash Builder provides several
features that help you navigate and inspect the various elements
of your code.
Flash Builder lets you select a code reference and open the source of its definition.
Select the code reference in the editor.
From the Navigate menu, select Go To Definition.
You
can use the keyboard shortcut, F3.
The source file that contains
the code definition opens in the editor.
Flash
Builder also supports hyperlink code navigation.
Locate the code reference in the editor.
Press and hold the Control key (Windows) or Command key (Mac OS)
and hold the mouse over the code reference to display the hyperlink.
To navigate to the code reference, click the hyperlink.
The Outline view is part of the Flash Development perspective (see The Flash Development perspective) and is therefore available when you edit code.
In Class mode, the Outline view toolbar contains the sort and filter commands. In MXML mode, the Outline view shows the document's tag structure, including significant component properties (for example, layout constraints) and compiler tags such as Model, Array, and Script.
The
Outline view in MXML mode does not show comments, CSS rules and properties,
and component properties expressed as attributes (as opposed to child
tags, which are shown).
When the Outline view
is in MXML mode, the toolbar menu contains additional commands to
switch between the MXML and class views.
To switch between the two views, from the toolbar menu, select Show MXML View or Show Class View.
With
an ActionScript or MXML document open in the editor, from the Navigate menu,
select Quick Outline.
You can also use the keyboard shortcut,
Control+O.
Navigating
outside the Quick Outline view closes the view. You can also press ESC
to close the Quick Outline view.
The Open Type dialog is available for browsing all available
classes (including Flex framework classes) in your project. Select
a class in the Open Type dialog to view the implementation.
The Open Type dialog is also available for selecting classes
as the base class for a new ActionScript class or a new MXML component.
The Open Type dialog lets you filter the classes that are displayed
according to the text and wild cards that you specify. The dialog
uses color coding to indicate recommended types and excluded types.
Recommended types display as gray. Excluded types appear brown.
Recommended types are those classes available in the default
namespace for a project. For example, in some contexts only Spark
components are allowed. In other contexts, both Spark and Halo components
are allowed.
Excluded types are those classes that are not available
in the default namespace for a project.
(All classes) To browse classes and view their implementation:
From the Flash Builder menu, select Navigate >
Open Type.
(Optional) Type text or select filters to modify the classes
visible in the list.
Select a class to view the source code.
You cannot
modify the source code for classes in the Flex framework.
(New ActionScript classes) When selecting a base class for
a new ActionScript class:
Select File >
New > ActionScript class.
For the Superclass field, click Browse.
Select a base class from the list.
(New MXML components) When selecting a base component for
a new MXML component:
Select File > New >
MXML Component.
From the list of projects in your workspace, select a project
for new MXML component and specify a filename.
The available
base components vary, depending on the namespaces configured for
a project.
For the Based On field, click Browse.
Select a base component from the list.
You
can add line numbers in the editor to easily read and navigate your
code.
From the context menu in the editor margin, select Show
Line Numbers.
The editor margin is between the marker bar
and the editor.
Markers are shortcuts to lines of code in a document, to a document itself, or to a folder. Markers represent tasks, bookmarks, and problems, and they are displayed and managed in their associated views. Selecting a marker opens the associated document in the editor and, optionally, highlights the specific line of code.
With
Flash Builder, you must save a file to update problem markers. Only
files that are referenced by your application are checked. The syntax
in an isolated class that is not used anywhere in your code is not
checked.
The workbench generates the following task and problem markers
automatically. You can manually add tasks and bookmarks.
Markers are
descriptions of and links to items in project resources. Markers
are generated automatically by the compiler to indicate problems
in your code, or added manually to help you keep track of tasks
or snippets of code. Markers are displayed and managed in their
associated views. You can easily locate markers in your project
from the Bookmarks, Problems, and Tasks views, and navigate to the
location where the marker was set.
Select a marker in the Bookmarks, Problems, or Tasks views.
The file
that contains the marker is located and opened in the editor. If
a marker is set on a line of code, that line is highlighted.
Tasks represent
automatically or manually generated workspace items. All tasks are
displayed and managed in the Tasks view (Window > Other
Views > General > Tasks), as the following
example shows:
Open a file in the editor, and then locate and select the
line of code where you want to add a task; or in the Flex Package
Explorer, select a resource.
In the Tasks view, click the Add Task button in the toolbar.
Enter the task name, and select a priority (High, Normal,
Low), and click OK.
When
a task is complete, you can mark it and then optionally delete it
from the Tasks view.
In the
Tasks view, select the task in the selection column, as the following example
shows:
In the Tasks view,
open the context menu for a task and select Delete.
In
the Tasks view, open the context menu and select Delete Completed
Tasks.
You
can use bookmarks to track and easily navigate to items in your
projects. All bookmarks are displayed and managed in the Bookmarks
view (Window > Other Views > General >
Bookmarks), as the following example shows:
Open a file in the editor, and then locate and select the
line of code to add a bookmark to.
From the main menu, select Edit > Add Bookmark.
Enter the bookmark name, and click OK.
A bookmark icon is added next to the line of code.
In the Bookmarks
view, select the bookmark to delete.
Right-click (Windows) or Control-click (Mac OS)
the bookmark and select Delete.
You can add, sort, and remove unused import statements
in ActionScript and MXML script blocks using the Organize Imports
feature.
To use the Organize Imports feature, with an ActionScript or
MXML document that contains import statements open in the editor,
do one of the following:
Select Organize Imports from the Source menu.
Use the keyboard shortcut: Control+Shift+O (Windows) or Command+Shift+O
(Mac OS)
Place the cursor on any import statement, and press Control+1.
Then, select the option to organize imports.
If you have
undefined variables within your ActionScript script or MXML script block,
you can add all missing import statements all at once by using the Organize
Imports feature.
When you use
the Organize Imports feature in the MXML and ActionScript editors,
the packages in which the classes are located are automatically
imported into the document.
If you declare an ActionScript
type that is present in two packages, you can choose the import
statement for the required package. For example, Button is present
in both spark.components and mx.controls packages.
If
you have more than one instance of ambiguous import statements,
all the unresolved imports are displayed letting you resolve them
one after the other.
Flash
Builder places all import statements at the head of an ActionScript document
or at the top of the script block of an MXML document by default.
To
remove import statements that are not referenced in your document,
keep the document that contains import statements open in the editor,
and use the Organize Imports feature.
You can quickly
sort all the import statements in your ActionScript or MXML script
block using the Organize Imports feature.
Flash Builder sorts
the import statements alphabetically by default. To change the default
order in which you want Flash Builder to add the import statements, use
the Preferences dialog. To do so, open the Preferences dialog, and
select Flash Builder> Editors > ActionScript Code > Organize
Imports. For more information, see ActionScript code.
When you copy ActionScript code from
an ActionScript document and paste into another ActionScript document,
any missing import statements are automatically added. The missing
import statements are added at either the package level or file
level depending on where you paste the code.
If you have multiple MXML script blocks in a document
with multiple import statements defined for each script block, Flash
Builder lets you consolidate all the import statements.
The
Consolidate Import Statements Across MXML Script Blocks option in
the Organize Imports preferences dialog box is selected by default.
The import statements are consolidated, sorted, and added only once
at the top of the first script block.
If you have multiple import statements
from the same package, you can use wildcards in your import statement
instead of repeating the statements multiple times.
import flash.events.*;
By
default, Flash Builder applies the <package>.* wildcard
if you use more than 99 import statements from the same package.
You can change the default value using the Organize Imports Preferences
dialog. For more information, see ActionScript code.
|
https://help.adobe.com/en_US/flashbuilder/using/WSe4e4b720da9dedb5-25a895a612e8e9b8c8e-8000.html
|
CC-MAIN-2018-09
|
refinedweb
| 2,288
| 64
|
Today, my young nephew came to me with the question "What is that thing called 'pointers' in the C programming language?" from his first programming course at school.
The time has come to see if he was born with that part of the brain that understands pointers. After some minutes of explanation, I presented to him the following 4 programs asking him "What is displayed in the screen for each program?"
He answered all of them correctly, and I concluded that such a part of the brain is indeed present.
Program 0:
#include <stdio.h>
void main() {
    char n;
    n = 65;
    printf("%d\n", n);
    char* p;
    p = &n;
    printf("%d\n", *p);
    char* g = p;
    printf("%d\n", *g);
    *g = 14;
    printf("%d\n", n);
    printf("%d\n", *p);
    printf("%d\n", *g);
}
Program 1:
#include <stdio.h>
void f(int x) {
    x = x * 2;
}
void g(int* x) {
    *x = (*x) * 2;
}
void main() {
    int n = 30;
    f(n);
    printf("n=%d\n", n);
    g(&n);
    printf("n=%d\n", n);
}
Program 2:
#include <stdio.h>
float* f(float* x, float* y) {
    *x = (*x) * (*y);
    return y;
}
void main() {
    float n = 4;
    float m = 2;
    float* r = f(&n, &m);
    printf("result=%f \n", n);
    printf("result=%f \n", *r);
}
Program 3:
#include <stdio.h>
char f(char* x, char y) {
    char result = ((*x) + y) / 2;
    return result;
}
char* g(char* s, char* t) {
    *s = *t;
    return s;
}
void main() {
    char a = 65;
    char b = 90;
    char c = f(&a, b);
    printf("%c \n", c);
    void* n = (void*)g(&a, &b);
    printf("%x \n", n);
}
I’ve always found it strange that pointers simply make sense for some people, and others struggle with them no matter how long they try. Wonderful that your nephew is one of those who "gets it" 🙂
If one variable contains the address of another variable, the first variable is said to point to the second.
|
https://blogs.msdn.microsoft.com/marcod/2006/09/16/what-are-pointers/
|
CC-MAIN-2017-39
|
refinedweb
| 319
| 80.85
|
Introduction to C++/WinRT
C++/WinRT is Microsoft's recommended replacement for the C++/CX language projection and the Windows Runtime C++ Template Library (WRL). The full list of topics about C++/WinRT includes info about both interoperating with, and porting from, C++/CX and WRL.
Important
Some of the most important pieces of C++/WinRT to be aware of are described in the sections SDK support for C++/WinRT and Visual Studio support for C++/WinRT, XAML, the VSIX extension, and the NuGet package.
Language projections
The Windows Runtime is based on Component Object Model (COM) APIs, and it's designed to be accessed through language projections. A projection hides the COM details, and provides a more natural programming experience for a given language.
The C++/WinRT language projection in the Windows UWP API reference content
When you're browsing Windows UWP APIs, click the Language combo box in the upper right, and select C++/WinRT to view API syntax blocks as they appear in the C++/WinRT language projection.
Visual Studio support for C++/WinRT, XAML, the VSIX extension, and the NuGet package
For Visual Studio support, you'll need Visual Studio 2019 or Visual Studio 2017 (at least version 15.6; we recommend at least 15.7). From within the Visual Studio Installer, install the Universal Windows Platform Development workload. In Installation Details > Universal Windows Platform development, check the C++ (v14x) Universal Windows Platform tools option(s), if you haven't already done so. And, in Windows Settings > Update & Security > For developers, choose the Developer mode option rather than the Sideload apps option.
While we recommend that you develop with the latest versions of Visual Studio and the Windows SDK, if you're using a version of C++/WinRT that shipped with the Windows SDK prior to 10.0.17763.0 (Windows 10, version 1809), then, to use the Windows namespace headers mentioned above, you'll need a minimum Windows SDK target version in your project of 10.0.17134.0 (Windows 10, version 1803).
You'll want to download and install the latest version of the C++/WinRT Visual Studio Extension (VSIX) from the Visual Studio Marketplace.
- The VSIX extension gives you C++/WinRT project and item templates in Visual Studio, so that you can get started with C++/WinRT development.
- In addition, it gives you Visual Studio native debug visualization (natvis) of C++/WinRT projected types, providing an experience similar to C# debugging. Natvis is automatic for debug builds; you can opt into it for release builds by defining the symbol WINRT_NATVIS.
The Visual Studio project templates for C++/WinRT are described in the sections below. When you create a new C++/WinRT project with the latest version of the VSIX extension installed, the new C++/WinRT project automatically installs the Microsoft.Windows.CppWinRT NuGet package. The Microsoft.Windows.CppWinRT NuGet package provides C++/WinRT build support (MSBuild properties and targets), making your project portable between a development machine and a build agent (on which only the NuGet package, and not the VSIX extension, is installed).
Alternatively, you can convert an existing project by manually installing the Microsoft.Windows.CppWinRT NuGet package. After installing (or updating to) the latest version of the VSIX extension, open the existing project in Visual Studio, click Project > Manage NuGet Packages... > Browse, type or paste Microsoft.Windows.CppWinRT in the search box, select the item in search results, and then click Install to install the package for that project. Once you've added the package, you'll get C++/WinRT MSBuild support for the project, including invoking the cppwinrt.exe tool.
Important
If you have projects that were created with (or upgraded to work with) a version of the VSIX extension earlier than 1.0.190128.4, then see Earlier versions of the VSIX extension. That section contains important info about the configuration of your projects, which you'll need to know to upgrade them to use the latest version of the VSIX extension.
- Because C++/WinRT uses features from the C++17 standard, the NuGet package sets project property C/C++ > Language > C++ Language Standard > ISO C++17 Standard (/std:c++17) in Visual Studio.
- It also adds the /bigobj compiler option.
- It adds the /await compiler option in order to enable co_await.
- It instructs the XAML compiler to emit C++/WinRT codegen.
- You might also want to set Conformance mode: Yes (/permissive-), which further constrains your code to be standards-compliant.
- Another project property to be aware of is C/C++ > General > Treat Warnings As Errors. Set this to Yes (/WX) or No (/WX-) to taste. Sometimes, source files generated by the cppwinrt.exe tool generate warnings until you add your implementation to them.
With your system set up as described above, you'll be able to create and build, or open, a C++/WinRT project in Visual Studio, and deploy it.
As of version 2.0, the Microsoft.Windows.CppWinRT NuGet package includes the
cppwinrt.exe tool. You can point the
cppwinrt.exe tool at a Windows Runtime metadata (
.winmd) file to generate a header-file-based standard C++ library that projects the APIs described in the metadata for consumption from C++/WinRT code. Windows Runtime metadata (
.winmd) files provide a canonical way of describing a Windows Runtime API surface. By pointing
cppwinrt.exe at metadata, you can generate a library for use with any runtime class implemented in a second- or third-party Windows Runtime component, or implemented in your own application. For more info, see Consume APIs with C++/WinRT.
With C++/WinRT, you can also implement your own runtime classes using standard C++, without resorting to COM-style programming. For a runtime class, you just describe your types in an IDL file, and
midl.exe and
cppwinrt.exe generate your implementation boilerplate source code files for you. You can alternatively just implement interfaces by deriving from a C++/WinRT base class. For more info, see Author APIs with C++/WinRT.
For a list of customization options for the
cppwinrt.exe tool, set via project properties, see the Microsoft.Windows.CppWinRT NuGet package readme.
You can identify a project that uses the C++/WinRT MSBuild support by the presence of the Microsoft.Windows.CppWinRT NuGet package installed within the project.
Here are the Visual Studio project templates provided by the VSIX extension.
Blank App (C++/WinRT)
A project template for a Universal Windows Platform (UWP) app that has a XAML user-interface.
Visual Studio provides XAML compiler support to generate implementation and header stubs from the Interface Definition Language (IDL) (
.idl) file that sits behind each XAML markup file. In an IDL file, define any local runtime classes that you want to reference in your app's XAML pages, and then build the project once to generate implementation templates in
Generated Files, and stub type definitions in
Generated Files\sources. Then use those stub type definitions for reference to implement your local runtime classes. See Factoring runtime classes into Midl files (.idl).
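As a sketch of that workflow, a minimal IDL file for a XAML page might look like the following. The namespace and property names are illustrative, not generated output.

```idl
// MainPage.idl -- illustrative sketch of a local runtime class
// that sits behind a XAML markup file.
namespace MyApp
{
    [default_interface]
    runtimeclass MainPage : Windows.UI.Xaml.Controls.Page
    {
        MainPage();
        Int32 MyProperty;   // a sample property visible to XAML
    }
}
```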
The XAML design surface support in Visual Studio 2019 for C++/WinRT is close to parity with C#. In Visual Studio 2019, you can use the Events tab of the Properties window to add event handlers within a C++/WinRT project. You can also add event handlers to your code manually—see Handle events by using delegates in C++/WinRT for more info.
Core App (C++/WinRT)
A project template for a Universal Windows Platform (UWP) app that doesn't use XAML.
Instead, it uses the C++/WinRT Windows namespace header for the Windows.ApplicationModel.Core namespace. After building and running, click on an empty space to add a colored square; then click on a colored square to drag it.
Windows Console Application (C++/WinRT)
A project template for a C++/WinRT client application for Windows Desktop, with a console user-interface.
Windows Desktop Application (C++/WinRT)
A project template for a C++/WinRT client application for Windows Desktop, which displays a Windows Runtime Windows.Foundation.Uri inside a Win32 MessageBox.
Windows Runtime Component (C++/WinRT)
A project template for a component; typically for consumption from a Universal Windows Platform (UWP) app.
This template demonstrates the
midl.exe >
cppwinrt.exe toolchain, where Windows Runtime metadata (
.winmd) is generated from IDL, and then implementation and header stubs are generated from the Windows Runtime metadata.
In an IDL file, define the runtime classes in your component, their default interface, and any other interfaces they implement. Build the project once to generate
module.g.cpp,
module.h.cpp, implementation templates in
Generated Files, and stub type definitions in
Generated Files\sources. Then use those stub type definitions for reference to implement the runtime classes in your component. See Factoring runtime classes into Midl files (.idl).
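For example, a component's IDL might declare a runtime class like this; the names are illustrative only.

```idl
// MyRuntimeClass.idl -- sketch of a runtime class exposed by a
// Windows Runtime component.
namespace MyComponent
{
    runtimeclass MyRuntimeClass
    {
        MyRuntimeClass();   // default constructor on the default interface
        String Message;     // a read-write string property
    }
}
```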
Bundle the built Windows Runtime component binary and its
.winmd with the UWP app consuming them.
Earlier versions of the VSIX extension
We recommend that you install (or update to) the latest version of the VSIX extension. It is configured to update itself by default. If you do that, and you have projects that were created with a version of the VSIX extension earlier than 1.0.190128.4, then this section contains important info about upgrading those projects to work with the new version. If you don't update, then you'll still find the info in this section useful.
In terms of supported Windows SDK and Visual Studio versions, and Visual Studio configuration, the info in the Visual Studio support for C++/WinRT, XAML, the VSIX extension, and the NuGet package section above applies to earlier versions of the VSIX extension. The info below describes important differences regarding the behavior and configuration of projects created with (or upgraded to work with) earlier versions.
Created earlier than 1.0.181002.2
If your project was created with a version of the VSIX extension earlier than 1.0.181002.2, then C++/WinRT build support was built into that version of the VSIX extension. Your project has the
<CppWinRTEnabled>true</CppWinRTEnabled> property set in the
.vcxproj file.
<Project ...> <PropertyGroup Label="Globals"> <CppWinRTEnabled>true</CppWinRTEnabled> ...
You can upgrade your project by manually installing the Microsoft.Windows.CppWinRT NuGet package. After installing (or upgrading to) the latest version of the VSIX extension, open your project in Visual Studio, click Project > Manage NuGet Packages... > Browse, type or paste Microsoft.Windows.CppWinRT in the search box, select the item in search results, and then click Install to install the package for your project.
Created with (or upgraded to) between 1.0.181002.2 and 1.0.190128.3
If your project was created with a version of the VSIX extension between 1.0.181002.2 and 1.0.190128.3, inclusive, then the Microsoft.Windows.CppWinRT NuGet package was installed in the project automatically by the project template. You might also have upgraded an older project to use a version of the VSIX extension in this range. If you did, then—since build support was also still present in versions of the VSIX extension in this range—your upgraded project may or may not have the Microsoft.Windows.CppWinRT NuGet package installed.
To upgrade your project, follow the instructions in the previous section and ensure that your project does have the Microsoft.Windows.CppWinRT NuGet package installed.
Invalid upgrade configurations
With the latest version of the VSIX extension, it's not valid for a project to have the
<CppWinRTEnabled>true</CppWinRTEnabled> property if it doesn't also have the Microsoft.Windows.CppWinRT NuGet package installed. A project with this configuration produces the build error message, "The C++/WinRT VSIX no longer provides project build support. Please add a project reference to the Microsoft.Windows.CppWinRT Nuget package."
As mentioned above, a C++/WinRT project now needs to have the NuGet package installed in it.
Since the
<CppWinRTEnabled> element is now obsolete, you can optionally edit your
.vcxproj, and delete the element. It's not strictly necessary, but it's an option.
Also, if your
.vcxproj contains
<RequiredBundles>$(RequiredBundles);Microsoft.Windows.CppWinRT</RequiredBundles>, then you can remove it so that you can build without requiring the C++/WinRT VSIX extension to be installed.
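Concretely, these are the two elements you can delete from the .vcxproj, shown here in isolation:

```xml
<!-- Both elements are obsolete once the Microsoft.Windows.CppWinRT
     NuGet package provides build support, and can be removed. -->
<CppWinRTEnabled>true</CppWinRTEnabled>
<RequiredBundles>$(RequiredBundles);Microsoft.Windows.CppWinRT</RequiredBundles>
```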
SDK support for C++/WinRT
Although it is now present only for compatibility reasons, as of version 10.0.17134.0 (Windows 10, version 1803), the Windows SDK contains a header-file-based standard C++ library for consuming first-party Windows APIs (Windows Runtime APIs in Windows namespaces). Those headers are inside the folder
%WindowsSdkDir%Include\<WindowsTargetPlatformVersion>\cppwinrt\winrt. As of the Windows SDK version 10.0.17763.0 (Windows 10, version 1809), these headers are generated for you inside your project's $(GeneratedFilesDir) folder.
Again for compatibility, the Windows SDK also comes with the
cppwinrt.exe tool. However, we recommend that you instead install and use the most recent version of
cppwinrt.exe, which is included with the Microsoft.Windows.CppWinRT NuGet package. That package, and
cppwinrt.exe, are described in the sections above.
Custom types in the C++/WinRT projection
In your C++/WinRT programming, you can use standard C++ language features and standard C++ data types, including some C++ Standard Library data types. But you'll also become aware of some custom data types in the projection, and you can choose to use them. For example, we use winrt::hstring in the quick-start code example in Get started with C++/WinRT.
winrt::com_array is another type that you're likely to use at some point. But you're less likely to directly use a type such as winrt::array_view. Or you may choose not to use it so that you won't have any code to change if and when an equivalent type appears in the C++ Standard Library.
Warning
There are also types that you might see if you closely study the C++/WinRT Windows namespace headers. An example is winrt::param::hstring, but there are collection examples too. These exist solely to optimize the binding of input parameters, and they yield big performance improvements and make most calling patterns "just work" for related standard C++ types and containers. These types are only ever used by the projection in cases where they add most value. They're highly optimized and they're not for general use; don't be tempted to use them yourself. Nor should you use anything from the
winrt::impl namespace, since those are implementation types, and therefore subject to change. You should continue to use standard types, or types from the winrt namespace.
Also see Passing parameters into the ABI boundary.
https://docs.microsoft.com/en-us/windows/uwp/cpp-and-winrt-apis/intro-to-using-cpp-with-winrt
import nextapp.echo2.app.ImageReference;
import nextapp.echo2.app.button.AbstractButton;
import nextapp.echo2.app.button.DefaultButtonModel;

/**
 * An implementation of a "push" button.
 */
public class Button extends AbstractButton {

    /**
     * Creates a button with no text or icon.
     */
    public Button() {
        this(null, null);
    }

    /**
     * Creates a button with text.
     *
     * @param text the text to be displayed in the button
     */
    public Button(String text) {
        this(text, null);
    }

    /**
     * Creates a button with an icon.
     *
     * @param icon the icon to be displayed in the button
     */
    public Button(ImageReference icon) {
        this(null, icon);
    }

    /**
     * Creates a button with text and an icon.
     *
     * @param text the text to be displayed in the button
     * @param icon the icon to be displayed in the button
     */
    public Button(String text, ImageReference icon) {
        super();

        setModel(new DefaultButtonModel());

        setIcon(icon);
        setText(text);
    }
}
http://kickjava.com/src/nextapp/echo2/app/Button.java.htm
ZODB3 3.9 requires Python 2.4.2 or later. ZODB 3.9 ZEO clients can talk to ZODB 3.8 and 3.9 ZEO servers.
3.9.6 (2010-09-21)
Bugs Fixed
Updating blobs in save points could cause spurious “invalidations out of order” errors.
(Thanks to Christian Zagrodnick for chasing this down.)
If a ZEO client process was restarted while invalidating a ZEO cache entry, the cache could be left in a state in which there is data marked current that should be invalidated, leading to persistent conflict errors.
Corrupted or invalid cache files prevented ZEO clients from starting. Now, bad cache files are moved aside.
Invalidations of object records in ZEO caches, where the invalidation transaction ids matched the cached transaction ids should have been ignored.
Shutting down a process while committing a transaction or processing invalidations from the server could cause ZEO persistent client caches to have invalid data. This, in turn, caused stale data to remain in the cache until it was updated.
Conflict errors didn’t invalidate ZEO cache entries.
When objects were added in savepoints and either the savepoint was rolled back () or the transaction was aborted (), the objects' _p_oid and _p_jar variables weren't cleared, leading to surprising errors.
Objects added in transactions that were later aborted could have _p_changed still set ().
ZEO extension methods failed when a client reconnected to a storage. ():
Passing keys or values outside the range of 32-bit ints on 64-bit platforms led to undetected overflow errors. Now these cases cause Type errors to be raised.
BTree sets and tree sets didn’t correctly check values passed to update or to constructors, causing Python to exit under certain circumstances.
The verbose mode of the fstest was broken. ()
3.9.5 (2010-04-23)
Bugs Fixed
Fixed bug in cPickleCache’s byte size estimation logic. ()
Fixed a serious bug that caused cache failures when run with Python optimization turned on.
Fixed a bug that caused savepoint rollback to not properly set object state when objects implemented _p_invalidate methods that reloaded their state (unghostifiable objects).
Cross-database weakrefs weren't handled correctly.
The mkzeoinst script was fixed to tell people to install and use the mkzeoinstance script. :)
3.9.4 (2009-12-14)
Bugs Fixed
- A ZEO threading bug could cause transactions to read inconsistent data. (This sometimes caused an AssertionError in Connection._setstate_noncurrent.)
- DemoStorage.loadBefore sometimes returned invalid data which would trigger AssertionErrors in ZODB.Connection.
- History support was broken when using storages that work with both ZODB 3.8 and 3.9.
- zope.testing was an unnecessary non-testing dependency.
- Internal ZEO errors were logged at the INFO level, rather than at the error level.
- The FileStorage backup and restore script, repozo, gave a deprecation warning under Python 2.6.
- C Header files weren’t installed correctly.
3.9.3 (2009-10-23)
Bugs Fixed
- 2 BTree bugs, introduced by a bug fix in 3.9.0c2, sometimes caused deletion of keys to be improperly handled, resulting in data being available via iteration but not item access.
3.9.2 (2009-10-13)
Bugs Fixed
- ZEO manages a separate thread for client network IO. It created this thread on import, which caused problems for applications that implemented daemon behavior by forking. Now, the client thread isn’t created until needed.
- File-storage pack clean-up tasks that can take a long time unnecessarily blocked other activity.
- In certain rare situations, ZEO client connections would hang during the initial connection setup.
3.9.1 (2009-10-01)
Bugs Fixed
- Conflict errors committing blobs caused ZEO servers to stop committing transactions.
3.9.0 (2009-09)
New Features
FileStorage now supports blobs directly.
You can now control whether FileStorages keep .old files when packing.
POSKeyErrors are no longer logged by ZEO servers, because they are really client errors.
A new storage interface, IExternalGC, to support external garbage collection, has been defined and implemented for FileStorage and ClientStorage.
As a small convenience (mainly for tests), you can now specify initial data as a string argument to the Blob constructor.
ZEO servers now provide an option, invalidation-age, that allows quick verification of ZEO clients that have been disconnected for less than a given time, even if the number of transactions the client hasn't seen exceeds the invalidation queue size. This is only recommended if the storage being served supports efficient iteration from a point near the end of the transaction history.
The FileStorage iterator now handles large files better. When iterating from a starting transaction near the end of the file, the iterator will scan backward from the end of the file to find the starting point. This enhancement makes it practical to take advantage of the new storage server invalidation-age option.
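As a sketch, the new server option would appear in a runzeo configuration like the following; the address, storage path, and values are illustrative only.

```
<zeo>
  address 8100
  invalidation-queue-size 100
  invalidation-age 3600
</zeo>

<filestorage 1>
  path Data.fs
</filestorage>
```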
Previously, database connections were managed as a stack. This tended to cause the same connection(s) to be used over and over. For example, the most used connection would typically be the only connection used. In some rare situations, extra connections could be opened and end up on the top of the stack, causing extreme memory wastage. Now, when connections are placed on the stack, they sink below existing connections that have more active objects.
There is a new pool-timeout database configuration option to specify that connections unused after the given time interval should be garbage collected. This will provide a means of dealing with extra connections that are created in rare circumstances and that would consume an unreasonable amount of memory.
The Blob open method now supports a new mode, ‘c’, to open committed data for reading as an ordinary file, rather than as a blob file. The ordinary file may be used outside the current transaction and even after the blob’s database connection has been closed.
ClientStorage now provides blob cache management. When using non-shared blob directories, you can set a target cache size and the cache will periodically be reduced to the target size.
When a ZEO cache is stale and would need verification, a ZEO.interfaces.StaleCache event is published (to zope.event). Applications may handle this event and take action such as exiting the application without verifying the cache or starting cold.
There’s a new convenience function, ZEO.DB, for creating databases using ZEO Client Storages. Just call ZEO.DB with the same arguments you would otherwise pass to ZEO.ClientStorage.ClientStorage:
import ZEO db = ZEO.DB(('some_host', 8200))
Object saves are a little faster
When configuring storages in a storage server, the storage name now defaults to "1". In the overwhelmingly common case of a single storage, the name can now be omitted.
FileStorage now provides optional garbage collection. A ‘gc’ keyword option can be passed to the pack method. A false value prevents garbage collection.
The FileStorage constructor now provides a boolean pack_gc option, which defaults to True, to control whether garbage collection is performed when packing by default. This can be overridden with the gc option to the pack method.
The ZConfig configuration for FileStorage now includes a pack-gc option, corresponding to the pack_gc constructor argument.
The FileStorage constructor now has a packer keyword argument that allows an alternative packer to be supplied.
The ZConfig configuration for FileStorage now includes a packer option, corresponding to the packer constructor argument.
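As a sketch, the pack-gc option described above would appear in a FileStorage ZConfig section like this; the path and value are illustrative only.

```
<filestorage>
  path Data.fs
  pack-gc false
</filestorage>
```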
MappingStorage now supports multi-version concurrency control and iteration and provides a better storage implementation example.
DemoStorage has a number of new features:
- The ability to use a separate storage, such as a file storage to store changes
- Blob support
- Multi-version concurrency control and iteration
- Explicit support for demo-storage stacking via push and pop methods.
When calling ZODB.DB to create a database, you can now pass a file name, rather than a storage, to use a file storage.
Added support for copying and recovery of blob storages:
Added a helper function, ZODB.blob.is_blob_record for testing whether a data record is for a blob. This can be used when iterating over a storage to detect blob records so that blob data can be copied.
In the future, we may want to build this into a blob-aware iteration interface, so that records get blob file attributes automatically.
Added the IBlobStorageRestoreable interfaces for blob storages that support recovery via a restoreBlob method.
Updated ZODB.blob.BlobStorage to implement IBlobStorageRestoreable and to have a copyTransactionsFrom method that also copies blob data.
New ClientStorage configuration option drop_cache_rather_verify. If this option is true then the ZEO client cache is dropped instead of the long (unoptimized) verification. For large caches, setting this option can avoid effective down times in the order of hours when the connection to the ZEO server was interrupted for a longer time.
Cleaned-up the storage iteration API and provided an iterator implementation for ZEO.
Versions are no-longer supported.
Document conflict resolution (see ZODB/ConflictResolution.txt).
Support multi-database references in conflict resolution.
Make it possible to examine oid and (in some situations) database name of persistent object references during conflict resolution.
Moved the ‘transaction’ module out of ZODB. ZODB depends upon this module, but it must be installed separately.
ZODB installation now requires setuptools.
Added offset information to output of fstail script. Added test harness for this script.
Added support for read-only, historical connections based on datetimes or serials (TIDs). See src/ZODB/historical_connections.txt.
Removed the ThreadedAsync module.
Now depend on zc.lockfile
Bugs Fixed
CVE-2009-2701: Fixed a vulnerability in ZEO storage servers when blobs are available. Someone with write access to a ZEO server configured to support blobs could read any file on the system readable by the server process and remove any file removable by the server process.
BTrees (and TreeSets) kept references to internal keys.
BTree Sets and TreeSets don’t support the standard set add method. (Now either add or the original insert method can be used to add an object to a BTree-based set.)
The runzeo script didn’t work without a configuration file. ()
Officially deprecated PersistentDict ().
Fixed analyze.py and added tests.
Fix for bug #251037: Make packing of blob storages non-blocking.
Fix for bug #220856: Completed implementation of ZEO authentication.
Fix for bug #184057: Make initialisation of small ZEO client file cache sizes not fail.
Fix for bug #184054: MappingStorage used to raise a KeyError during load instead of a POSKeyError.
Fixed bug in Connection.TmpStore: load() would not defer to the backend storage for loading blobs.
Fix for bug #181712: Make ClientStorage update lastTransaction directly after connecting to a server, even when no cache verification is necessary.
Fixed bug in blob filesystem helper: the isSecure check was inverted.
Fixed bug in transaction buffer: a tuple was unpacked incorrectly in clear.
Bugfix the situation in which comparing persistent objects (for instance, as members in BTree set or keys of BTree) might cause data inconsistency during conflict resolution.
Fixed bug 153316: persistent and BTrees were using int for memory sizes which caused errors on x86_64 Intel Xeon machines (using 64-bit Linux).
Fixed small bug that the Connection.isReadOnly method didn’t work after a savepoint.
Bug #98275: Made ZEO cache more tolerant when invalidating current versions of objects.
Fixed a serious bug that could cause client I/O to stop (hang). This was accompanied by a critical log message along the lines of: “RuntimeError: dictionary changed size during iteration”.
Fixed bug #127182: Blobs were subclassable which was not desired.
Fixed bug #126007: tpc_abort had untested code path that was broken.
Fixed bug #129921: getSize() function in BlobStorage could not deal with garbage files
Fixed bug in which MVCC would not work for blobs.
Fixed bug in ClientCache that occurred with objects larger than the total cache size.
When an error occurred attempting to lock a file and logging of said error was enabled..
File storages previously kept an internal object id to transaction id mapping as an optimization. This mapping caused excessive memory usage and failures during the last phase of two-phase commit. This optimization has been removed.
Refactored handling of invalidations on ZEO clients to fix a possible ordering problem for invalidation messages.
Fixed bug #190884: Wrong reference to POSKeyError caused NameError.
Completed implementation of ZEO authentication. This fixes issue 220856.
https://pypi.python.org/pypi/ZODB3/3.9.6
|
See also: IRC log
scribe: florian
<scribe> scribe: florian
<chris> I have sent a mail with an updated version of the API-document
changed several agenda items
<scribe> new agenda:
chris sent the new document version
going through a list of collected API issues regarding the API doc
<chris> is this list available online?
RESOLUTION: first version of the API document will only contain GET methods
Excel file containing remaining issues:
Wonsuk: we need to add general examples how MAObject can be retrieved, e.g. in a JavaScript example
Chris: this must be an elaborated example
<scribe> ACTION: chris to come up with a good JavaScript example for the access of MAObject [recorded in]
<trackbot> Created ACTION-237 - Come up with a good JavaScript example for the access of MAObject [on Chris Poppe - due 2010-05-04].
next issue (column 9)
Wonsuk: this is already done
next issue (column 10)
Chris: already done
next issue (column 11)
Chris: yet no changes for this issue
<scribe> ACTION: chris to add example regarding column 11 [recorded in]
<trackbot> Created ACTION-238 - Add example regarding column 11 [on Chris Poppe - due 2010-05-04].
next issue (column 14)
<chris> done
Group wants to keep the picture in the specification
<chris> +1
<scribe> ACTION: werner to convert the image in the API doc to svg [recorded in]
<trackbot> Created ACTION-239 - Convert the image in the API doc to svg [on Werner Bailer - due 2010-05-04].
next issue (column 15)
same issue as 14
next issue (column 16)
misunderstanding
next issue (column 17)
Chris: setMAResource is ok for me
- already changed it
... the name is not important for me
next issue (column 18)
Florian: setMAResource is misleading
Martin: perhaps select would be better?
<scribe> ACTION: wonsuk to change the name of method "setMAResource" [recorded in]
<trackbot> Created ACTION-240 - Change the name of method "setMAResource" [on WonSuk Lee - due 2010-05-04].
Florian: name of "setMAResource" method is misleading
Martin: perhaps select would be better?
<scribe> ACTION: wonsuk to change the name of method "setMAResource" [recorded in]
<trackbot> Created ACTION-241 - Change the name of method "setMAResource" [on WonSuk Lee - due 2010-05-04].
Joakim: we should also send the reviewer the summary files for the docs
API Doc summary file for the reviewer available at:
next issue ()
s/next issue ()/next issue (line 22)
Joakim: can you explain what getProperty does?
Chris: if you ask for a value of a property (e.g., title), getProperty retrieves an array containing the objects
<chris>
Chris: here you can see an
example of getProperty in JavaScript
... i will make examples for all properties and also the general examples
next issue (line 21)
Chris: changed
next issue (line 22)
pass
next issue (line 24)
Chris: felix said he wants to
change it
... sylvia proposed how to do this
<chris> felix to update the service examples with appropriate structured JSON
<scribe> ACTION: felix to update the service examples with appropriate structured JSON [recorded in]
<trackbot> Created ACTION-242 - Update the service examples with appropriate structured JSON [on Felix Sasaki - due 2010-05-04].
<tobiasb> Page for the RDF Taskforce:
<chris>\
next item (line 28)
<chris>
next issue (item 30)
a real example should be added to clarify
<chris> +1
<chris> if somebody has a good example we can update the document
next issue (item 31)
Chris: changed
next issue (item 32)
Chris: will be replaced by the description interface
next issue (item 33)
Chris: also changing into keyword interface
next issue (item 34)
Chris: done
next issue (item 35)
Chris: done
next issue (item 36)
agreed at the TPAC and also changed in the ontology doc
next issue (item 37)
real example should be added
next issue (item 38)
Chris: changed
Joakim: we need units for framesize (pixels != svg)
<scribe> ACTION: wonsuk to adding units to the framesize [recorded in]
<trackbot> Created ACTION-243 - Adding units to the framesize [on WonSuk Lee - due 2010-05-04].
next issue (item 40)
Chris: updating it
<scribe> ACTION: chris to update the doc regarding item 40 [recorded in]
<trackbot> Created ACTION-244 - Update the doc regarding item 40 [on Chris Poppe - due 2010-05-04].
next issue (item 41)
<scribe> ACTION: chris to update the doc regarding item 41 [recorded in]
<trackbot> Created ACTION-245 - Update the doc regarding item 41 [on Chris Poppe - due 2010-05-04].
next issue (item 42)
<scribe> ACTION: wonsuk to update the change the bitrate into averagebitrate [recorded in]
<trackbot> Created ACTION-246 - Update the change the bitrate into averagebitrate [on WonSuk Lee - due 2010-05-04].
next issue (item 43)
Chris: should be resolved
next issue (item 44)
next issue (item 45)
solved
next issue (item 46)
Chris: keep as is
+1
-1
<scribe> ACTION: florian to discuss possible error messages as an input for the diagnosis section [recorded in]
<trackbot> Created ACTION-247 - Discuss possible error messages as an input for the diagnosis section [on Florian Stegmaier - due 2010-05-04].
next issue (item 47)
<scribe> done
next issue (item 48)
<scribe> done
next issue (item 49)
<scribe> done
next issue (item 50)
Chris: removed
next issue (item 51)
<scribe> done (security disclaimer)
next issue (item 52)
<scribe> done
next issue (item 53)
Joakim: should we forward this to Dom
<scribe> ACTION: joakim to ask dominique regarding item 53 (WebIDL) [recorded in]
<trackbot> Created ACTION-248 - Ask dominique regarding item 53 (WebIDL) [on Joakim Söderberg - due 2010-05-04].
next issue (item 54)
agreed
Since not every item has been clarified, please look into this Excel file for detailed information:
This file will be also given to the reviewer.
<chris> the new version of the API document is online
<chris>
<scribe> scribe: florian
Tobias: Martin and I have updated the wiki page entry - Veronique and Pierre-Antoine didn't contribute yet
Tobias: we could make a web implementation using e.g., Dublin Core - may not cover everything
Jean-Pierre: why do you want to
use DC etc.? Don't have the MA namespace
... you have to distinguish between classes and the implementation
Tobias: we should define first the concepts and map these to other ontologies
Tobias (explaining Jean-Pierre's point): we must first define the classes that we need, then look into the implementation
Tobias: let's go through the
mentioned classes in the wiki entry if they are complete
... better if they cover our needs
... first we need a class for resource
... why program?
Jean-Pierre: it's just an
illustration
... for me resource is ok
Joakim: we should name it MediaResource
Tobias is going through the set of core properties and derives classes together with the group
List of needed classes: 1. MediaResource 2. Creator 3. Contributor 4. Publisher 5. Location 6. Collection 7. MediaFragment 8. Concepts
Tobias: I do not know if we need to distinguish between 2., 3. and 4.
Jean-Pierre: of course you can merge them, but then you have to define rules etc.
Werner: here we go beyond the
attributes we have defined
... the main question is how far we want to go in defining the attributes of these classes
<scribe> scribe: florian
RESOLUTION: we take these classes defined in the 7th MAWG F2F (1. - 8.)
Tobias: we might add subclasses
to 1. if we want
... the next step is to define the properties
The group goes through the list and chooses whether properties are data or object properties
identifier data property?
yes
title data property? - yes
language data property? -yes
locator - dp
contributor - op
creator - op
createDate dp
DP ma:identifier
DP ma:title
DP ma:language
DP ma:locator
OP ma:contributor
OP ma:creator
DP ma:createDate
OP ma:location
DP ma:description
DP ma:keyword
OP ma:genre
OP ma:rating
OP ma:relation
OP ma:collection
DP ma:copyright
DP ma:policy
OP ma:publisher
OP ma:targetAudience
OP ma:fragments
OP ma:namedFragments
DP ma:frameSize
DP ma:compression
DP ma:duration
DP ma:format
DP ma:samplingrate
DP ma:framerate
DP ma:bitrate
DP ma:numTracks
dp == data property, op == object property
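The class and property decisions above could be written down in OWL/Turtle roughly as follows. This is only an illustrative sketch: the `ma:` namespace URI and the exact property spellings are assumptions, not the group's final ontology.

```turtle
@prefix ma:   <http://www.w3.org/ns/ma-ont#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Classes agreed at the F2F
ma:MediaResource a owl:Class .
ma:Creator       a owl:Class .
ma:Collection    a owl:Class .

# DP = datatype property (literal-valued)
ma:title a owl:DatatypeProperty ;
    rdfs:domain ma:MediaResource .

# OP = object property (relates two resources)
ma:creator a owl:ObjectProperty ;
    rdfs:domain ma:MediaResource ;
    rdfs:range  ma:Creator .
```

The DP/OP split above maps directly to Turtle: each DP becomes an `owl:DatatypeProperty`, each OP an `owl:ObjectProperty` with a class from the list as its range.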
Tobias will make an ontology proposal to have a basis to discuss.
<scribe> ACTION: tobiasb to come up with an ontology proposal on the basis of the Task Force Meeting in the 7th MAWG F2F [recorded in]
<trackbot> Sorry, couldn't find user - tobiasb
<scribe> ACTION: tobias to come up with an ontology proposal on the basis of the Task Force Meeting in the 7th MAWG F2F [recorded in]
<trackbot> Created ACTION-249 - Come up with an ontology proposal on the basis of the Task Force Meeting in the 7th MAWG F2F [on Tobias Bürger - due 2010-05-04].
Tobias and some others will have some offline discussion about the ontology today and tomorrow.
Here we reflect the changes in the documents. No more scribing today.
[scribe.perl Revision 1.135 diagnostic output]
Scribe: florian
Present: wonsuk, joakim, martin_h, daniel, wbailer, florian, tobiasb, chris, john
Date: 27 Apr 2010
People with action items: chris, felix, florian, joakim, tobias, tobiasb, werner, wonsuk
[End of scribe.perl diagnostic output]
Source: http://www.w3.org/2010/04/27-mediaann-minutes.html
This set of MCQs focuses on “Hive”.
1. Hive uses _________ for logging.
a) logj4
b) log4l
c) log4i
d) log4j
View Answer
Explanation: By default Hive will use hive-log4j.default in the conf/ directory of the Hive installation.
2. Point out the correct statement:
a) list FILE[S] <filepath>* executes a Hive query and prints results to standard output
b) <query string> executes a Hive query and prints results to standard output
c) <query> executes a Hive query and prints results to standard output
d) All of the mentioned
View Answer
Explanation: list FILE[S] <filepath>* checks whether the given resources are already added to the distributed cache or not. See Hive Resources below for more information.
3. What does hive.root.logger specify in the following statement?
$HIVE_HOME/bin/hive --hiveconf hive.root.logger=INFO,console
a) Log level
b) Log modes
c) Log source
d) All of the mentioned
View Answer
Explanation: hive.root.logger specifies the logging level as well as the log destination. Specifying console as the target sends the logs to the standard error.
4. HiveServer2 introduced in Hive 0.11 has a new CLI called __________
a) BeeLine
b) SqlLine
c) HiveLine
d) CLilLine
View Answer
Explanation: Beeline is a JDBC client based on SQLLine.
5. Point out the wrong statement:
a) There are four namespaces for variables in Hive
b) Custom variables can be created in a separate namespace with the define
c) Custom variables can also be created in a separate namespace with hivevar
d) None of the mentioned
View Answer
Explanation: Three namespaces for variables are hiveconf, system, and env.
6. HCatalog is installed with Hive, starting with Hive release
a) 0.10.0
b) 0.9.0
c) 0.11.0
d) 0.12.0
View Answer
Explanation: hcat commands can be issued as hive commands, and vice versa.
7. hiveconf variables are set as normal by using the following statement:
a) set -v x=myvalue
b) set x=myvalue
c) reset x=myvalue
d) none of the mentioned
View Answer
Explanation: The hiveconf variables are set as normal by using set x=myvalue.
8. Variable Substitution is disabled by using :
a) set hive.variable.substitute=false;
b) set hive.variable.substitutevalues=false;
c) set hive.variable.substitute=true;
d) all of the mentioned
View Answer
Explanation: Variable substitution is on by default (hive.variable.substitute=true).
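Questions 7 and 8 can be tied together with a short CLI fragment. This assumes a live Hive session and a hypothetical table named `employees`; it is illustrative, not part of the quiz:

```sql
-- define a custom variable in the hivevar namespace
SET hivevar:tbl=employees;

-- ${...} references are expanded while hive.variable.substitute=true (the default)
SELECT * FROM ${hivevar:tbl} LIMIT 10;

-- turn substitution off; ${hivevar:tbl} would now be taken literally
SET hive.variable.substitute=false;
```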
9. _______ supports a new command shell Beeline that works with HiveServer2.
a) HiveServer2
b) HiveServer3
c) HiveServer4
d) None of the mentioned
View Answer
Explanation: The Beeline shell works in both embedded mode as well as remote mode.
10. In ______ mode HiveServer2 only accepts valid Thrift calls.
a) Remote
b) HTTP
c) Embedded
d) Interactive
View Answer
Explanation: In HTTP mode, the message body contains Thrift payloads.
Sanfoundry Global Education & Learning Series – Hadoop.
Here’s the list of Best Reference Books in Hadoop.
Source: https://www.sanfoundry.com/hive-questions-answers/
Next: Catching Errors, Up: Handling Errors [Contents][Index]
The most common use of errors is for checking input arguments to functions. The following example calls the error function if the function f is called without any input arguments.
function f (arg1)
  if (nargin == 0)
    error ("not enough input arguments");
  endif
endfunction
When the error function is called, it prints the given message and returns to the Octave prompt. This means that no code following a call to error will be executed.
Display an error message and stop m-file execution.
Format the optional arguments under the control of the template string template using the same rules as the printf family of functions (see Formatted Output) and print the resulting message on the stderr stream. The message is prefixed by the character string ‘error: ’.
Calling error also sets Octave's internal error state such that control will return to the top level without evaluating any further commands. This is useful for aborting from functions or scripts.
If the error message does not end with a newline character, Octave will print a traceback of all the function calls leading to the error. For example, given the following function definitions:
function f ()
  g ();
end

function g ()
  h ();
end

function h ()
  nargin == 1 || error ("nargin != 1");
end
calling the function f will result in a list of messages that can help you to quickly locate the source of the error. Ending the error message with a newline character causes Octave to only print a single message:
function h ()
  nargin == 1 || error ("nargin != 1\n");
end

f ()
error: nargin != 1
A null string ("") input to error will be ignored and the code will continue running as if the statement were a NOP. This is for compatibility with MATLAB. It also makes it possible to write code such as

err_msg = "";
if (CONDITION 1)
  err_msg = "CONDITION 1 found";
elseif (CONDITION2)
  err_msg = "CONDITION 2 found";
…
endif
error (err_msg);
which will only stop execution if an error has been found.

See also: lasterror.
Since it is common to use errors when there is something wrong with the input to a function, Octave supports functions to simplify such code. When the print_usage function is called, it reads the help text of the function calling print_usage, and presents a useful error.
If the help text is written in Texinfo it is possible to present an error message that only contains the function prototypes as described by the @deftypefn parts of the help text. When the help text isn't written in Texinfo, the error message contains the entire help text.

Consider the following function.
## -*- texinfo -*-
## @deftypefn {} f (@var{arg1})
## @end deftypefn
function f (arg1)
  print_usage ();
endfunction

f ()
-| error: Invalid call to f
-| Correct usage is:
-|
-| -- f (ARG1)
-|
-| Additional help for built-in functions and operators is
-| available in the online version of the manual. Use the command
-| `doc <topic>' to search the manual index.
-|
-| Help and information about Octave is also available on the WWW
-| at and via the help@octave.org
-| mailing list.
Print the usage message.
See also: puts, fputs, printf, fprintf.
Query or set the internal variable that controls whether Octave will try to ring the terminal bell before printing an error message.
When called from inside a function with the
"local" option, the
variable is changed locally for the function and any subroutines it calls.
The original variable value is restored when exiting the function.
Source: https://docs.octave.org/v4.0.1/Raising-Errors.html
A while ago I started writing a NetBeans module to achieve the same, but in the GUI designer. I've always been too lazy to finish it. Now if you release a glasspane version of that, that'd be cool!
Posted by: gfx on September 13, 2006 at 03:18 AM
Romain,
The only disadvantage of using a glass pane is that you'd have to lock down all the events, painting the frame on an image and transforming that image. So, if you'll want to see the difference between the same field in two different visual states, you'd have to dismiss the glass pane in the middle :(
Posted by: kirillcool on September 13, 2006 at 09:07 AM
You can use a RepaintManager maybe. I once implemented a version of Jext () that had a 3D interface (you had to use red/blue stereo glasses) that way.
Posted by: gfx on September 13, 2006 at 11:09 AM
Romain,
I don't want to interfere with the application too much - just to have a simple way to preview the current UI in color-blind mode. So, the glass pane can be an easy solution - paint frame to image, process, overlay and lock down all the events. I'm not sure if i really want to go for the full-blown solution of the dynamic repaints.
In any case, all Substance themes provide an API for "color-blinding" them, so if you (well, may be no you :) use green and red themes for providing visual feedback, you can "color-blind" them as well and still have the regular responsive UI.
Posted by: kirillcool on September 13, 2006 at 11:33 AM
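The "paint frame to image, process, overlay" step described above can be sketched as a per-pixel transform. The protanopia matrix below is an illustrative approximation only (a faithful simulation would convert to linear RGB first), and the class name is hypothetical:

```java
import java.awt.image.BufferedImage;

public class ColorBlindPreview {
    // Illustrative protanopia simulation matrix, applied directly in sRGB
    // for simplicity. Not a faithful Brettel/Vienot simulation.
    static final double[][] PROTANOPIA = {
        {0.567, 0.433, 0.0},
        {0.558, 0.442, 0.0},
        {0.0,   0.242, 0.758}
    };

    // Apply the 3x3 matrix to one ARGB pixel, preserving alpha.
    static int simulatePixel(int argb, double[][] m) {
        int a = (argb >>> 24) & 0xFF;
        int r = (argb >>> 16) & 0xFF;
        int g = (argb >>> 8) & 0xFF;
        int b = argb & 0xFF;
        int nr = clamp(m[0][0] * r + m[0][1] * g + m[0][2] * b);
        int ng = clamp(m[1][0] * r + m[1][1] * g + m[1][2] * b);
        int nb = clamp(m[2][0] * r + m[2][1] * g + m[2][2] * b);
        return (a << 24) | (nr << 16) | (ng << 8) | nb;
    }

    static int clamp(double v) {
        return (int) Math.max(0, Math.min(255, Math.round(v)));
    }

    // Transform a snapshot of the UI (e.g. a frame painted to an image)
    // before overlaying it on a glass pane.
    static BufferedImage simulate(BufferedImage src, double[][] m) {
        BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(),
                BufferedImage.TYPE_INT_ARGB);
        for (int y = 0; y < src.getHeight(); y++)
            for (int x = 0; x < src.getWidth(); x++)
                out.setRGB(x, y, simulatePixel(src.getRGB(x, y), m));
        return out;
    }

    public static void main(String[] args) {
        int red = 0xFFFF0000;
        System.out.printf("%08X -> %08X%n",
                red, simulatePixel(red, PROTANOPIA));
    }
}
```

Note how a pure red pixel collapses toward a yellowish-brown, which is exactly the red/green confusion discussed in the post.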
Hi,
Nice entry, Kirill.
I think we should find a solution to this, probably including the people responsible for accessibility stuff in the JDK/Swing teams.
I also think this deserves a java.net project of its own.
What about building a special JPanel to hack the "Graphics" object sent to children components?
I was thinking of tackling the Color Model used in the Composite of the corresponding Graphics2D, something like this:
public class ColorBlindPanel extends JPanel
{
    Composite colorBlindComposite = ....;

    @Override // Yes, we're overriding paint, and not paintComponent
    public void paint( Graphics g )
    {
        Graphics2D g2d = (Graphics2D) g;
        g2d.setComposite( colorBlindComposite );
        super.paint( g );
    }
}
Of course we should build a specific ColorBlindComposite implementing the Composite and including some custom ColorModel's.
If we get to build such a Composite and ColorModel stuff we could propose including them in the JDK and making the hack a part of the root pane of all windows (or enable it as a default in some other way in all Swing components).
Cheers,
Antonio
Posted by: vieiro on September 14, 2006 at 08:21 AM
Antonio,
Now this sounds like a lot of work :) I'm afraid I'm not up to it at this point. Anybody interested? Anybody from Swing accessibility team?
Posted by: kirillcool on September 15, 2006 at 09:25 AM
Yes, I agree this should be led by someone from the Swing accessibility team.
Thinking of it I assume you could use the same technique (handling the Composite on the Graphics object) in a GlassPane.
The Accessibility team could place some sort of color filtering to allow for better legibility for color blind people (using inverse algorithms to the one you were using).
Posted by: vieiro on September 15, 2006 at 11:33 AM
Antonio,
This is an excellent idea - how to design a theme so that it will look the way you expect it to the color-blind people. However, this may prove to be impossible. Let's say you want to use yellow theme - but the tritanopes can't see yellow at all - so no matter how you "twist" the original scheme, you can't get it to look yellow to the tritanopes. Same would be with red for protanopes and deuteranopes - see images here.
Posted by: kirillcool on September 15, 2006 at 11:53 AM
Hi Kirill,
Yes, but wouldn't it be possible to change all yellow to any other color that tritanopes can see?
Of course the resulting color may not be yellow, but, say, pink. The real important thing is that the resulting color is visible to color blind people. Not really important if it's the *same* color as the original one.
Ideally the Swing accessibility team could integrate this in the accessibility support, so people with these disabilities could have a better user experience.
As far as I understand there's some kind of support for this already in Swing (with operating system support). On some operating systems you can make your look and feel use high-contrast colors and bigger fonts.
Posted by: vieiro on September 18, 2006 at 02:48 AM
Source: http://weblogs.java.net/blog/kirillcool/archive/2006/09/how_colorblind.html
Prerequisite
For collaborative development, we use the “Fork & Pull” model from Github. So anyone who wants to contribute needs an account on Github. Then you need to fork the project you want to contribute.
Note
If you want to contribute a new camera plug-in, you should first contact us (by email at lima@esrf.fr) to get the new plug-in camera sub-module created. We will provide:
a default structure of directories (<mycamera>/src/ include/ sip/ doc/ python/ test/)
the build system file (<mycamera>/CMakeLists.txt)
template files (src and include) for the mandatory classes:

- <MyCamera>Interface
- <MyCamera>DetInfoCtrlObj
- <MyCamera>SyncCtrlObj
a standard .gitignore file
a template index.rst for the documentation
As above do not forget to fork the new sub-module project.
Create a github account
This is an easy task, you just have to Sign up, it’s free!
Fork a project
Check out the Github doc, it is far better explained than we could do ;)
Contribute guideline
It is very simple to contribute, you should follow the steps below.
Branch
First of all you have to create a branch for a new feature or for a bug fix; use an explicit branch name, for instance "soleil_video_patch".
Code/patch
If it's a patch for an existing module, respect and keep the coding style of the previous programmer (indentation, variable naming, end-of-line…).
If you're starting a new camera project, you just have to respect a few rules:

- Class members must start with 'm_'
- Class methods must be in CamelCase
- You must define the camera's namespace
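The three rules above can be summarized in a tiny header sketch. All names here are hypothetical illustrations, not the actual Lima camera API:

```cpp
#include <string>

// Hypothetical sketch of the coding rules: members prefixed with m_,
// methods in CamelCase, everything inside the camera's namespace.
namespace lima {
namespace MyCamera {

class Camera {
public:
    explicit Camera(const std::string& name) : m_name(name), m_nb_frames(0) {}

    // Methods use CamelCase, as the guideline requires.
    void startAcquisition() { ++m_nb_frames; }
    int getNbFrames() const { return m_nb_frames; }
    const std::string& getName() const { return m_name; }

private:
    // Data members carry the mandatory m_ prefix.
    std::string m_name;
    int m_nb_frames;
};

} // namespace MyCamera
} // namespace lima
```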
Commit
Do as many commits as you need, with clear comments. Prefer an atomic commit with a single change rather than a huge commit with too many (unrelated) changes.
Pull Request
Then submit a Pull Request
At this stage you have to wait, we need some time to accept or reject your request. So there are two possible outcomes:
The Pull-request is accepted, congrat!
We merge your branch with the main project master branch, then everything is fine and you can now synchronize your forked project with the main project and go on with your next contribution.
The pull-request is rejected:
The pull request could be rejected if:

- the new code doesn't compile
- it breaks backward compatibility
- the python wrapping is missing or not updated
- the commit log message doesn't describe what you actually do
In case of a new camera plug-in sub-module, the first pull request will be rejected if:

- any of the above applies
- the documentation is missing or does not fit the guidelines (i.e. Understand the plugin architecture)
We will tell you (code review on Github and/or email) about the reason and we will give some advice to improve your next pull-request attempt.
So at this point you have to loop back to item 2 (Code/patch) again. Good luck!
Source: https://lima1.readthedocs.io/en/v1.9.12/howto_contribute.html
I just started using xmonad, and I've got most of the wrinkles ironed out, but this one still bugs me. I use pidgin + libnotify, and whenever a notification pops up, it grabs focus, meaning I can't type anything for a minute (unless I move my mouse and close it). This is particularly annoying when I'm having a chat back-and-forth, and I get notifications from that window/another chat window.
(I know I can turn off the notifications when the window is currently open, but it doesn't solve the larger problem of notifications grabbing focus when I'm typing in a separate screen).
Offline
What do you use to handle the notifications? I have no focus-grabbing issues with notification-daemon.
Offline
Hmm, I thought I used that as well, but seems like it's not installed, on second thought. Here's what I do have, though:
$ pacman -Q | grep notif
haskell-hinotify 0.3.2-1
libnotify 0.7.4-1
pidgin-libnotify 0.14-4
python-notify 0.1.1-10
startup-notification 0.12-2
xfce4-notifyd 0.2.2-2
pidgin-libnotify is the plugin that I've enabled (within pidgin) to send the notifications. But the problem is not just there, since I have the same issue when I run $ notify-send "foo".
Last edited by chimeracoder (2012-02-05 21:46:59)
Offline
You have xfce4-notifyd. If that wasn't a conscious choice, you can switch to notification-daemon. Otherwise, you'll have to figure out if you can get the xfce thing to behave better (very quick google search turns up some similar problems, no clear solution).
Offline
That was not a conscious choice; I just switched from xfce, so that makes sense.
I uninstalled xfce4-notifyd, installed notification-daemon, and reinstalled pidgin-libnotify (for good measure). The notifications still work, so I'm assuming notification-daemon is working properly, but the focus still gets grabbed.
Do you use pidgin as well, or just notification-daemon? I still get the same issue when I do notify-send, which makes me believe it's not pidgin's problem, but maybe not.
This website seems to imply that it's an xmonad configuration issue, but I'm not that familiar with Haskell/xmonad's configurations. Is it possible to alter this behavior *only* for notify-send events, but not anything else that might display an urgent window?
Offline
Yeah it's probably a problem with your window manager. I had this with xfwm and xfce4-notifyd with some particular settings of xfwm that had to do with focus stealing prevention (enabling it caused the focus stealing, go figure.) I unfortunately can't help with xmonad's configuration...
Offline
Ah-hah! Here is the solution that worked for me.
My xmonad.hs has a section that looks like this....
myManageHook :: ManageHook
myManageHook = composeAll
    [ className =? "Xfce4-notifyd" --> doF W.focusDown ]
This returns the focus to where it was before. My other problem with Xfce4-notifyd is that the notification only appeared on the workspace I was using. To make it appear on all workspaces, modify the above line so that it also does a "copyToAll".
className =? "Xfce4-notifyd" --> doF W.focusDown <+> doF copyToAll
Be sure to include this import if you don't have it already.
import XMonad.Actions.CopyWindow
Offline
doF W.focusDown <+> doF copyToAll
I found that using "doIgnore" instead will both avoid stealing of focus, and leave it visible on all workspaces.
Offline
Haven't found a way to do this in Icewm. The ubuntu osd notifier does not work very well because you cannot set the display time. So I just stick with conky to display my current settings.
Offline
Source: https://bbs.archlinux.org/viewtopic.php?pid=1065196
Write a ConsoleProgram that calculates the tax, tip and total bill for us at a restaurant. The program should ask the user for the subtotal, and then calculate and print out the tax, tip and total.
Assume that tip is 20% and tax is 8% and that tip is on the original amount, not on the taxed amount.
/**
 * Program: Restaurant
 * -------------
 * This program helps restaurants calculate the bill
 * taking into account tax and tip!
 */
public class Restaurant extends ConsoleProgram {

    // This declares a constant... it can never change value.
    private static final double TAX = 0.08; // Tax is 8%
    private static final double TIP = 0.20; // Tip is 20%

    public void run() {
        // store the raw cost of the meal.
        double cost = readDouble("What was the meal cost? $");
        double tax = cost * TAX;
        double tip = cost * TIP;
        double total = cost + tax + tip;
        println("Tax: $" + tax);
        println("Tip: $" + tip);
        println("Total: $" + total);
    }
}
Source: http://web.stanford.edu/class/cs106a/examples/restaurant.html
I ran:

sudo apt-get install sense-hat

and got the following error:

Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package sense-hat

I'm not sure what happened to the repos. Could anyone give me some suggestions? Thanks

BTW: I also tried to install via pip, and it looks successfully installed, but when I import it in python:

>>> from sense_hat import SenseHat

I got the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.5/dist-packages/sense_hat/__init__.py", line 2, in <module>
    from .sense_hat import SenseHat, SenseHat as AstroPi
  File "/usr/local/lib/python3.5/dist-packages/sense_hat/sense_hat.py", line 10, in <module>
    import RTIMU # custom version
ImportError: No module named 'RTIMU'

So what should I do then?
Source: https://lb.raspberrypi.org/forums/viewtopic.php?f=45&t=228363
Details
- Type:
Bug
- Status:
Closed
- Priority:
Major
- Resolution: Fixed
- Affects Version/s: 1.5.4
- Fix Version/s: 1.6-rc-1, 1.5.8, 1.7-beta-1
- Component/s: None
- Labels:None
- Environment:Java 1.6, Ubuntu and Windows XP
- Testcase included:
- Number of attachments :
Description
A static method with a default parameter is not found when it is imported statically and called without prefixing its class:
=== test.groovy ===
import static Settings.*
import static ConsoleUI.*
class Settings
{
static void initialize()
}
class ConsoleUI
{
static void writeln(String s, int delay = 0)
}
Settings.initialize()
=== Output ===
working
Caught: groovy.lang.MissingMethodException: No signature of method: static Settings.writeln() is applicable for argument types: (java.lang.String) values:
at Settings.initialize(test.groovy:8)
at test.run(test.groovy:20)
at test.main(test.groovy)
Exited: 256
Issue Links
- is depended upon by
GROOVY-4613 static import of method with default parameter value broken again: MissingMethodException
Activity
say.. if I have foo() and a static foo(x,y=1,z=2){}, then this patch would select the method, because it has default arguments. If that were a precompiled class, then we would have three foo methods, but none of them would be selected, because there is no exact match in the number of parameters and arguments. See ClassNode:1160
so I think a small modification to the patch is needed
I am not sure if I understand your scenario correctly. So, let me confirm:
So, let's say, there is the following class ConsoleUI and it is pre-compiled
class ConsoleUI {
    static void foo(x, y = 1, z = 2) {}
    /* When pre-compiled, 3 foo() are generated in the class file:
         1) foo(x, y, z) {}
         2) foo(x, y) { invokes foo(x, y, 2) }
         3) foo(x)    { invokes foo(x, 1, 2) }
    */
}
and there is another class like
import static ConsoleUI.*

class Settings {
    static void initialize() {
        foo("valX") // goes through and invokes version 3 of foo above
        foo()       // is not supposed to go through
    }
}

Settings.initialize()
So, now what is reported in current bug is that a call like foo("valX") does not go through. That goes through with the patch I have supplied. It invokes version 3 of foo() mentioned above. It matches that version of foo.
You have mentioned a foo() in your comment. If that is what you are expecting to go through, I don't think that is correct because ConsoleUI.foo() has a parameter "x" without a default value, so that method needs to be called with at least one parameter.
If I have not understood your comment, please elaborate the scenario and I will be glad to look into it.
rgds,
Roshan
better let us use the example
class X { static foo(x,y,z=1) {1} }
which produces foo(x,y,z) and foo(x,y){foo(x,y,1)}. If I have now:
import static X.*

foo(1,2)
foo(1)
foo()
then before the your patch none of these foo methods would have worked if X was not precompiled. With your patch each of these methods is linked to X. If X is precompiled, then only foo(1,2) is linked to X, because for the other two we won't find a method with a matching number of arguments. Your patch does not change anything here, because in a precompiled class there will be no default value.
That means with your patch foo(1,2) and addiionally foo(1) and foo() will be linked to X if X is not precompiled. If X is prcompiled then it is only foo(1,2).
This is a mismatch. So right, foo(1) and foo() are not supposed to go through, in the sense that they will fail with a MissingMethodException. But if X is precompiled, then the exception would report that ScriptXY#foo(int) and ScriptXY#foo() could not be found, while if X is not precompiled it will report X#foo(int) and X#foo(). I think that shouldn't happen.
I suggest you remove the number of default parameters from the number of overall parameters and check that the resulting number is lower than count. If parameters.length>count and count>nonDefaultParameters, then it is a match, because there will be a method with the matching number of arguments.
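The rule suggested here (a call with `count` arguments can reach the method whenever `count` lies between the number of non-default parameters and the total parameter count) can be sketched outside of Groovy's AST classes. The class and method names below are illustrative stand-ins, not the actual ClassNode API:

```java
public class DefaultParamMatch {

    /**
     * paramHasDefault[i] is true when parameter i has a default value
     * (the stand-in for Parameter.hasInitialExpression()). A call with
     * `count` arguments is a possible match if count is between the
     * number of mandatory parameters and the total parameter count.
     */
    static boolean isPossibleMatch(boolean[] paramHasDefault, int count) {
        int nonDefault = 0;
        for (boolean hasDefault : paramHasDefault)
            if (!hasDefault) nonDefault++;
        return count >= nonDefault && count <= paramHasDefault.length;
    }

    public static void main(String[] args) {
        boolean[] foo = {false, false, true};        // foo(x, y, z = 1)
        System.out.println(isPossibleMatch(foo, 2)); // foo(1,2): matches
        System.out.println(isPossibleMatch(foo, 0)); // foo(): no match
    }
}
```

This also handles the later `foo(x=100,y,z)` example, where the default parameter comes first: only the count of non-default parameters matters, not their position.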
I tested my patch with both pre-compiled and not pre-compiled versions of the X and it seems to be working fine. Have you tested and reported the mismatch in the previous comments or is it a code review kind of comment? If it is a code review kind of comment, you may have misread ( ! ..) conditions.
Just to make sure that we are on the same page version-wise - I am first trying to look into your comments with version 1.5.8. Are you also reporting it from the same version?
Now coming to details of why the patch is working, the check it performs is :-
if (method.isStatic() && count < method.getParameters().length) {
    int countOfParamsNotPassed = method.getParameters().length - count;
    boolean foundAParamWithNoDefualtVal = false;
    // if all arguments that have not been passed have default values, it is still a match
    for (int i = count; i <= method.getParameters().length - 1; i++) {
        if (!method.getParameters()[i].hasInitialExpression())
            foundAParamWithNoDefualtVal = true;
    }
    if (!foundAParamWithNoDefualtVal)
        return true;
}
For the call foo(1), this block sets foundAParamWithNoDefualtVal = true both in case of pre-compiled and not-pre-compiled cases and hence it does not report foo(1) as a match in either case (if foundAParamWithNoDefualtVal is true, it does not return true to report a match).
For pre-compiled X, is has no initial expression for 2nd parameter and hence foundAParamWithNoDefualtVal = true (pre-compiled classes have no default value/initial expression)
For not pre-compiled X also, 2nd param has no initial expression (foo(x,y,z=1)) and hence it also sets foundAParamWithNoDefualtVal = true.
Same explanation applies to foo() also. Also, I didn't notice any difference in the compiler errors it throws for pre-compiled X and not pre-compiled X.
rgds,
Roshan
Roshan, may I suggest for the future that you change your code style a little bit to avoid "if (!...)". Because this is at least the second time I did misread the patch... Just remove a bit of negation. I mean, you have "foundAParamWithNoDefualtVal" containing a "No", you have if(!method.getParameters()[i].hasInitialExpression()) containing the "if() not found.", that is sometimes hard to read, and if(!foundAParamWithNoDefualtVal) return true;, which is a double problem, because again you have "if() not found." and then you simply return a boolean constant value... You could have done for example "return !foundAParamWithNoDefualtVal;" instead, but that still looks like having a strange negation. Then there is another part in your code style that is a bit problematic: "i <= method.getParameters().length - 1;" usually that is written as "i < method.getParameters().length;"
And yes, it was just from the review, not from actually trying it out... and I looked at 1.7 only. Ok, let me review that again....
I did also overlook that we start with i=count and not i=0... sigh... ok, then let me show you this:
class Y { def foo(x=100,y,z) {x+y+z} }
import Y.*

assert foo(10,1) == 111
this code will still produce two methods, but this time it is not the last parameter that has a default, but the first parameter. If I did read your patch right this time, then you skip the first two parameters and so you will never know that there is a default value. Instead you will skip x and also y, because count is 2 and then check only z, which has no default value. So for this case your code won't work.
Ok, I will change the patch to take care of the code review comments and re-send them.
The example code that you have shown now has default values for parameters and then more parameters without default values. I did not know that it was allowed. I thought it was like varargs and default values could be supplied for only parameters at the end and no parameter without the default value could come after ones with default value. Hence I coded it the way I coded it. Thanks for correcting that assumption.
I will correct the patches for these 2 things and re-send.
Attaching the newer set of patches with the following changes based on the comments:
1) Conditions modified to not use negation.
2) No assumption that parameters with default values will only be specified at the end in parameter list of a static method.
rgds,
Roshan
much better now
Thank you. Trying negation had a side-advantage though. I have some insider information on one thing that you can get wrong in reviewing code
I had moved duplicate isStatic() calls to one place in hasPossibleStaticMethod. Missed doing that in 1.7 patch. So, just re-attaching that patch.
Sorry for a little clutter.
patch applied
Hi,
I am attaching a zip that has patches for all 3 affected groovy versions for this issue.
The patch has changes to the following 2 files:
1) src/main/org/codehaus/groovy/ast/ClassNode.java - Its hasPossibleStaticMethod() now handles finding the static methods having parameters with default values.
2) src/test/groovy/StaticImportTest.groovy - Added a few test cases for this issue.
Hope it helps.
Roshan
Source: http://jira.codehaus.org/browse/GROOVY-2746?focusedCommentId=154967&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
* Thomas Gleixner <tglx@linutronix.de> wrote:
[...]
> The following patch series refactors the setup related x86_quirks
> and the setup related paravirt hooks and puts them into an
> extensible platform_setup infrastructure to provide a proper base
> for adding the Moorestown modifications. As a side effect it also
> unifies time_32/64.c and removes some leftovers of the pre
> arch/x86 era.
>
> Note, this is not a replacement for paravirt_ops. It is just
> replacing the setup related paravirt stuff so it can be reused for
> other platforms though I have to say that it removes a fair amount
> of obscurity which was introduced by paravirt & Co.
>
> 47 files changed, 622 insertions(+), 808 deletions(-)

Very nice!

One small detail, before we spread out these patches. While looking at the patches i noticed that at places our new x86 init namespace is very long:

> +	platform_setup.timers.setup_percpu_clockev = platform_setup_noop;
> +	platform_cpuhotplug_setup.setup_percpu_clockev = platform_setup_noop;

I think we should shorten the name-space a bit - we'll use it in a _lot_ of places, so the shorter, the better and the easier to use. I'd suggest something like:

  x86_init.timers.init_percpu_clockev = x86_init_noop;
  x86_cpuhotplug_init.init_percpu_clockev = x86_init_noop;

( This also has the advantage that 'init' is the general term we use for kernel structure initialization - 'setup' is a more restrictive term we use related to bootloading, most of the time. )

An even shorter form would be to use 'x86' as a general template for platform details:

  x86.timers.init_percpu_ce = x86_init_noop;
  x86_cpuhotplug.init_percpu_ce = x86_init_noop;

this is even shorter, plus it allows us to put runtime details into this structure as well. Note that the fields themselves (init_percpu_clockev) already signal the 'init' property sufficiently. Plus 'ce' is an existing, well-known abbreviation for clockevents. (but 'clockev' would be good too - i might be pushing it)

What do you think?

	Ingo
Source: http://lkml.org/lkml/2009/8/22/34
From: Andreas Huber (ah2003_at_[hidden])
Date: 2004-04-26 04:40:20
Robert Ramey wrote:
> Message: 11
> Andreas Huber wrote:
>
>> 1. On compilers supporting ADL, it seems the user-supplied
>> serialization function for non-intrusive serialization
>
>> template<class Archive>
>> void serialize(Archive & ar, gps_position & g, const unsigned int
>> version);
>
>> could also be put into the namespace of the type that it serializes.
>> If this is correct it should be mentioned in the tutorial &
>> reference. If
>> not, there should be a rationale why not.
>
> I did try this in my environment VC 7.1 and it failed to work. Does
> anyone want to assert that it should or shouldn't work?
I'm unsure about VC7.1. It is supposed to work with conformant compilers as
detailed here:
However, the situation is different there as the called function only accepts
exactly one parameter. IIRC, if ADL works correctly it looks in all namespaces
of all function arguments for a match. I'll see whether I find this rule in
the standard.
>> 2. save/load naming:
> All functions that have to do with reading from an archive are named
> load*,
>> all functions that have to do with writing to an archive are named
>> save*. To me (non-native speaker) these names are generally
>> associated with persistence only. Since the library is more general
>> (one could use it for network communication, etc.) load/save are a
>> bit missleading. I'd rather see read/write or other names that more
>> clearly communicate what the functions are doing.
>
> I used save/load consistently specifically to distinguish from
> read/write which seemed to me too suggestive of persistence, files
> and streams.
Streams have nothing to do with persistence and files, do they? As I mentioned
below an archive *is* in some way a stream, so read/write seems more
appropriate, especially since standard stream classes also use this
terminology. Save/load seems *much* closer to persistence and files than
read/write is.
>> 3. archive naming:
>> The word archive also seems to be associated with persistence. I
>> think serializer would much better describe the functionality.
>> Another option I could live with is serialization_stream (I slightly
>> disagree with the assertion in archives.html that an archive as
>> modelled by this library is not a stream. It might not be what C++
>> folks traditionally understand a stream is but to me it definitely
>> is a more general stream, i.e. a stream of objects/things/whatever).
>
> Archive is perhaps a little too suggestive. I borrowed from MFC.
> Making changes would have a huge ripple effect through code, file
> names, namepace names and documentation. I don't find any proposals
> for alternate naming sufficiently compelling to justify this.
Global search/replace is your friend ;-). Seriously, if I'm the only one with
these concerns I rest my case now and will happily use your library as is.
>> 4. Object tracking 1: Seeming contradiction
>> I've had difficulty determining how exactly object tracking works. I
>> believe the associated docs are slightly contradictory:
>
>> - Under exceptions.html#pointer_conflict I read that the
>> pointer_conflict exception is thrown when we save a class member
>> through a pointer first
>> and then "normally" (i.e. through a reference).
>
> Correct.
>
>> - traits.html#tracking says "... That is[,] addresses of serialized
>> objects are tracked if and only if an object of the same type is
>> anywhere in the program serialized through a pointer."
>
> Correct.
>
>> The second sentence seems to contradict the first one in that it
>> does not specify that the sequence of saving is important
>
> I see no conflict here
Yeah, this was a misunderstanding on my part. Sorry for the noise!
>
>> (I assume that pointer_conflict exception is not thrown if we save
>> normally first and then through a pointer).
>
> Correct
>
>> Moreover, I don't see how the library could possibly
>> achieve the "anywhere in the program" bit.
>
> It is tricky and it does work. It works by instantiating a static
> variable. Static variables are constructed before main() is invoked.
> These static variables register themselves in global table when
> constructed. So even though a template is used "anywhere in the
> program" the entry in the global table exists before execution
> starts. I'm reluctant to include such information in the manual.
I agree that such information should probably not be in the tutorial but it
could certainly be in the reference.
>
>> 5. Object tracking 2: It should not be possible to serialize pointers
>> referencing objects of types that have track_never tracking type.
>
> LOL - tell that to library users. There are lots of things that
> users are convinced they have to do that I don't think are a good
> idea. As a library writer I'm not in position to enforce my ideas
> how users should use the library. On this particular point I would
> be very suspicious about such a usage but as practical matter I just
> have to include a warning and move on.
>
> special.html#objecttracking says:
>
> <quote>
> By definition,.
> </quote>
>
>> I think serializing untracked pointers almost never makes sense. If
>> it
>> does anyway the pointed-to object can always be saved by
>> dereferencing the pointer. Such an error should probably be
>> signalled with an exception.
>
> I don't understand this.
My point is that if a user absolutely needs to save something pointed to by an
untracked pointer then she can always do so by dereferencing the pointer first
(I highly doubt that this is necessary very often).
int i = 5;
int * pI = &i;
ar & *pI; // This works
ar & pI; // This doesn't (exception is thrown)
It makes the user much more aware what really happens if he saves something
pointed to by an untracked pointer.
This way it is also absolutely clear what the users responsibility is when
such a pointer is *loaded*. Since dereferencing a pointer that has an
undefined value is, um, undefined behavior the user must set the pointer
*before* dereferencing it and loading the value from the archive.
>> 6. Object tracking 3: One should be able to activate tracking on the
>> fly Rarely it makes sense to save even primitive types in tracked
>> mode (e.g. the example quoted in 6.). That's why I think setting the
>> tracking type
>> per type is not flexible enough. It should be possible to override
>> the default somehow. I could imagine to save such pointers as
>> follows:
>
>> class T
>> {
>> ...
>> int i;
>> int * pI; // always points to i
>> };
>
>> template<class Archive >
>> void T::save(Archive &ar) const
>> {
>> ar << track( i ); // save the int here, keep track of the pointer
>> ar << track( pI ); // only save a reference to the int
>> }
>
>> Since it is an error to save an untracked type by pointer even the
>> second line needs the track( ) call.
>
> A main function of using a trait is to avoid instantiation of code
> that
> won't ever be used. If I understand your proposal we would have to
> instantiate tracking code even if it isn't used.
Couldn't track() instantiate the code?
> But my main objection thing about this proposal is that it introduces
> a runtime modality into the type traits.
I agree that it's a bit of a hack but the current way of doing things
(wrapping all untracked types into new types) doesn't sound very appealing
either. It forces me to change my design. Only rarely but I still have to
change it. track() would allow me to save anything *as is*.
> This sounds like it would be very hard
> to debug and verify. I see lots of difficulties and very little
> advantage to trying to add this.
Since I haven't had the time to study the implementation I can only guess &
assume but I don't see how this is hard to debug and verify. As mentioned
under 5. above, it should be an error to save through an untracked pointer. I
would therefore be forced to *always* use track() whenever I want to save an
untracked pointer directly. This makes it immediately visible in the user code.
The (IMHO big) advantage is that I don't have to pull tricks to save something
through an untracked pointer.
>
>> 7. I think the enum tracking_type should rather be named
>> tracking_mode as the _type suffix suggests something else than an
>> enum.
>
> I don't think the change is sufficiently compelling to justify the
> hassle associated with this.
Again: Global search and replace is your friend ;-). Seriously, if nobody else
has concerns here I'm happy with it as is.
>> 8. I don't understand how function instantiation as described under
>> special.html#instantiation works. The description seems to indicate
>> that all possible archive classes need to be included in the files
>> of all types that are exported. Is that true?
>
> True. All archives that might be used have to be included. This is
> an artifact of using templates rather than virtual functions.
> Templates regenerate the code for each combination of serializable
> type and archive. This results in faster code - but more of it.
> There were strong objections raised in the first review about the
> fact it used virtual functions rather than templates - so it was
> changed to templates here.
>
>> 9. The library registers all kinds of stuff with static instances.
>> This precludes its use outside main().
>
> Hmmm - that would be news to me.
Maybe I haven't stated it clearly enough: Your library cannot be used to
save/load anything before main() is entered nor can it be used to save/load
anything after main() has been left. This stems from the fact that code
running before or after main() is ultimately always called from a constructor
or destructor of some static instance (threads aside for the moment). Since
the C++ standard does not guarantee anything about the sequence of
construction of these instances, code saving/loading stuff could well be
called *before* all the registering is done. OTOH, when I save something from
a destructor of a static instance, this could be done after you have
destructed a table containing registration information.
If I'm reading the standard correctly, then it does not currently allow us to
work around these issues *in any way*. Therefore, this limitation should be
mentioned in the docs.
Regards,
Andreas
Boost list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
Source: http://lists.boost.org/Archives/boost/2004/04/64676.php
Bug #5364open
Can't get keys to work over SVG elements
0%
Description
Hi. In my original stylesheet I have defined such keys:
<xsl:key <xsl:key
However I get an error when I try to use them:
code: "XTDE0640" message: "Definition of key Q{}lines-by-start is circular" name: "XError"
I've attempted to create a test case with the same keys:
Stylesheet: (see the CIRCULAR KEY template)
Note that there's an <svg> element in the HTML.
However in the test stylesheet I get a different error:
code: "FORG0001" message: "Invalid local name: 'ac:id1' (prefix='', uri='null')" name: "XError"
Maybe I've messed up the namespaces in SVG, but I don't see where.
Updated by Martynas Jusevicius 3 months ago
I get Invalid local name: 'ac:id1' (prefix='', uri='null') even if I don't use the keys.
So I wonder if the root of the problem is the ac:id1/ac:id2 attributes in SVG? Are custom namespaces not supported?
Updated by Martynas Jusevicius 3 months ago
Updated by Norm Tovey-Walsh 3 months ago
What browser are you using, and on what platform? I fear the underlying cause here may be that the browser doesn't build a DOM that has namespaces declared properly. HTML5 munges all the SVG and MathML elements so they don't have (or at least don't need to have) namespaces. It's possible that the antipathy towards namespaces exhibited by the browsers extends to ignoring attempts to use them in HTML5 documents. You might have better luck if you arrange for the server to assert that these are XHTML documents, but I don't know what other consequences that might have.
Updated by Michael Kay 2 months ago
I suspect the XTDE0640 means that we've gone into index construction when there's already a half-constructed index from a previous call, and the reason for that is a failure during the previous construction attempt. So the underlying error is being masked.
Updated by Martynas Jusevicius 2 months ago
Observed on Firefox 97.0.1 (64-bit) on Windows 10.
So far I've addressed this by replacing attributes in a custom namespace with data- attributes. The keys then work as expected.
Updated by Norm Tovey-Walsh 12 days ago
I think the invalid local name error happens if you serve the page up as HTML. If I serve the page up as XHTML, that error goes away. But I can't reproduce the circular key error either.
Updated by Norm Tovey-Walsh 12 days ago
- Status changed from New to AwaitingInfo
If you switch to XHTML, can you reproduce the bug in your test case?
Source: https://saxonica.plan.io/issues/5364
Barrier-OR packet.
#include <hsa.h>
Barrier-OR packet.
Definition at line 3108 of file hsa.h.
Signal used to indicate completion of the job.
The application can use the special signal handle 0 to indicate that no signal is used.
Definition at line 3142 of file hsa.h.
Array of dependent signal objects.
Signals with a handle value of 0 are allowed and are interpreted by the packet processor as dependencies not satisfied.
Definition at line 3131 of file hsa.h.
Packet header.
Used to configure multiple packet parameters such as the packet type. The parameters are described by hsa_packet_header_t.
Definition at line 3114 of file hsa.h.
Reserved.
Must be 0.
Definition at line 3119 of file hsa.h.
Definition at line 3124 of file hsa.h.
Definition at line 3136 of file hsa.h.
Source: http://doxygen.gem5.org/release/current/structhsa__barrier__or__packet__s.html
Could it be too bad to implement it as a tag?
Discussion in 'Java' started by Rhett Liu, Sep46
how do u invoke Tag b's Tag Handler from within Tag a's tag Handler?shruds, Jan 25, 2006, in forum: Java
- Replies:
- 1
- Views:
- 1,137
- John C. Bollinger
- Jan 27, 2006
Bad Transform or Bad Engine?Eric Anderson, Oct 4, 2005, in forum: XML
- Replies:
- 1
- Views:
- 463
- Peter Flynn
- Oct 5, 2005
integer >= 1 == True and integer.0 == False is bad, bad, bad!!!rantingrick, Jul 11, 2010, in forum: Python
- Replies:
- 44
- Views:
- 1,603
- Peter Pearson
- Jul 13, 2010
Matz says namespaces are too hard to implement - why?Stefan Rusterholz, Dec 22, 2007, in forum: Ruby
- Replies:
- 38
- Views:
- 488
- Rick DeNatale
- Dec 28, 2007
Source: http://www.thecodingforums.com/threads/could-it-be-too-bad-to-implement-it-as-a-tag.136915/
Change log (Visual Studio Tools for Unity, Windows)
Visual Studio Tools for Unity change log.
4.8.2.0
Released November 10, 2020
New Features
Integration:
Bug fixes
Integration:
- Fixed CodeLens message invalidation.
4.8.1.0
Released October 13, 2020
New Features
Evaluation:
- Added support for implicit conversion with invocations. Previously the evaluator enforced strict type checking, resulting in "Failed to find a match for method([parameters...])" warning messages.
Integration:
- Added UNT0018 diagnostic. You should not use System.Reflection features in performance critical messages like Update, FixedUpdate, LateUpdate, or OnGUI.
- Improved USP0003 and USP0005 suppressors, with support for all AssetPostprocessor static methods.
- Added USP0016 suppressor for CS8618. C# 8.0 introduces nullable reference types and non-nullable reference types. Initialization detection of types inheriting from UnityEngine.Object is not supported and will result in errors.
- Now using the same player and asmdef project generation mechanism for both Unity 2019.x and 2020.x+.
Bug fixes
Integration:
- Fixed unexpected completion for messages in comments.
4.8.0.0
Released September 14, 2020
Bug fixes
Integration:
- Fixed player project generation with Unity 2019.x.
4.7.1.0
Released August 5, 2020
New Features
Integration:
- Added namespace support to default templates.
- Updated Unity messages API to 2019.4.
- Added USP0013 suppressor for CA1823. Private fields with the SerializeField or SerializeReference attributes should not be marked as unused (FxCop).
- Added USP0014 suppressor for CA1822. Unity messages should not be flagged as candidates for the static modifier (FxCop).
- Added USP0015 suppressor for CA1801. Unused parameters should not be removed from Unity messages (FxCop).
- Added MenuItem support to the USP0009 suppressor.
Bug fixes
Integration:
4.7.0.0
Released June 23, 2020
New Features
Integration:
- Added support to persist solution folders when Unity is regenerating solution and projects.
- Added UNT0015 diagnostic. Detect incorrect method signature with the InitializeOnLoadMethod or RuntimeInitializeOnLoadMethod attribute.
- Added UNT0016 diagnostic. Using Invoke, InvokeRepeating, StartCoroutine or StopCoroutine with a first argument being a string literal is not type safe.
- Added UNT0017 diagnostic. SetPixels invocation is slow.
- Added support for block comment and indentation for Shader files.
Bug fixes
Integration:
- Do not reset selection when filtering messages in the Unity message wizard.
- Always use the default browser when opening Unity API documentation.
- Fixed USP0004, USP0006 and USP0007 suppressors with the following rules: suppress IDE0044 (readonly), IDE0051 (unused), CS0649 (never assigned) for all fields decorated with the SerializeField attribute. Suppress CS0649 (never assigned) for public fields of all types extending Unity.Object.
- Fixed generic type parameter checking for the UNT0014 diagnostic.
Evaluation:
- Fixed equality comparison with enums.
4.6.1.0
Released May 19, 2020
Bug fixes
Integration:
Warn if we are unable to create the messaging server on the Unity side.
Properly run analyzers during lightweight compilation.
Fixed an issue where a MonoBehaviour class created from the UPE did not match the name of the file.
4.6.0.0
Released April 14, 2020
New Features
Integration:
- Added support for CodeLens (Unity scripts and messages).
- Added UNT0012 diagnostic. Detect and wrap calls to coroutines in StartCoroutine().
- Added UNT0013 diagnostic. Detect and remove an invalid or redundant SerializeField attribute.
- Added UNT0014 diagnostic. Detect GetComponent() called with a non-Component or non-Interface type.
- Added USP0009 suppressor for IDE0051. Don't flag methods with the ContextMenu attribute, or referenced by a field with the ContextMenuItem attribute, as unused.
- Added USP0010 suppressor for IDE0051. Don't flag fields with the ContextMenuItem attribute as unused.
- Added USP0011 suppressor for IDE0044. Don't make fields with the ContextMenuItem attribute read-only.
- USP0004, USP0006 and USP0007 are now working for both SerializeReference and SerializeField attributes.
Bug fixes
Integration:
Evaluation:
- Fixed handling of aliased usings.
4.5.1.0
Released March 16, 2020
New Features
Integration:
Bug fixes
Integration:
- Fixed OnDrawGizmos/OnDrawGizmosSelected documentation.
Evaluation:
- Fixed lambda argument inspection.
4.5.0.1
Released February 19, 2020
Bug fixes
Integration:
- Fixed the UNT0006 diagnostic checking for incorrect message signatures. When inspecting types with multiple levels of inheritance, this diagnostic could fail with the following message: warning AD0001: Analyzer 'Microsoft.Unity.Analyzers.MessageSignatureAnalyzer' threw an exception of type 'System.ArgumentException' with message 'An item with the same key has already been added.'
4.5.0.0
Released January 22, 2020
New Features
Integration:
Bug fixes
Integration:
- Fixed project generation (the GenerateTargetFrameworkMonikerAttribute target was not always located correctly).
4.4.2.0
Released December 3, 2019
Bug fixes
Integration:
Fixed diagnostics with user-defined interfaces.
Fixed quick tooltips with malformed expressions.
4.4.1.0
Released November 6, 2019
New Features
Integration:
Added support for Unity background processes. (The debugger is able to auto-connect to the main process instead of a child process).
Added a quick tooltip for Unity messages, displaying the associated documentation.
Bug fixes
Integration:
Deprecated Features
Integration:
- Going forward, Visual Studio Tools for Unity will only support Visual Studio 2017+.
4.4.0.0
Released October 15, 2019
New Features
Integration:
4.3.3.0
Released September 23, 2019
Bug fixes
Integration:
- Fixed error and warning reporting for lightweight builds.
4.3.2.0
Released September 16, 2019
New Features
Integration:
- We've deepened the understanding that Visual Studio has for Unity projects by adding new diagnostics specific to Unity. We've also made the IDE smarter by suppressing general C# diagnostics that don't apply to Unity projects. For example, the IDE won't show a quick-fix to change an inspector variable to readonly, which would prevent you from modifying the variable in the Unity Editor.
UNT0001: Unity messages are called by the runtime even if they are empty, do not declare them to avoid unnecessary processing by the Unity runtime.
UNT0002: Tag comparison using string equality is slower than the built-in CompareTag method.
UNT0003: Usage of the generic form of GetComponent is preferred for type safety.
UNT0004: Update message is frame-rate dependent, and should use Time.deltaTime instead of Time.fixedDeltaTime.
UNT0005: FixedUpdate message is frame-rate independent, and should use Time.fixedDeltaTime instead of Time.deltaTime.
UNT0006: An incorrect method signature was detected for this Unity message.
UNT0007: Unity overrides the null comparison operator for Unity objects which is incompatible with null coalescing.
UNT0008: Unity overrides the null comparison operator for Unity objects which is incompatible with null propagation.
UNT0009: When applying the InitializeOnLoad attribute to a class, you need to provide a static constructor. InitializeOnLoad attribute ensures that it will be called as the editor launches.
UNT0010: MonoBehaviours should only be created using AddComponent(). MonoBehaviour is a component, and needs to be attached to a GameObject.
UNT0011: ScriptableObject should only be created using CreateInstance(). ScriptableObject needs to be created by the Unity engine to handle Unity message methods.
USP0001 for IDE0029: Unity objects should not use null coalescing.
USP0002 for IDE0031: Unity objects should not use null propagation.
USP0003 for IDE0051: Unity messages are invoked by the Unity runtime.
USP0004 for IDE0044: Fields with a SerializeField attribute should not be made readonly.
4.3.1.0
Released September 4, 2019
New Features
Evaluation:
- Added support for better type display, i.e. List<object> instead of List`1[[System.Object, <corlib...>]].
- Added support for pointer member access, i.e. p->data->member.
- Added support for implicit conversions in array initializers, i.e. new byte [] {1,2,3,4}.
4.3.0.0
Released August 13, 2019
New Features
Debugger:
- Added support for MDS protocol 2.51.
Integration:
Improved the "Attach To Unity instance" window with sort, search and refresh features. PID is now displayed even for local players (by querying listening sockets on the system to retrieve the owning process).
Added support for asmdef files.
Bug fixes
Integration:
- Fixed handling of malformed messages while communicating with Unity Players.
Evaluation:
Fixed handling of namespaces in expressions.
Fixed inspection with IntPtr types.
Fixed stepping issues with exceptions.
Fixed evaluation of pseudo identifiers (like $exception).
Prevent crash when dereferencing invalid addresses.
Fixed issue with unloaded appdomains.
4.2.0.1
Released July 24, 2019
New Features
Integration:
Added a new option to create any type of files from the Unity Project Explorer.
Improve diagnostic caching when using fast builds for Unity projects.
Bug fixes
Integration:
Fixed an issue when the file extension was not handled by any well-known editor.
Fixed support for custom extensions in the Unity Project Explorer.
Fixed saving settings outside of the main dialog.
Removed legacy Microsoft.VisualStudio.MPF dependency.
4.1.1.0
Released May 24, 2019
New Features
Integration:
- Updated MonoBehaviour API to 2019.1.
Bug fixes
Integration:
Fixed reporting warnings and errors to output when lightweight build is enabled.
Fixed lightweight build performance.
4.1.0.0
Released May 21, 2019
New Features
Integration:
Added support for the new batch API to reload projects faster.
Disabled the full build for Unity projects, in favor of using the IntelliSense errors and warnings. Indeed Unity creates a Visual Studio solution with class library projects that represent what Unity is doing internally. That being said, the result of the build in Visual Studio is never used or picked up by Unity as their compilation pipeline is closed. Building in Visual Studio is just consuming resources for nothing. If you need a full build because you have tools or a setup that depends on it, you can disable this optimization (Tools/Options/Tools for Unity/Disable the full build of projects).
Automatically show the Unity Project Explorer (UPE) when a Unity project is loaded. The UPE will be docked next to the Solution Explorer.
Updated project name extraction mechanism with Unity 2019.x.
- Added support for Unity packages in the UPE. Only Referenced packages (using manifest.json in the Packages folder) and Local packages (embedded in the Packages folder) are visible.
Project Generation:
- Preserve external properties when processing the solution file.
Evaluation:
Added support for alias-qualified names (only the global namespace for now). So the expression evaluator is now accepting types using the form global::namespace.type.
- Added support for the pointer[index] form, which is semantically identical to the pointer dereference *(pointer+index) form.
Bug fixes
Integration:
Fixed dependency issues with Microsoft.VisualStudio.MPF.
Fixed UWP player attach, without any project loaded.
Fixed automatic asset database refresh when Visual Studio was not yet attached.
Fixed theme issues with labels and checkboxes.
Debugger:
- Fixed stepping with static constructors.
4.0.0.5
Released February 27, 2019
Bug fixes
Integration:
Fixed Visual Studio version detection with the setup package.
Removed unused assemblies from the setup package.
4.0.0.4
Released February 13, 2019
New Features
Integration:
Added support to properly detect Unity processes during installation and allow setup engine to better handle file locks.
- Updated the ScriptableObject API.
4.0.0.3
Released January 31, 2019
New Features
Project Generation:
- Public and serialized fields will no longer cause warnings. We've auto-suppressed the CS0649 and IDE0051 compiler warnings in Unity projects that created these messages.
Integration:
Improved the user experience for displaying Unity editor and player instances (windows are now resizable, use uniform margins and display a resizing grip). Added Process-Id information for Unity editors.
- Updated the MonoBehaviour API.
Evaluation:
Added support for local functions.
Added support for pseudo variables (exception and object identifiers).
Bug fixes
Integration:
Fixed an issue with moniker images and themes.
Only write to Output Window while debugging, when auto-refreshing asset database.
Fixed UI delays with the MonoBehaviour wizard filtering.
Debugger:
- Fixed reading custom attribute on named arguments when using old protocol versions.
4.0.0.2
Released January 23, 2019
Bug fixes
Integration:
Fixed experimental build generation.
Fixed project file event handling to minimize UI-thread pressure.
Fixed completion provider with batched text changes.
Debugger:
- Fixed the display of user debug messages to the attached debugger.
4.0.0.1
Released December 10, 2018
New Features
Evaluation:
Replaced NRefactory in favor of Roslyn for expression evaluation.
Added support for pointers: dereference, casting and pointer arithmetic (both Unity 2018.2+ and the new runtime are required for this).
Added support for array pointer view (like in C++). Take a pointer expression then append a comma and the number of elements you want to see.
Added support for async constructs.
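As a sketch of the array pointer view syntax mentioned above (the variable name is hypothetical), appending a comma and an element count to a pointer expression in a watch window expands it like a fixed-size array, similar to the C++ debugger:

```
p          // single value pointed to by p
p,10       // first 10 elements starting at p, displayed as an array
```

This requires Unity 2018.2+ and the new scripting runtime, as noted above.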
Integration:
- Added support for automatically refreshing Unity's asset database on save. This is enabled by default and will trigger a recompilation on the Unity side when saving a script in Visual Studio. You can disable this feature in Tools\Options\Tools for Unity\Refresh Unity's AssetDatabase on save.
Bug fixes
Integration:
Fixed bridge activation when Visual Studio is not selected as the preferred external editor.
Fixed expression evaluation with malformed or unsupported expressions.
4.0.0.0
Released December 4, 2018
New Features
Integration:
Added support for Visual Studio 2019 (you need at least Unity 2018.3 for being able to use Visual Studio 2019 as an external script editor).
Adopted the Visual Studio image service and catalog, with full support for HDPI scaling, pixel perfect images and theming.
Deprecated features
Integration:
Going forward, Visual Studio Tools for Unity will only support Unity 5.2+ (with Unity’s built-in Visual Studio integration).
Going forward, Visual Studio Tools for Unity will only support Visual Studio 2015+.
Removed legacy language service, error list and status bar.
Removed the Quick MonoBehaviour Wizard (in favor of the dedicated IntelliSense support).
3.9.0.3
Released November 28, 2018
Bug fixes
Integration:
- Fixed project reloading and IntelliSense issues when adding or removing scripts located in the very first project.
3.9.0.2
Released November 19, 2018
Bug fixes
Debugger:
- Fixed a deadlock in the library used to communicate with Unity’s debugger engine, making Visual Studio or Unity freeze, especially when hitting ‘Attach to Unity’ or restarting the game.
3.9.0.1
Released November 15, 2018
Bug fixes
Integration:
- Fixed Unity plugin activation when another default editor was selected.
3.9.0.0
Released November 13, 2018
Bug fixes
Project Generation:
- Rolled back the workaround for a Unity performance bug that has been fixed by Unity.
3.8.0.7
Released September 20, 2018
Bug fixes
Debugger:
- (Backported from 3.9.0.2) Fixed a deadlock in the library used to communicate with Unity’s debugger engine, making Visual Studio or Unity freeze, especially when hitting ‘Attach to Unity’ or restarting the game.
3.8.0.6
Released August 27, 2018
Bug fixes
Integration:
- Fixed reloading of projects and solution.
3.8.0.5
Released August 20, 2018
Bug fixes
Integration:
- Fixed project monitoring subscription disposal.
3.8.0.4
Released August 14, 2018
New Features
Evaluation:
Added support for pointer values.
Added support for generic methods.
Bug fixes
Integration:
- Smart reload with multiple projects changed.
3.8.0.3
Released July 24, 2018
Bug fixes
Project Generation:
- (Backported from 3.9.0.0) Rolled back the workaround for a Unity performance bug that has been fixed by Unity.
3.8.0.2
Released July 7, 2018
Bug fixes
Project Generation:
- Transient workaround for a Unity performance bug: cache MonoIslands when generating projects.
3.8.0.1
Released June 26, 2018
New Features
Debugging:
Added support for UserLog and UserBreak commands.
Added lazy type-load support (optimizing the network load and debugger response latency).
Bug fixes
Evaluation:
- Improved binary-operator expression evaluation and method search.
3.8.0.0
Released May 30, 2018
New Features
Debugging:
Added support for displaying variables in async constructs.
Added support for processing nested types when setting breakpoints, to prevent warnings with compiler constructs.
Integration:
- Added support for textmate grammars for Shaders (the C++ workload is no longer needed for Shader code coloration).
Bug fixes
Project Generation:
- Do not convert portable pdb to mdb anymore when using the new Unity runtime.
3.7.0.1
Released May 7, 2018
Bug fixes
Installer:
- Fixed dependency issue when using experimental builds.
3.7.0.0
Released May 7, 2018
New Features
Debugging:
Added support for orchestrated debugging (debugging multiple players/editor with the same Visual Studio session).
Added support for Android USB player debugging.
Added support for UWP/IL2CPP player debugging.
Evaluation:
Added support for hexadecimal specifiers.
Improved watch window evaluation experience.
Bug fixes
Integration:
- Fixed usage of exception settings.
Project Generation:
- Exclude package manager compilation units from generation.
3.6.0.5
Released March 13, 2018
New Features
Project Generation:
- Added support for the new project generator in Unity 2018.1.
Bug fixes
Integration:
- Fixed handling corrupted states with custom projects.
Debugger:
- Fixed setting the next statement.
3.6.0.4
Released March 5, 2018
Bug fixes
Project Generation:
- Fixed Mono version detection.
Integration:
- Fixed timing issues with 2018.1 and plugin activation.
3.6.0.3
Released February 23, 2018
New Features
Project Generation:
- Added support for .NET Standard.
Bug fixes
Project Generation:
- Fixed Unity target framework detection.
Debugger:
- Fixed breaking on exceptions that are thrown outside of user code.
3.6.0.2
Released February 7, 2018
New Features
Integration:
- Updated the UnityMessage API surface for 2017.3.
Bug fixes
Integration:
- Only reload projects on external change (with throttling).
3.6.0.1
Released January 24, 2018
Bug fixes
Integration:
Fixed automatic pdb to mdb debug symbol conversion.
Fixed indirect call to EditorPrefs.GetBool impacting the inspector while trying to change array size.
3.6.0.0
Released January 10, 2018
New Features
Project Generation:
- Added support for 2018.1 MonoIsland reference model.
Evaluation:
- Added support for $exception identifier.
Debugger:
- Added support for DebuggerHidden/DebuggerStepThrough attributes with the new Unity runtime.
Wizards:
- Introduce 'Latest' version for wizards.
Bug fixes
Project Generation:
- Fixed project guid computation for player projects.
Debugger:
- Fixed a race in handling breaking events.
Wizards:
- Refresh the Roslyn context before inserting a method.
3.5.0.3
Released January 9, 2018
Bug fixes
Integration:
- Fixed automatic pdb to mdb debug symbol conversion.
3.5.0.2
Released December 4, 2017
New Features
Integration:
- Unity projects are now automatically reloaded in Visual Studio when you add or remove a script from Unity.
Debugger:
Added an option to use the Mono debugger shared by Xamarin and Visual Studio for Mac to debug the Unity Editor.
Added support for portable debug symbol files.
Bug fixes
Integration:
Fixed setup dependencies issues.
Fixed Unity API help menu not showing.
Project Generation:
Fixed player project generation when working on a UWP game with the IL2CPP/.NET 4.6 backend.
Fixed extra .dll extension wrongly added to the assembly filename.
Fixed usage of a specific project API compatibility level instead of the global one.
Do not force the AllowAttachedDebuggingOfEditor Unity flag as the default is now 'true'.
3.4.0.2
Released September 19, 2017
New Features
Project Generation:
Added support for assembly.json compilation units.
Stopped copying Unity assemblies to the project folder.
Debugger:
Added support for setting the next statement with the new Unity runtime.
Added support for Decimal type with the new Unity runtime.
Added support for implicit/explicit conversions.
Bug fixes
Evaluation:
Fixed array creation with implicit size.
Fixed compiler generated items with locals.
Project Generation:
- Fixed reference to Microsoft.CSharp for 4.6 API level.
3.3.0.2
Released August 15, 2017
Bug fixes
Project Generation:
- Fixed the Visual Studio solution generation on Unity 5.5 and previous versions.
3.3.0.0
Released August 14, 2017
New Features
Evaluation:
Added support for creating structs with the new Unity runtime.
Added minimalist support for pointers.
Bug fixes
Evaluation:
Fixed method invocation on primitives.
Fixed field evaluation with types marked with BeforeFieldInit.
Fixed non-supported calls with binary operators (subtract).
Fixed issues when adding items to the Visual Studio Watch.
Project Generation:
Fixed assembly name references with mcs.rsp files.
Fixed defines with API levels.
3.2.0.0
Released May 10, 2017
New Features
Installer:
- Added support for cleaning the MEF cache.
Bug fixes
Code Editor:
Fixed classification/completion with custom attributes.
Fixed flickering with Unity messages.
3.1.0.0
Released April 7, 2017
New Features
Debugger:
- Added support for the new Unity runtime (with .NET 4.6 / C# 6 compatibility).
Project Generation:
Added support for .NET 4.6 profile.
Added support for mcs.rsp files.
Always enable unsafe compilation switch when Unity 5.6 is used.
Added support for "Player" project generation when using Windows Store platform and il2cpp backend.
Bug fixes
Code Editor:
- Fixed caret position after inserting method with auto-completion.
Project Generation:
- Removed assembly version post-processing.
3.0.0.1
Released March 7, 2017
This version includes all new features and bug fixes introduced with 2.8.x series.
2.8.2.0 - 3.0 Preview 3
Released January 25, 2017
Bug fixes
Project Generation:
- Fixed regression where Plugins projects were referenced twice, first as a binary DLL then as a project reference.
2.8.1.0 - 3.0 Preview 2
Released January 23, 2017
Bug fixes
Code Editor:
- Fixed a crash when starting an attribute declaration without brace completion.
Debugger:
Fixed function breakpoints with coroutines under the new Unity compiler/runtime.
Added warning in case of an unbindable breakpoint (when no corresponding source-location is found).
Project Generation:
Fixed csproj generation with special/localized characters.
Fixed references outside of Assets, such as Library (like the Facebook SDK).
Misc:
Added check to prevent Unity from running when installing or uninstalling.
Switched to https to target the remote Unity documentation.
2.8.0.0 - 3.0 Preview
Released November 17, 2016
New Features
General:
Added Visual Studio 2017 installer support.
Added Visual Studio 2017 extension support.
Added localization support.
Code Editor:
Added C# IntelliSense for Unity messages.
Added C# code coloration for Unity messages.
Debugger:
Added support for is, as, direct cast, default, and new expressions.
Added support for string concat expressions.
Added support for hexadecimal display of integer values.
Added support for creating new temporary variables (statements).
Added support for implicit primitive conversions.
Added better error messages when a type is expected or not found.
Project Generation:
Removed the CSharp suffix from the project names.
Removed reference to a system wide msbuild targets file.
Wizards:
Added support for Unity messages in non Behaviour types such as Editor or EditorWindow.
Switched to Roslyn to inject and format Unity messages.
Bug fixes
Debugger:
Fixed a bug crashing Unity when evaluating generic types.
Fixed handling of nullable types.
Fixed handling of enums.
Fixed handling of nested member types.
Fixed collection indexer access.
Fixed support for debugging iterator frames with new C# compiler.
Project Generation:
Fixed bug that prevented compilation when targeting the Unity Web player.
Fixed bug that prevented compilation when compiling a script with a web encoded file name.
2.3.0.0
Released July 14, 2016
New Features
General:
Added an option to disable Unity console logs in Visual Studio's error list.
Added an option to allow generated project properties to be modified.
Debugger:
- Added Text, XML, HTML and JSON string visualizers.
Wizards:
- Added missing MonoBehaviors.
Bug fixes
General:
Fixed a conflict with ReSharper that prevented controls inside Visual Studio settings from being displayed.
Fixed a conflict with Xamarin that prevented debugging in some cases.
Debugger:
Fixed an issue that caused Visual Studio to freeze when debugging.
Fixed an issue with function breakpoints in Visual Studio 2015.
Fixed several expression evaluation issues.
2.2.0.0
Released February 4, 2016
New Features
Wizards:
Added smart search in the Implement MonoBehavior wizard.
Made wizards context aware; for example, NetworkBehavior messages are only available when working with a NetworkBehavior.
Added support for NetworkBehavior messages in the wizards.
UI:
Added an option to configure the visibility of MonoBehavior messages.
Removed Visual Studio property pages that are not relevant to Unity projects.
Bug fixes
Project generation:
Fixed references to UnityEngine and UnityEditor on Unity 4.6.
Fixed generation of project files when Unity is running on OSX.
Fixed handling of project names containing hashmark (#) characters.
Restricted generated projects to C# 4.
Debugger:
Fixed an issue with expression evaluation when debugging inside a Unity coroutine.
Fixed an issue that caused Visual Studio to freeze when debugging.
UI:
- Fixed an incompatibility with the Tabs Studio Visual Studio extension.
Installer:
Support machine-wide installation of VSTU (install for all users) by creating HKLM registry entries.
Fixed issues with uninstallation of VSTU when the same version of VSTU is installed for multiple different versions of Visual Studio. For example, when VSTU 2015 2.1.0.0 and VSTU 2013 2.1.0.0 were both installed.
2.1.0.0
Released September 8, 2015
New Features
- Support for Unity 5.2
Bug fixes
Display menu items on Unity < 4.2
An error message is no longer displayed when Visual Studio locks XML IntelliSense files.
Handle <<When Changed>> conditional breakpoints when the conditional argument is not a boolean value.
Fixed references to UnityEngine and UnityEditor assemblies for Windows Store apps.
Fixed error when stepping in the debugger: Unable to step, general exception.
Fixed hit-count breakpoints in Visual Studio 2015.
2.0.0.0
Released July 20, 2015
Bug fixes
Unity Integration:
Fixed the conversion of debug symbols created with Visual Studio 2015 when importing a DLL and its debug symbols (PDB).
Always generate MDB files when importing a DLL and its debug symbols (PDB), except when an MDB file is also provided.
Fixed pollution of the Unity project directory with an obj directory.
Fixed generation of references to System.Xml.Link and System.Runtime.Serialization.
Added support for multiple subscribers to the project file generation API hooks.
Always complete project file generation even when one of the files to be generated is locked.
Added support for * wildcards in the extension filter when specifying files to be included in the C# project.
Visual Studio integration:
Fixed a compatibility issue with the Productivity Power Tools.
Fixed generating MonoBehaviors around events and delegates declarations.
Debugger:
Fixed a potential freeze when debugging.
Fixed an issue where locals would not be displayed in certain stack frames.
Fixed inspecting empty arrays.
1.9.9.0 - 2.0 Preview 2
Released April 2, 2015
New features
Unity Project Explorer:
Automatically rename class when renaming a file in the Unity Project Explorer (See Options dialog).
Automatically select newly created scripts in the Unity Project Explorer.
Track the active script in the Unity Project Explorer (See Options dialog).
Dual-synchronize the Visual Studio Solution Explorer (See Options dialog).
Adopt Visual Studio icons in Unity Project Explorer.
Debugger:
Select the active debug target from a list of saved or recently-used debug targets (See Options dialog).
Create function breakpoints on MonoBehavior methods and apply them to multiple MonoBehavior classes.
Support Make Object ID in the debugger.
Support breakpoint hit count in the debugger.
Support break-on-exception in the debugger (Experimental. See Options Dialog).
Support creation of objects and arrays when evaluating expressions in the debugger.
Support null comparison when evaluating expressions in the debugger.
Filter out obsolete members in debugger watch windows.
Installer:
Optimized Visual Studio Tools for Unity extension registration.
Install Visual Studio Tools for Unity package for Unity 5.
Documentation: Improve performance of documentation generation.
Wizards: Support new MonoBehavior methods for Unity 4.6 and Unity 5.
Unity: Lookup unsafe flags and custom defines in .rsp files during project file generation.
UI: Added Visual Studio Tools for Unity Options dialog in Visual Studio.
Bug fixes
Unity Project Explorer:
Refresh the Unity Project Explorer after files are moved or renamed from the Visual Studio Solution Explorer.
Preserve selections when renaming files in the Unity Project Explorer.
Prevent automatic expand and collapse when files are double clicked in the Unity Project Explorer.
Ensure that newly selected files are visible in the Unity Project Explorer.
Debugger:
Prevent a possible Visual Studio freeze when evaluating expressions in the debugger.
Ensure that method invocations happen on the proper domain in the debugger.
Unity:
Correct the location of UnityVS.OpenFile with Unity 5.
Correct the location of pdb2mdb with Unity 5.
Prevent a possible exception during project file generation.
Prevent a possible freeze when running Unity on OSX.
Handle internal exceptions.
Send Unity console logs to the VS error list.
Documentation: Correct documentation generation for the new Unity documentation.
Project: Move and rename Unity .meta files when needed, even in folders.
Wizards: Correct the order of MonoBehavior method parameters when generating code.
UI: Support Visual Studio themes for context menu and icons.
1.9.8.0 - 2.0 Preview
Released November 12, 2014
New features
Support for Visual Studio 2015.
Code Coloration for Unity shaders in Visual Studio 2015.
Improved visualization of values when debugging:
Better visualization for ArrayLists, Lists, Hashtables and Dictionaries.
Show Non-Public members and Static members as categories in watch and local views.
Improved display of Unity's SerializedProperty to only evaluate the value field valid for the property.
DebuggerDisplayAttribute support for classes and structs.
DebuggerTypeProxyAttribute support.
Made the insertion of MonoBehaviour methods via our wizards respect the user's coding conventions.
Implement support for Compile Time Text Templates in UnityVS generated projects.
Implement support for ResX resources in UnityVS generated projects.
Support opening shaders in Visual Studio from Unity.
Bug fixes
Cleanup sockets before starting the game in Unity after Attach and Play was triggered in Visual Studio. This fixes some issues with the stability of the connection between Unity and VS when using Attach and Play.
Avoid calling methods in Unity's scripting engine debugger interface that are prone to freeze Unity. This fixes the Unity freeze when attaching the debugger.
Fix displaying of callstacks when no symbols are available.
Do not register the log callback if we don't have to.
1.9.2.0
Released October 9, 2014
New features
Improve detection of Unity players.
When using our file opener, make Unity pass the line number as well as the file name.
Default to the online Unity documentation if there's no local documentation.
Bug fixes
Fix potential Unity crash when hitting a breakpoint after a domain reload.
Fix exceptions shown in the Unity console when closing our Configuration or About windows, after a domain reload.
Fix detection of 64-bit Unity running locally.
Fix filtering of MonoBehaviours per Unity version in wizards.
Fix bug where all assets were included in the project files if the extension filter was empty.
1.9.1.0
Released September 22, 2014
New features
Optimize binding breakpoint to source locations.
Support for overloaded methods in the Expression Evaluation of the debugger.
Support for boxing primitives and value types in the Expression Evaluation of the debugger.
Support recreating the C# local variables environment when debugging anonymous methods.
Delete and rename .meta files when deleting or renaming files from Visual Studio.
Bug fixes
Fix handling of Visual Studio themes. Previously, dialogs on black themes could appear empty.
Fix Unity freeze when connecting the debugger while Unity is recompiling.
Fix breakpoints when debugging remote editors or players compiled on another system.
Fix a possible Visual Studio crash when a breakpoint is hit.
Fix breakpoints binding to avoid breakpoints showing as unloaded.
Fix handling of variable scope in the debugger to avoid live variables that appear out of scope.
Fix lookup of static members in the Expression Evaluation of the debugger.
Fix displaying of types in the Expression Evaluation of the debugger to show static fields and properties.
Fix generation of solution when the Unity project names includes special characters that Visual Studio forbids (Connect issue #948666).
Fix the Visual Studio Tools for Unity package to immediately stop sending console events after the option has been unchecked (Connect issue #933357).
Fix detection of references to properly regenerate references to new APIs like UnityEngine.UI in the UnityVS generated projects.
Fix installer to require that Visual Studio is closed before installation to avoid corrupted installations.
Fix installer to install the Unity Reference Assemblies as a proper standalone component, shared between all versions of VSTU.
Fix opening scripts with VSTU in 64-bit versions of Unity.
1.9.0.0
Released July 29, 2014
New features
In the Attach Unity Debugger window, add the ability to enter a custom IP and port to debug.
Add configuration option to set Unity to run in the background or not.
Add configuration option to generate solution and project files or project files only.
Startup target: choose to Attach to Unity or Attach to Unity and Play.
Display of multi-dimensional arrays in the debugger.
Handle new Unity Player debugging ports.
Handle references to new Unity assemblies like Unity's 4.6 GUI assemblies.
Deconstruct closures to properly display local variables when debugging.
Deconstruct generated iterator variables into arguments when debugging.
Preserve Unity Project Explorer's state after a project reload.
Add a command to synchronize the Unity Project Explorer with the current document.
Bug fixes
Fix conditional breakpoints whose conditions are set before starting the debugger.
Fix references to UnityEngine to avoid warnings.
Fix parsing versions for Unity betas.
Fix issue where variables would not appear in the local variables window when hitting a breakpoint or stepping.
Fix variables tooltips in Visual Studio 2013.
Fix generation of the IntelliSense documentation for Unity 4.5.
Fix the Unity / Visual Studio communication after a domain reload (play/stop in Unity).
Fix handling of parts of Visual Studio themes.
Important
C# being the predominant language in the Unity ecosystem (the new Sample Assets are in C#, and the Unity documentation defaults to C#), we removed our basic support for UnityScript and Boo to better focus on the C# experience. As a result, VSTU solutions are now C# only and are much faster to load.
1.8.2.0
Released January 7, 2014
New features
Work around an issue in Unity's scripting engine's network layer on Mavericks for remote discovery of editors.
Handle new ports to discover remote Unity players.
Reference the UnityEngine assembly specific to the current build target.
Add setting to filter files to include in generated projects.
Add setting to disable sending console logs to Visual Studio error list. This is useful if you're using PlayMaker or Console Pro as there could be only one callback registered in Unity to receive console logs.
Add setting to disable the generation of mdb debug symbols. This is useful if you're generating the mdb yourself.
Bug fixes
Fix a regression when files opened in VS from Unity >= 4.2 would lose IntelliSense.
Fix our VS dialogs to handle custom themes.
Fix closing the context menu of the UPE.
Prevent crash in Unity when the version-specific generated assembly is out of sync.
1.8.1.0
Released November 21, 2013
New features
Adjusted the MonoBehaviour wizards with Unity 4.3 APIs.
MonoBehaviour wizards now filter Unity APIs depending on the version you use.
Add a reference to System.Xml.Linq to the projects for Unity > 4.1.
Prettify our calls to Debug.Log to not include the beginning of the stacktrace in the message.
Bug fixes
Fixed a bug where we would interfere with the default handling of JavaScript files in Visual Studio.
Fixed a white pixel appearing in VS, for real this time.
Fixed deletion of the UnityVS.VersionSpecific assembly if it's marked as read-only by an SCM.
Fixed exceptions when creating sockets in the UnityVS package.
Fixed a crash in Visual Studio when loading stock images from Visual Studio assemblies.
Fixed a bug in the generation of the UnityVS.VersionSpecific for source builds of Unity.
Fixed a possible freeze when opening a socket in the Unity package.
Fixed the handling of Unity project with a dash (-) in their name.
Fixed opening scripts from Unity to not confuse the ALT+TAB order for Unity 4.2 and above.
1.8.0.0
Released September 24, 2013
New features
Drastically improved debugger connection speed.
Automatically handle navigation to file and line on Unity 4.2 and above.
Conditional breakpoints.
Project file generator now handles T4 templates.
Update MonoBehaviour wizards with new APIs.
IntelliSense documentation in C# for Unity types.
Arithmetic and logical expressions evaluation.
Better discovery of remote editors for the remote debugging preview.
Bug fixes
Fixed a bug where we would leak a thread in VS after disconnecting the debugger.
Fixed a white pixel appearing in VS.
Fixed the handling of clicks on the status bar icon.
Fixed the generation of references with assemblies in Plugins folders.
Fixed creation of sockets from the UnityVS package in case of exceptions.
Fixed the detection of new versions of UnityVS.
Fixed the prompt of the license manager when the license expired.
Fixed a bug that could render the process list empty in the attach debugger to process window of VS.
Fixed changing values of Booleans in the local view.
1.2.2.0
Released July 9, 2013
Bug fixes
Handle fully qualified names in expression evaluator.
Fixed a freeze related to exception handling where the Unity scripting engine is sending us incorrect stackframe data.
Fixed build process for Web targets.
Fixed an error that could happen if Visual Studio was started and that a deleted file was in the list of files to open at startup.
Fixed UnityVS.OpenFile to handle non script files, like compiled shaders.
We now reference Boo.Lang and UnityScript.Lang from all the C# projects.
Fixed generation of references in projects if the project has special characters.
Workaround a VS issue where method calls to disposed projects would trigger multiple NullReferenceException MessageBox.
Fixed handling of Unity 4.2 Beta assemblies.
1.2.1.0
Released April 9, 2013
Bug fixes
Fixed local deployment of Unity assemblies for code completion in the event of an IO error (such as read-only files, or files locked by Visual Studio).
Fixed a regression where opening a script from Unity would not focus the file if it was already opened in Visual Studio.
Fixed performance issue of the new exception handling.
Fixed binding of breakpoints in some external DLLs.
1.2.0.0
Released March 25, 2013
New features
Drastically improved debugger connection speed.
Optimized Unity Project Explorer for larger projects.
Honor the Visual Studio settings to break (or not) on handled and unhandled exceptions.
Honor the Visual Studio setting to call ToString on local variables.
Add new menu Debug -> Attach Unity debugger, which you can use to debug Unity players.
Preserve custom projects added to the UnityVS solution upon solution file generation.
Add new keyboard shortcut CTRL+ALT+M -> CTRL+H to display the Unity documentation for the Unity function or member at the caret position.
Take compiler response files (rsp) into account when compiling from Visual Studio.
Deconstruct compiler generated types to show variables when debugging generator methods.
Simplify the remote debugging by removing the need to configure a shared folder to Unity. Now you just need to have access to your Unity project from Windows.
Install a custom Unity profile as a standard .NET target profile. This fixes all false positives that ReSharper could show.
Work around a Unity scripting engine bug, so the debugger won't break on non properly registered threads.
Rework the file opener to avoid a race condition in VS where it claimed to be able to open files, while crashing on the file open request.
UnityVS now asks to refresh the build when VS builds the project, rather than on every file save.
Bug fixes
Fixed our custom .NET profile.
Fixed the theming integration, this fixes our issues with the VS 2012 dark theme.
Fixed quick behavior shortcut in VS 2012.
Fixed a stepping issue that could happen when debugging and a non-main thread would hit a breakpoint.
Fixed UnityScript and Boo completion of type aliases, such as int.
Fixed exception when writing a new UnityScript or Boo string.
Fixed exceptions in Unity menus when a solution was not loaded.
Fixed bug UVS-48: typing double quote sometimes produce error and break all function (code completion, syntax highlight etc).
Fixed bug UVS-46: Duplicated opened script file (UnityScript) when clicking on the Error List of Visual Studio.
Fixed bug UVS-42: Unity connectivity logo in the status bar doesn't handle mouse events in VS 2012.
Fixed bug UVS-44: CTRL+SHIFT+Q is not available in VS 2012 for Quick MonoBehaviours.
Fixed bug UVS-40: Selected items in the Unity Project Explorer are unreadable when the window is inactive in VS2012 "dark" theme.
Fixed bug UVS-39: Issue tokenizing escaped strings.
Fixed bug UVS-35: Invoke ToString on objects when inspecting variables.
Fixed bug UVS-27: Goto Symbol window inconsistency with "dark" theme in VS2012.
Fixed bug UVS-11: Locals in coroutines.
1.1.0.0 - Beta release
Released March 9, 2013
1.0.13.0
Released January 21, 2013
Bug fixes
Fixed a Visual Studio lockup that could happen if the target debuggee is sending invalid thread events. That would typically happen when debugging a remote Unity on OSX.
Fixed a Visual Studio lockup that could happen if an exception shuts down the debugger.
Fixed our MonoBehavior helpers when a C# MonoBehavior is in a namespace.
Fixed debugger tooltips for UnityScript in Visual Studio 2012.
Fixed project generation when only debug constants are changed from Unity.
Fixed keyboard navigation in the Unity Project Explorer.
Fixed UnityScript colorization for escaped strings.
Fixed our file opener to better guess the project name when used outside of Unity. That's necessary when the user uses a third-party file opener in Unity that delegates to UnityVS.
Fixed handling of long messages sent from Unity to UnityVS. Previously, long messages could crash the messaging component of UnityVS; as a consequence, UnityVS would sometimes fail to open a file from Unity.
1.0.12.0
Released January 3, 2013
Bug fixes
Fixed Visual Studio lockup that could happen when Visual Studio was deleting a breakpoint.
Fixed a bug where some breakpoints would not be hit after Unity recompiled game scripts.
Fixed the debugger to properly notify Visual Studio when breakpoints were unbound.
Fixed a registration issue that could prevent the Visual Studio debugger from debugging native programs.
Fixed an exception that could happen when evaluating UnityScript and Boo expressions.
Fixed a regression where changing the .NET API level in Unity would not trigger an update of the project files.
Fixed an API glitch where user code could not participate in the log callback handler.
1.0.11.0
Released November 28, 2012
New features
Official support of Unity 4.
Manipulation of scripts from the Unity Project Explorer.
Integration in Visual Studio's Navigate To window.
Parsing of Info console messages, so that clicking in the Error List takes you to the first stackframe with symbols.
Add an API to let users participate in the project generation.
Add an API to let users participate in the LogCallback.
Bug fixes
Fixed regression in the background of the Unity Project Explorer in Visual Studio 2012.
Fixed project generation for users of the full .NET profile.
Fixed project generation for users of the Web target.
Fixed project generation to include DEBUG and TRACE compilation symbols as Unity does.
Fixed crash when using special characters in our Goto Symbol window.
Fixed crash if we can't inject our icon in Visual Studio's status bar.
1.0.10.0
Released October 9, 2012
Bug fixes
Fixed the background of the Unity Project Explorer in Visual Studio 2010.
Fixed a Visual Studio freeze that could happen if UnityVS tried to attach the debugger to a Unity whose debugger interface previously crashed.
Fixed a Visual Studio freeze that could happen when a breakpoint was set and an AppDomain reload occurred.
Fixed how assemblies are retrieved from Unity to avoid locking files and confusing the Unity build process.
1.0.9.0
Released October 3, 2012
Bug fixes
Fixed project generation when the Unity project includes actual JavaScript assets.
Fixed error handling in expression evaluation.
Fixed setting new values to fields of value types.
Fixed possible side effects when hovering over expressions from the code editor.
Fixed how types are searched in loaded assemblies for expression evaluation.
Fixed bug UVS-21: Evaluation of assignment on Unity objects has no effect.
Fixed bug UVS-21: Invalid pointer when evaluating a method invocation to Unity Math API.
1.0.8.0
Released September 26, 2012
Bug fixes
Fixed the way our script opener acquired the path to the project to be sure that it is able to open both Visual Studio and the scripts.
Fixed a bug with breakpoints created while the debugging session was running that could cause Visual Studio to lock up.
Fixed how UnityVS is registered on Visual Studio 2010.
1.0.7.0
Released September 14, 2012
New features
Visual Studio 2012 support.
Bug fixes
Fixed generation of Editor and Plugins project files to match Unity's behavior.
Fixed the translation of .pdb symbols on Unity 4.
Important
Because of the Visual Studio 2012 support, we had to rename a few files and move some others around. The UnityVS package to import into Unity is now named either UnityVS 2010 or UnityVS 2012, for Visual Studio 2010 and Visual Studio 2012 respectively. This version also requires that the UnityVS project files be regenerated.
1.0.6.0 - Internal build
Released September 12, 2012
1.0.5.0
Released September 10, 2012
Bug fixes
Fixed generation of project files when scripts or shaders had an invalid xml character.
Fixed detection of Unity instances when Unity was connected to the Asset server. This triggered failures to open files from Unity and the automatic connection of the Visual Studio debugger.
1.0.4.0
Released September 5, 2012
New features
Automatic conversion of debug symbols in Unity.
If you have a .NET .dll assembly with its associated .pdb in your Asset folder, simply re-import the assembly and UnityVS will convert the .pdb into a debug symbols file that Unity's scripting engine understands, and you'll be able to step into your .NET assemblies from UnityVS.
Bug fixes
Fixed UnityVS crash while debugging caused by exceptions thrown by methods or properties inside Unity.
1.0.3.0
Released September 4, 2012
New features
New configuration option to disable the usage of UnityVS to open files from Unity.
Bug fixes
Fixed generation of references to UnityEditor for non editor projects.
Fixed definition of UNITY_EDITOR symbol for non editor projects.
Fixed random VS crash caused by our custom status bar.
1.0.2.0
Released August 30, 2012
Bug fixes
Fixed conflict with the PythonTools debugger.
Fixed references to Mono.Cecil.
Fixed bug in how scripting assemblies were retrieved from Unity with Unity 4 b7.
1.0.1.0
Released August 28, 2012
New features
Preview support for Unity 4.0 Beta.
Bug fixes
Fixed the inspection of properties throwing exceptions.
Fixed descending into base objects when inspecting objects.
Fixed blank dropdown list for the insertion point in the MonoBehavior wizard.
Fixed completion for dll inside the Asset folder for UnityScript and Boo.
1.0.0.0 - Initial release
Released August 22, 2012
https://docs.microsoft.com/en-us/visualstudio/gamedev/unity/change-log-visual-studio-tools-for-unity
Bob Foster wrote:
> Where did the "http://www.w3.org/2000/xmlns/" URI come from? The
> Namespaces In XML spec specifically says: "The prefix xmlns is used only
> for namespace bindings and is not itself bound to any namespace name."
>
> Am I missing some errata, or is there some complexity-multiplying going
> on here? It sure seems like a SAX feature that should be dropped should
> not be used to justify contradicting the namespaces recommendation.
You are missing an erratum to NS 1.0, and the NS 1.1 spec (as well as
DOM 2 which IIRC was the first to cite it).
See notably the NE05 erratum to Namespaces 1.0, and Namespaces 1.1 (towards the end of section 3).
--
Robin Berjon
http://sourceforge.net/p/sax/mailman/sax-devel/?viewmonth=200403&viewday=18
[Updated 15.11.2013: passing IV is required in the new PyCrypto]
The PyCrypto module seems to provide all one needs for employing strong cryptography in a program. It wraps a highly optimized C implementation of many popular encryption algorithms with a Python interface. PyCrypto can be built from source on Linux, and Windows binaries for various versions of Python 2.x were kindly made available by Michael Foord on this page.
My only gripe with PyCrypto is its documentation. The auto-generated API doc is next to useless, and this overview is somewhat dated and didn't address the questions I had about the module. It isn't surprising that a few modules were created just to provide simpler and better documented wrappers around PyCrypto.
In this article I want to present how to use PyCrypto for simple symmetric encryption and decryption of files using the AES algorithm.
Simple AES encryption
Here's how one can encrypt a string with AES:
from Crypto.Cipher import AES

key = '0123456789abcdef'
IV = 16 * '\x00'           # Initialization vector: discussed later
mode = AES.MODE_CBC
encryptor = AES.new(key, mode, IV=IV)

text = 'j' * 64 + 'i' * 128
ciphertext = encryptor.encrypt(text)
Since the PyCrypto block-level encryption API is very low-level, it expects your key to be either 16, 24 or 32 bytes long (for AES-128, AES-192 and AES-256, respectively). The longer the key, the stronger the encryption.
Having keys of exact length isn't very convenient, as you sometimes want to use some mnemonic password for the key. In this case I recommend picking a password and then using the SHA-256 digest algorithm from hashlib to generate a 32-byte key from it. Just replace the assignment to key in the code above with:
import hashlib

password = 'kitty'
key = hashlib.sha256(password).digest()
Keep in mind that this 32-byte key only has as much entropy as your original password. So be wary of brute-force password guessing, and pick a relatively strong password (kitty probably won't do). What's useful about this technique is that you don't have to worry about manually padding your password - SHA-256 will scramble a 32-byte block out of any password for you.
The next thing the code does is set the block mode of AES. I won't get into all the details, but unless you have some special requirements, CBC should be good enough for you.
We create a new AES encryptor object with Crypto.Cipher.AES.new, and give it the encryption key and the mode. Next comes the encryption itself. Again, since the API is low-level, the encrypt method expects your input to consist of an integral number of 16-byte blocks (16 is the size of the basic AES block).
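Since encrypt expects input whose length is a multiple of 16 bytes, callers must pad short inputs themselves. A minimal sketch of such a helper (pad16 is a hypothetical name, not part of PyCrypto), using the same space-padding the file functions later in this article use:

```python
BLOCK_SIZE = 16  # the AES block size in bytes

def pad16(data):
    """Pad a string with spaces up to a multiple of the AES block size.

    Hypothetical helper, not part of PyCrypto; it mirrors the
    space-padding used by the file-encryption code below.
    """
    remainder = len(data) % BLOCK_SIZE
    if remainder != 0:
        data += ' ' * (BLOCK_SIZE - remainder)
    return data

print(len(pad16('j' * 20)))  # 20 bytes padded up to 32
```

Space-padding is simple but lossy for data that may end in spaces; the file functions below sidestep that by storing the original size in a header.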
The encryptor object has an internal state when used in the CBC mode, so if you try to encrypt the same text with the same encryptor once again - you will get different results. So be careful to create a fresh AES encryptor object for any encryption/decryption job.
Decryption
To decrypt the ciphertext, simply add:
decryptor = AES.new(key, mode, IV=IV)
plain = decryptor.decrypt(ciphertext)
And you get your plaintext back again.
A word about the initialization vector
The initialization vector (IV) is an important part of block encryption algorithms that work in chained modes like CBC. For the simple example above I've ignored the IV (just using a buffer of zeros), but for a more serious application this is a grave mistake. I don't want to get too deep into cryptographic theory here, but it suffices to say that the IV is as important as the salt in hashed passwords, and the lack of correct IV usage led to the cracking of the WEP encryption for wireless LAN.
PyCrypto allows one to pass an IV into the AES.new creator function. For maximal security, the IV should be randomly generated for every new encryption and can be stored together with the ciphertext. Knowledge of the IV won't help the attacker crack your encryption. What can help him, however, is your reusing the same IV with the same encryption key for multiple encryptions.
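For example, a fresh random IV can be produced with the standard library's os.urandom (one common approach; the file-encryption function below happens to build its IV with random.randint instead):

```python
import os

# os.urandom draws from the OS's cryptographic random source;
# 16 bytes is the AES block size, which is what the IV must be.
iv = os.urandom(16)
print(len(iv))  # 16
```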
Encrypting and decrypting files
The following function encrypts a file of any size. It makes sure to pad the file to a multiple of the AES block length (16 bytes), and also handles the random generation of the IV.
import os, random, struct
from Crypto.Cipher import AES

def encrypt_file(key, in_filename, out_filename=None, chunksize=64*1024):
    """ Encrypts a file using AES (CBC mode) with the given key.

        key: The encryption key - a string that must be
            either 16, 24 or 32 bytes long.
        in_filename: Name of the input file
        out_filename: If None, '<in_filename>.enc' will be used.
        chunksize: Sets the size of the chunk which the function
            uses to read and encrypt the file. Larger chunk sizes
            can be faster for some files and machines. chunksize
            must be divisible by 16.
    """
    if not out_filename:
        out_filename = in_filename + '.enc'

    iv = ''.join(chr(random.randint(0, 0xFF)) for i in range(16))
    encryptor = AES.new(key, AES.MODE_CBC, iv)
    filesize = os.path.getsize(in_filename)

    with open(in_filename, 'rb') as infile:
        with open(out_filename, 'wb') as outfile:
            outfile.write(struct.pack('<Q', filesize))
            outfile.write(iv)

            while True:
                chunk = infile.read(chunksize)
                if len(chunk) == 0:
                    break
                elif len(chunk) % 16 != 0:
                    chunk += ' ' * (16 - len(chunk) % 16)

                outfile.write(encryptor.encrypt(chunk))
Since it might have to pad the file to fit into a multiple of 16, the function saves the original file size in the first 8 bytes of the output file (more precisely, the first sizeof(long long) bytes). It randomly generates a 16-byte IV and stores it in the file as well. Then, it reads the input file chunk by chunk (with chunk size configurable), encrypts the chunk and writes it to the output. The last chunk is padded with spaces, if required.
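The 8-byte header works because '<Q' in the struct module denotes a little-endian unsigned long long, which always packs to exactly 8 bytes:

```python
import struct

filesize = 1234567
header = struct.pack('<Q', filesize)     # little-endian unsigned long long

print(len(header))                       # 8
print(struct.unpack('<Q', header)[0])    # 1234567
```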
Working in chunks makes sure that large files can be efficiently processed without reading them wholly into memory. For example, with the default chunk size it takes about 1.2 seconds on my computer to encrypt a 50MB file. PyCrypto is fast!
Decrypting the file can be done with:
def decrypt_file(key, in_filename, out_filename=None, chunksize=24*1024):
    """ Decrypts a file using AES (CBC mode) with the given key. Parameters
        are similar to encrypt_file, with one difference: out_filename, if
        not supplied, will be in_filename without its last extension
        (i.e. if in_filename is 'aaa.zip.enc' then out_filename will be
        'aaa.zip')
    """
    if not out_filename:
        out_filename = os.path.splitext(in_filename)[0]

    with open(in_filename, 'rb') as infile:
        origsize = struct.unpack('<Q', infile.read(struct.calcsize('Q')))[0]
        iv = infile.read(16)
        decryptor = AES.new(key, AES.MODE_CBC, iv)

        with open(out_filename, 'wb') as outfile:
            while True:
                chunk = infile.read(chunksize)
                if len(chunk) == 0:
                    break
                outfile.write(decryptor.decrypt(chunk))

            outfile.truncate(origsize)
First the original size of the file is read from the first 8 bytes of the encrypted file. The IV is read next to correctly initialize the AES object. Then the file is decrypted in chunks, and finally it's truncated to the original size, so the padding is thrown out.
http://eli.thegreenplace.net/2010/06/25/aes-encryption-of-files-in-python-with-pycrypto/
#include <hallo.h>

Bernhard R. Link wrote on Mon Dec 09, 2002 um 11:44:28AM:

> This design decision could exclude the segment between desktops and
> embedded things, where the only ways out are serial-cards (and
> maybe some other things, for which getting a initial console would be
> even harder).

Maybe. And the solution is: fix the kernel at the right time. There is a large segment between desktops and embedded things that works fine with Devfs, please do not try to scare us.

> I'm not saying devfs should not used at all. But the system should
> support using kernels without devfs, too.

The sense behind switching to Devfs is exactly this problem - getting rid of the painful work of creating and maintaining the device files. Who is going to do it? You. Feel free to do so. I recommend switching to Devfs now and fixing its bugs later.

Gruss/Regards, Eduard.
https://lists.debian.org/debian-boot/2002/12/msg00310.html
Sure, you already know that you can pass children to React elements by nesting JSX tags. But what about the other 3 ways?
Did you know that React gives you no less than four different ways to specify the children for a new React Element?
To start with, there's the obvious way: nesting the elements between the start and end tags, like the blue div in this example:
The obvious approach is… kinda obvious. But it gives you a hint to approach number two. Given that JSX compiles to JavaScript, there must be a pure JavaScript way to do the same thing. And while there’s a good chance that you already know the corresponding vanilla JavaScript, if you don’t, you can find out by clicking on the Compiled button in the above editor.
And if you do click on the Compiled button, you’ll see a couple calls to…
createElement()
JSX elements in a React project are transformed at build time into calls to React.createElement(). This function's first two arguments represent the element's type and props, while any subsequent arguments represent the element's child nodes.
React.createElement(
  type,
  props,
  // You can pass as many children as you need - or none at all.
  childNode1,
  childNode2,
)
For example, the two elements in the above JSX transpile to two separate calls to React.createElement():

React.createElement(
  "div",
  { className: "red" },
  React.createElement("div", { className: "blue" })
)
The important thing to understand about React.createElement() is that its first two arguments are the element's type and props, and any further arguments represent its children.
Simple, huh? But there's one curious thing about the object returned by React.createElement(). While the returned object has properties corresponding to the type and props arguments, it doesn't have a children property. Instead, the children are stored on props.children.
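To see how the extra arguments end up on props.children, here is a simplified plain-JavaScript sketch (my own approximation, not React's actual implementation):

```javascript
// A simplified sketch of how createElement folds its extra
// arguments onto props.children. Not React's real implementation.
function createElement(type, props, ...children) {
  const finalProps = { ...props };
  if (children.length > 0) {
    // A single child is stored directly; multiple children as an array.
    finalProps.children = children.length === 1 ? children[0] : children;
  }
  return { type, props: finalProps };
}

const el = createElement('div', { className: 'red' }, 'hello');
console.log(el.props.children); // 'hello'
```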
So what happens if you pass React.createElement() both a children prop and a children argument? Let's find out!

As you can see in the above editor's console, the children prop is ignored in favor of the value passed via argument, which happens to be undefined. But this raises a question: what would happen if you didn't pass a third argument?
This leads us to the third way to specify an element's children.

Passing a children prop to createElement()
When you pass a children prop to createElement() without passing a third (or subsequent) argument, that prop becomes the element's children.

This has an interesting corollary: you can also pass elements via props other than children. In fact, you can pass multiple elements to a component using multiple props. This pattern is sometimes called slots, and you can learn more about it with Dave Ceddia's guide to slots.
Of course, you don't have to use raw JavaScript to pass a children prop, which leads to the 4th and final way to specify an element's children.

Passing a children prop with JSX

Just as we started by transforming a JSX element into a call to createElement(), let's finish by rewriting the above with JSX:
Putting aside the question of whether you can pass children via props (you can), let's ask a perhaps more important question: should you?

It goes without saying that usually it's clearer to specify a JSX element's children the standard way. With that said, there are a couple of situations where the ability to specify children as a prop comes in handy.
When working with React context, you’ll run into a small problem when you need to consume values from multiple contexts: you need to nest context consumers within each other to access multiple values. And so just when you thought that Promises and async/await had banished callback pyramids from the idyllic land of JavaScript forever, you end up with this:
function InvoiceScreen(props) {
  return (
    <RouteContext.Consumer>
      {route =>
        <AuthContext.Consumer>
          {auth =>
            <DataContext.Consumer>
              {data =>
                <InvoiceScreenImpl
                  route={route}
                  auth={auth}
                  data={data}
                  {...props}
                />
              }
            </DataContext.Consumer>
          }
        </AuthContext.Consumer>
      }
    </RouteContext.Consumer>
  )
}
The above code can be shortened by using the children prop instead of nesting JSX elements:
function InvoiceScreen(props) {
  return (
    <RouteContext.Consumer children={route =>
      <AuthContext.Consumer children={auth =>
        <DataContext.Consumer children={data =>
          <InvoiceScreenImpl
            route={route}
            auth={auth}
            data={data}
            {...props}
          />
        } />
      } />
    } />
  )
}
Of course, this still isn’t perfect. For perfection, you’ll need to wait for the new React Hooks API, which will make consuming context a breeze.
Suppose that you have a component that just wraps another component — possibly transforming the props in some way, before passing them through to a child component.
In fact, you may have just seen such a component. To see what I mean, let me transform the previous InvoiceScreen example into plain JavaScript.
function InvoiceScreen(props) {
  return (
    createElement(RouteContext.Consumer, {
      children: route =>
        createElement(AuthContext.Consumer, {
          children: auth =>
            createElement(DataContext.Consumer, {
              children: data =>
                createElement(InvoiceScreenImpl, { route, auth, data, ...props })
            })
        })
    })
  )
}
You'll notice that each call to createElement() in the above example only uses two arguments, so any value for props.children will be passed through to <InvoiceScreenImpl>.
When you think about it, this makes a lot of sense. It would be confusing to have to pass the children prop through explicitly. And luckily, you don't have to, because React lets you specify children in whichever way is appropriate for the situation!
Have anything to add to this article? Let me know in the Twitter discussion!
Tokyo, Japan
https://frontarm.com/james-k-nelson/4-ways-pass-children-react-elements/
I have a system with symlinks EVERYWHERE, so given a particular directory, is there a simple way to find out what mountpoint this directory is on?
Particularly interested in solaris.
You can try:
df dirname
It should give the filesystem and mount point of the target of the symlink.
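For example, on a GNU/Linux system you can extract just the mount point from df's POSIX-format output (a sketch; Solaris df's default column layout differs, so the -P flag or the XPG4 df may be needed there):

```shell
# Print only the mount point of the filesystem containing /tmp.
# -P forces POSIX output, so the mount point is the last field
# of the second line.
df -P /tmp | awk 'NR==2 {print $NF}'
```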
If you want to know the mount point and filesystem of the symlink itself:
df $(dirname /path/to/dirname)
(That's the command dirname and a dummy directory named "dirname", confusingly enough.)
I know this provides a little more information than you requested. But you could make a simple C program using the realpath() library call. I have done this before to find out exactly where a specific file was. From there it should be a simple matter of determining the filesystem. A sample program would look like:
/*
 * realpath - a program to find the real path
 */
#include <limits.h>
#include <stdlib.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char realx[PATH_MAX];
    printf("\nORIGINAL PATH:\t%s\n", argv[1]);
    printf("Real PATH:\t%s\n", realpath(argv[1], realx));
    return 0;
}
http://serverfault.com/questions/110079/how-can-i-find-out-what-filesystem-a-particular-directory-is-on
About PopupEditText and SendMail
PopupEditText and SendMail changed from R20.
Even if I look at the C++ SDK, I can not find the sample so I can not figure out how to write the code.
Please let me know.
Hi @anoano, first of all I would like to point you to our Q&A functionality; please use it.
As you figured, PopupEditText has changed: the func parameter used to be a PopupEditTextCallback and is now a maxon::Delegate.

As for SendMail, it was removed and replaced by a totally new interface, SmtpMailInterface.
Here a basic example which makes use of both.
class CommandSendEmail : public CommandData
{
  INSTANCEOF(CommandSendEmail, CommandData)

private:
  String txt;

public:
  void MailRecieverChangeCallback(POPUPEDITTEXTCALLBACK nMode, const maxon::String& text)
  {
    switch (nMode)
    {
      case POPUPEDITTEXTCALLBACK::TEXTCHANGED:
        GePrint(text);
        break;

      case POPUPEDITTEXTCALLBACK::CLOSED:
        txt = text;
        SendEmail();
        break;

      default:
        break;
    }
  }

  Bool SendEmail()
  {
    // Create a reference to SmtpMailInterface
    iferr (maxon::SmtpMailRef smtp = maxon::SmtpMailRef::Create())
      return false;

    // Define the subject
    iferr (smtp.SetSubject("Email Send from C4d"_s))
      return false;

    // Define the sender
    iferr (smtp.SetSender("YourSenderEmail@something.com"_s, "Random Email"_s))
      return false;

    // Add text to this email
    iferr (smtp.AttachText("This is the content of the email"_s))
      return false;

    // Define the list of receivers
    maxon::SmtpReceiver receiverList[] = { maxon::SmtpReceiver(txt, maxon::String()) };
    iferr (smtp.SetReceiver(receiverList))
      return false;

    // Define our login/password
    iferr (smtp.SetUserPassword("UserName"_s, "Password"_s))
      return false;

    // Finally send the email with your SMTP server and port 25
    iferr (smtp.SendMimeMail("SMTPServer"_s, 25))
      GePrint(err.GetMessage());
    else
      GePrint("Email sent to:" + txt);

    return true;
  }

  Bool Execute(BaseDocument* doc)
  {
    if (!doc)
      return false;

    PopupEditText(0, 0, 200, 50, "Email Receiver"_s,
      [this](POPUPEDITTEXTCALLBACK index, maxon::String& name)
      {
        MailRecieverChangeCallback(index, name);
      });

    return true;
  }

  static CommandSendEmail* Alloc() { return NewObjClear(CommandSendEmail); }
};
Note that you can find all the changes in API Change List in R20.
If you have any questions please let me know.
Cheers,
Maxime!
Thank you for making the sample.
I tried it immediately, but I got an error with maxon::SmtpReceiver.
I want to include network_smtpmail.h but I do not know the location of the file.
Where is the file?
You can now use it after adding a path.
I added network_smtpmail.h, but when I build it an LNK2001 error occurs:

unresolved external symbol "public: static class maxon::InterfaceReference maxon::SmtpErrorInterface::_interface" (?_interface@SmtpErrorInterface@maxon@@2VInterfaceReference@2@A)

How can I solve this?
All interfaces, functions and classes that are part of the new core live in the maxon folder, so you have to write #include "maxon/network_smtpmail.h".
Moreover, you have to make sure network.framework is included. You can have an overview of each framework here or either you can go on the header file and on the bottom left the framework is written. For your case take a look at network_smtpmail.h.
I hope it solves your issue. Please let me know.
Cheers,
Maxime!
I could write #include "maxon/network_smtpmail.h" and it now loads successfully.
But then I get a message that SmtpErrorInterface and SmtpMailInterface can not be resolved.
There was an error when opening network_smtpmail1.hxx and network_smtpmail2.hxx.
How can I do this?
Just a dumb question: Did you include the maxon framework in your projectdefinition and did you run the project tool after that?
I do it in the following procedure
Create project file with project_tool
Add network.framework to frameworks
Add the paths to the include directories in the project properties:
C:\sdk\frameworks\network.framework\source
C:\sdk\frameworks\network.framework\generated\hxx
Hi @anoano how did you add the network.framework?
With the R20 you shouldn't manually create your solution or add anything to it, the project tool will take care of it.
The usual way would be:
- Add the framework to your projectdefinition.txt located in plugins\YourProjectDir\project\
- Add network.framework to the API entry.
- (Re-)run the project tool, which will update your solution for visual studio/xcode with the new framework.
- This is also valid when you create or remove a file, in order to keep the solution synchronized for both Windows and Mac OS.
- Open the solution in VS/Xcode and Compile.
You can find information about projectdefinition.txt and project tool here.
Cheers,
Maxime.
The error is gone now.
I had not understood how to use projectdefinition.txt.
Now I can proceed.
Thank you very much.
I'm wondering if your question has been answered. If so, please mark this thread as solved.
Cheers,
Maxime.
https://plugincafe.maxon.net/topic/11066/about-popupedittext-and-sendmail
This article has the following contents:

1. What is a pointer
2. The two meanings of pointer types
3. Wild pointers
4. Pointer arithmetic
5. Pointers and arrays
6. Pointers to pointers
7. Arrays of pointers
1. What is a pointer?

Formally: in computer science, a pointer is an object in a programming language whose value is an address that refers directly to ("points to") another value stored elsewhere in computer memory. Because a variable's storage can be located through its address, the address is said to point to that variable, which is why it is pictured as a "pointer": through it you can find the memory cell it addresses.

In short, a pointer is a variable that stores an address; in that sense, the pointer is the "address". The following code demonstrates this:
#include <stdio.h>
int main()
{
    int a = 10;
    int* p = &a;
    *p = 20;
    printf("%d", a);
    return 0;
}
/* Output: 20 */
The pointer variable p is a pointer: it is an object whose value is the address of a value stored elsewhere in memory (a). So when we dereference p with *p = 20, we use the address stored in p to reach a and change it to 20.

Therefore p can be called a pointer, and the value of p is an address.

One more thing worth emphasizing here: the smallest addressable unit of memory is one byte, and addresses are conventionally written in hexadecimal for human readability (this will matter for the second meaning of pointer types below).
2. Pointer types have two meanings
An introductory example (casts added so the example compiles cleanly):

#include <stdio.h>
int main()
{
    int a = 20;
    int* intp = &a;
    char* charp = (char*)&a;
    double* doublep = (double*)&a;
    printf("%zu\n", sizeof(intp));
    printf("%zu\n", sizeof(charp));
    printf("%zu\n", sizeof(doublep));
    return 0;
}
/* Output (on a 32-bit platform):
4
4
4
*/

They all print 4. Why? Because sizeof() measures the size of the pointer itself: on a 32-bit platform every pointer is 4 bytes, and on a 64-bit platform every pointer is 8 bytes. The pointer size has nothing to do with the pointed-to type; it is the same for all of them, because a pointer is an address, and an address is 4 bytes (or 8 bytes).
Question: if all pointers have the same size, why are there so many pointer types? The following examples show why.
Meaning 1: the pointer type determines how many bytes a dereference can access
#include <stdio.h>
int main()
{
    /* 0x44332211 is hexadecimal: one hex digit is 4 bits, so two digits are one byte */
    int a = 0x44332211;
    int b = 0x44332211;
    int c = 0x44332211;
    int*   intp   = &a;
    char*  charp  = (char*)&b;
    short* shortp = (short*)&c;
    *intp = 0;
    *charp = 0;
    *shortp = 0;
    printf("%#x\n", a);
    printf("%#x\n", b);
    printf("%#x\n", c);
    return 0;
}
/* Output (on a little-endian machine):
0
0x44332200
0x44330000
*/
a became 0: four bytes were changed.
b became 0x44332200: one byte was changed.
c became 0x44330000: two bytes were changed.

That is because intp is an int pointer and may access four bytes on a dereference, charp is a char pointer and may access one byte, and shortp is a short pointer and may access two bytes.
Meaning 2: the pointer type determines how far the pointer moves when stepped forward or backward
For example:

#include <stdio.h>
int main()
{
    int a = 10;
    int* intp = &a;
    char* charp = (char*)&a;
    double* doublep = (double*)&a;
    printf("%p\n", (void*)&a);
    printf("%p\n", (void*)(intp + 1));
    printf("%p\n", (void*)(charp + 1));
    printf("%p\n", (void*)(doublep + 1));
    return 0;
}
/* Example output:
00B6FB44
00B6FB48
00B6FB45
00B6FB4C
*/
You can see that adding one to the int pointer intp moved the address forward by 4 bytes, adding one to the char pointer charp moved it by one byte, and adding one to the double pointer doublep moved it by eight bytes.

So this is what different pointer types mean: they determine the stride of pointer arithmetic. This is mainly used for array access and for data structures.
3. Wild pointers

There are three causes of wild pointers:

* The pointer is uninitialized, so its target is unknown.
* The pointer goes out of bounds when accessing an array.
* The space the pointer points to has been freed.
Cause 1: the pointer is uninitialized

We know that when we create a variable without initializing it, the variable holds a random value. For example, after int a; the value of a is random. In the same way, an uninitialized pointer holds a random address, which is dangerous because you have no idea what it points to.

A code example:
#include <stdio.h>
int main()
{
    int a = 10;
    int* p;    /* uninitialized */
    *p = 20;   /* frightening: we have no idea where p points */
    return 0;
}

Here p is a wild pointer.
Cause 2: the pointer goes out of bounds when accessing an array

|----|----|----|----|----|----|----|----| (suppose each cell is an array element; there are 8 cells, with indices 0 1 2 3 4 5 6 7)

A code demonstration:
#include <stdio.h>
int main()
{
    int num[8] = {2,4,5,8,4,3,13,31};
    int* p = num;   /* the array name decays to the address of the first element */
    for (int i = 0; i <= 8; i++)
    {
        printf("%d\n", *(p + i));  /* access the array through the pointer */
    }
    return 0;
}
/* Example output:
2 4 5 8 4 3 13 31 -858993460
*/
We can see one extra, garbage number. That is because when i == 8, p + i means moving forward 32 bytes, i.e. 8 ints, but moving forward 7 ints already reaches the last element, so i == 8 crosses the boundary. After crossing it, the pointer refers to memory we know nothing about.

So here p is also a wild pointer.
Cause 3: the space the pointer points to has been freed

We already know that when a function returns, its local variables are destroyed and the space they occupied is released.
#include <stdio.h>
int* test()
{
    int a = 20;
    return &a;   /* a is destroyed when the function returns */
}
int main()
{
    int* p = test();
    printf("%d", *p);   /* undefined behavior */
    return 0;
}

Writing it this way causes problems: the local variable a inside test() is destroyed when the function returns, and its memory is handed back to the system. Execution may still happen to print 20, because the contents of that memory may not have been cleared yet, but this is exactly the kind of trap described in the C language literature; in short, do not write code like this.

So here p is also a wild pointer.
Summary: how do we avoid wild pointers?

* Initialize pointers whenever possible.
* Be careful not to let a pointer go out of bounds.
* When the space a pointer points to is released, or when you don't yet know what to initialize a pointer with, set it to NULL.
* Check a pointer's validity before using it.
4. Pointer arithmetic

1. Pointer plus/minus an integer (partly demonstrated above)
#include <stdio.h>
int main()
{
    int num[15] = { 21,2,1,4,5,3,6,9,5,4,7,8,2,1,12 };
    int i = 0;
    int* p = num;
    for (i = 0; i < 15; i += 2)
    {
        printf("%d ", *(p + i));  /* p is an int pointer, so adding 2 skips 8 bytes */
    }
    return 0;
}
/* Output: 21 1 5 6 5 7 2 12 */
2. Pointer minus pointer (within the same array)

#include <stdio.h>
int main()
{
    int arr[10] = {12,30,14,12,2,3,4,32,12,16};
    printf("%d", (int)(&arr[9] - &arr[2]));
    return 0;
}
/* Output: 7 */

So subtracting one pointer from another yields the number of elements between the two pointers.
An exercise: write your own function that measures the length of a string, in three different ways, without using strlen(). The first method uses a counter; the second uses recursion (which still relies on pointer-plus-integer arithmetic); the third uses pointer subtraction.
3. Pointer relational operations

Pointers into the same object can be compared by address:
#include <stdio.h>
int main()
{
    float values[5];
    float* p = NULL;
    for (p = &values[5]; p > &values[0]; )
    {
        *--p = 0;
    }
    return 0;
}

When p equals &values[5] it already points one past the end of the array, but that is not a problem: we only compare the address and never dereference it. By the logic of the code, every element ends up initialized to 0.
It is also sometimes written like this:

#include <stdio.h>

int main()
{
    float values[5];
    float* p = NULL;
    for (p = &values[4]; p >= &values[0]; p--)
    {
        *p = 0;
    }
    return 0;
}

But this form is not recommended, because the loop only terminates when p's address drops below &values[0], i.e. the pointer has run past the front of the array.
The C language standard:
Allows a pointer to an array element to be compared with a pointer to the memory location just after the last element of the array, but does not allow comparison with a pointer to the memory location before the first element.
5. Pointers and arrays
Generally speaking, the array name is the address of the first element, but there are two exceptions:
* &arr_name: taking the address of the array name yields the entire array. It is only when it is printed that the address of the first element is displayed; you can tell the difference from the code:

#include <stdio.h>

int main()
{
    int arr[10] = {0};
    printf("%p\n", (void*)arr);
    printf("%p\n", (void*)(arr + 1));
    printf("%p\n", (void*)&arr);
    printf("%p\n", (void*)(&arr + 1));
    return 0;
}

Running results :
003AFE90
003AFE94
003AFE90
003AFEB8

You can see that &arr + 1 increases the address by 0x28 = 40 bytes, i.e. the size of the whole array, whereas arr + 1 moves by only one element (4 bytes).
* sizeof(arr_name): here the array name also represents the entire array:

#include <stdio.h>

int main()
{
    int num[20] = {0};
    printf("%zu", sizeof(num));
    return 0;
}

/*
Running results :
80
*/
6. Secondary pointers
A pointer stores the address of a variable; by the same logic, a pointer that stores the address of another pointer is a secondary pointer (a pointer to a pointer).
So how is a secondary pointer written?
With two *; a third-level pointer has three *, a fourth-level pointer four *, and so on.
#include <stdio.h>

int main()
{
    int a = 20;
    int* p = &a;   /* p stores the address of a */
    int** pp = &p; /* pp stores the address of p */
    return 0;
}
7. Pointer arrays
Note that I am talking about pointer arrays here: it is an array, an array whose elements are pointers.
#include <stdio.h>

int main()
{
    int a = 20, b = 30, c = 10, d = 40;
    int* num[4] = {&a, &b, &c, &d};
    for (int i = 0; i < 4; i++)
    {
        printf("%d\n", *num[i]);
    }
    return 0;
}
https://www.toolsou.com/en/article/210496423
mount()
Mount a filesystem
Synopsis:
#include <sys/mount.h>

int mount( const char* spec,
           const char* dir,
           int flags,
           const char* type,
           const void* data,
           int datalen );
Since:
BlackBerry 10.0.0
Arguments:
- spec
- A null-terminated string describing a special device (e.g. /dev/hd0t77), or NULL if there's no special device.
- dir
- A null-terminated string that names the directory that you want to mount (e.g. /mnt/home).
- flags
- Flags that are passed to the driver:
- _MFLAG_OCB — ignore the special device string, and contact all servers.
- _MOUNT_READONLY — mark the filesystem mountpoint as read-only.
- _MOUNT_NOEXEC — don't allow executables to load.
- _MOUNT_NOSUID — don't honor setuid bits on the filesystem.
- _MOUNT_NOCREAT — don't allow file creation on the filesystem.
- _MOUNT_OFF32 — limit off_t to 32 bits.
- _MOUNT_NOATIME — disable logging of file access times.
- _MOUNT_BEFORE — call resmgr_attach() with _RESMGR_FLAG_BEFORE.
- _MOUNT_AFTER — call resmgr_attach() with _RESMGR_FLAG_AFTER.
- _MOUNT_OPAQUE — call resmgr_attach() with _RESMGR_FLAG_OPAQUE.
- _MOUNT_UNMOUNT — unmount this path.
- _MOUNT_REMOUNT — this path is already mounted; perform an update.
- _MOUNT_FORCE — force an unmount or a remount change.
- _MOUNT_ENUMERATE — autodetect on this device.
- type
- A null-terminated string with the filesystem type (e.g. nfs, cifs, qnx4, ext2, network).
- data
- A pointer to additional data to be sent to the manager. If datalen is <0, the data points to a null-terminated string.
- datalen
- The length of the data, in bytes, that's being sent to the server, or <0 if the data is a null-terminated string.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The mount() function sends a request to servers to mount the services provided by spec and type at dir.
Updating the mount resets the other mount flags to their default values. To determine which flags are currently set, use the DCMD_ALL_GETMOUNTFLAGS devctl() command, and then OR in _MOUNT_REMOUNT. For example:
int flags;

if (devctl(fd, DCMD_ALL_GETMOUNTFLAGS, &flags, sizeof flags, NULL) == EOK) {
    flags |= _MOUNT_REMOUNT;
    ...
}
If you set _MFLAG_OCB in the flags, then the special device string is ignored, and all servers are contacted. If you don't set this bit, and the special device spec exists, then only the server that created that device is contacted, and the full path to spec is provided.
If datalen is any value <0, and there's a data pointer, the function assumes that the data pointer is a pointer to a string.
Classification:
Last modified: 2014-06-24
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/m/mount.html
tcsetattr()
Change the terminal control settings for a device
Synopsis:
#include <termios.h>

int tcsetattr( int fildes,
               int optional_actions,
               const struct termios *termios_p );
Since:
BlackBerry 10.0.0
Arguments:
- fildes
- A file descriptor associated with the opened terminal device whose settings you want to change.
- optional_actions
- When the change takes effect; one of TCSANOW, TCSADRAIN, or TCSAFLUSH (see below).
- termios_p
- A pointer to a termios structure containing the settings you want to use.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The tcsetattr() function sets the current terminal control settings for the opened device indicated by fildes to the values stored in the structure pointed to by termios_p.
The operation of tcsetattr() depends on the values in optional_actions:
- TCSANOW
- The change is made immediately.
- TCSADRAIN
- No change is made until all currently written data has been transmitted.
- TCSAFLUSH
- No change is made until all currently written data has been transmitted, at which point any received but unread data is also discarded.
The termios control structure is defined in <termios.h>. For more information, see tcgetattr().
Returns:
- 0
- Success.
- -1
- An error occurred; errno is set.
Errors:
- EBADF
- The argument fildes is invalid.
- EINVAL
- The argument action is invalid, or one of the members of termios_p is invalid.
- ENOSYS
- The resource manager associated with fildes doesn't support this call.
- ENOTTY
- The argument fildes doesn't refer to a terminal device.
Examples:
#include <stdlib.h>
#include <termios.h>

int raw( int fd )
{
    struct termios termios_p;

    if( tcgetattr( fd, &termios_p ) )
        return( -1 );

    termios_p.c_cc[VMIN]  = 1;
    termios_p.c_cc[VTIME] = 0;
    termios_p.c_lflag &= ~( ECHO|ICANON|ISIG|ECHOE|ECHOK|ECHONL );
    termios_p.c_oflag &= ~( OPOST );

    return( tcsetattr( fd, TCSADRAIN, &termios_p ) );
}

int unraw( int fd )
{
    struct termios termios_p;

    if( tcgetattr( fd, &termios_p ) )
        return( -1 );

    termios_p.c_lflag |= ( ECHO|ICANON|ISIG|ECHOE|ECHOK|ECHONL );
    termios_p.c_oflag |= ( OPOST );

    return( tcsetattr( fd, TCSADRAIN, &termios_p ) );
}

int main( void )
{
    raw( 0 );

    /*
     * Stdin is now "raw"
     */

    unraw( 0 );
    return EXIT_SUCCESS;
}
Classification:
Last modified: 2014-06-24
https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/t/tcsetattr.html
I want to use the recent C# 5.0 async methods, but in a rather specific way.
Consider the following example code:
public abstract class SomeBaseProvider : ISomeProvider
{
    public abstract Task<string> Process(SomeParameters parameters);
}

public class SomeConcreteProvider1 : SomeBaseProvider
{
    // in this method I want to minimize any overhead to create / run a Task
    // NOTE: I remove async from the method signature here, because basically
    // I want the method to run in sync!
    public override Task<string> Process(SomeParameters parameters)
    {
        string result = "string which I don't need any async code to get";
        return Task.Run(() => result);
    }
}

public class SomeConcreteProvider2 : SomeBaseProvider
{
    // in this method, it's OK for me to use async / await
    public async override Task<string> Process(SomeParameters parameters)
    {
        var data = await new WebClient().DownloadDataTaskAsync(urlToRequest);
        string result = // ... here we convert the data byte[] to a string some way
        return result;
    }
}
Now, here is how I am going to use the async methods (you can ignore the fact that the consumer is actually an ASP.NET MVC4 app in my case... it can be anything):
public class SomeAsyncController : AsyncController
{
    public async Task<ActionResult> SomethingAsync(string providerId)
    {
        // we just get here one of the providers I defined above
        var provider = SomeService.GetProvider(providerId);

        // we try to execute the Process method asynchronously here.
        // However we might want to actually do it synchronously instead,
        // if the provider is actually a SomeConcreteProvider1 object.
        string result = await provider.Process(new SomeParameters(...));

        return Content(result);
    }
}
As you can see I have 2 implementations, and each one should behave differently: one I want to run asynchronously without blocking a thread (SomeConcreteProvider2), while the other I want to run synchronously without creating any Task object etc. (which I fail to do in the code above, i.e. I do create a new Task there!).
There are questions already like How would I run an async Task<T> method synchronously?. However I don't want to run something async in sync... I want to avoid any overhead if I know at code time (i.e. before runtime) that some method implementations will NOT actually be async and will not need to use any threads / I/O completion ports etc. If you check the code above, it's easy to see that the method in SomeConcreteProvider1 basically constructs some string (html) and can do it very quickly on the same execution thread. However the same method in SomeConcreteProvider2 needs to create a web request, get the web response and process it some way, and I do want to make at least the web request async, to avoid blocking a whole thread for the duration of the request (which may actually be quite a long time).
So the question is: how to organize my code (different method signatures or different implementations, or?) to be able to decide how to execute method and avoid ANY possible overhead which caused for example by Task.Run(...) in SomeConcreteProvider1.Process method?
Update 1: obvious solutions (I thought up some of them while writing this question), such as for example adding some static property to each provider (say 'IsAsyncImplementation') and then checking that property to decide how to run the method (with or without await in the controller action), have some overhead too :D I want something better if possible :D
Your design looks fine.
For the SomeConcreteProvider1 case, you can use a TaskCompletionSource. This returns a Task without requiring concurrency.
Also, in the consumer code, await will not yield if the Task has already completed. This is called the "fast path" and effectively makes the method synchronous.

In your first case, I would recommend returning:

Task.FromResult(result)

which returns a ready-completed task.
http://ebanshi.cc/questions/869499/is-it-possible-to-use-same-async-method-signature-in-c-sharp-5-0-and-be-able-to
|
C# Sharp Exercises: Concatenate three strings and display the result
C# Sharp String: Exercise-36 with Solution
Write a C# Sharp program to concatenate three strings and display the result.
Sample Solution:-
C# Sharp Code:
using System;

public class Example36
{
    public static void Main()
    {
        String str1 = "Don't count your chickens, ";
        String str2 = "before the eggs, ";
        String str3 = "have hatched.";
        var str = String.Concat(str1, str2, str3);
        Console.WriteLine(str);
    }
}
Sample Output:
Don't count your chickens, before the eggs, have hatched.
https://www.w3resource.com/csharp-exercises/string/csharp-string-exercise-36.php
|
NAME
bisonc++ - Generate a C++ parser class and parsing function
SYNOPSIS
bisonc++ [OPTIONS] grammar-file
DESCRIPTION
The program bisonc++ is based on previous work on bison by Alain Coetmeur (coetmeur@icdc.fr), who created in the early '90s a C++ class encapsulating the yyparse() function as generated by the GNU-bison parser generator. Initial versions of bisonc++ (up to version 0.92) wrapped Alain's program in a program offering a more modern user-interface, removing all old-style (C) %define directives from bison++'s input specification file (see below for an in-depth discussion of the differences between bison++ and bisonc++). Starting with version 0.98, bisonc++ is compiled from a complete rebuild of the parser generator, closely following the description in Aho, Sethi and Ullman's Dragon Book. Moreover, starting with version 0.98 bisonc++ is now a C++ program, rather than a C program generating C++ code. Bisonc++ expands the concepts initially implemented in bison and bison++, offering a cleaner setup of the generated parser class. The parser class is derived from a base class, mainly containing the parser's token- and type-definitions as well as several member functions which should not be (re)defined by the programmer. Most of these base-class members might also be defined directly in the parser class, but were defined in the parser's base class. This design results in a very lean parser class, declaring only members that are actually defined by the programmer or that must be defined by bisonc++ itself (e.g., the member function parse() as well as those support functions requiring access to facilities that are only available in the parser class itself, rather than in the parser's base class). Moreover, this design does not require the use of virtual members: the members which are not involved in the actual parsing process may always be (re)implemented directly by the programmer. Thus there is no need to apply or define virtual member functions.
In fact, there are only two public members in the parser class generated by bisonc++: setDebug() (see below) and parse(). The remaining members are private, and those that can be redefined by the programmer using bisonc++ usually receive initial, very simple default in-line implementations. The (partial) exception to this rule is the member function lex(), producing the next lexical token. For lex() either a standardized interface or a mere declaration is offered (requiring the programmer to provide a tailor-made implementation for lex()). To enforce a primitive namespace, bison used a well-known naming convention: all its public symbols started with yy or YY. Bison++ followed bison in this respect, even though a class by itself offers enough protection of its identifiers. Consequently, the present author feels that these yy and YY conventions are outdated, and consequently bisonc++ does not generate any symbols defined in either the parser (base) class or in the parser function starting with yy or YY. Instead, all data members have names, following a suggestion by Lakos (2001), starting with d_, and all static data members have names starting with s_. This convention was not introduced to enforce identifier protection, but to clarify the storage type of variables. Other (local) symbols lack specific prefixes. Furthermore, bisonc++ allows its users to define the parser class in a particular namespace of their own choice. Bisonc++ should be used as follows: o As usual, a grammar must be defined. Using bisonc++ this is no different, and the reader is referred to bison's documentation for details about specifying and decorating grammars. o The number and function of the various %define declarations as used by bison++, however, is greatly modified. Actually, all %define declarations are replaced by their (former) first arguments. Furthermore, 'macro-style' declarations are no longer supported or required.
Finally, all directives use lower-case characters only and do not contain underscore characters (but sometimes hyphens). E.g., %define DEBUG is now declared as %debug; %define LSP_NEEDED is now declared as %lsp-needed (note the hyphen). o As noted, no 'macro style' %define declarations are required anymore. Instead, the normal practice of defining class members in source files and declaring them in a class header file can be adhered to using bisonc++. Basically, bisonc++ concentrates on its main tasks: the definition of an initial parser class and the implementation of its parsing function int parse(), leaving all other parts of the parser class' definition to the programmer. o Having specified the grammar and (usually) some directives, bisonc++ is able to generate files defining the parser class and the implementation of the member function parse() and its support functions. See the next section for details about the various files that may be written by bisonc++. o All members (except for the member parse()) and its support functions must be implemented by the programmer. Of course, additional member functions should also be declared in the parser class' header. At the very least the member int lex() must be implemented (although a standardized implementation can also be generated by bisonc++). The member lex() is called by parse() (support functions) to obtain the next available token. The member function void error(char const *msg) may also be re-implemented by the programmer, but a basic in-line implementation is provided by default. The member function error() is called when parse() detects (syntactical) errors. o The parser can now be used in a program. A very simple example would be: int main() { Parser parser; return parser.parse(); } Bisonc++ may create the following files: o A file containing the implementation of the member function parse() and its support functions.
The member parse() is a public member that can be called to parse a token sequence according to a specified LALR(1) type grammar. The implementation of these members is by default written to the file parse.cc. There should be no need for the programmer to alter the contents of this file, as its contents change whenever the grammar is modified. Hence it is rewritten by default. The option --no-parse-member may be specified to prevent this file from being (re)written. In normal circumstances, however, this option should be avoided. o A file containing an initial setup of the parser class, containing the declaration of the public member parse() and of its (private) support members. The members error() and print() receive default in-line implementations which may be altered by the programmer. The member lex() may receive a standard in-line implementation (see below), or it will merely be declared, in which case the programmer must provide an implementation. Furthermore, new members may be added to the parser class as well. By default this file will only be created if not yet existing, using the filename <parser-class>.h (where <parser-class> is the name of the defined parser class). The option --force-class-header may be used to (re)write this file, even if already existing. o A file containing the parser class' base class. This base class should not be modified by the programmer. It contains types defined by bisonc++, as well as several (protected) data members and member functions, which should not be redefined by the programmer. All symbolic parser terminal tokens are defined in this class, so it escalates these definitions into a separate class (cf.
Lakos, (2001)), thus preventing circular dependencies between the lexical scanner and the parser (circular dependencies occur in situations where the parser needs access to the lexical scanner class to define a lexical scanner as one of its data members, whereas the lexical scanner, in turn, needs access to the parser class to know about the grammar's symbolic terminal tokens. Escalation is a way out of such circular dependencies). By default this file will be (re)written any time bisonc++ is called, using the filename <parser-class>base.h. The option --no-baseclass-header may be specified to prevent the base class header file from being (re)written. In normal circumstances, however, this option should be avoided. o A file containing an implementation header. An implementation header may be included by source files implementing the various member functions of a class. The implementation header first includes its associated class header file, followed by any directives (formerly defined in the %{header ... %} section of the bison++ parser specification file) that are required for the proper compilation of these member functions. The implementation header is included by the file defining parse(). By default the implementation header will be created if not yet existing, receiving the filename <parser-class>.ih. The option --force-implementation-header may be used to (re)write this file, even if already existing. o A verbose description of the generated parser. This file is comparable to the verbose output file originally generated by bison++. It is generated when the option --verbose or -V is provided. When generated, it will use the filename <grammar>.output, where <grammar> is the name of the file containing the grammar definition.
OPTIONS
If available, single letter options are listed between parentheses following their associated long-option variants. Single letter options require arguments if their associated long options require arguments as well. o --baseclass-preinclude=header (-H) Use header as the pathname to the file preincluded in the parser's base-class header. This option is useful in situations where the base class header file refers to types which might not yet be known. E.g., with %union a std::string * field might be used. Since the class std::string might not yet be known to the compiler once it processes the base class header file we need a way to inform the compiler about these classes and types. The suggested procedure is to use a pre-include header file declaring the required types. By default header will be surrounded by double quotes (using, e.g., #include "header"). When the argument is surrounded by pointed brackets #include <header> will be included. In the latter case, quotes might be required to escape interpretation by the shell (e.g., using -H '<header>'). o --baseclass-header=header (-b) Use header as the pathname of the file containing the parser's base class. This class defines, e.g., the parser's symbolic tokens. Defaults to the name of the parser class plus the suffix base.h. It is generated, unless otherwise indicated (see --no-baseclass-header and --dont-rewrite-baseclass-header below). o --baseclass-skeleton=skeleton (-B) Use skeleton as the pathname of the file containing the skeleton of the parser's base class. Its filename defaults to bisonc++base.h. o --class-header=header (-c) Use header as the pathname of the file containing the parser class. Defaults to the name of the parser class plus the suffix .h o --class-skeleton=skeleton (-C) Use skeleton as the pathname of the file containing the skeleton of the parser class. Its filename defaults to bisonc++.h. The environment variable BISON_SIMPLE_H is not inspected anymore. o --construction This option may be specified to write details about the construction of the parsing tables to the standard output stream. This information is primarily useful for developers, and augments the information written to the verbose grammar output file, produced by the --verbose option. o --filenames=filename (-f) Specify a filename to use for all files produced by bisonc++. Specific options overriding particular filenames are also available (which then, in turn, override the name specified by this option). o --force-class-header By default the generated class header is not overwritten once it has been created. This option can be used to force the (re)writing of the file containing the parser's class. o --force-implementation-header By default the generated implementation header is not overwritten once it has been created. This option can be used to force the (re)writing of the implementation header file. o --help (-h) Write basic usage information to the standard output stream and terminate. o --implementation-header=header (-i) Use header as the pathname of the file containing the implementation header. Defaults to the name of the generated parser class plus the suffix .ih. The implementation header should contain all directives and declarations only used by the implementations of the parser's member functions. It is the only header file that is included by the source file containing parse()'s implementation. User defined implementations of other class members may use the same convention, thus concentrating all directives and declarations that are required for the compilation of other source files belonging to the parser class in one header file. o --implementation-skeleton=skeleton (-I) Use skeleton as the pathname of the file containing the skeleton of the implementation header. Its filename defaults to bisonc++.ih. o --lines (-l) Put #line preprocessor directives in the file containing the parser's parse() function. By including this option the compiler and debuggers will associate errors with lines in your grammar specification file, rather than with the source file containing the parse() function itself. o --no-lines Do not put #line preprocessor directives in the file containing the parser's parse() function. This option is primarily useful in combination with the %lines directive, to suppress that directive. It also overrides the option --lines, though. o --namespace=namespace (-n) Define the parser base class, the parser class and the parser implementations in the namespace namespace. By default no namespace is defined. If this option is used the implementation header will contain a commented out using namespace declaration for the requested namespace. o --no-baseclass-header Do not write the file containing the parser class' base class, even if that file doesn't yet exist. By default the file containing the parser's base class is (re)written each time bisonc++ is called. Note that this option should normally be avoided, as the base class defines the symbolic terminal tokens that are returned by the lexical scanner. By suppressing the construction of this file any modification in these terminal tokens will not be communicated to the lexical scanner. o --no-parse-member Do not write the file containing the parser's predefined parser member functions, even if that file doesn't yet exist. By default the file containing the parser's parse() member function is (re)written each time bisonc++ is called. Note that this option should normally be avoided, as this file contains parsing tables which are altered whenever the grammar definition is modified. o --parsefun-source=source (-p) Define source as the name of the source file containing the parser member function parse(). Defaults to parse.cc. o --parsefun-skeleton=skeleton (-P) Use skeleton as the pathname of the file containing the parsing member function's skeleton. Its filename defaults to bisonc++.cc. The environment variable BISON_SIMPLE is not inspected anymore. o --scanner=header (-s) Use header as the pathname to the file defining a class Scanner, offering a member int yylex() producing the next token from the input stream to be analyzed by the parser generated by bisonc++. When this option is used the parser's member int lex() will be predefined as int lex() { return d_scanner.yylex(); } and an object Scanner d_scanner will be composed into the parser. The d_scanner object will be constructed using its default constructor. If another constructor is required, the parser class may be provided with an appropriate (overloaded) parser constructor after having constructed the default parser class header file using bisonc++. By default header will be surrounded by double quotes (using, e.g., #include "header"). When the argument is surrounded by pointed brackets #include <header> will be included. In the latter case, quotes might be required to escape interpretation by the shell (e.g., using -s '<header>'). o --show-filenames Write the names of the files that are generated to the standard error stream. o --usage Write basic usage information to the standard output stream and terminate. o --verbose (-V) Write a file containing verbose descriptions of the parser states and what is done for each type of look-ahead token in that state. This file also describes all conflicts detected in the grammar, both those resolved by operator precedence and those that remain unresolved. By default it will not be created, but if requested it will receive the filename <parse>.output, where <parse> is the filename (without the .cc extension) of the file containing parse()'s implementation. o --version (-v) Display bisonc++'s version number and terminate.
DIRECTIVES
The following directives can be used in the initial section of the grammar specification file. When command-line options for directives exist, they overrule the corresponding directives given in the grammar specification file. o %baseclass-header header Defines the pathname of the file containing the parser’s base class. This directive is overridden by the --baseclass-header or -b command-line options. o %baseclass-preinclude header Use header as the pathname to the file pre-included in the parser’s base-class header. See the description of the --base class-preinclude option for details about this option. Like the convention adopted for this argument, header will (by default) be surrounded by double quotes. However, when the argument is surrounded by pointed brackets #include <header> will be included. o %class-header header Defines the pathname of the file containing the parser class. This directive is overridden by the --class-header or -c com‐ mand-line options. o %class-name parser-class-name Declares the name of this parser. This directive replaces the %name declaration previously used by bison++. It defines the name of the C++ class that will be generated. Contrary to bison++’s %name declaration, %class-name may appear anywhere in the first section of the grammar specification file. However, it may be defined only once. If no %class-name is specified the default class name Parser will be used. %error-verbose (to do) if defined the parser stack is dumped when an error is detected by the parse() member function. o %expect number If defined the parser will not report encountered shift/reduce and reduce/reduce conflicts if all detected conflicts are equal to the number following %expect. Conflicts are mentioned in the .output file and the number of encountered conflicts is shown on the standard output if the actual number of conflicts deviates from number. o %filenames header Defines the generic name of all generated files, unless overrid‐ den by specific names. 
This directive is overridden by the --filenames or -f command-line options. o %implementation-header header Defines the pathname of the file containing the implementation header. This directive is overridden by the --implementation- header or -i command-line options. o %lines Put #line preprocessor directives in the file containing the parser’s parse() function. It acts identically to the -l command line option, and is suppressed by the --no-lines option. o %lsp-needed Defining this causes bisonc++ to include code into the generated parser using the standard location stack. The token-location type defaults to the following struct, defined in the parser’s base class when this directive is specified: struct LTYPE { int timestamp; int first_line; int first_column; int last_line; int last_column; char *text; }; o %ltype typename Specifies a user-defined token location type. If %ltype is used, typename should be the name of an alternate (predefined) type (e.g., size_t/*unsigned*/). It should not be used if a %locationstruct specification is defined (see below). Within the parser class, this type will be available as the type ‘LTYPE’. All text on the line following %ltype is used for the typename specification. It should therefore not contain comment or any other characters that are not part of the actual type defini‐ tion. o %namespace namespace Define the parser class in the namespace namespace. By default no namespace is defined. If this options is used the implementa‐ tion header will contain a commented out using namespace decla‐ ration for the requested namespace. This directive is overrid‐ den by the --namespace command-line option. o %negative-dollar-indices Do not generate warnings when zero- or negative dollar-indices are used in the grammar’s action blocks. Zero or negative dol‐ lar-indices are commonly used to implement inherited attributes, and should normally be avoided. 
When used, they can be specified like $-1 or $<type>-1, where type is a %union field-name.

o %parsefun-source source
Defines the pathname of the file containing the parser member parse(). This directive is overridden by the --parse-source or -p command-line options.

o %scanner header
Use header as the pathname to the file pre-included in the parser's class header. See the description of the --scanner option for details about this option. Similar to the convention adopted for this argument, header will (by default) be surrounded by double quotes. However, when the argument is surrounded by pointed brackets, #include <header> will be included. Note that using this directive implies the definition of a composed Scanner d_scanner data member in the generated parser, as well as a predefined int lex() member, returning d_scanner.yylex(). If this is inappropriate, a user-defined implementation of int lex() must be provided.

o %stype typename
The type of the semantic value of tokens. The specification typename should be the name of an unstructured type (e.g., size_t/*unsigned*/). By default it is int. See YYSTYPE in bison. It should not be used if a %union specification is defined. Within the parser class, this type will be available as the type 'STYPE'. All text on the line following %stype is used for the typename specification. It should therefore not contain comment or any other characters that are not part of the actual type definition.

o %locationstruct struct-definition
Defines the organization of the location-struct data type LTYPE. This struct should be specified analogously to the way the parser's stacktype is defined using %union (see below). The location struct is named LTYPE. If neither %locationstruct nor %ltype is specified, the aforementioned default struct is used.

o %left terminal ...
Defines the names of symbolic terminal tokens that should be treated as left-associative.
I.e., in case of a shift/reduce conflict, a reduction will be preferred over a shift. Sequences of %left, %nonassoc, %right and %token directives may be used to define the precedence of operators. In expressions, the first used directive will have the lowest precedence, the last used the highest.

o %nonassoc terminal ...
Defines the names of symbolic terminal tokens that should be treated as non-associative. I.e., in case of a shift/reduce conflict, a reduction will be preferred over a shift. Sequences of %left, %nonassoc, %right and %token directives may be used to define the precedence of operators. In expressions, the first used directive will have the lowest precedence, the last used the highest.

o %prec token
Overrules the defined precedence of an operator for a particular grammatical rule. Well known is the construction

    expression:
        '-' expression %prec UMINUS
        {
            ...
        }

Here, the default priority and precedence of the '-' token as the subtraction operator is overruled by the precedence and priority of the UMINUS token, which is commonly defined as %right UMINUS (see below) following, e.g., the '*' and '/' operators.

o %right terminal ...
Defines the names of symbolic terminal tokens that should be treated as right-associative. I.e., in case of a shift/reduce conflict, a shift will be preferred over a reduction. Sequences of %left, %nonassoc, %right and %token directives may be used to define the precedence of operators. In expressions, the first used directive will have the lowest precedence, the last used the highest.

o %start non-terminal
The non-terminal non-terminal should be used as the grammar's start-symbol. If omitted, the first grammatical rule will be used as the grammar's starting rule. All syntactically correct sentences must be derivable from this starting rule.

o %token terminal ...
Defines the names of symbolic terminal tokens. Sequences of %left, %nonassoc, %right and %token directives may be used to define the precedence of operators.
In expressions, the first used directive will have the lowest precedence, the last used the highest.

o %type <type> non-terminal ...
In combination with %union: associate the semantic value of a non-terminal symbol with a union field defined by the %union directive.

o %union union-definition
Acts identically to the bison and bison++ declaration: as with bison, a union is generated for the semantic type. The union is named STYPE. If no %union is declared, a simple stack-type may be defined using the %stype directive. If no %stype directive is used, the default stacktype (int) is used.

The following public members can be used by users of the parser classes generated by bisonc++ ('Parser Class':: prefixes are silently implied):

o LTYPE:
The parser's location type (user-definable). Available only when either %lsp-needed, %ltype or %locationstruct has been declared.

o STYPE:
The parser's stack-type (user-definable), defaults to int.

o Tokens:
The enumeration type of all the symbolic tokens defined in the grammar file (i.e., bisonc++'s input file). The scanner should be prepared to return these symbolic tokens. Note that, since the symbolic tokens are defined in the parser's class and not in the scanner's class, the lexical scanner must prefix the parser's class name to the symbolic token names when they are returned. E.g., return Parser::IDENT should be used rather than return IDENT.

o int parse():
The parser's parsing member function. It returns 0 when parsing has completed successfully, 1 if errors were encountered while parsing the input.

o void setDebug(bool mode):
This member can be used to activate or deactivate the debug-code compiled into the parsing function. It is available but has no effect if no debug code has been compiled into the parsing function. When debugging code has been compiled into the parsing function, it is active by default, but debug-code is suppressed by calling setDebug(false).
The following enumerations and types can be used by members of parser classes generated by bisonc++. When prefixed by Base:: they are actually protected members inherited from the parser's base class.

o Base::ErrorRecovery:
This enumeration defines two values:

    DEFAULT_RECOVERY_MODE,
    UNEXPECTED_TOKEN

DEFAULT_RECOVERY_MODE consists of terminating the parsing process. UNEXPECTED_TOKEN activates the recovery procedure whenever an error is encountered. The recovery procedure consists of looking for the first state on the state-stack having an error-production, and then skipping subsequent tokens until (in that state) a token is retrieved which may follow the error terminal token in that production rule. If this error recovery procedure fails (i.e., if no acceptable token is ever encountered) error recovery falls back to the default recovery mode, terminating the parsing process.

o Base::Return:
This enumeration defines two values:

    PARSE_ACCEPT = 0,
    PARSE_ABORT = 1

(which are of course the parse() function's return values).

The following private members can be used by members of parser classes generated by bisonc++. When prefixed by Base:: they are actually protected members inherited from the parser's base class.

o Base::ParserBase():
The default base-class constructor. Can be ignored in practical situations.

o void Base::ABORT() const throw(Return):
This member can be called from any member function (called from any of the parser's action blocks) to indicate a failure while parsing, thus terminating the parsing function with an error value 1. Note that this offers a marked extension and improvement of the macro YYABORT defined by bison++ in that YYABORT could not be called from outside of the parsing member function.

o void Base::ACCEPT() const throw(Return):
This member can be called from any member function (called from any of the parser's action blocks) to indicate successful parsing, thus terminating the parsing function.
Note that this offers a marked extension and improvement of the macro YYACCEPT defined by bison++ in that YYACCEPT could not be called from outside of the parsing member function.

o void Base::checkEOF():
Used internally by the parsing function. Not to be called otherwise.

o void Base::clearin():
This member replaces bison(++)'s macro yyclearin and causes bisonc++ to request another token from its lex() member, even if the current token has not yet been processed. It is a useful member when the parser should be reset to its initial state, e.g., between successive calls of parse(). In this situation the scanner will probably be reloaded with new information too (in the context of a flex-generated scanner by, e.g., calling the scanner's yyrestart() member).

o bool Base::debug() const:
This member returns the current value of the debug variable.

o void Base::ERROR() const throw(ErrorRecovery):
This member can be called from any member function (called from any of the parser's action blocks) to generate an error, and thus initiate the parser's error recovery code. Note that this offers a marked extension and improvement of the macro YYERROR defined by bison++ in that YYERROR could not be called from outside of the parsing member function.

o void error(char const *msg):
This member may be redefined in the parser class. Its default (inline) implementation is to write a simple message to the standard error stream. It is called when a syntactical error is encountered.

o void errorRecovery():
Used internally by the parsing function. Not to be called otherwise.

o void executeAction():
Used internally by the parsing function. Not to be called otherwise.

o int lex():
This member may be pre-implemented using the scanner option or directive (see above) or it must be implemented by the programmer.
It interfaces to the lexical scanner, and should return the next token produced by the lexical scanner, either as a plain character or as one of the symbolic tokens defined in the Parser::Tokens enumeration. Zero or negative token values are interpreted as 'end of input'.

o int lookup():
Used internally by the parsing function. Not to be called otherwise. See also below, section BUGS.

o void nextToken():
Used internally by the parsing function. Not to be called otherwise. See also below, section BUGS.

o void Base::pop():
Used internally by the parsing function. Not to be called otherwise.

o void print():
This member can be redefined in the parser class to print information about the parser's state. It is called by the parser immediately after retrieving a token from lex(). As it is a member function it has access to all the parser's members, in particular d_token, the current token value, and d_loc, the current token location information (if %lsp-needed, %ltype or %locationstruct has been specified).

o void Base::push():
Used internally by the parsing function. Not to be called otherwise.

o void Base::reduce():
Used internally by the parsing function. Not to be called otherwise.

o void Base::top():
Used internally by the parsing function. Not to be called otherwise.

The following private members can be used by members of parser classes generated by bisonc++. All data members are actually protected members inherited from the parser's base class.

o bool d_debug:
When the debug option has been specified, this variable (true by default) determines whether debug information is actually displayed.

o LTYPE d_loc:
The location type value associated with a terminal token. It can be used by, e.g., lexical scanners to pass location information of a matched token to the parser in parallel with a returned token. It is available only when %lsp-needed, %ltype or %locationstruct has been defined.
Lexical scanners may be offered the facility to assign a value to this variable in parallel with a returned token. In order to allow a scanner access to d_loc, d_loc's address should be passed to the scanner. This can be realized, for example, by defining a member void setLoc(LTYPE *) in the lexical scanner, which is then called from the parser's constructor as follows:

    d_scanner.setLoc(&d_loc);

Subsequently, the lexical scanner may assign a value to the parser's d_loc variable through the pointer to d_loc stored inside the lexical scanner.

o LTYPE *d_lsp:
The location stack pointer. Used internally by the parser. Not to be used otherwise.

o STYPE d_val:
The semantic value of a returned token or non-terminal symbol. With non-terminal tokens it is assigned a value through the action rule's symbol $$. Lexical scanners may be offered the facility to assign a semantic value to this variable in parallel with a returned token. In order to allow a scanner access to d_val, d_val's address should be passed to the scanner. This can be realized, for example, by defining a member void setSval(STYPE *) in the lexical scanner, which is then called from the parser's constructor as follows:

    d_scanner.setSval(&d_val);

Subsequently, the lexical scanner may assign a value to the parser's d_val variable through the pointer to d_val stored inside the lexical scanner.

o STYPE *d_vsp:
The semantic value stack pointer. Used internally by the parser. Not to be used otherwise.

o size_t/*unsigned*/ d_nErrors:
The number of errors counted by parse(). It is initialized by the parser's base class initializer, and is updated while parse() executes. When parse() has returned it contains the total number of errors counted by parse().

o int d_state:
The current parsing state. Used internally by the parsing function. Not to be used otherwise.

o int d_token:
The current token used internally by the parser.
The parser may modify the token value retrieved via lex(), so d_token may not be the value of the last token actually retrieved by lex().

o static PI s_productionInfo:
Used internally by the parsing function. Not to be used otherwise.

o static SR s_<nr>[]:
Here, <nr> is a numerical value representing a state number. Used internally by the parsing function. Not to be used otherwise.

o static SR *s_state[]:
Used internally by the parsing function. Not to be used otherwise.

In the file defining the parse() function the following types and variables are defined in the anonymous namespace. These are mentioned here for the sake of completeness, and are not normally accessible to other parts of the parser.

o ReservedTokens:
This enumeration defines some token values used internally by the parsing functions. They are:

    _UNDETERMINED_ = -2,
    _EOF_ = -1,
    _error_ = 256

These tokens are used by the parser to determine whether another token should be requested from the lexical scanner, and to handle error-conditions.

o SR (Shift-Reduce Info):
This struct provides the shift/reduce information for the various grammatical states. SR values are collected in arrays, one array per grammatical state. These arrays, named s_<nr>, where <nr> is a state number, are defined in the anonymous namespace as well. The SR elements consist of two unions, defining fields that are applicable to, respectively, the first, intermediate and the last array elements.
The first element of each array consists of (1st field) a StateType and (2nd field) the index of the last array element; intermediate elements consist of (1st field) a symbol value and (2nd field) (if negative) the production rule number reducing to the indicated symbol value or (if positive) the next state when the symbol given in the 1st field is the current token; the last element of each array consists of (1st field) a placeholder for the current token and (2nd field) the (negative) rule number to reduce to by default or the (positive) number of an error-state to go to when an erroneous token has been retrieved. If the 2nd field is zero, no error or default action has been defined for the state, and error-recovery is attempted.

o StateType:
This enumeration defines the type of the various grammar-states. They are:

    NORMAL,
    HAS_ERROR_ITEM,
    IS_ERROR_STATE

HAS_ERROR_ITEM is used for a state having at least one error-production. IS_ERROR_STATE is used for a state from which error recovery is attempted. So, while in these states, tokens are retrieved until a token from which parsing may continue is seen by the parser. All other states are NORMAL states.

o PI (Production Info):
This struct provides information about production rules. It has two fields: d_nonTerm is the identification number of the production's non-terminal, d_size represents the number of elements of the production rule.

All DECLARATIONS and DEFINE symbols not listed above but defined in bison++ are obsolete with bisonc++. In particular, there is no %header{ ... %} section anymore. Also, all DEFINE symbols related to member functions are now obsolete. There is no need for these symbols anymore, as they can simply be declared in the class header file and defined elsewhere.
EXAMPLE
Using a fairly worn-out example, we'll construct a simple calculator below. The basic operators as well as parentheses can be used to specify expressions, and each expression should be terminated by a newline. The program terminates when a q is entered. Empty lines result in a mere prompt. First an associated grammar is constructed. When a syntactical error is encountered, all tokens are skipped until the next newline and a simple message is printed using the default error() function. It is assumed that no semantic errors occur (in particular, no divisions by zero). The grammar is decorated with actions performed when the corresponding grammatical production rule is recognized. The grammar itself is rather standard and straightforward, but note the first part of the specification file, containing various other directives, among which the %scanner directive, resulting in a composed d_scanner object as well as an implementation of the member function int lex(). In this example, a common Scanner class construction strategy was used: the class Scanner was derived from the class yyFlexLexer generated by flex++(1). The actual process of constructing a class using flex++(1) is beyond the scope of this man-page, but flex++(1)'s specification file is mentioned below, to further complete the example.
Here is bisonc++'s input file:

    %filenames parser
    %scanner ../scanner/scanner.h

    // lowest precedence
    %token  NUMBER      // integral numbers
            EOLN        // newline
    %left   '+' '-'
    %left   '*' '/'
    %right  UNARY
    // highest precedence

    %%

    expressions:
        expressions evaluate
    |
        prompt
    ;

    evaluate:
        alternative prompt
    ;

    prompt:
        {
            prompt();
        }
    ;

    alternative:
        expression EOLN
        {
            cout << $1 << endl;
        }
    |
        'q' done
    |
        EOLN
    |
        error EOLN
    ;

    done:
        {
            cout << "Done.\n";
            ACCEPT();
        }
    ;

    expression:
        expression '+' expression
        {
            $$ = $1 + $3;
        }
    |
        expression '-' expression
        {
            $$ = $1 - $3;
        }
    |
        expression '*' expression
        {
            $$ = $1 * $3;
        }
    |
        expression '/' expression
        {
            $$ = $1 / $3;
        }
    |
        '-' expression %prec UNARY
        {
            $$ = -$2;
        }
    |
        '+' expression %prec UNARY
        {
            $$ = $2;
        }
    |
        '(' expression ')'
        {
            $$ = $2;
        }
    |
        NUMBER
        {
            $$ = atoi(d_scanner.YYText());
        }
    ;

Next, bisonc++ processes this file. In the process, bisonc++ generates the following files from its skeletons:

o The parser's base class, which is not modified by the programmer at all:

    #ifndef ParserBase_h_included
    #define ParserBase_h_included

    #include <vector>
    #include <iostream>

    namespace // anonymous
    {
        struct PI;
    }

    class ParserBase
    {
        public:
    // $insert tokens
        // Symbolic tokens:
        enum Tokens
        {
            NUMBER = 257,
            EOLN,
            UNARY,
        };

    // $insert STYPE
        typedef int STYPE;

        private:
            int d_stackIdx;
            std::vector<size_t> d_stateStack;
            std::vector<STYPE>  d_valueStack;

        protected:
            enum Return
            {
                PARSE_ACCEPT = 0,   // values used as parse()'s return values
                PARSE_ABORT  = 1
            };
            enum ErrorRecovery
            {
                DEFAULT_RECOVERY_MODE,
                UNEXPECTED_TOKEN,
            };

            bool    d_debug;
            size_t  d_nErrors;
            int     d_token;
            int     d_nextToken;
            size_t  d_state;
            STYPE  *d_vsp;
            STYPE   d_val;

            ParserBase();

            void ABORT() const throw(Return);
            void ACCEPT() const throw(Return);
            void ERROR() const throw(ErrorRecovery);
            void clearin();
            bool debug() const;
            void pop(size_t count = 1);
            void push(size_t nextState);
            void reduce(PI const &productionInfo);
            size_t top() const;

        public:
            void setDebug(bool mode);
    };

    inline bool ParserBase::debug() const
    {
        return d_debug;
    }

    inline void ParserBase::setDebug(bool mode)
    {
        d_debug = mode;
    }

    // As a convenience, when including ParserBase.h its symbols are
    // available as symbols in the class Parser, too.
    #define Parser ParserBase

    #endif

o The parser class parser.h itself. In the grammar specification various member functions are used (e.g., done() and prompt()). These functions are so small that they can very well be implemented inline. Note that done() calls ACCEPT() to terminate further parsing. ACCEPT() and related members (e.g., ABORT()) can be called from any member called by parse(). As a consequence, action blocks could contain mere function calls, rather than several statements, thus minimizing the need to rerun bisonc++ when an action is modified. Once bisonc++ had created parser.h it was augmented with the required additional members, resulting in the following final version:

    #ifndef Parser_h_included
    #define Parser_h_included

    // $insert baseclass
    #include "parserbase.h"

    // $insert scanner.h
    #include "../scanner/scanner.h"

    #undef Parser
    class Parser: public ParserBase
    {
    // $insert scannerobject
        Scanner d_scanner;

        public:
            int parse();

        private:
            void error(char const *msg);    // called on (syntax) errors
            int lex();                      // returns the next token from the
                                            // lexical scanner.
            void print();                   // use, e.g., d_token, d_loc

            void prompt();
            void done();

        // support functions for parse():
            void executeAction(int ruleNr);
            void errorRecovery();
            int lookup(bool recovery);
            void nextToken();
    };

    inline void Parser::error(char const *msg)
    {
        std::cerr << msg << std::endl;
    }

    // $insert lex
    inline int Parser::lex()
    {
        return d_scanner.yylex();
    }

    inline void Parser::print()     // use d_token, d_loc
    {}

    inline void Parser::prompt()
    {
        std::cout << "? " << std::flush;
    }

    inline void Parser::done()
    {
        std::cout << "Done\n";
        ACCEPT();
    }

    #endif

o To complete the example, the following lexical scanner specification was used:

    %{
    #define _SKIP_YYFLEXLEXER_
    #include "scanner.ih"
    #include "../parser/parser.h"
    %}

    %option yyclass="Scanner" outfile="yylex.cc"
    %option c++ 8bit warn noyywrap yylineno

    %%

    [ \t]+          // skip white space
    \n              return Parser::EOLN;
    [0-9]+          return Parser::NUMBER;
    .               return yytext[0];

    %%

o Since no member functions other than parse() were defined in separate source files, only parse() includes parser.ih. Since cerr and endl are used in the grammar's actions, a using namespace std or comparable statement is required. This was effectuated from parser.ih. Here is the implementation header declaring the standard namespace:

    // include this file in the sources of the class Calculator,
    // and add any includes etc. that are only needed for
    // the compilation of these sources.

    // include the file defining the parser class:
    #include "parser.h"

    // UN-comment if you don't want to prefix std::
    // for every symbol defined in the std. namespace:
    using namespace std;

The implementation of the parsing member function parse() is basically irrelevant, since it should not be modified by the programmer. It was written on the file parse.cc.

o Finally, here is the program offering our simple calculator:

    #include "parser/parser.h"

    int main()
    {
        Parser calculator;
        return calculator.parse();
    }

Note here that although the file parserbase.h, defining the parser class's base class, rather than the header file parser.h defining the parser class is included, the lexical scanner may simply return tokens of the class Calculator (e.g., Calculator::NUMBER rather than CalculatorBase::NUMBER).
In fact, using a simple #define - #undef pair, generated by bisonc++ at, respectively, the end of the base-class header file and just before the definition of the parser class itself, it is possible to assume in the lexical scanner that all symbols defined in the parser's base class are actually defined in the parser class itself. It should be noted that this feature can only be used to access the base class's enums and types. The actual parser class is not available by the time the lexical scanner is defined, thus avoiding circular class dependencies.
FILES
o bisonc++base.h: skeleton of the parser's base class;
o bisonc++.h: skeleton of the parser class;
o bisonc++.ih: skeleton of the implementation header;
o bisonc++.cc: skeleton of the member parse().

SEE ALSO

bison(1), bison++(1), bison.info (using texinfo), flex++(1)

Lakos, J. (2001) Large Scale C++ Software Design, Addison Wesley.
Aho, A.V., Sethi, R., Ullman, J.D. (1986) Compilers, Addison Wesley.
BUGS
The Semantic- and Pure parsers mentioned in bison++(1) are not implemented in bisonc++(1). According to bison++(1) the semantic parser was not available in bison++ either, while the pure parser was deemed 'not very useful'.

The member function void lookup() (< 1.00) was replaced by int lookup(). When regenerating parsers created by early versions of bisonc++ (versions before version 1.00), lookup's prototype should be corrected by hand, since bisonc++ will not by itself rewrite the parser class's header file.

Bisonc++ was based on bison++, originally developed by Alain Coetmeur (coetmeur@icdc.fr), R&D department (RDT), Informatique-CDC, France, who based his work on bison, GNU version 1.21. Bisonc++ version 0.98 and beyond is a complete rewrite of an LALR(1) parser generator, closely following the construction process as described in Aho, Sethi and Ullman's (1986) book Compilers (i.e., the Dragon book). It uses the same grammar specification as bison and bison++, and it uses practically the same options and directives as bisonc++ versions earlier than 0.98. Variables, declarations and macros that are obsolete were removed. Since bisonc++ is a completely new program, it will most likely contain bugs. Please report bugs to the author:
AUTHOR
Frank B. Brokken (f.b.brokken@rug.nl).
#include <sys/sunldi.h>
int ldi_ioctl(ldi_handle_t lh, int cmd, intptr_t arg, int mode, cred_t *cr, int *rvalp);
lh Layered handle.
cr Pointer to a credential structure used to open a device.
rvalp Caller return value. (May be set by driver and is valid only if the ioctl() succeeds).
cmd Command argument. Interpreted by driver ioctl() as the operation to be performed.
arg Driver parameter. Argument interpretation is driver dependent and usually depends on the command type.
mode Bit field that contains:
FKIOCTLland process), the caller must pass on the mode parameter from the original ioctl. This is because the mode parameter contains the FMODELS bits which:
I_PLINK Behaves as documented in streamio(7I). The layered handle lh should point to the streams multiplexer. The arg parameter should point to a layered handle for another streams driver.
I_UNPLINK.
EINVAL Invalid input parameters.
ENOTSUP Operation is not supported for this device.
These functions may be called from user or kernel context.
Hello, I am making a program that will draw ramps for a bike game. It does that by making many lines which form a ramp or other figures. For one of the ramps I have a small problem. In the game, if something is too smooth, it will slip and go too fast and fall (this is a bug). Because my program makes super smooth ramps, the bike slips and falls. To see what I am talking about go here. And press up. It will go super fast and start spinning. This is not supposed to happen. I want to make it less smooth by drawing an extra layer on top of the ramp.
#include <iostream>
#include <windows.h>
#include <conio.h>
#include <stdio.h>
#include <stdlib.h>
using namespace std;

int main()
{
    Sleep(5000);
    int sx = 700;
    int sy = 200;
    int startx[25];     // this is so the ramp will not be slippery
    int starty[25];     // this is so the ramp will not be slippery
    float count = 0;    // then we add 0.25
    float counti = 0;   // other count to see which element in the array to fill
    int ex = 700;
    int ey = 500;
    int i = 0;
    for (i = 0; i < 200; i++)
    {
        count += 0.25;
        if (count == 2)
        {
            count = 0;
            startx[(int)counti] = sx;
            starty[(int)counti] = sy;
            counti++;
        }
        sy += 1;
        sx -= 1;
        SetCursorPos(sx, sy);
        mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0);
        ex -= 3;
        SetCursorPos(ex, ey);
        Sleep(150);
        mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0);
        Sleep(150);
    }
    int d = 0;
    while (d < 24)
    {
        SetCursorPos(startx[d] - 1, starty[d] - 1);
        d++;
        mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0);
        SetCursorPos(startx[d] - 1, starty[d] - 1);
        Sleep(150);
        mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0);
        Sleep(150);
    }
}
To run the program, you should quickly switch to canvas rider drawer or any other painting program (like Windows Paint). But as you may see, the layer does not draw correctly. I can't figure out how to make it go right on top layer of ramp. So how do you think I can do this?
I have a long mp3 file hosted on a standard apache server (30 minutes long so far, but I would like it to work with longer sounds too). I'd like to start playback of this audio at a specified point. When attempting to use Flash ActionScript 3, my basic tests show that ALL the audio from the start to the position I choose is buffered before playback (Sound.byte

I have 2 signals, one containing audio data which is played on speakers. The second one contains mic data recording the speakers simultaneously. What I've done so far: align the signals in the time domain via correlation, then apply an FFT on the overlapping part of both signals and divide one by the other in order to achieve deconvolution. What am I doing wrong, as the resulting audio data is useless?

1) This is my MainActivity: package com.art.drumdrum; import android.app.Activity; import android.media.AudioManager; import android.media.SoundPool; import android.os.Bundle; import android.view.View; import android.widget.Button; public class MainActivity extends Activity { SoundPool soundPool = null; int kickId = 0; int snar

In my app I want music/tune playing in the background in the menus and game. I am using this in the onCreate() method: ourSong = MediaPlayer.create(SoundActivity.this, R.raw.beat2); ourSong.start(); However, I have to call this in each class and the song starts over again. I am sure this is not the right way to do this and would be grateful if som

This might be a general sound issue, but it concerns my app as well. I have noticed that the Android Sound-Engine (at least on Android v2.3.3) needs to be "activated" before a sound may be played. After the sound has been played, the Sound-Engine deactivates again after a second. This is hearable as a "click" sound on activation and deactivation. The real problem is that the first s

I'm using Java Sound to play back a number of sound samples with the Clip.start() method. Given this, what is the best way to control the playback volume? In particular, I'm very keen that the solution will work reliably across platforms.

In my app I'm required to play a sound from a volume of 0 to maximum system volume (it's an alarm app). I currently use AVAudioPlayer to play the sound and MPMusicPlayerController to maximize the device's volume for the duration of the alarm: MPMusicPlayerController.applicationMusicPlayer.volume = 1.0; NSString *fileName = [NSString stringWithFormat:alarm.soun

Is there any AS3 library or code-snippet that can create altered versions of a Sound object on-the-fly (at runtime)? Either based on: For example, say you have a "dry" sound of a gun-shot. You could:
18 January 2012 23:59 [Source: ICIS news]
LONDON (ICIS)--January prices for European acrylic acid (AA) and acrylate esters were largely stable from the previous month, although some grades saw a slight reduction.
This follows months of falling prices, sources said on Wednesday, as purchasing activity picked up in the New Year.
AA prices in January were assessed at €1,780–1,840/tonne ($2,280–2,357/tonne) FD (free delivered) NWE (northwest Europe), steady from the previous month.
Methyl acrylate (Methyl-A) prices were assessed at €1,740–1,820/tonne FD NWE, unchanged from December.
Ethyl acrylate (Ethyl-A) prices were assessed at €1,795–1,850/tonne, also steady from the previous month.
Butyl acrylate (Butyl-A) prices were assessed at €1,780–1,870/tonne, a reduction of €10/tonne at the low end, while 2-ethylhexyl acrylate (2-EHA) numbers were assessed at €1,970–2,125/tonne, unchanged from December.
“I am not seeing lower levels than December and the overall perception is that the price trend is upwards as of February,” said one major supplier on Wednesday.
Another major European producer also confirmed it had agreed January contracts at a rollover, adding that it was “disappointed” that it did not secure any increases this month.
“Demand is certainly there this month, but the test will be to see if this continues into February,” the producer explained.
Another major buyer, who also agreed a rollover, said demand this month was largely due to restocking, but that higher propylene costs in February would impact acrylates pricing.
Although there were some aggressive offers for spot material heard in January, players believed that the volumes involved were minimal and therefore not representative of the market as a whole.
“Although we also hear some low quotations we realised that once you want to buy at this level there is no more cargo available,” said one trader.
“It might be possible that some positions on Butyl-A and 2-EHA have been cleared at low prices in January but in general the tendency is going in the other direction.”
($1 = €0.79)
For more on acrylic acid and acrylate esters visit ICIS chemical intelligence
Follow Truong Mellor on Twitter for daily tweets on the acryl
Hopcroft Karp algorithm
The Hopcroft–Karp algorithm takes a bipartite graph as input and produces a maximum-cardinality matching as output. It runs in O(E√V) time in the worst case. Let us define a few terms before we discuss the algorithm.
Augmenting paths
Given a matching M, an augmenting path is an alternating path (one whose edges alternate between being outside and inside M) that starts and ends on free vertices. Any single-edge path that connects two free vertices is an augmenting path.
Algorithm.
The Hopcroft–Karp algorithm repeatedly increases the size of a partial matching by finding augmenting paths: sequences of edges that alternate between being in and out of the matching, such that swapping which edges of the path are in and which are out produces a matching that is larger by one. However, instead of finding just a single augmenting path per iteration, the algorithm finds a maximal set of vertex-disjoint shortest augmenting paths. As a result, only O(√V) iterations are needed. Since each iteration's BFS or DFS takes O(V + E) time (i.e. O(E), as typically E ≥ V), the overall time complexity of the algorithm is O(E√V).
Steps
1) Initialize the matching M as empty.
2) While there exists an augmenting path P, remove the matching edges of P from M and add the non-matching edges of P to M. (This increases the size of M by 1, since P starts and ends with a free vertex, i.e. a vertex that is not part of the matching.)
3) Return M.
To understand the working of the algorithm, let's take an example of a bipartite graph of 8 vertices.
In the initial state of the algorithm, every single-edge path between free vertices is an augmenting path, so we choose one and add it to the matching. In the intermediate states we check for augmenting paths with a BFS; if an augmenting path is found, we then start from the free vertices and search for a matching with a DFS. If a matching is found we update the result, and we repeat this as long as an augmenting path is available. In the final state, when no augmenting path is possible, we have reached the maximum matching, which is 4 in the above example.
Implementation
Code in C++11
// C++ implementation of the Hopcroft–Karp algorithm for maximum matching
#include <iostream>
#include <cstdlib>
#include <queue>
#include <list>
#include <climits>

#define NIL 0
#define INF INT_MAX

// A class to represent a bipartite graph for the Hopcroft–Karp implementation
class BGraph
{
    // m and n are the numbers of vertices on the left and right
    // sides of the bipartite graph
    int m, n;

    // adj[u] stores the adjacency list of left-side vertex u.
    // u ranges from 1 to m; 0 is used for the dummy vertex NIL.
    std::list<int> *adj;

    // arrays used by hopcroftKarpAlgorithm()
    int *pair_u, *pair_v, *dist;

public:
    BGraph(int m, int n);       // constructor
    void addEdge(int u, int v); // to add an edge

    // Returns true if there is an augmenting path
    bool bfs();

    // Adds an augmenting path, if there is one beginning at u
    bool dfs(int u);

    // Returns the size of the maximum matching
    int hopcroftKarpAlgorithm();
};

// Returns the size of the maximum matching
int BGraph::hopcroftKarpAlgorithm()
{
    // pair_u[u] stores the partner of u in the matching on the left side.
    // If u doesn't have any partner, pair_u[u] is NIL.
    pair_u = new int[m + 1];

    // pair_v[v] stores the partner of v in the matching on the right side.
    // If v doesn't have any partner, pair_v[v] is NIL.
    pair_v = new int[n + 1];

    // dist[u] stores the BFS distance of left-side vertex u
    dist = new int[m + 1];

    // Initialize NIL as the partner of all vertices
    for (int u = 0; u <= m; u++)
        pair_u[u] = NIL;
    for (int v = 0; v <= n; v++)
        pair_v[v] = NIL;

    int result = 0;

    // Keep updating the result while an augmenting path is possible
    while (bfs())
    {
        // Find free vertices to check for a matching
        for (int u = 1; u <= m; u++)
            // If the current vertex is free and there is an augmenting
            // path from it, increment the result
            if (pair_u[u] == NIL && dfs(u))
                result++;
    }
    return result;
}

// Returns true if an augmenting path is available, else false
bool BGraph::bfs()
{
    std::queue<int> q; // an integer queue for the BFS; holds left-side vertices only

    // First layer of vertices (distance 0)
    for (int u = 1; u <= m; u++)
    {
        // If this is a free vertex, add it to the queue
        if (pair_u[u] == NIL)
        {
            dist[u] = 0; // u is not matched, so its distance is 0
            q.push(u);
        }
        // Else set the distance as infinite so that this vertex
        // is considered later via the BFS
        else
            dist[u] = INF;
    }

    // Initialize the distance to NIL as infinite
    dist[NIL] = INF;

    while (!q.empty())
    {
        // Dequeue a vertex
        int u = q.front();
        q.pop();

        // If this vertex can provide a shorter path to NIL
        if (dist[u] < dist[NIL])
        {
            // Get all the vertices adjacent to the dequeued vertex u
            std::list<int>::iterator it;
            for (it = adj[u].begin(); it != adj[u].end(); ++it)
            {
                int v = *it;

                // If the partner of v has not been considered so far,
                // i.e. the edge (v, pair_v[v]) is not yet explored
                if (dist[pair_v[v]] == INF)
                {
                    // Consider the partner and push it to the queue
                    dist[pair_v[v]] = dist[u] + 1;
                    q.push(pair_v[v]);
                }
            }
        }
    }

    // If we could come back to NIL using an alternating path of distinct
    // vertices, then an augmenting path is available
    return (dist[NIL] != INF);
}

// Returns true if there is an augmenting path beginning at free vertex u
bool BGraph::dfs(int u)
{
    if (u != NIL)
    {
        std::list<int>::iterator it;
        for (it = adj[u].begin(); it != adj[u].end(); ++it)
        {
            // Adjacent vertex of u
            int v = *it;

            // Follow the distances set by the BFS
            if (dist[pair_v[v]] == dist[u] + 1)
            {
                // If the DFS for the partner of v also returns true
                if (dfs(pair_v[v]) == true)
                {
                    // A new matching is possible; store it
                    pair_v[v] = u;
                    pair_u[u] = v;
                    return true;
                }
            }
        }

        // There is no augmenting path beginning at u
        dist[u] = INF;
        return false;
    }
    return true;
}

// Constructor for initialization
BGraph::BGraph(int m, int n)
{
    this->m = m;
    this->n = n;
    adj = new std::list<int>[m + 1];
}

// Function to add an edge from u to v
void BGraph::addEdge(int u, int v)
{
    adj[u].push_back(v); // add v to u's list
}

int main()
{
    int v1, v2, e;
    std::cin >> v1 >> v2 >> e; // vertices of the left side, right side, and edges

    BGraph g(v1, v2);

    int u, v;
    for (int i = 0; i < e; ++i)
    {
        std::cin >> u >> v;
        g.addEdge(u, v);
    }

    int res = g.hopcroftKarpAlgorithm();
    std::cout << "Maximum matching is " << res << "\n";
    return 0;
}
Sample Input and Output
Input:
// vertices of the left and right side and total edges
// the bipartite graph shown in the above example
4 4 6
1 1
1 3
2 3
3 4
4 3
4 2

Output:
// size of maximum matching
Maximum matching is 4
Complexity
Time complexity
- The Hopcroft–Karp algorithm takes O(E√V) time in the worst case.
Space complexity
- The Hopcroft–Karp algorithm takes O(V) space in the worst case.
Awesome video. I really liked it, and it refreshed my concepts about polymorphic associations. Thanks :)
Here is a Pro episode suggestion - take this and add nested comments and some AJAX to it :)
I'd be sure to subscribe for that. I've not found a decent example of nesting before.

It took me forever to build, but I did it from scratch a few years back on a project that I still have yet to finish. It works slick, but I am sure it could be done better than the way I did it. I used the ancestry gem and a few RailsCasts episodes to cobble it together.
How would you suggest getting all the comments a user's posts collectively have? For example @user.comments.all throws the error
ActiveRecord::HasManyThroughAssociationNotFoundError at / Could not find the association :commentable in model User
Is this something to consider in a social networking site? If I would have say comments polymorphically associated with various models, wouldn't it be a major hit on one table all the time?
Since they are reads, I think that so long as your database server can handle it, it doesn't matter how many connections read from the same table. Reads don't lock the table/row like writes do since a read cannot cause data loss. You shouldn't experience any loss of speed if many things are reading the same table.
The problem with polymorphic models and tables is that there is no way to keep the database from becoming corrupt, because there are no FK constraints.
I would suggest that you please either put in minimal security on these screencasts or at least mention that the implementation is very insecure and to check out the Rails security guidelines. I know the case can be made that security is out of scope for this discussion, but (a) at one point you reference how this method is "more secure" when talking about the hidden form field, and (b) you leave xss/injection etc all wide open so that body may be used to by malicious users to extend reach.
Can you please elaborate on this part `you leave xss/injection etc all wide open so that body may be used to by malicious users to extend reach`? I'd like to understand the security risk a little more.
How would this be handled if we only want the user to see their own comments, but not anyone else's?
You could do that by scoping the comments further by adding ".where(user: current_user)" to the query in the controller.
I found some of this information useful on how to test these polymorphic comment features at...
Interesting way to do this. I created an app that does comments, but I did it with the whole post has_many :comments, comments belongs_to post, and resource nesting. It took me hours to figure out how to display comments. What are the pros and cons of that way versus this way?
One great thing about this episode is that the whole polymorphic thing finally clicks for me, because for the longest time I still didn't really understand it despite reading about it repeatedly. In all of my, albeit smallish, projects I never used it; now that I finally understand it, I have some idea of when to use a polymorphic association.
Thanks so much! I signed up for GoRails based on this episode and the omniauth twitter episode!
The main difference is that you probably have tied your comments to the Post model. With polymorphism, you could have comments on the Post model, the User model, or any other model you've got.
And thank you a bunch for subscribing!!
If you don't go with polymorphism and you want comments on multiple models, you will need to have two different tables for comments, like PostComments and UserComments, but why do that when you could combine them into just Comments? Really that's the main difference. You reduce duplication there.
Ah I see that makes sense. What about foreign key and rails integrity? Is that ever an issue?
You shouldn't have any trouble. Rails handles it well. But you're right that with polymorphism you don't get the same database level enforcement like you would with the individual associations.
I am trying to add a delete comment button to this. But it is sending me to the articles controller destroy action. Please can you add how to add a delete button to the comment? I'm stuck. Thanks very much!
Hey Melanie!
For deleting comments, it might be useful to add a "resources :comments" to your routes that isn't inside another resources block. Then you can create a regular CommentsController with a destroy action like normal. That way you can delete any comment as long as you know the ID of it and not worry about whether it's a film or actor comment because that doesn't really matter when you're deleting a comment.
To add the delete link to each comment in the view, you can say: <%= link_to "Delete", comment, method: :delete %> and that will make a DELETE request to the /comments/1 url, which will trigger the destroy action.
That should do it! If you want to go over and above, you can also make it as a "remote: true" link so that you can return some JS to remove the item from the page to make it AJAXy and nicer to use.
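Assembled into one place, Chris's suggestion might look like this (a sketch based on the advice above; the fallback redirect and variable names are assumptions, not code from the episode):

```ruby
# config/routes.rb -- a top-level comments resource alongside the nested ones
resources :comments, only: [:destroy]

# app/controllers/comments_controller.rb
class CommentsController < ApplicationController
  before_action :set_comment, only: [:destroy]

  def destroy
    @comment.destroy
    # Send the user back to whatever page they deleted the comment from
    redirect_back fallback_location: root_path
  end

  private

  def set_comment
    @comment = Comment.find(params[:id])
  end
end

# In the view, each comment gets a delete link:
# <%= link_to "Delete", comment, method: :delete %>
```

Because the comment's id alone identifies it, this controller works no matter which commentable type the comment belongs to.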
Hi Chris. Thanks very much for the response. I tried adding this to my routes:
resources :comments, only: [ :destroy]
and this to my existing comments controller (which only had the create method in it - per your tutorial)
def destroy
@comment.destroy
respond_to do |format|
format.html { redirect_to data_url }
format.json { head :no_content }
end
end
When I try this, I get an error that says:
undefined method `destroy' for nil:NilClass
Any ideas on what's gone awry?
You might be needing to add a before_action for the destroy action called set_comment to set the @comment variable. It's saying that @comment is nil there so that would be it. You can just set @comment = Comment.find(params[:id]) in the before action and you should be set.
Hi Chris, I'm still struggling along in trying to get this set up. My current issue is with my comments policy update function. Your tutorial shows how to define update as set out below (which I have tried in both my article policy and my comment policy:
Article Policy:
def update?
#user && user.article.exists?(article.id) -- I have also tried this on this suggestion of someone on stack overflow).
user.present? && user == article.user
end
Comment Policy:
def update?
user.present? && user == comment.user
end
I keep getting a controller action error which says: undefined method `user' for #<class:0x007f9e24fa7cf0>
It highlights the update definition. Has something changed in pundit that requires a different form of expression for this update function? Can you see what might have gone wrong? Thanks very much
That looks correct. It sounds like the comment object is potentially the class and not the instance of the comment.
Does your controller have " authorize @comment" in it? And is your @comment variable set to an individual record?
Yes - my articles controller update action has:
def update
# before_action :authenticate_user!
authorize @article
respond_to do |format|
# if @article.update(article_params)
# format.json { render :show, status: :ok, location: @article }
# else
# format.html { render :edit }
# format.json { render json: @article.errors, status: :unprocessable_entity }
# end
# end
if @article.update(article_params)
format.json { render :show, status: :ok, location: @article }
else
format.json { render json: @article.errors, status: :unprocessable_entity }
end
format.html { render :edit }
end
end
Hi - what part would that be?
def set_article
@article = Article.find(params[:id])
authorize @article
end
That's exactly what I need. I might suggest removing the authorize @article in this function and keeping the one in your update action instead. Everything else looks right, so you might need to also send me the full error logs to see where exactly it broke.
The error log says:
Completed 500 Internal Server Error in 56ms (ActiveRecord: 28.7ms)
ActionView::Template::Error (undefined method `user' for #<class:0x007f9e1a734af0>):
22: <%= comment.created_at.try(:strftime, '%e %B %Y') %>
23: </div>
24:
25: <% if policy(Comment).update? %>
26: <%= button_to 'Edit', polymorphic_path([commentable, comment]), :class => 'btn btn-large btn-primary' %>
27: <% end %>
28: <% if policy(Comment).destroy? %>
app/policies/comment_policy.rb:13:in `update?'
app/views/comments/_display.html.erb:25:in `block in _app_views_comments__display_html_erb___34367520595054411_70158511210960'
app/views/comments/_display.html.erb:12:in `_app_views_comments__display_html_erb___34367520595054411_70158511210960'
app/views/articles/show.html.erb:68:in `_app_views_articles_show_html_erb__2619839868106814513_70158550563960'
Ah ha! So your errors is in the view, not your controller. :)
Change line 25 to say <% if policy(@comment).update? %>. Like I first guessed, you were referencing the class Comment, and not the individual record "@comment". That should do it!
Hi Chris - when I try that, I get this error: Pundit::NotDefinedError in Articles#show
Showing //app/views/comments/_display.html.erb where line #25 raised:
unable to find policy of nil
Also, It's strange that the form of expression the way I had it works in delete, but not in update
It sounds like you're passing in a variable that's nil then. Are you sure you're before_action :set_comment is being called?
Not sure how to check that. It's in the controller. I've followed the steps as set out in your tutorial. I'm not sure what I've messed up. I'll go back and watch the video again (take 30 might do it).
I think you're passing in that variable into a partial actually(?), so that wouldn't actually solve your problem. You may need to make sure you're passing the right comment variable into your pundit stuff in each case.
If I change the view so that it's:
<% commentable.comments.each do | comment | %>
<div class="well">
<%= comment.opinion %>
<div class="commentattributionname">
<%= comment.user.full_name %>
</div>
<div class="commentattributiontitle">
<%= comment.user.formal_title %>
</div>
<div class="commentattributiondate">
<%= comment.created_at.try(:strftime, '%e %B %Y') %>
</div>
<% if policy(comment).update? %>
<%= button_to 'Edit', polymorphic_path([commentable, comment]), :class => 'btn btn-large btn-primary' %>
<% end %>
<% if policy(comment).destroy? %>
<%= button_to 'Delete', polymorphic_path([commentable, comment]), method: :delete, :class => 'btn btn-large btn-primary' %>
<% end %>
</div>
then the delete still works (it did with 'Comment' instead of 'comment', but the update shows an error as:
No route matches [POST] "/articles/10/comments/29"
In my routes, I have:
resources :articles do
collection do
get 'search'
end
resources :comments, module: :articles
end
resources :comments, only: [ :update, :destroy]
You must link to the route that matches the destroy action. You're linking to a nested route, but you want to link to comments route directly.
Replace the polymorphic_path([commentable, comment]) in your button_to's to simply be comments_path(comment)
When I try:
<% if policy(comment).update? %>
<%= button_to 'Edit', comments_path(comment), :class => 'btn btn-large btn-primary' %>
<% end %>
I get this error:
Routing Error
No route matches [POST] "/articles/8/comments/30"
I'm baffled by the idea that update is defined in the same comments controller as delete. So far delete is working (using the polymorphic nested path), but update has errors.
Well, that's a button_to, which is a POST, but update is actually an UPDATE request. You'd need to add the :method => :update to the button_to here.
Your destroy action should be put inside the regular CommentsController and should be in the same place as these. The only reason you need the nested routes and controllers is for helpers that make creating the comments (and referencing the original object you're commenting) on a little simpler. To destroy any comment, it doesn't matter what the original object was, you can just delete the comment itself if that makes sense.
Hi, still no good. Each of the create, update and destroy methods are in the ordinary comments controller. When I try:
<% if policy(comment).update? %>
<%= button_to 'Edit', comments_path(comment), :method => :update, :class => 'btn btn-large btn-primary' %>
<% end %>
<% if policy(comment).destroy? %>
<%= button_to 'Delete', polymorphic_path([commentable, comment]), method: :delete, :class => 'btn btn-large btn-primary' %>
I get this error:
undefined method `comments_path' for #<#<class:0x007f9e26697c30>:0x007f9e1a6daff0>
On top of that, now my delete action doesn't work either (it did prior to this set of changes).
Should I go back to capital 'C' in comment for destroy? e.g.: <% if policy(Comment).destroy? %>
No, you'll still want that to reference the variable and not the class.
Do you have a github repo I can show you some fixes for this? It's kinda hard to give guidance in the comments. :)
Hi Chris - I appreciate your efforts to help. The repo is private so I can't share it. Thanks anyway. I'll keep trying. It's been 3 years trying to grasp the basics and I'm still waiting for the penny to drop.
You'll get there! I think you can add me as a collaborator to the private project temporarily if you wanted, but no worries if not.
I would also recommend playing a lot with plain Ruby because that will help wrap your head around all these things in Rails. It's mostly just regular old Ruby code just connected in various ways. The Ruby Pickaxe book and Metaprogramming Ruby are both really good.
Feel free to shoot me some emails as well! My email's on the about page I believe.
Chris, Fantastic episode here. Thank you very much.
I got it to work - mostly - in my set-up but I encounter a little redirect problem.
In my case my association is a "noteable" - referring to Notes.
My resources and namespace profile is:
namespace :navigate do
  resources :boks, :only => [:show] do
    resources :tools, :only => [:show, :index]
      resources :notes, module: :processus
    resources :processus do
      resources :notes, module: :processus
The problem I have is with the NotesController for the create/update of the notes. When I try to do a redirect I am finding that I am missing the information about the "bok" entity.
In my controller I have this:
def create
@note = @noteable.notes.new note_params
@note.user = current_user
@note.save
redirect_to [:navigate, @bok, @noteable], notice: "Your note was successfully created."
end
Really the redirect_to should send me back to the right place, however the @bok entity is not present at all...mostly because at this stage I don't actually need it.
What would be the recommended approach to dealing with this nested situation?
Thanks!
Chris, awesome episode! I'm trying to implement something similar, but before I dive deep into it I'd like to make sure I go down the right path. So I will do the exactly same, except comments are gonna have replies. My guess is if I have the right polymorphic setup for the comments, then I can just setup a simple `has_many + belongs_to` relationship between the comment model and the reply model, so from the reply's perspective it doesn't matter if the comment is polymorphic or not since every comment will have its unique id. Is this right?
Yeah, you can have a comment as a commentable, allowing you to have Comments with comments if you like. That's how you would normally setup threaded comments. You might want to put some limits on the nesting so you don't get threads that are too far nested in. Facebook limits it to one layer for example.
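The threaded setup Chris describes, a comment that is itself commentable, can be sketched with the same polymorphic association (model code is an assumption based on the episode's naming, not from the episode itself):

```ruby
class Comment < ActiveRecord::Base
  belongs_to :commentable, polymorphic: true
  # Replies are just comments whose commentable is another comment
  has_many :comments, as: :commentable
end
```

Limiting nesting depth (as Facebook does) would then be a validation or a display decision on top of this.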
In the comments controller when I try to do
@comment.user = current_user
it throws an error (no user method for the User class) unless I do
@comment.user_id = current_user.id
The problem is, when I try to show the username associated with a comment, I need to do queries instead of accessing the object. Any idea how to fix this?
Question Chris - I'm trying to show the last 5 comments in the show view. I can't get the query string right. Thanks for your help!
You could change it to the following:
<% commentable.comments.order(created_at: :desc).limit(5).each do |comment| %>
I should have mentioned that I'm showing the comments on a dashboard view that shows info from multiple models. So what you provided throws an 'undefined local variable or method `commentable'' error.
For example I have a contacts model that is commentable and I would like to show the comments from that model on the dashboard(index)
Just ignore that part and add the order and limit functions to your call when you retrieve comments. That's the important bit there.
Hi Chris!
Awesome video!
I'm having problems with current_user being nil in the base CommentsController where we set
@comment.user_id = current_user
current_user seems to return the right user_id in my other controllers. I have looked around but I haven't been able to pinpoint the problem (cookie problems, maybe?)
Thank you in advance and for the really helpful videos!
yes, the user is signed in when I post a comment. I also have the before_action :authenticate_user! to make sure there is a logged in user available.
Hmm, I was hoping it wasn't. Usually it's just that the user isn't signed in. Otherwise...I'm not entirely sure. There aren't a whole lot of places to go check to make sure things are correct aside from that.
Possibly just a typo in your comment, but make sure you've got:
@comment.user_id = current_user.id
If you specify user_id, then you need to specify ID on the user, or you can just assign the object to the association alternatively:
@comment.user = current_user
That was it!
If I use @comment.user = current_user I get an "undefined method `user'" error, so I changed it to .user_id but never specified the ID of current_user.
Thank you very much!
I love this nifty bit of code. I'm wondering how would you go about using will_paginate to paginate the comments?
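With the will_paginate gem (an assumption; the thread never answers this question), pagination would hang off the same scoped query:

```ruby
# In the controller action that loads the page:
@comments = commentable.comments.order(created_at: :desc)
                       .paginate(page: params[:page], per_page: 10)

# In the view:
# <%= will_paginate @comments %>
```

Because `paginate` chains like any other Active Record scope, it works the same whether the commentable is a film, an actor, or anything else.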
I forgot about this. I watched it a year ago and used it in my last attempt at implementing these associations. And then I forgot about it. I'm back on track again, thanks for this (again). I'm struggling to figure out how to filter the index of the comments resource by an attribute (say :status == 'published'). I can't do that in the regular way because the route is looking for a prefix (of film/actor).
Hey Melanie! :)
If you wanted a route to get all published comments (not ones scoped to a commentable type) you could add a
resources :comments that was not nested in your routes and use that.
resources :comments
resources :actors do
resources :comments, module: :actors
end
resources :films do
resources :comments, module: :films
end
And then you could make a comments_controller.rb that worked for all comments for any object. Is that what you're looking for?
Hi Chris, I'm trying to make an index view (for comments) that shows all the comments on a specific film that are published. If comments wasn't a polymorphic resource, I could add published: true to the index path. But, since the comments view belongs to both actor and film I can't prefix the index with the parent name in the path. So I'm a bit stuck for what to do.
Ah, I gotcha. So in your
films/comments_controller.rb you could say:
def index
end
And then you would want to build the index.html.erb to display all those comments. Is that what you're looking for?
Thank you Chris!
Just coming back here to say thanks! I watched this several times and it eventually sunk in. I managed to set it up a few months ago and it's been working quite well. I will admit it did take me some time to grasp the concept though.
That's great to hear! :D And I agree, it's a tough one to wrap your head around the first time.
Hi all, I am getting an issue with my routes when I follow this methodology. I get uninitialized constant Squeals.

squeal.rb

class Squeal < ActiveRecord::Base
  has_many :comments, as: :commentable
end

comment.rb

class Comment < ActiveRecord::Base
  belongs_to :commentable, polymorphic: true
end

squeals/comments_controller.rb

class Squeals::CommentsController < CommentsController
  before_action :
Hi Chris, I want to add comments to multiple films at the same time. Suppose I want to add comments section on index page of films. Can you please help me how can i do that.
Thanks, it's really helpful. If we want to delete the comment from an actor or a film, do we need to define a destroy method, or else use _destroy for nested attributes? Can anyone help me out?
@excid3:disqus
How would you need to modify this to work with deeply nested resources?
i.e:
resources :projects do
resources :project_users, path: :users, module: :projects
resources :posts do
resources :comments, module: :posts
end
end
I am trying to use commentable with ActionCable, but I keep getting the following error:

d6f951d17) from Async(default) in 20.73ms: ActionView::Template::Error (undefined method `comments' for #<class:0x007f5cec08b880>):

Someone advised me to look for the value of comments, and I realized it is not defined. But my question is: is it necessary to define a relationship between the Comment and, let's say, the Article class?
class Comment < ApplicationRecord
belongs_to :commentable, polymorphic: true #, optional: true
belongs_to :article, optional: true <--- this point
validates :body, presence: true, length: { minimum: 5, maximum: 1000 }
after_create_commit {CommentBroadcastJob.perform_later self}
end
Hi Chris! Can this form be used on an index page? I'm noticing that I only get it to work on a show page. Any help?
It would great if you could expand this to include comment replies. There are no great tutorials on how to accomplish this task.
Been planning on doing that soon. Thinking about doing this in a series where we create an embeddable Javascript comment system like Disqus.
That would be awesome. Thanks!
How do we delete these polymorphic comments when we can't use actor_comment_path or film_comment_path directly? I used a case statement; any smarter idea?
How can I get all of the users that have commented? What I would like is to be able to:
@post.commenters.each do |u|
something here...
end
To make it as commenters you'd need to set an alias on the users association on comments. Hope this helps. Sorry I don't have exact code for it.

@post.comments.users would work if you had a has_many :users association on your Comment model.
Got it! In my comments.rb:
has_many :commenters, through: :comments, source: :user
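One note on that last line: for a has_many :through to resolve, the association has to live on the model that has_many :comments (e.g. Post), not in comment.rb, since the :through target must be an association on the same model. Roughly (a sketch; the Post model name is an assumption):

```ruby
class Post < ActiveRecord::Base
  has_many :comments, as: :commentable
  has_many :commenters, through: :comments, source: :user
end

# @post.commenters then returns the users who have commented on the post:
# @post.commenters.each { |u| ... }
```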
Today we shipped the September CTP of F#!!!! Evviva!! (Hooray!!) Read this blog post about it. To celebrate I decided to share one of my several F# projects. It might make a good sample; sort of a crash course in F#.
This application downloads stock prices, dividends and splits from Yahoo Historical Prices and performs computations on them. I will describe it file by file.
common.fs
I always have such a file in my projects. It is a repository for the types that I'm going to use in my program and the functions that are common across multiple modules. If I have a large program I also have a types.fs just for the types.
This one starts like this:
#light
open System
[<Measure>] type money
let money (f:float) = f * 1.<money>
[<Measure>] type shares
let shares (f:float) = f * 1.<shares>
[<Measure>] type volume
let volume (f:float) = f * 1.<volume>
[<Measure>] type rateOfReturn
let rateOfReturn (f:float) = f * 1.<rateOfReturn>
The first line instructs the compiler to use the lightweight syntax. You don't want to know what the heavyweight syntax is. Just always put such a line at the start of your files. The next line opens up the System namespace. Then the good stuff starts.
I'm defining some units of measure. The simplest way to think about units of measure is: they are a type system for floats. You can do much more than that, but it is a good first approximation. For example, you cannot now sum a money type and a volume type. Also, for each one I define a function that converts a normal float to it (if you come from a C# background, F# floats are doubles).
Then I define the data model for my application:
type Span = { Start: DateTime; End: DateTime }
type Price = { Open: float<money>; High: float<money>; Low: float<money>; Close: float<money>; Volume: float<volume> }
type Event =
| Price of Price
| Split of float
| Div of float<money>
type Observation = { Date: DateTime; Event: Event}
The first record that I define, Span, represents the difference between two dates. It is just a little useful thing. A more fundamental record is Observation. An Observation is defined as something that happens on a particular Date. That something, an Event, can be one of three things: a Price, a Split or a Div. A Price is another record with a bunch of float<money> fields and one float<volume> field. If you go to the Yahoo site, you'll see what it represents.
A Split is simply a float. Why not a float<...>? Because it is just a number, a factor to be precise. It represents the number of new shares divided by the number of old shares. float<shares> / float<shares> = float. A Div is a float<money>.
This is one way to model the problem. Infinite other ways are possible (and I tried many of them in a C# version of this code that ended up using polymorphism). Note that all of the types are records except Event, which is a discriminated union.
Records are read-only containers of data. Discriminated unions are what the name says: things that can be one of multiple things (even recursively). They are rather handy for representing the structure of the data. We will see how to use them with the match operator in upcoming posts.
Also notice the following common pattern in F# (and functional programming in general): you define your data and then you define transformations over it. F# has a third optional step, which is to expose these transformations as methods of .NET objects.
We are almost done here. A handful of other functions are in my file:
let span sy sm sd ey em ed = {Start = new DateTime(sy, sm, sd); End = new DateTime(ey, em, ed)}
let date y m d = new DateTime(y, m, d)
let now () = DateTime.Now
let idem x = x
let someIdem x = Some(x)
span is a function that creates a Span given the relevant info. date creates a date given year, month and day. now returns the current date. idem is a function that returns its parameter (you'll see how that can possibly be useful). someIdem wraps its parameter in an option, returning Some(x). I could write all my code without these things, but it looks better (to me) with them.
Notice that in F# code you have access to all the functions and types in the .NET Framework. It is just another .NET language: it can create or consume .NET types.
All right, we are done for part one. In part II there will be some real code.
You probably want to change your definition of now:
"let now() = DateTime.Now"
instead of
"let now = DateTime.Now"
The current definition binds a constant value.
Yeah, you are right. It is a bug.
In this case it doesn't really matter because I use it just for my test cases. But it is a bug. I'll fix it in the post.
A simple request is a no-brainer:
import urllib2
import json
import sys

ACCESS_KEY = 'YOUR ACCESS KEY'
WEBSITE = 'YOUR WEBSITE NUMBER'
url = '' % WEBSITE
headers = { 'Authorization' : 'Bearer ' + ACCESS_KEY }
id = sys.argv[1]  # OK a HACK, I admit.
req = urllib2.Request(url + id, headers=headers)
response = urllib2.urlopen(req)
response_data = response.read()
post_json = json.loads(response_data)
print post_json
So this is actually a useful script to get information on your post (provide number in command line) in JSON format. Note that the content will be there as well, so it may be a lot :).
Fun starts when you start publishing things.
First, try to publish a media file. Since I prefer to work with standard library for max and easy portability (I switch between different computers on daily basis) I wanted to stick to urllib(2)-like things. This proved to be more difficult, but thanks to this post that was not that difficult.
What cost me quite some sleepless nights is that you should not forget that web requests and unicode (a difficult subject in Python 2 anyway) do NOT live together. In other words:
form = MultiPartForm()
form.add_field('media[]', str(file_name))  # ENSURE TO PROVIDE STR, NOT UNICODE!
form.add_field('attrs[0][title]', str(title))
In my case both file_name and title happened to be in unicode. Tadaaa… here we go with the dreaded UnicodeDecodeErrors. It took me ages to find this one out.
Second, less obvious for me was the format of the POST request when I needed to add/update metadata for a post. Here is what the documentation says:
Array of metadata objects containing the following properties: `key` (metadata key), `id` (meta ID), `previous_value` (if set, the action will only occur for the provided previous value), `value` (the new value to set the meta to), `operation` (the operation to perform: `update` or `add`; defaults to `update`). All unprotected meta keys are available by default for read requests. Both unprotected and protected meta keys are avaiable for authenticated requests with proper capabilities. Protected meta keys can be made available with the rest_api_allowed_public_metadata filter.
So now you just have to specify an “array of metadata objects” and off you go… if you know what they mean by this. I spent hours debugging WordPress and Jetpack code with poor man's tools like
log_error statements. Never mind.
Here is how you can add new meta-data.
import urllib

HEADERS = { 'Authorization' : 'Bearer ' + ACCESS_KEY }
POSTS_API = '' % 'YOUR WEBSITE ID'
post_url = POSTS_API + 'YOUR POST ID'
post_values = {
    'featured_image' : featured_image_id,
    'metadata[0][key]' : 'test-key',
    'metadata[0][value]' : 'test-value',
    'metadata[0][operation]' : 'add'
}
request_data = urllib.urlencode(post_values)
request = urllib2.Request(post_url, request_data, headers=HEADERS)
response = urllib2.urlopen(request)
response_raw = response.read()
post_json = json.loads(response_raw)
# Here you can inspect post_json for all responses.
To update existing values you have to find the ID of the corresponding metadata first:
# Find metadata from previously requested data on your post. See examples above.
metadata_id = None
for m in post_json['metadata']:
    if m and (u'key' in m) and (m[u'key'] == u'test-key'):
        metadata_id = m[u'id']

post_values = {
    'metadata[0][id]' : metadata_id,
    'metadata[0][key]' : 'test-key',
    'metadata[0][value]' : 'test-value',
    'metadata[0][operation]' : 'update'
}
request_data = urllib.urlencode(post_values)
request = urllib2.Request(post_url, request_data, headers=HEADERS)
response = urllib2.urlopen(request)
response_raw = response.read()
post_json = json.loads(response_raw)
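As a side note (not part of the original post): on Python 3, where urllib2 is gone, the same bracketed form fields can be built with urllib.parse. A minimal sketch of just the encoding step, with no network call; the field names follow the WordPress/Jetpack convention shown above:

```python
from urllib.parse import urlencode

# Hypothetical Python 3 version of building the metadata form body.
post_values = {
    "metadata[0][key]": "test-key",
    "metadata[0][value]": "test-value",
    "metadata[0][operation]": "update",
}
body = urlencode(post_values)
# The square brackets get percent-encoded (%5B / %5D), which is fine:
# the server decodes them back before parsing the field names.
```

The resulting string can then be passed (encoded to bytes) as the data argument of urllib.request.Request.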
I hope this saves time for somebody.
To find a TextElement on the page layout, use ListLayoutElements() with a filter of "TEXT_ELEMENT" for the type to return.
import arcpy
from arcpy import mapping
mxd = mapping.MapDocument("CURRENT")
textElem = mapping.ListLayoutElements(mxd, "TEXT_ELEMENT")
Now a list of all TextElements is returned. The next step is to find the text element you want to change. Each TextElement has a property called 'name', and when you perform your cartographic duties I strongly suggest you set this. It makes it easy to update the elements.
import arcpy
from arcpy import mapping
mxd = mapping.MapDocument("CURRENT")
textElem = mapping.ListLayoutElements(mxd, "TEXT_ELEMENT")
for elem in textElem:
    if elem.name == "txtTitle":
        elem.text = "Mr. Toads Wild Ride"
    elif elem.name == "txtCurrentDate":
        # give the element dynamic text showing the current date
        # (assumed intent; the <dyn> tag is ArcGIS dynamic-text markup)
        elem.text = '<dyn type="date" format="short"/>'
mxd.save()
del mxd
Now you might be thinking: dynamic text, what the heck is that!?!? Well, it's exactly what it sounds like: it's dynamic, it changes, and it's awesome. You can check out more here.
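If you would rather stamp a static date than use dynamic text, you can format the string yourself in plain Python and assign it to the element. A small sketch; the arcpy assignment is left as a comment so the snippet runs anywhere:

```python
import datetime

# Build a date string for a layout text element.
stamp = "Map date: " + datetime.date.today().strftime("%B %d, %Y")
# elem.text = stamp  # arcpy text element assignment, shown for context
```

The trade-off: dynamic text updates itself every time the layout refreshes, while a static stamp records exactly when the script ran.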
API an external event loop has to implement for GNUNET_SCHEDULER_driver_init.
#include <gnunet_scheduler_lib.h>
Definition at line 262 of file gnunet_scheduler_lib.h.
Closure to pass to the functions in this struct.
Definition at line 267 of file gnunet_scheduler_lib.h.
Referenced by driver_add_multiple(), GNUNET_SCHEDULER_cancel(), GNUNET_SCHEDULER_do_work(), GNUNET_SCHEDULER_driver_init(), and GNUNET_SCHEDULER_run().
Add a task to be run if the conditions specified in the et field of the given fdi are satisfied.
The et field will be cleared after this call and the driver is expected to set the type of the actual event before passing fdi to GNUNET_SCHEDULER_task_ready.
Definition at line 282 of file gnunet_scheduler_lib.h.
Referenced by driver_add_multiple(), and GNUNET_SCHEDULER_driver_select().
Delete a task from the set of tasks to be run.
A task may comprise multiple FdInfo entries previously added with the add function. The driver is expected to delete them all.
Definition at line 297 of file gnunet_scheduler_lib.h.
Referenced by GNUNET_SCHEDULER_cancel(), GNUNET_SCHEDULER_do_work(), and GNUNET_SCHEDULER_driver_select().
Set time at which we definitively want to get a wakeup call.
Definition at line 307 of file gnunet_scheduler_lib.h.
Referenced by GNUNET_SCHEDULER_do_work(), GNUNET_SCHEDULER_driver_init(), and GNUNET_SCHEDULER_driver_select().
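Putting the pieces together, the documentation above describes a closure (cls) plus add, del and set-wakeup callbacks. The following C sketch shows the general shape such a driver takes; it is NOT GNUnet's real definition (see gnunet_scheduler_lib.h for that), and all type and field names here are illustrative stand-ins. The dummy driver simply records what it is asked to do:

```c
#include <assert.h>
#include <stddef.h>

/* Opaque stand-ins for the scheduler's task and fd-info types. */
struct Task;
struct FdInfo;

/* Sketch of the driver shape: a closure plus three callbacks,
   mirroring the cls / add / del / set_wakeup members documented above. */
struct Driver {
  void *cls;                                                  /* closure   */
  int (*add)(void *cls, struct Task *t, struct FdInfo *fdi);  /* add task  */
  int (*del)(void *cls, struct Task *t);                      /* drop task */
  void (*set_wakeup)(void *cls, long long abs_time);          /* timer     */
};

/* A dummy external event loop that just records the wakeup time. */
static long long last_wakeup;

static int dummy_add(void *cls, struct Task *t, struct FdInfo *fdi)
{ (void) cls; (void) t; (void) fdi; return 0; }

static int dummy_del(void *cls, struct Task *t)
{ (void) cls; (void) t; return 0; }

static void dummy_set_wakeup(void *cls, long long abs_time)
{ (void) cls; last_wakeup = abs_time; }

static struct Driver dummy_driver =
  { NULL, dummy_add, dummy_del, dummy_set_wakeup };
```

The scheduler then calls through these pointers, always passing the driver's own cls back as the first argument.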
On Mon, Feb 11, 2002 at 09:45:46PM -0200, Luiz Henrique de Figueiredo wrote:
> > > I noticed you didn't expose lua_loadbuffer as an external symbol in
> > > the shared library.
> > Lua 4.0 officially has no lua_loadbuffer and friends.
> *Please*, package the official Lua 4.0 code. You may want to add the patch
> as a separate thing, of course. Thanks.

Hmm, I can either package it in the "Debian version" of lua 4.0 or else I
could provide an "alternative" liblua package. My current favourite is to
make the symbol exposed but guard the definition of it in lua.h with
something like

#ifdef DEBIAN_LOADBUFFER
....
#endif

How does that sit with people?

--
Daniel Silverstone         Hostmaster, Webmaster, and Chief Code Wibbler
Digital-Scurf Unlimited
GPG Public key available from keyring.debian.org     KeyId: 20687895
Brain fried -- Core dumped
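The guard being proposed can be sketched like this (the prototype is purely illustrative, not Lua 4.0's actual signature): stock builds, which never define DEBIAN_LOADBUFFER, see no new symbol in lua.h, while Debian's patched build opts in by defining the macro.

```c
#include <assert.h>

/* Sketch of the proposed lua.h guard; the prototype is illustrative. */
#ifdef DEBIAN_LOADBUFFER
int lua_loadbuffer(void *L, const char *buff, unsigned long size,
                   const char *name);
#define HAVE_DEBIAN_LOADBUFFER 1
#else
#define HAVE_DEBIAN_LOADBUFFER 0
#endif
```

Compiling with -DDEBIAN_LOADBUFFER would expose the declaration; without it, the header is byte-for-byte equivalent to upstream's API surface.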
Sync your resources on Kubernetes with GitOps
How to synchronise your secrets and configMaps hosted on Git with your Kubernetes cluster using modern tools.
The classic way:
- Create the secret
- Apply it on the cluster
- Back it up somewhere safe (or use tools like velero)
- Schedule this process with a cron job.
The GitOps way:
- Create the secret
- Save it on git safely
- Wait for it to appear in each namespace you want.
One app that I heard a lot about during my Kubernetes journey is SealedSecrets.
I liked this idea of syncing secrets on my cluster with git, because with flux, infrastructure is stored on the same repo and my coworkers can play with it safely with a dedicated repository.
So the GitOps way looks great, light and simple, but, you know, sometimes it is hard to make things simple.
The tools we need to do it:
- Something to sync your config the GitOps way (FluxV2 looks like a great app for this).
- Something to encrypt your passwords: we could use SealedSecrets or Vault.
- A Git repository (in this case we will use GitHub, but you can host it wherever you want).
- A tool to sync your secrets across multiple namespaces. In this case we will use kubed, which is perfect for our use case.
Now, we need to install all of this on our cluster.
FluxV2 could be used to sync our secrets but also to install kubed and the encryption app (SealedSecrets here).
SealedSecrets looks easier to install than Vault, which seems like overkill for our use case.
Explanation attempt:
The installation / configuration process
First of all, you need Flux on your computer.
You should read the FluxV2 documentation here:
Here we will install it on a Mac, on a kind Kubernetes cluster.
Install Flux :
$ brew install fluxcd/tap/flux
We will create a cluster locally with kind (Kind documentation here :
$ kind create cluster --name local-demo
Now we need a Git repo and your token.
Get your GitHub token (created here).
Install Flux on your cluster:
This will install all Flux components (AKA the GitOps toolkit) and will create a private Git repo on your account with the cluster configuration.
$ export GITHUB_TOKEN=YOUR_TOKEN

# Then install flux on your local cluster:
$ flux bootstrap github \
    --owner=YOUR_USERNAME \
    --repository=flux-infra \
    --path=clusters/my-cluster \
    --personal
Create two namespaces:

# Create the Kubed / Flux sync namespace:
$ kubectl create namespace flux-resources
$ kubectl label --overwrite namespace flux-resources app=kubed

# Create the apps namespace:
$ kubectl create namespace apps
$ kubectl label --overwrite namespace apps app=kubed
Install Kubed and SealedSecrets:
Then we need to install the two apps: Kubed and SealedSecrets.
We will create two helm releases from two different helm sources.
For SealedSecrets:

# The helm sources:
$ flux create source helm appscode \
    --interval=1h \
    --url=

$ flux create source helm sealed-secrets \
    --interval=1h \
    --url=

# First helmrelease, sealed-secrets:
$ flux create helmrelease sealed-secrets \
    --interval=1h \
    --release-name=sealed-secrets \
    --target-namespace=flux-system \
    --source=HelmRepository/sealed-secrets \
    --chart=sealed-secrets \
    --chart-version="1.13.x"
# Create a values file with the kubed config:
$ cat << EOF > ./values-kubed.yaml
config:
  configSourceNamespace: kubed-sync
EOF

# Then create the kubed helmrelease:
$ flux create helmrelease kubed \
    --interval=1h \
    --release-name=kubed \
    --target-namespace=flux-system \
    --source=HelmRepository/appscode \
    --chart=kubed \
    --values=./values-kubed.yaml
Flux will install SealedSecrets and kubed on your cluster.
Get the certificate generated by SealedSecrets (the public half of its key pair) and save it to the `sealed-secrets.pem` file.
$ kubeseal \
--controller-name=sealed-secrets \
--controller-namespace=flux-system \
--fetch-cert > ./sealed-secrets.pem
If you can’t find the certificate, check the sealed-secrets pod logs!
Create secrets
Now we want the secrets to be encrypted by SealedSecrets.
Create the regcred secret :
$ kubectl create secret docker-registry regcred \
    --docker-server=<your-registry-server> \
    --docker-username=<your-name> \
    --docker-password=<your-password> \
    --docker-email=<your-email> \
    --dry-run=client \
    -o yaml > ./regcred.yaml
Add the following to the metadata part (if you want to do it with configMaps, the process is exactly the same: you need this annotation):
annotations:
  kubed.appscode.com/sync: app=kubed
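Put together, the (still unsealed) secret manifest ends up looking roughly like this before you run kubeseal; names and the base64 payload are illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: regcred
  annotations:
    kubed.appscode.com/sync: app=kubed
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded docker credentials>
```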
Now it is time to seal your secret using this certificate:
$ kubeseal --format=yaml --cert=sealed-secrets.pem \
    < regcred.yaml > regcred-sealed.yaml

# Remove this good old plain secret:
$ rm regcred.yaml
You can see that kubeseal keeps your annotation.
Deploy the SealedSecret to the kubed-sync namespace:
kubectl apply -f ./regcred-sealed.yaml
SealedSecrets will decrypt it, add it to your cluster as a secret, and kubed will deploy it to every namespace carrying the label `app=kubed` (thanks to the `kubed.appscode.com/sync: app=kubed` annotation).
That’s “almost” it
OK But, my secret is not versioned, what was the point of all of this $!??
You’re right, but now it’s your turn to do the “hard” work with Flux / GitOps stuff.
Remember, a few lines ago we installed Flux on our cluster. Flux created a private repository on your Git account; you can use it to save all your secrets, and more!
But I will not leave you alone: you can clone the following Git repository, configure it to match your needs and get everything working automagically.
Once your secrets / configMaps are safe on your Git repo, you can share the sealed-secrets certificate with your coworkers, give them the correct permissions to the Git repo and let them create / update secrets!
To know everything about all of this :
Thanks for reaching the end of this article, my first one. I hope it'll help some of you!
Have a nice day on your K8S Journey !
I'm trying to write a small email checking client which will pop up a window in the lower right corner when there's new mail, and where the window will go away when it is clicked on.
from sys import argv, exit
from threading import Thread
from PyQt5.QtWidgets import QApplication

# NotifyWindow and keep_checking_gmail are defined elsewhere in my code.
def main():
    app = QApplication(argv)
    window = NotifyWindow()
    runner = Thread(target=keep_checking_gmail, args=[window])
    runner.start()
    exit(app.exec_())
So NotifyWindow is just a QWidget containing a single button which holds the text I want to display. When I test it out in the shell, it displays and vanishes when it should, with the text it should. keep_checking_gmail does what its name suggests, and it tells the NotifyWindow when and what to display. When I run this main method, it all seems to work correctly until the first new email is registered; then the NotifyWindow tries to show up but never actually gets painted to the screen (it's all white), and I get an hourglass when I hover the pointer over it. This continues (with the thread still chugging along in the background) until I kill the process, or the program simply exits without an error message.
While I don't really know where in my code the problem is, I'm displaying the main method since that's almost the only thing that's run outside of the keep_checking_gmail function. Any help would be appreciated.
I'm using PyQt5 for Python 3.3 and Windows 7.
Note: I am terrible at GUI programming in any language. Please forgive me if this is a really stupid error on my part.
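Not a definitive diagnosis, but the classic cause of exactly these symptoms (white window, hourglass cursor, silent exit) is touching Qt widgets from a worker thread: Qt only allows GUI work on the main thread, so keep_checking_gmail should signal the window (e.g. via a pyqtSignal connected to a slot) rather than call its methods directly. Since PyQt can't be run here, the sketch below shows the same hand-off pattern with only the standard library: the worker never touches the "widget", it only posts messages to a queue that the main thread drains.

```python
import queue
import threading

class FakeNotifyWindow:
    """Stand-in for the real QWidget; only the main thread calls it."""
    def __init__(self):
        self.shown = []

    def show_message(self, text):
        self.shown.append(text)

events = queue.Queue()

def keep_checking_gmail(q):
    # Worker thread: never touches the window, only posts to the queue.
    q.put("1 new message")

window = FakeNotifyWindow()
worker = threading.Thread(target=keep_checking_gmail, args=(events,))
worker.start()
worker.join()  # in a real app the GUI event loop keeps running instead

# "Main thread" side: drain the queue and update the GUI here only.
while not events.empty():
    window.show_message(events.get())
```

In PyQt the queue-plus-poll step is replaced by the signal/slot machinery, which performs the same cross-thread marshaling for you.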
SYNOPSIS
#include <nanomsg/nn.h> int nn_send(int sock, const void *data, size_t size, int flags)
DESCRIPTION
The nn_send() function creates a message containing data (of size size), and sends it using the socket sock.
If size has the special value
NN_MSG, then a zero-copy operation
is performed.
In this case, data points not to the message content itself, but instead
is a pointer to the pointer, an extra level of pointer indirection.
The message must have been previously allocated by
nn_allocmsg() or
nn_recvmsg()
, using the same NN_MSG size.
In this case, the "ownership" of the message shall remain with
the caller, unless the function returns 0, indicating that the
function has taken responsibility for delivering or disposing of the message.
The flags field may contain the special flag
NN_DONTWAIT.
In this case, if the socket is unable to accept more data for sending,
the operation shall not block, but instead will fail with the error
EAGAIN.
RETURN VALUES
This function returns the number of bytes sent on success, and -1 on error.
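A hedged sketch of the calling pattern these semantics imply. nanomsg is not linked here, so nn_send and nn_errno are stubbed with the documented behavior (returns the byte count on success, or -1 with EAGAIN when NN_DONTWAIT finds the socket unable to accept data); real code would include <nanomsg/nn.h> and link against the library instead.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

#define NN_DONTWAIT 1  /* stand-in for the real flag value */

/* Stubs emulating the documented contract, for illustration only. */
static int stub_errno;
static int nn_errno(void) { return stub_errno; }

static int nn_send(int sock, const void *data, size_t size, int flags)
{
  (void) sock; (void) data;
  if (flags & NN_DONTWAIT) {   /* pretend the socket can't accept data */
    stub_errno = EAGAIN;
    return -1;
  }
  return (int) size;           /* success: number of bytes sent */
}

/* Typical call site: check for -1, then distinguish EAGAIN. */
static int send_or_retry_later(int sock, const void *buf, size_t len)
{
  int rc = nn_send(sock, buf, len, NN_DONTWAIT);
  if (rc < 0 && nn_errno() == EAGAIN)
    return 0;                  /* not ready; caller retries later */
  return rc;
}
```

The point is the error-handling shape: a negative return is only meaningful once nn_errno() is consulted, and EAGAIN under NN_DONTWAIT is an expected condition rather than a failure.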
IRC log of html-wg on 2007-11-16
Timestamps are in UTC.
00:03:50 [mjs]
mjs has joined #html-wg
00:07:13 [shepazu]
shepazu has joined #html-wg
00:08:53 [mjs]
mjs has joined #html-wg
00:10:56 [MikeSmith]
MikeSmith has joined #html-wg
00:11:29 [mjs]
mjs has joined #html-wg
00:37:24 [andreas]
andreas has joined #html-wg
00:37:48 [Marcos_]
Marcos_ has joined #html-wg
00:46:09 [Marcos__]
Marcos__ has joined #html-wg
00:48:03 [MikeSmith]
Marcos__ - buenos días
00:48:28 [Marcos__]
heya mikesmith :)
00:49:20 [shepazu]
shepazu has joined #html-wg
00:52:57 [MikeSmith]
Marcos__ - andreas works for Opera here in Tokyo
00:54:23 [Marcos__]
mikesmith, how's the workshop going? what are you discussing?
00:55:43 [MikeSmith]
Marcos__ - discussing something bout modalities
00:55:46 [MikeSmith]
multiple ones
00:55:52 [MikeSmith]
that's about as much as I know
00:56:08 [Marcos__]
hehe
00:56:12 [MikeSmith]
only 2.5 hours of sleep last night
00:56:22 [MikeSmith]
so I'm a little slow on the uptake at this point
00:56:54 [Marcos__]
fair enough
00:57:20 [Marcos__]
at least you didnt show up drunk :)
00:57:30 [Marcos__]
...or did you? :D
00:57:59 [MikeSmith]
not drunk -- just slightly buzzed
00:58:45 [gavin]
gavin has joined #html-wg
01:05:58 [timbl]
timbl has joined #html-wg
01:06:52 [mjs]
mjs has joined #html-wg
01:18:05 [aaronlev]
aaronlev has joined #html-wg
02:13:27 [sbuluf]
sbuluf has joined #html-wg
02:26:50 [JanC]
JanC has joined #html-wg
03:05:53 [gavin]
gavin has joined #html-wg
04:25:44 [mjs]
mjs has joined #html-wg
05:13:21 [gavin]
gavin has joined #html-wg
05:17:30 [gavin_]
gavin_ has joined #html-wg
05:42:02 [Lionheart]
Lionheart has joined #html-wg
06:15:18 [Thezilch]
Thezilch has joined #html-wg
06:24:47 [andreas]
andreas has joined #html-wg
06:28:47 [MikeSmith]
MikeSmith has joined #html-wg
06:59:21 [xover]
xover has joined #html-wg
07:02:43 [Lionheart]
Lionheart has joined #html-wg
07:03:57 [shepazu]
shepazu has joined #html-wg
07:20:30 [gavin]
gavin has joined #html-wg
07:30:41 [Lionheart]
Lionheart has joined #html-wg
08:33:27 [tH_]
tH_ has joined #html-wg
08:45:22 [Lachy]
Lachy has joined #html-wg
09:01:20 [Julian]
Julian has joined #html-wg
09:15:36 [Lionheart]
Lionheart has joined #html-wg
09:16:50 [tH_]
tH_ has joined #html-wg
09:28:09 [ROBOd]
ROBOd has joined #html-wg
09:28:16 [gavin]
gavin has joined #html-wg
09:43:08 [zcorpan]
zcorpan has joined #html-wg
09:46:35 [Lachy]
Lachy has joined #html-wg
09:48:58 [Lachy]
Lachy has joined #html-wg
09:51:59 [jgraham]
jgraham has joined #html-wg
09:53:24 [Lionheart]
Lionheart has joined #html-wg
09:53:44 [Lionheart]
Lionheart has left #html-wg
09:54:22 [Lachy]
Lachy has joined #html-wg
10:13:54 [Lionheart]
Lionheart has joined #html-wg
10:14:07 [Lionheart]
Lionheart has left #html-wg
10:15:30 [marcospod]
marcospod has joined #html-wg
10:34:01 [Sander]
Sander has joined #html-wg
10:41:05 [marcospod]
marcospod has joined #html-wg
11:00:48 [marcospod]
marcospod has joined #html-wg
11:14:00 [myakura]
myakura has joined #html-wg
11:29:47 [aaronlev]
aaronlev has joined #html-wg
11:35:59 [gavin]
gavin has joined #html-wg
11:45:34 [timbl]
timbl has joined #html-wg
13:07:02 [smedero]
smedero has joined #html-wg
13:10:05 [marcospod]
marcospod has joined #html-wg
13:25:52 [aaronlev]
DanC: i can make the html wg meeting today but will miss the first part
13:43:31 [gavin]
gavin has joined #html-wg
13:45:19 [DanC]
ok; when do you think you can join?
14:08:17 [Lachy]
Lachy has joined #html-wg
14:08:55 [DanC]
hmm... where's mikesmith? I'd like 2 more topic anchors before the aria thing in
14:09:24 [Lachy]
Yay! HDP is being published :-)
14:10:16 [Lachy]
DanC, any idea when the spec will be published?
14:10:44 [anne]
I made the specification ready for publication yesterday
14:13:48 [shepazu]
shepazu has joined #html-wg
14:23:10 [anne]
DanC, any updates on the other drafts, which had pretty much the same feedback through the survey?
14:25:46 [DanC]
working on it
14:34:35 [torus]
torus has joined #html-wg
14:35:03 [torus]
torus has left #html-wg
14:39:09 [Lachy]
Lachy has joined #html-wg
14:50:27 [timbl]
timbl has joined #html-wg
15:08:08 [Julian]
Julian has joined #html-wg
15:13:46 [smedero]
smedero has joined #html-wg
15:26:52 [Lachy]
Lachy has joined #html-wg
15:39:44 [billmason]
billmason has joined #html-wg
15:43:13 [aaronlev]
DanC: can you delay the aria discussion until 15-20 minutes after the hour?
15:44:19 [Sander]
Sander has joined #html-wg
15:45:20 [DanC]
I think so; actually, I forgot to put aria on the agenda.
15:45:37 [DanC]
is Hixie around to eyeball this canvas question?
15:51:18 [gavin]
gavin has joined #html-wg
15:54:36 [DanC]
can anybody else eyeball it? anne? mjs?
15:59:45 [Philip]
Is it relevant that canvas specification is mostly about documenting existing practice, since it's already implemented in most major browsers, and is not about developing a new feature?
16:14:04 [DanC]
I thought I captured a sense of that
16:14:51 [DanC]
"Do use cases such as games, shared whiteboards, and yahoo pipes and others in the ESW wiki motivate a requirement that HTML 5 provide an immediate mode graphics API and canvas element?"
16:15:04 [DanC]
oh... I could cite the relevant design principle.
16:15:25 [DanC]
I'm inclined to leave it up to wiki-elves to do that. I think the WBS question is clear enough.
16:26:11 [Zakim]
Zakim has joined #html-wg
16:26:15 [DanC]
Zakim, this will be html
16:26:15 [Zakim]
ok, DanC; I see HTML_WG()12:00PM scheduled to start in 34 minutes
16:26:42 [DanC]
agenda + Convene HTML WG meeting of 2007-11-16T17:00:00Z
16:27:37 [DanC]
HTML WG teleconference 2007-11-16T17:00:00Z
(logs:
)
16:27:44 [DanC]
DanC has changed the topic to: HTML WG teleconference 2007-11-16T17:00:00Z
(logs:
)
16:27:58 [DanC]
agenda + ISSUE-18 html-design-principles HTML Design Principles
16:28:07 [DanC]
agenda + ISSUE-19 html5-spec HTML 5 specification release(s)
16:28:16 [DanC]
agenda + ISSUE-15 immediate-mode-graphics requirement for Immediate Mode Graphics and canvas element
16:28:23 [DanC]
agenda + # ISSUE-14 aria-role Integration of WAI-ARIA roles into HTML5
16:28:31 [DanC]
agenda 5 = ISSUE-14 aria-role Integration of WAI-ARIA roles into HTML5
16:28:43 [DanC]
agenda + ISSUE-16 offline-applications-sql offline applications and data synchronization
16:28:56 [DanC]
agenda + face-to-face meeting 8-10 November, review
16:36:04 [gsnedders]
DanC: it looks fine to me, but I'd make it explicit that it is just defining current behaviour, and being compat with current content
16:38:46 [oedipus]
oedipus has joined #html-wg
16:40:08 [DanC]
this question is not a judgement on the details of the design; just the requirement
16:40:24 [DanC]
thanks for the quick feedback, in any case
16:40:44 [DanC]
yes, I announced it when it was clear that you understood the question
16:42:39 [DanC]
er... where's MikeSmith?
16:54:38 [ChrisWilson]
ChrisWilson has joined #html-wg
16:59:36 [Zakim]
HTML_WG()12:00PM has now started
16:59:37 [Zakim]
+JulianR
17:00:04 [Lachy]
Lachy has joined #html-wg
17:00:52 [Zakim]
+Gregory_Rosmaita
17:00:54 [Zakim]
-Gregory_Rosmaita
17:00:55 [Zakim]
+Gregory_Rosmaita
17:01:18 [DanC]
RRSAgent, pointer?
17:01:18 [RRSAgent]
See
17:01:25 [DanC]
Zakim, take up item 1
17:01:25 [Zakim]
agendum 1. "Convene HTML WG meeting of 2007-11-16T17:00:00Z" taken up [from DanC]
17:01:35 [Zakim]
+DanC
17:01:35 [Zakim]
+??P9
17:02:46 [rubys]
rubys has joined #html-wg
17:03:06 [oedipus]
scribe: Gregory_Rosmaita
17:03:11 [oedipus]
scribenick: oedipus
17:03:18 [DanC]
Meeting: HTML WG Weekly
17:03:43 [DanC]
Zakim, agenda?
17:03:43 [Zakim]
I see 7 items remaining on the agenda:
17:03:44 [Zakim]
1. Convene HTML WG meeting of 2007-11-16T17:00:00Z [from DanC]
17:03:47 [Zakim]
2. ISSUE-18 html-design-principles HTML Design Principles [from DanC]
17:03:49 [Zakim]
3. ISSUE-19 html5-spec HTML 5 specification release(s) [from DanC]
17:03:51 [Zakim]
4. ISSUE-15 immediate-mode-graphics requirement for Immediate Mode Graphics and canvas element [from DanC]
17:03:53 [Zakim]
5. ISSUE-14 aria-role Integration of WAI-ARIA roles into HTML5
17:03:55 [Zakim]
6. ISSUE-16 offline-applications-sql offline applications and data synchronization [from DanC]
17:03:58 [Zakim]
7. face-to-face meeting 8-10 November, review [from DanC]
17:04:30 [Zakim]
+Sam
17:05:24 [DanC]
Zakim, passcode?
17:05:24 [Zakim]
the conference code is 4865 (tel:+1.617.761.6200 tel:+33.4.89.06.34.99 tel:+44.117.370.6152), DanC
17:06:16 [Zakim]
+[Microsoft]
17:06:25 [ChrisWilson]
Zakim, Microsoft is me
17:06:25 [Zakim]
+ChrisWilson; got it
17:06:59 [aaronlev]
aaronlev has joined #html-wg
17:07:12 [anne]
anne has joined #html-wg
17:07:27 [oedipus]
GJR: to coordinate some IRC time to discuss HTML5 stylesheet issues with the editors/interested parties -- a limited color pallatte using named colors needs some negotiation (and some eyeballs) and i'm still testing actual support for CSS generated text using :before and :after
17:07:32 [DanC]
22 Nov telcon cancelled
17:07:45 [oedipus]
CW: skip next week's meeting -- next meeting 29 November 2007 at 1700z
17:08:02 [DanC]
Zakim, next item
17:08:02 [Zakim]
agendum 2. "ISSUE-18 html-design-principles HTML Design Principles" taken up [from DanC]
17:08:13 [anne]
Zakim, who is here?
17:08:13 [Zakim]
On the phone I see JulianR, Gregory_Rosmaita, hsivonen, DanC, Sam, ChrisWilson
17:08:13 [oedipus]
TOPIC: HTML Design Principles
17:08:15 [Zakim]
On IRC I see anne, aaronlev, rubys, Lachy, ChrisWilson, oedipus, Zakim, gavin, Sander, billmason, smedero, Julian, timbl, shepazu, marcospod, myakura, zcorpan, tH, xover, Thezilch,
17:08:20 [Zakim]
... gavin_, mjs, JanC, marcos, jmb, heycam, gsnedders, paullewis, DanC, Shunsuke, Hixie, Dashiva, Philip, drry, Bert, laplink, bogi, jane, krijnh, deltab, beowulf, hsivonen,
17:08:22 [anne]
Zakim, passcode?
17:08:23 [Zakim]
... trackbot-ng, Bob_le_Pointu, RRSAgent
17:08:24 [Zakim]
the conference code is 4865 (tel:+1.617.761.6200 tel:+33.4.89.06.34.99 tel:+44.117.370.6152), anne
17:08:34 [DanC]
HTML Design Principles
17:08:39 [oedipus]
DanC: migrated issues to issue tracker -- issue 18
17:08:52 [Zakim]
+??P13
17:08:55 [oedipus]
DanC: completed action to email negative responders
17:09:08 [Zakim]
+anne
17:09:19 [DanC]
Zakim, ??P13 is aaronlev
17:09:19 [Zakim]
+aaronlev; got it
17:09:24 [oedipus]
DanC: Mike(tm)Smith still needs to compile minutes from saturday's HTML f2f session
17:09:33 [oedipus]
DanC: mjs Action 20 completed
17:09:45 [oedipus]
DanC: explores for a "comments" mailing list
17:10:13 [DanC]
17:10:32 [oedipus]
DanC: feedback on HDP should be sent to public-html-comments@w3.org
17:10:54 [oedipus]
s/feedback/outside feedback/
17:11:01 [DanC]
Zakim, who's on the phone?
17:11:01 [Zakim]
On the phone I see JulianR, Gregory_Rosmaita, hsivonen, DanC, Sam, ChrisWilson, aaronlev, anne
17:11:13 [DanC]
Zakim, next item
17:11:13 [Zakim]
agendum 3. "ISSUE-19 html5-spec HTML 5 specification release(s)" taken up [from DanC]
17:11:28 [DanC]
17:11:30 [oedipus]
TOPIC: HTML5 Specification Draft Release
17:11:51 [oedipus]
DanC: had conversation with PTaylor about formal objection - action done
17:12:09 [oedipus]
DanC: completed action to email negative and non-responders - done
17:12:44 [oedipus]
Chairs have said the question does not carry -- WG will keep working on spec
17:12:58 [DanC]
DanC found out non-responders are not ok to publish
17:13:08 [oedipus]
Anne: graphics API a problem?
17:13:34 [oedipus]
DanC: publication starts the clock on W3C process
17:14:02 [oedipus]
Anne?/Henri?: deadline? make something available?
17:14:49 [hsivonen]
s/Anne\?\/Henri\?/Anne/
17:14:50 [oedipus]
DanC: like those who responded no to releasing draft to explain comments on questions; question may need to be refined
17:15:03 [DanC]
Zakim, next item
17:15:03 [Zakim]
agendum 4. "ISSUE-15 immediate-mode-graphics requirement for Immediate Mode Graphics and canvas element" taken up [from DanC]
17:15:13 [oedipus]
TOPIC: ISSUE 15 Immediate Mode Graphics
17:15:45 [oedipus]
DanC: ChrisW get info from MS (10 december deadline); DanC put question to WG
17:15:51 [DanC]
Zakim, who's on the phone?
17:15:52 [Zakim]
On the phone I see JulianR, Gregory_Rosmaita, hsivonen, DanC, Sam, ChrisWilson, aaronlev, anne
17:16:22 [jgraham]
jgraham has joined #html-wg
17:16:35 [oedipus]
Anne: how does modification of activity affect charter? results clear -- if say "yes" then might be changed
17:17:16 [oedipus]
DanC: question is "who is everyone" -- question to HTML WG and question to W3C; feedback at TPAC was that this issue is in scope; not critical path for denying discussions
17:17:29 [oedipus]
Anne: membership ok with it, can we carry on as usual?
17:17:40 [oedipus]
DanC: presuming all goes well, continue on in parallel
17:18:37 [oedipus]
JulianR: can answer in week
17:19:07 [oedipus]
Henri: can answer in week; question posed isn't what i want answered -- 3 of top 4 already implementing
17:19:15 [oedipus]
ChrisW: questions 3 out of 4
17:19:29 [ROBOd]
ROBOd has joined #html-wg
17:19:32 [oedipus]
DanC: a lot of people have made up their mind, but the question still has to be fielded
17:20:03 [DanC]
q+ to suggest a survey with some options in parallel
17:20:26 [oedipus]
ChrisW: in scope of WebAPI or not? 3 of 4 implemented shouldn't make issue one for HTML WG -- question whether covered by charter or patent policy; some implementors don't believe charter needs to be implemented, but that is my gut feeling
17:21:11 [oedipus]
DanC: considering doing an informal survey in parallel with formal survey; CANVAS tag in HTML WG and CANVAS tag in HTML WG or other WG? if formal question doesn't carry, still gaining info
17:21:30 [ChrisWilson]
My point is that 3 out of 4 implementers implementing means this IS in scope of "the platform"; the question, to my mind, is whether this is covered by our charter and therefore covered by the patent policy.
17:22:00 [oedipus]
DanC: been suggested that html5 spec should have CANVAS in it and cite document with graphics API -- question of whether HTML WG develops document or another WG develops document
17:22:11 [ChrisWilson]
The goal in creating a W3C WG with a patent policy is to explicitly lay out what that WG is going to do, so companies getting involved in the WG know what IP they may be offering up.
17:22:15 [oedipus]
Anne: rather keep it in HTML WG; willing to answer survey
17:22:16 [ChrisWilson]
Charters cannot be open-ended.
17:22:28 [Lachy]
isn't everything in the spec covered by the patent policy, regardless of whether it's explicitly in the charter?
17:22:58 [oedipus]
JulianR: spec already too complex -- need to seriously discuss way to take things out and harmonize with existing specs
17:23:26 [DanC]
agenda + outcome of HTML for authors session
17:23:56 [oedipus]
GJR: spec too complex, but can answer any survey
17:24:03 [ChrisWilson]
Lachy, everything in the spec IS covered by the patent policy. Joining a working group cannot be opening a company's entire patent portfolio in a free-for-all, or those with large patent portfolios would be foolish to participate at all - weakening the point of having a patent policy.
17:24:44 [oedipus]
Henri: formal survey first, then consider steps to separate API portions of spec; question of whether anything should be taken out of spec dependent upon who is going to edit that portion of spec -- do we have expertise?
17:25:04 [oedipus]
SamR: can't answer within week; support informal survey; do have charter concerns
17:25:13 [Lachy]
so the real question is, does Microsoft have patents that they do not want to give up, but which they would be forced to if canvas were included?
17:25:15 [oedipus]
ChrisW: yes, can answer question
17:25:22 [oedipus]
AaronL: not likely to have opinion now
17:25:29 [rubys]
oedipus: I said I CAN answer within a week
17:25:29 [DanC]
trackbot-ng,
17:25:32 [DanC]
trackbot-ng, status
17:25:45 [ChrisWilson]
Lachy, without having a charter that scopes the WG's specifications, I can't know the answer to that question.
17:25:47 [DanC]
ACTION: Dan consider informal survey on canvas tactics
17:25:47 [trackbot-ng]
Created ACTION-21 - Consider informal survey on canvas tactics [on Dan Connolly - due 2007-11-23].
17:25:52 [oedipus]
SCRIBE'S NOTE: Sam Ruby CAN answer within a week
17:26:09 [DanC]
Zakim, next item
17:26:09 [Zakim]
I see a speaker queue remaining and respectfully decline to close this agendum, DanC
17:26:14 [DanC]
ack danc
17:26:14 [Zakim]
DanC, you wanted to suggest a survey with some options in parallel
17:26:17 [DanC]
Zakim, next item
17:26:17 [Zakim]
agendum 5. "ISSUE-14 aria-role Integration of WAI-ARIA roles into HTML5" taken up
17:26:18 [ChrisWilson]
With the charter we have now, our legal staff did not investigate our graphics patents.
17:26:29 [oedipus]
TOPIC: ISSUE 14 ARIA Role Integration
17:26:33 [DanC]
17:26:58 [oedipus]
DanC: action on URI extensibility -
17:27:11 [DanC]
some progress:
17:27:11 [oedipus]
AaronL: hard time figuring out what was being proposed
17:27:22 [oedipus]
DanC: specific questions?
17:27:24 [Lachy]
ChrisWilson, didn't the legal department look at the existing whatwg spec, so that they would have a better idea of what to look for, rather than relying on the vague charter?
17:27:42 [DanC]
17:27:50 [oedipus]
AaronL: page to other links, couldn't ascertain what was DanC's contribution
17:27:56 [ChrisWilson]
Lachy, the WHATWG spec is not our charter.
17:27:59 [hsivonen]
q+ to talk about Norm Walsh's blog post
17:28:03 [aroben]
aroben has joined #html-wg
17:28:12 [ChrisWilson]
Nor has the WHATWG spec been stable in that time frame.
17:28:14 [rubys]
concrete charters tend to trump draft specs
17:28:18 [oedipus]
AaronL: summary, please?
17:28:18 [ChrisWilson]
(i.e. not added features)
17:28:35 [oedipus]
DanC: let WG members read at leisure; may do more work on page to make clearer
17:28:37 [DanC]
ack hsivonen
17:28:37 [Zakim]
hsivonen, you wanted to talk about Norm Walsh's blog post
17:29:06 [ArneJ]
ArneJ has joined #html-wg
17:29:23 [oedipus]
Henri: NormW's post suggests implicit namespaces in parser; considering constraints of aria- proposal don't think what NormW wrote satisfies requirements; can't do something to make DOM APIs act differently
17:29:27 [oedipus]
DanC: can if want to
17:30:14 [oedipus]
Henri: then introduce discrepancy in DOM scripting; changes way XML is parsed to infer namespaces from content-type deeper change than previously proposed; want not to affect DOM API scripting -- examine RDF
17:30:26 [oedipus]
DanC: couple of steps ahead of me -- good feedback
17:30:39 [oedipus]
AaronL: will speak with Henri offline
17:30:57 [oedipus]
DanC: cost to changing APIs -- still thinking through
17:31:46 [oedipus]
scribe's note: DanC and MichaelC's actions continued
17:31:58 [oedipus]
DanC: next telecon not until 2 weeks
17:32:39 [oedipus]
AaronL: clarity always welcome; asked Doug Schepers and Bill for date by which they will decide aria- ; told me tied to other issues and gave no date
17:32:46 [DanC]
q+ to ask for a test pointer
17:32:59 [hsivonen]
s/examine RDF/could define a URI mapping for apps that need it for GRDDL to RDF mapping without affecting the DOM/
17:33:06 [oedipus]
GJR: PF yesterday discussed what next steps can take to further discussion
17:33:39 [DanC]
good tests?
17:34:16 [DanC]
<div aria="something">
17:34:51 [DanC]
17:34:54 [anne]
s/something/checkbox/
17:35:04 [oedipus]
AaronL: SVG does not want to change "role" to "aria"
17:35:14 [aaronlev]
17:35:24 [anne]
17:35:34 [oedipus]
17:35:36 [hsivonen]
DanC, it is about <div role='checkbox'> or <div aria='checkbox'>
17:35:37 [aaronlev]
s/SVG does/I do
17:36:05 [oedipus]
GJR: comparative tests needed?
17:36:07 [oedipus]
DanC: yes
17:36:21 [hober]
hober has joined #html-wg
17:36:26 [oedipus]
GJR: will communicate back to PF
17:36:52 [oedipus]
AaronL: like tests with role="checkbox"
17:37:11 [DanC]
DanC: thanks; I'll study
and
17:37:18 [oedipus]
UIUC ARIA Tests:
17:37:42 [DanC]
s/SVG does not want/I do not want/
17:37:45 [oedipus]
AaronL: clarifies -- not SVG WG, but my impression of what SVG is saying
17:37:58 [DanC]
aaronlev: I don't recommend the UIUC tests
17:38:17 [hsivonen]
DanC, did you mean test cases or proposed syntax examples?
17:38:55 [oedipus]
GJR: need comparative tests of single concept using diff markup proposals
17:38:57 [DanC]
I tend to call them tests; sorry if that's confusing
17:39:20 [oedipus]
AaronL: don't think there is controversy save for attribute name "role" and "aria"
17:39:26 [oedipus]
DanC: would like comparative tests
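Scribe's illustration: the two competing spellings under discussion (the role attribute from existing ARIA drafts, versus a dedicated aria attribute as sketched by DanC and hsivonen above) might look like this in a comparative test. The aria attribute here is hypothetical, taken from the proposal being debated, not an implemented feature:

```html
<!-- Spelling 1: the role attribute, as in existing ARIA drafts -->
<div role="checkbox">…</div>

<!-- Spelling 2: a dedicated aria attribute, as sketched on the call -->
<div aria="checkbox">…</div>
```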
17:39:50 [Hixie]
DanC: i can't answer the canvas question. I strongly feel that a canvas API is already in scope, and I strongly object to reopening the charter rathole. But the question asks whether I think it is in scope and says that a "yes" answer reopens the rathole.
17:39:57 [oedipus]
ACTION GJR: coordinate comparative tests using competing ARIA proposals
17:41:02 [DanC]
ack danc
17:41:02 [Zakim]
DanC, you wanted to ask for a test pointer
17:41:10 [oedipus]
DanC: AlG promised that test materials used at HTML f2f would be given stable URIs
17:41:10 [DanC]
Zakim, next item
17:41:10 [Zakim]
agendum 6. "ISSUE-16 offline-applications-sql offline applications and data synchronization" taken up [from DanC]
17:41:24 [DanC]
17:41:30 [oedipus]
GJR: will follow up with PF test suite builders/maintainers
17:41:39 [DanC]
Editor's Draft 11 November 2007
17:41:45 [oedipus]
Anne's action completed with editor's draft of 11 november
17:41:54 [oedipus]
ChrisW: not seen yet
17:42:18 [oedipus]
DanC: good to have the document ready; like a few more keywords in abstract: caching, SQL
17:42:26 [oedipus]
Anne: can add -- pretty clear, i think
17:42:50 [oedipus]
DanC: suggests using ToC to populate abstract
17:43:30 [oedipus]
SamR: plan to review
17:43:38 [oedipus]
DanC: page and a half
17:43:46 [oedipus]
SamR: will review this weekend
17:44:34 [oedipus]
ACTION: SamRuby review offline-webapps by monday, 19 november 2007
17:44:34 [trackbot-ng]
Sorry, couldn't find user - SamRuby
17:45:04 [DanC]
trackbot-ng, status
17:45:24 [oedipus]
ACTION: Julian review offline-webapps by monday, 19 november 2007
17:45:24 [trackbot-ng]
Created ACTION-22 - Review offline-webapps by monday, 19 november 2007 [on Julian Reschke - due 2007-11-23].
17:45:50 [oedipus]
ChrisW: don't have an opinion; from another perspective, offline and SQL not in charter
17:46:12 [oedipus]
DanC: publication a natural way to start conversation; Anne, thinking of note or working draft?
17:46:16 [oedipus]
Anne: note
17:46:31 [oedipus]
DanC: inclined to publish in a few weeks
17:46:36 [oedipus]
Anne: reasonable
17:46:49 [DanC]
Zakim, next item
17:46:49 [Zakim]
agendum 7. "face-to-face meeting 8-10 November, review" taken up [from DanC]
17:46:59 [DanC]
Zakim, take up item 8
17:46:59 [Zakim]
agendum 8. "outcome of HTML for authors session" taken up [from DanC]
17:47:11 [oedipus]
TOPIC: Outcome of HTML for Authors' Session
17:47:15 [oedipus]
DanC: record of session?
17:47:53 [hsivonen]
17:47:56 [mjs]
mjs has left #html-wg
17:48:14 [mjs]
mjs has joined #html-wg
17:48:22 [mjs]
our charter does in fact contain "Data storage APIs"
17:48:26 [oedipus]
DanC: is it an accurate/reasonable catch of what transpired?
17:48:56 [ChrisWilson]
..."if the WebAPI WG fails to deliver."
17:49:05 [oedipus]
DanC: 2 actions noted in minutes
17:49:21 [DanC]
1 is a dup
17:49:38 [DanC]
ah... it's ACTION-5 by tracker
17:49:42 [Hixie]
the webapi wg has failed to deliver their own deliverables, let alone ours
17:49:52 [oedipus]
Henri: not sure if reached some kind of agreement; no consensus on best practices --
17:50:12 [ChrisWilson]
Have they stated that to the W3C staff?
17:50:14 [oedipus]
DanC: read not a call to create task force, but a proposal via email from KarlD
17:50:20 [Hixie]
ChrisWilson: yes
17:50:30 [ChrisWilson]
Can you send a pointer?
17:50:49 [oedipus]
DanC: moves to adjourn
17:50:56 [oedipus]
scribe's note: NO dissent
17:51:10 [oedipus]
Henri: plan on staying around to check records
17:51:19 [oedipus]
ChrisW: seconds motion to adjourn
17:51:19 [DanC]
ADJOURN.
17:51:20 [hsivonen]
not to stay around
17:51:25 [Zakim]
-JulianR
17:51:31 [Zakim]
-hsivonen
17:51:33 [Zakim]
-aaronlev
17:51:40 [oedipus]
s/staying around/not staying around
17:51:52 [Zakim]
-Sam
17:51:54 [oedipus]
SamR: please don't add to issue tracking just yet -- shortly
17:52:23 [Zakim]
-ChrisWilson
17:52:29 [Hixie]
ChrisWilson, look at any status e-mail in hcg
17:53:03 [anne]
Yeah, it's pretty clear that the Web API WG has not enough volunteers to edit
17:53:10 [ChrisWilson]
oedipus, yes and yes.
17:53:22 [oedipus]
thanks ChrisW -- anne, i am joining WebAPI
17:53:28 [oedipus]
zakim, please part
17:53:28 [Zakim]
leaving. As of this point the attendees were JulianR, Gregory_Rosmaita, DanC, hsivonen, Sam, ChrisWilson, anne, aaronlev
17:53:28 [Zakim]
Zakim has left #html-wg
17:53:29 [ChrisWilson]
It's pretty clear we suffer from the same problem.
17:53:39 [anne]
it seems that Hixie is doing just fine
17:53:42 [anne]
to me, anyway
17:53:53 [oedipus]
rrsagent, set logs world-visible
17:53:59 [oedipus]
rrsagent, create minutes
17:53:59 [RRSAgent]
I have made the request to generate
oedipus
17:54:05 [oedipus]
rrsagent, format minutes
17:54:05 [RRSAgent]
I have made the request to generate
oedipus
17:54:18 [anne]
ChrisWilson, could you perhaps e-mail the list with what you consider to be out of scope?
17:54:40 [ChrisWilson]
? Anything not captured in the charter?
17:54:48 [oedipus]
chair: Dan_Connolly
17:54:51 [oedipus]
rrsagent, create minutes
17:54:51 [RRSAgent]
I have made the request to generate
oedipus
17:54:55 [oedipus]
rrsagent, format minutes
17:54:55 [RRSAgent]
I have made the request to generate
oedipus
17:55:00 [anne]
ChrisWilson, basically, yeah
17:55:12 [oedipus]
chair+ Dan_Connolly
17:55:16 [oedipus]
rrsagent, create minutes
17:55:16 [RRSAgent]
I have made the request to generate
oedipus
17:55:19 [oedipus]
rrsagent, format minutes
17:55:19 [RRSAgent]
I have made the request to generate
oedipus
17:55:34 [DanC]
chair: DanC
17:55:56 [oedipus]
any regrets received?
17:56:14 [DanC]
Regrets+ mikko
17:56:18 [DanC]
(I think)
17:56:27 [oedipus]
rrsagent, create minutes
17:56:27 [RRSAgent]
I have made the request to generate
oedipus
17:56:28 [anne]
anne has left #html-wg
17:56:31 [oedipus]
rrsagent, format minutes
17:56:31 [RRSAgent]
I have made the request to generate
oedipus
17:56:45 [oedipus]
regrets+ mikko
17:56:47 [oedipus]
rrsagent, create minutes
17:56:47 [RRSAgent]
I have made the request to generate
oedipus
17:57:02 [Hixie]
ChrisWilson, as far as i am aware, everything in the spec if well covered by our charter.
17:57:20 [Hixie]
ChrisWilson, after all, the charter was written mostly after the spec, and with the spec in mind.
17:57:21 [DanC]
oedipus, there's bunch of irrelevant stuff at the top, but you don't have write access to /2007/11 ... perhaps you could (a) take a copy of 16-html-wg-minutes.html , edit it manually, and mail it to me and www-archive ?
17:57:45 [oedipus]
yes, i can do that
17:57:49 [DanC]
thanks
17:58:04 [Hixie]
ChrisWilson, also, if you think hyatt and i aren't editing fast enough, it would be helpful to know what you think should be being edited faster
17:58:16 [oedipus]
will get it to you and www-archive asap
17:58:21 [DanC]
Hixie, please consider the charter from the perspective of someone wholly unfamiliar with HTML 5. e.g. a patent lawyer at VendorCo
17:58:47 [Hixie]
DanC, first, we do, and second, the only person complaining is microsoft, and they aren't "someone wholly unfamiliar with HTML 5"
17:59:05 [Hixie]
DanC, they are in fact intimately aware that html5 exists and is why this group was created.
17:59:15 [gavin]
gavin has joined #html-wg
17:59:29 [DanC]
no, microsoft is not the only person complaining; they're the only one nice enough to do it on the public record
17:59:32 [ChrisWilson]
Hixie, from a quick glance through the ToC, canvas and offline; session history and navigation; client-side storage (both types) unless the WebAPI WG fails to deliver; server-sent DOM events; and the connection interface are not in our charter.
17:59:41 [ChrisWilson]
s/nice/foolish
17:59:55 [oedipus]
danC, how far down do you want me to snip - to "convene meeting"?
18:00:04 [DanC]
yes, down to convene
18:00:06 [oedipus]
ok
18:00:18 [DanC]
and fix the duplicate items in the TOC, if you would
18:00:27 [oedipus]
have done
18:00:32 [DanC]
good
18:00:46 [Hixie]
ChrisWilson, wow, i didn't realise how desperate you were to try and slow down the group.
18:00:57 [DanC]
Hixie, cut it out
18:01:01 [mjs]
ChrisWilson, you have a unique way of reading the charter
18:01:01 [ChrisWilson]
I'll try not to take the comment literally or personally.
18:01:01 [Hixie]
oh please
18:01:08 [Hixie]
it's blatantly obvious what chris is doing
18:01:20 [Hixie]
no-one in their right mind would claim session history wasn't under HTML5's purview
18:01:22 [DanC]
no, it's not, and it's rude of you to presume
18:01:54 [ChrisWilson]
Hixie, why should it matter? You will continue to create your HTML 5 standard in the WHATWG; and it will continue to ignore patents and IPR.
18:02:14 [ChrisWilson]
s/should it matter/should it matter to you/
18:02:32 [Hixie]
ChrisWilson, we specifically came here to w3c to allow the spec to be covered by the patent policy for you
18:02:47 [Hixie]
ChrisWilson, and now you're claiming you don't think the spec is covered by the charter.
18:03:50 [Hixie]
ChrisWilson, what can we do going forward to make sure the spec isn't pared down, is published soon, and is published with your participation?
18:04:04 [ChrisWilson]
One moment.
18:04:11 [oedipus]
danC, should i trim the "diagnostics" section?
18:05:05 [oedipus]
the question of whether the spec should be pared down is a decision for the WG to make, not a unilateral decision by the editors
18:05:30 [ChrisWilson]
Hixie: in a company with a large patent portfolio, getting approval to allow RF licensing of IP requires knowing what you're signing up to.
18:06:14 [Hixie]
ChrisWilson: sure, that's why when we originally proposed the scope we made it explicit. the w3c staff cut it down saying that it was being redundant.
18:06:20 [ChrisWilson]
That means the charter has to cover exactly what areas are going to be in the spec, because comparing a 500-page specification against [large company]'s entire patent portfolio is not an easy thing to do.
18:06:23 [oedipus]
hixie, isn't the point of editing to make things as clear as possible? that entails clarifications and such that may lead to "paring" in one place and "growth" in another...
18:06:53 [ChrisWilson]
Then let
18:06:59 [ChrisWilson]
erk
18:07:13 [Hixie]
oedipus: (i just meant removing entire sections, i agree that editing work includes making things clearer.)
18:07:41 [jmb]
jmb has joined #html-wg
18:08:59 [ChrisWilson]
Then let's scope out what the charter SHOULD be, and get the charter changed to reflect that. More to the point, I think there should be separate groups handling some of these items; I agree, fwiw, that session history and navigation might belong here, but I don't honestly think connections do. I believe they belong in the webapi group.
18:09:32 [ChrisWilson]
At any rate, it's irresponsible of me to agree to a spec that I don't think our IP reviewers were covering.
18:10:29 [ChrisWilson]
I understand, for example, (because at least you and Maciej have repeatedly said) that the Canvas API is a fairly stable bit of the WHAT WG HTML5 spec.
18:11:40 [ChrisWilson]
I understand anyone's ability to get you to change that API is basically zero at this point (aside from the one or two minor points you mentioned as being in flux). That doesn't mean that I can blithely say "I'm sure we wouldn't mind giving up IP in that area" without what IP we have there being reviewed.
18:11:45 [Hixie]
ChrisWilson: what can we do to publish _soon_, though? rechartering takes easily 6 months which is simply not an option for us.
18:12:06 [Hixie]
ChrisWilson: i'd like to know what we can to publish the current spec soon, with your participation
18:12:54 [ChrisWilson]
Publish all of the current HTML5 spec, with Microsoft's participation? I don't know. I'm not sure it will be possible; it will depend on the patent review I'm kicking off right now with our legal team to look at the areas I mentioned above that I don't think are in the spec.
18:13:29 [ChrisWilson]
s/spec/charter
18:14:24 [ChrisWilson]
If that review is taking more than 90 days, or if it turns up areas of concern to the IP owners, then I would have to depart the WG, because that's the only option left. That's one of the reasons why RF WGs are best off not trying to bite off the entire world.
18:15:09 [Hixie]
ChrisWilson: wow, so there is the chance that microsoft would rather leave the group than license patents?
18:15:15 [ChrisWilson]
I understand you all think this is me being obstructionist, and that's unfortunate. I have to work within the system of a corporation with a large patent portfolio, and that means being responsible with their IP.
18:15:22 [ChrisWilson]
No, that's not it.
18:15:34 [smedero]
I'm a little confused as common-man involved in this process. The <canvas> element is about three years old... though I don't know the exact date it made it into the WHATWG HTML5 spec.
18:15:55 [Hixie]
smedero: i'm pretty confused myself :-)
18:16:00 [ChrisWilson]
It's not "license patents". It's that what you are asking for is a open-ended "whatever patents might cover technology we think is handy to shove into the HTML5 spec."
18:16:00 [oedipus]
DanC and ChrisW: cleaned minutes attached to
18:16:06 [smedero]
It seems clear that the patent issues were going to be a problem... that's completely reasonable.
18:16:15 [oedipus]
cleaned minutes:
18:16:21 [smedero]
It just feels like this issue should have been reviewed much sooner in this process.
18:16:34 [Hixie]
ChrisWilson: the html5 spec at this point is past feature freeze, so there won't be any new things that can be covered by patents.
18:16:40 [ChrisWilson]
Indeed. My apologies. I have a couple of day jobs too.
18:17:27 [ChrisWilson]
Really? Do you believe that every area that is going in to HTML5 from the WHATWG side is already there, and if we capture everything that's in there today into our charter to my satisfaction, that's not going to change?
18:17:30 [Hixie]
ChrisWilson: so anyway you are saying there is no way to publish the current spec soon with your participation? that it's either publish later, publish without you, or publish something smaller?
18:17:33 [ChrisWilson]
(That's a serious question)
18:17:57 [Hixie]
ChrisWilson: yes, as far as i'm concerned we're in feature freeze, i don't expect any new features to be added before CR.
18:18:05 [Hixie]
ChrisWilson: (there's no "whatwg side" to this, btw)
18:18:09 [oedipus]
ChrisW and DanC: just found another regret notification: Marcin Hanclik's regrets:
18:18:46 [ChrisWilson]
The best possible case is that I take the current spec, categorize the areas, pass it back to legal for review, and the owners of the patents they turn up are all okay with RF-licensing that IP to the W3C for HTML5.
18:18:52 [Hixie]
ChrisWilson: obviously if microsoft has anything they'd like added they would be considered, since that would presumably sidestep the patent problem, and we want to make microsoft happy with the spec.
18:19:13 [ChrisWilson]
So, to echo one of my ultimate bosses' most unfortunate statements, that depends on your definition of soon.
18:19:31 [ChrisWilson]
I don't think we have anything that we've been holding on to, no.
18:19:34 [Hixie]
ChrisWilson: by "soon" i meant this week
18:20:01 [Hixie]
well, next week i guess
18:20:04 [Hixie]
what with it being friday
18:20:56 [oedipus]
ChrisW: for what it is worth, i don't think you're being obstructionist -- you're being realistic and practical, something that most of us in spec writing don't necessarily need to be...
18:20:58 [ChrisWilson]
Then that's possible - I told Dan last week I am explicitly removing myself from any decision-making around this - but that is running the risk I listed above, that the expanded patent review won't finish and Microsoft would have to depart prior to the 90-day countdown.
18:21:53 [Hixie]
ChrisWilson: aah, interesting.
18:22:12 [Hixie]
ChrisWilson: well that makes sense
18:22:27 [Hixie]
ChrisWilson: that's the same risk google would take too
18:22:52 [Hixie]
ChrisWilson: seems like that's the best course then
18:23:04 [Hixie]
it sidesteps the whole charter can of worms
18:23:31 [ChrisWilson]
What's the same risk Google would take?
18:24:20 [Hixie]
that our patent review wouldn't be complete in 90 days
18:25:37 [ChrisWilson]
The part that is frustrating is that ideally, if you create a clear enough charter, then you don't need to do a patent review every time a new document is issued by the WG; you do a review before joining the group, and then you know what is at stake.
18:25:46 [Hixie]
i agree
18:26:03 [Hixie]
like i said, the original scope list that i and others proposed for html5 was very detailed
18:26:06 [ChrisWilson]
If I have to go through a whole legal review every time there is a new document in a WG, I'm going to have to quit so I don't slit my wrists.
18:26:09 [Hixie]
and explicitly covered all these things
18:26:30 [Hixie]
w3c staff said that the list had too much redundancy and made it smaller, as i recall
18:26:33 [Hixie]
not sure why
18:26:52 [ChrisWilson]
Hmm. Nor am I; I was not involved in developing the charter at all, actually.
18:27:05 [ChrisWilson]
(other than the voting at the AC level, where I advised)
18:27:28 [Hixie]
yeah they didn't even contact me until someone pointed out to them that maybe they should at least consult the guy who'd edited the html5 spec for the past few years
18:27:41 [Hixie]
and even then they only unofficially asked for my advice
18:27:59 [mjs]
Apple's legal review may be hard to complete in 90 days as well, but I would still prefer to just publish and start the clock
18:28:26 [Hixie]
is where i sent the feedback i had
18:28:53 [Hixie]
look in particular at
18:30:12 [Hixie]
afk, bbiab
18:30:36 [mjs]
I'm not sure it's possible to predict all applicable patents short of an actual draft anyway
18:30:48 [mjs]
that's why FPWD and LC are what starts the review clock
18:30:50 [anne]
anne has joined #html-wg
18:32:08 [ChrisWilson]
That may be true - but we should at least know what areas are going to be covered. And I disagree with the scope of the charter Ian pointed to (0045) for this group.
18:32:30 [ChrisWilson]
I don't understand, btw, why the presumption that WebAPI has failed/is failing.
18:33:25 [mjs]
I don't think they have failed at everything, but they (we) certainly haven't delivered all their original charter deliverables
18:33:46 [ChrisWilson]
why not?
18:34:01 [anne]
no dedicated editors
18:34:41 [anne]
editing specs these days is much harder than it was before (if you look at the amount of detail of HTML 5 versus HTML 4 for instance)
18:34:42 [mjs]
nor have they even started on any kind of data storage API, nor does that seem likely to happen any time soon
18:35:13 [anne]
it's like writing an implementation in English
18:35:31 [ChrisWilson]
See, I don't get that. There's one in current HTML5; you guys are on that group too; why do you not just take that spec, move it into that group, get buyoff, stamp it and move on?
18:35:38 [ChrisWilson]
anne: ?
18:36:40 [mjs]
splitting specs is not easy
18:36:47 [ChrisWilson]
It seems like it's useful outside the context of HTML, and moving it into a group like that would make it quicker, not slower.
18:36:49 [mjs]
so far XMLHttpRequest has semi-succeeded
18:37:01 [anne]
i'm already editing cross-site requests, xhr 1 and 2, and several drafts for the HTML WG, besides QA work I do for Opera and trying to keep up with everything relevant
18:37:02 [mjs]
(though Microsoft's rep still objects to the remaining HTML dependencies)
18:37:12 [mjs]
and Window kind of failed
18:37:18 [ChrisWilson]
I understand breaking up HTML5 into, say, separate "tabular data" and "media elements" specs would be hard.
18:37:20 [mjs]
(due to lack of my time)
18:37:20 [anne]
i think i'm one of the few in the Web API WG who actually manages to produce stuff
18:38:08 [mjs]
I think lots of stuff would be better if split off in principle but I don't want to let the perfect be the enemy of the good
18:38:18 [kingryan]
kingryan has joined #html-wg
18:38:57 [ChrisWilson]
But it seems like taking the two client-side storage sections and making them a separate spec would make it easier to focus on. Not to mention use them outside of HTML.
18:39:41 [mjs]
in theory, yes
18:39:49 [mjs]
in practice, I'm not aware of a qualified and available editor
18:39:59 [ChrisWilson]
For what? Client-side storage?
18:40:54 [ChrisWilson]
afk
18:42:57 [Philip]
DanC: You said "There aren't any votes yet" 12 minutes ago, but I currently see 16 votes
18:43:59 [hober]
As one of those 16 voters, I'm all for withdrawing & rewording the question to take into account the feedback on it
18:46:32 [anne]
unless someone can point out volunteers this is really a theoretical question imo
18:48:05 [gsnedders]
DanC: <
> claims it isn't open yet for me
18:49:04 [anne]
i guess it will be rephrased
18:49:47 [gsnedders]
maybe I didn't go all the way back to where I left, then
18:52:07 [dbaron]
dbaron has joined #html-wg
19:06:36 [Hixie]
ChrisWilson: there are a number of sections (setTimeout, Window, XHR, alt stylesheets OM) that have been taken out of HTML5. Only one of them (XHR) has so far managed to get any real traction.
19:06:59 [Hixie]
ChrisWilson: so much so that i had to pull window back into HTML5 because I had dependencies that were falling by the wayside because of the issue
19:08:20 [Hixie]
ChrisWilson: setTimeout and the alt stylesheets OM are tiny bits that wouldn't even take much editing time -- if we could find someone to edit those, then we could consider taking out the much bigger and more important bits out
19:09:20 [Hixie]
ChrisWilson: but if we can't even find competent editors with enough time to edit those tiny bits, i would consider it irresponsible of us to take out the other bits and just throw them over the wall and hope for an editor, especially considering that the sections in question are amongst those sections that browser vendors have indicated are the most critical to html5's success
19:10:32 [Hixie]
bbiab, going to work
19:13:53 [DanC]
mjs, 20 minutes turned out to take longer... could you pick a time later this afternoon?
19:14:10 [DanC]
16 votes? hmm...
19:14:30 [DanC]
"No answer has been received." --
19:15:12 [Philip]
"16 answers have been received."
19:15:15 [Philip]
Cached?
19:15:26 [hober]
I see "3 answers have been received." must be cache
19:15:35 [hober]
shift-reload: 16
19:16:12 [DanC]
ah. shift-reload
19:16:36 [DanC]
"Future questions should avoid conflating distinct issues." indeed. this one should too
19:26:39 [Julian]
Julian has joined #html-wg
19:44:02 [timbl]
timbl has left #html-wg
19:45:08 [Lachy]
Lachy has joined #html-wg
19:58:56 [kingryan]
kingryan has joined #html-wg
20:07:08 [gavin]
gavin has joined #html-wg
20:26:16 [kingryan]
is
closed for editing?
20:28:04 [gsnedders]
yeah
20:34:29 [edas]
edas has joined #html-wg
20:36:31 [jgraham]
jgraham has joined #html-wg
20:40:24 [jgraham_]
jgraham_ has joined #html-wg
20:49:17 [DanC]
yes, closed for editing... I'm getting back to that now...
20:49:46 [jgraham_]
jgraham_ has joined #html-wg
20:50:20 [mjs]
DanC: I'll be around this afternoon some
20:53:44 [DanC]
ok... do you have a sense of how many questions yet?
20:55:12 [rubys]
rubys has left #html-wg
21:04:46 [mjs]
how many questions for what?
21:11:06 [edas]
edas has joined #html-wg
21:32:06 [Lachy]
Lachy has joined #html-wg
21:36:40 [DanC]
oops; hi mjs. can you see
? prolly not
21:39:02 [DanC]
ok... I moved the charter stuff from
to the tactics survey
21:39:11 [DanC]
mjs? kingryan ? Hixie ? anybody around to take a look?
21:42:49 [DanC]
any opinions?
21:42:53 [DanC]
i.e. is it coherent?
21:43:07 [DanC]
rather: are the 2 surveys coherent?
21:43:26 [hober]
It's not clear what question 2 in the tactics survey implies re: HTML 5 spec
21:43:40 [gsnedders]
hober: agreed
21:43:55 [kingryan]
concur
21:44:18 [kingryan]
DanC: is it implying that canvas be extracted into a separate document?
21:44:29 [gsnedders]
DanC: for question five can we just use Yes/No?
21:44:51 [DanC]
this one is unclear? "2. Canvas and immediate mode graphics API introductory/tutorial note"
21:44:56 [gsnedders]
DanC: yeah
21:45:05 [oedipus]
DanC: Should CANVAS and immediate mode graphics be spun off into a note, similar to Offline Web Applications? That is: a sort of extended abstract that might grow into a tutorial.
21:45:20 [hober]
For instance, I'd like to answer: "keep <canvas> in the html5 spec, don't recharter. additional documents (tutorials, etc.) are fine if someone wants to work on them."
21:45:22 [gsnedders]
DanC: also, regarding "charter a new W3C working group for the 2d graphics API" — Opera has experimental 3D support now
21:45:31 [DanC]
spun off? no; the design would stay where it is
21:45:43 [Philip]
(Mozilla has more advanced experiemental 3D support too)
21:45:50 [Philip]
s/e//
21:46:11 [oedipus]
DanC: Should CANVAS and immediate mode graphics be released first in the form of a note, similar to Offline Web Applications? That is: a sort of extended abstract that might grow into a tutorial.
21:46:37 [DanC]
reload; I changed it to: "How about a note to supplement the detailed specification, similar to ..."
21:47:42 [DanC]
what would yes and no mean for question 5? I want information on preferences as well as what people find acceptable
21:47:42 [hober]
I'd like a "no opinion" option on 2, although I suppose simply not answering conveys that...
21:48:11 [DanC]
right; you can just not click any of the options...
21:48:18 [DanC]
... though once you click one of them, you're kinda stuck
21:48:58 [oedipus]
that sounds like reason enough to add "no opinion" as an option
21:49:39 [jgraham]
Q2. on the second one is a bit brief
21:49:52 [jgraham]
s/second/tactics/
21:50:33 [jgraham]
Maybe s/How about/Should the Working Group produce/
21:50:42 [Lachy]
3d canvas could probably be done in webapi
21:51:47 [DanC]
yes, "How about" is overly colloquial; fixed
21:55:13 [DanC]
I'm pretty happy with it now
21:56:56 [Philip]
s/XMLHTTPRequest/XMLHttpRequest/
22:02:23 [hasather]
hasather has joined #html-wg
22:03:03 [DanC]
ok, I announced both of them, subject to change for a day
22:05:11 [DanC]
hmm... the requirement formal question doesn't have separate "no" and "formally object" options.
22:10:32 [hober]
which is the 'requirement formal question'?
22:13:06 [DanC]
22:13:30 [DanC]
anybody know where Hixie and/or mjs went?
22:15:16 [hober]
[11:10] <Hixie> bbiab, going to work
22:15:41 [mjs]
mjs has joined #html-wg
22:18:09 [DanC]
ah. thanks, hober.
22:20:45 [mjs]
mjs has joined #html-wg
22:21:58 [mjs]
mjs has joined #html-wg
22:22:30 [Philip]
DanC: Is it intentional that req-gapi-canvas/results shows 32 non-responders, while tactics- shows 489?
22:23:32 [mjs]
mjs has joined #html-wg
22:23:34 [Philip]
Ah, looks like the difference between a response-represents-organisation and response-is-just-personal survey
22:24:13 [DanC]
yes
22:24:51 [DanC]
though the 32 is low due to a bug; it should count public invited experts, I think
22:27:13 [timbl]
timbl has joined #html-wg
22:35:06 [Lachy]
Lachy has joined #html-wg
22:40:19 [jgraham_]
jgraham_ has joined #html-wg
22:42:35 [Philip]
jgraham_: By "a highly-inoperable mechanism", did you mean "highly-interoperable"?
22:46:56 [timbl]
timbl has joined #html-wg
22:48:45 [jgraham_]
Philip: Yeah, that would b a typo ;)
22:49:13 [gavin]
gavin has joined #html-wg
23:04:13 [gsnedders]
DanC: "Canvas and immediate mode graphics API introductory/tutorial note": An introduction to why it exists, or a tutorial about how to use it? They're very different.
23:06:57 [timbl]
timbl has joined #html-wg
23:09:03 [Philip]
gsnedders: A tutorial should teach readers when it is a suitable technology to use instead of the alternatives, so that would also serve as an introduction to why it exists
23:11:17 [sbuluf]
sbuluf has joined #html-wg
23:12:21 [mjs]
mjs has joined #html-wg
23:36:44 [ChrisWilson]
ChrisWilson has joined #html-wg
23:37:18 [marcos]
marcos has joined #html-wg
23:38:33 [jgraham__]
jgraham__ has joined #html-wg
23:41:18 [jgraham]
jgraham has joined #html-wg
23:51:08 [shepazu]
shepazu has joined #html-wg
Source: http://www.w3.org/2007/11/16-html-wg-irc
Red Hat Bugzilla – Bug 115354
Flex++ using namespace std causes problems.
Last modified: 2015-05-04 21:32:04 EDT
Description of problem:
I noticed the flex-2.5.4a-gcc31.patch adds the line "using namespace
std" at the top of the skeleton file. This causes an ambiguity if the
*.l file includes syntax that uses a function that could be either global or
in the std namespace.
Example:
fabs(int) could be ::fabs(float), std::fabs(double), or std::fabs(long double)
Version-Release number of selected component (if applicable):
2.5.4a in Redhat 8.0 and up
How reproducible:
Use any standard function that also has a global counterpart.
Steps to Reproduce:
1. Create a *.l file with a global function that has a standard
equivalent, but leave the arguments ambiguous.
2. Compile a *.C from the *.l file with flex++
3. Try to compile the *.C file.
Actual results:
You will get a compiler error that the function call is ambiguous.
It tries the global function and any standard-function equivalents.
Expected results:
The compiler should use the global function.
Additional info:
The *.l file can be written to be explicit about which function to
use, but it would be better if the flex.skel file didn't apply the std
namespace to the whole file. It even affects headers brought in by
the *.l file. The istream calls in flex.skel should be qualified with
std:: instead.
From User-Agent: XML-RPC
flex-2.5.4a-35.fc4 has been pushed for FC4, which should resolve this issue. If these problems are still present in this version, then please make note of it in this bug report.
Fedora Core 1 is not supported any longer, however the fix was pushed for FC4.
Source: https://bugzilla.redhat.com/show_bug.cgi?id=115354
Bear Bibeault wrote:Doesn't look like a Servlets issue to me. I'll move this along to the Beginning Java forum...
Stefan Evans wrote:First questions
- where are your test cases? How would you test this code? That's a big part (probably the larger part) of answering the question.
- I'm not a big fan of "static" I would probably have instantiated some objects to solve this problem.
- your 'equalize' method seems to do work that according to the spec is not needed. Maybe I'm making an assumption, but the question seemed to indicate you could expect a rectangular array. Checking and 'fixing' it to be rectangular seems to be extra unnecessary work I would mark you down on.
You call the method: AllWebLeads.searchSequence(i, j, -1, 1);
What does this -1 for angle mean? It is not documented.
I would suggest using an enumerated type for the angle to make it more readable.
Use JavaDoc for documenting your methods
Why does the method searchSequence return a value? Do you ever use it?
Using "Global variables" gets huge negative points from me.
I would suggest some JUnit test cases be written :-)
Ernest Friedman-Hill wrote:There is a method of 60 lines and one of 100 lines in the code; anything more than 10 lines is to be avoided, and methods of just a few lines are best. 100 lines is a huge red mark.
Vipin Anand wrote:I mean the stack operations involved in function calls could be an overhead here.
Bear Bibeault wrote:
Vipin Anand wrote:I mean the stack operations involved in function calls could be an overhead here.
That's like worrying if you have a fever while standing on the surface of the sun.
Besides always worry about code clarity first, then worry about performance if and only if you have a demonstrable performance issue.
Bear Bibeault wrote:You might want to look up the term premature optimization.
angle = -1 implies ...[snip]
Equalize method does not make an array rectangular. It basically solves a problem of unequal rows within the array.
For e.g. input array[] = {{1,0,1},{0,-1}}. Will output {{1,0,1},{0,-1,0}} so a 0 is appended at {0,-1} to make sure my program does not crash when I am traversing the array.
AllWebLeads.searchSequence(i, j, -1, 1); it does return the 'count' variable, to check whether it's equal to the expected length or not (it's a recursive function).
Stefan Evans wrote:Feedback on your test class
Why are the examples that they provided in the question not in your test cases? Those should be the first ones in.
Rather than one humongous test, each test case should be in a separate, well named method.
That way you can say at any point "x out of y test cases are working"
And you can also pinpoint exactly which ones you are having trouble with.
Currently all you can say is "the unit test fails somewhere"
It would have been nice to lay out your tests as a 2d array for easier reading, rather than all in one line.
//diagonal traverse (top left-> bottom right)
int[][] grid = {
    {1,0,1,1},
    {1,1,1,1},
    {0,1,1,1}
};
assertEquals(1,AllWebLeads.FindSequence(grid, 8));
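Stefan's advice about separate, well-named test methods can be sketched as follows. Note this uses a trivial stand-in for FindSequence (it only checks horizontal runs), not the AllWebLeads implementation discussed in the thread, and plain assertions instead of JUnit so it runs standalone:

```java
public class FindSequenceTests {

    // Trivial stand-in: returns 1 if any row contains `length`
    // consecutive 1s. NOT the implementation from the thread.
    static int findSequence(int[][] grid, int length) {
        for (int[] row : grid) {
            int run = 0;
            for (int v : row) {
                run = (v == 1) ? run + 1 : 0;
                if (run >= length) return 1;
            }
        }
        return 0;
    }

    static void check(boolean cond, String name) {
        if (!cond) throw new AssertionError("failed: " + name);
    }

    // One scenario per well-named method, so a failure pinpoints
    // exactly which case broke.
    static void horizontalRunIsFound() {
        check(findSequence(new int[][]{{1, 1, 1}, {0, 0, 0}}, 3) == 1,
              "horizontalRunIsFound");
    }

    static void missingRunReturnsZero() {
        check(findSequence(new int[][]{{1, 0, 1}, {0, 1, 0}}, 3) == 0,
              "missingRunReturnsZero");
    }

    public static void main(String[] args) {
        horizontalRunIsFound();
        missingRunReturnsZero();
        System.out.println("all tests passed");
    }
}
```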
Vipin Anand wrote:Too bad I missed out a good opportunity (rare one for an international student like me) in spite of nailing the problem but not in a right way.
Jared Malcolm wrote:
Vipin Anand wrote:Too bad I missed out a good opportunity (rare one for an international student like me) in spite of nailing the problem but not in a right way.
It's always a long shot (but you have nothing to lose). Take the suggestions provided here and redo your work and submit again. Just mention to the HR person that you got some suggestions from fellow coders and implemented them on your own and without help. This in my mind shows that you can follow a standard if one is provided for you and that you really want the position....whats the worst thing they will say?
Your code worked correct? Do it another way and see if they like it better (if they will accept it).
Luigi Plinge wrote:Code seems way too verbose for what is required, hence unreadable and unmaintainable. I put this together in a few minutes:
public class Finder {
public static int FindSequence(int[][] grid, int length) {
int x = grid.length, y = grid[0].length;
int[] values = {-1, 1};
// Iterate through array elements
for (int i = 0; i < x; i++) {
for (int j = 0; j < y; j++) {
for(int a : values){
// count[0] = x-axis, count[1] = y-axis, count[2] = diagonal xy, count[3] = diagonal -xy
int[] count = new int[4];
boolean used [][][] = new boolean[x][y][4];
for (int k = 0; k < length; k++) {
int l = (i + k) % x;
int m = (j + k) % y;
int n = (i - k) % x; if(n < 0) n = x + n;
if(grid[l][j] == a && !used[l][j][0]) count[0]++;
if(grid[i][m] == a && !used[i][m][1]) count[1]++;
if(grid[l][m] == a && !used[l][m][2]) count[2]++;
if(grid[n][m] == a && !used[n][m][3]) count[3]++;
used[l][j][0] = true;
used[i][m][1] = true;
used[l][m][2] = true;
used[n][m][3] = true;
}
for(int p : count)
if(p == length) return a;
}
}
}
return 0;
}
}
If you want to improve your coding I would thoroughly recommend doing a bunch of TopCoder practice competitions. The problems are very similar to this one (in fact I wouldn't be surprised if this were "borrowed" from TopCoder ). It's great practice and you can see how everyone else tackled this problem, which is good for showing how you can write your code more efficiently.
Source: http://www.coderanch.com/t/536920/java/java/Java-Code
After studying Uncle Bob's Clean Code and trying to practice it as much as possible, I find that it can make a huge improvement to the maintainability or even just readability of a piece of code.
This coding challenge came up in one of my favorite coding communities:
Length of the Last Word
Given a string s consisting of some words separated by some number of spaces, return the length of the last word in the string.
A word is a maximal substring consisting of non-space characters only.
So, because I like to do challenges with tests, I put together this bit of unit test using Python's built in unittest package based on the three examples that were given with the challenge.
from unittest import TestCase

def last_word_length(input_str):
    pass

class TestLastWordLength(TestCase):
    def test_one(self):
        assert last_word_length('Hello World') == 5

    def test_two(self):
        assert last_word_length('  fly me   to   the moon  ') == 4

    def test_three(self):
        assert last_word_length('luffy is still joyboy') == 6
Now, I knew that I could pull this off with a single line of Python. I quickly wrote this:
import re

def last_word_length(input_str):
    return len(re.split(r'[\W]+', input_str.strip())[-1])
This does everything that the challenge asks for, but part of the goal of doing these challenges in my community is that we use several different languages and it allows everyone to get more familiar with the language that you used, whether they have ever written it before or not. One liners like this are not usually a good idea for long term maintenance OR teaching. So, I took a minute to clean it up a bit. Having tests laid out made it really easy to know that I'd not introduced any errors.
import re

def last_word_length(input_str):
    indefinite_whitespace = r'[\W]+'
    clean_input = input_str.strip()
    words = re.split(indefinite_whitespace, clean_input)
    return len(words[-1])
Now, it's pretty easy for anyone with a moderate amount of coding ability to understand what's going on. We have a regex that matches any run of non-word characters (which covers the whitespace between the words here). We have an input string that gets stripped of its surrounding whitespace. That string gets split according to the regex. And then we return the length of the last word.
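As an aside (not part of the original post): Python's built-in str.split(), called with no arguments, already splits on any run of whitespace and ignores leading and trailing whitespace, so the same problem can be solved without a regex at all:

```python
def last_word_length(input_str):
    # split() with no arguments splits on runs of whitespace and
    # discards leading/trailing whitespace, so no strip() or regex needed
    words = input_str.split()
    return len(words[-1])

print(last_word_length('Hello World'))                 # 5
print(last_word_length('  fly me   to   the moon  '))  # 4
```

Whether this reads better than the named-variable regex version is a matter of taste; it does trade the self-documenting variable names for a smaller surface area.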
There are two main things that this technique improves. Breaking the pieces apart reduces the mental overhead of understanding each piece. Naming the variable helps to document the intention of that line. It also makes the later lines significantly easier to understand by offering those names. That benefit will help you both with the reading and the writing of your code. The other major benefit comes long after this code has been put into version control. The history of this code allows you to tell which part of this was changed when. Instead of a diff showing that something in this block was changed at a, b, c, and d commits, you can see that the regex changed in commit a, the input cleaning changed in b and d and maybe the split call changed in c. This is especially helpful if you have IDE/editor functionality that helps you to see the git blame info for each line.
Source: https://dev.to/xanderyzwich/coding-challenge-with-clean-code-2hp2
Below are the links to the other articles in the Learn C# Step by Step series:
Learn C# Step By Step: - Part<>
This article is aimed at readers who are new to the programming world and want to become programmers by learning through articles. We will program in .NET using the C# language, and the IDE (Integrated Development Environment) will be Visual Studio 2015 Community Edition, the latest edition of Visual Studio at the time of writing.
This article shows where to get the Visual Studio setup, how to download it, and how to install it on a local system such as a desktop or laptop. Each part is demonstrated step by step for clear understanding. We will also write and run our first "Hello" program at the console prompt.
Being new to programming, you may not know where to get the Visual Studio installer, or whether the tool is free or paid; many such questions may arise. We suggest you just relax and follow this article slowly and steadily, and you will get answers to all of them. Too much thinking creates confusion, and confusion generates fear, which becomes a major hiccup when learning a new technology.
Google is considered one of the best search engines, though there are many others such as Bing and Yahoo. We suggest going with google.com, as it is the most widely used and gives the best results. In the search field, type "visual studio 2015 community download".
The results page will show many links to download Visual Studio 2015 Community Edition. We suggest using a reliable source such as the Microsoft or Visual Studio site; with unreliable sources there is a chance of infected software being downloaded along with the setup file, which can seriously harm your desktop or laptop. Here we will use the official link to download the Visual Studio 2015 Community Edition setup file.
The link lands on the Visual Studio website. The page offers four flavors of VS 2015, but as decided earlier we will go with the Community edition, the first download option shown in the image below. It is the free edition from Microsoft and is widely used, since almost all features of the tool are active. For a new learner, this edition of VS 2015 is a boon. :)
Just click the "Free download" button, which downloads a small exe that starts the setup installation.
After the exe is downloaded, open its folder and double-click the file to execute it. A small Open File window appears with a security warning. Since the exe was downloaded from visualstudio.com, a reliable source belonging to Microsoft, click the "Run" button without a second thought to start the installation.
Please note: stay connected to the internet at a minimum speed of 1 Mbps; 2 Mbps is recommended. Installation time varies with connection speed, so a connection at the recommended speed is better.
Another option is to download the complete ISO file, extract it with a tool such as MagicISO into a folder, and run the installation from there. This option is available on the same download page.
That is what we did for this article: we downloaded the ISO file and extracted it into a local folder, as shown in the image below. Then double-click "vs_community" to start the Visual Studio 2015 Community Edition setup.
The first setup screen opens. In our case it started with a warning stating that Visual Studio works best with IE 10, which was not installed on our system; we can still proceed with the installation and upgrade the browser later. Just click "Continue" at the bottom.
Next, select the installation folder and the installation type: (a) Default or (b) Custom. Please note: keep this page at its default values. Also ensure that the target drive has 9 GB of free space; if space runs short, setup will terminate and installation will stop. Once these requirements are met, click the Install button at the bottom right.
Towards the end of the installation, the Update 3 package is also installed, as shown in the image below.
After the installation completes, a "setup completed successfully" screen appears. Finally, click the "Restart" button at the bottom; the restart is essential so that the updated program files take effect.
Once the system has restarted, open the Start Menu, search for "Visual Studio 2015", and click the Visual Studio icon to open the tool.
On the opened Visual Studio 2015 screen, do the following: Step 1: Click the New Project option on the left, which opens the New Project window with templates for various project types such as Windows, Web, Mobile, Cloud Computing, WCF, and Workflow. Step 2: On the left side of the New Project window, select the programming language (C# or VB); here we select C#, and under Windows we choose the Console Application project type. Step 3: Give the project a suitable name and select a location on the local drive to save it.
After you click OK on the New Project screen, the window closes, saving the project type, the selected language (C# or VB), and the project name and save location you provided.
Visual Studio now opens the Console Application project, showing the source code of Program.cs with the Solution Explorer window on the right. Solution Explorer displays the hierarchy: under the Solution there is the Project, and under the Project are files such as Program.cs, the config file, and any needed references. On the left, Program.cs is open by default, with the required System namespaces already included at the top. Please note: there are many menus along the top; don't be distracted by them. We will introduce them as they are needed to learn and build the project; for now, don't go into the details of this screen.
In Program.cs, below the included System lines, there is a namespace named "ConsoleApplication1"; under it a class named "Program"; and under that, a method. This is where a developer writes C# code.
The method written here is:
"static void Main(string[] args)"
Please note: a namespace can contain many classes, and a class can contain many methods.
Now that the default contents of the Program.cs screen are understood, let's write a simple program to display text at the console prompt.
This simple application displays the text "Learning C# Step by Step" at the console prompt.
To display text in C#, we use the System namespace, which has already been referenced at the top by default. The key line writes the text to display:
Console.WriteLine("Learning C# Step by Step");
The next line holds the console output window open so that the user can read the text:
Console.Read();
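Putting the pieces together, the complete Program.cs looks roughly like this (the namespace and class names are the Visual Studio defaults described above):

```csharp
using System;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            // Write the text, then wait for input so the window stays open
            Console.WriteLine("Learning C# Step by Step");
            Console.Read();
        }
    }
}
```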
Once the code is complete, click Build --> Build Solution.
After the console application builds successfully, check the output folder where the exe is deployed. To go to the project folder directly, right-click "ConsoleApplication1" in Solution Explorer, as shown in the image below.
Once File Explorer opens, navigate to ConsoleApplication1 --> bin --> Debug, then double-click ConsoleApplication1 under the Debug folder.
A console window opens with the output "Learning C# Step by Step".
In this step-by-step C# lab, we learned how to download and install Visual Studio 2015 Community Edition, and we ran a simple program that displays output at the console prompt.
Along with this lab, we also have the following learning video for you from our Learn C# in 100 hrs series for freshers:
Latest Articles from Ahteshamax
Source: https://www.dotnetfunda.com/articles/show/3391/learn-csharp-step-by-step-part-1
On Mon, Dec 22, 2003 at 09:49:22PM -0500, Greg Leffler wrote:
> I have tried it on several different i386 machines and have not been
> able to find any errors.
>
> Thanks in advance for your consideration.

I can't upload your packages (yet) as i'm still awaiting DAMnation, but I can offer you some comments. Fair deal? :)

The binary package is very solid, especially for a first try. So that's all good. However, I have a few gripes about your frobbing with the source.

You've edited the configure script directly. However, these changes don't seem to make a difference because they are overridden by the arguments you pass to configure. So I don't think it matters. If it does make a difference, it's a big autoconf bug.

Also, you have some weird quoting in debian/rules:

    CFLAGS = "-Wall -g
    ifneq (,$(findstring noopt,$(DEB_BUILD_OPTIONS)))
    CFLAGS += -O0"
    else
    CFLAGS += -O2"
    endif

I see where you're coming from on this, but just don't worry about quotes. This is a Makefile. Hence this would work just fine:

    CFLAGS = -Wall -g
    ifneq (,$(findstring noopt,$(DEB_BUILD_OPTIONS)))
    CFLAGS += -O0
    else
    CFLAGS += -O2
    endif

Also, on the off chance that you've been reading debian-devel-announce, Peter Palfrader[1] has informed everyone that people should be including a full GPL blurb in debian/copyright if the software is indeed GPL.

Otherwise, this looks like it's really ready for prime time. Note, if you intend to become a DD, you should get ready for the NM process by reading up on Policy and Developer's Reference, and get some signatures on your GnuPG key (which appears to be 01374F66) from a Debian developer in your area [2].

Hope this helps.

[1]
[2]

--
Joshua Kwan
Attachment:
pgpafWtx4Xmk7.pgp
Description: PGP signature
Source: https://lists.debian.org/debian-mentors/2003/12/msg00176.html
Why is my code not doing the task I'm asking?! Right, but before I can get to the median I need the sum of all positive numbers, no?
Why is my code not doing the task I'm asking?! Calculate an array A[n][m] whose elements are randomized within the interval [-5, 5], n and m, i...
Bubble sort algorithm doesn't work... I'm trying to make a bubble sort algorithm for this exercise about arrays but it just doesn't work...
CODE doesn't work, int error... Saw the mistake, thanks
[code]#include <iostream.h>
#include <conio.h>
#include <math.h>
#include <...
CODE doesn't work, int error... bubble sort algorithm
The task was to get random numbers in order using the bubble sort algorithm.
Source: http://www.cplusplus.com/user/EZX/
How does a using-declaration affect function arguments in C++?
I have the following code that compiles fine with g++ 4.4.6 but won't compile with Visual Studio 2008. It seems to be related to argument-dependent lookup, so I think g++ is correct.
// testClass.hpp
namespace test {
    class foo {
    public:
        foo(){}
    };

    class usesFoo {
    public:
        usesFoo() {}
        void memberFunc(foo &input);
    };
}

// testClass.cpp
using test::usesFoo;

void usesFoo::memberFunc(foo &input) {
    (void) input;
}
The compile error in Visual Studio:
1> Compiling...
1> testClass.cpp
1> c:\work\testproject\testproject\testclass.cpp(6): error C2065: 'foo': undeclared identifier
1> c:\work\testproject\testproject\testclass.cpp(6): error C2065: 'input': undeclared identifier
1> c:\work\testproject\testproject\testclass.cpp(6): error C2448: 'test::usesFoo::memberFunc': function-style initializer appears to be a function definition
I realize that either qualifying the namespace directly on the member function in the cpp file or writing "using namespace test" will fix the problem; I'm more curious what exactly the standard says in this case.
The code is correct, but it has nothing to do with argument-dependent lookup. Moreover, the using-declaration only affects the lookup of usesFoo, not foo: after the declarator-id naming a class member, subsequent names are looked up in the context of that class. Since foo is a member of namespace test, which encloses test::usesFoo, it is found. Without the using-declaration, you would need to define the member function like this:
void test::usesFoo::memberFunc(foo& input) {
    (void)input;
}
The relevant passage is 3.4.1 Unqualified name lookup [basic.lookup.unqual], paragraph 6:
A name used in the definition of a function following the function's declarator-id that is a member of namespace N (where, for the purposes of exposition only, N could represent the global scope) shall be declared before its use in the block in which it is used or in one of its enclosing blocks, or shall be declared before its use in namespace N or, if N is a nested namespace, shall be declared before its use in one of N's enclosing namespaces.
Argument-dependent lookup only comes into play when a function is called, not when it is defined. The two mechanisms have nothing to do with each other.
Source: https://daily-blog.netlify.app/questions/1895758/index.html
Perl6::Overview -- a brief introduction and overview of Perl 6
This introduction is aimed at the beginning Perl 6 programmer. For the moment, it is assumed that such programmers are coming from a background of Perl 5. However, this document tries to be simple and general enough for anyone with some programming experience to pick up Perl 6. Those who want more information about the changes from Perl 5 should look elsewhere.
Perl 6, like its predecessors, is a multi-purpose dynamic language combining ease of use and powerful programming features.
You will need to install Pugs, which can be found at.
Having done so, you can run your Perl 6 program from the command line as follows:
pugs myprogram.pl
To run one-liners from the command-line, use the -e flag:
pugs -e 'say "Hello, world!"'
You can also start up pugs without any arguments, then type Perl 6 commands at its command line. To exit, type
ctrl-D or
:q.
A Perl 6 program consists of one or more statements. These statements are simply written in a plain text file, one after another. There is no need to have a
main() function or anything of that kind.
Perl 6 statements end in a semi-colon:
say "Hello, world";
Comments start with a hash symbol and run to the end of the line
# This is a comment
Whitespace is irrelevant:
say "Hello, world" ;
... except inside quoted strings:
# this would print with a linebreak in the middle
say "Hello
world";
Double quotes or single quotes may be used around literal strings:
say "Hello, world"; say 'Hello, world';
However, only double quotes "interpolate" variables and special characters such as newlines (
\n):
my $name = 'Johnny';
print "Hello, $name\n"; # prints: Hello, Johnny (followed by a newline)
print 'Hello, $name\n'; # prints: Hello, $name\n (no newline)
Numbers don't need quotes around them:
say 42;
Most of the time, you can use parentheses for functions' arguments or omit them according to your personal taste.
say("Hello, world"); say "Hello, world";
Perl has three main variable types: scalars, arrays, and hashes. Variables are declared with
my.
A scalar represents a single value:
my $animal = "camel"; my $answer = 42;
Scalar variables start with dollar signs. Scalar values can be strings, integers or floating point numbers, and Perl will automatically convert between them as required. There is no need to pre-declare your variable types (though you can if you want -- see XXX).
Scalar values can be used in various ways:
say $animal; say "The animal is $animal"; say "The square of $answer is ", $answer * $answer;
There is a "magic" scalar with the name
$_, and it is referred to as the "topic". It's used as the default argument to a number of functions in Perl, and it's set implicitly by certain constructs (so-called "topicalizing" constructs).
say; # prints contents of $_ by default
Array variables start with an at sign, and they represent lists of values:
my @animals = ("camel", "llama", "owl"); my @numbers = (23, 42, 69); my @mixed = ("camel", 42, 1.23);
Arrays are zero-indexed. Here's how you get at elements in an array:
say @animals[0]; # prints "camel"
say @animals[1]; # prints "llama"
The numeric index of the last element of an array can by found with
@array.end:
say @animals[@animals.end]; # prints "owl"
However, negative indices count backwards from the end of the list, so that could also have been written:
say @animals[-1]; # prints "owl"
To find the number of elements in an array, use the
elems method:
say @mixed.elems; # prints 3
To get multiple values from an array:
@animals[0,1]; # gives ("camel", "llama")
@animals[0..2]; # gives ("camel", "llama", "owl")
@animals[1..@animals.end]; # gives all except the first element
This is called an "array slice".
You can do various useful things to lists:
my @sorted = @animals.sort; my @backwards = @numbers.reverse;
There are a couple of special arrays too, such as
@*ARGS (the command line arguments to your script) and
@_ (the arguments passed to a subroutine, if formal parameters are not declared). These are documented in XXX.
Hash variables start with a percent sign, and represent sets of key/value pairs:

my %fruit_colors = (apple => "red", banana => "yellow");
my @fruits = %fruit_colors.keys; # ("apple", "banana")
my @colors = %fruit_colors.values; # ("red", "yellow")
Hashes have no particular internal order, though you can sort the keys and loop through them.
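A small sketch of that (the %fruit_colors hash is assumed to be declared as above):

```perl6
my %fruit_colors = (apple => "red", banana => "yellow");
# keys come back in no particular order, so sort them before looping
for %fruit_colors.keys.sort -> $fruit {
    say "$fruit is %fruit_colors{$fruit}";
}
```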
Just like special scalars and arrays, there are also special hashes. The most well known of these is
%*ENV which contains environment variables. Read all about it (and other special variables) in XXX.

More complex data structures can be built using references:
my $variables = {
    scalar => {
        description => "single item",
        sigil => '$',
    },
    array => {
        description => "ordered list of items",
        sigil => '@',
    },
    hash => {
        description => "key/value pairs",
        sigil => '%',
    },
};
print "Scalars begin with a $variables{'scalar'}{'sigil'}\n";
Exhaustive information on the topic of references can be found in perlreftut, perllol, perlref and perldsc.
[XXX Please check and correct this... just my best attempt --aufrank]
Variables in Perl 6 are put into one of several namespaces based on where and how they are declared. The basic scopes available are global, package, and lexical.
use GLOBAL <$FOO>; # globally-scoped $FOO
$*FOO; # same as above
our $bar; # package-scoped $bar
my $baz; # lexically-scoped $baz
Globally-scoped variables can be used anywhere within the current package, and can be used by any packages that use the current package. Package-scoped variables can be used anywhere in the current package, but cannot be accessed from outside the package. Lexically-scoped variables can only be used within the scope of the nearest enclosing braces.
our $foo;
sub bar {
    $foo++; # OK, package-scoped $foo
    my $baz; # lexically-scoped $baz
}
$baz++; # incorrect! $baz is not in the package scope
By default, variables declared in the root of the package are package scoped. By default, variables declared within a block are lexically scoped. This includes variables declared in classes, objects, methods, subroutines, anonymous closures, rules, and conditional and looping constructs.
There are also a number of specialized scopes available. Many of these scopes are declared and accessed through the use of secondary sigils, or 'twigils'.
$foo # ordinary scoping
$.foo # object attribute accessor
$^foo # self-declared formal parameter
$*foo # global variable
$+foo # environmental variable
$?foo # compiler hint variable
$=foo # pod variable
$!foo # explicitly private attribute (mapped to $foo though)
Most variables with twigils are implicitly declared or assumed to be declared in some other scope, and don't need a "my" or "our". Attribute variables are declared with "has", though, and environment variables are declared somewhere in the dynamic scope with the "env" declarator.
By default, a subroutine or method hides variables declared within its lexical scope (with the exception of
$_ ). Variables declared within a subroutine or method cannot usually be accessed when the subroutine or method is called. To change this, declare the variable with
env.
[XXX Probably should include an example declaring and accessing an ENV variable]
Variables within certain namespaces can be accessed through that namespace's symbol table. Available symbol tables (or 'pseudo-packages') include:
MY # lexical variables, declared with 'my'
OUR # package variables, declared with 'our'
GLOBAL # globally scoped variables, declared with 'use GLOBAL' or the $* twigil
ENV # environmental variables, declared with 'env' or $+
OUTER # the immediately surrounding 'MY' scope
CALLER # the lexical scope of the caller
SUPER # ???
COMPILING # the compile-time scope of a variable, often used in macros; declared with $?
Perl has most of the usual conditional and looping constructs. They are usually written as construct-name condition { ... }:

if condition { ... }

unless condition { ... }
This is provided as a more readable version of
if not condition.
... is the "yada yada yada" operator. It is used as a placeholder, which prints out a warning if executed.

while condition { ... }
There's also a negated version, for the same reason we have
unless:
until condition { ... }
You can also use
while in a post-condition:
print "LA LA LA\n" while 1; # loops forever
The loop construct functions exactly like C's for loop. It is rarely needed in Perl, since Perl provides the friendlier list-scanning for.
loop (my $i=0; $i <= $max; $i++) { ... }
Can be expressed like:
my $i = 0;
while $i <= $max {
    ...;
    $i++;
}
If you want to create a neverending loop use
loop without any arguments
loop { ... }
for @array {
    print "This element is $_\n";
}

# you don't have to use the default $_ either...
for %hash.keys -> $key {
    print "The value of $key is %hash{$key}\n";
}

Arithmetic
+ addition
- subtraction
* multiplication
/ division
** exponentiation
% modulo

Numeric comparison
== != < > <= >=

String comparison
eq ne lt gt le ge

Perl distinguishes numeric from string comparison because a value can be compared numerically (where 99 is less than 100) or alphabetically (where 100 comes before 99).
Smart comparison
~~ smart match (see smartmatch)
!~~ negated smart match
Boolean logic
&& (and)
|| (or)
! (not)
Miscellaneous
= assignment
~ string concatenation
x string multiplication
.. range operator (creates a list of items)
Many of the operators can be combined with an "=" as follows:

$a += 1; # means $a = $a + 1;
$a -= 1; # means $a = $a - 1;
$a ~= " "; # means $a = $a ~ " ";
Perl's rules are what other languages call regular expressions. Perl's regular expression support is both broad and deep, and is the subject of lengthy documentation in XXX and elsewhere. However, in short:
if /foo/ { ... } # true if $_ contains "foo"
if $a ~~ /foo/ { ... } # true if $a contains "foo"
The
// matching operator is documented in XXX. It operates on
$_ by default, or can be bound to another variable using the
~~ smart match operator (also documented in XXX).
s/foo/bar/; # replaces foo with bar in $_
$a ~~ s/foo/bar/; # replaces foo with bar in $a
$a ~~ s:g/foo/bar/; # replaces ALL INSTANCES of foo with bar in $a
The
s/// substitution operator is documented in XXX.
You don't just have to match on fixed strings. In fact, you can match on just about anything you could dream of by using more complex regular expressions. These are documented at great length in XXX, but for the meantime, here's a quick cheat sheet:
. a single character
\N non-newline
(foo|bar|baz) matches and captures any of the alternatives specified
<foo> matches the rule foo
^ start of string
$ end of string
Quantifiers can be used to specify how many of the previous thing you want to match (* for zero or more, + for one or more, ? for zero or one).
Some brief examples:
/^\d+/ string starts with one or more digits
/^$/ nothing in the string (start and end are adjacent)
/[\d\s]{3}/ string contains three digits, each followed by a whitespace character (eg "3 4 5 ")
/[a.]+/ matches a string in which every odd-numbered letter is a (eg "abacadaf")

# This loop reads from STDIN, and prints non-blank lines:
for (=<>) {
    next if /^$/;
    print;
}
In addition to grouping, parentheses serve a second purpose: they can be used to capture the results of parts of the regexp match for later use. The results end up in $0, $1 and so on. Note that these variables are numbered from 0, not 1:
# a cheap and nasty way to break an email address up into parts if ($email ~~ /(<-[@]>+)@(.+)/) { print "Username is $0\n"; print "Hostname is $1\n"; }
Perl rules also support named rules, grammars, backreferences, lookaheads, and all kinds of other complex details. Read all about them in XXX.
Kirrily "Skud" Robert <skud@cpan.org> Shmarya <shmarya.rubenstein@gmail.com> Pawel Murias <13pawel@gazeta.pl>
http://search.cpan.org/~lichtkind/Perl6-Doc-0.36/lib/Perl6/Doc/Overview.pod
In this third and final post about Jekyll, we are going to exploit its blog-aware features.
This article assumes you are familiar with this static site generator; in case you're not, you can read the two previous posts I've written, where I explain how to set it up and create your first pages.
Building a separate layout for posts
Despite being one of the reasons Jekyll is so popular, creating a blog section in your site is really simple, and you're going to notice that here because this tutorial won't take long to read.
First of all we’re going to create a new file inside the _layouts folder and call it post.html. You could still use your default layout file for each post you create but it’s highly probable you’re going to display some extra information or make some changes on the design to improve the reading experience.
Let’s go with something simple.
{% include head.html %}
{% include header.html %}
<section class="post--content">
  <p class="post--date">{{ page.date | date_to_string }}</p>
  <h1>{{ page.title }}</h1>
  <p>{{ page.introduction }}</p>
  {{ content }}
</section>
{% include footer.html %}
The
page variable is used to reference the post information. As I said, we're going simple and just showing a title, an introduction or excerpt, and the content. You might have also noticed that we're printing the date of the post and applying some formatting to it. Jekyll and Liquid come with a lot of versatility in their date filters; I suggest you read both docs if you really want the date attribute in a special format.
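For example, a format string passed to Liquid's date filter (the exact format shown here is just an illustration) renders the same date attribute differently than the date_to_string used in the layout above:

```liquid
<p class="post--date">{{ page.date | date: "%B %d, %Y" }}</p>
```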
Creating your first post
There’s no much wizardry here. You just need to start throwing Markdown or HTML files inside the _posts folder. the only condition is that you need to name them in a particular way: year-month-day-title-separate-by-hyphens.md, and that’s it.
Of course you still need to maintain the YAML configuration as it was shown in my previous posts. Here, you’re going to assign the value post to layout and then fill the title and introduction attributes.
---
layout: post
title: My First Post
introduction: Lorem ipsum dolor sit amet, consectetur adipiscing elit.
---
Nam elit purus, tempus vel velit non, laoreet tempus ligula. Suspendisse eu condimentum urna. Aliquam magna magna, faucibus non orci a, ultrices pretium justo. Donec tincidunt tellus mauris, quis viverra orci elementum sit amet. Quisque vulputate diam tortor, quis accumsan velit volutpat mattis.
There you go, that’s a post in Jekyll. Simple, right?
Permalinks
The url that your post will have can be changed in the _config.yml file and Jekyll already has pre-built date filters for you.
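As an illustration (this exact pattern is an assumption; pick the tokens you prefer), a permalink setting in _config.yml might look like:

```yaml
# _config.yml: one possible custom post URL pattern (chosen for illustration)
permalink: /blog/:year/:month/:title/
```

Jekyll also ships built-in permalink styles such as pretty and date that can be used in place of an explicit pattern.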
Listing your posts
It’s time to brag about your writing skills, posts is an array accessible in any file through the
site namespace. It’s up to you to show them in your home or create a new page where you can list them. Whatever your choice is, this will get the job done.
<ul>
  {% for post in site.posts %}
  <li>
    <a href="{{ post.url }}">
      <h2>{{ post.title }}</h2>
    </a>
  </li>
  {% endfor %}
</ul>
Not so hard, but what if you're launching your site and haven't written anything yet?
{% if site.posts.size > 0 %}
<ul>
  {% for post in site.posts %}
  <li>
    <a href="{{ post.url }}">
      <h2>{{ post.title }}</h2>
    </a>
  </li>
  {% endfor %}
</ul>
{% else %}
<p>There are no posts available right now. Come back soon!</p>
{% endif %}
A for loop and an if block: I'm guessing you're familiar with those from other programming languages, and that's what they look like in their Liquid form. Of course you have plenty of options to structure this view and show the date, the excerpt or any data related to posts.
As you get into the Liquid language the opportunities will multiply.
Wrap-up
I hope you’ve enjoyed this last posts about Jekyll. Remember that is the engine behind this blog you’re reading and I’m just scratching is surface, feel free to build your own stuff and share it with the community.
If you have any questions, I'm available on Twitter. Happy blogging!
https://jeremenichelli.io/2015/08/building-blog-jekyll-posts/
In this tutorial, we will learn to create a web server with ESP32 using SPIFFS and Arduino IDE. Firstly, we will learn how to store CSS and HTML files in the SPI flash file system of ESP32 and build a web server through those files. Instead of hard coding the HTML as string literals which takes up a lot of memory, the SPIFFS will help us access the flash memory of the ESP32 core.
To show the working of the SPIFFS, we will create and store individual HTML and CSS files through which we will build a web server that will control the output GPIO pins of the ESP32 module by toggling the onboard LED.
We also have a similar project with the ESP8266 NodeMCU.
SPIFFS Web Server Overview
We will build a web server to control onboard LED which is connected with GPIO2 of ESP32. The web server responds to a web client request with HTML and CSS files stored in the SPIFFS of the ESP32 module.
A simple HTML page consists of a title: ‘ESP32 WEB SERVER’ followed by two ON/OFF buttons which the user will click to control the ESP32 output which will be GPIO2 through which the on-board LED is connected. In other words, the user will control the onboard LED through the buttons available on the web page. The current GPIO state will also be mentioned.
How Does SPIFFS Web Server Work?
A series of steps will be followed:
- The ESP32 contains the web server and the web browser will act as the client. We will create the web server with the help of the ESPAsyncWebServer library which updates the web page without having to refresh it.
- We will create HTML and CSS files and store them in ESP32’s SPIFFS. Whenever the user will make a request by entering the IP address on the web browser, the ESP32 will respond with the requested files from a filesystem.
- To show the current GPIO state, we will create a placeholder inside our HTML file as the state will constantly change after clicking the ON/OFF buttons. This placeholder will be placed inside %% signs e.g., %GPIO_STATE%
- On an HTML page, there are two buttons. The first will be the “ON” button and the second will be the OFF button. Through these buttons, the user will be able to turn the onboard LED on and off after clicking on the respective button.
- When a user clicks the ON button, they will be redirected to an IP address followed by /led2on, and the onboard LED turns on.
- Similarly, When a user clicks the OFF button, they will be redirected to an IP address followed by /led2off, and the onboard LED turns off
Setting up Arduino IDE
To follow this SPIFFS bases web server project, make sure you have the latest version of Arduino IDE and the ESP32 add-on installed on your Arduino IDE. If you have not installed it before you can follow this tutorial:
Install ESP32 in Arduino IDE ( Windows, Linux, and Mac OS)
Also,
Installing ESPAsyncWebServer and Async TCP Library
We will need two libraries to build the SPIFFS-based web server. Therefore, we will have to download and install them in our Arduino IDE ourselves.
- To install ESPAsyncWebServer library, click here to download. You will download the library as a .zip folder which you will extract and rename as ‘ESPAsyncWebServer.’ Then, transfer this folder to the library folder in your Arduino IDE.
- To install the Async TCP library, click here to download. You will download the library as a .zip folder which you will extract and rename as ‘AsyncTCP’. Then, copy the ‘AsyncTCP’ folder to the library folder in your Arduino IDE.
Similarly, you can also go to Sketch > Include Library > Add .zip Library inside the IDE to add the libraries. After installation of the libraries, restart your IDE.
Creating Files for SPIFFS
Note: You should place HTML and CSS files inside the data folder. Otherwise, SPIFFS library will not be able to read these files.
Creating HTML file
Inside our HTML file, we will include the title, a paragraph for the GPIO state, and two on/off buttons.
Now, create an index.html file.
We will start with the title of the web page. The <title> tag will indicate the beginning of the title and the </title> tag will indicate the ending. In between these tags, we will specify "ESP32 WEB SERVER" which will be displayed in the browser's title bar.
<title>ESP32 WEB SERVER</title>
Next, we will create a meta tag to make sure our web server is available for all browsers e.g., smartphones, laptops, computers etc.
<meta name="viewport" content="width=device-width, initial-scale=1">
In our web page, we also use icons of lamps to show the ON and OFF states. A link tag in the page head loads the icons used in the webpage.
Between the <head></head> tags, we will reference the CSS file as we’ll be creating different files for both HTML and CSS by using the <link> tag. This tag will be used so that we can link with an external style sheet which we will specify as a CSS file. It will consist of three attributes. The first one is rel which shows the relationship between the current file and the one which is being linked.
We will specify “stylesheet” which will change the visuals of the web page. The second attribute is type which we will specify as “text/css” as we will be using the CSS file for styling purpose. The third attribute is href and it will specify the location of the linked file. Both of the files (HTML & CSS) will be saved in the same folder (data) so we will just specify the name of the CSS file that is “style.css.”
<link rel="stylesheet" type="text/css" href="style.css">
Inside the HTML web page body, we will include the heading, the paragraph and the buttons. This will go inside the <body></body> tags which will mark the beginning and the ending of the script. We will include the heading of our webpage inside the <h1></h1> tags and it will be the same as that of the web browser title i.e., ESP32 WEB SERVER.
<h1>ESP32 WEB SERVER</h1>
The first paragraph displays the lamp icon and the GPIO2 text. We use icons from the Font Awesome website. Besides this, when a user clicks the LED ON button, the browser is redirected to the /led2on URL.
To create the icon, we will use the fontawesome.com website. Go to fontawesome.com and type "light" in the search bar as shown below:
You will see many icons for light bulbs. You can select any one of them to be used in your web server. We use the first and second lamp icons in this tutorial. After that, click on the selected icon.
You will see this window which contains the link tag. Copy this link tag and use it on your HTML page as follows:
<p><i class="fas fa-lightbulb fa-2x" style="color:#c81919;"></i> <strong>GPIO2</strong></p>
Next, we will use a placeholder to monitor the correct GPIO states. %GPIO_STATE% will be used as the placeholder which will help us check for the current state of the output GPIO2 connected with the on-board LED.
<p>GPIO state: <strong>%GPIO_STATE%</strong></p>
Then, we will define the buttons on our web page. We have two buttons one after another: an ON button and an OFF button. When the user will click the ON button, the web page will redirect to /on URL likewise when the OFF button will be clicked, the webpage will be redirect to /off URL.
<p><a href="/led2on"><button class="button">ON</button></a></p>
<p><a href="/led2off"><button class="button button2">OFF</button></a></p>
Creating the CSS File
Inside our CSS file which will be used to give styles to our web page, we will specify the font types, size and colors of the headings, paragraphs and the two buttons. We will set the display text to font type Arial and align it in the centre of the webpage.
The font colour of the title (h1) and the font size of the first paragraph (p) will also be set. Next, as we are using two buttons one for ON and another for OFF thus, we will use two different colours to differentiate them. We will incorporate their font size, colour and positioning as well.
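The stylesheet itself did not survive above; a minimal style.css sketch matching that description (the exact colors and sizes are assumptions) could be:

```css
/* style.css: minimal sketch; exact colors and sizes are assumptions */
html {
  font-family: Arial, sans-serif; /* Arial, centered display text */
  text-align: center;
}
h1 { color: #0f3376; }     /* title color (assumed) */
p  { font-size: 1.5rem; }  /* first-paragraph font size (assumed) */
.button {
  background-color: #008cba; /* ON button color (assumed) */
  color: white;
  border: none;
  padding: 16px 40px;
  font-size: 1.5rem;
  cursor: pointer;
}
.button2 {
  background-color: #888;    /* OFF button color (assumed) */
}
```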
Arduino Sketch SPIFFS Web Server
Finally, we will create a new file and save it as ESP32_webserver. Copy the code given below in that file.
// Import required libraries
#include "WiFi.h"
#include "ESPAsyncWebServer.h"
#include "SPIFFS.h"

// Replace with your network credentials
const char* ssid = "Enter_Your_WiFi_Name";         //Replace with your own SSID
const char* password = "Enter_Your_WiFi_Password"; //Replace with your own password

const int ledPin = 2;
String ledState;

// Creating an AsyncWebServer object
AsyncWebServer server(80);

// Replaces the %GPIO_STATE% placeholder with the current LED state
String processor(const String& var){
  Serial.println(var);
  if(var == "GPIO_STATE"){
    if(digitalRead(ledPin)){
      ledState = "ON";
    }
    else{
      ledState = "OFF";
    }
    Serial.print(ledState);
    return ledState;
  }
  return String();
}

void setup(){
  Serial.begin(115200);
  pinMode(ledPin, OUTPUT);
  digitalWrite(ledPin, LOW);

  // Initialize SPIFFS
  if(!SPIFFS.begin(true)){
    Serial.println("An Error has occurred while mounting SPIFFS");
    return;
  }

  // Connect to Wi-Fi
  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) {
    delay(1000);
    Serial.println("Connecting to WiFi..");
  }
  Serial.println(WiFi.localIP());

  // Route for root / web page
  server.on("/", HTTP_GET, [](AsyncWebServerRequest *request){
    request->send(SPIFFS, "/index.html", String(), false, processor);
  });

  // Route to load the style.css file
  server.on("/style.css", HTTP_GET, [](AsyncWebServerRequest *request){
    request->send(SPIFFS, "/style.css", "text/css");
  });

  // Route to turn the on-board LED ON
  server.on("/led2on", HTTP_GET, [](AsyncWebServerRequest *request){
    digitalWrite(ledPin, HIGH);
    request->send(SPIFFS, "/index.html", String(), false, processor);
  });

  // Route to turn the on-board LED OFF
  server.on("/led2off", HTTP_GET, [](AsyncWebServerRequest *request){
    digitalWrite(ledPin, LOW);
    request->send(SPIFFS, "/index.html", String(), false, processor);
  });

  server.begin();
}

void loop(){
}
Firstly, we will include the necessary libraries. For this project, we are using three main libraries such as WiFi.h, ESPAsyncWebServer.h and SPIFFS.h. As we have to connect our ESP32 to a wireless network, we need WiFi.h library for that purpose. The other ESPAsyncWebServer library is the one that we recently downloaded and will be required to build the asynchronous HTTP web server. Also, the SPIFFS library will allow us to access the flash memory file system of our ESP32 core.
#include "WiFi.h" #include "ESPAsyncWebServer.h" #include "SPIFFS.h"
Next, we will create two global variables, one for the SSID and another for the password. These will hold our network credentials, which will be used to connect to our wireless router. Replace both of them with your own credentials to ensure a successful connection.
// Replace with your network credentials
const char* ssid = "Enter_Your_WiFi_Name";         //Replace with your own SSID
const char* password = "Enter_Your_WiFi_Password"; //Replace with your own password
Next, define the variable ledPin to give a symbolic name to the GPIO2 pin through which the on-board LED is connected. The ledState variable will be used to store the current LED state, which will be used later on in the program code.
const int ledPin = 2; String ledState;
The AsyncWebServer object will be used to set up the ESP32 web server. We will pass the default HTTP port which is 80, as the input to the constructor. This will be the port where the server will listen to the incoming HTTP requests.
// Creating an AsyncWebServer object AsyncWebServer server(80);
Processor Function
Inside the processor() function, we will replace the placeholder %GPIO_STATE% with the LED state. The series of if-else statements will check whether the placeholder is indeed the one that we created in our HTML file. If it is, then it will read the variable ledPin, which specifies GPIO2 connected to the on-board LED. If the state is HIGH, the ledState variable will be set to "ON"; otherwise it will be set to "OFF". As a result, this variable will be returned and %GPIO_STATE% will be replaced with the ledState value.
String processor(const String& var){
  Serial.println(var);
  if(var == "GPIO_STATE"){
    if(digitalRead(ledPin)){
      ledState = "ON";
    }
    else{
      ledState = "OFF";
    }
    Serial.print(ledState);
    return ledState;
  }
  return String();
}
setup() Function
Inside the setup() function, we will open a serial connection at a baud rate of 115200. Using the pinMode() function, the GPIO pin will be passed as a parameter and configured as an output pin.
Serial.begin(115200); pinMode(ledPin, OUTPUT);
These lines of code will initialize the SPIFFS.
if(!SPIFFS.begin(true)){
  Serial.println("An Error has occurred while mounting SPIFFS");
  return;
}
WiFi Network Connection
The following section of code will connect our ESP32 board with the local network whose network credentials we already specified above. After the connection will be established, the IP address of the ESP32 board will get printed on the serial monitor. This will help us to make a request to the server.
WiFi.begin(ssid, password);
while (WiFi.status() != WL_CONNECTED) {
  delay(1000);
  Serial.println("Connecting to WiFi..");
}
Serial.println(WiFi.localIP());
Async Web Server SPIFFS
Now, we will look into how the Async Web Server handles the HTTP requests received from a client. We can configure the Async Web Server to listen for specific HTTP requests based on configured routes and execute a particular function whenever an HTTP request is received on that route.
We will use the on() method on the server object to listen to the incoming HTTP requests and execute functions accordingly.
The send() method is used to return the HTTP response. The index.html file will be sent to the client whenever the server receives a request on the "/" URL. We will replace the placeholder with the value saved in the variable ledState by passing processor as the last argument of the send() function.
server.on("/", HTTP_GET, [](AsyncWebServerRequest *request){
  request->send(SPIFFS, "/index.html", String(), false, processor);
});
Next, we will also add /style.css as the first argument inside the on() function, as we have already referenced it in our HTML file. Hence, whenever the client requests the CSS file, it will be delivered, as can be seen through the send() function.
server.on("/style.css", HTTP_GET, [](AsyncWebServerRequest *request){
  request->send(SPIFFS, "/style.css", "text/css");
});
As you already know, when the user clicks the ON/OFF buttons, the web page redirects to either the /led2on or /led2off URL. Therefore, this section of code deals with what happens when either of the buttons is clicked.
Whenever an HTTP GET request is made on either /led2on or /led2off, the on-board LED of the ESP32 module will turn ON or OFF, and the HTML page will be sent in response to the client.
Also, we set the on-board LED's GPIO pin to a LOW state in setup() so that it is initially OFF at the time of boot.

server.on("/led2on", HTTP_GET, [](AsyncWebServerRequest *request){
  digitalWrite(ledPin, HIGH);
  request->send(SPIFFS, "/index.html", String(), false, processor);
});

server.on("/led2off", HTTP_GET, [](AsyncWebServerRequest *request){
  digitalWrite(ledPin, LOW);
  request->send(SPIFFS, "/index.html", String(), false, processor);
});
To start the server, we will call the begin() on our server object.
server.begin();
Our loop() function is empty: as we are building an asynchronous server, we do not need to call any handling function inside it.
void loop() { }
Demonstration
After you have saved all three files go to Sketch > Show Sketch Folder and create a new folder. Place both the HTML and CSS files inside that folder and save the folder as ‘data.’
Make sure you.
Now, we will upload the files to our ESP32 board. Go to Tools > ESP32 Sketch Data Upload. After a few moments, the files will be uploaded.
After you have uploaded your code and the files to the ESP32 development board, press its ENABLE button.
In your Arduino IDE, open up the serial monitor and you will be able to see the IP address of your ESP module.
Copy this ESP32 IP address into a web browser and press enter. The web server will look something like this:
Now, press the ON and OFF buttons and the on-board LED will turn on and off accordingly. The GPIO state on the web server will also change and get updated.
In conclusion, we learned about a very useful feature of the ESP32: its flash file system, accessed through SPIFFS. Instead of having to hard-code all the HTML and CSS in the Arduino sketch, it is very convenient to save them once in the ESP32 SPIFFS.
Other interesting ESP32 Web Server Projects:
https://microcontrollerslab.com/esp32-web-server-spi-flash-file-system-spiffs/
- When the server runs, it shows its own IP and port, opens a ServerSocket, and waits for socket connections from clients.
- On the client side, enter the message to be sent to the server, enter the server IP and port, then click the Connect... button. The client connects to the server using a socket, with a DataInputStream and a DataOutputStream carrying the message to send.
- On the server side, after serverSocket.accept() returns, the server retrieves the message sent with dataInputStream.readUTF().
Both client and server need permission of "android.permission.INTERNET" in AndroidManifest.xml.
Notice:
- All code for network operation should run in background thread.
- The code dataInputStream.readUTF() will block the program flow if there is no data to read. (Read Prevent program blocked by DataInputStream.readUTF())
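To see why readUTF() pairs with writeUTF() (and why it blocks until a complete UTF chunk arrives), here is a self-contained sketch using in-memory streams instead of sockets; the stream wiring is illustrative, not part of the original app:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class ReadUtfDemo {
    public static void main(String[] args) throws IOException {
        // Simulate the socket's output side: writeUTF prepends a
        // 2-byte length before the modified-UTF-8 payload.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buffer);
        out.writeUTF("Hello from Android");
        out.flush();

        // Simulate the socket's input side: readUTF consumes exactly one
        // such chunk. On a real socket it would block here until all the
        // bytes arrive, which is why checking available() first can help
        // avoid hanging the thread.
        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(buffer.toByteArray()));
        if (in.available() > 0) {
            System.out.println(in.readUTF());  // prints: Hello from Android
        }
    }
}
```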
Example code in Server Side:
package com.example.androidserversocket;

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

public class MainActivity extends Activity {

    TextView info, msg;
    String message = "";
    int count = 0;
    ServerSocket serverSocket;
    static final int SocketServerPORT = 8080;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        info = (TextView) findViewById(R.id.info);
        msg = (TextView) findViewById(R.id.msg);

        Thread socketServerThread = new Thread(new SocketServerThread());
        socketServerThread.start();
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        if (serverSocket != null) {
            try {
                serverSocket.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    private class SocketServerThread extends Thread {

        @Override
        public void run() {
            Socket socket = null;
            DataInputStream dataInputStream = null;
            DataOutputStream dataOutputStream = null;
            try {
                serverSocket = new ServerSocket(SocketServerPORT);
                MainActivity.this.runOnUiThread(new Runnable() {
                    @Override
                    public void run() {
                        info.setText("I'm waiting here: " + serverSocket.getLocalPort());
                    }
                });

                while (true) {
                    socket = serverSocket.accept();
                    dataInputStream = new DataInputStream(
                            socket.getInputStream());
                    dataOutputStream = new DataOutputStream(
                            socket.getOutputStream());

                    String messageFromClient = "";
                    //If no message sent from client, this code will block the program
                    messageFromClient = dataInputStream.readUTF();

                    count++;
                    message += "#" + count + " from " + socket.getInetAddress()
                            + ":" + socket.getPort() + "\n"
                            + "Msg from client: " + messageFromClient + "\n";

                    MainActivity.this.runOnUiThread(new Runnable() {
                        @Override
                        public void run() {
                            msg.setText(message);
                        }
                    });

                    String msgReply = "Hello from Android, you are #" + count;
                    dataOutputStream.writeUTF(msgReply);
                }
            } catch (IOException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
                final String errMsg = e.toString();
                MainActivity.this.runOnUiThread(new Runnable() {
                    @Override
                    public void run() {
                        msg.setText(errMsg);
                    }
                });
            } finally {
                if (socket != null) {
                    try {
                        socket.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
                if (dataInputStream != null) {
                    try {
                        dataInputStream.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
                if (dataOutputStream != null) {
                    try {
                        dataOutputStream.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }
        }
    }
}
Layout:
Example code in Client Side:
package com.example.androidclient;

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

import android.app.Activity;
import android.os.AsyncTask;
import android.os.Bundle;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.EditText;
import android.widget.TextView;
import android.widget.Toast;

public class MainActivity extends Activity {

    TextView textResponse;
    EditText editTextAddress, editTextPort;
    Button buttonConnect, buttonClear;
    EditText welcomeMsg;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        editTextAddress = (EditText) findViewById(R.id.address);
        editTextPort = (EditText) findViewById(R.id.port);
        buttonConnect = (Button) findViewById(R.id.connect);
        buttonClear = (Button) findViewById(R.id.clear);
        textResponse = (TextView) findViewById(R.id.response);
        welcomeMsg = (EditText) findViewById(R.id.welcomemsg);

        buttonConnect.setOnClickListener(buttonConnectOnClickListener);
        buttonClear.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View v) {
                textResponse.setText("");
            }
        });
    }

    OnClickListener buttonConnectOnClickListener = new OnClickListener() {
        @Override
        public void onClick(View arg0) {
            String tMsg = welcomeMsg.getText().toString();
            if (tMsg.equals("")) {
                tMsg = null;
                Toast.makeText(MainActivity.this, "No Welcome Msg sent",
                        Toast.LENGTH_SHORT).show();
            }
            MyClientTask myClientTask = new MyClientTask(
                    editTextAddress.getText().toString(),
                    Integer.parseInt(editTextPort.getText().toString()),
                    tMsg);
            myClientTask.execute();
        }
    };

    public class MyClientTask extends AsyncTask<Void, Void, Void> {

        String dstAddress;
        int dstPort;
        String response = "";
        String msgToServer;

        MyClientTask(String addr, int port, String msgTo) {
            dstAddress = addr;
            dstPort = port;
            msgToServer = msgTo;
        }

        @Override
        protected Void doInBackground(Void... arg0) {
            Socket socket = null;
            DataOutputStream dataOutputStream = null;
            DataInputStream dataInputStream = null;

            try {
                socket = new Socket(dstAddress, dstPort);
                dataOutputStream = new DataOutputStream(
                        socket.getOutputStream());
                dataInputStream = new DataInputStream(socket.getInputStream());

                if (msgToServer != null) {
                    dataOutputStream.writeUTF(msgToServer);
                }

                response = dataInputStream.readUTF();
            } catch (IOException e) {
                e.printStackTrace();
                response = e.toString();
            } finally {
                if (socket != null) {
                    try {
                        socket.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
                if (dataOutputStream != null) {
                    try {
                        dataOutputStream.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
                if (dataInputStream != null) {
                    try {
                        dataInputStream.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }
            return null;
        }

        @Override
        protected void onPostExecute(Void result) {
            textResponse.setText(response);
            super.onPostExecute(result);
        }
    }
}
Layout:
Client and server running on a shared WiFi hotspot:
The following video, at 1:22, shows it working when both client and server connect to the WiFi network shared from the server's hotspot.
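Outside Android, the same writeUTF()/readUTF() request-reply pattern can be exercised on a desktop JVM. The sketch below is not from the post — the class name and message strings are invented — but it mirrors the server's accept/read/reply loop body and the client's connect/write/read flow described above:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Desktop-JVM sketch of the post's writeUTF/readUTF exchange (no Android UI).
class EchoPair {

    public static String roundTrip(String msgToServer) {
        try (ServerSocket serverSocket = new ServerSocket(0)) { // 0 = pick a free port
            Thread server = new Thread(() -> {
                try (Socket s = serverSocket.accept()) {
                    DataInputStream in = new DataInputStream(s.getInputStream());
                    DataOutputStream out = new DataOutputStream(s.getOutputStream());
                    // Blocks until the client sends something, as in the article.
                    String fromClient = in.readUTF();
                    out.writeUTF("Msg from client: " + fromClient);
                } catch (Exception e) {
                    // Surfaced to the caller via the client's read failing.
                }
            });
            server.start();

            try (Socket client = new Socket("127.0.0.1", serverSocket.getLocalPort())) {
                DataOutputStream out = new DataOutputStream(client.getOutputStream());
                DataInputStream in = new DataInputStream(client.getInputStream());
                out.writeUTF(msgToServer);
                String reply = in.readUTF();
                server.join();
                return reply;
            }
        } catch (Exception e) {
            return "error: " + e;
        }
    }
}
```

ServerSocket(0) asks the OS for an ephemeral port, which avoids hard-coding a SocketServerPORT value while experimenting.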
- Prevent program blocked by DataInputStream.readUTF()
- Editable message sent from server
- Java/JavaFX Client link to Android Server
- Java/JavaFX Client run on Raspberry Pi, link with Android Server
- Java/JavaFX Server link to Android Client
- A Simple Chat App
46 comments:
This is awesome. I love how simple it is. I have two questions however:
if I want to send messages from the server to the client as well and have the server receive them would it work if I use the client side of your code into the server part of the code and the server part of your code into the client side of the code?
Also I am thinking of changing it so that when pressing connect a new screen shows up where the client can also view the chat messages from the server and send messages. Will it be possible to do this through intents? Sorry if my questions sound silly. I am new to android programming.
hello blackjwl,
"send messages from the server to the client" have already been done in the example. The message "Hello from Android, you are #" is sent from server to client after connected.
For "chat" application, I have no idea right now, may be will try later.
Thx
I see, well I am working on that these days and your code seemed pretty much the solution to a chat application as I thought I only need to add a text box to the server app's xml layout and enter text instead of sending a hard coded message. I did not mean a fully fledged chat application only chatting between the server and client taking place in the simplest way possible.
Anyway I have tried to run your code on two separate emulators and I do not get the local IP address shown as the "SiteLocalAddress" when I run the server app. I only see one SiteLocalAddress statement mentioning "10.0.2.15"
and when I enter that into the client app on the other emulator I got an IOexception java.net.ConnectException saying failed to connect to 10.0.2.15. Connect failed ECONNREFUSED(connection refused).
After that when I entered my computer's local IP address into the client app's text box thinking maybe it would work even if it doesn't show it but absolutely NOTHING happened. I had copied and pasted your code exactly as it is and had run it. Any idea why I can;t run it like you did in the video?
I forgot to mention one other thing, when I tried to run the apps again on the two separate emulators, and I put in the local ip address of my computer even though it did not show up on the server app, I decided to wait and see what happens and I ended up getting an error saying failed to connect to 192.168.0.7 ETIMEDOUT(Connection timed out).
if I run the server app on the mobile and the client app on the emulator your code works however. I am still trying to make the server send a message from the user to the client through a textbox rather than a hardcoded string message but the application just crashes. I have been at this all day and it is starting to drive me crazy.
ServerActivity.this.runOnUiThread(new Runnable() {
@Override
public void run() {
msg.setText(message);
}
});
sendChat.setOnClickListener(new OnClickListener() {
@Override
public void onClick(View v) {
chatMsg= chatBoxText.getText().toString();
try {
dataOutputStream.writeUTF(chatMsg);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
});
Any clues on how I can insert a textfield that would send the message to the client without causing it to crash?
hello blackjwl,
Is it what you want?
Strongly recommend NOT to test on Emulator.
Thank you so much. :) this tutorial proved to be very useful! :)
Please check:
Implement simple Android Chat Application, server side..
and
Simple Android Chat Application, client side.
Is it gonna work if I try to connect on different networks?
Awesome.... It is really working. Can you please provide similar code which can transfer files between two android devices
hello Sauradipta Mishra,
Please check File transfer via Socket, between Android devices.
Hi guys,
Could anyone give me a Python socket server code (I have a server running Python). Thanks a lot.
Thanx for sharing such code. it's really good example.
But, I have one question. Can we make server devices ip adr. static? with code or with android user interface? We may get rid of entering Ip adr. and port every time.
sure you can make the ip static. But it is not the job of the application.
I tried sending http handshake message from serverSocket(Android code) to websocket(JavaScript code).
WebSocket in JavaScript received onopen() callback function but it immidaitely recived onclose() callback with reason code 1006.
Note: ServerSocket is listening to port 8080.
Websocket is created using URL - "wb:127.0.0.1:8080"
Please let me know if my approach is wrong.
Is it possible to communicate between serverSocket and webSocket?
Or is there any other way to create WebSocket Server in andorid similar to serverSocket and communicate with JavaScript websocket?
Mallikarjun
Hello Mallik,
I have no idea right now.
I think serverSocket and webSocket cannot communicate directly.
Please, I need a way to make two mobiles connect via WiFi and play TicTacToe on them
my android mobile is client only, my wifi module is server , if i connect from client the server should connect and the server sending should be seen in a label. for this how to edit your code. please guide
hi
i use eclipse with target 17 and i use 2 emulator android on eclipse when i run the application i have a problem in client i can't connect to the server i have an exception (connection refused) plz help me
Hello, im just thinking if it is possible to divide the code on the connect button by creating 2 buttons [connect, send]. connect just simply notifies you if it's connected, then send is to send a message to the server. can anyone help me?
Add the permissions
(Uses internet,access ntwk state )
Thanks for the tutorial its quite helpful!!!!!
How can I send the location of the server to client on button click.
As expected, Always find a solution from your blog.Thank you
Hi!
is it possible to use the same code for ObjectInputStream and ObjectOutputStream? Because, i tried but i didnt have any result. Thanks!!
Regards
Hi,
I am trying to execute your example, but i always got failed to connect IOException also isConnectedFailed: EHOSTUNREACH(no route to host), Please help to resolve this exception.
Hello Rasoul Gholamhosseinzadeh,
In this example, the server starts a ServerSocket and waits for requests from clients. The communication link is initiated by the client, then the server replies something, then it closes. So the server cannot initiate communication to the client.
But you can check socket.getInetAddress() to determine what to reply.
I only implemented client code since I need to connect to a telnet server.
Is this possibile ??
When I press connect I obtain no messages and no errors. Simply, nothing happens !!
i have a same question:
Rasoul Gholamhosseinzadeh said...
anybody have example?help please
Hello Rasoul Gholamhosseinzadeh and Iqra Ali,
I think what you want is the Android Chat example, with the server sending an individual message to a specific client.
thankyou :)
its really helpful
and if i want to communicate with different network. any example about this?
i want to communicate with different network. any example about this?
and also i want some modification like a client sends a msg to the server then server forward it to other user.
hey thank you for code. i used your client side code and use another app as server i can send a msg but i can not receive from another server app and also cant see the chat panel which is shown in ur video.
Hello I want server and client side code is it possible to give me this code my emaild id is kirtigiripunje@gmail.com thanks
hello kirti kumar,
The post already had links to download the code.
Hello,
On the Client side, I would like to connect to the server, then send multiple messages. At the moment, I am able to connect once and send a message, however, when I click on the "Connect" button again it tries to create a new socket. I want the same connected socket to send multiple messages. Is it possible? Is there another example with this implementation?
Thanks
i want to run this client and server in different network and want them to communicate. how do i do this. how to do port forwarding.
on my router wifi its work,phones are connect, when the ip is 10.0.0.2, but in mobile network how can it work? what need to change?
hello Pavel Palei,
As I know, most mobile network provider block your port as server, not related to the program. So mobile phone can be a client, but not as server.
why we need to create new Socket every time we send message? the connection stop after thread executed?
is there another ways communicate with sockets?
hello Pavel Palei,
Depends on your implementation. In this simple example, client side only connect to server, send data, and close.
Refer to another example of A Simple Chat App (listed below "Next:"), connection no close after connected.
i have a problem, i dont receive data, i received java.io.EOFException
NB: i have a python client and i send the data, it is tested
Hi sir,Thank you very much very good tutorial.
But sir , I have one question how to transfer file or bidirectional communication "file" using socket ..please help me .thanks
Hi sir,
Can you please tell me 1 thing.
When the connection is established how to show another page for few sec.
I am creating a game app so when the connection is established is should go to the game page play for 2 min and then return the score.
Can we implement it using this??
Source: http://android-er.blogspot.com/2014/08/bi-directional-communication-between.html
DotNetStories
In this post I would like to show you with a hands-on example how to invoke a WCF service from JQuery. I have already posted a few posts regarding JQuery and server communication. Have a look at this post and this post. This is a similar post.
In order to follow what I am about to say in this post, I assume that you know what JQuery is and have a basic understanding of how to traverse the DOM or handle events with JQuery.
You can find more posts regarding JQuery by clicking here. So we can invoke an AJax enabled WCF service from JQuery as long the binding is webHttpBinding.
1) Launch Visual Studio 2010 (the Express edition will also work) and create a new empty website. Choose a suitable name for your website. Choose C# as the development language.

Add a new item to your project, a web form. Name it jQueryWCF.aspx. We are going to have some <a> links in the page, and we are going to instantiate calls to the WCF service through JQuery.

These links represent the names of Liverpool footballers. This is the code for the markup
<body>
    <p>Liverpool Players</p>
    <a href="#">Steven</a>
    <a href="#">Kenny</a>
    <a href="#">Robbie</a>
    <br /> <br /> <br />
    <div id="loginfo"> </div>
    <br />
    <div id="res"> </div>
</body>
Nothing difficult in this code.
2) Add another item to your website, an Ajax-enabled WCF service. Name it FootballersInfo.svc. The markup is generated automatically
<%@ ServiceHost Language="C#" Debug="true" Service="FootballersInfo"
CodeBehind="~/App_Code/FootballersInfo.cs" %>
3) In the App_Code folder we need to add a new item, a class file. I name it Footballers.cs
This is the code for the class
[DataContract]
public class Footballers
{
    [DataMember]
    public string FirstName { get; set; }

    [DataMember]
    public string LastName { get; set; }

    [DataMember]
    public DateTime BirthDate { get; set; }
}
This is a simple class with 3 properties.
Also in the App_Code special folder we have the FootballersInfo.cs
The code for this class follows. Have a look at the comments. I would use those lines if I were using HTTP GET. In this example I will use HTTP POST.
In this class we have a method (GetFootballerInfo) that gets a string as an input parameter and we return back to the calling code a footballer object depending on the input string parameter (name)
public class FootballersInfo
{
    // Add more operations here and mark them with [OperationContract]
    [OperationContract]
    public Footballers GetFootballerInfo(string name)
    {
        Footballers footballer = null;

        if (name == "Steven")
        {
            footballer = new Footballers
            {
                FirstName = "Steven",
                LastName = "Gerrard",
                BirthDate = new DateTime(1980, 11, 11),
            };
        }
        if (name == "Kenny")
        {
            footballer = new Footballers
            {
                FirstName = "Kenny",
                LastName = "Dalglish",
                BirthDate = new DateTime(1950, 11, 11),
            };
        }
        if (name == "Robbie")
        {
            footballer = new Footballers
            {
                FirstName = "Robbie",
                LastName = "Fowler",
                BirthDate = new DateTime(1975, 11, 11),
            };
        }
        return footballer;
    }
}
4) Now we need to write some JavaScript code that will invoke the WCF service, get back the results from the service and display them on the screen. Add a folder to your website, called scripts.

Add a new item to the scripts folder, a JavaScript file. Name it jQueryWCF.aspx.js. Go to the main JQuery website and download the latest JQuery library. Include those library files in the scripts folder.
In the head section of the jQueryWCF.aspx page add the following
<head runat="server"> <title></title>
<script src="scripts/jquery-1.7.js" type="text/javascript"></script> <script src="scripts/jQueryWCF.aspx.js" type="text/javascript"></script> </head>
I will post below the contents of the jQueryWCF.aspx.js and I will explain what I do
$(function() {
    $("a").click(GetFootballerInfo);
    // ... (global Ajax event handlers appended to #loginfo, discussed below)
});

function GetFootballerInfo(event) {
    var footballerName = $(this).text();
    $.ajax({
        type: "POST",
        url: "FootballersInfo.svc/GetFootballerInfo",
        data: '{ "name": "' + footballerName + '" }',
        timeout: 7000,
        contentType: "application/json",
        dataFilter: parseJson,
        success: onSuccess
    });
}

function parseJson(data, type) {
    var dateRegEx = new RegExp('(^|[^\\\\])\\"\\\\/Date\\((-?[0-9]+)(?:[a-zA-Z]|(?:\\+|-)[0-9]{4})?\\)\\\\/\\"', 'g');
    var exp = data.replace(dateRegEx, "$1new Date($2)");
    var result = eval('(' + exp + ')');
    return result.d;
}

function onSuccess(footballer) {
    var html = "<ul>";
    html += "</ul>";
    html += "<br/>First Name: " + footballer.FirstName;
    html += "<br/>Last Name: " + footballer.LastName;
    html += "<br/>Birthdate: " + footballer.BirthDate;
    $("#res").html(html);
}
When a link is clicked then I call the GetFootballerInfo function. The function does the following:

1) I get the text of the link that was pressed (the footballer name). Then I make a low-level call to the ajax JQuery method to perform the asynchronous HTTP (Ajax) request and fill in all the parameters the method requires. I use HTTP POST to pass the data to the WCF service (type: "POST").

2) The URL or endpoint is set to (url: "FootballersInfo.svc/GetFootballerInfo"). We invoke the GetFootballerInfo method.

3) Then I define that the input parameter is in JSon format (data: '{ "name": "' + footballerName + '" }'). What I mean is that we serialise the input data in JSon format. WCF will make a note of that and deserialise it.

4) We have a timeout of 7 seconds (timeout: 7000).

5) We tell WCF that JSon format is what we are going to pass it and we expect JSon format as a response (contentType: "application/json").

6) WCF will return JSon to the asynchronous call and JQuery will attempt to parse that JSon. But that does not work well with datetime fields. We can get round that by writing our own JSon datetime parsing method (dataFilter: parseJson). Have a look at the parseJson method.

7) When everything is successful we call the onSuccess method (success: onSuccess). Have a look at the onSuccess method. We get back the answer from the server and then, using simple JQuery, we stick that output into the page ($("#res").html(html)).
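The "/Date(milliseconds)/" handling that parseJson deals with can be checked in isolation. The snippet below is a hedged sketch, not code from the post — msJsonDateToMillis is an invented helper name — but it targets the same WCF JSON date wire format:

```javascript
// Standalone sketch of the "\/Date(n)\/" parsing done by parseJson above.
// msJsonDateToMillis is our own helper name, not part of the post's code.
function msJsonDateToMillis(raw) {
    // The WCF JSON wire format for dates is "\/Date(milliseconds)\/",
    // optionally followed by a timezone suffix such as "+0100".
    var m = /\/Date\((-?\d+)(?:[a-zA-Z]|[+-]\d{4})?\)\//.exec(raw);
    return m ? parseInt(m[1], 10) : null;
}

// 342748800000 ms after the Unix epoch is 1980-11-11 (UTC),
// i.e. the Gerrard BirthDate used in the service above.
var gerrard = msJsonDateToMillis("/Date(342748800000)/");
```

Strings that are not in the date format come back as null, which is how a dataFilter-style function can leave ordinary values untouched.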
Finally I would like to say a few things about the global Ajax events. This is the only bit of code I have not explained.
What I do here is basically use these events to get information such as when the Ajax call started, was sent, was successful, completed and finally stopped. I also find out if something went wrong (error). I print this information to the screen. By all means have a look at the documentation for global Ajax events.
By all means use Firebug to debug / place breakpoints for the .js files.
5) Finally view the page in the browser and click the link. The asynchronous call to the service will be made. The WCF service will get the parameter and will return back JSon results that JQuery will know how to parse.
Have a look at the picture below to see what happened when I loaded the page and clicked one of the links.
Now I would like to show you how to use Firebug, which is an excellent add-on, to do some basic debugging. You must download Firebug and invoke it. I will add a watch (footballerName) and place a breakpoint. Then I run the page again. Have a look at the picture below
Finally we can see the JSon response the WCF Service sent back to the calling application. Have a look at the picture below.
So we saw how easy it is to invoke a WCF Service with JQuery. Hope the screenshots helped to clear things out.
Hope it helps !!!
Source: http://weblogs.asp.net/dotnetstories/using-jquery-to-call-a-wcf-service-in-an-asp-net-application
I am writing a python script (v.3) to read in a VBO export and add in the AVI information to support video.
It works great on a PC, but the identical script on a Mac produces unicode errors in reading the VBO file.
Does anyone know the encoding of the .vbo export or otherwise is aware of potential issues in reading this file.
The trouble character appears to be 0xac. I have tried ASCII, UTF-8 encoding to no avail.
db
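The failure can be reproduced without a VBO file at all: 0xAC is a continuation byte in UTF-8 (it can never start a character), while Windows-1252 maps it to '¬'. A small sketch, with an invented byte string — only the 0xAC byte matters:

```python
# Sketch of the symptom: 0xAC fails as the start of a UTF-8 sequence,
# but Windows-1252 (cp1252) maps it to U+00AC NOT SIGN.
raw = b"[header]\xac\n"   # invented fragment containing the offending byte

def decodes_as(data, encoding):
    try:
        return data.decode(encoding)
    except UnicodeDecodeError:
        return None

utf8_result = decodes_as(raw, "utf-8")      # None: decoding fails
cp1252_result = decodes_as(raw, "cp1252")   # works, yields the '¬' character
```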
Encoding of VCO export files
A number of ways you can support LapTimer development
6 posts • Page 1 of 1
- Harry
- Site Admin
- Posts: 8592
- Joined: Sun Sep 12, 2010 10:32 am
- Location: Siegum, Germany
- Has thanked: 97 times
- Been thanked: 354 times
Re: Encoding of VCO export files
It is a Windows format, so most probably Windows-1252. I haven't checked it in the code but it is worth a try.
- Harry
Re: Encoding of VCO export files
Harry wrote: It is a Windows format, so most probably Windows-1252. I haven't checked it in the code but it is worth a try.
- Harry
That worked! Thanks, I now have a working script I can use to quickly add in the needful information to link a video to Circuit Tools using VBO data exported from HLT.
1)Enter pits, email or dropbox VBO file from session out of HLT - download onto laptop from email or dbox.
2)download video from GoPro onto laptop, rename to 8 char compatible with Circuit Tools. Concatenate GoPro chapters if needed. Trim a bit if needed.
3)Run python script against VBO file to add in video information;
4)Start up circuit tools and analyze data
I'm sure it won't be as easy in the field!
db
Re: Encoding of VCO export files
Please share your code here!
- Harry
Re: Encoding of VCO export files
Code: Select all
# This program will read a VBOX formatted file (.vbo)
# exported from Harry's Lap Timer (HLT) and will amend the
# file to include all that is necessary to link a video to it.
#
#
# Get the tkinter library functions
from tkinter import *
from tkinter import filedialog  # filedialog is a submodule; the * import above does not pull it in
root=Tk()
# 4 character video file extension for linked video - e.g., vid_0001.mp4
# video file characteristic
video_file_format = "MP4"
#Use Tkinter to get video_filename
root.filename = filedialog.askopenfilename(title="Input MP4 Video File", filetypes=(("MP4 files","*.mp4"),("all files","*.*")))
video_filename=root.filename
video_file_prefix=video_filename[-12:-8]
video_file_suffix=video_filename[-8:-4]
print("Video filename is:" + video_filename)
print("Video Prefix is:" + video_file_prefix)
print("Video Suffix is:" + video_file_suffix)
# Get input filename using a tkinter GUI
root.filename = filedialog.askopenfilename(title="Input VBO Data File", filetypes=(("VBO files","*.vbo"),("all files","*.*")))
input_filename=root.filename
print("input filename is:" + input_filename)
# Create output filename from input filename with _VID added
output_filename = (input_filename[:-len(".vbo")]+"_VID.vbo")  # slice off the suffix; rstrip(".vbo") would strip a character set
print("output_filename is:" + output_filename)
# Open the input and output files
input_file = open(input_filename, 'r', encoding='cp1252')
output_file= open(output_filename, 'w', encoding='cp1252', newline='\r\n')
# Parse to '[data]' field - parse the file reading each line and sending it unchanged
# to the output file.
#data_flag is a flag variable to indicate that the [data] header has been passed
# and we should start outputting time synch data
data_flag=0
millisecond_counter=int(input("Enter video offset in milliseconds: "))
for line in input_file:
if data_flag==0: # if not in data output line as read
print (line);
output_file.write(line)
if data_flag==1 and line != "\n": # once in data output ms and file
# print (line + video_file_extension + str(millisecond_counter));
millisecond_counter += 100
millisecond_string=str('{:09}'.format(millisecond_counter))
if millisecond_counter < 0:
millisecond_string='000000000'
output_file.write(line.rstrip() + " " + video_file_suffix + " " + millisecond_string +"\n")
if line == "[data]\n": #if [data] found set data flag
data_flag=1
print("Data flag set to 1")
if line == "EngineSpeed\n": # add avi fields to header
output_file.write("avifileindex\n")
output_file.write("avisynctime\n")
# then add AVI block to file after header block
output_file.write("\n")
output_file.write("[avi]\n")
output_file.write(video_file_prefix + "\n")
output_file.write(video_file_format + "\n")
if line == "[column names]\n": # add avi fields at end of column names
line = input_file.readline() # read the next line of column names
output_file.write(line.rstrip() + " avifileindex avisynctime\n") # output column names with AVI columns
# Close the files
input_file.close()
output_file.close()
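One pitfall worth noting when deriving output names like this: str.rstrip(".vbo") strips a trailing character *set* rather than the literal suffix, so some filenames get mangled. A hedged sketch with an invented filename:

```python
# Pitfall: rstrip(".vbo") removes trailing characters drawn from the set
# {'.', 'v', 'b', 'o'}, not the literal suffix ".vbo".
name = "turbo.vbo"   # invented example filename

stripped = name.rstrip(".vbo")             # eats 'o','b','v','.','o','b' -> "tur"
sliced = name[:-len(".vbo")] + "_VID.vbo"  # safe: "turbo_VID.vbo"
```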
Re: Encoding of VCO export files
HI dbbarron,
Thank you for providing the python script for us non-coders. Quick question regarding renaming the video file to be 8-char compatible with Circuit Tools. Can the string be anything as long as it's 8 characters long? Do you have an example string?
Source: http://forum.gps-laptimer.de/viewtopic.php?f=17&t=4008&sid=0e0e555dd8f4b3d795195bb7b51dc8b0
Hello,
I am working with Atmel Studio 7.0.2389 and am stuck on a really stupid problem. The project uses an Xmega32e5 and I am using inline assembler in some functions. For some strange reason I get the "undefined reference" error on the NVM defines. I have #include <avr/io.h> added. When I do a go-to-definition it finds the defines.
The following is just exemplary and also produces the error "undefined reference to 'NVM_CMD'":
#include <avr/io.h>
void write_flash(uint16_t adr, uint8_t *data)
{
_adr = adr;
_dadr = (uint16_t)&data;
asm volatile(
"sts NVM_CMD, r1 \n"
);
}
I could not figure out what the problem is.
Thanks
You have misunderstood how to use asm() - it's actually far more complex than you think. Remember how a C compiler actually works...
You write C, you may well litter it full of #include's and #define's which are pre-processor macros.
You "compile" this but what that really means is the foo.c file is first passed to the C pre-processor. It reads things like #include/#define/#if/etc and acts upon them. Some #defines may be multiply nested - it can cope with finding and expanding out all of them. It writes what it comes up with to a foo.i file (i = intermediate). A "symbol" such as "NVM_CMD" might turn from:

NVM_CMD

to:

(*(volatile uint8_t *)(0x01CA))

so "NVM_CMD" really means 0x1CA with some sugar on top.
The file now passes to the C compiler itself. That simply reads raw C sequences from the .i file and generates Asm sequences from them. It writes the Asm out to a .s file. In the special situation when it comes across asm("...code ...") within the C it pretty much just lifts all the text in the parenthesis and dumps it verbatim into the .s file. What is within "..." is not subject to C preprocessing.
Finally the .s file is passed to the Assembler which turns the directives and mnemonics it finds there into opcodes. If the assembler comes across text sequences it does not recognize like "NVM_CMD" then it will assume this is a reference to an external symbol and that somewhere else there's likely to be Asm source that assigns the label NVM_CMD to some .data or .bss (or whatever) location. So in the file it outputs foo.o (where o = "object") it will put a "fixup marker" that says "I came across something called NVM_CMD, as yet I don't know what that is so can you fill it in later ("fix it up") when you find the other file where it is defined.
This .o file and any others from other compilations are now all presented to the linker to join (link) them together. It takes in all the .o files, pools all the data and all the symbol information and then makes sure it has everything to write out definitions of .text, .data, .bss in the final foo.elf output file.
If during that linking process it finds one file (foo.o) makes a fix up reference to something called NVM_CMD and in all the other .o files presented at the same time it cannot find anything to say "NVM_CMD means this" then it outputs the message "undefined reference to NVM_CMD".
All this started to go wrong when you used NVM_CMD within the body of asm("...code.."). The correct way to do it, as shown in the asm cookbook, is some horrendously tortuous syntax like:

asm volatile("sts %1, %0" : : "r" (0), "n" (_SFR_MEM_ADDR(NVM_CMD)));

now, while STS in your code suggests "NVM_CMD" is _SFR_MEM_ADDR() not _SFR_IO_ADDR(), the fact is that you have to use something like the above to pass the NVM_CMD symbol name (really the numeric value 0x1CA after preprocessing) across the border from the C and into the Asm. Within the Asm the %1 (in this example) will then be replaced with the value (0x1CA) so all the assembler ever sees is "sts 0x1CA, r1"
The difference is that the bits outside of "" in asm() are processed by the C preprocessor when this code is built. So if you used something like:

"n" (_SFR_MEM_ADDR(NVM_CMD))

then this part of the asm() statement will be pre-processed and all that NVM_CMD stuff will just result in 0x1CA being passed to asm() to replace %1 within the ".. code .." string.
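The "inside quotes vs. outside quotes" rule can be demonstrated on a host compiler too. This is a hedged sketch — NVM_CMD_ADDR is a stand-in macro of our own, using the 0x1CA value quoted above — showing that string literals are never macro-expanded, while tokens outside the quotes are:

```c
#include <string.h>

/* Host-side sketch: stringification happens in the preprocessor,
 * i.e. outside the quotes. NVM_CMD_ADDR is an invented stand-in macro. */
#define NVM_CMD_ADDR 0x1CA

#define STR_(x) #x
#define STR(x)  STR_(x)

/* Macros are NOT expanded inside a string literal... */
static const char *verbatim = "sts NVM_CMD_ADDR, r1";

/* ...but outside the quotes they are expanded, then the literals concatenate. */
static const char *expanded = "sts " STR(NVM_CMD_ADDR) ", r1";

int macro_left_verbatim(void) { return strstr(verbatim, "NVM_CMD_ADDR") != NULL; }
int macro_expanded_ok(void)   { return strcmp(expanded, "sts 0x1CA, r1") == 0; }
```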
Personally I think this is detestable. It's like learning a whole new language! The very simplest solution would have been to forget asm() altogether, as it seems to me you are just trying to write:

NVM_CMD = 0;

so do just that and the C compiler (and preprocessor) will convert this to "STS 0x1CA, r1" anyway (r1 is avr-gcc's fixed zero register, so storing r1 writes zero)! If there is some special time-dependent sequence of opcodes that must be performed and it HAS to be in Asm then my next choice would be to put the whole implementation of write_flash() in a .S file as a callable Asm function.
Note that the avr-gcc compiler driver makes a distinction between foo.s and foo.S. If you put Asm in foo.s and ask to build it then it goes straight to the assembler so there's no chance of any pre-pro macros being expanded. If the file extension is upper-case S, not lower-case s, then it is treated differently. Such a file is passed first to the C preprocessor (so #include, #define and so on are processed), then the .i file that this creates is passed to the assembler.
So if you do use separate Asm source files then, unless you have a strong reason otherwise, always make them .S not .s files.
I never understand why people have this desire to sprinkle raw assembler in their 'C' source files?!
Inline assembler is always fraught with gotchas, caveats, and catches - in any toolchain.
Most uses of inline assembler seem misguided, at best - so I thoroughly agree with clawson:

The 'C' compiler can generally do it perfectly well from plain, standard 'C' source!
Otherwise, if there really is some truly compelling reason to have to use assembler: put it in a separate assembler source file.

Which leaves your 'C' nice & clean (and, potentially, portable), and keeps the "tricky" stuff where it belongs in a separate Assembler file.
Having it in a separate Assembler file highlights the fact that it is "special".
This applies generally - not just to AVR.
#InlineAssembler #InlineAssembly
#InlineAsmJustSayNo
Hi,
thanks for the replies.
@clawson: thanks a lot for the very detailed and informative post. It was very helpfull for any matter as also for my question.
I guess I will go for the .S solution. Trying to be lazy and "just quickly" insert some asm seems, like most "just quickly" solutions, not to work out.
but why not go for clawson's first suggestion - just do it in 'C' ?!
Hi,
not possible for timing and other reasons. I need direct access to some registers like the Z-pointer. As stated, the code example was just an example and not the actual code.
I have now put it all in a .S file with all the stuff necessary and #include <avr/io.h> at the top. But I still get the same "undefined reference" error for some of the symbols (like "CCP_SPM_gc"). Strangely, some of the references work. If I remove the #include, all of the symbols give the error.
I do not understand how that can be. Its all in the same file.
Regards
You can't use _gc's in Asm. They are done as C enums. The assembler does not speak C.
Notice in the ioXXXX.h how half the file is #if ASM ? Half of it (plain #define's and so on) is C+Asm and half of it is C/C++ only (structs, unions, enums, typedefs etc). It's also why you get a choice between stuff like PORTB.OUTSET and PORTB_OUTSET. The former is a C/C++ concept only, the latter (a plain #define) is everything.
Ok, thank you. Its working now.
Jolly good!
Now see Tip #5.
Source: https://www.avrfreaks.net/comment/2800801
Remote visualization of real-time sensor data.
Each sample is written to the serial port as a small binary 'packet', framed by two header bytes:
void loop()
{
  // Sample the accelerometer axes on ADC pins 0, 1 and 2 (10-bit values).
  unsigned int x = analogRead( 0 );
  unsigned int y = analogRead( 1 );
  unsigned int z = analogRead( 2 );

  // Write the two-byte packet header.
  Serial.write( 0xA5 );
  Serial.write( 0x5A );

  // Write x-axis accelerometer to serial port as 16-bit unsigned integer in big-endian format.
  Serial.write( highByte( x ) ); // Most significant byte (MSB).
  Serial.write( lowByte( x ) );  // Least significant byte (LSB).

  // Write y-axis accelerometer to serial port as 16-bit unsigned integer in big-endian format.
  Serial.write( highByte( y ) );
  Serial.write( lowByte( y ) );

  // Write z-axis accelerometer to serial port as 16-bit unsigned integer in big-endian format.
  Serial.write( highByte( z ) );
  Serial.write( lowByte( z ) );

  delay( 20 ); // Add a delay of 20ms to give a sampling rate of approximately 50Hz.
}
The ADC pins have a 10-bit resolution (0 to 1023 inclusive) so I encode them as 16-bit unsigned integers in big-endian format before sending them over the serial port. Depending on the sensor(s) you are using, you can choose to sample more or less of the ADC pins. In my case, the ADXL335 accelerometer measures acceleration along three orthogonal axes: x, y and z. Hence, I sample the three corresponding ADC pins: 0, 1 and 2 respectively.
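On the receiving side, a packet in this format can be decoded with a few lines of host code. A hedged sketch — parse_packet is an invented name; the format itself (two sync bytes 0xA5 0x5A followed by three big-endian 16-bit unsigned integers) is as described above:

```python
import struct

# Two sync bytes, then x, y, z as big-endian unsigned 16-bit integers.
HEADER = b"\xa5\x5a"

def parse_packet(packet):
    """Decode one 8-byte packet into its (x, y, z) ADC readings."""
    if packet[:2] != HEADER:
        raise ValueError("missing 0xA5 0x5A header")
    # ">HHH" = big-endian, three unsigned shorts.
    return struct.unpack(">HHH", packet[2:8])
```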
Finally, you can alter the sampling rate of the sketch by increasing or decreasing the delay as required. For sensors that do not change very often (e.g. a temperature sensor) you will likely want to increase the delay to sample at a slower rate. Setting it to 100 would sample 10 times per second (or 10Hz) for example.
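The delay-to-rate arithmetic in that paragraph can be written down directly (approximate, since it ignores the time spent in analogRead() and Serial.write(), as the text's "approximately" acknowledges):

```python
def sampling_rate_hz(delay_ms):
    # Approximate rate implied by the loop delay alone.
    return 1000.0 / delay_ms
```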
Step 3: Download and Install Bloom
Before I can connect the Arduino to SensorMonkey, I need to map the serial port assigned to the device to a TCP/IP socket. To do this, I download and install Bloom from the SensorMonkey support page.
Bloom is a serial port to TCP/IP socket redirector for Microsoft Windows. It comes with a fairly comprehensive help manual (which I would encourage you to read), but the basic operation is very simple:
- Run Bloom from the Windows Start menu
- Configure serial port settings for the Arduino and choose a TCP/IP port for Bloom to listen on
- Set a polling frequency to (approximately) match the sampling rate of the Arduino's sketch
- Press 'Start'
Bloom will listen for incoming connections on the chosen TCP/IP port, redirecting data between connected clients and the serial port.
The choice of TCP/IP port is arbitrary (you can choose whatever you like, as long as it's in the range 1024 to 49151, inclusive, and not already in use). Also, please bear in mind that your serial port will be different depending on what your Arduino was assigned.
For operating systems other than Windows, you can download an alternative to Bloom (typically referred to as a serial-to-network proxy) from our GitHub account. The sketch, named SensorMonkeySerialNet, runs in Processing on Mac OS and Linux. Please follow the instructions in the project's README file.
Step 4: Login to SensorMonkey
Logging into SensorMonkey allows me to publish public sensor data live over the Internet.
Step 5: Publish Sensor Data
After logging into SensorMonkey and opening my control panel, I'm going to add an entry for the Arduino named "My Arduino". By clicking on the newly added entry, I can configure the connection parameters; namely, the IP address and port number where the device can be found.
Recall from Step 3 that I am using Bloom to map the Arduino's serial port to TCP/IP port 20000 on my local machine. So, I enter a port number of 20000 and an IP address of 127.0.0.1 (the local loopback address).
I also need to specify a format description file that tells SensorMonkey how to parse and interpret the data being sent by the Arduino. In Step 2, I presented the sketch used to sample the accelerometer that was compiled and uploaded to the Arduino's microcontroller using the development environment. To match the data sent by the sketch, I use the following format description file:
<bytestream>
<format endian="big">
<constant>A5</constant>
<constant>5A</constant>
<variable type="u16">Accelerometer X</variable>
<variable type="u16">Accelerometer Y</variable>
<variable type="u16">Accelerometer Z</variable>
</format>
</bytestream>
Note that I have specified big-endian format (<format endian="big">) and have added variables representing the three axes sampled by the accelerometer: x, y and z. The type of these variables is "u16", which is short-hand for 'Unsigned 16-bit Integer'. Many different types of variables are supported; you can find more information on the SensorMonkey support page.
The main point to realize here is that you just need to specify a format description file that matches the data being sent by your Arduino over the serial port. Depending on the sensor(s) that you are using, you may need to add more or less variables to your format description file. Make sure to give them descriptive names so you know what each variable is measuring.
After clicking 'Connect', I navigate to the 'Stream' tab, select the three accelerometer variables, choose a stream type of 'Public', and click 'Publish'. The sensor data is now being streamed live over the Internet as a public stream in my personal namespace.
In the next step, I will write a simple HTML webpage to connect to my namespace, subscribe to my stream, and visualize the data in real-time using Processing.js.
Step 6: Graph Data Using Processing.js
In the final (and best!) part of this tutorial, I'm going to create a simple webpage to view the output from my Arduino, which is now being streamed live over the Internet using SensorMonkey. (I have downloaded the latest Processing.js library - 1.3.6 at the time of writing - and placed it in the same directory as the webpage.) You'll need to edit the code below to match the variables being streamed by your Arduino (unless you have copied my accelerometer setup exactly):
(Important! You must replace YOUR_NAMESPACE and YOUR_PUBLIC_KEY in the code below with those assigned to you when you login to SensorMonkey)
--------------------------------------------------------------------------------
<!DOCTYPE html>
<html>
<head>
<title>Drive a webpage in real-time using Arduino, SensorMonkey and Processing.js</title>
<script type="text/javascript" src=""></script>
<script type="text/javascript" src=""></script>
<script type="text/javascript" src="processing-1.3.6.js"></script>
<style type="text/css">
.sensor-name {
text-align: center;
width: 300px;
}
canvas {
border: 1px solid grey;
}
</style>
</head>
<body onload="setTimeout( run, 100 );">
<div class="sensor-name">Accelerometer X</div>
<canvas id="AccelX" data-processing-sources="Graph.pde"></canvas>
<div class="sensor-name">Accelerometer Y</div>
<canvas id="AccelY" data-processing-sources="Graph.pde"></canvas>
<div class="sensor-name">Accelerometer Z</div>
<canvas id="AccelZ" data-processing-sources="Graph.pde"></canvas>
<script type="text/javascript">
function run() {
var accelXGraph = Processing.getInstanceById( "AccelX" );
var accelYGraph = Processing.getInstanceById( "AccelY" );
var accelZGraph = Processing.getInstanceById( "AccelZ" );
accelXGraph.setColor( 255, 0, 0, 100 ); // Red.
accelYGraph.setColor( 0, 128, 0, 100 ); // Green.
accelZGraph.setColor( 0, 0, 255, 100 ); // Blue.
// 1. Connect to SensorMonkey
// 2. Join namespace
// 3. Subscribe to stream
// 4. Listen for 'publish' and 'bulkPublish' events
var client = new SensorMonkey.Client( "" );
client.on( "connect", function() {
client.joinNamespace( "YOUR_NAMESPACE", "YOUR_PUBLIC_KEY", function( e ) {
if( e ) {
alert( "Failed to join namespace: " + e );
return;
}
client.subscribeToStream( "/public/My Arduino", function( e ) {
if( e ) {
alert( "Failed to subscribe to stream: " + e );
return;
}
client.on( "publish", function( name, fields ) {
if( name === "/public/My Arduino" ) {
accelXGraph.update( fields[ "Accelerometer X" ] );
accelYGraph.update( fields[ "Accelerometer Y" ] );
accelZGraph.update( fields[ "Accelerometer Z" ] );
}
} );
client.on( "bulkPublish", function( name, fields ) {
if( name === "/public/My Arduino" ) {
for( var i = 0, len = fields[ "Accelerometer X" ].length; i < len; i++ ) {
accelXGraph.update( fields[ "Accelerometer X" ][ i ] );
accelYGraph.update( fields[ "Accelerometer Y" ][ i ] );
accelZGraph.update( fields[ "Accelerometer Z" ][ i ] );
}
}
} );
} );
} );
client.on( "disconnect", function() {
alert( "Client has been disconnected!" );
} );
} );
}
</script>
</body>
</html>
--------------------------------------------------------------------------------
Without going into too much detail (you can find more information about the JavaScript client API here) the basic workflow is as follows:
- Import client
- Connect to SensorMonkey
- Join namespace
- Subscribe to stream
- Listen for 'publish' and 'bulkPublish' events
To graph the data, I'm using the following Processing.js sketch (save this to a file called Graph.pde and place it in the same directory as the webpage above):
--------------------------------------------------------------------------------
int xPos = 0; // Horizontal coordinate used to draw the next data point.
int yMin = 0; // Minimum expected data value.
int yMax = 1023; // Maximum expected data value.
color c; // Stroke color used to draw the graph.
// Sets the stroke color used to draw the graph.
void setColor( int r, int g, int b, int a ) {
c = color( r, g, b, a );
}
void setup() {
size( 300, 200 );
frameRate( 50 );
setColor( 255, 0, 0, 100 );
drawGrid();
}
void draw() {} // Empty draw() function.
void drawGrid() {
int h = height;
int w = width;
background( 255 );
stroke( 127, 127, 127, 127 );
// Draw horizontal lines.
line( 0, h / 4, w, h / 4 );
line( 0, h / 2, w, h / 2 );
line( 0, h * 3 / 4, w, h * 3 / 4 );
// Draw vertical lines.
line( w / 4, 0, w / 4, h );
line( w / 2, 0, w / 2, h );
line( w * 3 / 4, 0, w * 3 / 4, h );
// Draw labels.
fill( 0 );
text( str( yMin ), 5, h - 5 );
text( str( yMax ), 5, 12 );
}
void update( float data ) {
// When we reach the edge of the screen, wrap around to the beginning.
if( xPos >= width ) {
xPos = 0;
drawGrid();
}
// Graph the data point and increment the horizontal coordinate.
data = map( data, yMin, yMax, 0, height );
stroke( c );
line( xPos, height, xPos, height - data );
xPos++;
}
--------------------------------------------------------------------------------
In your case, depending on the sensor(s) that you are streaming, you may need more or fewer graphs in your webpage. You can edit the Graph.pde file if you need to increase/decrease the size of the graphs, the range of data values that can be plotted, the frame rate etc. Just remember to include the Graph.pde file once for every variable that you want to plot (inside a <canvas> element) and name them accordingly (e.g. <canvas id="AccelX" data-processing-sources="Graph.pde"></canvas>). Then, you just need to get a reference to the graph (obtained by calling the Processing.getInstanceById() method) and use the update() function to plot new data points received in the "publish" and "bulkPublish" event handlers.
That's it! I now have an accelerometer driving a webpage in real-time using Arduino, SensorMonkey and Processing.js. I can host the webpage on a public webserver and direct people to view the link on any device with an HTML5-compatible web browser. Thanks for reading and look out for further instructables showing more advanced use cases and projects in the near future.
16 Discussions
4 years ago on Introduction
I know it is a simple circuit, but I have just got to say: That's a great photo of a nice, clean breadboard and workspace. Well done.
4 years ago on Step 3
I'm doing a project similar to your description, but I'm using an Android phone camera as the webcam (via the DroidCam app), and I'm not using pan and tilt; it is just stationary.
Now in my project I have to turn on the webcam depending on the output of a PIR sensor, which is interfaced with an Arduino Uno. I'm also using SensorMonkey, Bloom and Justin.tv. How can I achieve this? Please do help me.
prajjujan@gmail.com
5 years ago on Introduction
Hi I'm a total beginner and I'm trying to follow along with the tutorial, I just have one question, how would the tutorial differ if I'm using an arduino YUN, what steps would I leave out?
5 years ago on Introduction
You were right, it is a Frontpage problem. I had a problem with where the processing.js file was located; apparently I needed to have the whole URL in there. Once I did that the connection error went away.
Reply 5 years ago on Introduction
Hi,.
5 years ago on Introduction
I have the same problem. I am using Microsoft Frontpage and when doing the preview mode, it continually gives me the error "'Processing' is undefined." I have downloaded the processing-1.4.1.js file and have it referenced in my program the same way as demonstrated above. Any idea why I keep getting this same error?
Reply 5 years ago on Introduction
I'm not familiar with Microsoft Frontpage so it may be an issue to do with its 'preview' mode.
I would suggest uploading your files to a webserver (one running on your local machine will do) and testing it out to see if the error message persists.
5 years ago on Step 6
I only get the outline of the graphs, not the data, on the web page. Any ideas?
Reply 5 years ago on Introduction
Hi,.
6 years ago on Introduction
I signed up for an account with SensorMonkey. I'm having no success in graphing data from a PICAXE sending decimal numbers over the serial port, such as "72" for 72 degrees. I got Bloom to connect and the baud rate is correct; it's just that the graph is all messed up. How do I set up the format description file to receive decimal numbers from the PICAXE?
Reply 6 years ago on Introduction
Hi,.
Reply 6 years ago on Introduction
Yes, that does work. Thank you.
6 years ago on Introduction
This is an intriguing idea. I have a couple of suggestions to
make this more accessible.
1) It should be possible to create an account on SensorMonkey.com
without Facebook. Facebook-centric is a definite disadvantage.
2) Linux support would be extremely helpful. Bloom or equivalent
is probably not necessary on a Linux machine.
Reply 6 years ago on Introduction
Hi Grendel,.
6 years ago on Introduction
Yeah NO facebook
And of course anything Linux is good, maybe it'll work with Wine?
HANKIENSTIEN
rduino.com
6 years ago on Introduction
this would work pretty well with the twittering toilet idea... ;)
https://www.instructables.com/id/Drive-a-webpage-in-real-time-using-Arduino-Sensor/
from qiskit import QuantumCircuit
from qiskit.circuit import Gate
from math import pi

qc = QuantumCircuit(2)
c = 0
t = 1
When we program quantum computers, our aim is always to build useful quantum circuits from the basic building blocks. But sometimes, we might not have all the basic building blocks we want. In this section, we'll look at how we can transform basic gates into each other, and how to use them to build some gates that are slightly more complex (but still pretty basic).
Many of the techniques discussed in this chapter were first proposed in a paper by Barenco and coauthors in 1995 [1].
1. Making a controlled-Z from a CNOT
# a controlled-Z
qc.cz(c,t)
qc.draw()
where c and t are the control and target qubits. In IBM Q devices, however, the only kind of two-qubit gate that can be directly applied is the CNOT. We therefore need a way to transform one to the other.
The process for this is quite simple. We know that the Hadamard transforms the states $|0\rangle$ and $|1\rangle$ to the states $|+\rangle$ and $|-\rangle$ respectively. We also know that the effect of the $Z$ gate on the states $|+\rangle$ and $|-\rangle$ is the same as that of $X$ on the states $|0\rangle$ and $|1\rangle$. From this reasoning, or from simply multiplying matrices, we find that

$$ HXH = Z, \qquad HZH = X. $$
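We can check these identities numerically by multiplying the matrices directly. This quick sketch (my own check, using NumPy rather than Qiskit) is not part of the original text:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# Conjugating X by H gives Z, and vice versa
assert np.allclose(H @ X @ H, Z)
assert np.allclose(H @ Z @ H, X)
print("HXH = Z and HZH = X confirmed")
```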
The same trick can be used to transform a CNOT into a controlled-$Z$. All we need to do is precede and follow the CNOT with a Hadamard on the target qubit. This will transform any $X$ applied to that qubit into a $Z$.
qc = QuantumCircuit(2)
# also a controlled-Z
qc.h(t)
qc.cx(c,t)
qc.h(t)
qc.draw()
More generally, we can transform a single CNOT into a controlled version of any rotation around the Bloch sphere by an angle $\pi$, by simply preceding and following it with the correct rotations. For example, a controlled-$Y$:
qc = QuantumCircuit(2)
# a controlled-Y
qc.sdg(t)
qc.cx(c,t)
qc.s(t)
qc.draw()
and a controlled-$H$:
qc = QuantumCircuit(2)
# a controlled-H
qc.ry(pi/4,t)
qc.cx(c,t)
qc.ry(-pi/4,t)
qc.draw()
2. Swapping qubits

a = 0
b = 1
Sometimes we need to move information around in a quantum computer. For some qubit implementations, this could be done by physically moving them. Another option is simply to move the state between two qubits. This is done by the SWAP gate.
qc = QuantumCircuit(2)
# swaps states of qubits a and b
qc.swap(a,b)
qc.draw()
The command above directly invokes this gate, but let's see how we might make it using our standard gate set. For this, we'll need to consider a few examples.
First, we'll look at the case that qubit a is in state $|1\rangle$ and qubit b is in state $|0\rangle$. For this we'll apply the following gates:
qc = QuantumCircuit(2)
# swap a 1 from a to b
qc.cx(a,b) # copies 1 from a to b
qc.cx(b,a) # uses the 1 on b to rotate the state of a to 0
qc.draw()
This has the effect of putting qubit b in state $|1\rangle$ and qubit a in state $|0\rangle$. In this case at least, we have done a SWAP.
Now let's take this state and SWAP back to the original one. As you may have guessed, we can do this with the reverse of the above process:
# swap a 1 from b to a
qc.cx(b,a) # copies 1 from b to a
qc.cx(a,b) # uses the 1 on a to rotate the state of b to 0
qc.draw()
Note that in these two processes, the first gate of one would have no effect on the initial state of the other. For example, when we swap the $|1\rangle$ from b to a, the first gate is cx(b,a). If this were instead applied to a state where no $|1\rangle$ was initially on b, it would have no effect.

Note also that for these two processes, the final gate of one would have no effect on the final state of the other. For example, the final cx(b,a) that is required when we swap the $|1\rangle$ from a to b has no effect on the state where the $|1\rangle$ is not on b.
With these observations, we can combine the two processes by adding an ineffective gate from one onto the other. For example,
qc = QuantumCircuit(2)
qc.cx(b,a)
qc.cx(a,b)
qc.cx(b,a)
qc.draw()
We can think of this as a process that swaps a $|1\rangle$ from a to b, but with a useless qc.cx(b,a) at the beginning. We can also think of it as a process that swaps a $|1\rangle$ from b to a, but with a useless qc.cx(b,a) at the end. Either way, the result is a process that can do the swap both ways around.
It also has the correct effect on the $|00\rangle$ state. This is symmetric, and so swapping the states should have no effect. Since the CNOT gates have no effect when their control qubits are $|0\rangle$, the process correctly does nothing.
The $|11\rangle$ state is also symmetric, and so needs a trivial effect from the swap. In this case, the first CNOT gate in the process above will cause the second to have no effect, and the third undoes the first. Therefore, the whole effect is indeed trivial.
We have thus found a way to decompose SWAP gates into our standard gate set of single-qubit rotations and CNOT gates.
qc = QuantumCircuit(2)
# swaps states of qubits a and b
qc.cx(b,a)
qc.cx(a,b)
qc.cx(b,a)
qc.draw()
It works for the states $|00\rangle$, $|01\rangle$, $|10\rangle$ and $|11\rangle$, and if it works for all the states in the computational basis, it must work for all states generally. This circuit therefore swaps all possible two-qubit states.
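We can also verify this decomposition by direct matrix multiplication. In the sketch below (my own NumPy check, not from the original text), basis states are ordered $|00\rangle, |01\rangle, |10\rangle, |11\rangle$ with the first bit belonging to qubit a:

```python
import numpy as np

# CNOT controlled on a (first bit), targeting b (second bit)
cx_ab = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 1, 0]])
# CNOT controlled on b, targeting a
cx_ba = np.array([[1, 0, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 1, 0],
                  [0, 1, 0, 0]])
swap = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

# Three alternating CNOTs give a SWAP, in either order
assert np.array_equal(cx_ab @ cx_ba @ cx_ab, swap)
assert np.array_equal(cx_ba @ cx_ab @ cx_ba, swap)
```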
The same effect would also result if we changed the order of the CNOT gates:
qc = QuantumCircuit(2)
# swaps states of qubits a and b
qc.cx(a,b)
qc.cx(b,a)
qc.cx(a,b)
qc.draw()
This is an equally valid way to get the SWAP gate.
The derivation used here was very much based on the z basis states, but it could also be done by thinking about what is required to swap qubits in states $|+\rangle$ and $|-\rangle$. The resulting ways of implementing the SWAP gate will be completely equivalent to the ones here.
Quick Exercise:
- Find a different circuit that swaps qubits in the states $|+\rangle$ and $|-\rangle$, and show that it is equivalent to the circuit shown above.
3. Controlled rotations

qc = QuantumCircuit(2)
theta = pi # theta can be anything (pi chosen arbitrarily)
qc.ry(theta/2,t)
qc.cx(c,t)
qc.ry(-theta/2,t)
qc.cx(c,t)
qc.draw()
If the control qubit is in state $|0\rangle$, all we have here is a $R_y(\theta/2)$ immediately followed by its inverse, $R_y(-\theta/2)$. The end effect is trivial. If the control qubit is in state $|1\rangle$, however, the ry(-theta/2) is effectively preceded and followed by an X gate. This has the effect of flipping the direction of the y rotation and making a second $R_y(\theta/2)$. The net effect in this case is therefore to make a controlled version of the rotation $R_y(\theta)$.
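This argument can be checked numerically. When the control is $|1\rangle$, the target sees the gates $R_y(\theta/2)$, $X$, $R_y(-\theta/2)$, $X$ in circuit order (so the matrix product is written right to left). A quick NumPy sketch of my own:

```python
import numpy as np

def ry(t):
    """Single-qubit y-rotation R_y(t)."""
    return np.array([[np.cos(t/2), -np.sin(t/2)],
                     [np.sin(t/2),  np.cos(t/2)]])

X = np.array([[0, 1], [1, 0]])
theta = 1.234  # arbitrary test angle

# Control in |1>: target sees Ry(theta/2), X, Ry(-theta/2), X in circuit order
controlled_on = X @ ry(-theta/2) @ X @ ry(theta/2)
assert np.allclose(controlled_on, ry(theta))

# Control in |0>: the CNOTs do nothing, and the two rotations cancel
controlled_off = ry(-theta/2) @ ry(theta/2)
assert np.allclose(controlled_off, np.eye(2))
```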
This method works because the x and y axes are orthogonal, which causes the X gates to flip the direction of the rotation. It therefore similarly works to make a controlled $R_z(\theta)$. A controlled $R_x(\theta)$ could similarly be made using CNOT gates.
We can also make a controlled version of any single-qubit rotation, $V$. For this we simply need to find three rotations $A$, $B$ and $C$, and a phase $\alpha$, such that

$$ ABC = I, \qquad e^{i\alpha}AZBZC = V. $$
We then use controlled-Z gates to cause the first of these relations to happen whenever the control is in state $|0\rangle$, and the second to happen when the control is state $|1\rangle$. An $R_z(2\alpha)$ rotation is also used on the control to get the right phase, which will be important whenever there are superposition states.
A = Gate('A', 1, [])
B = Gate('B', 1, [])
C = Gate('C', 1, [])
alpha = 1 # arbitrarily define alpha to allow drawing of circuit
qc = QuantumCircuit(2)
qc.append(C, [t])
qc.cz(c,t)
qc.append(B, [t])
qc.cz(c,t)
qc.append(A, [t])
qc.u1(alpha,c)
qc.draw()
Here A, B and C are gates that implement $A$, $B$ and $C$, respectively.
4. The Toffoli
The Toffoli gate is a three-qubit gate with two controls and one target. It performs an X on the target only if both controls are in the state $|1\rangle$. The final state of the target is then equal to either the AND or the NAND of the two controls, depending on whether the initial state of the target was $|0\rangle$ or $|1\rangle$. A Toffoli can also be thought of as a controlled-controlled-NOT, and is also called the CCX gate.
qc = QuantumCircuit(3)
a = 0
b = 1
t = 2
# Toffoli with control qubits a and b and target t
qc.ccx(a,b,t)
qc.draw()
To see how to build it from single- and two-qubit gates, it is helpful to first show how to build something even more general: an arbitrary controlled-controlled-$U$ for any single-qubit rotation $U$. For this we need to define controlled versions of $V = \sqrt{U}$ and $V^\dagger$. In the code below, we use cu1(theta,c,t) and cu1(-theta,c,t) in place of the undefined subroutines cv and cvdg, respectively. The controls are qubits $a$ and $b$, and the target is qubit $t$.
qc = QuantumCircuit(3)
qc.cu1(theta,b,t)
qc.cx(a,b)
qc.cu1(-theta,b,t)
qc.cx(a,b)
qc.cu1(theta,a,t)
qc.draw()
By tracing through each value of the two control qubits, you can convince yourself that a U gate is applied to the target qubit if and only if both controls are 1. Using ideas we have already described, you could now implement each controlled-V gate to arrive at some circuit for the doubly-controlled-U gate. It turns out that the minimum number of CNOT gates required to implement the Toffoli gate is six [2].
This is a Toffoli acting on three qubits (q0, q1 and q2). In this example circuit, q0 is connected to q2, but q0 is not connected to q1.
The Toffoli is not the unique way to implement an AND gate in quantum computing. We could also define other gates that have the same effect, but which also introduce relative phases. In these cases, we can implement the gate with fewer CNOTs.
For example, suppose we use both the controlled-Hadamard and controlled-$Z$ gates, which can both be implemented with a single CNOT. With these we can make the following circuit:
qc = QuantumCircuit(3)
qc.ch(a,t)
qc.cz(b,t)
qc.ch(a,t)
qc.draw()
For the state $|00\rangle$ on the two controls, this does nothing to the target. For $|11\rangle$, the target experiences a $Z$ gate that is both preceded and followed by an H. The net effect is an $X$ on the target. For the states $|01\rangle$ and $|10\rangle$, the target experiences either just the two Hadamards (which cancel each other out) or just the $Z$ (which only induces a relative phase). This therefore also reproduces the effect of an AND, because the value of the target is only changed for the $|11\rangle$ state on the controls -- but it does it with the equivalent of just three CNOT gates.
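Each case in this argument reduces to a single-qubit identity, which we can confirm with a small NumPy check (my own verification, not from the original text):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])

# Only the two controlled-Hadamards fire: they cancel to the identity
assert np.allclose(H @ H, np.eye(2))
# All three gates fire: H Z H is X, so the AND flips the target
assert np.allclose(H @ Z @ H, X)
```

The remaining case, where only the controlled-$Z$ fires, leaves just a relative phase on the target, as the text notes.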
5. Arbitrary rotations from H and T
The qubits in current devices are subject to noise, which basically consists of gates that are done by mistake. Simple things like temperature, stray magnetic fields or activity on neighboring qubits can make things happen that we didn't intend.
For large applications of quantum computers, it will be necessary to encode our qubits in a way that protects them from this noise. This is done by making gates much harder to do by mistake, or to implement in a manner that is slightly wrong.
This is unfortunate for the single-qubit rotations $R_x(\theta)$, $R_y(\theta)$ and $R_z(\theta)$. It is impossible to implement an angle $\theta$ with perfect accuracy, such that you are sure that you are not accidentally implementing something like $\theta + 0.0000001$. There will always be a limit to the accuracy we can achieve, and it will always be larger than is tolerable when we account for the build-up of imperfections over large circuits. We will therefore not be able to implement these rotations directly in fault-tolerant quantum computers, but will instead need to build them in a much more deliberate manner.
Fault-tolerant schemes typically perform these rotations using multiple applications of just two gates: $H$ and $T$.
The T gate is expressed in Qiskit as .t():
qc = QuantumCircuit(1)
qc.t(0) # T gate on qubit 0
qc.draw()
It is a rotation around the z axis by $\theta = \pi/4$, and so is expressed mathematically as $R_z(\pi/4) = e^{i\pi/8~Z}$.
In the following we assume that the $H$ and $T$ gates are effectively perfect. This can be engineered by suitable methods for error correction and fault-tolerance.
Using the Hadamard and the methods discussed in the last chapter, we can use the T gate to create a similar rotation around the x axis.
qc = QuantumCircuit(1)
qc.h(0)
qc.t(0)
qc.h(0)
qc.draw()
Now let's put the two together. Let's make the gate $R_z(\pi/4)~R_x(\pi/4)$.
qc = QuantumCircuit(1)
qc.h(0)
qc.t(0)
qc.h(0)
qc.t(0)
qc.draw()
Since this is a single-qubit gate, we can think of it as a rotation around the Bloch sphere. That means that it is a rotation around some axis by some angle. We don't need to think about the axis too much here, but it clearly won't be simply x, y or z. More important is the angle.
The crucial property of the angle for this rotation is that it is an irrational multiple of $\pi$. You can prove this yourself with a bunch of math, but you can also see the irrationality in action by applying the gate. Keep in mind that every time we apply a rotation larger than $2\pi$, the rotation angle is implicitly reduced modulo $2\pi$. Thus, repeating the combined rotation above $n$ times results in a rotation around the same axis by a different angle each time. As a hint towards a rigorous proof, recall that an irrational number cannot be written as what?
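As a numerical illustration (my own NumPy sketch), we can extract the rotation angle of $R_z(\pi/4)R_x(\pi/4)$ from its trace, using the fact that an SU(2) rotation by angle $\theta$ has trace $2\cos(\theta/2)$:

```python
import numpy as np

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def rx(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

U = rz(np.pi / 4) @ rx(np.pi / 4)

# For an SU(2) rotation by angle theta, Tr(U) = 2*cos(theta/2)
theta = 2 * np.arccos(np.real(np.trace(U)) / 2)
print(theta / np.pi)  # roughly 0.349 -- not any obvious rational multiple of pi

# Consistency check: for this gate, cos(theta/2) equals cos^2(pi/8)
assert np.isclose(np.cos(theta / 2), np.cos(np.pi / 8) ** 2)
```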
We can use this to our advantage. Each angle will be somewhere between $0$ and $2\pi$. Let's split this interval up into $n$ slices of width $2\pi/n$. For each repetition, the resulting angle will fall in one of these slices. If we look at the angles for the first $n+1$ repetitions, it must be true that at least one slice contains two of these angles due to the pigeonhole principle. Let's use $n_1$ to denote the number of repetitions required for the first, and $n_2$ for the second.
With this, we can prove something about the angle for $n_2-n_1$ repetitions. This is effectively the same as doing $n_2$ repetitions, followed by the inverse of $n_1$ repetitions. Since the angles for these are not equal (because of the irrationality) but also differ by no more than $2\pi/n$ (because they correspond to the same slice), the angle for $n_2-n_1$ repetitions satisfies

$$ \theta_{n_2-n_1} \neq 0, \qquad -\frac{2\pi}{n} \leq \theta_{n_2-n_1} \leq \frac{2\pi}{n}. $$
We therefore have the ability to do rotations around small angles. We can use this to rotate around angles that are as small as we like, just by increasing the number of times we repeat this gate.
By using many small-angle rotations, we can also rotate by any angle we like. This won't always be exact, but it is guaranteed to be accurate up to $2\pi/n$, which can be made as small as we like. We now have power over the inaccuracies in our rotations.
So far, we only have the power to do these arbitrary rotations around one axis. For a second axis, we simply do the $R_z(\pi/4)$ and $R_x(\pi/4)$ rotations in the opposite order.
qc = QuantumCircuit(1)
qc.t(0)
qc.h(0)
qc.t(0)
qc.h(0)
qc.draw()
The axis that corresponds to this rotation is not the same as that for the gate considered previously. We therefore now have arbitrary rotation around two axes, which can be used to generate any arbitrary rotation around the Bloch sphere. We are back to being able to do everything, though it costs quite a lot of $T$ gates.
It is because of this kind of application that $T$ gates are so prominent in quantum computation. In fact, the complexity of algorithms for fault-tolerant quantum computers is often quoted in terms of how many $T$ gates they'll need. This motivates the quest to achieve things with as few $T$ gates as possible. Note that the discussion above was simply intended to prove that $T$ gates can be used in this way; it does not represent the most efficient method known.
https://qiskit.org/textbook/ch-gates/more-circuit-identities.html
- Did you assume aural style sheets were little more than a pleasant fiction? It turns out that several client programs do support them (Emacspeak, Fonix SpeakThis, and Opera among them). This article explains the basics.
- A nice DHTML timeline control, with a few caveats. I found it a bit challenging to use without detailed instructions. Current implementation requires timeline data come from an XML source, but the API would allow for more accessible options.
- Jason Striegel comes up with a very convincing explanation for the huge spike in the ratings of technology sites on Alexa in April 2006. It turns out the catalyst was Digg overtaking Slashdot’s Alexa ranking.
- Blake Ross, the co-creator of Firefox, talks about where Firefox is at as it approaches 2.0, what effect IE7 will have on the browser space, and a secret start-up he is working on with Joe Hewitt (of recent FireBug fame).
- Emil Stenström points out some of the downsides of microformats (e.g. they don’t use namespaces), and suggests how they might be addressed (e.g. meta tags to declare microformat versions in use).
- Callisto, the Eclipse simultaneous release of Eclipse 3.2 final as well as 10 sub projects is now available for download.
- A quick history of Eclipse’s Java Development Tools (JDT), followed by a full run-down on what’s new in the Eclipse 3.2 version of JDT.
- This latest preview of the “official” framework for PHP web applications (which is actually quite usable if you only use the working bits) includes a number of new components, as well as a manual translated into 10 languages.
- Fusebox 5 was publicly released at CFUNITED. This page describes the changes. Although I’m always wary of framework releases with the words “ground-up rewrite” in them, this actually looks rather nice.
- Rails 1.1.3 fixed the security bug. Rails 1.1.4 fixes the problem with 1.1.3.
- As with any complex platform, .NET comes with a whole stack of “dos” and “don’ts”. This free tool from Microsoft lets you browse the official library of recommended practices and design patterns for ASP.NET and the .NET platform.
- The newest upcoming feature in Rails is ActiveResource, which promises to embrace the RESTful model of application development by making it easy to build full web applications around the REST operations.
- This is news to me. Windows has command-line task list and kill task tools. This post shows how they can be used to terminate hung ASP.NET processes.
- The June pre-release of Microsoft’s Atlas toolkit for AJAX applications is out, with the ability to embed AJAX-updateable page regions within dynamically generated ASP.NET controls.
Jul 3, 2006 News Wire
This post has 2 responses so far
July 4th, 2006 at 9:53 am
Has anyone ever used or tried Fusebox for PHP? I’ve just been turned off by the example code of a simple Hello World. I fear it’s been abstracted a bit too far.
July 4th, 2006 at 8:22 pm
What a great disclaimer :D
http://www.sitepoint.com/blogs/2006/07/04/jul-3-2006-news-wire/
Having recently acquired a 3D printer, I've been diving into 3D modeling. While Fusion 360 seems to be the leader in that area generally, I've fallen for OpenSCAD. I appreciate its use of code as the primary (read "only") design environment because it mirrors the way I tend to think about problems. This has the added benefits of making parametrized models a core feature instead of an afterthought, and of producing readable diffs for files under version control with git.
Over the past few days I've been working hard at reaching competency with OpenSCAD and have come to the point where I feel like I'm able to offer an informed perspective on some of its strengths and limitations. The biggest "rough edge" I've run into thus far is dealing with imports.
Note that this is very much a first draft, and at this point it is really more of a "suggestion" than a "proposal". I'm mostly interested in getting the idea down in writing so it can be discussed if anyone else is interested.
Current State & Limitations
OpenSCAD has two provisions for dependency management: include and use. include includes the entirety of the referenced file, while use brings in only modules and functions defined in the global context of the target file.
use is the workhorse here, as it allows you to keep your context as clean as possible while pulling in only the things you need from elsewhere. In contrast, where include really shines is in dependency injection: because the contents of the referenced file are bodily included, it can be used to define a set of variables to be referenced in the including context.
The main limitation of use is that it brings all modules and functions from the included context into the including context. For complex libraries, this could mean quite a large number of objects! This seems to be handled by others by breaking down libraries into a number of files, each with a narrow purpose.
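To illustrate (file and module names here are hypothetical), this is how use behaves today: everything defined at the top level of the target file is pulled in, whether you need it or not.

```openscad
// shapes.scad -- a small hypothetical library
module rounded_box(size, r) {
    minkowski() {
        cube(size);
        sphere(r);
    }
}
module peg(d, h) {
    cylinder(d = d, h = h);
}

// design.scad
use <shapes.scad>  // brings in BOTH rounded_box and peg;
                   // there is no way to import only one of them
rounded_box([20, 10, 5], 1);
```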
Proposal for Improvement
In short, I'd like more granular control over what gets imported. I propose two new builtins, from and import, inspired largely by Python.
import
use should be deprecated in favor of a new builtin, import, with the following behavior:
The .scad extension should be inferred for the argument if necessary. For the statement import <foo>, resolution should occur in the following order:
- the file foo in the folder in which the design was opened
- the file foo.scad in the folder in which the design was opened
- the file foo in the OpenSCAD library folder
- the file foo.scad in the OpenSCAD library folder
A means of explicitly including or excluding modules and functions from import should be implemented. This could be as simple as ignoring those beginning with an underscore (i.e., importing only those functions and modules matching the regex ^[^_].*$). Alternatively, special variables could be used to define inclusion/exclusion:
To include only foo and bar:
$export = ["foo", "bar"];

module foo() { ... }
module bar() { ... }
module baz() { ... }
Functionally equivalent would be to exclude baz:
$private = ["baz"];

module foo() { ... }
module bar() { ... }
module baz() { ... }
from
from would effectively be the extended form of import, allowing the inclusion of only explicitly specified modules and functions from the included context.
Given my_library.scad:
module foo() { ... }
module bar() { ... }
module baz() { ... }
to include only foo:
from <my_library> import "foo";
to include both foo and bar:
from <my_library> import ["foo", "bar"];
Notably, using from should ignore the special variables $export and $private described in the previous section. Modules and functions that are either not included or explicitly excluded from the import builtin should still be accessible via from.
|
so they aren't in the original tree, and it seems that the <xsl:template> doesn't match them. Or should they be matched?

They will match (assuming the elements do have the name MenuData in no namespace), but like any template, the template will only be executed if templates are applied to the matching nodes with xsl:apply-templates. If you use xsl:copy-of (for example), then what you get is a copy and no templates will have any effect.

David
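A minimal sketch of the point above (element names assumed): the MenuData template only fires along the apply-templates path, while copy-of reproduces the nodes untouched.

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Only runs when a MenuData node is reached via apply-templates -->
  <xsl:template match="MenuData">
    <menu><xsl:apply-templates/></menu>
  </xsl:template>

  <xsl:template match="/">
    <!-- Template fires: MenuData becomes <menu>...</menu> -->
    <xsl:apply-templates select="//MenuData"/>
    <!-- Template does NOT fire: MenuData is copied verbatim -->
    <xsl:copy-of select="//MenuData"/>
  </xsl:template>

</xsl:stylesheet>
```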
|
The limits.h header file in C
The limits.h header file contains macros that give the characteristics of the common integral types. You can easily check the maximum and minimum value of these data types using the macros of the limits.h header file.
Commonly used macros defined in limits.h include CHAR_MIN, CHAR_MAX, INT_MIN, INT_MAX, LONG_MIN, and LONG_MAX.
C program to determine limit of a data type for your system
#include <stdio.h>
#include <limits.h>

int main()
{
    printf("Maximum value of integer is = %d\n", INT_MAX);
    printf("Minimum value of integer is = %d\n", INT_MIN);
    printf("Maximum value of long int is = %ld\n", LONG_MAX);
    printf("Minimum value of long int is = %ld\n", LONG_MIN);
    return 0;
}
Output of program:
|
Problem Statement
In this problem a positive integer is given which represents a column number of an Excel sheet, we have to return its corresponding column title as appear in an Excel sheet.
Example
#1: Input: 28, Output: "AB"
#2: Input: 701, Output: "ZY"
Approach
This problem is the reverse of the problem in which we have to find out the column number from a column title.
So in that problem we converted a base-26 number into a base-10 (decimal) number. In this problem we have to find the column title from the column number, so here we just do the opposite, i.e. convert a base-10 (decimal) number into a number in the base-26 system.
We know that a general base-26 number system should have 26 characters representing the values 0 to 25. But Excel sheet column titles are a little different: they represent values from 1 to 26. So if we use the characters A-Z as 0-25, it will look like the equation below:
Let string be ABZ, this is corresponding to number n:
n = (A+1) * 26^2 + (B+1) * 26^1 + (Z+1) * 26^0
Why (A+1)? Because in the character system 'A' is 0, but in the Excel system 'A' is one. Every character gets an extra one.
So in order to get the last character, i.e. Z, we would first subtract 1 (n--) and then take n % 26:
(n-1)%26 = Z
Now divide by 26 and repeat the same process for the next character:
(n-1)/26 = (A+1) * 26^1 + (B+1) * 26^0
Algorithm
- Create an empty string for storing the characters.
- Run a loop while n is positive.
- Subtract 1 from n.
- Get current character by doing modulo of n by 26.
- Divide n by 26.
- Now reverse the result string because we have found characters from right to left.
- Return the reversed string. For example, with n = 28: n becomes 27, 27 % 26 = 1 gives 'B', and n becomes 1; then n becomes 0, 0 % 26 = 0 gives 'A'; reversing "BA" yields "AB".
Implementation
C++ Program for Excel Sheet Column Title Leetcode Solution
#include <bits/stdc++.h>
using namespace std;

string convertToTitle(int n)
{
    string ans;
    while (n > 0) {
        --n;
        int d = n % 26;
        n /= 26;
        ans += (char)('A' + d);
    }
    reverse(ans.begin(), ans.end());
    return ans;
}

int main()
{
    cout << convertToTitle(28) << endl;
    return 0;
}
AB
Java Program for Excel Sheet Column Title Leetcode Solution
class Rextester {
    public static String convertToTitle(int n) {
        StringBuilder ans = new StringBuilder();
        while (n > 0) {
            --n;
            int d = n % 26;
            n /= 26;
            ans.append((char) ('A' + d));
        }
        ans.reverse();
        return ans.toString();
    }

    public static void main(String args[]) {
        System.out.println(convertToTitle(28));
    }
}
AB
Complexity Analysis for Excel Sheet Column Title Leetcode Solution
Time Complexity
O(log(n)) : Where n is the given column number. We are dividing the number by 26 in each iteration, hence time complexity will be O(log(n)).
Space Complexity
O(1) : We are not using any extra space other than for storing the result.
|
JSF Managed Bean is a regular Java Bean class registered with JSF.
A managed bean contains getter and setter methods and business logic.
JSF managed beans work as the Model for UI components, storing the data used by the JSF xhtml pages.
With the help of the JSF framework, a managed bean can be accessed from a JSF page.
In JSF 1.2, we have to register a managed bean in JSF configuration file such as faces-config.xml.
From JSF 2.0, Managed beans can be registered using annotations.
The following code shows how to register a JSF managed bean with faces-config.xml.
<managed-bean>
    <managed-bean-name>helloWorld</managed-bean-name>
    <managed-bean-class>com.java2s.test.HelloWorld</managed-bean-class>
    <managed-bean-scope>request</managed-bean-scope>
</managed-bean>
<managed-bean>
    <managed-bean-name>message</managed-bean-name>
    <managed-bean-class>com.java2s.test.Message</managed-bean-class>
    <managed-bean-scope>request</managed-bean-scope>
</managed-bean>
The following code shows how to use annotation to register a JSF managed bean.
@ManagedBean(name = "helloWorld", eager = true)
@RequestScoped
public class HelloWorld {
    @ManagedProperty(value = "#{message}")
    private Message message;
    ...
}
@ManagedBean marks a bean as a managed bean with the name specified in name attribute.
If the name attribute is not specified, then the managed bean name will default to simple class name with first letter lowercased. In our case it would be helloWorld.
If eager is set to "true", then the managed bean is created before it is requested.
Otherwise "lazy" initialization is used, where the bean will be created only when it is requested.
Scope annotations set the scope for the managed bean.
If scope is not specified then bean will default to request scope.
We can set the JSF bean scope with the annotations in the following list.
@RequestScoped: the bean lives as long as the HTTP request-response lives. It gets created upon an HTTP request and gets destroyed when the HTTP response associated with the HTTP request is finished.
@NoneScoped: the bean lives as long as a single Expression Language (EL) evaluation. It gets created upon an EL evaluation and gets destroyed after the EL evaluation.
@ViewScoped: the bean lives as long as the user is interacting with the same JSF view in the browser window. It gets created upon an HTTP request and gets destroyed when the user navigates to a different view.
JSF provides a simple, static Dependency Injection (DI) facility. The @ManagedProperty annotation marks a managed bean's property to be injected into another managed bean.
|
Hey guys, I'm working on converting some existing Java code to C++, trying to use only built-in standard libraries, because we won't be able to have any outside libraries on the platform we are building for. It's been a while since I had to write "fancy" C++ code, so here is probably an easy question for you.
I have a base class in Java with a method whose return type is Number; each of its children returns a different type (int, double, float, etc.). I also have a function that takes an Object in Java, and each child acts differently on it.
public abstract class Base {
    ...
    public abstract Number doSomething();
    public abstract void setMe(Object blah);
    ...
}

public class Child1 extends Base {
    ...
    public Number doSomething() {
        value = new Double(somedouble);
        return value;
    }
    public void setMe(Object blah) {
        value = (Double) blah;
    }
}
What would be the best way to go about this? C++ doesn't have a generic like Number; can I use a template in C++, i.e. template<class Number>?
Do I have to do anything special to make that work? Also this will be on a Linux system.
|
Hello,
I would like to know if it's possible to select a folder (in this case containing txt files), then read the first txt file, then with a button read the next/previous txt file.
The goal of this project is to visualize the changes between versions.
Thank you very much.
import streamlit as st

st.title('Read TXT')

uploaded_file = st.file_uploader(label="Choose a TXT file", type=['.txt'])

if uploaded_file is not None:
    # To read file as bytes:
    bytes_data = uploaded_file.getvalue()
    st.write(bytes_data)

col1, col2 = st.columns([0.5, 1.5])
with col1:
    st.button('Previous TXT')
with col2:
    st.button('Next TXT')
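st.file_uploader can't browse server-side folders, so one approach is to take a folder path as text input and keep the current file index in st.session_state. The prev/next logic itself is plain Python and can be sketched like this (function names are my own):

```python
from pathlib import Path


def list_txt_files(folder):
    """Return the .txt files in a folder, sorted so 'next' is predictable."""
    return sorted(Path(folder).glob("*.txt"))


def step(index, n_files, delta):
    """Move the current index by delta (+1 next, -1 previous), clamped to range."""
    if n_files == 0:
        return 0
    return max(0, min(n_files - 1, index + delta))


# In the Streamlit app this would be wired up roughly as:
#   files = list_txt_files(st.text_input("Folder path"))
#   if st.button("Next TXT"):
#       st.session_state.i = step(st.session_state.get("i", 0), len(files), +1)
#   st.write(files[st.session_state.get("i", 0)].read_text())
```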
|
[Updated 15 July 2015: Added a huge simplification. Jump straight to the update at the end.]
URL fragments are strings of characters that follow a hash mark (#) in the URL. Typically, they are used for anchor links, where following a link keeps you on the same page but jumps the browser to some anchored position. They’re also the tool of choice for single-page apps, where content is served dynamically without page reloads.
In Google Analytics, fragment changes are not tracked by default, and the URL paths that are passed to GA with your Pageview hits are stripped of these fragments. With Google Tag Manager, this can be remedied with a History Change Trigger and some Variable magic:
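The Variable in question is a Custom JavaScript Variable along the following lines (a sketch; the built-in Variable names are assumptions):

```javascript
function() {
  // Fire only for history changes with a non-empty fragment; otherwise
  // return undefined so the default page path is sent instead.
  return ({{Event}} === 'gtm.historyChange' && {{New History Fragment}} !== '') ?
      {{Page Path}} + '#' + {{New History Fragment}} :
      undefined;
}
```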
This simple, one-liner JavaScript function first checks if the Event that invoked this Variable was a history change. If it was and the URL fragment isn't empty, it returns the current page path (e.g. /home/) with the new URL fragment concatenated to it, separating the two with the '#' symbol (e.g. /home/#contact-us). If the Event was not a history change, the Variable returns the undefined value to ensure that the default path is sent with the Pageview. :-)
Hi Simo, I use this solution on our B2C website to track the keywords from the search results URL, i.e.:
But this tip for tracking pageviews increases the pageviews overall, especially on the page /pesquisa, and that's not correct.
To resolve this problem I use filters in Google Analytics to exclude all the pageviews from fragments, and have created a view in the Google Analytics property specific to this traffic (including the fragment hashtag), so as not to unrealistically influence pageview traffic.
Is this thought correct ?
I can’t say if something is correct or not – you build your own decisions around your data and metrics. :)
The rule of thumb I have is that if clicking something makes dynamic, new content appear, I record it as a page view. If there’s no page load, I send the page view e.g. using the method outlined in this tip.
Hey Simo,
thank you for another great implementation guide! It works well for my on-page-tabs with only two issues.
The first: the variable {{get path with fragment}} works fine as a trigger condition, but returns undefined for the page URL to send to GA. It's easy to solve. Just create another variable (for More settings -> Fields to set -> Field name -> page) and put there whatever you want to see as a page identifier. Finally I chose the Element URL, the part after the hash symbol. I tried your suggestion,
function() {
return ({{Event}} === 'gtm.js' || {{Event}} === 'gtm.historyChange') ? {{Page Path}} + '#' + {{URL Fragment after hash}} : {{New History Fragment}};
}
but New History Fragment return undefined.
The second I cannot overcome. If the URL with the hashtag is opened from elsewhere (for instance from another page), the variable is undefined and thus there is no pageview in Analytics. Since the trigger for All Pages fired, it sends {page: undefined} to Fields to Set. And I couldn't figure out where I should put another variable to send it right.
Could you please give any suggestion?
Hi Simo, thanks for this post! Is there a way to do this in V1?
Becky
Hi Becky,
Yes there is, since there’s nothing you can do in V2 that you can’t do in V1, and vice versa (technically). I’ve stopped using V1 terminology and writing guides for it, since the migration will come (hopefully) soon.
In V1, you would need to set up a History Listener Tag firing on All Pages, and make slight adjustments here and there, but technically it’s perfectly doable.
I have a single-page website and I want to track the same kind of thing, i.e. abc.com/#xyz URLs.
Using the analytics code ga('send', 'pageview', { 'page': location.pathname + location.search + location.hash }); I can track this easily, but using Tag Manager I'm still unable to track the hash URL.
I think there is a problem somewhere in the function. Please help me track the hash URL.
my landing page url is:
Hey,
It’s all in the guide. Please read it and follow the steps. If you have trouble, contact your developers and ask them to help you with it.
Your website works just like the guide expects it to work, i.e. it dispatches a popstate event when the navigation is used. So just reread and talk to your developers.
Simo
Simo-
Thanks for your help. One thing that keeps happening to me when I am live testing with real-time in GA, is that the active content page sometimes reverts back to the root folder only. An example would be /index/ as the root of the entire application.
What might be happening there? It seems to be common when I click an external link especially.
The funny thing is, is I think it is actually working properly because I never see that page in the GA content reports checking back a day or two later.
I have been facing difficulties tracking the last confirmation page on my website, which has a hashbang in the URL before the confirmation keyword (e.g. #/confirmed), while trying to retrieve data/visits from it for Facebook tracking, AdRoll and other external tools, using Google Tag Manager.
Can you please advice ?
Hi Simo,
how to set the url rewriting with your {{get path with fragment}} macro, if I have a Classic Analytics pageview tags and no “Fields to set” to use for this purpose?
Thank you very much!
Does this also works for urls where the folder is just a hashtag?
for example: domainname.com/#/paymentpage
Analytics tracks this url as “domainname.com/paymentpage”
Hi, I use a custom tag, and a virtual page view like this:
jQuery(window).on('hashchange', function() {
  ga("send", "pageview", window.location.hash);
});
Hello SIMO,
Again thanks for your great blog. Using this blog I successfully tracked my hash URLs, but now I want to create virtual pageviews for tabs which do not generate hash URLs.
like:
products—-features—-advantage—–images
on click I want to see virtual url “website/products”
this time I use traditional technique
onclick=”ga(‘send’, ‘pageview’, { ‘page’: ‘/feature/’}) for virtual preview etc.
I know I now need to use onclick="dataLayer.push({'event': 'button1-click'});" but how do I migrate this to GTM?
regards
Mohit
Hi Simo, I am stuck at this last step where you create a Custom JavaScript Variable. When I try to preview or publish the container, the notification is Parse error: semi colon expected.
I’m using the second Ajax solution function you provided and can’t find any other support on this subject as to why this function isn’t working. Any thoughts? Thank you!
Hi,
There was a syntax error in the code on the page, try copy-pasting it again.
Simo
Thanks for this. Now it’s telling me that the variables Event and Page Path are unknown, to remove the reference.
Trying to track a shopping cart configuration that looks something like this-
mydomain.com/products/dress#color=blue&size=large&fabric=cotton
to see at what funnel step in the custom config the user drops off.
My shopping cart is mydomain.com/order_summary.asp#color=blue&size=large&fabric=cotton and the hashtag IS being picked up here in GA, using _setAllowAnchor() true. But not the interim steps.
(We don’t want the custom configurations indexed, hence the hashtags).
Thanks again! (Your blog is awesome)
Very useful and flawless post here, Simo. It’s very much appreciated, I’ve learned so much from reading your blog. Keep up the great work.
hi- thanks for the awesome/helpful blog post.
We have implemented this for my site, but it doesn’t seem to be working on IE or Firefox. Looks like it doesn’t register new page views for secondary pages in my single-page app.
Have you encountered this before and know why? I think it has to do with popstate. What do you think?
Every browser implements popstate a little differently. Firefox, for example, will not dispatch a popstate with every page load, where as Chrome does.
However, I’ve never seen an issue with the simple history listener that’s used here. Have you debugged with IE / Firefox yourselves? Do you see a history event populated in the dataLayer when state changes?
If you want me to take a look, you can send details to simo(at)simoahava.com.
Hi Simo,
i have a problem to create a event “history change” as trigger.
I can create only custom event in this moment, the interface is changed from you written the article?
Thank you for the attention.
Sorry Simo, is only a wrong translation in italian Google Tag Manager, great tip!
Hi Simo,
Great post! It works for the most part, however, it doesn’t pick up the page URL when someone is referred directly to a URL-fragmented page. Is there any way to record pageviews when someone directly visits a page like /home/#contact-us? I can’t seem to find the full URL in any of the out-of-the-box GTM variables.
Thanks for your help!
Hey Brea,
I finally got around to adding an update to this article. See the last paragraph of the post for a hugely simplified version of the solution, which also takes into account page loads and not just history changes.
Simo
Simo,
Just gave it a test drive and it works almost perfectly! You’re a lifesaver, seriously.
Two follow up questions. The second is likely more complicated, but just in case there’s a quick fix I’m not considering:
– When a user is inactive for some time then navigates the site, the first page address it seems to push to GA is “/”. The “get path with fragment” variable in GTM is correct, but the page address that shows up in real-time GA is wrong. Is there any way to account for this?
– I’m attempting to pull the value of an tag into a custom javascript variable in GTM to set it as the page title (all the page titles display the same currently). However, when I test it out in GTM, the variable appears on the next page the user clicks to and displays “null” on the first page. Is there any way to pull the value of a tag on the page for the current page with a single-page app?
Illustrative example for this question: User starts on Page 1, clicks to Page 2, then clicks to Page 3. In the GTM Debugger pane, my “Page Title” custom javascript variable reads “null” on Page 1, the page title for Page 1 on Page 2, and the page title for Page 2 on Page 3.
Using “popstate” causes Safari 8 to double count pageviews. Seems to be a known popstate bug:
This could lead to 2x counting of all pages for Safari traffic…
Trying to switch to “hashchange” does not work at all for me. GTM does not recognize the hash change when it happens. I wonder if this related to site coding techniques.
Made some tests with "hashchange" myself, and I can confirm that GTM doesn't recognize it as a value, basically because gtm.historyChangeSource always seems to hold "popstate", nothing else.
@Simo: Could you confirm?
There is a workaround to solve the Safari 8 issue, and others as well, regarding a true hashchange event. Create a Custom JavaScript Variable that evaluates whether the new and old history fragments are the same. Ensure that you have them both enabled. If they are not the same, return true. Use that variable as a second condition in your trigger.
This will solve the double pageview in Safari 8 as both the new and old history fragment will be null at first. At the same time you prevent GTM to push a pageview if the history listener fires more than once in a row for a given hash value (like following a #-link twice).
Hey,
That's actually correct. It's a mistake in my blog post. There is no 'hashchange' event, and popstate should be used instead. History Source can also have pushState, replaceState, and polling, which become available if you use the respective features of the History API.
Your workaround seems sound!
Simo
Hi,
I’m also encountering this same issue where pageviews are being double counted, but with Chrome.
I’m reading over Johan’s response and don’t quite understand how to fix this.
What does the custom javascript look like that evaluates if the new and old history fragment are the same?
Thanks
@johan Could you share your custom js solution for this? Or @Kevin if you figured it out… Thanks!
So far, this seems to be helping.
if ({{Old History Fragment}} == {{New History Fragment}})
return false;
else if ({{Old History Fragment}} != {{New History Fragment}})
return true;
I added a condition that this variable has to be true, in addition to “popstate” in my history change trigger.
Hi –
When I try to create the custom js that evaluates if the new and old history fragment is the same I’m running into a compiling error – Error at line 2, character 3: Parse error. primary expression expected
here is the code:
if ({{Old History Fragment}} == {{New History Fragment}})
return false;
else if ({{Old History Fragment}}! = {{New History Fragment}})
return true;
thanks!
Always a good idea to proofread your code – the mistake is very simple to see. You have {{Old History Fragment}}! = {{New History Fragment}}. It should be {{Old History Fragment}} != {{New History Fragment}}.
Hi!
I first want to thank you for the fantastic blog of your!
So, here’s my problem. I have a dynamic website without any hashtags in the URL. I’ve implemented the tags, triggers and variables according to your guide (I’m using the updated variable js code). But somehow it doesn’t work. When I’m going into preview mode the page view tag triggers but it won’t add the path to the page view.
I have also tested this live without success. It will only send the correct page view when the browser loads the website for the first time. When navigating around, it won’t send any page views to GA.
I know a little bit of javascript (beginner) and I’m also new to GTM (so you know).
Help is much appreciated.
Hey
Sounds like your site just loads content dynamically with custom HTTP requests, and doesn’t manipulate browser history. It makes this whole thing a lot trickier, and you should ask your developers to implement manual dataLayer.push() commands upon page changes, with something like:
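For example (the event and key names here are hypothetical, not from the post), the developers could push something along these lines whenever dynamic content replaces the "page":

```javascript
// Assumes the GTM snippet has already initialized window.dataLayer;
// declared locally here only so the sketch is self-contained.
var dataLayer = [];

function trackVirtualPageview(path) {
  dataLayer.push({
    event: 'virtualPageview', // fire a Custom Event Trigger on this name
    virtualPagePath: path     // read in GTM via a Data Layer Variable
  });
}

trackVirtualPageview('/home/#contact-us');
```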
Hi,
Thanks for your answer.
When I check the datalayer the fragment field is empty but the New history state field is filled with the right information. But that field is returning an object not a string? Is there any easy piece of JS that can take the object and return a string to the variable? What do you think?
I will check with the developers if they can sort something out.
Thanks again!
Accessing objects in JS is really easy, but if you don't know where to start, I suggest you either consult your developers or take up some JS education.
Simo
Ok!
Perfect, thanks for the help! :)
Perfect thanks
Hi,
Thanks for all the help as I ramp up on a GTM project in progress. I’m having a problem where product impressions are getting counted twice, when doing fragment navigation. I’m stuck. (The page view itself works fine, from your post.) It’s an ajax call for a list of products, which includes the dataLayer calls for custom dimensions, impressions, and a separate custom event for the page view.
I removed all tags except one on the last custom event, so the event sequence is:
gtm.HistoryChange, customDimensionsPushed, impressionsPushed, pageViewEvent. The last one fires a PageView tag.
Using fiddler, I can see 2 calls to /collect, one with t=pageview, the other with t=timing. They both contain the impression data, so I think this is the source of the problem. When I remove the PageView tag, then there are 0 calls.
Is this expected? A bug? Is there a way to separate these so only one gets the impression data? (The one event/tag is calling them both, is the problem.)
Thank you very much,
Bob
Hey,
Are you sure the Timing hit is sending the impression data? It shouldn’t. I’m assuming it’s because you have siteSpeedSampleRate set to 100 or something? Even if it did carry the impression data, it shouldn’t populate in the reports, so are you seeing double-counted impressions in the reports as well?
Simo
Hi, thanks for the reply. Yes on all your questions: siteSpeedSampleRate is set to 100 (inherited this – trying to get all the data, I assume). Fiddler is showing the impressions in querystring of the two /collect calls. The impressions are showing twice in the Product List Performance report – it’s always an even number, and I selected a page of unique products on the test site to be sure.
Side tidbit, in case it matters: there is a plugin that is calling the old gaq. I mean to get this removed.
Thanks,
Bob
This is extremely strange. Are you pushing the Enhanced Ecommerce payload manually using ga() in a Custom HTML Tag, or are you using the “Use Data Layer” option in the Page View Tag?
Could you share the URL? You can send it privately to simo at simoahava.com if you wish. This is too interesting to pass by.
Use Data Layer. I’m getting a gut feeling it’s the fragment navigation. We’re using an old jquery address plugin, and it has some gaq stuff embedded in it. My next giving-up step is to rip that out and just use history.
The site is still internal; but I’ll email you.
(Sorry this posted outside the thread. You can delete this if you wish.)
Hi, this is great.
In the update, is the only change that of the javascript used in the {{get path with fragment}} Variable? Do all other processes in the tutorial remain the same?
Dave
Yeah, that’s the gist of it. So the {{get path with fragment}} is that code snippet, which is the added to the page field of the Page View Tag which fires both on All Pages and a History Change trigger.
Excellent. Many thanks for this and an incredible informative site.
Hi Simo,
We use this tracking, and it works great. However, when looking through the GA Debug fine-details, the “&dp” URL-parameter is NOT populated with anything after the first “?”-parameter in the sent URL.
Our URLs, of course, contain their own query params and hash values; is there any need to URL-escape these URLs to prevent them from conflicting with the URL that ga builds up on its own?
Hello Simo,
Thanks for your tip, which I had set up as described.
However, after setting up this tip, my website's bounce rate decreased from 4x% to 1%. My website doesn't have many #-pages, so I think it's really strange :'(
Could we do something for this problem?
Thank you so much!
Hi,
I got this too.
Our bounce rate dropped dramatically (from around 50% to 2%) and overall traffic decreased somewhere in the region of 20%.
Hi – did you ever figure out why traffic dropped 20%? I’m having a similar issue (possibly) and wonder if this addition is the reason. I can’t see why it would be though.
Hi guys, I have the same problem on my site. I just need to know for sure that it’s the URL fragment-tracking that is causing the low bounce rates. Did you guys keep tracking URL fragments despite the effect on how bounce rates are measured?
Hi,
Tracking URL fragments as pageviews will result in unuseful data when having a one-page website.
Let's say someone clicks a "Products" link, which has a URL fragment that sends the user to the Products section somewhere down the page and also triggers your Tag. Users that don't use the link but scroll down to the Products part of the page will not be measured.
Greetings,
Mark
Hi
Yes, with a one-page site scroll depth tracking would be the preferred method.
Simo
If you dynamically update the URL of the "one pager" then you will get the correct data. Actually, this is the recommended method because it also improves user experience. People can share URLs with fragments pointing directly to the part of the page they wanted to show to the world. In short, you don't need to tell someone to "go to page x and scroll down to Products" ;)
What if you have the same anchor tag located in multiple locations? We won’t be able to tell which of the links led to the virtual pageview. For example, if we have a button called “Programs” that anchors the user down to that section of the webpage, but also have a link in the footer called “Programs” that does the exact same thing, is there a way to somehow identify which link led to that virtual pageview?
Hi
You can write some JavaScript to distinguish which link click took the user to #programs. This means you need a Click Trigger as well, which fires a gtm.click event into dataLayer, and you can then parse the {{Click Element}} Variable to identify which link was clicked.
Simo
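The parsing Simo describes can be sketched as a small function over the clicked element. This is only an illustration: in GTM the element would come from the built-in {{Click Element}} variable, and the function name and the landmark tags checked here are assumptions.

```javascript
// Sketch: identify where a clicked link lives by walking up the DOM
// until a landmark container (nav or footer) is found.
// In a GTM Custom JavaScript Variable, 'clickElement' would be
// {{Click Element}}.
function identifyLinkLocation(clickElement) {
  var el = clickElement;
  while (el && el.tagName) {
    var tag = String(el.tagName).toLowerCase();
    if (tag === 'footer') return 'footer';
    if (tag === 'nav') return 'nav';
    el = el.parentElement; // keep climbing toward the document root
  }
  return 'unknown';
}
```

The returned string could then be sent as an event label alongside the virtual pageview to tell the two “Programs” links apart.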
Great writeup, lots of helpful detailed info!
A couple of things I’ve run into, working with a parallax one-pager:
1. When I set the History Fragment Change trigger to fire when the History Source equals “popstate” it won’t fire for me. But if I take that restriction off and let it fire on “All History Changes” it does fire.
2. Using the tag manager’s preview console everything looks like it is firing off when it is supposed to. I have a couple of events set up in addition to this page view tag and all look to be working as expected from the preview console. I’ve wrestled with this issue for hours and hours at this point… I simply cannot get the data to push over to my Analytics. I have no errors in my console and I can see calls being made to Analytics from within my Network console every time I trigger a tag. The wacky part in my case, I believe, is the formatting of the .gif urls that push the data to GA. With each tag hit I get four urls, which more or less look like this:
A couple of things stick out to me:
– the “ec” param maps out to “illegal ga call”.
– the “ea” param seems to contain data that looks as though it belongs in other params
– the property ID in this example is not my property ID, even though it is set correctly in the tag and I can track it in the preview/debugging console
Any thoughts? Thanks!
Are you perhaps using “ROI Revolution” and their “Google Analytics Tracking Enhancer”? That adds its own tracking beacon, which collects to UA-61700779-1 (horrible practice by the way).
If it fires on “All History Changes”, what is the history event that is pushed to dataLayer? You can check this in preview. There should be gtm.historyChange, and then there’s a bunch of variables pushed to dataLayer. What does the dataLayer message that is pushed with gtm.historyChange look like?
If you could share the URL, it would help debugging.
Simo, your blog has been a life-saver for me. Thanks for everything!
One question – I ran into an issue with the code snippet below, which I was using to retrieve the URL fragment following the anchor. The snippet seemed to be causing irregular behavior with flyout menus in firefox.
Do you have any idea of why this might happen? Is there an alternative way to retrieve the URL fragment?
function() {
return window.location.pathname + window.location.search +
window.location.hash;
}
Some background – my company uses GA/GTM across our entire site. I am using GTM/GA on a single-page Angular app that exists within a subdirectory of the company URL. The issues caused by the snippet were happening outside of my subdirectory.
Thanks for this Simo, it is proving very useful (as is the rest of your blog!).
One issue that I have encountered since implementing though, is that the bounce rate for visits on Safari has dropped to circa 5%. I presume that this is something to do with how popstate is handled on the browser, do you have any suggestions?
Hey
You’ll need to debug the issue in Safari. You could add a check for {{New History Fragment}} !== {{Old History Fragment}} to only dispatch the event when the fragment has changed.
Simo
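The fragment-changed check suggested above can be sketched as a tiny guard. Function and event names below are assumptions; in GTM the comparison would normally live in the trigger's conditions, with {{New History Fragment}} and {{Old History Fragment}} as the two inputs.

```javascript
// Sketch: only dispatch a virtual pageview when the fragment actually
// changed, to avoid the inflated-hit / collapsed-bounce-rate problem.
function maybeTrackFragment(newFragment, oldFragment, dataLayer) {
  if (newFragment === oldFragment) {
    return false; // fragment didn't change; don't dispatch anything
  }
  dataLayer.push({ event: 'fragmentPageview', pagePath: '/#' + newFragment });
  return true;
}
```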
Thanks so much for this solution. It has helped greatly. I am having one problem in that all my sessions from Internet Explorer are not firing the virtual pageview. Has anyone else experienced this? I’m assuming the developers have written some browser specific code for IE that is interrupting popstate. IE is infinitely harder for me to debug, because I can’t get preview mode to work in IE 11.
In case anyone else is using this solution and ran into the problem I explained above, I was able to get it to work by changing the trigger to match both popstate and hashchange. For some reason IE 10, 11, Edge use hashchange.
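The two-source fix described above boils down to accepting either value of the history source. A minimal sketch of that predicate (the function name is an assumption; in GTM this would be an OR condition on the History Source variable in the trigger):

```javascript
// Sketch: IE 10/11 and Edge report 'hashchange' where other browsers
// report 'popstate', so accept both as fragment navigation.
function isFragmentNavigation(historySource) {
  return historySource === 'popstate' || historySource === 'hashchange';
}
```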
Hi Simo,
I have a kinda similar issue where I’m not able to track the information when the URL has a “#” in it. E.g.:
I can only get info up to abc and nothing after that.
I followed your steps but unfortunately I was unable to get the page view in the settings.
Also, does the last function only need to be added in the Google Tag Manager account?
function() {
return window.location.pathname + window.location.search + window.location.hash;
}
Please, let me know asap.
Thank you.
–Rohit
Simo,
Am implementing tracking on an AngularJS SPA and was struggling to get GTM to work. Was fairly confident you would have the answer. Although I was too lazy to read the bottom of the post, so I wasted 30 mins before seeing your comment about parameter strings and the simplification of the custom JavaScript. Once I made that change it worked straight away.
Many thanks,
This is such a nice gesture of you to dedicate so much time to help others out. Thank you for the invaluable info.
I am reaching out to anyone who may be able to help, as I have literally spent the past 10 hours trying to figure out why things are not working properly. I have looked for solutions everywhere and tried about 40 versions and still cannot for the life of me figure it out.
I have followed the above instructions to a tee (At least that is what I think).
The info is simply not being passed to GA. I don’t see it in the GA live preview and looking in the preview data, it doesn’t seem to trigger anything relating to history change.
I have included a share-a-preview link hoping that someone could point me in the right direction: &gtm_auth=d51Aqwwn-sl7HlPucmcojw&gtm_preview=QUICK_PREVIEW&gtm_debug=x&url=
Hi,
As you can see when clicking the links, your site doesn’t use URL fragments, so the tutorial in this article is of no use to you.
Instead, you need to track clicks on the internal links (e.g. navi) using regular Click Triggers, and then parse the clicked element for details about where the user clicked to.
Simo
Thank you very much for the help and your time, it is very appreciated.
I am confused though, as the site’s menu uses “#item” for href and takes the user to an anchor point. How did you determine that it is not using fragments?
Thanks!
Hey,
The #item is probably just there for the click handlers to grab. I know it’s not using fragments by looking at the URL. If it used fragments, you’d see #item in the URL after clicking the link. In other words, the site isn’t using the History API but rather treating links simply as triggers for transporting the user to a specific point in the page using some JavaScript shenanigans.
hi simo, i’m trying to track the anchor tags in my homepage so I added this in the header.php: ga('create', 'UA-30537488-1', {'allowAnchor': true});
ga('send', 'pageview', { 'page': location.pathname + location.search + location.hash });
and it works when I arrive at an anchor section in the home page, like when coming
from another page of the website, but it doesn’t work when I’m on the same page (homepage) and I go from one anchor to another, so I put this in the footer:
jQuery(document).ready(function () {
  jQuery('.menu a').click(function () {
    var match = jQuery(this).attr('href').match(/#\S+/);
    ga('send', 'pageview', location.pathname + match[0]);
  });
});
I read around the web and in Google groups, but it still doesn’t work for me…
I also added another snippet in the footer, to see if jQuery works, but it seems it doesn’t:
jQuery(document).ready(function () {
  jQuery('.menu ul li a').click(function () {
    console.log('testing');
  });
});
so I don’t see “testing” logged from the footer of the homepage. What do I have to do to solve my issue? Can you help me? Thank you a lot, and sorry for my English, I’m Italian :)
Hi
For starters you might want to get your class names correct…:) There is no class “.menu” on the page, but there is “.menu-menu1-container”. So use that instead, and your jQuery might work.
Simo
Hi Simo. Really appreciate this post. I implemented these on GTM, but noticed that bounce rates on all pages have decreased by more than 50 percentage points. This is because of tracking the URL fragments, correct? That has been the only activity made on my GTM recently. Only the pageview (significant increase) and bounce rate (significant decrease) metrics have been affected since anchor link tracking was implemented.
Anchor link tracking is very important for my business requirements, so while the sudden dips in the dashboard may raise concerns, I just need your confirmation that this is caused by those History triggers and variables. I hope you can share advice if my understanding is completely incorrect, so I can investigate something else that may have affected the bounce rate.
Hi, Simo. Hope you can share your thoughts on my comment above. Looking forward to it!
Hi Simo,
That’s another practical post from you. You’re a great role model and you’ve given me a lot of inspiration.
I took another challenge and I struggling with the fragment.
I am trying to track all the traffic that’s coming from Instagram and I used an URL fragment and built a lookup table. The problem is that the “source” is still “direct” in GA reports but it should be “instagram”. I am just trying to avoid those long UTM campaign parameters and make the URL as short as possible. That’s why I decided to use it.
What I did?
1. First of all, I use this kind of landing page + fragment:
2. I created a URL fragment variable:
3 I built a lookup table variable:
4. I added “Field name” and “Value” to my normal GA pageview tag:
The problem is that in real-time reports (if I tested it) it still shows “direct” as a traffic source. It doesn’t change.
In my opinion, it would be a great alternative method to use instead of the UTM parameters. This kind of method makes the URL way shorter.
What do you think, what might be the reason/issue why I couldn’t see the right source in real-time reports? :)
Thanks a lot Simo. Definitely, if you’re planning to write a book about GA & GTM in the future, I am your client. That’s 100% sure.
Hi,
As with UTM parameters, you will always need to define both source and medium (preferably campaign name, too) when applying campaign parameters. So you might want to add the fields (and values) for
campaignMediumand
campaignNameas well.
And thanks for the kind words!
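The fragment-to-campaign lookup discussed above could be sketched as a plain mapping. The table contents and names below are assumptions; the point, as the reply notes, is that the lookup must yield all three campaign fields (source, medium, and ideally campaign name), not source alone.

```javascript
// Sketch: map a short URL fragment to the campaign fields that would
// otherwise come from long UTM parameters. In GTM this would typically
// be a Lookup Table Variable keyed on the URL fragment.
var FRAGMENT_CAMPAIGNS = {
  ig: {
    campaignSource: 'instagram',
    campaignMedium: 'social',
    campaignName: 'profile-link'
  }
};

function campaignFieldsForFragment(fragment) {
  return FRAGMENT_CAMPAIGNS[fragment] || null;
}
```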
Hi Simo,
Would this also work with url-fragments that start with “;”?
I have implemented a solution in GTM V2, in which I use the history change trigger, but I get incomplete URLs in Analytics:
For a single page AJAX applications, I am tracking usage of the application in Analytics via GTM, using the history-change trigger.
The URL of the viewed page changes client side, so I am able to track ‘pageviews’ in Analytics.
For the history-change trigger there is a value that is transmitted to the datalayer, ie ‘gtm.newUrlFragment’
I have a variable that states that ‘gtm.newUrlFragment’ is a virtualURL and in the Universal Analytics tag, I use this virtualURL as ‘Field to set’, Field name ‘Page’
The problem is that not the complete url is sent to Analytics, but only the first part of the URL. The complete URL is visible in the datalayer (in the ‘gtm.newUrlFragment’-value), but only a fragment ends up in Analytics.
The complete url is:
[domain]!segment;segment/document/9050;segment/document/9050/text@fragment-118598
The part that I can get in Analytics is:
[domain]!segment
(I already tried to make it an official url by adding /# in front of the first !, but that doesn’t make any difference.)
Any idea how this can be fixed so that I can get the complete URL in Analytics?
I figured the “;” was the problem, so we created a custom JavaScript in GTM that replaces the “;” with “:” and now the complete URLs are ending up in Analytics!
Cool! :) Sometimes it’s the little things that cause the most havoc.. :)
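The Custom JavaScript Variable described in that fix amounts to a one-line replacement. A minimal sketch (the function name is an assumption):

```javascript
// Sketch: replace the ';' characters that truncate the hit with ':'
// before the page path is handed to the Analytics tag.
function sanitizeFragment(fragment) {
  return fragment.replace(/;/g, ':');
}
```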
Hi Simo,
I just stumbled upon your blog post. I’m working on an analytics/tag manager account that was previously managed by someone else (whom I can’t reach). This someone happens to use your implementation with image galleries iterating over the hash like so: example.com/pictures/page_with_gallery/#3 for the third picture.
In Analytics, said galleries sometimes create URIs with repetitive hashes (e.g. example.com/pictures/page_with_gallery/#3#3#3).
Do you have any ideas, how these hits might get created?
Correction:
The hash is not appended multiple times to the site path, but to the page title.
Some of the hits that originate from galleries are associated with a page title like
“I’m a title#1” in analytics.
Right, in that case check the title field in your GTM Tags for the broken JavaScript. If there is none, then it might be a bug in your site (how it’s generating the dynamic page title), or even a bug in GA, where the page title is appended by the hash (though this is very unlikely).
Hi,
It’s most likely a JavaScript issue, where the script which should return the hash as part of the page path has some issue. This would need to be fixed by going to the Custom JavaScript Variable with the code in GTM, and fixing the code.
Alternatively, it could as well be your site not working properly with some web browsers, so instead of just keeping the URL as it is, it always appends #3 into the end.
Simo
Maximum help! thanks mate.
Hi Simo,
I’ve implemented exactly the same thing for my website, but I’m still not getting pageviews with URL fragments. Please help me…
Hey Simo,
first off, thanks a lot for your blog. It already helped me steer clear of some cliffs like iframe tracking, cross-domain references and the like. I implemented your new solution yesterday but currently only get /#/ appended to the original URL; the rest of the fragment information is still missing. I checked that I followed your guide to the word. Is there anything I could be missing, given that the final part of the URL is only appended after a button click that triggers a result page?
Thanks and cheers,
Konstantin
Hey Simo,
great article! I’m trying to implement hash tracking on a website, where the hash is not appended to the URL when clicking a link. Hence, there is no history change and both document.location.hash and window.location.hash return an empty string.
Do you know of any way to receive the hash when the link gets clicked, but not passed to the URL?
Best
David
Hi,
At this point you might want to just ask the developers to add a custom
dataLayer.push() with an ‘event’ key and ‘pagePath’ or something, with which you’ll track the transitions. Doing this type of hack client-side without any changes in the URL or the page state would be very inefficient.
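The developer-side push suggested above might look like the sketch below. The ‘event’ and ‘pagePath’ keys come from the reply; the event name 'virtualPageview' and the function name are assumptions.

```javascript
// Sketch: a custom dataLayer push the site's developers would call on
// each in-app transition; a GTM Custom Event Trigger on
// 'virtualPageview' would then fire the pageview tag.
function trackVirtualPage(dataLayer, pagePath) {
  dataLayer.push({ event: 'virtualPageview', pagePath: pagePath });
}
```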
Hi simo,
I’ve tried your code, but I have a problem here: my home page URL changes from mysite.com/ to mysite.com/#2 if I scroll the page. The code triggers the pageview request, but the page path doesn’t show the hash (#2); it still shows mysite.com/.
Do you have any idea how to send the hash number to Google Analytics?
I tried to put {{get path with fragment}} #{{New History Fragment}} in the page field, but it’s not working.
thank you Simo
Wonderful!
It works. But I still have one problem. I did everything from your “update” part.
But how do I re-trigger the Analytics code and send the new hash URL when someone moves from one page to another?
Hi Simo,
I need to execute a WordPress function inside a variable. I tried it by placing PHP code inside a Custom JavaScript Variable, but it’s showing a parse error even though the syntax is correct.
I just want to execute one wordpress function and return the result to use it as variable. Please tell me the correct way to do this.
For your reference, the function I need to execute is the_permalink() to get the url of the current post or page. Please help.
What if the url looks something like this:
domain.com/#/login, where the changing part is “login”; it could be “preview” or something else.
Hi Simo,
I need to apply history fragment changes on a mobile site such that certain pixels will fire when the history fragment has certain strings within it.
For example, if the history string after the # has “tcart” in it, I need to fire off a particular pixel.
All of the instructions I am seeing show how to do this using a Google Analytics pixel, but we are not using GA in this case. We just need to be able to apply this to standard JavaScript and image-based tags. Is there a way to get history fragments to fire standard JS and image pixels without setting anything up on the GA side?
What I’ve done so far is:
1. Apply a History Change trigger to fire when the New History Fragment equals “tcart”.
2. Applied this history change trigger to the appropriate pixel
Hi,
You never said whether you have a problem with your setup or not. This setup has nothing to do with the Tag type (GA or Custom HTML or something). This is about how you set up the Trigger. So if you set up a History Change Trigger, and its only Fire On condition is New History Fragment equals tcart, then your Trigger should fire when the page URL changes to /#tcart.
Hello, Simo!
Thank you for this fantastic tutorial!
I am brand new to Google Tags and I’m trying to make it work with my one page website so that I can see which (virtual) pages people are visiting. Yours is the first tutorial I’ve followed that has resulted in successfully firing pageview tags.
My problem is that I’m not seeing the various pages (such as /#contact) in Google Analytics. Everything just registers as a view of the index.
It looks to me like the tags are firing and registering this for a click on the ‘contact’ menu:
Fields to Set {: ‘/#contact’}
Have I misconfigured something or am I just looking in the wrong place in Analytics?
Thank you so much for your help!
Thanks for the great post!
I implemented it and it just works fine.
Now I want to push events on this individual pages. The events work fine but the problem is that events send the page url without the url fragment (thing after #). Is there a way to modify events to send the complete url and not ignore part after #.
Thanks,
Hi,
Yeah, create a new Custom JavaScript Variable:
And add that to the Fields to Set of your Event Tags, with the field name being page.
That will preserve the full path, URL query string, and hash fragment of your URL.
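The Custom JavaScript Variable referred to above is presumably the same snippet used earlier in the thread: full path plus query string plus hash. The optional `loc` parameter below is added only so the sketch can be exercised outside a browser; in GTM the body would simply use window.location directly.

```javascript
// Sketch of the 'get path with fragment' Custom JavaScript Variable:
// returns path + query string + hash, e.g. '/page?a=1#contact'.
function getPathWithFragment(loc) {
  loc = loc || window.location; // in GTM, window.location is the input
  return loc.pathname + loc.search + loc.hash;
}
```

Setting this variable as the `page` field on the Event Tags keeps event hits consistent with the fragment-aware pageviews.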
Hey Simo,
Thanks for posting the advice, just one quick question: the name of the variable, does it have to be exactly {{get path with fragment}}? The reason I ask is that Tag Manager won’t let me name it that, as it says the ‘{‘ is not allowed, i.e. “The name cannot start with an ‘_’ or contain a reserved character (!, @, #, $, <, etc).”
I’m struggling to get it all to work. When I review the recordings it’s picking up the # fragment, but I just don’t think it’s making it into GA. I thought I would start with the first part, which is where I had to deviate from your instructions :)
Many Thanks!
z
The variable name is “get path with fragment” (without quotes). The {{ and }} are the syntax you use to call a variable in GTM.
Simo
Thanks a lot, Simo! Will give that a go now :)
http://www.simoahava.com/gtm-tips/track-url-fragments-as-pageviews/
In last month’s column I began an ambitious project: building a SQL Server™-specific DataNavigator control that supports two-way data binding. The control I’ll present in this column, SqlDataNavigator, is just an extension of last month’s DataNavigator. The SqlDataNavigator ASP.NET control described here is meant to be the Microsoft® .NET counterpart of the Data control—an old Visual Basic® control that caused its share of headaches. The control moves from one record to the next according to a given order and displays each data row using a dynamically generated template. Last month I focused on the DataNavigator control’s architecture and tackled some programming issues related to connectivity and data display.

This month, I’ll add editing capabilities to the control, making SqlDataNavigator actually support the “writing” channel of .NET data binding. The SqlDataNavigator user interface includes a new toolbar with buttons to edit and delete the current row, plus a third button to insert a new blank record. I also added Boolean properties to let you control the availability of each of these features individually. Before discussing how the control’s internal structure must be modified to allow for the new functions, let me say a few words on the topic of record-based paging.
The SqlDataNavigator control has a pager bar, similar to the MoveNext and MovePrevious methods exposed by the Data control, that you can use to move through records sequentially. There’s a significant difference between the Data control, which was tailored for the ADO Recordset object, and the SqlDataNavigator control, which has been designed with ADO.NET and ASP.NET in mind. The SqlDataNavigator control (see Figure 1) has no underlying open connection or server-side cursor to easily move you to the next record. Right now, ADO.NET does not support server cursors. In addition, in this implementation of the SqlDataNavigator control, I have deliberately chosen to build all the logic necessary for data access into the control class. What I need is a flexible tool that can quickly set up an attractive, efficient user interface for SQL Server tables. Furthermore, once full support for two-way data binding has been implemented, you’ll have a rather powerful tool for creating edit interfaces for virtually any SQL Server table.

But how does the control get the current record, and what kind of logic moves it from one record to the next? The GetRecord method performs the paging. It locates the record that occupies the CurrentRecordIndex ordinal position in the current order. The control’s pager bar updates the index stored in the CurrentRecordIndex property. GetRecord utilizes the actual content of the ConnectionString and TableName properties to set up the connection and run the command. The method returns a SqlDataReader object and leaves the connection open until the data reader content is completely processed.

How can you retrieve row n in a SQL Server table sorting by a given field? And speaking more generally, how can you select all the rows that fit in page n given a certain page size? The answer lies in the following steps:
Setting the SearchKeyField property allows you to specify the field by which to order. You also need to indicate a primary key field to make sure that records with duplicates in the sort field are not mistaken for one another. Since the SqlDataNavigator control works with a page size of 1, the following T-SQL statement retrieves the record in the fifth position, sorting by lastname:
SELECT * FROM
  (SELECT TOP 5 * FROM Employees ORDER BY lastname) AS t1
WHERE NOT EXISTS
  (SELECT * FROM
     (SELECT TOP 4 * FROM Employees ORDER BY lastname) AS t2
   WHERE t1.employeeid = t2.employeeid)
Notice that there are items in the code that represent parametric information such as the table name, sorting field, the key, and the record number to retrieve. If you need to retrieve the record in position N with a page size of 1, then you must discard the first N-1 records. Unfortunately, there's no way to write a command like this using SQL parameters. The reason is that the TOP and the ORDER BY clauses do not accept variable parameters. So I resorted to the following code for formatting placeholders:
StringBuilder sb = new StringBuilder("");
sb.Append("SELECT * FROM (SELECT TOP {0} * FROM {1} ORDER BY {2}) AS t1 ");
sb.Append("WHERE NOT EXISTS (SELECT * FROM (SELECT TOP {3} * FROM {1} ");
sb.Append("ORDER BY {2}) AS t2 WHERE t1.{4}=t2.{4})");
strCmd = sb.ToString();
The SQL Server-based .NET data provider deals with this code, and any other T-SQL code, in a relatively efficient manner. The SQL code is transmitted through the sp_executesql system procedure. For programming ease, you might want to consider using format placeholders in your code.
At the foundation of the SqlDataNavigator user interface there is a highly customized DataGrid Web control. The DataGrid is already predisposed toward in-place editing, so making this feature show off in the SqlDataNavigator control should not be really hard. To set a DataGrid to edit mode you normally add a special breed of column—the EditCommandColumn column type—and handle the events it fires upon clicking. The EditCommandColumn object allows you to specify the text for the links that will edit the row and then save or cancel any changes. Note that you don’t strictly need such a column to set a grid to edit mode. What really matters is that you run a piece of code that properly sets the DataGrid’s EditItemIndex property.

In Figure 2, you can see an updated layout for the SqlDataNavigator control. There are two key enhancements over the structure I introduced last month. First, in addition to the record navigator and the search box, the pager bar also contains a small toolbar whose buttons allow for the three editing operations: insert, update, and delete.

Figure 2 SqlDataNavigator Control

If you permit updates, you must also provide for buttons to save or cancel changes. Such controls are automatically provided by the DataGrid infrastructure as long as you define an edit command column. For the sake of consistency, you should maintain the interface and the working metaphor of the DataGrid control. The EditCommandColumn object mostly provides for a row-specific hyperlink to trigger the edit event. If the control’s interface is capable of providing an alternate way of entering data in edit mode, then you can have an editable grid even without an EditCommandColumn, or you can keep such a column hidden from view. During the initialization step, you add two columns to the grid. One is a TemplateColumn object for displaying the contents of the record, while the second column is an EditCommandColumn object hidden by default.
The idea is to let another piece of the interface fire the edit and, when this happens, toggle the visibility of the edit column to make it show the standard Save and Cancel link buttons. Falling into this predefined flow is important because it saves you a lot of coding. You don’t have to worry about the layout of the OK and Cancel buttons or about their handlers. The DataGrid does it for you and fires two tailor-made events when users click OK or Cancel. One of the buttons in the new toolbar can easily enable editing by just setting EditItemIndex to a non-negative index. Since the grid has exactly one item, this index must necessarily be 0.

Figure 3 shows the initialization code of the grid that makes up the SqlDataNavigator control. Notice that the EditCommandColumn object has the Visible property set to False, which hides it from view. Also notice that there’s no handler for the EditCommand event. EditCommand is one of the standard events that the DataGrid control fires during its activity. In particular, EditCommand is raised when the user clicks on a link within the edit command column or when he or she clicks on any button within the grid that has a command name of edit. In response to the event, the programmer sets the EditItemIndex property to the correct index value to indicate which item should redraw in edit mode and refreshes the view. Since I’m not putting the grid into edit mode using the column, I can more easily accomplish the same tasks elsewhere in the code if something happens to change the working mode of the grid. A more elegant approach would be to use a custom button with a command name of edit. In this case, the grid would still fire an EditCommand event—a piece of the standard programming interface—though it would be unrelated to the EditCommandColumn. The code in Figure 4 creates the toolbar with buttons for insert, delete, and edit operations.
Each button is governed by a Boolean property—AllowInsert, AllowEdit, and AllowDelete—which determines whether the button is enabled. Let's see what happens when you click on the Edit button.
The structure of the handler that takes the edit command is pretty straightforward. The code sets up the control's interface to reflect the new working mode and then orders a data refresh:
void OnEditCurrentRecord(Object sender, EventArgs e)
{
    SetupWorkingMode(WorkingMode.Edit);
    BindDataToGrid();
}
The feasible working modes for the control are defined in a custom enum object called WorkingMode:
public enum WorkingMode : int
{
    View = 0,
    Edit = 1,
    Insert = 2
}
When the control is not in view mode (in other words, it's editing or inserting), the button bar is hidden from view to avoid abruptly halting ongoing operations. At the same time, the EditItemIndex must be 0 (which refers to the first item in the grid page) and the EditCommandColumn must be visible:
void SetupWorkingMode(WorkingMode m)
{
    bool bIsInViewMode = (m == WorkingMode.View);

    // Hide/Show the button bar
    ShowButtonBox = bIsInViewMode;

    // Set the new working mode
    Mode = m;
    m_grid.EditItemIndex = (bIsInViewMode ? -1 : 0);
    m_grid.Columns[1].Visible = !bIsInViewMode;
}
Figure 5 shows the SqlDataNavigator control while editing a record. As you can see, the button bar is hidden while the EditCommandColumn is visible. This column features the Save and the Cancel buttons. Clicking on either of these two buttons would fire the standard pair of events—UpdateCommand and CancelCommand—for you to persist or cancel changes.

So much for the editing infrastructure, but what about the edit template? By default, when a DataGrid control enters edit mode, only data-bound columns (class BoundColumn) are automatically rendered through textboxes. In this case, the internal grid control has just one column (plus the edit command column). Last month, I discussed how to dynamically build an HTML template for display purposes. This month I will build the template again to allow for editing. The essential code that outputs the record shown in Figure 5 is repeated here:
TemplateColumn tc = new TemplateColumn();
tc.ItemTemplate = new DataNavigatorItemTemplate();
tc.EditItemTemplate = new DataNavigatorEditItemTemplate();
The class DataNavigatorEditItemTemplate implements ITemplate and binds to the data using editable controls like textboxes, dropdown lists, and checkboxes. In Figure 6 you see the outline of the code for the template class. The edit template creates a placeholder control and then handles its DataBinding event. When the placeholder gets bound, the class dynamically creates and renders an HTML table. The procedure looks similar to what happens for display, but with a few significant differences. As you can see in Figure 5, textboxes are used in lieu of labels. Later in this column, I’ll discuss how to utilize dropdown lists and checkboxes if the data allows for it.

Using textboxes poses a few modeling problems. For example, if the text is too long, you must switch to a multiline textbox; if you handle dates, then you should employ a short, simple format. But what if the field you’re about to edit is set as auto-number or read-only? What if it does not accept null values? How do you handle a field that has binary contents? You need to know about the schema of the table you’re editing. Unfortunately, you need to access schema information from within a template class that has been passed only the structure of the data item. This latter point represents just one instance of a more general problem. A template class is a class that is distinct from the control, yet it needs to access and read some of the configuration settings, among which are the schema table, colors, and bindings. How do you pass all this information down to the template class? Well, the edit template class is a class whose programming interface is under the programmer’s total control. The only requirement is that it has to implement the ITemplate interface. Nothing prevents you from adding public properties to pass all the information needed. You get schema information about a table by calling the GetSchemaTable method on the SqlDataReader object.
GetSchemaTable returns a DataTable object whose rows evaluate to the table's columns. The columns of the schema table have predefined names like AllowDBNull, IsReadOnly, ColumnSize, IsPrimaryKey, and so on. The following code checks whether a given column is an identity column:
DataRow r = SchemaTable.Rows[i];
bool bIsIdentity = (bool) r["IsIdentity"];
Prior to refreshing the grid, you make sure that the template classes (both item and edit item templates) have been filled with all the configuration information they need:
DataTable dtSchema = GetSchemaTable();
TemplateColumn tc = (TemplateColumn) m_grid.Columns[0];
DataNavigatorEditItemTemplate dneit;
dneit = (DataNavigatorEditItemTemplate) tc.EditItemTemplate;
...
dneit.SchemaTable = dtSchema;
Carrying schema information in the body of the edit template class makes it easy for you to implement some cool features such as marking the field as required if it does not accept nulls, preventing changes on read-only fields, or using multiline controls if the text or the column size can exceed a certain length. In Figure 5 you can see some of these features in action. For example, asterisks mark fields where nulls are not allowed and the employeeid field, which is an identity column, is disabled.
To improve the user's edit experience, you might want to configure each column individually. For example, you may want to pick up the value for that column from a lookup table or render a certain piece of content as a Boolean value. In such cases, the textbox is no longer the most suitable control for editing. You might also want to keep fields as read-only in your application or format them in a special way. For this purpose, I created a new data structure called DataBoundField (see Figure 7). This class describes how a field should be rendered for display and edit. The class represents the binding context for the column and indirectly adds a great deal of flexibility to the overall interface of the SqlDataNavigator control. For example, you can control the label text and the tooltip of the field and decide whether you want it to be displayed with a dropdown list or a checkbox. (More in a moment.) The SqlDataNavigator control exposes a DataBindings property that is an instance of the ListDictionary class:
public ListDictionary DataBindings {
    get {return (ListDictionary) ViewState["DataBindings"];}
    set {ViewState["DataBindings"] = value;}
}
The contents of the property are persisted across multiple page requests. ASP.NET does not know, though, how to serialize the contents of the DataBoundField. If you want to be served the default way, just mark the class with the [Serializable] attribute. Beware, though, that this approach is not necessarily optimal and could lead to too much code being persisted. Check the MSDN documentation to explore alternative approaches, such as writing a type converter for the class. The following code shows you how to configure the user interface of the SqlDataNavigator control using bindings:
DataBoundField d = new DataBoundField();
d.MultiLineRows = 5;
d.FormatText = "<i>{0}</i>";
d.LabelText = "Personal Notes";
d.ToolTipText = "Some personal notes";
data.DataBindings.Add("Notes", d);
First, you create a new DataBoundField object and set some of its properties. Then, add the object to the DataBindings collection using a key value that matches the field name. The control internally locates the item using the Contains method of the ListDictionary class and passes the name of the column just read off the schema as a string. Since the Contains method is case-sensitive, you must pay attention to how you write the column name. A better approach would be to derive a custom dictionary object from DictionaryBase and make it work irrespective of the key case.

Figure 8 Configuring the UI

Figure 8 shows the modifications in the user interface of a sample page. The Notes column now has a different label and tooltip that are user-defined. In addition, the value has been formatted to display with an italic font, and when in edit mode the Notes field will contain five rows of a multiline textbox instead of the default three lines.

The DataBindings collection can do much more than I've described so far. For example, you can associate a data-bound list of choices when editing a given foreign-key field and resolve the key through an automatic JOIN when in view mode. The ReportsTo column of the Employees table (from the Northwind database) points to the employee ID of the boss. The following code snippet illustrates how to pick up his ID from the existing employees while editing and how to display the full name otherwise:
DataBoundField d;
d = new DataBoundField("reportsto", "employees", "lastname", "employeeid");
d.LabelText = "Boss";
d.FormatText = "The boss is <b>{0}</b>";
d.ToolTipText = "Name of the boss";
data.DataBindings.Add("ReportsTo", d);
The alternate class constructor defines a lookup table for the field. You specify the name of the field to be mapped, the lookup table, and the fields to use to populate the dropdown list control for text and value. There's a bit of redundancy here as the field name appears both as the key of the collection item and as the BoundField member of the DataBoundField object. Figure 9 shows the user interface of the record being edited in this way; Figure 10 shows the ad hoc formatting when in view mode.

Figure 10 Ad Hoc Formatting

The SqlDataNavigator control automatically performs an internal query to retrieve the information in the user interface. You can make this particular aspect of programming more effective and flexible by firing a custom event to the page requesting the data. The class constructor used to look up on external tables sets the CtlType property of the DataBoundField to ControlType.DropDown:
public enum ControlType : int
{
    TextBox = 0,
    DropDown = 1,
    CheckBox = 2
}
If the data to render lends itself to representation as a binary type of information (yes/no, on/off, true/false), you can use a checkbox control instead of textboxes or lists. For example, in the Employees table, the ReportsTo column contains the ID of the boss. The column allows for nulls, meaning that the given employee does not report to anyone. Although not strictly Boolean, this piece of information can be adapted to display through a checkbox that answers the question: does he or she report to anyone? If the value of the column is greater than 0, the employee has a boss; otherwise, he or she does not report to anyone.

Figure 11 Displaying Binary Info

Figure 11 shows the output of the following code:
DataBoundField d = new DataBoundField();
d.CtlType = ControlType.CheckBox;
d.ToolTipText = "Shows whether the employee reports to someone";
d.LabelText = "Has a boss?";
data.DataBindings.Add("ReportsTo", d);
In view mode, the SqlDataNavigator control shows Yes/No text. In edit mode you have a checkbox whose text is always the true string. The value of the column is converted to a Boolean and the result determines whether or not the checkbox is checked. The control's code also ensures that any null values encountered are rendered as false:
chk.Checked = false;
if (!dbdr.IsDBNull(i))
    chk.Checked = Convert.ToBoolean(dbdr[i]);
However, this is not necessarily a good approach and potentially leads to some data inconsistency. The SqlDataNavigator assumes that you use a checkbox-based representation of the data either if you have truly Boolean data or if you want to abstract over the data. In the latter case, though, you won't allow for editing. In situations in which you must handle buttons with three possible states (true, false, or nothing) you are better off adding a radio button list rather than using checkboxes.
In effect, the optional presence of null values in some fields poses a few design issues that can be summarized in the following question: how do you let users set null values? In the SqlDataNavigator control I assume that if you edit through textboxes, you are going to enter non-empty strings. So if the textbox turns out to be empty at save time, the control sets that column to null. Columns rendered as checkboxes handle the null value as false, but what about dropdown lists? In this case, the control gets slightly smarter and recognizes the nullity as a special case. For example, the ReportsTo column contains the ID of the boss or null. In view mode, employees without bosses can simply be rendered with an empty label. What happens if you need to update the ReportsTo field to hold the value null? To deal with this issue, I decided to add an extra item to the dropdown list to denote the null value:
if (!dbf.Required)
{
    DataTable dt = ds.Tables[0];
    DataRow rowNull = dt.NewRow();
    rowNull[dbf.DataTextField] = "<NULL>";
    ds.Tables[0].Rows.Add(rowNull);
}
If the field accepts nulls then you create a new row and add it to the source table of the dropdown list. To enhance the user interface, you set the Text property of the list item with any text that means NULL. Figure 12 shows how gracefully the SqlDataNavigator control handles the contents of the ReportsTo column.

Figure 12 Reports to Column
The edit mechanism is by far the most complex part of the navigator, and it is the core engine of the editing capabilities of SqlDataNavigator. When you click to insert a new record, the control switches in insert mode and refreshes the grid. There are only a few minor differences between the edit and the insert mode:
void OnInsertNewRecord(Object sender, EventArgs e)
{
    SetupWorkingMode(WorkingMode.Insert);
    BindDataToGrid();
}
Actually, both the Insert and the Edit buttons trigger the same engine—the in-place editing feature of the underlying DataGrid. The idea is that the insertion acts as the update of the record currently displayed. When it comes to this, the SqlDataNavigator control detects the insert mode and adapts the user interface. For example, it clears out all the textboxes, sets a few of them to default values—the DefaultValue field of the DataBoundField class—and, more importantly, runs a different procedure to save the data. Figure 13 shows the typical insertion mask with some default values and automatic handling of auto-increment columns.

Figure 13 Typical Insertion Mask

This way of working isn't problem-free, however. Because the insert is implemented as a special editing of the current record, the SqlDataNavigator control is rather unusable with empty tables. To work around this problem, you can first count the number of records in the data source and add a fictitious row if the count is zero. Instead of connecting the grid to a SqlDataReader object, you use an in-memory DataTable object with just one blank row. When the user clicks Save, you then collect the data and run an INSERT statement.

The Delete operation is even simpler. Click on the toolbar and the control arranges and executes a DELETE. To make your application cool, you can even add script to ask for confirmation:
String js = "return confirm('Do you really want to delete the record?');";
btn.Attributes["onclick"] = js;
Upon creation of the Delete button, you just add an onclick item to the button's Attributes collection.
When the user chooses to save the changes he has made, he clicks on the Save button and the following code performs a number of possible operations:
void OnUpdateCommand(Object sender, DataGridCommandEventArgs e)
{
    switch(Mode)
    {
        case WorkingMode.Edit:
            SaveCurrentRecordChanges();
            break;
        case WorkingMode.Insert:
            InsertCurrentRecord();
            SetVirtualItemCount();
            break;
    }
    SetupWorkingMode(WorkingMode.View);
    BindDataToGrid();
}
Depending on the working mode, the control updates the current record or inserts a new one. Both the INSERT and the UPDATE statements are built by concatenating text into a StringBuilder object. The values are extracted from the page using the Page.Request.Form collection and the textbox control's unique ID. If you snoop through the source code, you see that a lot of facilities such as the automatic duplication of single quotes are already implemented. The neat separation between the various states of the control makes it possible for clients to display the SqlDataNavigator control already in edit or insert mode. As the sample client WebForm demonstrates to you, the SqlDataNavigator control is a table-driven form for both editing and input that you can also customize to a certain extent.

So much for beta stages; next month I'll discuss the final version of the SqlDataNavigator control with a slew of new features, among which will be the ability to import user-defined templates for view and edit modes. Until next month!

Send questions and comments for Dino to cutting@microsoft.com.
Dino Esposito is an instructor and consultant based in Rome, Italy. Author of Building Web Solutions with ASP.NET and ADO.NET (Microsoft Press, 2002), he now spends most of his time teaching classes on ASP.NET and ADO.NET for Wintellect (). Get in touch with Dino at dinoe@wintellect.com.
From the May 2002 issue of MSDN Magazine
Created on 2013-03-02 15:09 by mic_e, last changed 2013-03-15 20:59 by terry.reedy.
With a prompt that uses ANSI color escape codes, python3 input() and python2 raw_input() behave incorrectly when it comes to the wrapping of the first line - almost certainly due to the wrong string length calculation.
The line breaking occurs k characters early, where k is the number of characters in the ANSI escape sequence.
Even worse, the line break does not switch to the next line, but instead jumps back to the beginning, overwriting the prompt and the previously written input.
How to reproduce:
Call input() with a color-coded string as argument, and type until you would expect the usual line break.
Example:
mic@mic-nb ~ $ python3
Python 3.3.0 (default, Dec 22 2012, 21:02:07)
[GCC 4.7.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> prompt = "\x1b[31;1mthis is a bold red prompt\x1b[m> "
>>> len(prompt)
37
>>> input(prompt)
uvwxyzs a bold red prompt> abcdefghijklmnopqrst
'abcdefghijklmnopqrstuvwxyz'
>>>
mic@mic-nb ~ $ python2
Python 2.7.3 (default, Dec 22 2012, 21:14:12)
[GCC 4.7.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> prompt = "\x1b[31;1mthis is a bold red prompt\x1b[m> "
>>> len(prompt)
37
>>> raw_input(prompt)
uvwxyzs a bold red prompt> abcdefghijklmnopqrst
'abcdefghijklmnopqrstuvwxyz'
>>>
mic@mic-nb ~ $ tput cols
57
I have typed directly after the prompt the string 'abcdefghijklmnopqrstuvwxyz',
so the expected result would be
this is a bold red prompt> abcdefghijklmnopqrstuvwxyz
with four more characters of space before the line break
Note that the break occurs exactly 8 characters early, which is the total amount of ANSI escape sequence characters.
Also note that the readline module is imported in my .pyrc file.
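To see how much of the prompt is actually invisible, one can strip the SGR color sequences and compare lengths (a minimal sketch; the regex only covers the color codes used in this report):

```python
import re

# SGR color sequences of the form ESC [ ... m
SGR = re.compile(r"\x1b\[[0-9;]*m")

prompt = "\x1b[31;1mthis is a bold red prompt\x1b[m> "
visible = SGR.sub("", prompt)

print(len(prompt))   # raw length, which is what gets miscounted
print(len(visible))  # columns the terminal actually uses
```

The difference between the two lengths is exactly the number of escape-sequence characters that are not rendered.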
I can reproduce the issue, but only from the interactive interpreter while using input() directly (Linux/py3).
I tried the following things:
$ ./python -c 'print("\x1b[31;1mthis is a bold red prompt> \x1b[m", end=""); input()'
$ ./python -c 'input("\x1b[31;1mthis is a bold red prompt> \x1b[m");'
>>> print("\x1b[31;1mthis is a bold red prompt> \x1b[m", end=""); input()
>>> input("\x1b[31;1mthis is a bold red prompt> \x1b[m")
In the first 3 cases, once I reached the end of the line, the text wrapped to a new line. In the last case it started writing over the prompt instead of going to a new line, and once it reached the end of the line again it wrapped to a new line correctly.
The issue might very well be strictly related to GNU readline.
I have both successfully reproduced it in a C program:
#include <stdio.h>
#include <readline/readline.h>

int main() {
    readline("\x1b[31;1mthis is a bold red prompt\x1b[m> ");
}
gcc -lreadline test.c
and found a fix, hinted at by this stackoverflow post:
Readline uses the characters \x01 and \x02 to mark invisible portions of the prompt, so I am now pre-processing the prompt with this function:
def surround_ansi_escapes(prompt, start="\x01", end="\x02"):
    escaped = False
    result = ""
    for c in prompt:
        if c == "\x1b" and not escaped:
            result += start + c
            escaped = True
        elif c.isalpha() and escaped:
            result += c + end
            escaped = False
        else:
            result += c
    return result
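A regex-based sketch of the same idea: wrap each escape sequence in readline's invisible-text markers. The constant names mirror readline's RL_PROMPT_START_IGNORE and RL_PROMPT_END_IGNORE; the regex is a simplification that only handles CSI sequences such as color codes:

```python
import re

START_IGNORE = "\x01"  # readline: begin invisible characters
END_IGNORE = "\x02"    # readline: end invisible characters

# CSI sequences: ESC [ parameters final-letter
CSI = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")

def readline_safe(prompt):
    # wrap every escape sequence so readline skips it when
    # computing the printed width of the prompt
    return CSI.sub(lambda m: START_IGNORE + m.group(0) + END_IGNORE, prompt)

safe = readline_safe("\x1b[31;1mred\x1b[m> ")
```

A prompt pre-processed this way wraps correctly when passed to input() with readline loaded.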
However, in my opinion this fact deserves at least to be mentioned in the readline documentation.
On Windows, both the command prompt interpreter and IDLE ignore special meanings and print prompt as is. Line wrap to physical next line occurs at 80 chars and movable edge of window respectively.
Michael, in light of your last post, do you still believe a change is needed in the code we distribute, in particular our readline module, whether bugfix or enhancement, or should this be closed? I do not believe that Python itself knows how the output device will interpret the characters sent to it.
Terry, I guess you are right; it is indeed not the job of Python to know how its terminal will print characters; there are a whole lot of issues to consider, such as terminal unicode support, control characters, ansi escape sequences, terminal-specific escape sequences such as terminal title, etc.
I guess that on UNIX, a lot of that information could be taken from terminfo, to provide a method that guesses the length of a text on the terminal that is referenced in the $TERM environment variable.
In fact, I think this is a design problem of readline; readline should print the prompt, and then try to determine its length via the dedicated escape sequence (if it really needs to know the prompt length). In that case, there would be no way to fix this in python - apart from re-implementing. In conclusion, no, I don't believe the python readline code should be modified, maybe an additional, os- and $TERM-dependent method for adding prompt escapes should be added.
However, the issue should definitely be documented in the readline documentation:
importing readline _will_ break input() if the prompt contains non-printable characters that are not escaped by '\x01' and '\x02' characters.
Hence I have changed the bug components to 'Documentation'.
Please suggest the specific change in a specific place that you would like to see. Message is ok, patch even better.
Hi all,
I was searching for an IDE that is capable of Python GUI drawing, much like MSVS can draw GUI windows without actually having to code them manually.
Atm I use NetBeans 6.5 for Python, but I can switch.
Yeah, I'd prefer a Linux based IDE of course :)
Thanks
Since you're on Linux I recommend Glade for pyGtk. Go with Glade 3. Works like a charm, easy to use, saves time (bad Windows support though).
If you're a kde user, then a better choice might be QT designer, which comes with pyQT. I found it a good bit harder than Glade to use, but it's supposed to be good too.
Note: none of these software are IDEs; they are just designers that create UI files that you "load" into your python program, so don't throw away netbeans.
I was searching for an IDE that is capable of Python GUI drawing, much like MSVS can draw GUI windows without actually having to code them manually.
You might want to take a look at Boa-constructor, which works with wxPython. I haven't tried it yet, but it seems to be the best choice I've come across.
I find designers rather cumbersome, they throw in more code than you want to have, and you have a much harder time fleshing out to working program.
You are better off collecting a set of working templates of the various widgets you are interested in, and copy and paste from there to create your program.
If you use the wxPython GUI toolkit then Boa-Constructor isn't too bad, since it also is a regular IDE that you can write and run your programs with. It does create Python code rather than XML code.
On my Ubuntu machine I installed SPE (written in wxPython) and it has wxGlade as a designer tool, seems to work okay. This way you can test out your code quickly.
I remember trying to write a "Boa for Dummies" blow by blow instruction. Take a look at:
It should at least give you a taste what things you are up to with a designer for your GUI programs. Believe me, Boa is one of the friendlier ones.
A while ago I used QtDesigner for a similar task. It layed out the widgets alright, but then it saved it in XML code as a .ui file. PyQT comes with a utility that translates UI to PY code. The Python code it generated gets you into some rather nasty unreadable Python code, big time vomit code!?
Hi; assuming you have the .xml file that was generated from the .glade file, this is what you do:
import pygtk
pygtk.require("2.6")  # whatever your gtk version
import gtk

builder = gtk.Builder()
builder.add_from_file("path/to/xml")
window = builder.get_object("window")
window.show()
More here:
It has been easier for me to "hard code" than to use designers. You have complete control over the code, can make any change, and of course can re-use code :)
On my Ubuntu machine I installed SPE (written in wxPython) and it has wxGlade as a designer tool, seems to work okay. However, the output is an XML file again. You can use wxPython code to import these XML files as resource files.
I filed this code that Vegaseat left some a time ago:
# use an xml resource string or file to create widgets
# you can create .xrc or .xml files with something like wxGlade
# wxGlade from:
# source: vegaseat

import wx
import wx.xrc as xrc

class MyFrame(wx.Frame):
    def __init__(self, parent, title):
        wx.Frame.__init__(self, parent, wx.ID_ANY, title, size=(400, 300))
        self.toggle = True
        # load the xml resource file
        # should be in your working folder or use full path
        res = xrc.XmlResource('resource.xrc')
        '''
        # or you can use a resource string directly
        res = xrc.EmptyXmlResource()
        res.LoadFromString(xml_resource)
        '''
        # create the panel from the resource
        self.panel = res.LoadPanel(self, 'MyPanel')
        # bind mouse click to the buttons in the resource
        self.Bind(wx.EVT_BUTTON, self.onClick, id=xrc.XRCID('ColourButton'))
        self.Bind(wx.EVT_BUTTON, self.showInfo, id=xrc.XRCID('InfoButton'))

    def onClick(self, event):
        """do something with the button click"""
        if self.toggle:
            self.panel.SetBackgroundColour('green')
            self.toggle = False
        else:
            self.panel.SetBackgroundColour('red')
            self.toggle = True
        self.panel.Refresh()

    def showInfo(self, event=None):
        """optional show xml code"""
        dlg = wx.MessageDialog(self, xml_resource,
            'This is the xml code used:',
            wx.OK | wx.ICON_INFORMATION)
        dlg.ShowModal()
        dlg.Destroy()

# this is an xml resource file
# here a simple panel with two buttons
# (the opening <resource>/<object> tags were lost in transcription
# and are reconstructed here from the closing tags and LoadPanel call)
xml_resource = """<?xml version="1.0"?>
<resource>
  <object class="wxPanel" name="MyPanel">
    <bg>#E6E6FA</bg>
    <object class="wxButton" name="ColourButton">
      <bg>#F0E68C</bg>
      <label>Click me!</label>
      <pos>15,10</pos>
    </object>
    <object class="wxButton" name="InfoButton">
      <bg>#F0E68C</bg>
      <label>Information</label>
      <pos>15,40</pos>
    </object>
  </object>
</resource>
"""

# for the test save the xml code as resource.xrc
fout = open('resource.xrc', "w")
fout.write(xml_resource)
fout.close()

app = wx.App(0)
title = 'xml resource to create widgets'
MyFrame(None, title).Show()
app.MainLoop()
Thanks a million guys, I'll keep popping in and bothering you with more questions, as they arise :) ...
The Unit Test Window lets you run, debug, filter and search tests within the Visual Studio environment.
All available unit tests in the current solution are shown in a tree on the left side. An icon that indicates the current status of the test is shown next to each test that is found in the solution. Three different stages are recognized:
- Failed - The test has been run and failed.
- Passed - The test has been run and passed.
By double clicking on a test you can quickly navigate to the selected test.
By using the toolbar on the top, you can run only selected or all tests, debug selected tests, abort a test run, repeat the last test run, show only failed tests, and show only ignored tests.
By default, information for the namespace, the type and the member is shown, but you can easily change this to another type of information that you found more useful. Here is a snapshot of the available options:
Build Options help you to save time by choosing the most appropriate setting for your current needs.
You can also use the Collapse All/Expand All buttons to collapse/expand all results.
To find your tests use the Search bar below the tool bar. Start typing and the tests in the tree will be automatically filtered matching your criteria.
On the right side of the window you can see details about your last test run like Exception message and Stack Trace. Click on each test on the tree to see details from the run.
Additionally you can click on the Console output and Console errors tabs below to find more information.
build terminology
javascript builds:
java builds: standard directory layout | artifacts and repositories | maven targets | pom.xml
windows builds: nmake | msbuild
version used
The version used for this reference sheet.
show version
How to get the version of the build tool.
name of build file
The customary name for the build file.
hello world
How to use the build tool to write to standard out.
build hello world
How to build an executable which writes to standard out.
statement separator
How statements are terminated.
comment
How to put a comment in the build file.
Invocation
specify build file
How to use to use a build file other than the default.
dry run
How to do a dry run. Commands that would be performed to build the target are echoed, but no actions are performed.
keep going after errors
When multiple targets are specified, how to keep going even if some of the targets fail to build.
run jobs in parallel
How to run recipes in parallel.
make:
make -j N
list targets
List the targets which can be specified on the command line.
make:
touch targets
How to run touch on all targets without building them. This updates the last modified timestamp to the present and creates an empty file if the target does not exist.
always rebuild
How to rebuild all targets, even if they are already built and up-to-date.
up-to-date test
How to test whether the targets are up-to-date. Returns a exit status of zero if they are.
silent
How to run the build tool silently, or at least with minimal output.
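For GNU make specifically, the invocation entries above map to these flags (a quick summary, not exhaustive):

```sh
make -f build.mk   # specify build file
make -n            # dry run (print recipes, execute nothing)
make -k            # keep going after errors
make -j 4          # run up to 4 jobs in parallel
make -t            # touch targets without building them
make -B            # always rebuild
make -q            # up-to-date test (exit status 0 if current)
make -s            # silent
```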
Variables
set and access variable
How to set a variable; how to use the value stored in a variable.
make
Make variables are implemented as macro substitutions. There is no difference between the $(foo) and ${foo} syntax. There is, however, a difference between using := and = to perform variable assignment.
:= is the immediate assignment operator. It expands variables on the right hand side when the assignment is performed. A variable defined by immediate assignment is called simply expanded. The immediate assignment behaves like assignment in other programming languages.
= is the deferred assignment operator. It expands variables on the right each time the variable on the left is referenced. A variable defined by deferred assignment is called recursively expanded. Variables can be used on the left side of a deferred assignment before they are defined. If a $(wilcard …) or $(shell …) function is used on the right side, the value of the variable may be different each time it is referenced. Deferred assignment should perhaps be regarded as a misfeature of the original version of make.
The flavor function can be used to determine whether a variable is simple or recursive:
rec = foo
sim := foo

# echoes "recursive" or "simple":
rec sim:
	@echo $(flavor $@)
The variable expressions $(foo) or ${foo} can be used on the left side of an assignment. The variables are immediately evaluated to determine the name of the variable being defined.
Whitespace after an assignment operator is trimmed. It is nevertheless possible to set a variable to a space:
empty :=
space := $(empty) $(empty)
undefined variable access
What happens when a variable which is undefined is accessed.
redefine variable
What happens when a variable is redefined.
append to variable
How to append to variable.
conditionally define variable
How to conditionally define a variable.
environment variable
How to access an environment variable.
make:
Every environment variable becomes a make variable.
The origin function can be used to identify which variables were environment variables.
example of the origin function and a list of return values
set variable if doesn't exist
How to set a variable if it is not already defined.
raise error if variable not set
How to exit before executing any recipes if a variable is not set.
warn if variable not set
How to write a message to standard error if a variable is not set.
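The last four variable entries can be sketched in GNU make syntax as follows (variable names are illustrative):

```make
CC ?= gcc          # set only if not already defined
CFLAGS += -Wall    # append to an existing variable

ifndef PREFIX
  $(error PREFIX is not set)    # abort before any recipe runs
endif

ifndef DESTDIR
  $(warning DESTDIR is not set) # message on stderr, build continues
endif
```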
Strings
pattern substitution
global substitution
make
The comma is used as an argument separator. If a comma appears in a match pattern or a replacement pattern, one must store the pattern in a variable:
comma := ,
comma_list := foo,bar,baz
semicolon_list := $(subst $(comma),;,$(comma_list))
shell command substitution
How to get the output of a shell command as a string.
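For example, in GNU make, using immediate assignment so the command runs once at parse time:

```make
GIT_SHA := $(shell git rev-parse --short HEAD)
TODAY   := $(shell date +%F)
```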
Arrays
foreach
dirname and basename
Targets and Prerequisites
file target
A target which creates a file. The target should not execute if the file exists and is more recent than its prerequisites.
file target with prerequisites
order only prerequisite
How to define a prerequisite which will be built if it does not exist before the target, but will not trigger a re-build of the target if it is newer than the target.
Directories should usually be order only prerequisites, because the last modification timestamp of a directory is updated whenever a file is added to or removed from the directory.
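In GNU make, order-only prerequisites are listed after a pipe character. A sketch with a directory prerequisite:

```make
# "objdir" is created if missing, but its timestamp never
# triggers a rebuild of the object file
objdir/main.o: main.c | objdir
	$(CC) -c -o $@ $<

objdir:
	mkdir -p $@
```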
phony target
A target which always executes, even if a file of the same name exists.
make:
A target can be declared phony by making it a prerequisite for the .PHONY target. This ensures execution of the recipe even if a file with the same name is created in the Makefile directory.
Older versions of Make which don't support .PHONY use the following idiom to ensure execution:
clean: FORCE
	rm $(objects)

FORCE:
default target
Which target is executed if the build tool is run without an argument; how to specify the target which is executed if the build tool is run without an argument.
universal target
How to define a default recipe.
Recipes
shared recipe
How to define targets with a common recipe.
shared target
How to define a target with multiple recipes. If the target is invoked, all recipes are executed.
make:
If a target has multiple recipes, all of them must use the double colon syntax. The recipes are executed in the order in which they are defined in the Makefile.
Each recipe can have its own prerequisites. Recipes for which the the prerequisites exist and the target is newer than the prerequisites will not execute.
empty recipe
invoke shell
multiline variable
How to define a variable containing newlines.
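In GNU make, a multiline value is declared with define/endef. Exporting the variable and echoing "$$NAME" preserves the embedded newlines in a recipe (the help target here is a hypothetical example):

```make
define USAGE
usage: make <target>
targets: build test clean
endef
export USAGE

help:
	@echo "$$USAGE"
```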
Rules
variables used by built-in rules for c
Many of the variables have an accompanying variables for setting flags:
- AR: ARFLAGS
- AS: ASFLAGS
- CC: CFLAGS
- CXX: CXXFLAGS
- CPP: CPPFLAGS
- ld: LDFLAGS
- LEX: LFLAGS
- YACC: YFLAGS
Files and Directories
glob filenames
How to match files in a directory using pattern matching.
Shell-style file pattern matching uses ? to match a single character and * to match zero or more characters.
recursively glob filenames
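A GNU make sketch for both entries; there is no built-in recursive glob, so the usual workaround is to shell out to find:

```make
SRCS     := $(wildcard src/*.c)
ALL_SRCS := $(shell find src -name '*.c')
```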
delete file
delete files matching pattern
delete directory and contents
copy file
copy directory and contents
move file
make directory
make directory and parents
symbolic link
Compilation
Templates
create file from template
How to generate a file from a template with macro expansion.
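A common make-based sketch, substituting placeholders with sed (the @VERSION@ placeholder convention is an assumption, not a standard):

```make
VERSION := 1.2.3

config.h: config.h.in
	sed -e 's/@VERSION@/$(VERSION)/g' $< > $@
```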
Maven Repositories
Libraries and Namespaces
include
Recursion
recursive invocation
How to invoke the build tool on a build file in a subdirectory.
make:
Using $(MAKE) instead of make guarantees that the same version of make is used and passes along command line options from the original invocation.
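A typical sketch (subdir is an assumed subdirectory containing its own Makefile):

```make
subsystem:
	cd subdir && $(MAKE)
```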
Make
If precautions are taken it is possible to write a Makefile which will build on a variety of POSIX systems:
Rake
Ant
Gradle
Build Terminology
A project is a directory and the files it contains.
A repository is a project under version control.
A build generates files that are not kept under version control.
A build script is code which performs a build.
A build tool executes a build script.
An install copies files from a project to locations on the local host outside of the project. Alternatively an install creates links from locations on the local host outside of the project to files inside the project.
A deploy places files from a project on a remote host.
A target is a file in a project which is built instead of kept under version control.
A dependency is a file that must exist to build a target.
The dependency graph describes the files that must exist to build a target. Each node is a file. Each directed edge points from a target to a dependency of that target.
A recipe is code which builds a target from the dependencies.
A rule is a recipe which can build multiple targets of a type. A rule usually exploits conventions in how the files are named.
A task is code executed by build tool which does not create a target.
JavaScript Builds
Java Builds
Maven standard directory layout
Introduction to the Standard Directory Layout
Sbt Directory Structure
The Maven Standard Directory Layout prescribes the following directory structure for JVM projects:
- src/main/java
- src/main/language
- src/main/resources
- src/main/filters
- src/test/java
- src/test/language
- src/test/resources
- src/test/filters
- target
- README.txt
- pom.xml
- build.xml
Source code goes into src. Java source code, other than tests, goes into src/main/java. Class files and other files generated by the build system go into target. Build files are in the root directory.
Sbt uses the standard directory layout. It uses these additional directories and files:
- src/main/scala
- src/test/scala
- project
- build.sbt
The project directory contains additional .sbt and .scala files which are part of the build system.
artifacts and repositories
Maven repositories call the entities which they make available artifacts. Usually they are JAR files. Each artifact is identified by a groupId, artifactId and version. The groupId is sometimes the reversed domain name of the organization that produced the artifact.
An organization may set up its own repository, but there is a widely used public repository called the Central Repository which has a web site for browsing the available artifacts.
targets
pom.xml
Create the file src/main/java/Example.java to be compiled:
import java.io.File;
import java.io.IOException;

import org.apache.commons.io.FileUtils;

public class Example {
    public static void main(String[] arg) {
        File f = new File("/tmp/foo/bar/baz");
        try {
            FileUtils.deleteDirectory(f);
        } catch (IOException e) {
        }
    }
}
Create a pom.xml file:
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.example</groupId>
  <artifactId>example</artifactId>
  <version>1.0</version>
  <packaging>jar</packaging>
  <name>Maven Example</name>
  <url></url>
  <dependencies>
    <dependency>
      <groupId>commons-io</groupId>
      <artifactId>commons-io</artifactId>
      <version>1.3.2</version>
    </dependency>
  </dependencies>
</project>
Compile the code and package it in a JAR:
$ mvn compile
$ mvn package
Windows Builds
nmake
Visual Studio includes two tools for building on Windows: NMAKE and MSBuild.
NMAKE is similar to Unix make. Recipes use the Windows command prompt instead of a Unix shell.
msbuild
MSBuild is similar to Ant. It uses an XML file to configure the build. The XML file has a .proj suffix.
To get the version of MSBuild that is installed:
> msbuild /version
To run an MSBuild file which echoes "Hello, World!":
> type hello.proj
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="Build">
    <Message Text="Hello, World!" />
  </Target>
</Project>
> msbuild
If the working directory contains multiple .proj files, we must specify which one to use:
> msbuild hello.proj
If a project contains multiple targets, we can use the /t switch to specify the target. We can specify multiple targets if we separate them by commas or semicolons:
> type two.proj
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="foo">
    <Message Text="foo" />
  </Target>
  <Target Name="bar">
    <Message Text="bar" />
  </Target>
</Project>
> msbuild /t:foo two.proj
> msbuild /t:bar two.proj
> msbuild /t:foo,bar two.proj
A build file can specify the default target to be run using the DefaultTargets attribute of the Project element. Separate the names of multiple targets with semicolons:
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="foo;bar">
  <Target Name="foo">
    <Message Text="foo" />
  </Target>
  <Target Name="bar">
    <Message Text="bar" />
  </Target>
</Project>
If there is no default target and no /t switch, the first target in the build file is invoked.
If there are tasks which can be run in parallel, we can instruct MSBuild to use multiple cores. The /m switch is similar to the -j flag of make:
> msbuild /m:10
We can define properties (like in Ant) at the top of a build file and later use them in targets:
> type hello2.proj
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <Msg>Hello, World!</Msg>
  </PropertyGroup>
  <Target Name="Build">
    <Message Text="$(Msg)" />
  </Target>
</Project>
> msbuild hello2.proj
We can override the value of a property when we invoke MSBuild:
> msbuild /p:Msg="Goodbye, World!" hello2.proj
We can define a property containing the value of an environment variable:
<PropertyGroup>
  <Invoker>$(USERNAME)</Invoker>
</PropertyGroup>
Source: http://hyperpolyglot.org/build
Hey all! Longtime lurker here...
Over the past couple of months, I've been working on a plug-in for Maya that allows users to easily create mathematically convex hulls around their meshes in Maya (primarily for collision creation). I've released the source as well as a few example scripts up on github here:
DDConvexHull Github Page
Download v1.0 as Zip
It should be as pipeline agnostic as possible, and is covered under the MIT license - which means your legal team should have no problem with you using it (or even modifying it). Let me know what you think!
will check this out, seems useful for making props for game engines.
so how does the usage work, since it doesn't provide a command, and just a node?
Hi passerby, thanks for asking! Please check out this wiki page:
Basically, you connect the output of one or more meshes into the inputs of the hull node. Then connect the output of the hull node into inMesh attribute of an empty mesh node.
You can use the included scripts and the attribute editor to do the above, or you can write your own script and put it into your pipeline however you like. There is example code for both.
Also you may be interested in the attribute reference:
Hope that helps!
thanks, works like a charm.
in cases where i want multiple hulls for one mesh to better fit it, would i just do a component connection too, then use multiple ddConvexHull nodes and multiple output meshes?
Yes, that's exactly what you would do.
BTW, after your last post, I updated the utils script to add a "createHull" function. Not having that function was an oversight on my behalf (I guess I was just too excited to get it out to folks!)
You can snag the updated version here:
One last thing, if you don't delete your history, that mesh will update as you move your object around. If you do delete your history, you'll lose the dynamic updates, but the resultant hull will still be cached and saved in the mesh nodes.
Let me know if you have other questions or feedback (or if you want to contribute)!-J
ya, don't think i will directly change it, being more of a Python guy than a C++ guy, but i might work it into some scripts i use for export to various game engines.
if i did so, would you care if i bundled the binary for it? and credit you or would you rather i just put a link to it in the readme?
Go ahead and bundle away! A few requests:- Put a link to the github repo in the readme, and credit Jonathan Tilden- Mention in the Readme that DDConvexHull is covered under the MIT License (which basically says do what you want with it)- Post back a link to your bundle (if you can) so we can all check it out!
I hope you've found the plug-in useful!
ya will be a while till i get to it, since im really busy lately, but will do all of that for you.
Would i need to compile for each Maya version? like would the build i made today for 2012 still work with 2013 or 2014 when it comes out?
No unfortunately, with compiled plugins, you have to link against each maya version :(. The scripts should be agnostic.
Python plugins you don't have to, but this would probably be too slow to execute as a Python plugin.
I have an open issue to make it a bit easier to support multiple versions with minimum fuss, but I haven't gotten around to it just yet.
ah, well since it is under MIT and my use is non-commercial, i could prolly just download 2011 and 2013 from the autodesk student site to build against.
Thank you very much for sharing this. Any chance someone can compile a binary for Maya 2014? I don't have a Visual Studio environment here to build my own. Thanks!
Ya I can make you a build when I get back home during the week for you?
2014 64bit?
Yep, 2014 64bit. Much appreciated. Thanks!
Hey, I found someone in our shop who compiled this for me. Thank you again for your offer. It's working great. Thanks!
thats great, since while i was working on some unrealscript i hosed my vs2010 install with an add-on, so it was going to take me a while to get it going again.
if i can get permission from the plugin maker, i may compile for 2014 64bit and 2012 64bit and host builds of it.
Here's a quick shelf button Python script that will make a single convex hull from the selected meshes (or faces). Requires ddConvexHull to be installed on your system.
import maya.cmds as cmds
import DDConvexHullUtils
selection = cmds.ls(sl=True, l=True)
# Create the nodes
convexHullNode = cmds.createNode('DDConvexHull')
outputMeshNode = cmds.createNode('mesh', n='outputHullShape')
# Connect the input object(s) to the hull node
DDConvexHullUtils.addObjects(convexHullNode, objects=selection)
# Connect the output of the hull to the input of the outputMeshNode
cmds.connectAttr('%s.output' % convexHullNode, '%s.inMesh' % outputMeshNode)
can this create hulls around selected components? In other words, could I use this to attach 3 convex hulls to a single mesh that is concave?
edit: nvm, re-read the thread more closely and saw this has already been answered.
Bump. Could you release this as a built plugin? I do not have Visual Studio.
what version do you need nightshade? i have a few compiled ones.
also i can likely compile for 2015 64bit without too much trouble
Source: http://tech-artists.org/t/maya-ddconvexhull-convex-hull-plugin-shameless-plug/3455
Subject: Re: [Boost-bugs] [Boost C++ Libraries] #2660: `fd_set' has not been declared
From: Boost C++ Libraries (noreply_at_[hidden])
Date: 2009-01-16 18:31:50
#2660: `fd_set' has not been declared
--------------------------------------+-------------------------------------
Reporter: jens.luedicke_at_[hidden] | Owner: anthonyw
Type: Bugs | Status: new
Milestone: Boost 1.38.0 | Component: thread
Version: Boost 1.37.0 | Severity: Problem
Resolution: | Keywords:
--------------------------------------+-------------------------------------
Changes (by steven_watanabe):
* owner: => anthonyw
* component: None => thread
Comment:
So to be clear, do you get an error from a file that
contains only:
{{{
#include <boost/thread.hpp>
}}}
-- Ticket URL: <> Boost C++ Libraries <> Boost provides free peer-reviewed portable C++ source libraries.
This archive was generated by hypermail 2.1.7 : 2017-02-16 18:49:59 UTC
Source: https://lists.boost.org/boost-bugs/2009/01/5772.php
Application areas such as security systems, automation, and robotics heavily rely on motion detection systems. The scope and depth of each system vary from one application area to another. However, the concept is the same. At the center of each motion detection system lies a motion sensor. This device samples the environment for different parameters and sends a measurement of the physical quantity to a computer. The computer then decides the presence or absence of motion in the environment.
There are different types of motion sensors, but we will focus mainly on the PIR sensor in this tutorial. We will describe how the devices work and the steps to connect the sensor to our Raspberry Pi. Finally, we will show how to activate a relay when the PIR sensor detects motion.
What is a PIR sensor
PIR sensors are devices that are used to detect motion within the sensor’s field of view. The full name is Passive Infrared, and in some texts, they are often referred to as Pyroelectric sensors. As the name suggests, the devices are passive sensors, which means that the sensor does not use its energy for detecting purposes. They work by detecting the energy radiated from the physical environment. Specifically, the module consists of a pyroelectric sensor. This is an electrically polarized crystal that generates a surface electric charge when exposed to infrared radiation levels. Humans and other animals such as cats and dogs are sources of infrared radiation, making the PIR sensor a perfect candidate for human or animal motion detection within the sensor’s range.
The white dome cover of the PIR sensor is known as the Fresnel Lens. This small device focuses the infrared signals onto the pyroelectric sensor, which means that the module can detect weak signals from the radiating sources. Also, a fresnel lens extends the sensor’s field of view.
How PIR sensors work
Motion Detection
A PIR sensor, shown above, consists of crystals that respond to changes in infrared energy. The sensor itself is split into two windows, window A and window B, as shown in the diagram below. The two windows, A and B, are oriented in the same plane, and they have the same field of view, but they do not detect the IR levels simultaneously. There is a delay of a few microseconds between them, and this delay enables us to measure the IR difference between the two windows. This difference will let us know if the target has moved.
When the environment is empty, the two windows detect the same infrared levels and cancel each other out. However, when a warm target, such as a dog or human being, moves into the sensor’s field of view, the first window intercepts the IR. This causes a positive differential change between the two windows because the IR level in A will be higher than the IR level in B. When the object is leaving the area, the IR level in B will be higher than the IR in A, which causes a negative differential change between the two windows. Finally, the onboard IR sensor IC detects these pulse changes and decides motion in the environment, hence the presence of a target.
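The differential principle above can be sketched in a few lines of Python (an illustrative simplification, not the actual BISS0001 logic; the function name and threshold value are ours):

```python
# Sketch of the two-window differential principle: the PIR IC effectively
# compares the IR levels seen by windows A and B and reports motion when
# the differential crosses a threshold (threshold value is made up).
def detect_motion(ir_a, ir_b, threshold=0.5):
    """Return True when the IR differential between the windows is large."""
    return abs(ir_a - ir_b) > threshold

# Empty room: both windows see the same IR level, so no motion.
print(detect_motion(1.0, 1.0))   # False
# A warm body enters window A's field of view first: positive differential.
print(detect_motion(2.0, 1.0))   # True
```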
PIR Sensing Device
Pin Diagram
The complete PIR sensing device consists of the pyroelectric sensor and some supporting electronics. At the center of this lies the main chip—the BISS0001 high-performance micropower PIR detector. The IC’s main job is to perform some processing on the received signals and output a pulse for further processing by the host CPU. Here is the description of the main components of the PIR sensor electronics hardware.
- The module has three pins: Ground and Vcc for powering the module. The Digital Out Pin gives a high logic level as a target moves within the sensor’s range.
- Delay time adjust. The first potentiometer adjusts the delay time. This is the time the output signal stays HIGH if the PIR detects motion. This time can range from 0.3 seconds to 5 minutes. Turning the potentiometer clockwise or right increases the delay time. Consecutively, turning the potentiometer counterclockwise or left decreases the delay time.
- Sensitivity adjust. The second potentiometer on the board is the sensitivity adjustment. We use this potentiometer to control the sensing range, which is approximately 3-7m. Turning this pot clockwise or right decreases the range, while turning it counterclockwise or left increases the sensing range.
- Trigger settings. There are two operational modes:
- Single trigger or non-repeatable mode: Here, when the sensor output is high and the delay time is over, the output will change from a HIGH to a LOW level.
- Repeatable trigger mode: This mode will keep the output high all the time until the detected object moves out of the sensor’s range.
Using an HC-SR501 PIR Sensor
How to connect a PIR sensor to the Rpi
The PIR sensor has three pins: GND, Vcc, and signal out. We connect these pins to the RPi, as shown in the diagram below.
Programming the PIR sensor with Python
We are going to use the
gpiozero
MotionSensor class to read data from the module. Here is a starter code to interface the PIR sensor with the Raspberry Pi. If you do not have the gpiozero library installed, it’s time to do so now. Just use the command
sudo apt install python3-gpiozero, and you are good to go.
from gpiozero import MotionSensor

pir = MotionSensor(4)

while True:
    pir.wait_for_motion()
    print("Motion detected!")
The gpiozero MotionSensor library makes it easy to develop sensor applications on the RPi. This library has several functions such as waiting for a motion, what to do when the sensor detects motion, and what to do when no motion is detected. For our motion detection system, we import the MotionSensor library using the code:
from gpiozero import MotionSensor. We use the digital pin 4 for reading data from the PIR sensor:
pir = MotionSensor(4). Then, we enter an infinite loop that waits for the presence of motion within the sensor’s range using
pir.wait_for_motion(). Finally, we take action when the PIR sensor’s state changes. In this case, we are only printing out to the terminal that motion has been detected. Below is the output.
Sample Project
For this example project, we are going to activate a 5V relay when the PIR sensor detects motion in a room. We are going to need:
- Raspberry Pi
- HC-SR501 PIR sensor
- Relay module
from gpiozero import MotionSensor, LED
from signal import pause

pir = MotionSensor(4)
relay = LED(17)

# Register the callbacks once; gpiozero fires them on each state change.
pir.when_motion = relay.on
pir.when_no_motion = relay.off

# Keep the script alive so the callbacks can run.
pause()
Code Description
The code above will activate a relay when the PIR sensor detects motion. We follow the same setup as we did before, but for this example, we are going to import an additional library from gpiozero. We import the LED library and attach an instance of the LED to pin 17 of the RPi. This will act as our output pin: relay = LED(17). The input pin of the motion sensor remains unchanged on pin 4:
pir = MotionSensor(4). The LED library will also help us to change the state of the output pin when the sensor detects motion:
pir.when_motion = relay.on. To make sure that the output remains LOW when we do not have motion, we use:
pir.when_no_motion = relay.off. Finally, schedule the Python script with cron for running soon after boot. You can refer to this article for scheduling tasks with cron.
Hardware Connection
Please note that you will need to follow the best practices for mounting motion sensors to achieve the best PIR detection performance. That material is beyond the scope of this article.
Source: https://www.circuitbasics.com/detecting-motion/?recaptcha-opt-in=true
Type: Posts; User: btap0644
Hi, I have the following text file (ExamMarks.txt)
John, 85, 95, 90
Micheal, 60, 75, 75
I want to extract a line and take the Name and separately and the ints separately. Then I want to print...
Hi I have compiled the sample PLayCap from the directshow.NET website in C#.
samples are available here :...
Maybe next time I ought to post the code section when asking a question.
Well when I tried in netbeans 6.8 and Eclipse also as you know already:
1) Netbeans 6.8 - it was in the src folder (this src folder is inside the SingleLaneBridge folder)
2) Eclipse - I put it in...
This program compiled without errors.
Any other opinions
Hi,
Note: To see a clearer version of the question asked below, please go to
I...
Hi,
I am using dshownet(first time) and C#. I have got a sample to take the web cam input and display it on a form. I now need to draw a rectangle on top of the video stream using the mouse. (the...
I am doing a project, called user initiated real time object tracking system. Here, is what I
want to happen in the project:
1) Take a continuous stream from a web camera.
2) Using the mouse...
i want to a method to return an integer value and print it outside the method
here is my code
public class returnvariable{
public static void main (String[] args) {
public int...
Source: http://forums.codeguru.com/search.php?s=b5bb8995a583dbf6eeb5bf7609604711&searchid=7001775
What is Crontab
Cron is a software utility that allows us to schedule tasks on Unix-like systems. The name is derived from the Greek word "Chronos", which means "time".
The tasks in Cron are defined in a crontab, which is a text file containing the commands to be executed. The syntax used in a crontab is described below in this article.
Python presents us with the crontab module to manage scheduled jobs via Cron. The functions available in it allow us to access Cron, create jobs, set restrictions, remove jobs, and more. In this article we will show how to use these operations from within your Python code.
For the interested reader, the official help page for the module contains further details.
Crontab Syntax
Cron uses a specific syntax to define the time schedules. It consists of five fields, which are separated by white spaces. The fields are:
Minute Hour Day Month Day_of_the_Week
The fields can have the following values:

┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of the month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of the week (0 - 6, Sunday to Saturday;
│ │ │ │ │               7 is also Sunday on some systems)
│ │ │ │ │
* * * * * command to execute
Source: Wikipedia, "Cron".
Cron also accepts special characters so you can create more complex time schedules. The special characters have the following meanings:
Let's see some examples:
* * * * * means: every minute of every hour of every day of the month, for every month, for every day of the week.

0 16 1,10,22 * * tells cron to run a task at 4 PM (the 16th hour) on the 1st, 10th and 22nd day of every month.
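To make the syntax concrete, here is a small pure-Python sketch (independent of cron and python-crontab; the helper name expand_field is ours) that expands a single crontab field into the values it matches:

```python
def expand_field(field, lo, hi):
    """Expand a crontab field such as '*', '*/15' or '1,10,22' into values."""
    values = set()
    for part in field.split(','):
        if part == '*':                       # any value in range
            values.update(range(lo, hi + 1))
        elif part.startswith('*/'):           # step values
            values.update(range(lo, hi + 1, int(part[2:])))
        else:                                 # a literal value
            values.add(int(part))
    return sorted(values)

print(expand_field('1,10,22', 1, 31))   # [1, 10, 22]
print(expand_field('*/15', 0, 59))      # [0, 15, 30, 45]
```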
Installing Crontab
Crontab is not included in the standard Python installation. Thus, the first thing we have to do is to install it.
This is done with the
pip command. The only thing to consider is that the name of the module is 'python-crontab', and not just 'crontab'. The following command will install the package in our machine:
$ pip install python-crontab
Getting Access to Crontab
According to the crontab help page, there are five ways to include a job in cron. Of them, three work on Linux only, and two can also be used on Windows.
The first way to access cron is by using the username. The syntax is as follows:
cron = CronTab(user='username')
The other two Linux ways are:
cron = CronTab() # or cron = CronTab(user=True)
There are two more syntaxes that will also work on Windows.
In the first one, we call a task defined in the file "filename.tab":
cron = CronTab(tabfile='filename.tab')
In the second one, we define the task according to cron's syntax:
cron = CronTab(tab="""* * * * * command""")
Creating a New Job
Once we have accessed cron, we can create a new task by using the following command:
cron.new(command='my command')
Here,
my command defines the task to be executed via the command line.
We can also add a comment to our task. The syntax is as follows:
cron.new(command='my command', comment='my comment')
Let's see this in an example:
from crontab import CronTab

cron = CronTab(user='username')
job = cron.new(command='python example1.py')
job.minute.every(1)
cron.write()
In the above code we have first accessed cron via the username, and then created a job that consists of running a Python script named example1.py. In addition, we have set the task to be run every 1 minute. The
write() function adds our job to cron.
The example1.py script is as follows:
from datetime import datetime

myFile = open('append.txt', 'a')
myFile.write('\nAccessed on ' + str(datetime.now()))
myFile.close()
As we can see from the above code, the program will open and append the phrase "Accessed on" with the access date and time added.
The result is as follows:
Figure 1
As we expected, Figure 1 shows that the file was accessed by the program. It will continue to do the assigned task while the example1.py program is running on cron.
Once cron is accessed, we can add more than one job. For example the following line in above example would add a second task to be managed by cron:
job2 = cron.new(command='python example2.py')
Once a new task is added, we can set restrictions for each of them.
Setting Restrictions
One of the main advantages of using Python's crontab module is that we can set up time restrictions without having to use cron's syntax.
In the example above, we have already seen how to set running the job every minute. The syntax is as follows:
job.minute.every(minutes)
Similarly we could set up the hours:
job.hour.every(hours)
We can also set up the task to be run on certain days of the week. For example:
job.dow.on('SUN')
The above code will tell cron to run the task on Sundays, and the following code will tell cron to schedule the task on Sundays and Fridays:
job.dow.on('SUN', 'FRI')
Similarly, we can tell cron to run the task in specific months. For example:
job.month.during('APR', 'NOV')
This will tell cron to run the program in the months of April and November.
An important thing to consider is that each time we set a time restriction, we nullify the previous one. Thus, for example:
job.hour.every(5)
job.hour.every(7)
The above code will set the final schedule to run every seven hours, cancelling the previous schedule of five hours.
Unless, we append a schedule to a previous one, like this:
job.hour.every(15)
job.hour.also.on(3)
This will set the schedule as every 15 hours, and at 3 AM.
The 'every' condition can be a bit confusing at times. If we write
job.hour.every(15), this will be equivalent to
* */15 * * *. As we can see, the minutes have not been modified.
If we want to set the minutes field to zero, we can use the following syntax:
job.every(15).hours()
This will set the schedule to
0 */15 * * *. Similarly for the 'day of the month', 'month' and 'day of the week' fields.
Examples:
job.every(2).month is equivalent to 0 0 0 */2 *, and job.month.every(2) is equivalent to * * * */2 *

job.every(2).dows is equivalent to 0 0 * * */2, and job.dow.every(2) is equivalent to * * * * */2
We can see the differences in the following example:
from crontab import CronTab

cron = CronTab(user='username')

job1 = cron.new(command='python example1.py')
job1.hour.every(2)

job2 = cron.new(command='python example1.py')
job2.every(2).hours()

for item in cron:
    print item

cron.write()
After running the program, the result is as follows:
$ python cron2.py
* */2 * * * python /home/eca/cron/example1.py
0 */2 * * * python /home/eca/cron/example1.py
$
Figure 2
As we can see in Figure 2, the program has set the second task's minutes to zero, and defined the first task minutes' to its default value.
Finally, we can set the task to be run every time we boot our machine. The syntax is as follows:
job.every_reboot()
Clearing Restrictions
We can clear all task's restrictions with the following command:
job.clear()
The following code shows how to use the above command:
from crontab import CronTab

cron = CronTab(user='username')
job = cron.new(command='python example1.py', comment='comment')
job.minute.every(5)

for item in cron:
    print item

job.clear()

for item in cron:
    print item

cron.write()
After running the code we get the following result:
$ python cron3.py
*/5 * * * * python /home/eca/cron/example1.py # comment
* * * * * python /home/eca/cron/example1.py # comment
Figure 3
As we can see in Figure 3, the schedule has changed from every 5 minutes to the default setting.
Enabling and Disabling a Job
A task can be enabled or disabled using the following commands:
To enable a job:
job.enable()
To disable a job:
job.enable(False)
In order to verify whether a task is enabled or disabled, we can use the following command:
job.is_enabled()
The following example shows how to enable and disable a previously created job, and verify both states:
from crontab import CronTab

cron = CronTab(user='username')
job = cron.new(command='python example1.py', comment='comment')
job.minute.every(1)
cron.write()

print job.enable()
print job.enable(False)
The result is as follows:
$ python cron4.py
True
False
Figure 4
Checking Validity
We can easily check whether a task is valid or not with the following command:
job.is_valid()
The following example shows how to use this command:
from crontab import CronTab

cron = CronTab(user='username')
job = cron.new(command='python example1.py', comment='comment')
job.minute.every(1)
cron.write()

print job.is_valid()
After running the above program, we obtain the validation, as seen in the following figure:
$ python cron5.py
True
Figure 5
Listing All Cron Jobs
All cron jobs, including disabled jobs can be listed with the following code:
for job in cron:
    print job
Adding those lines of code to our first example will show our task by printing on the screen the following:
$ python cron6.py
* * * * * python /home/eca/cron/example1.py
Figure 6
Finding a Job
The Python crontab module also allows us to search for tasks based on a selection criterion, which can be based on a command, a comment, or a scheduled time. The syntaxes are different for each case.
Find according to command:
cron.find_command("command name")
Here 'command name' can be a sub-match or a regular expression.
Find according to comment:
cron.find_comment("comment")
Find according to time:
cron.find_time(time schedule)
The following example shows how to find a previously defined task, according to the three criteria previously mentioned:
from crontab import CronTab

cron = CronTab(user='username')
job = cron.new(command='python example1.py', comment='comment')
job.minute.every(1)
cron.write()

iter1 = cron.find_command('exam')
iter2 = cron.find_comment('comment')
iter3 = cron.find_time("*/1 * * * *")

for item1 in iter1:
    print item1
for item2 in iter2:
    print item2
for item3 in iter3:
    print item3
The result is the listing of the same job three times:
$ python cron7.py
* * * * * python /home/eca/cron/example1.py # comment
* * * * * python /home/eca/cron/example1.py # comment
* * * * * python /home/eca/cron/example1.py # comment
Figure 7
As you can see, it correctly finds the cron command each time.
Removing Jobs
Each job can be removed separately. The syntax is as follows:
cron.remove(job)
The following code shows how to remove a task that was previously created. The program first creates the task. Then, it lists all tasks, showing the one just created. After this, it removes the task, and shows the resulting empty list.
from crontab import CronTab

cron = CronTab(user='username')
job = cron.new(command='python example1.py')
job.minute.every(1)
cron.write()
print "Job created"

# list all cron jobs (including disabled ones)
for job in cron:
    print job

cron.remove(job)
print "Job removed"

# list all cron jobs (including disabled ones)
for job in cron:
    print job
The result is as follows:
$ python cron8.py
Job created
* * * * * python /home/eca/cron/example1.py
Job removed
Figure 8
Jobs can also be removed based on a condition. For example:
cron.remove_all(comment='my comment')
This will remove all jobs where comment='my comment'.
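Under the hood, condition-based removal amounts to filtering the job list by its trailing comment and rewriting the crontab. The following is a rough, library-free sketch of that idea (the line format and helper are illustrative assumptions, not python-crontab internals):

def remove_by_comment(text, comment):
    # Keep every crontab line whose trailing '# ...' comment does not match.
    kept = []
    for line in text.splitlines():
        _, _, trailing = line.partition('#')
        if trailing.strip() != comment:
            kept.append(line)
    return "\n".join(kept)

crontab_text = """\
* * * * * python /home/eca/cron/example1.py # my comment
*/5 * * * * python /home/eca/cron/backup.py # keep me
"""

result = remove_by_comment(crontab_text, 'my comment')
print(result)

Only the job tagged '# keep me' survives the filter.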
Clearing All Jobs
All cron jobs can be removed at once by using the following command:
cron.remove_all()
The following example will remove all cron jobs and show an empty list.
from crontab import CronTab

cron = CronTab(user='username')
cron.remove_all()

# list all cron jobs (including disabled ones)
for job in cron:
    print job
Environmental Variables
We can also define environmental variables specific to our scheduled task and show them on the screen. The variables are saved in a dictionary. The syntax to define a new environmental variable is as follows:
job.env['VARIABLE_NAME'] = 'Value'
If we want to get the values for all the environmental variables, we can use the following syntax:
job.env
The example below defines two new environmental variables for the scheduled job and shows their values on the screen. The code is as follows:
from crontab import CronTab

cron = CronTab(user='username')
job = cron.new(command='python example1.py')
job.minute.every(1)
job.env['MY_ENV1'] = 'A'
job.env['MY_ENV2'] = 'B'
cron.write()
print job.env
After running the above program, we get the following result:
$ python cron9.py
MY_ENV1=A MY_ENV2=B
In addition, Cron-level environment variables are stored in 'cron.env'.
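In the crontab file itself, environment variables simply appear as KEY=value lines ahead of the job lines. A small library-free sketch of that layout (the exact formatting is an assumption for illustration, not python-crontab's output):

def render_crontab(env, jobs):
    # Environment variables go first, one KEY=value per line, then the job lines.
    lines = ["%s=%s" % (k, v) for k, v in sorted(env.items())]
    lines.extend(jobs)
    return "\n".join(lines)

text = render_crontab({'MY_ENV1': 'A', 'MY_ENV2': 'B'},
                      ['* * * * * python example1.py'])
print(text)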
Wrapping Up
The Python module crontab provides us with a handy tool to programmatically manage our cron application, which is available on Unix-like systems. By using it, instead of having to create crontabs by hand, we can use Python code to manage recurring tasks.
The module is quite complete. Although there have been some criticisms about its behavior, it contains functions to connect to cron, create scheduled tasks, and manage them. As the examples above show, its use is quite straightforward. Thus, it provides a tool for writing complex scripts with the main Python characteristic: simplicity.
#include <SSLIOP_Acceptor.h>

Inheritance diagram for TAO_SSLIOP_Acceptor:

Member functions:
- Constructor.
- Destructor.
- [virtual] Reimplemented from TAO_IIOP_SSL_Acceptor.
- [private] Helper method to add a new profile to the mprofile for each endpoint. Reimplemented from TAO_IIOP_Acceptor.
- [private] Helper method to create a profile that contains all of our endpoints.
- [private, virtual] Parse protocol specific options.
- Retrieve the SSLIOP::SSL component associated with the endpoints set up by this acceptor.
- Implement the common part of the open*() methods.
- Ensure that neither the endpoint configuration nor the ORB configuration violate security measures.

Member data:
- State that will be passed to each SSLIOP connection handler upon creation.
- The concrete acceptor, as a pointer to its base class.
- The SSL component. This is the SSLIOP endpoint-specific tagged component that is embedded in a given IOR.
- The accept() timeout. This timeout includes the overall time to complete the SSL handshake, covering both the TCP handshake and the SSL handshake.
I have a very basic scrapy spider, which grabs urls from a file and then downloads them. The only problem is that some of them get redirected to a slightly modified url within the same domain. I want to get them in my callback function using response.meta, and it works on normal urls, but when the url is redirected the callback doesn't seem to get called. How can I fix it?
Here's my code.
from scrapy.contrib.spiders import CrawlSpider
from scrapy import log
from scrapy import Request

class DmozSpider(CrawlSpider):
    name = "dmoz"
    handle_httpstatus_list = [302]
    allowed_domains = [""]

    f = open("C:\\python27\\1a.csv", 'r')
    url = ''
    start_urls = [url + row for row in f.readlines()]

    def parse(self, response):
        print response.meta.get('redirect_urls', [response.url])
        print response.status
        print response.headers.get('Location')
        return Request(response.url,
                       meta={'dont_redirect': True,
                             'handle_httpstatus_list': [302]},
                       callback=self.parse_my_url)

    def parse_my_url(self, response):
        print response.status
        print response.headers.get('Location')
By default, scrapy follows redirects. If you don't want a request to be redirected, use the start_requests method and add the flags to the request meta, like this:
def start_requests(self):
    requests = [Request(self.url + u,
                        meta={'handle_httpstatus_list': [302],
                              'dont_redirect': True},
                        callback=self.parse)
                for u in self.start_urls]
    return requests
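For intuition about what `response.meta['redirect_urls']` holds when redirects are followed: each hop appends the previous url to the list carried in the request meta. The snippet below is a library-free sketch that mirrors (but does not reproduce) that middleware behavior; the helper name and urls are made up for illustration:

def follow_redirect(request_meta, old_url, new_url):
    # Copy the meta and record the url we are leaving, as a redirect middleware would.
    meta = dict(request_meta)
    meta.setdefault('redirect_urls', []).append(old_url)
    return meta, new_url

meta, url = follow_redirect({}, 'http://example.com/a', 'http://example.com/b')
meta, url = follow_redirect(meta, url, 'http://example.com/c')
print(meta['redirect_urls'])  # the chain of urls visited before the final one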
Why you need to drop ifconfig for ip | Opensource.com
Start using the modern method for configuring a Linux network interface.
For a long time, the ifconfig command was the default method for configuring a network interface. It served Linux users well, but networking is complex, and the commands to configure it must be robust. The ip command is the new default networking command for modern systems, and in this article, I'll show you how to use it.
The ip command is functionally organized on two layers of the OSI networking stack: Layer 2 (the data link layer) and Layer 3 (the network or IP layer). It does all the work of the old net-tools package.
Installing ip

The ip command is included in the iproute2util package. It's probably already included in your Linux distribution. If it's not, you can install it from your distro's software repository.
Comparing ifconfig and ip usage
The ip and ifconfig commands can both be used to configure a network interface, but they do things differently. I'll compare how to do common tasks with the old (ifconfig) and new (ip) commands.
View network interface and IP address
If you want to see the IP address of a host or view network interface information, the old ifconfig command, with no arguments, provides a good summary:
$ ifconfig
eth0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether bc:ee:7b:5e:7d:d8
        RX packets 41  bytes 5551 (5.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 41  bytes 5551 (5.4 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.1.1.6  netmask 255.255.255.224  broadcast 10.1.1.31
        inet6 fdb4:f58e:49f:4900:d46d:146b:b16:7212  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::8eb3:4bc0:7cbb:59e8  prefixlen 64  scopeid 0x20<link>
        ether 08:71:90:81:1e:b5  txqueuelen 1000  (Ethernet)
        RX packets 569459  bytes 779147444 (743.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 302882  bytes 38131213 (36.3 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
The new ip command provides similar results, but the command is ip address show, or just ip a for short:
$ ip a
...
    link/ether bc:ee:7b:5e:7d:d8 brd ff:ff:ff:ff:ff:ff
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 08:71:90:81:1e:b5 brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.6/27 brd 10.1.1.31 scope global dynamic wlan0
       valid_lft 83490sec preferred_lft 83490sec
    inet6 fdb4:f58e:49f:4900:d46d:146b:b16:7212/64 scope global noprefixroute dynamic
       valid_lft 6909sec preferred_lft 3309sec
    inet6 fe80::8eb3:4bc0:7cbb:59e8/64 scope link
       valid_lft forever preferred_lft forever
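If you want the same interface list programmatically rather than from the shell, Python's standard library can enumerate the devices that `ip link show` reports. This is a Linux-only sketch, separate from the ip tool itself:

import socket

# List network interfaces as (index, name) pairs, the same devices `ip link show` reports.
interfaces = socket.if_nameindex()
for idx, name in interfaces:
    print(idx, name)

On a typical Linux machine this prints at least the loopback interface, lo.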
Add IP address
To add an IP address to an interface with ifconfig, the command is:
$ ifconfig eth0 add 192.9.203.21
The command is similar for ip:
$ ip address add 192.9.203.21 dev eth0
Subcommands in ip can be shortened, so this command is equally valid:
$ ip addr add 192.9.203.21 dev eth0
You can make it even shorter:
$ ip a add 192.9.203.21 dev eth0
Remove an IP address
The inverse of adding an IP address is to remove one.
With ifconfig, the syntax is:
$ ifconfig eth0 del 192.9.203.21
The ip command syntax is:
$ ip a del 192.9.203.21 dev eth0
Enable or disable multicast
Enabling (or disabling) multicast on an interface with ifconfig happens with the multicast argument:
# ifconfig eth0 multicast
With ip, use the set subcommand with the device (dev) and a Boolean or toggle multicast option:
# ip link set dev eth0 multicast on
Enable or disable a network
Every sysadmin is familiar with the old "turn it off and then on again" trick to fix a problem. In terms of networking interfaces, that translates to bringing a network up or down.
The ifconfig command does this with the up or down keywords:
# ifconfig eth0 up
Or you could use a dedicated command:
# ifup eth0
The ip command uses the set subcommand to set the interface to an up or down state:
# ip link set eth0 up
Enable or disable the Address Resolution Protocol (ARP)
With ifconfig, you enable ARP by declaring it:
# ifconfig eth0 arp
With ip, you set the arp property to on or off:
# ip link set dev eth0 arp on
Pros and cons of ip and ifconfig
The ip command is more versatile and technically more efficient than ifconfig because it uses Netlink sockets rather than ioctl system calls.
The ip command may appear more verbose and more complex than ifconfig, but that's one reason it's more versatile. Once you start using it, you'll get a feel for its internal logic (for instance, using set instead of a seemingly arbitrary mix of declarations or settings).
Ultimately, ifconfig is outdated (for instance, it lacks full support for network namespaces), and ip is designed for the modern network. Try it out, learn it, use it. You'll be glad you did!
By Harlan Ritchie
critical of his methods because they were strongly opposed to the incestuous inbreeding of animals. These
criticisms soon subsided when Bakewell's methods began to yield positive results.
Because of Bakewell's success in implementing his principles of "like begets like" and "mating the best to the
best," other stock breeders in England and Scotland applied these principles in the development of the other
British breeds. As these new and improved breeds developed, the English Longhorn fell out of favor and
eventually out of use.
About this same time, cattle were imported from the continent of Europe into the Eastern counties of England.
These were dairy-type cattle from Holland that were primarily of the Large White Dutch or Flanders breeds.
Consumers in Holland and other low countries viewed dairy products as a special delicacy, which led to the
development of these dairy breeds and subsequently the Holstein-Friesian breed. However, English palates,
especially those of well-to-do Britons, demanded a more substantial protein source than dairy products. They
relished well-marbled beef, which was a major factor that led to development of the improved British breeds.
THE BRITISH BREEDS
The American Breed
Development in Scotland
The Angus breed (originally referred to as Aberdeen-Angus) was developed in northeastern Scotland
primarily in the counties of Aberdeen, Angus, Banff, and Kincardine. These counties border the North Sea and
are characterized by land that is either rough or mountainous. Polled cattle apparently existed in Scotland
before recorded history, because the likeness of such cattle can be found on ancient sculptured stones in
Aberdeen and Angus counties. Black hornless cattle are mentioned in the middle of the ninth century and
references are again found in early sixteenth century charters. The body size was larger and the frequency of
polled animals was greater in lowland than in upland districts. In the county of Banff, a local type of black,
horned, large but slow maturing cattle existed, and very early in the nineteenth century, the cattle were crossed
with Galloway, Shorthorn, Ayshire, and Guernsey cattle. But there is no knowledge of the extent to which such
crossbred progeny were used as foundation animals for the Aberdeen-Angus breed.
The early cattle in northern Scotland were not necessarily of uniform color, and many of them had varied color
markings or broken color patterns. Most of the cattle were polled, but some had horns. The trait that is now
commonly called "polled" was often referred to in old Scottish writings by the terms "doddies" or "humlies."
Two strains were used in the formation of what would later be known as the Aberdeen-Angus breed. In the
county of Angus, cattle had existed for some time that were known as "Angus doddies." In the county of
Aberdeen, other polled cattle were found that were called "Buchan humlies," Buchan being the primary
agricultural district Aberdeen. The cattle in the region were early valued as work oxen, as were most other
strains of cattle that later became breeds. But by the beginning of the nineteenth century, the polled cattle in
Aberdeen already had a favorable reputation for the production of high quality, well-marbled carcass beef.
Improvement in Scotland
Over time, it appears that the strains of polled cattle from Angus and Aberdeen were crossed and recrossed
eventually leading to a distinct breed that was not greatly different from either of the two strains. Hugh Watson
of Angus County is generally regarded as the first real improver and, consequently, the founder of the breed. In
1808, when 19 years old, Watson started farming and brought with him from his father's farm six of the best
cows as well as a bull, all of which were black and polled. Along with these animals, he visited the leading
Scottish cattle markets and ended up purchasing the ten best heifers and best bull he could find that possessed
the traits he was striving to breed. The heifers were of various colors, but the bull was black. Watson decided
that the color of his herd should be black, so he started to breed in that direction.
Watson applied the methods employed so successfully by the Colling Brothers on the Shorthorn breed by
mating the best to the best regardless of relationship. Consequently, his herd was rather closely inbred even
though he did not necessarily intend it to be so. Watson's favorite herd sire was Old Jock, who was given the
number 1 in the herd book at the time it was started in 1862. The bull was born in 1842 and was used very
heavily in the herd from 1843 to 1852. He was awarded the sweepstakes prize for bulls at the Highland Society
Show in 1852, when he was nearly 11 years old. The Watson cow, Old Granny, also contributed a great deal to
the breed. She was born in 1824 and did not die until she was killed by lightning at 35 years of age. She is
known to have produced a total of 29 calves, eleven of which were registered in the Herd Book. A very high
percentage of Angus cattle today trace to either Old Jock or Old Granny or both. Watson fitted and showed his
cattle extensively. At his first show in 1829, he had the first prize pair of steers. One of these steers was
exhibited later that same year at the famous Smithfield show in London, where he attracted a great deal of
attention. When slaughtered after the show, his trimmed fat weighed an amazing 240 lbs, about 84 lbs more
than the fat of the famous Durham Ox. In contrast to today, fat was considered nearly as valuable as lean meat
at that time. The Aberdeen-Angus breed was threatened with extinction after 1810, when word spread that the
Shorthorn bull, Comet, had sold for $5,000. Soon thereafter, good herds of Shorthorn cattle were established in
Scotland. Cattle producers began using Shorthorn bulls on their polled Aberdeen-Angus cow herds. Crossing
in this fashion became almost a craze. For a time, it seemed as if farmers had been rendered oblivious to the
risks of totally running out of their purebred black polled cows. However, a few far-sighted purebred breeders
recognized the danger which threatened their native polled cattle and became determined to preserve the breed.
William McCombie of Tillyfour in Angus country stood out among these breeders as the great preserver of
the breed. He began breeding cattle in 1830 and continued until his death in 1880. McCombie was a master at
blending the Angus and Aberdeen strains. He practiced inbreeding with much success to establish the type of
animal he desired. McCombie used the show ring to publicize his herd and had unparalleled success in doing
so. His show ring career reached its pinnacle in 1878 at the International Exposition in Paris. There he won
first prize as an exhibitor of cattle from a foreign country as well as the grand prize for the best group of beef-producing animals bred by any exhibitor. In addition to the breeding classes, McCombie also showed steers in
the market shows. The most famous steer he produced was Black Prince, who won both the Birmingham and
Smithfield shows in 1867 when 4 years of age. Black Prince weighed 2,200 lbs and was quite large-framed. It
was said, "a short man would need a ladder to see over his back."
Another very influential Scottish herd was that of Ballindalloch in the county of Banff, owned by Sir George
MacPherson Grant. His herd was established by drawing heavily on Tillyfour cattle from William McCombie.
His most valuable purchase was that of the bull Trojan sired by Black Prince of Tillyfour. The Ballindalloch
herd is well-known for establishing several famous cow families: the Ericas, Jilts, Miss Burgesses, and the
Georginas. Angus breeders in Scotland placed a great deal of emphasis on cow families. The same was true in
America until recent times. At one time, the craze over cow families reached ridiculous proportions when
rather mediocre females of a certain scarce, fashionable cow family would sell at significantly higher prices
than excellent females of a more numerous, ordinary family.
Establishment in America
The first Angus cattle to reach North America were sent to Montreal in 1859 or 1860, but there is no record of
what became of these animals. In 1873, George Grant of Victoria, Kansas imported the first Angus into the
U.S., when he brought over four bulls that he crossed on Texas Longhorn cows. These resulting progeny were
favorably accepted on the Kansas City market. Several other notable importations took place from 1878 to
1882.
W.A. McHenry of Denison, Iowa is generally considered the master Angus breeder of his era. In 1887, he
purchased a bull and the female Barbarity 2nd, which was the founder of the Barbara cow family. At the 1894
World's Fair in Chicago, his show herd dominated the competition. In 1916, the McHenry herd was sold to the
firm of Esher and Ryan at Irwin, Iowa. Included in the acquisition was the great 2-year-old bull, Earl Marshall,
who turned out to be the most influential sire in the history of the breed in the U.S. He was a prepotent sire
whose sons also contributed greatly to the breed.
J. Garret Tolan of Pleasant Plains, Illinois, followed McHenry as the breed's most prominent herd for the better
part of four decades. Tolan dominated the International Livestock Exposition Angus shows in Chicago with his
Eileenmere line of cattle, starting in 1929 with Eileenmere 15th up to 1954 with Mr. Eileenmere.
After J. Garret Tolan, Ankony Farms in New York became the leading herd in the breed. It was founded in
1948 by a partnership between Allen Ryan and Lee Leachman. Later, other owners that included Les
Leachman and Myron Fuerst became involved in the operation. Like Tolan before them, Ankony Farms
dominated the International Livestock Exposition in Chicago until the International closed in 1975. One of the
highlights of the International Show was the inter-breed carcass competition. During the 75-year history of the
show in Chicago, Angus cattle won the carcass championship 87% of the time.
The Angus Today
For the past 30 years, the growth of the Angus has been remarkable. By the mid 1970s, Angus registrations had
exceeded those of Herefords. The breed received a boost when Certified Angus Beef (CAB) products were
put on the market. From a humble beginning in 1978, CAB has become a formidable brand in the market
place. Today, the fed cattle destined for CAB represent 7% of all fed cattle in the U.S.
In its most recent GPE evaluation (Cycle VII), U.S. MARC data showed that today's Angus-sired steers are
comparable to Simmental- and Charolais-sired steers in postweaning avg. daily gain, final slaughter wt., and
carcass wt. Angus, along with Red Angus, were significantly higher in percentage of carcasses grading Choice
than the average of all other breeds (88.8 vs. 61.5%). Fat thickness and numerical yield grades were the highest
of all breeds (.58 in. and 3.44, respectively). Percentage of retail product was similar to the other two British
breeds and significantly lower than the average of the Continental breeds (59.7% vs. 63.5%). Shear force of rib
steaks was similar to Herefords and Red Angus and significantly lower than Continentals (8.9 vs. 9.6 lbs).
Sensory tenderness scores were not significantly different from other breeds, except for the Gelbvieh which was
the lowest. Of the six endpoints for feed efficiency, Angus-sired steers were the most efficient when fed to a
constant marbling endpoint of Low Choice.
The American Angus Association was established in 1883. In 2007-08, the association registered 347,572
cattle, which ranked it first among all U.S. beef breed associations.
Development in Australia
The Australian Lowline is the result of a large research project initiated in the Trangie Agricultural Research
Centre in 1974 with the objective of studying the effects of divergent selection for growth rate. Initially, 85 low
growth rate cows were selected from the Trangie Angus herd to establish the Lowline herd. These cows were
mated to yearling Angus bulls also selected for low growth rate from birth to a year of age. At the same time, a
high growth rate line (Highline) and a randomly selected line (Controline) were established.
Since 1974, the Lowline herd has remained completely closed, with all replacement bulls and heifers selected
from within the line on the basis of low growth rate. As a consequence, the Lowline cattle are now smaller at
all stages of growth, from birth to maturity, than cattle in the other two lines. During the early 1990s, the
Trangie Research Centre released Lowline animals into the industry. At a meeting in 1992 of 14 interested
persons, the Australian Lowline Cattle Association (ALCA) was established.
The Lowline Today
Like other Angus, the Lowline is black and naturally polled. In mature size, it is only about 60% of the size of
normal Angus. As they stand today, they are generally considered to be the smallest of beef cattle breeds. Birth
weight of calves averages 45 to 53 lbs. Their growth rate at first is very rapid due to the milking ability of their
dams, and they often double their birth weight during the first 6 weeks of life. At 8 months of age, heifers
average 240 lbs and bulls 300 lbs. At a year of age, they average about 420 and 510 lbs, respectively. At
maturity, cows average about 710 lbs in good condition and stand only 37 to 41 inches tall at the withers.
Mature bulls in good condition top out at about 880 lbs and stand between 39 and 43 inches tall.
Lowline females are somewhat unique physiologically. Heifers generally do not cycle until they reach a weight
of about 485 lbs, which will occur when they are between 14 and 18 months of age. To low-acreage, part-time
hobby farmers, this is often advantageous because the heifers can continue to run with their virile brothers well
after weaning without risk of conceiving until they achieve the critical weight. Lowline cattle do not carry the
achondroplasia (dwarfism) gene; therefore, there is no risk of genetically generated deformity or abortion.
Lowline beef can be marketed as a premium product known as Lowline Boutique Beef.
The British White
Development in England
The British White first became noticed in 1697, when there was a dispersal of these cattle at Whalley Abbey.
This herd is considered the origin of the breed and was probably developed by crossing a polled bull from
Cleveland, in northeast England, with wild horned white cattle of the area near Whalley. These cattle went to
Gisburne and then on to Somerford around 1725. From there, it spread to East Anglia, which was the center of
activity for the British White for many years. The oldest existing herd, Woodboastwick, is located there.
Although they are different breeds, the British White and the White Park were kept in the same herd book from
1921 until 1946, when a separate herd book was established for each breed. Originally, two types, polled and
horned, were admitted into the British White Society, but since 1948 only polled cattle have been accepted for
registration.
Introduction to America
The British White Cattle Association of America (BWCAA) was established in 1988. It joins the British White
societies of Great Britain and Australia in promoting the breed. There is much confusion in the United States
between the three white breeds: White Park, British White, and American White Park. There are three
separate associations, and cattle of the same color are accepted by all the associations. But the White Park is a
horned breed, and blood typing has shown it to be very distant in relation to most modern breeds of cattle. In
addition, the White Park is larger-framed than the other two white breeds.
The British White Today
The British White was considered a dual-purpose meat/milk breed until 1950. Since then, selection has been
for beef production. As the name implies, the color is white with black points. Normally, the muzzle, eyelids,
teats, and feet are black. However, a few cattle have all red points, which is acceptable for registration. In
weight, mature cows can range from 1,000 to 1,500 lbs, and mature bulls from 1,800 to 2,300 lbs. They are
generally smooth polled, although an occasional scur may be found.
The Devon
Development in England
The Devon, sometimes called North Devon, to distinguish it from the South Devon breed, is one of the oldest
beef breeds in existence today. Some authorities believe the Devon's origin is prehistoric, the assumption being
that the breed descended directly from Bos longifrons, a smaller type of aboriginal cattle in Britain. It is
considered possible that the red cattle of North Devon may have contributed to the Hereford and other British
breeds.
The Devon originated in southwestern England, primarily in the counties of Devon, Somerset, Cornwall, and
Dorset. For centuries, herds of red cattle grazed in this cool, moist region. History has recorded that the
Romans found red cattle when they occupied this area in 55 B.C. There is some evidence that the seagoing
Phoenicians may have taken some ancestral red stock from northern Africa or the Middle East to Southwestern
England during their search for tin. Some animal breeders speculate that this might account for the breed's
unusual ability to adapt to hot climates in spite of its centuries of exposure to the damp, chilly hills of England's
coast.
The early improvers of the breed were Francis Quartly and his brothers William and Henry, and John Tanner
Davy and his brother William. It is generally agreed that Francis Quartly accomplished for the Devon what the
Colling brothers did for the Shorthorn. The Devon herd book was founded by Colonel John Tanner Davy in
1850. In 1884, the Devon Cattle Breeders Society was organized and took over the herd book.
Introduction to America
In 1623, the British ship Charity brought a consignment of red cattle (one bull and three heifers) from
Devonshire to Edward Winslow, the agent for the Plymouth Colony. These Devonshire cattle, brought in by the
Pilgrims, were probably the first purebred cattle to reach North America. The Devon was originally a triple-purpose breed, but modern Devons are raised primarily for beef.
The Devon Today
As noted before, the Devon is red in color, varying from a deep rich red to a light red or chestnut color. A
bright ruby red color is preferred, which accounts for their nickname, Red Rubies. The hair is of medium
thickness and is often long and curly during the winter. However, they shed easily and their coats are short and
sleek in the summer. Mature cows average about 1100 lb in weight; mature bulls about 1900 lb. Compared to
most other beef breeds, they tend to be finer-boned and lighter-muscled.
The Devon was evaluated in Phase 3 of U.S. MARC's Germ Plasm Evaluation program. Percent of unassisted
births (94%) was similar to the other British breeds. Weaning weight (426 lb), along with the Red Poll was the
lowest of all breeds evaluated. The same was true for final slaughter weight (1034 lb), which was 118 lb lighter
than Hereford x Angus crosses. Marbling score was the lowest of all British breeds. Percent retail product
(68.5%) was comparable to other British breeds. Furthermore, reproductive traits did not differ significantly
from other British breeds. Weaning weights of their calves (476 lb) were among the lightest of all breeds
evaluated.
The Dexter
Development in Ireland
This miniature breed of cattle originated in the south and southeast of Ireland where they were raised by small
land holders. Its origin is relatively obscure. There is evidence of its existence as early as 1776. Some people
suggest that they derive their name from a Mr. Dexter, an agent of Lord Hawarden, who sought to develop a
small animal suitable for milk production, and for fattening within the limits of the resources on his property.
Others believe the Dexter resulted from crosses between the Kerry and some other breed, perhaps the Devon or
possibly a French breed. It has also been proposed that they are the result of a mutation within the Kerry
population.
The Dexter flourished in Ireland in the late 1800s, and a herd book was established in 1887. However, as a
result of continual cattle sales to England, the breed nearly disappeared from Ireland by the turn of the century.
Shortly after World War I, the Irish herd book was closed.
Introduction to America
The first recorded knowledge of the breed in America is when more than 200 Dexters were imported to the
United States between 1905 and 1915. Most of these were purchased by three breeders, one each in
Kentucky, New York, and Minnesota. Since 1950, a number of other breeders have imported Dexters from
England. In 1982, a Canadian breeder imported several head from England.
Development in Australia
The Droughtmaster was developed in the hot, tropical climate of northern Queensland. Development began in
the 1930s with initial crossing of the Shorthorn and Brahman breeds. The objective was to form a breed
composed partly of Zebu blood, with the remainder consisting of European blood, principally Shorthorn. The
Zebu portion came from the Red Brahman. A breed society was established in 1956.
Although the Droughtmaster is found primarily in Queensland, it has spread throughout much of Australia. It
has also been exported to Africa, Asia, and various islands of Oceania.
The Droughtmaster Today
The Droughtmaster is basically red in color; however, it can vary from a golden color to a deep red.
Droughtmasters may be either horned or polled, with the majority of the registered cattle carrying the polled
trait. Like most other Brahman-influenced breeds, they exhibit tick tolerance.
The Droughtmaster is a large breed that is medium to slightly late in maturity. The best specimens are thickly
muscled, and the carcasses yield a high percentage of lean beef. Mature cows range in weight from 1,400 to
1,600 lbs; mature bulls from 2,000 to 2,400 lbs.
The English Longhorn
Development in England
The English Longhorn originated in northwest and central England and Ireland. In the mid-1700s, it became the
first breed to be improved by Robert Bakewell of Leicestershire. Bakewell was a farmer of reasonable
wealth who was born in 1726. Prior to the time of Bakewell, farmers practiced the breeding of unrelated
animals and avoided the mating of animals that were closely related. Bakewell demonstrated with his Leicester
sheep and his long-horned cattle that animals of close relationship could be mated, and if rigid culling was
practiced, desirable traits could thereby be fixed more rapidly than by mating unrelated animals.
Following the development of this revolutionary breeding system by Bakewell, Shorthorn breeders as well as
breeders of other classes of livestock adopted his methods. He selected the English Longhorn for rapid growth
and heavy hindquarters. His selection practices led the Longhorn to become the most widely used breed
throughout England and Ireland until it was surpassed by the Shorthorn in the early 1800s. Today, Robert
Bakewell is often affectionately referred to as the "Father of Animal Breeding," although it is said that in his
time he was considered very eccentric and lacking in mental stability. This was an example of genius in animal
breeding not being appreciated in his day. A breed society and herd book was established in 1878.
Introduction to America
During the latter half of the 19th century, a considerable number of English Longhorns were exported to the
United States, but had no appreciable influence on the cattle population. The long-horned cattle of the
southwestern U.S. were derived mostly from descendants of Spanish cattle imported into Mexico. These cattle
had similarly shaped but much longer horns, and were more rangy and not as well-muscled as the English
Longhorn.
The English Longhorn Today
Only a few herds of Longhorn cattle remain in England today. Even in Bakewell's time, the breed was
declining in numbers, and the adoption of Bakewell's methods by the Shorthorn breeders of his time was a
major factor in the near-extinction of the Longhorn. In 1980, the breed was rescued by the Rare Breeds
Survival Trust (RBST). The efforts of the RBST resulted in 255 registered English Longhorns.
The Longhorn is used primarily for meat production. Mature cows weigh 1,300 to 1,400 lbs, and mature bulls
2,000 to 2,200 lbs. Frame size is 5.5 to 6.5 on a scale of 1 to 9. Color varies considerably. Red on the sides is
most common, but the cattle may be yellowish gray, brindle, or deep mahogany to nearly black. A white, often
lightly spotted topline, which can be quite broad over the rump, is always present. Often, there are white
markings on the legs, face, and underline. The Longhorn is relatively late-maturing. This trait can be improved
by crossing with other British beef breeds.
The Galloway
Development in Scotland
The Galloway breed was developed in the province of Galloway in southwestern Scotland, where the climate is
damp and cloudy, the valleys are fertile, but the uplands are very rough. It is believed that Galloway cattle are
descendants of wild cattle that were in the region when the Romans first visited Britain. It is said that the
Galloway was never crossed with other breeds and is the oldest breed of cattle in Britain.
During the late 1700s and early 1800s, both horned and polled Galloway cattle were prevalent in the region.
Breeders preferred the polled trait and started selecting in that direction. The early Galloway cattle were of
various colors, but black was the color preferred by breeders.
Introduction to America
The first Galloway cattle in America were shipped to Ontario in 1853 and to Michigan in 1870. These cattle
were well adapted to the harsh climate of Canada and the northern U.S. Consequently, the breed spread rapidly
throughout the region. However, their popularity declined during the late 1800s and early 1900s and gave way
to the increasing popularity of the Angus. It soon became evident that the Galloway lacked the thickness of
fleshing and overall beefiness of the Angus.
The Galloway Today
Even though there are only minimal numbers of Galloway cattle today, there continues to be a significant
number of dedicated Galloway breeders in America. In addition to the Galloway Cattle Society of America,
there is also an association for the registration of Belted Galloways. Both of these strains had their origin in
southwestern Scotland.
The Gloucester (Old Gloucestershire)
Development in England
The Gloucester is an ancient breed that was found as early as the 13th century in the Severn Vale of
Gloucestershire in southwest England. They were developed as a triple-purpose breed, and were valued for
their milk (producing Gloucester cheese), their beef, and for producing docile oxen. Recognized as a distinct
breed since the early 1800s, herds of Gloucester cattle flourished in the Cotswold Hills of Gloucestershire and in
adjoining English counties for some 50 years before they were displaced by Hereford, Shorthorn, and Friesian
cattle.
After World War I, most Gloucester cattle were concentrated in two major herds. One herd, the Bathurst, was
bred for beef with Shorthorn, Friesian, and possibly Welsh Black blood, introduced to prevent tight inbreeding.
The other herd, Wick Court, was bred primarily for milk with Jersey blood introduced. The herds were
dispersed in 1966 and 1972, respectively; the breed society had already been disbanded in 1945. Efforts to
preserve the breed began with the establishment of a new breed society in 1972, and the establishment of a
Gloucester sperm bank by the Rare Breeds Survival Trust. By 1979, there were 86 animals in 20 herds, and
currently there are over 700 registered Gloucester females.
The Gloucester Today
Today, the Gloucester is a dual-purpose milk/meat breed. Under appropriate management, cows produce 6,500
to 7,000 lbs of milk per lactation. Under intensive management systems, calves can be finished for harvest by 2
years of age. The Gloucester is not a large breed; mature cows weigh an average of about 1,100 lbs, mature
bulls about 1,650 lbs. In color, they are blackish brown with black head and legs, a white stripe down the back,
and a white underline.
The Hereford
Development in England
The Hereford originated in the county of Hereford which is located in southwestern England. Herefordshire
was not as fertile as the Tees River Valley where the Shorthorn originated. Consequently, Herefordshire
farmers depended more on forage and less on roots and harvested crops for the production of market beef. The
heavy dependence upon forage may have better adapted the Hereford to its eventual use on the rangelands of
North America.
The early native cattle of Herefordshire were valuable primarily as draft animals. Size and strength were,
therefore, the two traits of greatest value. The beef that was consumed came from aged steers and cows that
were approaching the end of their usefulness. Therefore, these early Hereford cattle were dual-purpose
draft/meat animals as opposed to the dual-purpose meat/milk animals that characterized the early English
Shorthorn cattle. The original local cattle of Herefordshire were mostly solid red. The white markings of the
Hereford breed seem to have resulted from an infusion of Flemish, Welsh, and possibly Teeswater breeding.
The early Herefords were both white-faced and brockle-faced. In the early 1800s, a feud developed between
the advocates of these two color features. After a lapse of many years, the adherents of the white-faced feature
won out.
Benjamin Tomkins was the first breeder to improve the cattle of Herefordshire and is generally regarded as the
founder of the breed, but his son Benjamin Tomkins, Jr. made the most significant improvements. The elder
Tomkins began breeding cattle in 1742. He put strong selection pressure on the two traits that he considered
most important: a hardy constitution and a propensity to put on flesh at an early age. The son started breeding
cattle in 1769 and put even greater selection pressure on early maturity than his father. Extreme size was
sacrificed in his cattle to gain some refinement. He succeeded in establishing an earlier maturing type that was
shorter-legged, more refined in the bone, and had superior fleshing qualities. Both Tomkins used linebreeding
to fix the type of cattle they desired. The Tomkins herd was dispersed at auction in 1819 for an average of
$745, which was considered high at the time.
Improvement and Expansion
William Hewer, born in 1757, and his son John, born in 1787, were the breeders who fixed the color pattern of
present-day Hereford cattle. They also improved the quality and conformation of the cattle. They practiced
close inbreeding to secure the color and type they desired.
Thomas Jeffries supplied some of the first Hereford cattle that came to the U.S. However, the herd of T.J.
Carwardine contributed more directly to modern American Herefords than any other English herd. Carwardine
purchased the bull Lord Wilton, which was calved in 1873, from another early breeder, William Tudge. This bull
was an outstanding sire that gave the breed some needed refinement. During his time, Lord Wilton was the
most popular bull of the breed in both England and the U.S., and many of his best sons were imported by
American breeders. Carwardine also bred the great young breeding bull, Anxiety, who was used in the herd
until he was exported to America as a 2-year-old in 1879. He was reported to have developed into the thickest,
smoothest bull of any breed that had been seen in the U.S. Three of Anxiety's best sons were also sent to the
U.S. Included among them was the most influential bull in American Hereford history, Anxiety 4th, imported in
1881. He was said to be the thickest hindquartered Hereford bull that could be found in England by his
purchaser, the firm of Gudgell and Simpson, Independence, Missouri.
During the expansion of the Hereford breed in England, there was a trend to reduce their extreme size and to
improve conformation. At the first Royal Show in 1839, the Grand Champion Hereford bull, Cotmore, weighed
a phenomenal 3,920 lbs. By 1889, fifty years later, the weight of the Grand Champion bull had been reduced to
2,600 lbs. This change in type was first evident in the 1863 Royal Show, where there was notable downsizing
of the winners. It picked up steam at the 1868 Royal Show, where quality and conformation clearly took
precedence over size and scale. In fact, many classes were won by the smallest individual in the class.
Thenceforth, quality rather than scale was the first consideration of English Hereford breeders.
Establishment in America
The first Herefords imported into the U.S. consisted of a bull and two heifers brought over in 1817 by the
distinguished statesman, Henry Clay, who put them on his farm at Ashland, Kentucky. Clay used these cattle
primarily for crossing on his Shorthorn herd. In 1839 and 1840, twenty-one females and a bull were exported to
New York. A large wave of importations took place from 1848 to 1886 by 83 different breeders. During this
period a total of 3,073 English Herefords were recorded in the American Hereford Herd Book. Most of these
cattle were brought into the central states but some were sent to western range country.
Shorthorn bulls preceded Herefords to range country, when large numbers of them were sent in the early 1860s.
This was prior to the importation of the Scotch Shorthorn, and these bulls were of Bates' dual-purpose breeding
and consequently lacked the thickness and beefiness needed by western ranchers to improve their native
Longhorn cows. Hereford bulls were first sent to range country in the early 1870s and made a very favorable
impression on ranchers. A severe winter hit the western ranges in 1881, and an even more disastrous winter
struck in 1886-87. Thousands of range cattle died during these winters. By far the greatest losses occurred
among the Shorthorns, whereas the Herefords came through in surprisingly good shape. This secured the role
of the Hereford as the breed of choice in range country.
Pedigree fads have occurred in nearly all breeds of cattle, and the Hereford was no exception. Following the
dispersal of the famous Gudgell and Simpson herd in 1916, there was a craze for cattle of straight Gudgell and
Simpson breeding, generally referred to as "airtight" breeding. Many "airtight" cattle sold at exorbitantly high
prices. Some of these cattle were of inferior quality, and served to detract from the reputation of Gudgell and
Simpson breeding.
From the 1890s on, the popularity of Hereford cattle spread rapidly throughout the U.S. and Canada. By 1930,
Hereford registrations surpassed those of the Shorthorn breed to become the most numerous beef breed in the
U.S. until they were surpassed by Angus in the 1970s.
The Hereford Today
At one time, polled cattle were registered in an association separate and apart from the American Hereford
Association (AHA). Today, both polled and horned Herefords are registered by the AHA.
The Hereford is one of the 27 breeds evaluated in U.S. MARC's Germ Plasm Evaluation (GPE) program. It has
been included in each of the seven GPE cycles dating from 1970 to 2000. The data reviewed here is taken from
the most recent cycle (Cycle VII), which included calf crops born in 1999 and 2000. Three British breeds
(Hereford, Angus, and Red Angus) and four Continental breeds (Simmental, Gelbvieh, Limousin, and
Charolais) were evaluated in Cycle VII.
Hereford-sired calves had significantly higher average birth weights (90.4 vs. 84.2 lbs) than those sired by
Angus and Red Angus bulls. Weaning weights were similar for the three British sire groups, but were
significantly lighter than the Continental sire groups except for Limousin. Final slaughter and carcass weights
were significantly lighter for Hereford-sired than for Angus- and Simmental-sired steers, but did not differ from
other breed crosses. Percentage of carcasses grading Choice was significantly lower than the average of Angus
and Red Angus (65.4 vs. 88.8%), but similar to the Continental breeds. Yield grade was numerically lower than
the average of Angus and Red Angus (3.19 vs. 3.44), but significantly higher than that of the Continental
breeds. Retail product percentage was significantly higher for all Continental-crosses than for the three British
crosses, including Hereford (63.5 vs. 59.7%). A trait in which the Hereford breed clearly stood out was feed
efficiency (live wt. gain per unit metabolizable energy consumed, lb/Mcal). This trait was evaluated at six
different slaughter endpoints. Out of the seven breeds in Cycle VII, Herefords ranked first when fed to four of the
six endpoints: time, weight, fat thickness, and fat trim percentage.
The American Hereford Association was established in 1881. In 2007-08, the association registered 69,344
cattle, which ranked third behind Angus and Charolais. About 50% were polled.
The Kerry
Development in Ireland
Kerry cattle are believed to be the descendants of ancient Celtic cattle, brought to Ireland as long ago as 2000
B.C. Up to the 19th century, they were the only breed in Ireland, but many subsequent importations and crosses
have been made. Nevertheless, the Kerry has resisted all these changes, and some can still be found grazing in
the marginal hill pastures of southwestern Ireland.
As breed classification became fashionable among cattle raisers, a distinction was made between the very small
Dexter and the larger Kerry. Both were black in color, and for a short period of time both types were registered
in the same herd book. The Kerry became the most prevalent breed in western Ireland by the mid-1800s. But
when the Dairy Shorthorn from England began to enter Ireland in the mid-1800s, both the Kerry and the Dexter
suffered from Irish farmers' use of the Dairy Shorthorn in the crossing and breeding-out process. The breed
declined during the 20th century to the point that by 1974 the total Kerry herd was estimated at only 5,000 head.
And by 1983, the world population of registered Kerries had fallen to around 200 head. Since then, the Irish
government has taken steps to support the breed's continuance.
Introduction to America
Kerry cattle were imported to the United States beginning in 1818. The breed prospered through the remainder
of the 1800s and into the early years of the 20th century. However, by the 1930s, it had practically
disappeared from North America. Today there are only a few Kerries in the United States and only a few herds,
based on recent imports, in Canada.
The Kerry Today
The Kerry is predominantly black in color on the body, but has a lighter stripe along the spine. It is quite
fine-boned in its skeletal make-up. Although larger than the Dexter, it is nevertheless relatively small in size.
Mature cows weigh from 775 to 1,000 lbs and average about 48 in. in height. Mature bulls weigh from 1,200 to
1,300 lbs.
The Kerry can be classified as a dual-purpose milk/meat breed, but tends to be used primarily for dairy
purposes. Milk production averages 7,000 to 8,000 lbs, but can occasionally exceed 10,000 lbs. Butterfat
content averages about 4.0%. A notable feature of the Kerry is its hardiness and longevity. Many cows
continue producing up to 12 years of age, while old dams of 20 are not exceptional.
The Lincoln Red
Development in England
This breed was developed in Lincolnshire and surrounding areas. The original cattle in their early unimproved
state were noted for their large size. It was not until the late 18th and early 19th centuries that improvement
efforts began. Three bulls were taken to Lincolnshire from Charles Colling's Shorthorn sale in 1810. This was
later followed by the introduction of other Shorthorn breeding stock. These animals, crossed with the local
cattle, established the Lincoln Red breed.
Introduction to America
Lincoln Red cattle were imported to Canada direct from England in 1966, with the Shaver Beef Breeding Farms
as the principal sponsor of the breed. This company was active in the introduction of other European breeds as
well as the Lincoln Red.
The Lincoln Red Today
Originally, the Lincoln Red was developed as a dual-purpose breed, and remained so until the early 1960s.
Today, it is considered a beef breed. Selection has been for a solid deep red color and a relatively large frame
compared to the Beef Shorthorn. A polled strain was developed by the incorporation of Angus blood in a few
English herds. Mature Lincoln Red cows weigh about 1,400 lbs and mature bulls from 1,800 to 2,200 lbs.
The first Lincoln Red Shorthorn herd book was published in 1896. In 1925, the breed was amalgamated with
the Shorthorn Society, then separated as the Lincoln Red Shorthorn Society in 1941. In 1960, the breed became
recognized officially as the Lincoln Red, and the word "Shorthorn" was dropped.
The Canadian Lincoln Red Association was organized in 1969 and incorporated under the Livestock Pedigree
Act, whereby it is affiliated with the National Livestock Record.
The Luing
Development in Scotland
The Luing (pronounced "Ling") is a relatively recent breed that was developed on the island of Luing, which
lies off the west coast of Scotland. In 1947, a selected group of first-cross Shorthorn x Highland heifers were
mated to a Shorthorn bull. It was the progeny of this mating that served as the foundation on which the breed
was built. In 1965, the Ministry of Agriculture officially recognized the Luing as a distinct beef breed and a
herd book was established.
The Mandalong Special
Development in Australia
Development of the Mandalong Special began at Mandalong Park, near Sydney, New South Wales, in the
mid-1960s. Five breeds were used in the development: Charolais, Chianina, Polled Shorthorn, British White, and
Brahman. After four generations of breeding, the Mandalong Special was stabilized with a content of 58.33%
Continental, 25% British, and 16.67% Brahman bloodlines.
The Mandalong Special Today
The Mandalong Special is a hardy animal that is well adapted to the environment in which it was developed.
Calves are small at birth, resulting in easy calving. In spite of its low birth weight, the Mandalong Special has a
high rate of growth. It fattens easily on grass, with an ability to produce a well-muscled, high-quality carcass
with an evenly distributed fat cover. The Mandalong Special is a relatively large breed. It varies in color from
light cream to dun.
The Murray Grey
Development in Australia
The Murray Grey originated in southern New South Wales during the early 1900s. The name of the breed
comes from its color and its site of origin along the Murray River, which serves as the boundary between the
states of New South Wales and Victoria. In 1905, a light roan, nearly white Shorthorn cow on the Thologolony
property of Peter Sutherland dropped a grey calf sired by an Angus bull. By 1917, the cow had produced a total
of 12 grey calves sired by various Angus bulls. When her husband died in 1929, Mrs. Sutherland sold the herd
of Greys to her cousin Helen Sutherland who started a systematic breeding program with eight cows and four
bulls.
In the early 1940s, Mervyn Gadd started a second Murray Grey herd, using a grey bull from the Sutherlands and
breeding up from Angus cows. Gadd was convinced that the Murray Greys were more efficient weight gainers,
but it wasn't until about 1957 that a demand for them developed. Butchers paid a premium for the Greys
because of their high cutability and less wastage. In 1962, 50 breeders joined together to form the Murray Grey
Beef Cattle Society of Australia. In 1979, the Society absorbed the Tasmanian Grey breed, which had
originated from crossing a white Shorthorn cow with an Angus bull at Parknook in 1938.
Introduction to America
In 1969, three different importers brought Murray Grey semen to the United States. In 1972, a yearling heifer
and bull calf were imported to the United States. Another twenty-eight bulls and nine heifers were imported
from Australia by way of New Zealand. By 1976, the American Murray Grey Association reported that 83
bulls in the United States were listed as foundation sires, and their semen was available for distribution. In
addition, 20 females were listed as purebreds. Because the total number of Murray Greys was relatively small,
expansion in the breed has been largely through the grading up process.
The Murray Grey Today
The Murray Greys started winning carcass competitions in the early 1970s, and have continued to dominate the
carcass classes at the Royal Shows in Australia. Murray Grey carcasses can be said to be an ideal combination of
muscle, fat trim, and marbling.
The Murray Grey is similar to the Australian Angus, but tends to be smaller-framed, finer-boned, and
thicker-fleshed than the Angus. Mature cows weigh from 1,100 to 1,500 lbs, and bulls from 1,725 to 1,900 lbs. The
calves of the breed are small at birth, and the cows milk well. Their survival and reproductive rate has been
very satisfactory under a wide range of climatic conditions. The grey hair color plays an important role in
reflecting heat. The skin color should be heavily pigmented or dark-colored as this helps prevent certain eye
and skin problems, such as cancer eye and sunburned udders.
The Red Angus
The role of the Association is to objectively describe reproduction, growth, maintenance, and carcass
traits utilizing the fewest EPDs possible to achieve this purpose. The concept of Economically Relevant
Traits guides this process.
The RAAA actively seeks out and implements new technologies that are based on sound scientific
principles.
The American Red Angus magazine is sent to all bull customers, so in general, the editorial content of
the magazine has a commercial and technical focus; i.e., typical breed journal articles such as member
profiles are avoided.
The Association plays a general role in assisting in the marketing of members' cattle.
With respect to physical traits, there is essentially no significant difference today between Red Angus and black
Angus cattle. The only obvious difference is coat color. The color of Red Angus is an asset in extremely hot
climates because it absorbs less heat than the darker coat of black Angus. It is worth noting that over time,
since the breed was founded in the early 1950s, there have been fewer ups and downs in body type in Red
Angus than in the blacks. This is likely due to the fact that objectively measured performance traits (growth,
maternal, carcass, etc.) took precedence over subjective visually appraised physical traits. Red Angus did not
participate in the compact/comprest trend during the early 1950s, nor did it get deeply engaged in the great
"frame race" of the 1970s and 1980s, when size of frame was a major factor in selection of seedstock.
U.S. MARC data from Cycle VII of the Germ Plasm Evaluation program showed that there were essentially no
differences between the Red Angus and Angus breeds for any economically relevant traits.
For a breed that has been in existence for a relatively short period of time, the growth of the Red Angus is
impressive. In 2007-08, the Association registered 47,064 cattle, which ranked it fifth among all beef breed
associations, exceeded only by Angus, Charolais, Hereford, and Simmental.
The Red Poll
Development in England
At the end of the 18th century, there were two breeds of cattle in the East Anglia region of England, the Norfolk
and the Suffolk, which resided in the counties from which they derived their names. Both breeds were relatively
small and fine-boned compared to other breeds at that time, and both were developed as dual-purpose meat/milk
breeds. However, the Norfolk tended to be beefier in type, whereas the Suffolk tended to be heavier-milking.
Shortly after 1800, John Reeves of Norfolk county purchased a bull in Suffolk and began crossing the two
breeds. By 1846, the two breeds had essentially merged into one and were referred to as the Improved Norfolk
and Suffolk Red Polled Breed until 1863, when the words "Improved Norfolk and Suffolk" were dropped from
the title. The breed is now referred to as the Red Poll.
Introduction to America
The first Red Poll cattle were imported in 1873 by G.F. Taber of New York State. In 1882, he imported more
Red Polls. Other breeders imported cattle up until 1902. After that, practically no more Red Polls were
brought over. The breed was established in the U.S. on a total of about 300 head that were imported from
England. The breed spread from the U.S. into Canada and enjoyed a steady increase in popularity during the
first half of the 20th century. Since then, the cattle industry has become almost completely specialized into
dairy and beef types. Consequently, dual-purpose cattle are currently in little demand in North America.
The Red Poll Today
Today, the Red Poll has evolved into a viable beef breed in North America. Red Poll breeders have done an
admirable job of increasing growth rate, thickness, muscling, and overall stoutness of their cattle. However, the
three major British breeds (Angus, Hereford, and Shorthorn) still surpass them in these traits. Carcass traits of the
Red Poll are comparable to those of the other British breeds. Data from the Germ Plasm Evaluation (GPE)
program at the U.S. Meat Animal Research Center revealed that calves sired by Red Poll bulls had the highest
percentage of unassisted births (99.9%) among all 27 breeds evaluated. Their survival rate to weaning (95.7%)
was the second highest among all breeds evaluated.
The Scotch (West) Highland
Development in Scotland
The West or Scotch Highland originated in the highlands of northwestern Scotland and on the Hebrides Islands.
Like the Galloway, it descended from the wild cattle that inhabited the West Highland region. The particular
area where the Scotch Highland originated is extremely rough and mountainous and adverse in its climate. By
necessity, therefore, the breed had to adapt to its challenging environment in order to survive and thrive. Scotch
Highland cattle are unique in their appearance. They are relatively small in size, with long, shaggy hair
coats and widespread horns. Their heavy hair coats adapt them well to the harsh environment of the western
highlands. They can be found in numerous hair colors: black, brown, red, brindle, white, and silver. Brown
seems to be the most prevalent color.
Introduction to America
The first importations of Highland cattle to the U.S. and Canada were made in 1893. Since then, a number of
other shipments have been made in order to broaden the genetic base of the breed. They have proven to be very
hardy in the harsher environments of northern U.S. and Canada. However, they lack the growth rate and
thickness of fleshing demanded by most North American cattle producers.
The Scotch Highland Today
There are relatively small numbers of Scotch Highland cattle in North America compared to the predominant
British breeds. In recent years, however, there has been a renewed interest in the breed, and there is now a
sizeable group of devoted breeders who are dedicated to breed expansion and improvement. Highland herds
can now be found in virtually every state in the northern half of the U.S. and in every Canadian province.
The Shorthorn
Development in England
The Shorthorn breed was developed from old cattle stocks in the northeast of England in the counties of
Durham, York, and Northumberland. Before the breed was established, the animals were often referred to
as Durham, or Teeswater cattle, the latter referring to the Tees River which formed the boundary between
Durham county on the north and York county on the south. The Colling Brothers, Charles and Robert, who
farmed in Durham county, are often referred to as the founders of the Shorthorn breed. In 1783, they visited
Robert Bakewell and made a study of his breeding methods. In 1784, Charles Colling visited the Darlington
market in Durham county and purchased the cow Duchess. She was described as being much lower set and
easier-fleshing than most cattle of her day. Duchess became famous for the foundation of a family by that
name. About that same time, Robert Colling purchased the bull Hubbach, who was used for 2 years and then
sold. Hubbach sired some outstanding progeny but was not fully appreciated because of his lack of size for that
period.
The bull Favorite, bred by Charles Colling, developed into the greatest sire of his day. For many years, the bull
was used indiscriminately upon his own offspring and often mated back to his daughters through the second and
third generations and in some cases into the fourth, fifth, and sixth generations. In 1804, Favorite sired the bull
Comet, which was the result of intense inbreeding. Charles Colling considered Comet to be the best bull he had
ever bred, and when his herd was dispersed in 1810, Comet sold for the then unheard-of price of $5,000. The
Colling brothers attempted to produce more moderately-sized cattle instead of the extremely large animals
which captured the fancy of other breeders. They encouraged earlier maturity and better carcass conformation,
and their stock, although considered "patchy" by modern standards, were relatively smooth-fleshed.
The Colling brothers were also good salesmen who believed in advertising their cattle. The second calf sired by
Favorite was steered and became known as the Durham Ox. He was fitted for public exhibition and was
reported to weigh 3,400 lbs. In those days, the cattle were exhibited but were not shown competitively as our
cattle are today. Favorite also sired a free-martin heifer that became famous by the name "The White Heifer
that Traveled." This non-breeder attained a weight of 2,300 lbs. These two animals were toured throughout the
country in somewhat of a side show exhibition. The publicity that was accorded to them did much to advertise
the new breed of Shorthorn cattle that was just being formally founded. It is known that some Galloway
breeding was infused into the Charles Colling herd and some cattle carried this blood when the herd was
dispersed.
Thomas Booth and his sons in York county were the next important improvers of Shorthorn cattle. Starting in
1790, they purchased Colling-bred bulls and mated them to rather large females from other herds. The bulls they
purchased were much more refined than the cows to which they were mated. Most early Shorthorn breeders
selected for dual-purpose meat/milk cattle. However, Booth placed great emphasis on fleshing qualities, and
valued beef almost to the exclusion of milk. After establishing the type of cattle he desired, Booth practiced
inbreeding with considerable success.
The other influential breeder in the early years of breed development was Thomas Bates of Northumberland
county. He established his herd largely on the breeding of the Colling brothers and purchased his first cattle
from them in 1800. In 1804, he purchased the cow Duchess, a tightly inbred descendant of Favorite. At the
Charles Colling dispersal in 1810, he purchased Duchess 3rd, sired by Comet and a granddaughter of his original
Duchess cow. These two females became founders of the famous Duchess family. So convinced was Bates of
the value of this particular line that he launched a program of intense inbreeding. Unfortunately, the Duchess
females were extremely low in fertility and by 1831 the family had produced only 32 cows in 22 years. Bates
selected for heavy milking qualities in his herd, and the current Milking Shorthorns are largely descendants of
his breeding. Thomas Bates died in 1849, and his herd was dispersed in 1850 by his nephew.
Improvement in Scotland
By the early 1800s, Shorthorn cattle were already prevalent in southern Scotland. However, they had not yet
been introduced to the northern regions, where the environment was harsher and the land much more rugged
and less productive. Much of the Scottish improvement of the breed was a result of the efforts of Amos
Cruickshank in Aberdeen County in northeastern Scotland. Being a typically frugal Scotsman, Cruickshank
bred practical cattle that could subsist on high forage diets that included straw and other coarse roughages. He
had little use for aesthetic beauty or fashionable pedigrees. Cruickshank bred Shorthorn cattle from 1835 until
his death in 1895.
Cruickshank selected for cattle that were shorter-legged, earlier maturing, easier-fleshing, wider-topped, and
deeper-middled. He did not select for the dual-purpose type, but like Thomas Booth, bred strictly for beef type
cattle. In 1860, the outstanding breeding bull, Champion of England, was born. His progeny were blocky, low
set to the ground, and exceptionally easy feeding. Cruickshank fixed this type by concentrating the blood of
Champion of England in his herd through tight inbreeding. Many other breeders adopted Cruickshank's
breeding program by producing cattle of the Scotch Shorthorn type. Consequently, he was viewed as somewhat
of a saviour of the Shorthorn breed.
Establishment in America
From 1820 to 1850, many Shorthorn cattle were imported into the eastern and central United States. Most of
these early cattle were of the dual-purpose type and came from the Thomas Bates herd. Americans were
familiar with Bates' success with the Duchess family. Therefore, cattle that were of straight Duchess breeding
were highly valued regardless of their individual merit. Out of this emerged the Bates-bred boom in the U.S.
The craze for Bates-bred cattle was fueled by a sale in New York in 1873, when a Duchess female sold for a
world record price of $40,600 and eleven Duchess females brought a staggering average of $21,709.
High prices for fashionably bred Shorthorn cattle continued until 1878, when the first American Fat Stock Show
was held in Chicago. This gave cattlemen an opportunity to compare the merits of several breeds. When the
highly promoted Shorthorns were exhibited alongside the newly introduced breeds, the Hereford and Aberdeen-Angus, the comparison proved to be very unfavorable, because many of the Shorthorns were narrow-made, thinly fleshed cattle that had little individual merit to support their highly touted pedigrees. This resulted in a
severe collapse in Shorthorn prices throughout the U.S.
The breed began to regain its popularity when the first Scotch Shorthorns were imported during the 1880s and
1890s. It was revealed that the Scotch-bred cattle could perform on western grass as well as in the feedlot.
Scotch cattle gradually replaced the English-bred Shorthorns throughout the U.S. This served to cement the
future of the Shorthorn breed in America.
The Shorthorn Today
Both horned and polled Shorthorns are registered by the American Shorthorn Association. Today's Shorthorn
is a considerably improved breed over its ancestors. This was demonstrated in U.S. MARC's Germ Plasm
Evaluation (GPE) program. The Shorthorn was one of 26 breeds evaluated in the first four cycles of the GPE,
which lasted from 1970 to 1990. The program consisted of mating 26 breeds of sires to Angus, Hereford, or
Angus-Hereford cross cows to produce F1 calves. The sire breeds consisted of seven British, eleven
Continental, five Bos indicus, two dairy, and the American Longhorn breed.
Weaning, final slaughter, and carcass weights of Shorthorn-sired steers were similar to Hereford-Angus steers,
similar to many of the Continental-sired steers, and heavier than several other Continentals. Marbling scores
and percentage of carcasses grading Choice were the highest of all 26 breeds. Fat thickness, ribeye area, fat
trim, and percent retail product were comparable to Hereford-Angus crosses.
Age at puberty did not differ from other British crosses, but pregnancy rate was somewhat higher for Shorthorn-sired heifers than for Hereford-Angus heifers (89.0 vs. 80.1%). Shorthorn-sired mature cows produced calves
with heavier birth weights than other British-cross cows, but did not differ in percentage of assisted births.
Calves out of Shorthorn-sired cows had heavier weaning weights than those out of all other British-cross cows, and were
comparable to weaning weights of calves out of Continental-cross cows.
The American Shorthorn Association was established in 1882. In 2007-08, the association registered 19,700
cattle.
The South Devon
Development in England
The South Devon originated in southwest England in the counties of Devon and Cornwall, where it has been a
distinct breed since the 16th century. It is the largest-framed of the British breeds and is not related to the
Devon, which also originated in southwest England. The South Devon was developed as a dual-purpose
meat/milk breed, whereas the Devon is strictly a beef breed.
The South Devon Herd book Society was formed in 1891, but interest in the breed was stimulated when the
Ministry of Food agreed that the milk of this breed should be sold separately as South Devon milk and at a
premium price. Average milk production in a 305-day lactation is about 6,550 lb, with a butterfat percentage of
4.2%.
Hair color is a rich medium red or yellowish red. It is medium thick and long, with a tendency to curl. The
average weight of mature cows is about 1,450 lb, while mature bulls weigh as much as 2,700 lb. South Devons
are available as both horned and polled. Some blacks are also available.
Introduction to America
The first South Devon cattle were brought to the U.S. in 1969. In 1974, the North American South Devon was
formed for the purpose of development, registration, and promotion of the breed.
The South Devon Today
The South Devon shares a common blood factor with the Zebu, and this relationship may account for its ability
to withstand the climates of the tropical countries to which they have been exported. The breed is now well
established on five different continents.
The South Devon was evaluated in Cycle 1 of U.S. MARC's Germ Plasm Evaluation program. Birth weight,
percent of unassisted births, and survival rate to weaning of South Devon crossbred calves did not differ from
Hereford x Angus calves. However, weaning weight, postweaning avg. daily gain, and final slaughter weight
were all lower than Hereford x Angus steers. Percent of South Devon carcasses grading USDA Choice (72.6%)
was the highest of all beef breeds, except for Shorthorn (74.7%). South Devon-sired carcasses were slightly
lower in backfat and higher in percent retail product than Hereford x Angus carcasses.
Age at puberty of South Devon-sired heifers was among the youngest (352 days) of the 26 breeds evaluated in
Cycles 1-4. Percent of unassisted calvings (85%) and weaning weights (492 lb) of calves from South Devon-sired cows were comparable to Hereford x Angus cows.
In summary, the South Devon is unique in its combination of providing a relatively lean carcass together with a
high degree of marbling.
The Sussex
Development in England
The Sussex breed was developed in the counties of Sussex and Kent in southeastern England. During the first
part of the 19th century, the farmers of Sussex started to select a beef type of animal from the red cattle they had
been using for draft purposes. By 1840, Sussex cattle were well known in the region as useful beef animals.
The Sussex was never used as a milk animal. In 1874, the herd book was established, and in 1879 the first
edition was published. A polled section was added in 1979.
Introduction to America
Sussex cattle were introduced to the United States in 1884 on the farm of Overton Lea in Tennessee.
Descendants of the Lea herd found their way to Texas. In the early 1890s, these cattle were used as a nucleus to
establish a small purebred herd on the ranch of T.D. Wood in south Texas. Later, during the period of a
drought, the Wood cattle were sold back to a breeder in Tennessee. Small herds of Sussex cattle were known to
exist during the first decade of the twentieth century in Tennessee, Oklahoma, Indiana, and Texas, but they
eventually disappeared. It is estimated that there are currently no more than 200 head of purebred Sussex cattle
in the United States.
In 1947, there was a small importation of Sussex cattle to south Texas. Then, over the period of 8 years, a
grandson of T.D. Wood imported a total of 44 females and 14 bulls to the state. In 1971, there were four Sussex
breeders in south Texas. They were largely producing purebred Sussex bulls to be crossed on Brahman cows.
This cross resulted in the establishment of a new breed called the Sabre, which was developed on the Lambert
ranch in Refugio, Texas. The Sabre is composed of Sussex and Brahman.
The Sussex Cattle Association of America was established in 1966 in Refugio, Texas.
The Sussex Today
The hair coat of the Sussex is a solid medium red that becomes curly in the winter. Only the tail switch is
white. In body type, the Sussex is quite thick and beefy. In England, mature cows weigh 1,300 to 1,500 lbs;
mature bulls 2,000 lbs or more. In South Texas, the Sussex is somewhat smaller than its English counterpart.
Although the Sussex is a strongly horned breed, a polled strain, based on the progeny of a Red Angus bull, has
been developed in England. They can be registered if they are 15/16 Sussex. Some of the Sussex cattle in the
United States also carry the polled gene.
It is interesting to note that the Sussex has been exported to southern Africa and other tropical regions of the
world because the breed adapts well to hot climates and resists tick-borne diseases.
The Welsh Black
Development in Wales
Prior to the days of modern transportation, Wales was relatively isolated from England, and there was little
communication even between North and South Wales. A distinct type of horned black cattle was developed
that is reported to trace back to the stock which the ancient Britons took with them as they were forced back
into the mountains by the invading Saxons.
Originally, there were two strains of Welsh Blacks, both of which were dual-purpose meat/milk cattle. The
cattle raised in North Wales were a small compact type, whereas those in South Wales were of a much larger,
rangier type. A herd book for each was established in 1883. The difference in type was partially due to the
difference in nutritional level between the two regions. The successful intermingling of the two types over a
period of about 90 years resulted in an optimum-sized animal that is now raised only for beef. Welsh Blacks
can be found throughout the U.K.
Introduction to North America
Welsh Blacks were imported to North America in the late 1960s and early 1970s. The greatest concentration
of Welsh Blacks is in the province of Alberta.
The Welsh Black Today
The majority of Welsh Black cattle are horned and black. They vary in color from rusty black to jet black.
Some white is permitted on the underline if it is back of the navel. The breed carries a low incidence of the
recessive red gene. Consequently, some cattle in the breed are red in color. There are also naturally polled
Welsh Blacks available in increasing numbers in both blacks and reds.
The winter haircoat of the Welsh Black is quite long and shaggy, which aids in its adaptability to the wet, cold
climate and the rough mountains and hill country of Wales. The Welsh Black is a moderate-sized breed, with
mature cows averaging about 1100 lbs. Like other British breeds, the Welsh Black is not as heavily muscled as
most Continental breeds of cattle. It is a relatively late-maturing breed that is slower to fatten than the Angus,
Hereford, or Shorthorn. However, they are an extremely long-lived breed, and many are at their best when 10
to 14 years old.
The White Park
The Abondance

Development in France
Like the Montbeliard and Pie Rouge, the Abondance originated from Simmental cattle brought to France from
Switzerland. It was developed during the mid- and late-1800s in the provinces of France just west of the Swiss
border. The Abondance Breeding Association was established in 1894 and has been working with the Eastern
Red and White Association since 1945.
The Abondance Today
Like the Montbeliard, milk production has been emphasized in the selection criteria followed by the French
Herd book Association. Consequently, Abondance cows tend to give greater milk yields than the Swiss cattle.
Average yield ranges from 8,800 to 11,000 lbs., with 3.7% butterfat content. In body size, the Abondance is the
smallest of the Simmental derivatives. Mature cows weigh from 1,200 to 1,375 lbs, mature bulls from 1,985 to 2,425 lbs. Frame size is 4.5 to 5.0 on a scale of 1 to 9. It is similar in muscle thickness to the Montbeliard.
Compared to the other Simmental derivatives, its hair coloring is solid red over most of the body. It has a white
underline and face. A few individuals have a white topline. A red color patch around the eyes is typical. The
Abondance accounts for approximately 2% of the French cattle population.
The Amorican (Amoricaine)
Development in France
Beginning in 1840 on the Brittany peninsula in the northwest of France, two breeds were developed by crossing Shorthorn bulls imported from England with local cattle: the Maine-Anjou and the Amorican. The two breeds have been maintained separately and still have different herd books.
The Amorican was developed from the crossing of English Shorthorn (Durham) bulls on the native Brittany red
and white, draft/milk cattle and then interbreeding the progeny. The percentages of these two base breeds are
not known. The size of the local cattle that made up the composition of the Amorican are said to have been
somewhat smaller than those from which the Maine-Anjou breed was derived. A herdbook was established in
1919.
The Amorican Today
The Amorican was developed as a dual-purpose milk/meat breed, although it tends to be more dairy than beef
in its conformation. Average milk production of recorded Amorican cows is about 5,900 lbs containing 3.62%
butterfat. Mature cows weigh from 1,400 to 1,600 lbs, and mature bulls approximately 2,400 lbs. The
Amorican is usually a solid dark red, but some individuals have white markings on the underline and lower legs.
In attempts to improve milk production, while at the same time retaining good beef conformation and rapid growth, Meuse-Rhine-IJssel bulls from Holland and some Rotbunte bulls from Germany were used on the Amorican starting in 1963. From this consolidation a new red breed was developed, the Rouge de l'Ouest (Red of the West).
The Aubrac
Development in France
Development of the Aubrac breed started during the 1600s in the province of Aveyron in south central France at
the Benedictine Abbey of Aubrac. Controlled breeding was practiced by the monks until the Abbey was
destroyed during the French Revolution. Selective breeding was promoted by the French government between
1840 and 1880, with Brown Swiss blood used to improve the breed. The breed was developed primarily for
meat and draft purposes, but milk production became increasingly important during the early 1900s, resulting in
greater selection for this trait. Consequently, they eventually became regarded as a triple-purpose breed
(meat/milk/draft).
The region in which the Aubrac was developed is defined as having a modified Continental climate. It is a
mountainous, semi-desert region. The cattle are grazed on open mountain pastures on the high plateaus from
late May to mid October. In the summer months, the wind-swept plateaus are quite hot during the day and cold
at night; the range in temperature may be as much as 50 F. During the winter months, the cows are stall-fed
hays and straws supplemented with rye meal and oilcakes. The conditions under which the Aubrac was
developed indicate that it is adaptable to a wide range of environments.
The Aubrac Today
Aubrac cattle are of moderate size, with mature bulls averaging about 1,820 lbs and mature cows about 1,275
lbs. Frame score averages about 5 on a scale of 1 to 10. Calves are relatively small at birth, averaging 60 to 65
lbs. Aubrac coat color ranges from light yellow to brown, and is darker on the shoulders and rump. In body
type, the Aubrac is very thick, stout, and heavily muscled. It is relatively compact in its make-up.
The Bazadais
Development in France
The Bazadais breed is located in the Gironde-Landes area in southwest France. The exact origin of this breed is
unknown, but it has been found in the region for centuries. The popularity of the Bazadais started to increase in
the late 1800s. It steadily increased in numbers until World War II. Although a herd book was established in
1895, a breed society was not formed until 1976.
The Bazadais Today
The Bazadais was originally used as a work animal, but has gradually been transformed into a beef breed. It is
a moderately large sized breed. Mature cows average about 1,435 lbs in weight; mature bulls approximately
2,100 lbs. Frame size is 5.5 to 6.0 on a scale of 1 to 10. Color ranges from a medium to dark gray. In total
numbers, it is not a major breed in France.
The Béarnais

Development in France
The Béarnais is one of two strains of the Pyrenean Blond breed, the other being the Lourdais. The Béarnais is
an ancient breed that was originally selected more for its aesthetic attributes (especially for their horns) than for
reasons of productivity. The breed originated in the southwest corner of France in the Pyrenean mountain
region adjacent to the Spanish border. It was developed as a triple-purpose draft/meat/milk breed. In recent
times, increased selection pressure has been placed on beef production. A herd book was established in 1981.
The Béarnais Today
Compared to its relative, the Blonde d'Aquitaine, the Béarnais is not as large or muscular. Average frame score is 5 on a range of 1 to 9. It is a medium blond in color. The breed has declined in number to the point that it is in danger of extinction. As a result, most of the Béarnais cattle are owned by a conservatory known as the Conservatoire des Races d'Aquitaine. A few Béarnais are still milked by some mountain farmers. The cows produce only 15 to 18 lbs of milk per day. The milk is used for making a cheese that is similar to the cheese made from sheep's or goat's milk.
The Blonde d'Aquitaine
Development in France
The Blonde d'Aquitaine was recognized in France in 1962 as a distinct breed, having been formed from an
amalgamation of three breeds: The Garonne, the Quercy, and the Blond Pyrenean. These breeds occupied the
southernmost region of France, not far from the Pyrenees mountains which border northern Spain. These
breeds trace to cattle that were in the region during the Middle Ages, when blonde cattle were used to pull carts
carrying weapons and other goods.
At the beginning of the 19th Century, the Garonne breed occupied a vast area of the region. Intense selection for
a preferred type began in 1820. To improve functional characteristics, English Beef Shorthorns were
introduced around 1860. Because working ability was of utmost importance and the Shorthorns had not
preserved this characteristic, breeders started to cross with the Charolais, but quickly abandoned this and used
Limousin bulls. Finally, this was followed by selection back toward the original type, which was primarily
valued as a draft animal but also for its meat and milk.
Introduction to America
The first Blonde d'Aquitaine cattle were imported into North America in 1972. The breed had the misfortune
of arriving in North America after numerous other Continental breeds had preceded it, starting in 1968.
Consequently, this has prevented it from gaining strong foothold in the U.S.
The Charolais

Development in France
The Charolais originated in west central France in the Charolles district and in several neighboring departments
(provinces). The breed is said to have descended from an ancient cream-colored ancestral form that probably
had much in common with the Simmental cattle of Switzerland and Germany. The foundation stock of the
present breed were crossed to a limited extent with white Beef Shorthorn cattle from England. Like most other
cattle of Continental Europe, the Charolais was used for draft, milk, and meat. Compared to cattle breeders in
the British Isles, the French have long selected for size and muscling. They selected for more bone and power
than did the British breeders. They paid little attention to refinement because they were more focused on
substance and strength.
Two different Charolais herd books were established, one in 1864 and another in 1882. In 1919, the two
societies were merged. The popularity of the breed increased steadily throughout the 20th century. Presently,
Charolais are distributed throughout 62 departments of France and have been exported to more than 70
countries. Of the 17 major breeds in France, Charolais is the fourth largest in numbers and accounts for about
9% of the total cattle population. In their native country, they are no longer a multiple purpose breed. Instead,
they are raised solely for meat production. Mature bulls weigh about 2,600 lbs, and mature cows
approximately 1,875 lbs.
The town of Nevers in France annually holds a livestock exposition. As early as 1849, Nevers had a special
showing for Charolais cattle. Three years later, in 1852, the National Exposition at Versailles opened a special
section for the breed, and a 21-month-old Charolais entry was selected as the Overall Grand Champion Bull.
The Charolais Herd Book of France was initiated in 1864.
Establishment in America
The first Charolais came to the U.S. by way of Mexico. Jean Pugibet, a Mexican industrialist of French
ancestry, imported two bulls and ten females in 1930. He made two more importations from France, one in
1931 and another just prior to his death in 1937. In 1936, the King Ranch purchased two bulls from Pugibet
and brought them to their Texas ranches. Following this, at least six other southern ranches and possibly others
imported Charolais cattle from Mexico.
There was a foot and mouth disease (FMD) outbreak in Mexico during the mid-1940s. Imports were then
banned from Mexico as well as other countries that had FMD. In 1966, the ban was lifted and there were two
importations from France through Canada. That same year, two small importations came from the Bahamas
and one from the French island of St. Pierre Miquelon. These early importations were used to cross on existing
U.S. breeds in a grading-up process. When cattle reached the percentage of 31/32 (five generations), they were
registered as purebreds. During this period of time, semen from full French Charolais bulls in Canada was also
used for the grading-up process.
The Charolais Today
Some of the early full French bulls used in the U.S. sired extremely large calves that resulted in an unduly high
percentage of assisted births. Discontinued use of these bulls eventually corrected much of this problem. U.S.
and Canadian breeders selected for cattle that were not as coarse-boned or extreme-muscled as the imported
French cattle. The end result of this selection pressure is a Charolais breed that more ideally fits the needs of
the North American beef industry.
Even though the current Charolais is a smoother, more refined animal than its French counterpart, it still excels
in growth traits. Out of 26 breeds evaluated in the first four cycles of U.S. MARC's GPE study, Charolais-sired
calves ranked first overall in 200-day weaning wt., postweaning avg. daily gain, final slaughter wt., carcass wt.,
and pounds of retail product. Like other continental breeds, the Charolais does not have the degree of marbling
of the British breeds. However, out of 11 continental breeds evaluated by U.S. MARC, Charolais-sired steers
ranked fourth, with 59% grading USDA Choice. In a later evaluation, steaks from Charolais-sired steers did not
differ from British steaks in shear force or sensory tenderness scores. Although Charolais females do not milk
as heavily as the dual-purpose Continental breeds (Simmental, Gelbvieh, etc.), they nevertheless produce
enough milk to allow the calves to express their genetic ability to gain weight rapidly.
Among the nine breeds evaluated for feed efficiency (live wt. gain per unit metabolizable energy consumed),
Charolais ranked third when fed to a constant endpoint of 465 lbs of retail product.
Most Charolais are white in color, but currently there are some red Charolais cattle being propagated, primarily
in Canada.
The American International Charolais Association was established in 1957. In 2007-08, the association
registered 75,569 cattle, which ranked second among all beef breeds and first among the continental breeds.
The Gasconne (Gascony)
Development in France
The Gasconne evolved from ancient indigenous types of cattle in the extreme south of France in the foothills of
the French Pyrennes mountains. This region is known as Gascony, from which the breed takes its name. The
Gasconne is related to the Blonde dAquitaine and the Piedmontese to which it bears some resemblance.
There are two types of Gasconne cattle: 1) a larger type, the Gasconne aréolée, which is light gray and has a light muzzle; and 2) a smaller type, the Gasconne à muqueuses noires, which is darker gray and has a black muzzle.
Separate herdbooks were established in 1856 and 1894, respectively, but were combined into a single herdbook
in 1955.
The Limousin

Development in France
The Limousin breed originated in the province of Limousin, now the departments (provinces) of Haute-Vienne
and Corrèze, in west central France. It is believed that the Limousin may share some ancestry with the Blonde d'Aquitaine breed. Breed development began in the late 1600s and was well established by the late 1700s.
Selection was for good draft qualities as well as meat production. Therefore, the cattle were primarily
developed as a dual-purpose draft/meat breed and not many were milked. These early cattle were described as
strong, fast, and active.
It is documented that cattle from the Limousin region were valued for their well-muscled beef characteristics in
the 1700s and were in strong demand in the Paris market. Toward the end of the 1700s, an unsuccessful attempt
was made to cross Limousin with Durham cattle from England. The resulting Durham strains were bred out
during the 19th century, thereby allowing the Limousin to recover its muscularity and regain its popularity. The
French palate has never acquired a taste for highly marbled beef; it has always demanded a lean, low-fat product. The French herd book was started in 1864.
The popularity of the Limousin flourished during the first 30 years of the 20th century. This was followed by a
period of decline, between 1930 and 1960. Much of this decline can be attributed to World War II, which was
very nearly fatal to the breed. Starting in the early 1960s, the breed rebounded due to the tenacity of twenty or
more influential breeders, who formed an organization known as EPPA, headed by a dynamic personality, Mr.
Louis de Neuville. These people were open-air breeders who raised their cattle outside, which was not a
common practice at the time. Other breeder groups were formed and by 1979, Limousin cattle had spread to 49
departments. There was even further growth from 1979 to 1988, when the breed grew by 10% in its own region
and by 28%, 46%, and 78% in three other regions. By 1988, the Limousin was the second leading beef breed in
France. During the 20th century, the Limousin increased considerably in muscling and weight. At the turn of
the century, the average cow weight was only 935 lbs, while today it is 1,325 lbs. Today, mature bulls average
about 2,100 lbs.
Establishment in America
The first Limousin animal exported from France to North America was a bull named Prince Pompadour. He
was purchased from Emile Chastanet, an outstanding breeder who was a member of ELPA. Chastanet had in
turn purchased him from the Pompadour Breeding Research Center owned by the French government. The bull
was exported to Canada in 1968, the same year that the North American Limousin Foundation was established
in Denver, Colorado. Prince Pompadour was not only an outstanding individual, but he also turned out to be a
great sire. His semen was used extensively in the U.S. and Canada, and he left an estimated 60,000 progeny.
His daughters provided the basis for many foundation herds in North America.
In 1969, six bulls and four females were imported by the Canadian Department of Agriculture. Five of the bulls
were leased to AI organizations. They were: Décor to ABS; Danseur and Dary to Prairie Breeders; and Dandy and Diplomate to Bov Import, Inc. Of these five bulls, Décor, who was bred by the Pompadour Research
Center, sired the most progeny.
The Limousin Today
The Limousin has enjoyed broad acceptance throughout North America primarily because of its efficient
production of red meat. It is not a high growth breed when compared with three other major Continental breeds
(Charolais, Gelbvieh, and Simmental). Recent data from Cycle VII of U.S. MARC's GPE program shows that
Limousin-sired calves had lower weaning weights than these three breeds. Postweaning avg. daily gain was
lower than three British breeds (Hereford, Angus, and Red Angus), resulting in the lowest slaughter and carcass
weights of the seven breeds evaluated. However, its high dressing percentage and high percent retail product
yield resulted in pounds of retail yield that were significantly greater than the average of the British breeds (504
vs. 481 lbs) and comparable to the other three continental breeds. Numerical yield grade was significantly
lower than the average of the British breeds (2.43 vs. 3.36) and similar to the other continental breeds.
As expected, percentage grading USDA Choice was significantly lower than the British breeds (56.9 vs.
81.0%). However, Limousin rib steaks did not differ from British rib steaks in either shear force or sensory
panel tenderness scores.
Estimates of feed efficiency (live wt. gain per unit metabolizable energy consumed, lb/Mcal) to six different
slaughter endpoints were greater for Limousin-sired steers than the other Continental breeds for three of the six
endpoints.
Age at puberty was significantly greater for Limousin-sired heifers than all other breeds. However, final
pregnancy rate (87%) did not differ significantly from the other six breeds. Calf wt. weaned per cow exposed
for first calves at 2 years of age was not significantly different for Limousin-sired heifers from other breeds.
The original color of Limousin was red, but a high proportion of Limousin cattle in North America are now
black.
The North American Limousin Foundation was established in 1968. In 2007-08, the Foundation registered
37,742 cattle, which ranked sixth among all breeds and third among continental breeds.
The Lourdais
Development in France
The Lourdais is one of the two strains of the Pyrenean Blond breed, the other being the Béarnais. Like the
Béarnais, it was developed in the southwestern corner of France as a triple-purpose draft/meat/milk breed.
However, the Lourdais eventually became recognized as more of a milk producer than was the case for the
Béarnais. Lourdais cows could be expected to produce about 45 lbs per day compared to less than 20 lbs for the
Béarnais.
The Maine-Anjou
Development in France
The Maine-Anjou originated in the northwestern part of France. This region is excellent for beef production as
it has both pasture land and fertile tillable land. At the beginning of the 1800s, the cattle in this region were
large, well-muscled animals with light red coats spotted with white. These cattle were known as the Mancelle
breed. In addition to their size and marbling, the breed had a reputation for easy fattening.
In 1839, the Count de Fallou, a land owner, imported Durham (Shorthorn) cattle from England and crossed
them with the Mancelle. This cross was very successful, and by 1850, Durham-Mancelle cattle were winning
championships at the French agricultural fairs. In 1908, the Society of Durham-Mancelle breeders was
established at the town of Chateau-Gontier in the Mayenne district. A year later, the name was changed to the
Society of Maine-Anjou Cattle Breeders, a name taken from the Maine and Anjou River valleys. The breeders
were mostly small farmers whose goal was to maximize income from their small parcels of land. For this
reason, the Maine-Anjou evolved as a dual-purpose breed, with the cows used for milk production and the bull
calves fed for market. However, their milk production is not high. In some herds, half the cows are milked and
the other half are used to raise two calves each. Average milk production is about 5800 lb over a 300-day
lactation. Today, the breed is used primarily for beef production.
The Maine-Anjou is the largest breed in France, even larger than the Charolais. Mature bulls average about
2,750 lb and mature cows approximately 1,985 lb. The coat color is dark red with white markings on the head,
underline, hind legs, and tail. A large proportion of red with minimal white is the preferred color in France.
Among the 17 major cattle breeds in France, the Maine-Anjou ranks seventh in number of registered cattle.
Establishment in America
The first Maine-Anjou cattle imported into North America came to Canada in 1969. Semen from the imported
Canadian bulls was then used in the United States on British-bred cows in a grading up process to attain
purebred status. An animal was considered purebred if it consisted of 15/16 Maine-Anjou blood or higher,
which would require four generations of Maine-Anjou sires. In 1996, the requirement was changed to 7/8 to
achieve purebred status.
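The arithmetic behind the 15/16 requirement: each topcross with a purebred sire halves the remaining non-Maine-Anjou fraction, so four generations are needed to reach 15/16. A quick sketch of that progression (the helper function is illustrative, not part of any association's rules):

```python
# Sketch of grading up: each generation's calf gets half its genes from a
# purebred sire (fraction 1.0) and half from its dam.

def grade_up(generations: int) -> float:
    """Breed fraction after `generations` purebred-sire topcrosses,
    starting from a foundation cow with 0% Maine-Anjou blood."""
    fraction = 0.0
    for _ in range(generations):
        fraction = (1.0 + fraction) / 2.0  # sire contributes half, dam half
    return fraction

for g in range(1, 5):
    print(g, grade_up(g))  # 1: 0.5, 2: 0.75, 3: 0.875, 4: 0.9375 (= 15/16)
```

Under the same logic, the later 7/8 standard requires only three generations of purebred sires.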
The Maine-Anjou Society was incorporated in Nebraska in 1969, and included both American and Canadian
members. In 1971, the name was changed to the International Maine-Anjou Association and headquarters were
set up in the Livestock Exchange Building in Kansas City, Missouri. In 1976, the name was changed to the
American Maine-Anjou Association. In 2001, the association purchased a building in Platte City, Missouri as
its permanent headquarters.
In 1973-74, semen from the early imported bulls was used on the Hereford-Angus cow herd at U.S.
MARC in its Germ Plasm Evaluation (GPE) program. Average birth weight of Maine-Anjou sired progeny was
the second highest of the 25 breeds evaluated, and the percentage of assisted births was the highest. The calves
were very growthy, ranking second in postweaning avg. daily gain and final slaughter wt. Only 49.5% of the
Maine-Anjou sired carcasses graded USDA Choice compared to 64.7% for Charolais crosses and 74.5% for
Hereford-Angus. Other carcass traits (fat thickness, ribeye area, % retail product, etc.) were similar to the other
Continental breeds. The female progeny of Maine-Anjou sires were excellent in reproductive traits, comparable
to Hereford-Angus females and the best of the continental breeds.
The Maine-Anjou Today
Today's American Maine-Anjou is an improved breed over its imported ancestors. It has been successfully
altered to better meet the needs of the North American beef industry. Through selection, it has been refined and
downsized in its skeletal make-up without compromising its propensity to gain rapidly from birth to slaughter.
The breed's ability to marble and produce USDA Choice grade carcasses has been improved without sacrificing
its ability to yield a high percentage of lean retail product. In a recent feedlot study, two groups of Maine-Anjou-sired
steers weighing 1,265 and 1,378 lbs, respectively, had carcasses that performed as follows: 86%
and 97% graded Choice or higher; yield grades were 2.77 and 3.42; premiums/head were $30.42 and $23.34.
Calving difficulty is much less of a problem today than it was during the early days of the imports. The Maine-Anjou
has always been known for its excellent disposition and today's cattle are no exception. Like most other
Continental breeds in America, a high percentage of Maine-Anjou cattle are now black instead of the original
red and white spotted color.
In fiscal 2007-08, the American Maine-Anjou Association registered 12,316 cattle, which ranked sixth among
continental breeds.
The Montbeliard
Development in France
The Montbeliard was developed in eastern France as a derivation of the Swiss Simmental. It was brought from
Switzerland to France by the Mennonites during the 18th century. Originally known as the Alsatian breed, the
present name comes from the principality of Montbeliard, where it was developed. The Montbeliard herd book
was established in 1880. The Mamet family of Haute Doubs were the leaders in the selective breeding of these
cattle, especially in the improvement of their milking qualities.
During the 20th century, Montbeliard cattle were exported from France to the French Cameroons and from there
throughout the world. By 1979, nearly 1 million Montbeliard cows were reported worldwide.
The Montbeliard Today
The Montbeliard was developed as a dual-purpose milk/meat breed, but has been gradually developed into
primarily a dairy breed. As a result, it is more refined in its conformation and not as thickly fleshed as the Swiss
Simmental or the Pie Rouge of France. Consequently, it is lighter in body weight. Mature Montbeliard cows
average about 1,500 lbs, mature bulls about 2,300 lbs. Frame size is approximately 6 on a scale of 1 to 9. Milk
yield of recorded cows averaged 8,750 lbs in 1961. Twenty years later, in 1981, average milk yield increased
dramatically to 12,240 lbs. Butterfat content of the milk is about 3.65%.
The Montbeliard has the white face of the Swiss Simmental, but the color patches on the body are almost
always dark or bright red instead of the tan or light yellow predominant among the Swiss cattle. The French
breeders have selected in favor of the darker color, and discriminate against the light reds and yellows. The
Montbeliard accounts for about 6% of the total cattle population in France.
The Normande
Development in France
The Normande breed originated from cattle that were brought to Normandy in northwestern France by the
Viking conquerors in the 9th and 10th centuries. For over a thousand years, these cattle evolved from an earlier
type of draft cattle into a dual-purpose meat/milk breed. The breed was influenced to some extent by the
crossing with Shorthorn bulls brought in during the late 1800s.
Although the breed was decimated by the Allied invasion of Normandy during World War II, the Normande has
rebounded to the point that there are currently 3 million of them in France, accounting for about 20% of the
nation's cattle population. They are located primarily in the northwest of France, but the breed is also present in
significant numbers in the center of the country. In France, they play an important role in providing milk for the
production of Camembert cheese.
Normande cattle have been exported world-wide, but their greatest acceptance has been in South America,
where the breed was introduced in the 1890s. Colombia has the greatest number of Normandes, with the
remainder primarily in Brazil, Ecuador, Paraguay, and Uruguay. They are a highly adaptable, hardy breed and
have done well in beef operations in the Andes Mountains at elevations up to 13,000 feet.
Introduction to America
Normande cattle were brought to North America from France during the late 1960s and early 1970s, when a
great wave of other Continental breeds was introduced as well. Unfortunately, the breed has not
flourished in North America like it has in its native country. The reason for this may be the fact that the most
popular continental breeds tend to have more muscle, or more growth, or both, than the Normande.
The Normande Today
The Normande is a medium frame size breed with most cows weighing 1,200 to 1,500 lb, and bulls from 2,000
to 2,400 lb. While selection in purebred Normande herds is mainly for milk production, much attention is also
paid to beef production. Milk production averages about 7,500 lbs per lactation, with some cows producing
10,000 lb or more. Butterfat content averages 4.1%. In North America, where the breed is used strictly for beef
production, purebred and crossbred Normande cows produce calves with weaning weights that range from 500
to 700 lb.
The color of the Normande is dark red, brown, or black, distributed in patches on a white background. Within
the breed there is a wide variation in color, from nearly all white to mostly colored. The face is mostly white,
with small colored patches around the eyes, giving the spectacled appearance for which the Normande is
known.
The Parthenaise
Development in France
Parthenaise cattle are of very ancient origin, and existed in western France for hundreds of years. They
experienced a period of popularity in France after winning first prize in 1853 at the National Cattle Show of
Paris. By 1890, a well-designed breeding program had been initiated, and a herd book was established in 1893.
In the early part of the 20th century, there were at least 1 million cattle in the breed. Since then, the Parthenaise
has gradually declined in numbers, and has been increasingly replaced by Charolais, Friesian, and Normande
cattle.
Introduction to America
Parthenaise cattle were brought to Canada after many of the other Continental breeds had already been
introduced. A Canadian herd book was established in 1993.
The Parthenaise Today
In its native country of France, the Parthenaise was transformed from a triple-purpose draft/milk/meat breed
into a dual-purpose meat/milk breed. Average milk yields are relatively low at about 6,600 lbs in a lactation.
Fat content, however, is relatively high at 4.4%. Consequently, the milk has been valued for butter making.
The Parthenaise breed is moderate in size. Mature bulls average about 2,300 lbs; mature cows about 1,400 lbs.
Frame size would score about 5 on a 1 to 9 scale. The Parthenaise is a heavy-muscled breed, similar to the
Limousin and Charolais. As a result, lean meat yield (cutability) from the carcass is quite high.
Color-wise, the Parthenaise is reddish buckskin with black skin pigmentation. However, the underline and
insides of the legs tend to be pearl gray in color. The coat color is darker in the male than in the female. The
breed accounts for about 1.7% of the cattle population in France.
The Pie Rouge de l'Est (Eastern Red and White)
Development in France
The Pie Rouge has been reported to have originally evolved from the ancient Jurassic brachycephalic breed,
and more recently from the old Franche-Comte breed, which until the late 1800s was found only in the Jura
Mountains. During the 20th century, a reorientation of breeding practices occurred, and the breed evolved from
a crossing of the Swiss Simmental with a number of old varieties including the Haute Bresse and the Doubs
(yellowish, coarse, dual-purpose cattle of the Jura Mountains) and the more refined Franche-Comte. It has
been reported that some Montbeliard and Abondance blood was also used in the development of the Pie Rouge.
The Salers
Development in France
The Salers breed originated in south central France in the Auvergne region, which is characterized by poor
volcanic soil, very hilly terrain, high rainfall, and long winters (6 to 7 months). Because of this environment, it
is nearly impossible to grow cereal grains. Consequently, the cattle are fed almost entirely on native grasses in
the summer and hay in the winter. The breed, which has been raised in the region for a very long time, took its
name from Salers, a small medieval town in the heart of the volcanic area of the region. The breed is
considered to be one of the oldest and most genetically pure of all continental breeds.
Until modern times, the Salers was considered a triple-purpose meat/milk/draft breed. Today, it is used
primarily as a beef breed in France, although some are still being milked for the purpose of cheese production.
When milked, the average production of registered Salers cows is about 5,700 lbs in a 265-day lactation period.
The fat content averages 3.7%.
Until the 1960s, Salers cattle were found only in a few provinces in the region. Since then, they have spread
throughout France. The small herds in the south of the Auvergne region that were used to produce veal and
milk are now specializing in milk production. At the same time, some large dairy herds in other regions are
changing over to cow-calf production.
In France, the Salers is considered to be a medium- to large-sized breed, with mature cows averaging about 1,300
lbs and mature bulls about 2,100 lbs in breeding condition. The hair color is typically a uniform dark mahogany
red and often slightly curly. The horns curl outward and forward with the tips turned upward and backward.
The Salers herd book was established in 1908.
Establishment in America
The Salers was one of the last continental breeds to be imported into North America. The first Salers bull,
Vaillant, was imported into Canada in 1972. His semen was sold both in the U.S. and Canada. The American
Salers Association was founded by fourteen cattlemen in Minneapolis, Minnesota. The first Salers imports
made directly into the U.S. came in 1975, with the arrival of one bull and four heifers. From 1975 to 1978, 52
heifers and 6 bulls reached the U.S. and more than 100 Salers cattle arrived in Canada. These cattle were the
foundation of the breed in North America.
The Salers Today
The Salers breed has expanded and can now be found in nearly every state in the U.S. Because of its
adaptability to extensive range conditions, many Salers-cross cattle are concentrated in western range country.
The breed was evaluated in Cycle IV of the Germ Plasm Evaluation (GPE) program at U.S. MARC. Birth wt.
of Salers-sired calves (80.9 lbs) was similar to that of Hereford x Angus crosses. The percent of unassisted
births (95.2%) was among the highest of the 26 breeds evaluated in the first four cycles of GPE. Average
weaning wt. was also among the highest of the 26 breeds evaluated. Postweaning avg. daily gain (2.70 lbs) and
final slaughter wt. (1,148 lbs) were similar to Simmental- and Maine-Anjou-sired steers.
Percent of Salers carcasses grading USDA Choice (49.5%) was lower than Simmental, Braunvieh and
Pinzgauer, but comparable to Tarentaise, Maine-Anjou and Gelbvieh. Pounds of retail product (478) was
similar to the Chianina and Maine-Anjou and greater than all other breeds except Piedmontese and Charolais.
Age at puberty for heifers (365 days) was comparable to Hereford x Angus crosses, and pregnancy rate (89%)
was among the highest of all breeds. Percent of unassisted calvings from Salers-sired cows (92%) was also
among the highest of all breeds. Although it has not been objectively evaluated by scientific research, the breed
has been known for its nervous temperament. However, selection for docility in recent years has led to
improvement in this trait.
In 2007-08, the American Salers Association registered 14,399 cattle.
The Tarentaise
Development in France
The Tarentaise breed has descended from an ancient Alpine strain of cattle. It originated in southeastern France
in the province of Savoie, which was the site of the 1992 Winter Olympics. This is a rugged mountainous
region, where temperatures range from an average of about 32°F in mid-winter to about 65°F in mid-summer.
In France, the Tarentaise is used solely for milk production for the making of Beaufort cheese. Cows produce
approximately 12,000 lbs of milk in a 305-day lactation with no concentrates fed in the summer. The milk
averages about 3.6% fat. Cows are managed under intensive grazing management in the summer and kept in
the barn from October through April because of snow and the danger of avalanche. Their basic ration is hay,
and sometimes haylage. Only the high-producing cows get up to 5 lbs of concentrates daily, and then only for
the 6 weeks prior to AI breeding season. Most calving and breeding occurs in winter. In May, the cows are
turned out on lush pastures at 2,500 feet. In June, they are moved to high, steep pastures at an average elevation
of 8,000 feet where daily temperatures often swing from below freezing to highs above 80°F.
The first Tarentaise in North America were imported into Canada in 1972. A year later, they were introduced to
the U.S. They have also been exported to Equatorial Africa and the Indian sub-continent, where they are used as
dairy cattle. It is obvious that the breed can adapt to a wide range of environments, from the Alps to deserts and
from dry plains to humid coasts.
The Tarentaise Herd Book was started in 1888. It was revised in 1922 with more severe standards and more
enforced rules. Females are not registered until they have produced at least 5,700 lbs of milk in their first or
second lactations.
The Tarentaise Today
The Tarentaise is a moderate sized breed, with cows weighing between 1,200 and 1,300 lbs, and bulls ranging
from about 1,800 to 2,200 lbs. The cattle in the valleys tend to be somewhat larger than those in the mountains.
The Tarentaise cattle have yellowish fawn-colored hair, which is darker in males than in females. The muzzle
is black in color. Black hairs are normal on the ears, the poll, and also occur on the tail. The body orifices are
also black colored. The udder quality of Tarentaise cows is exceptional. The udder is firmly attached both fore
and rear, and teat size is small. Pendulous udders and balloon teats are virtually never seen.
The Tarentaise was evaluated in Cycle III (1975-76) of the Germ Plasm Evaluation (GPE) program at U.S.
MARC. Birth wt. and percent of unassisted births were similar to Hereford x Angus crosses. Percent survival
to weaning (94.0%) was the highest of all 27 breeds evaluated in the GPE program. Weaning wt. was not
significantly different from Hereford x Angus crosses, but postweaning avg. daily gain and final slaughter wts.
were somewhat lower than Hereford x Angus crosses (2.49 vs. 2.74 and 1079 vs. 1152). Percent of Tarentaise
carcasses grading USDA Choice (49.3%) was similar to that of four other continental breeds (Gelbvieh,
Maine-Anjou, Salers, and Limousin). The same was true for fat thickness (0.42 in.) and percent retail product (69.2%).
Pregnancy rate of Tarentaise heifers (94.4%) was among the highest of all breeds evaluated.
Research conducted at South Dakota State Univ. showed that Tarentaise x Hereford crosses were 10% more
feed efficient than the other breeds evaluated (straightbred Hereford, Simmental x Hereford, and Angus x
Hereford). This, together with their carcass leanness, resulted in the lowest amount of total cow/calf feed
required per pound of retail product produced.
In fiscal 2007-08, the American Tarentaise Association registered 1,500 cattle.
BELGIUM
The Belgian Blue
Development in Belgium
The Belgian Blue originated in central and upper Belgium and, at one time, accounted for nearly half of the
cattle in the national herd. During the last half of the nineteenth century, native red spotted and black spotted
cattle were repeatedly crossed with Dutch Friesians and with English Shorthorns, from which they acquired
their roan gene. At the turn of the twentieth century, certain French breeds, particularly the Charolais, were
used on these native cattle. From this heterogenous population, an improved dual-purpose meat/milk type of
animal of blue and white color was established in the 1890s and early 1900s.
During the late 1950s, a debate arose among the breeders whether to maintain the dual-purpose type as it was or
to select for more muscling. The muscling group won out. During this period, three prominent AI sires were
used heavily, thereby establishing the desired type within the breed. Because the newborn calves are so heavily
muscled, a relatively high percentage of them are taken by cesarean section.
Introduction to America
Belgian Blue cattle were introduced to North America during the wave of importation of continental cattle in
the 1970s. However, it has not experienced the popularity of many other continental breeds in the U.S.
The Belgian Blue Today
European comparisons between the Belgian Blue and the Charolais found the Belgian Blue to have a greater
muscularity, milk yield, and daily gain. As might be expected, the Belgian Blue performed lower in calving
ease and calving percentage. It is a large breed, with females averaging about 1,600 lbs, and bulls about 2,500
lbs.
Belgian Blue half-bloods were evaluated in Cycle V of the Germ Plasm Evaluation program at U.S. MARC.
Other breeds evaluated were Hereford, Angus, Brahman, Boran, Tuli, and Piedmontese. Percent of unassisted
births was the lowest (92.8%) of all breeds except for Brahman (88.7%), but calf survival to weaning (95.8%)
was comparable to the average of the British breeds. The same was true for weaning wt. (526 lb).
Postweaning avg. daily gain (2.80 lbs) and final wt. (1248 lbs) were lower than the average of the British
breeds (2.98 and 1274 lbs, respectively) but higher than that of the other six breeds in Cycle V.
Percent of carcasses grading USDA Choice (23.8%) was the lowest, but ribeye area (13.34 sq in) was the
largest of all breeds. Percent of retail product (69.3%) was highest of all breeds except for Piedmontese
(71.0%). Pounds of retail product (508 lbs) was considerably higher than the other breeds. However, meat
tenderness, as measured by Warner-Bratzler shear force (10.7 lbs), was similar to the Hereford (10.6 lbs). This
is not surprising, because the Belgian Blue carries the same mutant myostatin gene as the Piedmontese.
Age at puberty (348 days) was the youngest of all breeds except for the Piedmontese (348 days), but pregnancy
rate of heifers (85.0%) was the lowest of all breeds except for the Brahman (84.2%). Age at puberty of bulls
(325 days) and scrotal circumference (31.0 cm) were similar to the average of the two British breeds (320 days
and 31.8 cm, respectively).
First-calf half-blood Belgian Blue heifers did not differ significantly from the average of Hereford and Angus
for calf birth wt., percent unassisted births, percent of calves born and weaned, weaning wt., and pounds of calf
weaned per cow exposed. However, mature half-blood cows were higher than all other breeds for calf birth wt.
(94 lb) and lower for percent calf crop born (89.2%) and weaned (79.0%). Calf weaning wt. (502 lbs) was
higher than all other breeds except for Brahman (516 lbs).
An interesting comparison was made in Cycle V between progeny (both steers and heifers) of Belgian Blue and
Charolais sires. Following is a brief summary of the statistically significant differences in postweaning growth
and carcass traits that were observed.
Charolais-sired heifers were higher in postweaning avg. daily gain, final slaughter wt., and carcass wt.
Belgian Blue-sired heifers had a numerically lower yield grade (1.66 vs. 1.84).
Belgian Blue-sired steers had a higher percentage of carcasses that graded USDA Standard (7.5 vs. 0.3%).
Belgian Blue-sired steers had greater carcass fat thickness (.288 vs. .237 in.), but did not differ in other
carcass traits.
In summary, the Belgian Blue can be characterized as a terminal sire breed that would be expected to increase
pounds of retail product without jeopardizing meat tenderness.
The Red and White Campine
Development in Belgium
The Red and White Campine breed developed from the fusion of ancient north European cattle types, which
inhabited the northeast corner of Belgium, with various strains of imported stock. Development is reported to
have begun in 1838. Improvement was made by the introduction of British Shorthorn blood from 1844 to 1851.
Additional improvement was made with the introduction of Meuse-Rhine-Ijssel cattle from the Netherlands,
which reached a peak during the period of 1878 to 1888. There is also evidence of crossings with other
European cattle. A herd book was established in 1919.
The Red and White East Flemish
Development in Belgium
The Red and White East Flemish, as the name implies, was developed in the East Flanders region of Belgium.
It resulted from crossing the heterogenous red cattle population of the region with the Meuse-Rhine-Ijssel from
the Netherlands and, to a lesser extent, with Shorthorn blood. Development occurred in the late 19th century,
starting around 1880. A herd book was established in 1900. The breed was nearly destroyed during World War
I. After the war, a rehabilitation plan was organized to collect the scattered remains of the purebred herds.
There was also some introduction of Central and Upland blood following the war. Fortunately, the breed was
preserved through World War II, and subsequently multiplied and improved.
The Red and White East Flemish Today
The Red and White East Flemish has been developed as a dual-purpose milk/meat breed. Average milk
production of registered cows is about 9,600 lbs per lactation, which is less than the Belgian Friesian but
comparable to the other dual-purpose breeds of Belgium. Butterfat content averages 3.55%. In body size, it
tends to be the largest of the Belgian breeds. Mature bulls weigh from 2,600 to as much as 2,900 lbs. Mature
cows weigh from 1,500 to 1,650 lbs. The breed is moderate in frame size, with a frame score of about 6, but
massive in body depth and thickness. In color, the breed is predominantly white with some red on the sides of
the head and neck, and a few small spots on the body. It accounts for about 15% of Belgium's cattle
population.
The Belgian Red (Red West Flemish)
Development in Belgium
The Belgian Red was developed in the West Flanders region of northwest Belgium. During World War I, the
pastures of West Flanders were turned into battlefields which nearly eradicated the cattle population of the
region. The two main native breeds, the Cassel and the Veurne-Ambacht, were among those whose numbers
were decimated by the fighting. After the war, the breeders of the Cassel formed a new breed known as the
Belgian Red. The Veurne-Ambacht had been crossed with the British Shorthorn, whereas the Cassel contained
no Shorthorn blood.
The Belgian Red herd book was established in 1919. Since 1920, selection has been for a solid red color, with
only minor white markings on the underline permitted.
The Belgian Red Today
The Belgian Red is a dual-purpose breed, but with considerable emphasis on meat-producing ability. Recorded
cows average about 9,000 lbs of milk with 3.7% fat. Mature bulls weigh about 2,500 lbs; mature cows
approximately 1,450 lbs. The Belgian Red has an average frame score of about 7. It accounts for about 10% of
the cattle population in Belgium.
NETHERLANDS
The Friesian
Development in the Netherlands
Two herd book societies in the Netherlands register Friesian cattle: 1) the Netherlands Cattle Herd book
Society, founded in 1874, which handles the Friesian as well as two other Dutch breeds, the Meuse-Rhine-Ijssel
(M.R.I.) and the Groningen; and 2) the Friesland Cattle Herd book Society, founded in 1879, which registers
Friesian cattle only in the province of Friesland. All cattle in the other ten provinces are registered by the
Netherlands Society.
Before the establishment of the two breed societies, both black and white and red and white cattle that had
originated from the same base stock were in the general area. The black and white cattle became a majority,
and herds of the two colors were maintained separately by their owners. With the establishment of the breed
societies, the black and white type rapidly became predominant. Red and White Friesians have been selected
out and propagated in Germany, Denmark, England and the United States but only in relatively small numbers.
Dutch Friesian milk production levels declined during the 1950s when undue emphasis was placed on correct
color pattern. During the 1970s, Holsteins were imported from North America and used to improve milk
production. This resulted in larger framed cattle with more pronounced dairy characteristics.
Introduction to North America
Dutch Friesians were first imported in the mid-1800s by W.W. Chenery of Massachusetts, who continued to
import Friesian cattle from Holland for many years. The first herd book was published in 1872. In the U.S., the
name of the breed became Holstein-Friesian, possibly a corruption of the word Holland. In 1978, the name
was officially shortened to Holstein. Intense selection pressure in North America has led to a much different
type of animal than the original Dutch imports. As noted above, the North American Holstein is larger-framed
and much more refined in its overall type (referred to as dairy character). The North American Holstein also
far exceeds its Dutch ancestor in milk production.
The Friesian Today
The Dutch Friesian was bred for many years as a dual-purpose animal. Today it is a dairy breed with milk
yields highest in the cows of North Holland (11,500 lbs), and slightly lower in the cows of Friesland (10,700
lbs). Fat content averages 4.1%. The Friesian is a medium-sized breed; mature cows weigh 1,400 to 1,500 lbs,
and mature bulls about 2,300 lbs. In body type, it is much heavier-muscled than the North American Holstein.
The Friesian was evaluated in Cycle VI of U.S. MARC's Germ Plasm Evaluation program, along with cattle
sired by Hereford, Angus, Norwegian Red, Swedish Red and White, and Wagyu bulls. Semen from 24
European Friesian bulls was used in the matings with Hereford and Angus cows. Percent of unassisted calvings
was very high at 99.2%. Weaning weight (487 lbs) was significantly lower than all other breeds except Wagyu
(459 lbs). Postweaning avg. daily gain (2.8 lbs), final slaughter weight (1269 lbs), and carcass weight (774 lbs)
were significantly lower than Hereford and Angus, but comparable to the two Scandinavian breeds.
Percent of carcasses grading USDA Choice (52%) did not differ significantly from the Hereford (58%) and
Swedish Red and White (59%), but was significantly lower than Angus (88%), Wagyu (85%), and Norwegian
Red (71%). Percent retail product (62.8%) was significantly higher than Hereford and Angus, and similar to the
other breeds. Cooked steaks from Friesian carcasses were less tender than those from Angus and Wagyu
carcasses, but did not differ significantly from the other breeds.
Age at puberty (341 days) and pregnancy rate (84%) were similar to the two Scandinavian breeds. Percent of
unassisted births from Friesian first-calf heifers (82.1%) was similar to all other breeds evaluated, except for the
Norwegian Red, which was lower at 71.5%.
In summary, the Friesian exceeds all other Continental breeds in milk production, while being moderate in
most other traits.
In 1906, the Netherlands Cattle Herd Book Society recognized the M.R.I. as a distinct breed and began
registering them, along with the other two Dutch breeds, the Friesian and the Groningen.
Introduction to North America
M.R.I. cattle were brought to North America in the 1970s. However, they were preceded by other important
continental breeds such as Charolais, Simmental, Limousin, Chianina, Gelbvieh, Maine-Anjou, Salers, etc.
Consequently, they have not gained a foothold in North America.
The M.R.I. Today
The M.R.I. accounts for about 24% of the national herd in the Netherlands, compared to 74% for the Friesian
and 2% for the Groningen. The M.R.I. is a very rugged, massive-bodied, heavy-muscled breed. It is relatively
short-legged for a large breed. In weight, mature cows average approximately 1,750 lbs, and mature bulls about
2,750 lbs.
The color of the M.R.I. is red patches on white, but the markings are not as sharply defined as those on the
Friesian. Bright reds are preferred over dark reds. The neck and shoulders are usually red; the face, if red,
usually has a white streak from the forehead to the muzzle. Red over much of the rest of the body now
predominates on most individuals.
SWITZERLAND
The Fribourg
Development in Switzerland
The Fribourg originated in three cantons (provinces) in western Switzerland. One of these was the canton of
Fribourg, from which the breed took its name. It is an old black and white breed of cattle that is now extinct.
The Fribourg was developed out of the same base stock from which the Simmental breed originated.
Zoologically, therefore, it was a black and white relative of the Simmental. It bore no relationship to the
Friesian or other black and white lowland cattle of northern Europe.
The Fribourg Today
As noted above, the Fribourg is, for all practical purposes, now extinct. In the 1950s, large numbers of Friesian
cattle were smuggled into Switzerland from Germany to increase the milk production of Fribourg herds. After
that, Holstein semen from North America was used to upgrade the Fribourg to a strictly dairy-type breed. The
last purebred Fribourg bull was slaughtered in 1973.
The original Fribourg was developed as a triple-purpose milk/meat/work breed. It was one of the largest and
heaviest breeds of cattle in Europe. Mature cows weighed from 1,765 to 1,875 lbs, and mature bulls from 2,425
to 2,650 lbs. Frame size of the Fribourg was approximately 7 on a 1 to 9 scale. Average milk yield of recorded
cows was 9,100 lbs containing 3.80% butterfat.
The Hérens (Eringer)
Development in Switzerland
The Hérens is an ancient breed that belongs to the Bos taurus brachyceros type. It derives its name from the
Hérens Valley area in the Alps of southwest Switzerland. The Hérens' origins are unknown, but it is thought to
have existed as early as Roman times. A bronze head of that period closely resembles the typical proportions of
a Hérens bull. The Hérens was developed as a triple-purpose meat/milk/work breed.
The Hérens Today
The Hérens is a relatively small breed. Mature cows weigh from 900 to 1,000 lbs and mature bulls from about
1,300 to 1,400 lbs. Frame size is only 2 on a scale of 1 to 9. Average milk production is about 6,000 lbs per
lactation, with an average butterfat content of 3.87%. Hair color may vary from chestnut to dark red or dark
brown to almost black, often with a lighter stripe down the back.
Hérens herds are turned out to pasture as soon as possible in the spring, usually on land at medium altitude.
But, with the onset of summer, they are sent up to graze in higher mountains that are 1.0 to 1.5 miles in altitude.
They remain on grass as long as possible, even to November, spending nights in the open and often in snowy
conditions. The Hérens is obviously a hardy breed to be able to thrive in such a harsh environment. The herds
have declined since World War II, because there is less farming in the higher mountain regions. In 1970, there
were 25,000 Hérens cattle in Switzerland. By 1980, the number had declined to 5,000 head, accounting for less
than 0.5% of Switzerland's total cattle population.
At one time, the Hérens had been a sporting breed in which fighting the cows was a pastime for festive
occasions. Admission was charged and the proceeds donated to charity. Cows selected for fighting were given
special care. Selection was made for aggressiveness through the choice of daughters from good-performing
cows. Bulls were never used for fighting.
The Simmental
Development in Europe
The Simmental originated in central western Switzerland in the Simme River Valley, from which it takes its
name. The German word "tal" means "valley" in English; hence, the name Simmental. This alpine region is characterized
by good rainfall and productive pastures. During the late Middle Ages, from about 1250 to 1350 A.D., the
cattle in this region were valued for their ability to do work and to produce milk and meat. By the early 1500s,
these Fleckvieh (spotted cattle) could be found in alpine valleys reaching into Italy, Austria, France and
southern Germany. By 1550, the breed had been crossed with native German herds, producing cattle similar in
type to today's red and white Simmental. The French also crossed the Fleckvieh with other breeds and
produced three strains that are collectively known as Pie Rouge (red speckled). The three strains of Pie
Rouge in France are: Abondance, a smaller strain of dairy cattle; Montbeliarde, raised for beef and dairy
purposes; and the Pie Rouge de l'Est (Eastern red and white), also raised for meat and dairy. All of these strains
share an ability to thrive in harsh winters and damp springs. The strain that was exported to help form the
American Simmental was the Pie Rouge de l'Est, which is larger, heavier muscled, and more rugged in its
structure than the other two strains. In its native country, Switzerland, Simmental cows average about 1,775 lbs
in weight and bulls about 2,875 lbs.
The Swiss began maintaining a herd book in 1803. In 1890, the country amalgamated herd book statistics,
probably as a result of the demand for beef products in a rapidly industrializing Europe. By the end of the 19th
century, the Swiss were exporting Simmental cattle to many parts of central and eastern Europe, to Latin
America, and occasionally to the United States. The U.S. Simmentals, however, were blended with other cattle
and lost their identity as a pure breed. The Simmental was a popular breed world-wide because it was so
adaptable and it crossed well with existing native cattle. They were good foragers and were capable of
withstanding harsh environmental conditions.
Since its origins in Switzerland, the Simmental has spread to all six continents. Total numbers are estimated at
40 to 60 million worldwide. The worldwide spread was gradual until the late 1960s. Records show that a few
animals were exported to Italy as early as the 1400s. During the late 19th century, the Simmental was
distributed through most of Eastern Europe, the Balkans, and Russia, ultimately reaching South Africa in 1895.
Guatemala imported the first Simmentals into the Western Hemisphere in 1897, followed by Brazil in 1918 and
Argentina in 1922. Among all breeds worldwide, the Simmental is second in numbers only to the Brahman.
Establishment in America
There are reports from a variety of sources indicating Simmental cattle arrived in the U.S. before 1900. They
were reported as early as 1887 in Illinois; in 1895 in New Jersey; and in New York and New Mexico around the
period of 1916-1920. As noted previously, these imports apparently lost their identity by being blended in with
other breeds.
The Simmental made its most recent appearance in North America when a Canadian, Travers Smith, imported
the famous bull, Parisien, from France in 1967. Semen was introduced into the U.S. that same year, with the
first half-blood Simmental calf born at Geyser, Montana in February, 1968. The American Simmental
Association (ASA) was founded in October, 1968. The first purebred bull was imported into the U.S. in 1971.
Then, in 1975, the World Simmental Federation (WSF) was formed, of which the U.S. is a member. It is also
noteworthy that in 1971, the association published the first beef breed sire summary, and more recently
developed the first multi-breed EPDs in the beef industry.
The Simmental Today
Today, Simmentals in the U.S. and Canada have been selected for more moderate size than was the case in the
early days of the breed. In the U.S., most Simmentals are no longer spotted. Instead, they are largely solid red
or black, but often with a white face. Spotted Simmental-cross cattle were being discounted by packer buyers
during the late 1970s and into the 1980s because they were thought to be mongrelized cattle of uncertain
breeding. Large birth weights accompanied by calving difficulty, overly-large frame size, and late maturity
were also problems that were encountered in the early days of the breed in North America. These are seldom
problems for the breed today.
A comparison of data from U.S. MARC shows that assisted births from mature cows have declined from 10.8%
to 2.3% over the past 25-30 years. Furthermore, Simmental-sired first-calf heifers in MARC's most recent GPE
evaluation (Cycle VII) had the lowest percentage (14%) of assisted births among the four Continental and three
British breeds evaluated. This has been accomplished without sacrificing growth. Among the four Continental
breeds (Charolais, Gelbvieh, Limousin, and Simmental), the Simmental ranked first in weaning wt., first in
postweaning gain, and first in final slaughter wt. at 445 days of age. In percentage grading USDA Choice
(65.7%), Simmental-sired steers ranked first among continentals and were comparable to Herefords for this
trait. Fat thickness, ribeye area, and yield grade were similar to the other continentals. Tenderness of steaks
from Simmental-sired steers, based on shear force as well as sensory evaluation, like the other continentals, was
slightly but not significantly lower than tenderness of the British breeds.
Among the nine breeds evaluated for feed efficiency (live weight gain/unit metabolizable energy consumed) in
Cycle VIII of U.S. MARC's GPE program, Simmental ranked first when fed to a constant endpoint of 465 lbs of
retail product.
In calendar year 2007, the American Simmental Association registered 51,166 cattle, which ranked fourth
among all beef breeds and second among continental breeds.
GERMANY
The Braunvieh
Development in Europe
The Braunvieh of Germany and Austria was developed from the Brown Swiss cattle of Switzerland. These all
are essentially the same breed, with only small differences between them. Literally translated, Braunvieh means
"brown animal."
Skeletal remains found in Switzerland suggest that the Brown Swiss is one of the oldest breeds of cattle in the
world. They were developed from crosses between the Bos taurus primigenius (Aurochs) and the later Bos
taurus brachyceros. These crosses occurred in the Neolithic period, and by 1800 B.C., a small brown beast
could be found in the lake villages. During the last 1,000 years, records indicate that these short-horned brown
animals had been in existence and that they were kept for meat and work. They continued in this primitive form
until the 1800s, when improved management systems opened the door for breed improvement and high
productivity. Once this became possible, milk production potential was exploited and a triple-purpose animal
evolved. The advent of mechanization resulted in their use as a dual-purpose meat/milk breed. Modern Brown
Swiss are used primarily for milk.
Today, nearly all Brown Swiss cattle are found in the more mountainous eastern half of Switzerland, whereas
Simmental cattle populate the western half. Brown Swiss account for about 47% of the total cattle population
in Switzerland, while Simmentals account for about 50%. Brown Swiss milk production averages 8,800 lbs, but
a few individuals exceed 22,000 lbs. Cows at lower altitudes generally produce more milk than those at higher
altitudes. Mature cows range in weight from 1,325 to 1,535 lbs and mature bulls from 2,200 to 2,600 lbs.
Introduction to America
The first Brown Swiss cattle brought to the United States consisted of a bull and seven heifers in 1869. Only 15
more bulls and 111 cows were brought over before an outbreak of foot-and-mouth disease prompted the USDA
to ban importations of European cattle in 1880. From these animals and only seven others which were imported
via Mexico between 1908 and 1931, have sprung the large numbers of Brown Swiss in the U.S. today. The
American Brown Swiss is no longer a dual-purpose breed. Rather, it is considered strictly a dairy breed.
In the late 1960s and early 1970s, dual-purpose Braunvieh cattle, along with other Continental breeds, were
exported to North America. These cattle were much stouter and more muscular than the American Brown
Swiss.
The Braunvieh Today
The Braunvieh has been shown to be adaptable to a wide variety of environments. Today, they can be found in
over 60 countries from the Tropics to the Arctic Circle. The Braunvieh was evaluated in Cycle II of the Germ
Plasm Evaluation program at U.S. MARC. Percent unassisted births of Braunvieh-sired calves (94.5%) was
among the highest of all 27 breeds evaluated in GPE. The same was true for percent survival to weaning (95.1%).
Weight at weaning was similar to Simmental, Gelbvieh, and Maine-Anjou sired calves. Postweaning avg. daily
gain, however, was lower than these breeds, but higher than Limousin, Tarentaise, and Piedmontese. Percent
grading USDA Choice (59.4%) was among the highest of the Continental breeds. Percent retail product
(69.5%) was similar to most other Continental breeds, except for the Piedmontese (73.4%).
Age at puberty of Braunvieh-sired heifers occurred earlier than Hereford x Angus crosses (332 vs. 357 days of
age) as well as most other Continentals. Pregnancy rate of heifers (93.0%) was among the highest of all breeds.
The same was true for percent unassisted calvings of Braunvieh-sired cows (92%) and weaning wt. of their
calves (534 lbs). It is obvious from these results that the maternal ability of Braunvieh-sired females is
exceptional.
In 2007-08, the Braunvieh Association of America registered 3,500 cattle.
The Gelbvieh
Development in Germany
The Gelbvieh or German Yellow breed was developed in the Franconian area of northern Bavaria in southern
Germany. They were derived from an amalgamation in the early 1900s of four breeds of triple-purpose yellow
cattle: Yellow Franconian, Limburg, Lahn and Glan-Donnersberg. From around 1750, the indigenous cattle of
these areas were crossed with Simmental and Brown Swiss bulls in order to improve them. Also, the Shorthorn
breed was introduced during the late 1800s, which had a favorable influence on fattening and carcass quality
traits.
Purebred herds and standard breeding stations began to be developed around 1870. This beginning of organized
breeding led to the formation in 1897 of the Breeders' Association for Yellow Franconian Cattle, Division of
Upper and Middle Franconia, in Nuremberg. Then, in 1899, the Breeders' Association for Yellow Franconian
Cattle, Division of Lower Franconia, was founded in Würzburg. In 1905, the two associations were joined
together into one organization headquartered in Würzburg.
The association's breeding goal was to improve those characteristics that would contribute to the development
of a triple-purpose meat/milk/work breed. They also aspired to develop a breed that was solid colored rather
than spotted. Gelbvieh cattle today range from cream to a reddish yellow, with lighter rings around the muzzle
and eyes. In Germany today, two types of Gelbvieh are bred: a dual-purpose meat/milk type and a strictly beef
type. German Gelbvieh milk cows average about 8,200 lbs in a 300-day lactation. Mature Gelbvieh bulls
weigh about 2,700 lbs and mature cows approximately 1,650 lbs. Gelbvieh cattle are moderate in frame size,
averaging frame score 6.0 to 6.5. The weight of the Gelbvieh is a result of its heavy muscling.
Establishment in America
Leness Hall, general manager of Carnation Breeding Service, was visiting Germany in 1969 looking for new
sources of Fleckvieh semen for the Simmental breed. While looking through the sire line-up at a large A.I.
stud, he saw a very impressive red bull named Hass, who happened to be a Gelbvieh. Hall investigated the
breed further in 1970 and decided to add Gelbvieh bulls to Carnation's semen imports. The first semen arrived
from Germany in 1971, the same year the American Gelbvieh Association was organized. Four Gelbvieh A.I.
sires were represented in this initial importation. A total of 43,000 ampules of semen came in during 1971-72.
The sires were Ufa, Uni, Universaal and Upat.
The Pinzgauer
Development in Germany
The Pinzgauer is one of the oldest breeds in the world. The earliest reports state that around 600 A.D.,
herdsmen in the alpine region of Europe developed cattle that would thrive on small, rocky pastures. They
needed a type of cattle that could withstand harsh environmental conditions and serve as triple-purpose animals
(meat/milk/draft). The Pinzgauer is indigenous to the alpine regions of Bavaria in southern Germany. Some
authorities consider it has resulted from crossings between Celtic and spotted cattle (Fleckvieh), while others
believe it has developed from the Spotted Mountain Cattle (Bergscheck) in southern Germany. Herd books
trace back to the 1600s. The breed spread throughout Europe in the 1800s, was exported to South Africa, and
was introduced to North America in 1972.
In Germany, except on some lowland farms, Pinzgauer cattle are kept on pastures in the summer but are fed
inside during the winter. They are dual-purpose cattle, raised for both milk and meat. The base color of the
Pinzgauer is chestnut brown, with a color range from light to dark brown, and a clearly defined white stripe of
variable width along the topline. This white color continues down the thighs, along the underline to the brisket.
In Germany, Pinzgauer bulls weigh an average of about 2,000 lbs; cows range from 1,100 to 1,300 lbs.
Establishment in America
As noted previously, the first Pinzgauer cattle were imported into North America in 1972, one of the last
Continental breeds to be introduced. It has been observed that the breed is adaptable to both temperate and subtropical regions of the U.S.
The Pinzgauer was evaluated in Cycle III of U.S. MARC's Germ Plasm Evaluation program. Pinzgauer-sired
calves were similar to Hereford and Angus crosses in percent unassisted births, percent survival to weaning, and
200-day weaning wt. In postweaning avg. daily gain and final slaughter wt., they ranked below the larger
Continentals, but were comparable to the smaller Continentals (Limousin, Tarentaise, and Piedmontese). The
same was true for mature wt. Like other Continentals, they had leaner carcasses and a higher percent of retail
product than the British breeds. Interestingly, marbling score was comparable to Hereford-Angus crosses and
higher than all other Continental breeds. Furthermore, shear force tenderness was among the best of all breeds.
Age at puberty for Pinzgauer-sired heifers was the third youngest (343 days) of the 27 breeds evaluated in the
total GPE program. Pregnancy rate (93.9%) ranked among the highest of all breeds. Obviously, fertility is a
major strength of the breed. Percent unassisted births (87%) for Pinzgauer-sired females was similar to the
British and smaller Continental breeds.
In 2007-08, the American Pinzgauer Association registered 664 cattle. The Pinzgauer is a breed that is not
extreme in any one trait and is moderate in nearly all traits. This is an admirable characteristic. Therefore, it is
difficult to understand why the Pinzgauer has not received greater use in North America. A possible
explanation is the fact that it was introduced several years later than other Continental breeds.
The Rotbunte (German Red and White)
Development in Germany
The origins of the various cattle types, which evolved in West Germany and which were involved in the
development of the Rotbunte, are lost in antiquity. During the late 1700s and well into the 1800s, the native
stock were differentiated into eight separate breeds, each of which was located in a different region. Each
breed had its own herd book. In 1934, Meuse-Rhine-Ijssel blood was introduced from the Netherlands, and
the eight regional herd books were combined into one.
The Rotbunte Today
The Rotbunte is located primarily in the Westphalia area of west-central Germany and in the northwest lowland
region of Schleswig-Holstein. It is the third leading breed in Germany after the Friesian and the Fleckvieh
(Simmental), respectively. It accounts for about 14% of the cattle population in West Germany.
The Rotbunte was developed as a dual-purpose milk/meat breed. Average milk production of registered cows is
about 10,300 lbs, slightly less than that of the German Friesian and the Meuse-Rhine-Ijssel. Butterfat content
averages about 3.75%.
The Rotbunte is heavier-muscled than the German Friesian, but not as muscular as the Fleckvieh or Gelbvieh.
Mature cows weigh from 1,400 to 1,600 lbs, and mature bulls from 2,200 to 2,400 lbs. In color, the red area covers
a larger area than the white, which tends to be predominately in the lower part of the body. The skin under the
red hair is pigmented.
The Small Spotted Highland (Vorderwald and Hinterwald)
Development in Germany
These small spotted cattle have a very ancient origin. They developed from the crossing of tiny Celtic stock
with a larger northern type brought in by the Alemanni during early migrations. They are indigenous to the
Black Forest region in the Baden-Wurtemburg province of southwest Germany. Originally raised for work
purposes, they are now used as milk/meat animals. There are two types, the Vorderwald and the Hinterwald.
The Vorderwald is the larger of the two. The Hinterwald is the smallest breed of cattle in central Europe. The
Hinterwald Cattle Breeders' Association was founded in 1901 and was amalgamated with the Vorderwald
Association in 1936 into the Association of Baden Cattle Breeders. In 1988, a breed society was also
established in Switzerland.
The Vorderwald and the Hinterwald Today
In color, the breed is spotted and speckled. It ranges from a pale yellow to red on a white background. Mature
Vorderwald cows weigh an average of about 1,200 lbs, and mature bulls about 1,975 lbs. Frame size is about
4.0 on a scale of 1 to 11. Mature Hinterwald cows weigh an average of 775 lbs, and mature bulls about 1,200
lbs. Frame size of the Hinterwald is a tiny 1 to 2. Recorded Vorderwald cows produce an average of 7,700 lbs
of milk per lactation with 4.1% butterfat. Recorded Hinterwald cows produce an average of 5,600 lbs with
4.2% butterfat.
Starting in 1936, the cattle have been extensively crossbred so that now few pure animals of either breed
remain. The Vorderwald has largely been replaced by the Simmental and other lowland breeds and currently
represents less than 0.7% of Germany's cattle population, while the Hinterwald accounts for less than 0.1% of the
national herd. The Swiss Hinterwald Breeding Society now handles all of the breeds activities, including
maintenance of the herd book.
ITALY
The Chianina
Development in Italy
The Chianina is the oldest breed of cattle in Italy and may well be one of the oldest breeds in the world. It
served as a sacrificial beast when Rome was at its height in ancient times. They were praised by the poets,
Vergil and Columella, and were the model for Roman sculptures of cattle.
The breed originated primarily in the west central part of Italy and was raised in a wide variety of
environmental conditions. Because of this, the cattle vary in size and type from region to region. The largest
variety, the Val di Chiana, from the provinces of Siena and Arezzo, has provided most of the foundation stock
that has been used in the U.S. and Canada. The name comes from the Chiana Valley in the province of
Tuscany. These cattle are the largest in the world. Mature bulls weigh 2,500 to 3,500 lbs and are nearly as tall
at the withers as a small man, averaging about 5.6 feet. Mature cows weigh 1,600 to 2,200 lbs and are 5.0 to 5.5
ft tall. Chianina oxen are known to attain wither heights of 6.25 ft. The world record for cattle size was set in
1955 by the Chianina bull, Donetto, who weighed 3,840 lbs.
There is reason to believe that the Chianina, like other Italian white breeds, has been influenced by the Zebu,
since there is a common blood-group factor, and because the breed shows excellent resistance to heat. The
Chianina has been used to improve other Italian breeds such as the Marchigiana, Romagnola, and Maremma.
Until recent times, the Chianina was used primarily as a draft animal in its native country. With the advent of
mechanized farming, selection emphasis has been placed on the ability to produce beef. The earlier selection
for draft animals had produced a very large mature size breed with heavy muscling. Recent selection for beef
production has maintained the size of the breed while improving early growth. Chianina cows have small
udders and are not noted for milk production. This is not surprising as they were originally valued for draft and
later for meat.
Introduction to America
U.S. servicemen stationed in Italy during World War II discovered Chianina cattle. In 1971, Chianina genetics
were introduced to the U.S. when the first semen was imported from Italy. Diaceto was the first Italian
fullblood to be collected. The first Chianina calf born in the U.S. was a black half-blood Chianina X
Angus/Holstein bull calf born on Jan. 31, 1972.
For the first few years, Chianina genetics were available only through semen. A private quarantine station was
established in Italy where semen was collected and shipped to U.S. breeders. Another avenue for securing
fullblood Chianina semen was from Canadian breeders.
The Chianina Today
Chianina half-bloods were evaluated in Cycle II of the Germ Plasm Evaluation program at U.S. MARC.
Percent of unassisted births of Chianina-sired calves (88.4%) was similar to that of other large Continental
breeds. The same was true for percent survival to weaning. Weight at weaning was similar to the other large
Continentals, except for Charolais which was significantly heavier. Postweaning average daily gain (2.63
lb/day) was greater than the Limousin and Piedmontese and similar to the Gelbvieh and Braunvieh. Percent
grading USDA Choice (27.5%) was the lowest of all 27 breeds evaluated in GPE. Except for the Piedmontese,
the Chianina was the leanest (0.32 in backfat) of all 27 breeds.
Age at puberty of Chianina-sired heifers (400 days) was later than that of other European breeds, but earlier
than Brahman-influenced breeds. Pregnancy rate (84.0%) was intermediate among all breeds evaluated.
Weaning percentage of half-blood cows (86%) was among the highest of the Continental breeds. Weaning wt.
of their calves (523 lbs) was also among the heaviest of the Continentals.
Today, the Chianina is a totally different breed than it was when first imported from Italy. The breed needed to
be Americanized, so to speak, because fullblood Chianina cattle did not fit market specifications for the North
American beef cattle industry. Today, there are relatively few fullblood Chianinas in the U.S. Instead, most
registered Chianina cattle today consist of a relatively low percentage of fullblood breeding. The remainder is
largely Angus. These hybrid (composite) cattle contain from 1/8 to Chianina blood. The average is
estimated to be about Chianina.
These Chianina/Angus hybrid cattle are referred to as Chiangus. The Angus and Chianina breeds are quite
complementary to one another, because of the marbling ability of the Angus and the lean retail yield of the
Chianina. This is verified by research at U.S. MARC. Furthermore, data on thousands of cattle harvested by
Swift and Co. reveals that a blend of Continental and British breeding is near-optimum in feed conversion
and cost of gain.
Today, the American Chianina is moderate in frame size and black in color, in contrast to its extremely large-framed, porcelain-white Italian ancestor. In 2007-08, the American Chianina Association registered 9,270
cattle.
The Marchigiana
Development in Italy
After the fall of the Roman Empire in the 5th century, barbarians settled in the hilly area of Ancona, along the
Adriatic coast in the province of Marche. They brought with them the gray-white cattle from which the
Marchigiana is believed to be descended. The cattle spread to surrounding provinces in southern Italy. The
area is characterized by rough terrain and the available feed is often less than ideal.
Improvement of the breed began in 1850 with the infusion of Chianina, Romagnola, and some Apulian blood,
but since the herd book was established in 1933, pure breeding has been practiced.
Introduction to America
The Marchigiana was introduced to North America a bit later than the Chianina. It did not attain the popularity
of the Chianina, because at that time, the U.S. was placing a great deal of emphasis on size. Consequently, the
larger breed won out. Had it arrived in a later era, the outcome might have been different.
The Marchigiana Today
Today, the Marchigiana can be found throughout much of the southern half of Italy. Among the 26 breeds of
cattle in Italy, it is one of the more important, accounting for 8% of the country's total cattle population.
The Marchigiana was once classed as a dual-purpose meat/draft breed. Except for size, it very closely
resembles the Chianina in color and general conformation. It is a short-haired breed that varies from light gray
to almost white. The skin is pigmented and the tongue, muzzle, and external orifices are black. The tail switch
is dark, and they are usually dark around the eyes. Their musculation is very similar to the Chianina. For all
practical purposes, they are a smaller version of the Chianina. Mature bulls average about 2,550 lbs in weight
and 63 inches in height at the withers. Mature cows average about 1,450 lbs in weight and 57 inches in height.
In the U.S., Marchigiana cattle are registered by the American International Marchigiana Society.
The Maremma (Maremmana)
Development in Italy
The origins of this ancient breed are lost in antiquity, and scholars debate whether its ancestors were derived
from Podolic stock that immigrated from the Asiatic steppes or whether it developed in Italy from pre-Roman
times. The Maremma has for thousands of years been found in the lowlands and on the hilly areas in
the regions of Maremma, Tuscany, and Latium in west central Italy.
Until the 20th century, Maremma cattle ran semi-wild, driven by mounted herdsmen called Vaccari. These
rough-shod cattle breeders, often men with criminal backgrounds, used snares to capture Maremma bulls, which
were then worn down in a grueling struggle. The more docile bulls were castrated and used for work, while
bulls that proved difficult to tame were pitted against Corsican dogs in bull fights.
The Italian Ministry of Agriculture established breed standards of achievement and opened a herd book in 1935.
The Maremma Today
Today, the Maremma is a relatively minor breed, accounting for only about one percent of Italy's cattle
population. Originally used as a draft animal, the breed is now used for meat production. Mature cows average
about 1,325 lbs in weight; mature bulls approximately 1,925 lbs. Frame size is relatively large, ranging from 7
to 8 on a scale of 1 to 10. Hair color is gray in varying shades, from light gray to nearly black, and the skin is
pigmented. The horns of the Maremma are very long and distinctively lyre-shaped.
The Piedmontese
Development in Italy
Twenty-five thousand years ago, a migration of Zebu (Brahman) cattle from Pakistan made its way into
northwestern Italy. Blocked by the Alps Mountains from moving further, these cattle stayed and intermingled
with the local native cattle, the Aurochs. This blend of Bos taurus (Aurochs) and Bos indicus (Brahman)
evolved in this harsh terrain over thousands of years of natural selection to become the Piedmontese breed.
There are several breeds from Italy which also show the influence of this Brahman migration; these are the so-called Italian white breeds. All Italian white breeds, Piedmontese included, are born fawn or tan and change
to the gray-white color, with black skin pigmentation.
In 1886, the appearance of double muscling in Piedmontese cattle attracted the attention of breeders, who had
the foresight to recognize the potential of this trait. The first herd book was established in 1887. Systematic
improvement of the Piedmontese began around 1920, and a new herd book was set up by the Breeders'
Association in 1958.
The Piedmontese was developed as a triple-purpose meat/milk/work breed. Today, however, it is used
primarily for beef production, but some cows are still milked. The milk is very rich in solids and is used for
specialty cheese production. The majority of Piedmontese cattle in Italy are of the double-muscled type which
produces up to 10% greater retail meat yield than conventional cattle. Mature cows weigh an average of 1,250
lbs, and mature bulls about 1,875 lbs.
Introduction to America
The first Piedmontese in North America arrived in 1979 through an importation made from Italy by the PBL
Cooperative of Saskatchewan, Canada. Additional importations throughout the 1980s added to the Piedmontese
lines in North America. By the 1990s, importation of additional genetic material (semen and embryos) had
dramatically increased, and there is now a wealth of bloodlines from which to select.
The Piedmontese Today
The Piedmontese was evaluated in Cycle IV of the Germ Plasm Evaluation program at U.S. MARC. Birth wt.
(80.2 lbs), unassisted births (92.5%), and survival rate to weaning (91.1%) of half-blood Piedmontese calves
were similar to Hereford x Angus cross calves. The same was true for 200-day weaning wt. Postweaning gain,
however, was somewhat lower (2.49 vs. 2.74 lb/day) and was comparable to the smaller Continental breeds.
Dressing percentage (62.7%) was the highest of all Continental and British breeds. Fat thickness (0.31 in.) was
the lowest of all 27 breeds evaluated in GPE and ribeye area (13.19 sq in.) was the largest of all breeds. Percent
retail product (73.4%) was the highest of all breeds and wt. of retail product (485 lbs) was second to the
Charolais. Percent of carcasses grading USDA choice (41.7%) was the lowest of all breeds, except for
Brahman (39.7%). In spite of low quality grade, tenderness as measured by shear force did not differ
significantly from the average of Herefords and Angus. Research has shown that double-muscled cattle, such as
the Piedmontese, have a mutation in a gene known as myostatin. This mutation is related to improved
tenderness of the muscle.
Age of half-blood Piedmontese heifers at puberty (348 days) was among the youngest of all breeds. Pregnancy
rate of heifers (95.5%) was second highest of all 27 breeds. Percentages of calves born (93%) and weaned
(84%) for Piedmontese cows were higher than for Hereford x Angus cows (88% and 79%, respectively).
However, percent of unassisted calvings were slightly lower (84% vs. 87%). Calf weaning wt. (498 lbs) was
similar to Hereford x Angus cows (504 lbs).
In fiscal 2007-08, the North American Piedmontese Association registered 1,768 cattle.
The Pisana (Pisa)
Development in Italy
The Pisana originated in the provinces of Pisa and Lucca in western Italy. It was developed in the mid-1800s
from crossing of the Brown Swiss and the Chianina. The Pisana was developed as a dual-purpose meat/work
breed.
The Pisana Today
The Pisana declined in numbers to a point that by 1980 only 50 purebred animals remained. However, Tuscany
established a program to preserve the breed.
The Pisana ranges in color from a light chestnut to a dark brown or nearly black. It has a reddish-colored line
along the back. In size, the Pisana is nearly as large as the Chianina. Mature cows weigh 1,700 to 1,800 lbs.
Mature bulls range from 2,700 to 3,300 lbs. Average frame size is about 8.5 on a scale of 1 to 11.
The Red and White Valdostana (Aosta Red Spotted)
Development in Italy
The ancestry of the Valdostana goes back to the 5th century. They are descendants of red and white cattle
brought to the Aosta valley by the Alemanni. The breed is found throughout the northwest of Italy. Smaller
numbers are also present in central and southern Italy. The Valdostana is primarily a mountain breed, adapted
to grazing sparse pastures at high altitudes (1.25 to 1.50 miles).
The Valdostana Today
The Valdostana is a dual-purpose meat/milk breed. Average milk production of cows in the mountainous
regions is only about 5,100 lbs. However, cows raised on better nutrition on the plains produce as much as
9,000 lbs per lactation. In body size, the Valdostana is considerably smaller than its Simmental relative.
Depending upon the nutritional environment, mature cows range in weight from 880 to 1,270 lbs, and bulls
from 1,430 to 1,875 lbs. Frame size of the Valdostana is about 4.5 on a scale of 1 to 9. It is a fairly muscular
animal.
Hair color ranges from red to yellowish red or violet. The head is white and the ears are red. The top of the
neck is white as are the abdomen, the lower part of the legs, and the tail brush. Additionally, white patches of
varying size are found on the body.
Like so many other local breeds in Europe, Valdostana numbers have declined. They account for less than
1.5% of Italy's total cattle population. In addition to the Red and White Valdostana, there is also a Black and
White variety of the breed; however, very few of them are in existence.
The Romagnola
Development in Italy
The Romagnola is believed to be descended from a blending of the Bos primigenius podolicus, a wild ox which
lived on the Italian peninsula, and from the Bos primigenius namadicus, the distant ancestor of the Zebu. The
Romagnola, therefore, combines the characteristics of both subspecies of the Aurochs, which were the forbears
of the modern Bos taurus and Bos indicus cattle. These primitive cattle gave rise to several breeds having
similar characteristics throughout Italy, which included the Chianina and Marchigiana as well as the
Romagnola.
The Romagnola originated in three provinces in northeastern Italy. This region was known as Romagna, from
which the breed acquired its name. Since its origin, it has spread to other provinces in the northeast of the
country. The breed was initially developed for use as a draft animal.
Improvement of the Romagnola started at the beginning of the 19th century, with greater selection for beef
production rather than draft qualities. The man largely responsible for this change in direction was Leopoldo
Tosi, who developed the first herd of selectively bred Romagnola cattle in the mid-1800s. This became the
standard for the entire breed. Such great progress was made that by the year 1900, the Romagnola tied for first
prize with the Hereford as best breed at the Paris International Agricultural Fair.
Introduction to America
The Romagnola was introduced to North America in the early 1970s, when many other breeds were imported
from the Continent of Europe. However, the breed has not achieved the popularity of its sister breed, the
Chianina.
The Romagnola Today
The Romagnola is one of the largest breeds of beef cattle. Mature bulls average about 2,750 lbs, and mature
cows approximately 1,650 lbs. It is a heavy-muscled breed that is heavier-boned, shorter-legged, and deeper-bodied than the Chianina. Compared to the Chianina and Marchigiana, it tends to have more loose hide
between its front legs and along its underline. Like the Chianina and Marchigiana, the skin is black pigmented
and the haircoat is white. This coloration is an adaptive response to the hot climate of Italy.
SWEDEN
The Swedish Red and White
Development in Sweden
Native cattle, which were mostly a solid brownish red, existed in central and southern Sweden until the
beginning of the 19th century. These animals were then mixed in varying degrees with Ayrshire cattle imported
from Britain in large numbers between 1800 and 1870. To a lesser extent, they were also mixed with Milking
Shorthorn cattle imported from Britain. Ayrshire herds were also maintained in a pure state both during and
after the time of these importations.
In 1928, the Red and White Cattle and the pure Ayrshire cattle were combined in one herd book and named the
Swedish Red and White breed. Later, an Ayrshire herd book was established and now separate herd books are
maintained. Swedish Red and Whites make up nearly 60% of the Swedish national herd. In recent years,
semen from North American Holstein bulls has been used to increase milk production in the breed.
Introduction to America
Semen from Swedish Red and White bulls has been introduced to North America, but no live cattle have been
imported from Sweden.
The Swedish Red and White Today
Color of the Swedish Red and White is predominantly brownish red, but may vary from a tan shade to medium
dark-red. White spots are commonly found on the underline, legs, and tail. The animals are of a dual-purpose
type, more like the Milking Shorthorn than the Ayrshire or Holstein. Mature cows weigh from 1,100 to 1,350
lbs, mature bulls from 1,700 to 2,200 lbs. Average milk production from recorded cows is 10,000 to 11,000 lbs
with 4.1% fat content. Unrecorded cows produce somewhat less.
The Swedish Red and White was evaluated in Cycle VI of U.S. MARC's Germ Plasm Evaluation program
along with five other breeds (Hereford, Angus, Norwegian Red, Friesian, and Wagyu). Calves sired by
Swedish Red and White bulls had a very high unassisted calving rate (99.1%). Calf weaning weight was
slightly lower than the average of Hereford- and Angus-sired calves (497 vs. 504 lbs), but higher than the
Friesian and Wagyu (487 and 457 lbs respectively). Postweaning avg. daily gain (2.89 lbs/day), final slaughter
weight (1,281 lbs), and carcass weight (777 lbs) were significantly lower than the two British breeds, but similar
to the Norwegian Red and Friesian.
Percent of carcasses grading USDA Choice (59%) was similar to the Hereford (58%) and Friesian (52%), but
markedly lower than the Angus (88%) and Wagyu (85%). Fat thickness (0.31 in.) was significantly lower and
percent retail product (62.8%) significantly higher than the British breeds. Cooked steaks from Swedish Red
and White carcasses were significantly less tender than Angus and Wagyu, but similar to the other breeds.
Percent of Swedish Red and White heifers expressing puberty (94%) was significantly higher than Hereford- and Wagyu-sired heifers (78 and 80%, respectively), but similar to the other three breeds evaluated. Percent of
unassisted births (85.2%) for first-calf heifers was the highest of all breeds. Average weaning weight of their
calves (511 lbs) was comparable to the Norwegian Red (509 lbs), and significantly higher than the other breeds.
In summary, the Swedish Red and White is a typical dual-purpose breed, somewhat similar to the British dual-purpose breeds, Milking Shorthorn and Red Poll.
NORWAY
The Norwegian Red
Development in Norway
The Norwegian Red was formed by the amalgamation of a number of breeds. This movement was initiated in
1935 through government sponsorship. The first three breeds used to develop the Norwegian Red population
were three native breeds: the Norwegian Red and White, the Red Trønder, and the Red Polled Eastland. In
1963, the native Døle breed was absorbed into the breed, and in 1968 the South and West Norwegian breeds
were added. Other breeds that contributed to the gene pool included Swedish Red and White, Finnish Ayrshire,
and Friesian. By far the greatest contribution was made by the Swedish Red and White.
In 1961, approximately 60% of the cattle in Norway were of the Norwegian Red breed. By 1975, 98% of the
Norwegian national herd consisted of the Norwegian Red. In Norway, the breed is used primarily for dairy
purposes. Milk yields average about 12,800 lbs per lactation with 4.2% fat. In size, mature cows weigh about
1,100 lbs, mature bulls about 1,975 lbs. Coat color is red or red with white markings.
Introduction to America
A few Norwegian Reds were brought to North America in the 1970s. However, they were not widely accepted
by American producers.
The Norwegian Red Today
The Norwegian Red was evaluated in Cycle VI of U.S. MARC's Germ Plasm Evaluation program along with
five other breeds (Hereford, Angus, Swedish Red and White, Friesian, and Wagyu).
There were virtually no differences between the Norwegian Red and the Swedish Red and White in any traits:
growth, reproduction, or carcass. This is not surprising because, as U.S. MARC researchers noted, they are, "for
all practical purposes, the same breed." For a considerable period of time, the two breed associations have had
an open herd policy with one another.
SPAIN AND PORTUGAL
The Asturian
Development in Spain
There are two types of cattle found in the province of Asturias in northwestern Spain, the Asturian Mountain
and the Asturian Valley. The mountain or Casina variety is located in the eastern mountains of the province.
The valley or Carreñana variety is located on the western plains of the province. The Asturian is a dual-purpose meat/milk breed.
Asturian Mountain
The mountain variety belongs to the Cantabric branch of very ancient origins and was perhaps somewhat related
to the Asturian Valley breed. For centuries, Asturian Mountain breeders selected for higher milk yields for the
purpose of cheese production. Both varieties are of a single red color which varies from dark red to clear
chestnut. The mountain variety tends to be darker in color than the valley. In both varieties, the bull's coat is
usually darker than the female's, especially at the back of the head, the neck, and the dewlap. In these areas, it is
nearly black. The Asturian Mountain is about 30% lighter in body weight than the Asturian Valley. However,
in recent years, the two varieties are coming closer together because mountain cows are being crossed with
valley bulls. In fact, about 95% of Asturian cattle are now of the Valley variety. An Asturian herd book was
established in 1933.
Asturian Valley
Mature cows of the valley variety average about 1,325 lbs in weight. Mature bulls average nearly 2,000 lbs.
Frame size is 5.0 to 5.5 on a scale of 1 to 9. In body conformation, the Valley is more desirably proportioned
than the Mountain, because it is trimmer in the throat and dewlap, and heavier-muscled in the hindquarter.
Valley cows are fair milkers, averaging about 6,500 lbs per lactation, with 4% fat.
In recent decades, a considerable amount of Brown Swiss and Friesian blood has been infused into the breed.
This is expected to result in the extinction of the Asturian cattle.
The Brava (Fighting bull or Toro De Lidia)
Development in Spain and Portugal
A subspecies of the Aurochs, Bos taurus Ibericus, is believed to be the ancestor of all the dark colored breeds
found on the Iberian Peninsula, including the Brava or fighting bull. Breeders of the Brava select primarily for
aggressiveness, strength, and vigor. They are found not only in Spain and Portugal, but also in those South and
Central American countries where bull fighting is organized. The Brava is raised in many parts of Spain as well
as the southern provinces of Portugal.
Brava breeders are the elite cattlemen of Spain. The Spanish attitude toward the bull fight is that it combines
both a spectacle and ceremony in a skilled sport, something of a world series game and a symphony in one
afternoon. It has evolved from the 15th century practice of mounted noblemen with lances fighting cattle in an
exhibition put on for the king and enjoyed also by the people. Herd books were established in Spain in 1980,
and in Portugal in 1986.
The Brava Today
Color, which is not an important trait in the selection process, is usually black but may range from gray to
white-patched, brindled, roan, red, or chestnut. Mature bulls range in weight from 1,100 to 1,600 lbs. Heavier
bulls have been developed in parts of northern Spain, which are said to bring a somewhat higher price, because
their size makes a more impressive appearance in the ring. However, they are not as dangerous as the smaller
1,100-lb animals of central Spain. Both types have the same external characteristics.
The males that possess the necessary vigor and aggressiveness are destined for the bull fight ring. The more
timid and docile animals of both sexes are culled out after special tests and eventually sold for beef. The
carcass yields are often surprisingly good and may reach 60%. However, the carcass contains a
disproportionately high percentage of fore-quarter cuts which are of lower quality.
The Galician Blond
Development in Spain
This breed is indigenous to the Galician region in the far northwestern corner of Spain. It was first used as a
draft animal, later as a dual-purpose meat/milk breed, and is now used primarily for beef production. It was the
only breed in this part of Spain until the end of the 19th century. Since that time, the breed has been crossed first
with Shorthorn, Barroso and Brown Swiss, and later with South Devon and Simmental. In 1965, there were
180,000 animals of this breed, but very few were purebred. The 1978 Census, however, registered 400,000,
making it the second most numerous breed in the country next to the Retinta. A herd book was established in
1933.
The Galician Blond Today
As its name implies, the breed is blond in color, ranging from cream to golden red, with darker colors found
among the mountain strains. In body type, the Galician Blond is very muscular. Average weight of mature
cows is about 1,425 lbs. Mature bulls average 2,200 lbs. Frame size is about 5.5 on a scale of 1 to 11. Average
milk production of Galician Blond dairy cows is about 4,500 lbs a year, with some individuals producing as
much as 10,000 lbs.
The Mirandesa (Miranda or Ratinha)
Development in Portugal
The Mirandesa is indigenous to the mountainous regions of Miranda do Douro in central Portugal but has
spread to northeastern Portugal and to southwestern Orense in Spain. It is thought to have originated from the
Iberian stock from which many cattle types, in countries both north and south of the Mediterranean, are
believed to have originated. A herd book was established in 1977.
The Mirandesa Today
The Mirandesa is the most widely distributed native beef breed in Portugal. It accounts for almost one-fourth of
the country's cattle population. It was originally used as a draft animal, but it is now raised primarily for meat
production. It has not been used as a dairy breed.
Mature cows average about 1,200 lbs in weight. Mature bulls average slightly under 2,000 lbs. Frame size
averages about 5.0 on a scale of 1 to 11. Males are brown in color, while females are a lighter brown,
approaching beige. Both sexes have short, very broad heads with large horns that grow outward and bend down
and then forward, with the point upward.
The Retinta
Development in Spain
The Retinta is the most numerous breed in Spain. The highest concentrations are found in southern Spain in the
provinces of Extremadura and West Andalusia, where the breed originated. The Retinta was developed from a
union of the Andalusian Red, Extremadura Red, and Andalusian Blond breeds of the region. A herd book was
established in 1933.
The Retinta Today
Retinta means "dark red," which refers to the typical mahogany-red color of the breed. It is solid in color, with
no other markings except for black nose and feet. The horns are lyre-shaped and yellow or greenish yellow
with dark ends. Body size varies greatly with environmental conditions from region to region. Mature cows
range in weight from 850 to 1,300 lbs; mature bulls from 1,450 to 2,200 lbs. Frame size ranges from 5 to 6 on a
scale of 1 to 9. The largest variety is the Tamerone, which is bred in the region of Cadiz in the extreme south
of Spain.
The Retinta was originally developed as a work animal, but is now predominantly a meat breed. It has fair beef
conformation, but is relatively light-muscled in the hindquarters. Retinta cattle are now being crossbred
commercially with Santa Gertrudis, Charolais, and Friesians.
HEAT-TOLERANT BREEDS
The American Brahman
Development in India
The cattle of India are referred to by the names of Zebu or Brahman. The Zebu is said to have had its origin on
the edge of the Indian desert. It is likely that the Zebu descended from an Asiatic form of the Aurochs, Bos
primigenius namadicus. The Zebu is considered to be of a different species (Bos indicus) than cattle of
European origin (Bos taurus). Zebu cattle spread throughout India and Pakistan, and as far west as the Caspian
Sea. They also spread throughout Asia and to the south across that portion of Africa that lies below the Sahara
Desert.
Within the Zebu species, there are more than forty different breeds. The physical differences between the Zebu
and European cattle are striking. All Zebu cattle are characterized by a prominent hump over the top of the
shoulder and neck. The hump is chiefly made up of muscle tissue, so it is not a fat reserve as it is in camels; its
function is unknown. Another distinguishing characteristic is the Zebu's loose skin, which hangs along the
neck in folds that form a large dewlap. The function of this characteristic is said to be that the loose skin
provides more surface area for radiating the Zebus body heat back into the atmosphere, thereby helping the
animal maintain a constant internal temperature in an intensely hot climate. Other notable features include: a
long head; oblong eyes; large, often pendulous ears (except for the Nellore breed, which has small ears); a
quick, light gait; and a unique voice.
Introduction to America
There are four principal Indian breeds that came to the U.S. and contributed to the development of the American
Brahman. They are the Guzerat, Nellore, Gir (Gyr), and to a lesser extent the Krishna Valley. The names of
these Zebu breeds represent the names of the provinces from which they originated. There has been no special
effort to keep these breeds separate; instead, they have been blended into one breed, the American Brahman.
The first importation from India came to South Carolina in 1849, but the identity of these cattle was lost during
the Civil War. Other Indian importations were made in 1854, 1885, and 1904-06. During 1923-25, a total of
228 cattle of the Guzerat, Gyr, and Nellore breeds were imported to Texas from Brazil. In 1946, eighteen more
Brazilian cattle were imported. In total, there are records of less than 300 Zebu importations, and most of these
were bulls. Therefore, it follows that other breeding provided most of the foundation females for this rapidly
expanding American breed. By 1910 to 1920, it is reported that many cattle in Southwest Texas and the coastal
country of the Gulf of Mexico showed evidence of Brahman breeding.
The American Brahman Today
The bull, Manso, became the most important foundation sire of the breed in America. He was bred by
Sartwelle Bros. of Palacios, Texas, whose herd was one of the most prominent in the breed's development. Manso became the
property of J.D. Hudgins, Hungerford, Texas, where he sired large numbers of outstanding progeny that were
used widely at a time when the breed was undergoing favorable acceptance and expansion. A high percentage of
cattle in the breed can be traced back to Manso.
Today's American Brahman is superior to any of the original Indian breeds that contributed to its development.
The best specimens of the modern Brahman are moderately deep bodied and very thick and muscular
throughout. Brahman cows are good milkers and mothers. Over time, American Brahman breeders have
improved the reproductive efficiency of the breed. They have also cleaned up the pendulous sheaths and
prepuces that were problems in some bulls. Steel-gray is the most common color of the American Brahman, but
solid red cattle are also prevalent. A nervous temperament that is characteristic of some Brahman cattle has
been improved through selection for quiet dispositions.
Due to their heat tolerance and insect resistance, Brahman crossbred cattle have contributed immeasurably to
profitable beef production throughout the gulf coastal region of the U.S. Because of their relatively wide genetic
differences, crosses of the Brahman with European breeds can result in a very high level of hybrid vigor.
Following is a summary of research conducted at Texas A & M Univ. on Brahman-Hereford crossbreds:
Crossbred cows dropped more calves than did purebred cows, and more crossbred calves survived until
weaning. The crossbred calves were heavier at weaning and gained slightly more in the feedlot than did the
purebreds. With respect to the total amount of beef produced, the combined advantage over the better of the
two parents was about 25%. The advantages offered by Brahman crossbreds resulted in the formation of
several Brahman-influenced breeds which will be discussed in subsequent sections. In 2007-08, the American
Brahman Breeders Association registered 8,300 cattle.
The Barzona
The Beefmaster
Tom Lasater's culling program was based on six traits: disposition, fertility, weight, conformation, hardiness, and milk
production. Stress was placed on the production of pounds of beef. Lasater paid no attention to traits that do
not affect the production of carcass beef, such as horns, hide, or color. In 1961, a breed association was
established in San Antonio, TX, under the name of Beefmaster Breeders Universal. Since then, the name was
changed to Beefmaster Breeders United.
The Beefmaster Today
Today, it is estimated that the breed is composed of slightly less than one-half Brahman breeding and slightly
more than one-fourth each of Hereford and Shorthorn breeding. Colors vary greatly and include red, reddish
brown, brown, dun, and black. Some are solid colored; many have some white markings. With the current
commercial demand for black-hided cattle, more breeders are producing black Beefmasters. Even though a
breed association was formed, the original concepts of Tom Lasater in developing the Beefmaster have been
continued.
The best specimens of the breed today are thick, muscular, easy-fleshing cattle that can perform well under
harsh range conditions. Their body conformation is similar to that of the Brangus. However, they tend to be a
bit larger than the Brangus. In 2007-08, Beefmaster Breeders United registered 19,017 cattle, ranking them
tenth among all beef breeds in number of cattle registered.
The Braford
The modern Braford resembles the Brangus in size and type, but tends to be a bit more variable, perhaps
because its development began somewhat later (1947 vs. 1932), and fewer generations have been involved in
stabilizing its type. The Braford's color pattern is similar to that of the Hereford: a red body with a white face.
The Braford's popularity has spread from its Florida origins to other regions throughout the southern U.S. The
Braford's red color is of value in the south because it absorbs less heat than black-coated cattle. In 2005, the
United Braford Breeders registered approximately 1,800 cattle.
The Brangus
The Corriente
In Central and South America, the various descendants of the early Spanish and Portuguese cattle are generally
referred to as "Criollo." In parts of Northern Mexico, they are often called "Corriente," although this term is
normally used for any small cattle of indiscriminate breeding. "Corriente" became the most common term used
at the Mexico/U.S. border to refer to cattle purchased for rodeo use.
The Corriente Today
Descendants of the original Spanish cattle that have remained pure are now seen only in remote regions of
Central and South America, and in very limited numbers in some areas of the southern U.S. In Florida, the few
remaining small, native cattle descendants of the Mexican Corriente are called "Florida Scrub" cattle or
"Cracker" cattle; similar cattle in Louisiana are called "Swamp" cattle.
The Corriente is a very hardy, heat-tolerant animal that can survive in harsh environments where feed
conditions are sparse. They are small, slender-bodied, fine-boned cattle. Mature cows range in weight from
500 to 875 lbs, and mature bulls from 825 to 1,110 lbs. Hair color is quite variable, with tan and grayish tan
being the most common. Occasionally there is a white underline or a black-and-white pattern, and sometimes a
solid or nearly solid black.
There is a breed organization representing the Corriente: the North American Corriente Association in Kansas
City, Missouri. In 2007-08, 3,575 Corriente cattle were registered.
The Florida Cracker (Florida Scrub)
The Indo-Brazilian
Development in Brazil
The Indo-Brazilian is a Zebu type breed that was developed in Brazil from 1910 to 1930. The breed was
developed from unplanned crossings between the Gyr and Guzerat breeds and later with some infusion of the
Nellore breed. A herd book was established in 1936. Scientific breeding of Indo-Brazilian cattle began near the
city of Uberaba in the state of Minas Gerais in southeastern Brazil. A breed society was formed in 1939.
Introduction to America
By 1946, Indo-Brazilian cattle were being exported to the United States. Some sources suggest that they
contributed to the development of the American Brahman. Greater use was made of Indo-Brazilian blood in the
American Brahman during the 1980s, when undue emphasis was placed on greater frame size in virtually all
U.S. beef breeds.
The Indo-Brazilian Today
The Indo-Brazilian is larger-framed, more loosely built, and lighter-muscled than the American Brahman. The
breed is distinctive for its very large pendulous ears, similar to those of the Gyr breed but much larger. It
probably has the largest ears of any breed of cattle in the world. The Indo-Brazilian is white to dark gray in
color.
The Romosinuano
U.S. MARC scientists evaluated the Romosinuano along with eight other heat-tolerant breeds as well as the
Hereford and Angus. Percent of unassisted births for Romosinuano-sired cattle was nearly perfect (99.7%).
Growth traits were significantly greater than the Texas Longhorn, tended to be greater than the Boran and Tuli,
but were lower than the two British breeds, as well as the Brahman, Nellore, Brangus, Beefmaster, and
Bonsmara. Age at puberty of heifers (385 days) was significantly later than British and Longhorn heifers,
significantly earlier than Brahmans, and similar to the other breeds. Heifer pregnancy rate was significantly
higher than the Brahman (92.7 vs. 82.8%) and comparable to all other breeds. Percent of unassisted calvings
for first-calf Romosinuano-sired heifers (83.8%) was among the highest of the breeds evaluated. However,
average weaning weight of their calves (388 lbs) was the lowest of all breeds.
In summary, the Romosinuano is a heat-tolerant breed that is high in reproductive traits, acceptable in
temperament, but quite low in growth traits.
The Santa Cruz
The Santa Gertrudis
Development
The Santa Gertrudis breed was developed by the King Ranch at Kingsville in south Texas. This is the largest
ranch in the U.S. and one of the largest in the world. In the early 1900s, the ranch became interested in using
Brahman breeding to improve range cattle in the region. In 1910, they secured an extremely growthy half-blood Shorthorn-Brahman bull from Tom O'Connor of Victoria, TX, and began mating him to their purebred
Shorthorn cows. The resultant offspring did so well that King Ranch became seriously interested in
crossbreeding Shorthorns and Brahmans. Around 1918, they purchased 52 of the best 3-yr.-old bulls that they
could secure from the Borden Brahman herd. These bulls were primarily three-fourths blood Brahman, because
very few purebreds were available at the time. The Borden bulls were then mated to their Shorthorn cows
which resulted in Brahman, Shorthorn progeny. From these matings, the best red heifers were mated to the
best red bulls to propagate the - line. After about three years of such matings, an outstanding red bull
named Monkey appeared. Monkey is recognized as the foundation sire of the breed, and all present day Santa
Gertrudis trace to him. The breed was named Santa Gertrudis because it was developed on the Santa Gertrudis
land grant, which was originally granted from the Crown of Spain.
The Santa Gertrudis Today
Today, the popularity of the Santa Gertrudis is being challenged by two other Brahman derivatives, the Brangus
and the Beefmaster, which were developed after the Santa Gertrudis. It would appear that all three of these
Brahman derivatives have an important role to play in the environment of the southern U.S. The Germ Plasm
Evaluation (GPE) study at the U.S. Meat Animal Research Center shows that the Santa Gertrudis is similar to
the three major British breeds (Angus, Hereford and Shorthorn) in growth rate, percent retail cuts, and
reproduction traits. They do, however, have a significantly lower percentage of carcasses that grade USDA
Choice (59% vs. 72%). In 2007-08, Santa Gertrudis Breeders International registered 7,500 cattle.
The Senepol
The pure N'Dama is a small, hardy, tan-colored breed, usually with horns, but some individuals are polled. It is
heat tolerant and unusually fertile under tropical conditions. The N'Dama is one of the very few African breeds
which carries a strong tolerance to the disease trypanosomiasis, carried by the tsetse fly.
Introduction to the United States
In 1976, Senepol breeders on St. Croix adopted an on-farm performance testing program through the U.S.
Department of Agriculture and the College of the Virgin Islands Extension Service. In 1977, the first plane load
of Senepol cattle were sent to the U.S. mainland. Since its introduction, the Senepol influence has spread across
the southern United States.
The Senepol Today
The Senepol is a strongly polled breed of medium frame size. Mature cows weigh 1,100 to 1,300 lbs. Mature
bulls average about 1,750 lbs. The cows are good milkers, and bull calves weaned at 8 months of age will
weigh up to 550 lbs. The Senepol is light solid red in color with only very minor white undermarkings on some
individuals. Its beef-type conformation is generally better than that of most other heat-tolerant breeds.
The Senepol was evaluated along with the Brahman and Tuli breeds in a multi-state Southern Regional
Research Project (S-1013) designed to study tropically adapted breeds. At birth, Senepol-sired calves were 6.5
lbs lighter than Brahman-sired calves, and 5.5 lbs heavier than Tuli-sired calves. At weaning, Senepol-sired
calves were 41.6 lbs lighter than Brahman-sired calves, but there was no significant difference between
Senepol- and Tuli-sired calves. The same was true for heifers at 1.5 and 2.5 years of age. However, from 3.5 to
8.5 years, Tuli-sired females were significantly lighter in weight than either Senepol- or Brahman-sired females.
Lifetime production efficiency was then evaluated for 73 Senepol-, 93 Brahman-, and 86 Tuli-sired females that
were maintained on south Texas rangeland. As first-calf 2-year-olds, Senepol-sired females had calves that
were 5.3 lbs heavier at birth than calves from the other two breeds. They also experienced greater calving
difficulty, having 29.2% assisted births compared to 16.7% and 5.6% for Tuli- and Brahman-sired heifers.
Weaning weights of calves from Senepol-sired females were significantly lower than calves from Brahman-sired females until their 7th parity, when there was no significant difference. Weaning weights of calves from
Senepol-sired females did not differ from calves out of Tuli-sired females at any parity, from 1st through 7th.
Brahman-sired heifers had the lowest pregnancy rates of 2-year-olds. Thereafter, except for the 3rd parity,
Senepol-sired females had the lowest pregnancy rates through the 7th parity. Senepol-sired heifers also had the
lowest number of live calves at birth and lowest percent weaned until the 6th and 7th parity.
Production efficiency, as measured by lbs of calf weaned per 100 lbs of cow exposed, was lowest for the Senepol in
the 1st, 2nd, 3rd, 4th, and 6th parities, but highest in the 7th parity. For the average of seven calf crops, Tuli had the
highest efficiency, being 1.6% greater than Brahman, and 3.6% greater than Senepol.
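The production-efficiency measure quoted above, lbs of calf weaned per 100 lbs of cow exposed, is a simple ratio. The sketch below shows the arithmetic with hypothetical herd numbers; none of the figures are from the study.

```python
# Production efficiency: lbs of calf weaned per 100 lbs of cow exposed.
# The herd numbers below are hypothetical, chosen only to illustrate the arithmetic.

def weaning_efficiency(total_calf_weaning_weight_lbs, total_cow_weight_lbs):
    """Return lbs of calf weaned per 100 lbs of cow exposed to breeding."""
    return 100.0 * total_calf_weaning_weight_lbs / total_cow_weight_lbs

# 90 calves weaned at 450 lbs each, from 100 exposed cows averaging 1,100 lbs:
efficiency = weaning_efficiency(90 * 450, 100 * 1100)
print(round(efficiency, 1))  # → 36.8
```

Small differences in this ratio matter across a herd: each additional point means more pounds of calf weaned for the same cow weight maintained.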
The authors concluded that in the south Texas environment, the Tuli crosses compared favorably and in most
circumstances were more productive and efficient than Brahman and Senepol crosses, even though Brahman
crosses produced consistently heavier calves. They went on to say that the Senepol crosses were the least
adapted to south Texas and only began to perform comparably to the Brahman and Tuli crosses in later parities.
The Simbrah
Development
In the late 1960s, shortly after the introduction of Simmental to North America, a few southern cattlemen began
to act upon the idea of a Brahman derivative breed that could thrive in the sub-tropical environment of the Gulf
Coastal region of the U.S. and still meet the needs of other segments of the beef industry: stocker, feeder,
packer, retailer, and consumer.
These producers wanted to develop a breed that would complement the Brahman in the areas of muscling,
growth, temperament, early sexual maturity, and fertility. The Brahman would in turn provide strengths in heat
tolerance, insect resistance, longevity, grazing ability, and calving ease.
Although the experimentation of combining Simmental and Brahman began in the late 1960s, the actual
registration of the first Simbrah animal occurred in 1977, the year that Simbrah registration was approved by the
membership of the American Simmental Association (ASA). The first year, 700 Simbrah cattle were registered.
The second year, 1,100 head were registered, and after five years the herd book approached 10,000
animals. Since then, Simbrah cattle have spread beyond the Gulf Coast area to other regions of the country.
The Simbrah Today
The basic requirements for Simbrah cattle are as follows: 1) a minimum of Simmental; 2) a minimum of
Brahman; 3) a maximum of combination of other breeds. The following criteria must be met to qualify as a
purebred Simbrah: 1) Simmental; 2) Brahman; 3) both parents must be ASA registered. The Simbrah herd
book is maintained by the ASA, and all procedures and performance requirements applying to Simmental cattle
also apply to the Simbrah.
In size, the Simbrah is a moderate to large breed with most cows in the range of 1100-1500 lbs and bulls in the
range of 1,800-2,500 lbs. It is the largest of the Brahman derivative breeds. Cattlemen in the warmer climates
prefer red color and eye pigmentation. In cooler environments, black Simbrah are quite popular. Simbrah have
fine sleek hair in the summer, but usually grow enough hair in the winter to thrive up into the central plains of
the U.S. Polled Simbrah are popular and becoming more numerous.
In 2007, the American Simmental Association registered 2,020 Simbrah cattle.
The Texas Longhorn
The era of the long trail drives, when cowboys drove large Texas Longhorn herds over paths that led to cow
towns along the railroad, lasted until late in the 19th century when the open range was carved up by settlers.
Furthermore, the earlier-maturing British breeds had been imported to improve the fattening qualities of the
native cattle. At the beginning of the 20th century, the Texas Longhorn was in danger of extinction. Then, in
1927, a small herd was established by the U.S. government on the Wichita Mountain Wildlife Refuge in order
to preserve the breed. The Longhorn was further perpetuated by formation of the Texas Longhorn Breeders
Association of America in 1964, headquartered in Fort Worth, TX.
The Texas Longhorn Today
Interest in the Texas Longhorn has spread throughout the U.S., and breeders may now be found in virtually
every state. In order to preserve the basic characteristics of the breed, strong selection pressure is placed on
length and shape of horn, and coat color pattern. A highly speckled pattern with varying colors is preferred.
Breed average horn length is 50 inches, although many individuals have horns that are much longer. The
Longhorn is relatively light-muscled, but in recent years some breeders have selected for greater muscling.
Mature cows range in weight from 725 to 825 lbs. Bulls range from 1,100 to 1,750 lbs.
The Texas Longhorn was evaluated in Cycle IV of U.S. MARC's Germ Plasm Evaluation program. Birth
weight of half-blood Longhorn calves (73.5 lbs) was the lowest of all beef breeds, and the percent of unassisted
calvings was 100%. Weaning weight, postweaning average daily gain, final slaughter weight, and carcass
weight were the lowest of all breeds. Percent of carcasses grading USDA Choice (57%) was lower than the
British breeds, comparable to many of the Continental breeds, and higher than the Zebu breeds. Backfat
thickness and percent retail product were also similar to the Continental breeds. Reproductive traits of half-blood Longhorn females were among the best of all breeds in the U.S. MARC GPE program. Weaning weight
of calves from half-blood Longhorn cows was the lightest of all breeds except for the Galloway.
The U.S. MARC data show that the Longhorn excels in fertility and ease of calving. In addition, they are
tolerant to extreme heat, much like the Zebu breeds. All in all, the Texas Longhorn is a fertile, very hardy and
relatively trouble-free breed that is adaptable to sparse feed resources and harsh environments.
INDIA AND PAKISTAN
The Cholistani
Origin in Pakistan
The Cholistani cattle are found in the Cholistan desert in Bahawalpur in eastern Pakistan. They are a Bos
indicus breed. The Cholistani is a triple-purpose meat/milk/draft breed. They are of relatively recent origin,
and are believed to have been developed by crossing the Sahiwal with the local cattle of the region. In color,
the Cholistani is usually white with red, brown, or black speckling. There is no indication that the Cholistani
has been exported to other countries.
The Guzerat
Development in India
The Guzerat or Kankrej breed is of the Zebu type. It originated in the state of Gujerat on India's west coast
many centuries ago. The Guzerat has spread beyond its original breeding area to become one of India's most
important milk/draft breeds.
Introduction to the Americas
The importation of the Guzerat from India to Brazil, where it is still bred pure, began in 1890. A Brazilian herd
book was established in 1938. The Guzerat was exported to the United States, where it was the most important
breed in the formation of the American Brahman.
The Guzerat Today
The Guzerat is among India's largest and heaviest breeds. Preweaning growth of Guzerat purebreds is among
the highest of the Bos indicus breeds but lower than that of the larger Bos taurus breeds. Similar results have
been reported for post-weaning gain, yearling weight, feedyard gain, and feed efficiency. Carcass quality grade
is lower when compared to Bos taurus breeds, but similar to other Bos indicus breeds. Color of the Guzerat
varies from light gray to black. The midsection is generally lighter in color than the remainder of the body,
especially in bulls. The bulls tend to become darker than cows or steers. In India, Guzerat cows produce an
average of 2,500 lbs of milk, but superior cows yield as much as 6,500 lbs per lactation.
The Gyr (Gir)
Development in India
The Gyr originated in the state of Gujerat on the west coast of India, and has spread to the neighboring states of
Maharashtra and Rajasthan. It is one of India's foremost dual-purpose milk/meat breeds. The Gyr has been
used in India to improve local breeds, including the Sahiwal and Red Sindhi.
Introduction to the Americas
From 1890 to 1921, Gyr cattle were exported to Brazil, where they were used for beef production. The largest
shipments were made from 1918 to 1921. Most of the Gyr cattle introduced to the United States came from
Brazil. The Gyr played a role in the development of the American Brahman.
The Gyr Today
The Gyr is distinctive in appearance with an unusually round, bulging forehead, long pendulous ears, and horns
that point backward and bend spirally up. The Gyr ranges from white to red, with many of them being mottled
in color. It is a relatively small breed. Mature cows average about 850 lbs; mature bulls about 1,200 lbs.
Average milk production of Gyr cows is 3,500 lbs per lactation, with a record production of 7,000 lbs. Butterfat
content is about 4.5%.
The Krishna Valley
Development in India
The Krishna Valley is a breed of recent origin. During the last two decades of the 19th century, some of the
Rajas (Hindu noblemen) of the South Mahratta country, which lies in the watershed of the Krishna and other
rivers, wanted to develop a powerful bullock for agricultural purposes in the sticky, black soil of the region. It
is reported that the Krishna Valley developed from local Zebu cattle mixed with Gyr, Nellore, Guzerat, and
Mysore blood. Massiveness of size was the primary factor in the selection process.
Introduction to the Americas
Krishna Valley cattle were exported to the United States and Brazil, where they were crossed with other Zebu
breeds in developing regionally adapted Bos indicus cattle in the two countries. As a result of these crossings,
none of the Krishna Valley cattle retained their identity in the Americas.
The Krishna Valley Today
The cattle of this breed are very large in frame size, but rather loosely-built in their skeletal make-up. A
grayish-white is the preferred color, with a darker shade on the forequarters and hindquarters of the bulls. Adult
females appear more white. The forehead has a distinct bulge. The Krishna Valley has small curved horns that
usually emerge in an outward direction and curve slightly upwards and inwards. The ears are small and
pointed; breeders prefer that they not droop too much.
Krishna Valley oxen are good workers, but they cannot be used on hard ground because their hooves are
relatively soft. The cows produce some milk for human consumption after suckling their calves. However,
milking ability is extremely variable in the breed.
The Nellore
Origin in India
There has never actually existed a breed in India known as the Nellore. The name refers to a district of the old
Presidency of Madras, now belonging to the new State of Andhra on the Bay of Bengal. It was in Brazil that some
writers began to use the name Nellore as a synonym for Ongole, the Indian breed that contributed most to the
development of the Nellore.
The origin of the Ongole goes back 2,000 years before Christian times. It was Aryan people that brought the
ancestors of the Nellore to India, where they were subjected to very extreme weather conditions: the arid land
of Baluchistan, the cold winter of the Punjab, the alluvial lands of the Ganges, and the torrid heat by the Bay of Bengal. These
stressful environments provided the Ongole breed the adaptability genes that are now expressed in the Nellore.
Development in Brazil
Two Ongole cattle were brought to Brazil in 1868, and two more came in 1878. Another importation came in
1895. The Nellore expanded gradually in Brazil during the late 1800s and into the 1900s. In 1938, the Nellore
herd book was created and breed standards were established. In 1960, an additional 20 animals were imported
from India. In 1962, the last and most important purchase of cattle from India was made, when 84 Ongole were
imported, and became the foundation of the most influential breeding lines. These lines contributed greatly to
the rapid expansion of the Brazilian cattle population. In 1965, the national herd consisted of 56 million. By
1995, there were 160 million cattle in Brazil, of which 100 million were Nellore.
Introduction to America
Nellore cattle were introduced to the U.S. for the first time during 1923-25, when 228 cattle of the Guzerat, Gyr,
and Nellore breeds were imported to Texas from Brazil. In 1946, eighteen more cattle were imported to the
U.S. from Brazil. These cattle were of the Indo-Brazilian breed, which is made up of some Nellore breeding as
well as Gyr and Guzerat.
The Nellore Today
The Nellore is among the largest of the purebred Indian breeds. In India, Nellore bulls are used for plowing.
Cows produce an average of about 3,525 lbs of milk per lactation, but record yields of over 7,770 lbs indicate
excellent dairy potential for the breed in their native land.
U.S. MARC evaluated the Nellore along with a number of other tropically adapted breeds as well as two British
breeds, Angus and Hereford. Nellore-sired calves were lighter at birth, required less assistance, and had a
higher survival rate (95.5 vs. 88.1%) than those sired by American Brahman bulls. There were no significant
differences in weaning weight, postweaning average daily gain, or final slaughter weight. Both Zebu breeds
were significantly lower in postweaning gain and final slaughter weight than the two British breeds. Percent
grading choice was 44.0 and 39.7% for Nellore and Brahman, respectively, compared to 70.7% for Hereford x
Angus cross steers.
Percent of Nellore-sired heifers expressing puberty at 550 days was only 52.1% vs. 87.4% for Brahman.
However, final pregnancy rate was in favor of the Nellore (92.1 vs. 82.8%). As first-calf 2-year-olds, Nellore-sired females had a remarkable 100% unassisted birth rate. Calf weaning weight was similar to Brahman-sired
females and significantly heavier than Angus and Hereford. Performance of mature (3 to 7 years of age)
Nellore-sired females was similar to their 2-year-old performance when compared with the British breeds.
Based upon the U.S. MARC data, it would appear that the Nellore contributed much of the weight gain found in
the American Brahman.
The Red Sindhi
Development in Pakistan
The Red Sindhi is a humped Zebu breed that originated in the Pakistani state of Sind. Due to its hardiness, heat
tolerance, tick resistance and milk production, it has spread to many regions of India and to at least 33 countries
in Africa, Asia, Oceania, and the Americas.
The Sahiwal
Introduction to America
It is unclear when the Sahiwal was brought to America. Because there is no official Sahiwal breed association
in the U.S., it is difficult to determine if or how many animals remain.
Evaluating the Sahiwal
The Sahiwal was evaluated in Cycle III of the Germ Plasm Evaluation program at U.S. MARC. Sahiwal-sired
calves had a higher percent of assisted births (8.7%) than might be expected for a small breed. Weaning weight,
postweaning gain, slaughter weight, and carcass weight were similar to the smaller British breeds (Red Poll,
Devon, and Galloway). Percent grading USDA choice (42.8%) was comparable to the Brahman and Nellore.
The same was true for percent retail product at 69.2%.
Pregnancy rate of Sahiwal-sired heifers was a remarkable 100%. Percent assisted births of Sahiwal-sired cows
was only 2%, similar to the Brahman and Nellore. Weaning weight of their calves (502 lbs) was lower than the
Brahman and Nellore, but similar to Hereford x Angus crosses.
In summary, the Sahiwal would appear to be an ideal low-maintenance, dual-purpose meat/milk breed for harsh
tropical and sub-tropical environments.
AFRICA
The Africander (Afrikander)
Mature cows weigh 1,150 to 1,325 lbs, and mature bulls weigh from 1,650 to 2,200 lbs. Frame size is
about 6.5 on a scale of 1 to 11. The Africander tends to be a late-maturing animal and yields a carcass with
relatively low fat cover. The preferred color is a deep dark red; yellow animals are bred separately. Although
most Africander cattle are horned, there is a herd of polled cattle in South Africa. The Africander was used
with the Shorthorn in developing the Bonsmara breed, and with the Holstein in developing the Drakensberger
breed. Through the use of bulls and frozen semen, the Africander has been used in up-grading indigenous cattle
in a number of tropical countries.
The Ankole
Development in Africa
The original Ankole cattle are believed to have been brought to northern Uganda by Hamitic tribes sometime
between the 13th and 15th centuries. The Ankole's susceptibility to the tsetse fly forced the tribes and their cattle
further south. The Bahima tribe settled on the shores of Lake Victoria in Uganda, Kenya, and Tanzania. The
Watusi or Tutsi tribe continued on to Rwanda and Burundi with their cattle, some of which have spread to Zaire
Selection in all the tribes is based on horn size and shape. They frequently measure 5 feet in length, 6 inches in
decimeter at the base of the skull, and as much as 6 feet between tips.
The Ankole Today
Ankole cattle are of the Sanga type (Longhorn Zebu x Egyptian Longhorn). The color is often red, but brown,
black, or spotted cattle are not uncommon. They are long-legged, fine-boned animals with a narrow body, and
are quite light-muscled. The hump is small and barely visible on the cow.
Even though the small-uddered Ankole cows yield only a meager amount of milk, milking is nevertheless an
important ritual in some tribes. Bloodletting is a common practice. A few tribes use the cattle for work, but
more use them for meat. In general, the animals are highly valued as status symbols for ceremonial functions,
and not for their productivity. In good condition, a mature Ankole cow may weigh 600 lbs, a mature bull 900
lbs. These weights are reached only at seasons when good grass is available. Average weights are typically
lower because of inadequate nutrition. There are three main strains of the Ankole: 1) Bahema strain, found in
Northern Kivu; 2) Bashi strain, found in Southern Kivu; and 3) Tutsi (Watusi) strain, found in Burundi and
Rwanda. The horns are smallest in the Bahema and largest in the Tutsi strain.
The Ankole-Watusi
Development in Africa
The Ankole-Watusi is one of three strains of the Ankole, as discussed previously.
The tall, stately Watusi or Tutsi people arrived in Rwanda and Burundi around the 14th century, bringing with
them the massive-horned Ankole cattle. These cattle so impressed the native Hutu people that the Watusi
devised a way to exploit the situation. They loaned cows to the Hutu farmers who were allowed to look after
them, milk them, and sometimes keep the bull calves. Heifer calves were returned to the Watusi owners. For
these favors, the Hutu farmers cultivated the Watusi land and performed other services. In this way, the
Watusi avoided what they regarded as menial work, and dominated the Hutu people for centuries.
The Ankole-Watusi Today
Traditionally, Watusi cattle were considered sacred. They provided milk to their owners, but were only rarely
used for meat production because an owner's wealth was measured in number of live animals. Within the
breed, by far the longest horns are those of the Inyambo sub-type, which are regarded as sacred animals and
kept only by tribal chiefs and kings. The general run of Watusi cattle are of the Inkuku sub-type, which has
shorter horns. For the Watusi people, milking carries a ritual meaning. Cattle and milk are given a special
place in ceremonial rites.
Milk yield of Watusi cows is very low, with a typical cow producing only 2 lbs of milk per day, although an
exceptional one may produce up to 8 lbs per day. However, butterfat content is extremely high at 10%. The
lactation period is short. In recent years, the national government has attempted to select for cattle that produce
more milk and have better meat production. Famine and disease, as well as conflicts with traditional practices,
have slowed this effort.
In body type, the Ankole-Watusi is similar to the Ankole. While the color of the Ankole varies, the Watusi
strain is predominantly red.
The Ankole-Watusi in North America
In January, 1983, North Americans interested in the Ankole-Watusi met in Denver, Colorado, and established
the Ankole Watusi International Registry. Many of these people had been raising the breed since before 1978.
Within 5 months, the Registry had 74 members. These members shared a strong commitment to the breed,
although they had different priorities for it. Some simply wanted to concentrate on preventing extinction of the
breed. Some were involved in the production of superior crossbred roping cattle. Others championed the low-fat characteristics of the meat. In 1989, the Registry adopted standards for the breed.
The Bonsmara
To develop a productive beef breed for the subtropical climate, Dr. Jan Bonsma at the Mara Research Station in
Transvaal in 1947 began crossing selected Afrikander cows with bulls of five British beef breeds, and then
performance tested the progeny. After pilot trials, it was decided to continue only with the better performing
Hereford and Shorthorn crossbreds. The initial results were encouraging; weaning weights of calves from the
crossbred cows averaged about 20% higher than the average of the three parent breeds. Likewise, calving
percentages were appreciably higher, and calf mortality was much lower than in the British breeds. Within 20
years after the initial crossbreeding trials, a superior beef cattle breed for the environment had been established.
The final result consisted of approximately 3/16 Hereford, 3/16 Shorthorn, and 5/8 Afrikander. The name
Bonsmara was derived from Bonsma, the scientist who played the major role in developing the breed, and
Mara, the research station at which the animals were bred.
Introduction to North America
Although the Bonsmara has not been exported to the U.S., it nevertheless has had an indirect effect as a result of
the many lectures conducted by Jan Bonsma himself throughout the Americas. As a result of his long-time
research, he had observed that certain conformation traits were related to various functional traits (reproduction,
growth, etc.). In other words, form related to function. His theories were often challenged by members of the
audience, but Bonsma's strong rebuttals usually carried the day.
The Bonsmara Today
Bonsmara calves represent about 45% of all births recorded by beef and dual-purpose breeds in South Africa.
The conformation of the Bonsmara is an improvement over the Africander. The sloping rump has been
somewhat leveled, and the hump has been reduced in the bull and made almost nonexistent in the cow.
Bonsmara cattle are reddish brown in color. Mature cows weigh about 1,100 lbs; mature bulls about 1,775 lbs.
The Bonsmara was recently evaluated along with eight other heat-tolerant breeds, as well as the Hereford and
Angus, in U.S. MARCs Germ Plasm Evaluation program. Percent of unassisted births (98.3%) was similar to
the Romosinuano and the Longhorn, and significantly higher than the Brahman (89.4%). Final slaughter weight
(1,218 lbs) was significantly higher than the Boran, Tuli, Romosinuano, and Longhorn, similar to the Brahman
and Nellore, but significantly lower than the British breeds and the Brangus and Beefmaster. Age at puberty of
Bonsmara-sired heifers (380 days) was significantly later than British and Longhorn heifers, significantly earlier
than Brahman, and comparable to the other breeds. Heifer pregnancy rate (87.9%) was intermediate among all
breeds. Percent of unassisted calvings for first-calf Bonsmara-sired heifers (55.6%) was the lowest of all
breeds. Average weaning weight of their calves (428 lbs) was significantly higher than the Romosinuano,
significantly lower than the Brahman, and similar to the other breeds.
In summary, the Bonsmara is a heat-tolerant breed that is similar in many respects to the Beefmaster and
Brangus and could be used as an alternative in a crossbreeding program in hot climates.
The Boran
Development in Africa
The Boran was developed by Kenyan ranchers from the cattle of the Borana people of southern Ethiopia. The
breed can now be found in southwestern Somalia as well as southern Ethiopia and northern Kenya. The Boran
belongs to the East African Shorthorned Zebu type and is raised primarily for meat production. In 1990, there
was an importation of Boran cattle to Australia.
The Masai
Development in Africa
The Masai cattle are those developed and owned by the Masai tribe of Kenya and Tanzania in eastern Africa.
The life of the tribe revolves around cattle. Virtually all social roles and status are derived from the relationship
of individuals to their cattle. Cows' milk, along with blood, is the staple food of the Masai, who eat no grain or
fruit. Once a month, blood is taken from the animals by shooting a small arrow into the neck. The blood is then
mixed with milk in a gourd prior to consumption. Masai cattle are of a Zebu-Sanga type.
The Masai Today
There is a significant amount of variation in Masai cattle due to the centuries-old practice of stealing cattle from
other tribes. This is supported by the Masai legend which relates that Ngai (God) sent them cattle at the
beginning of time and gave them the sole right to retain cattle. Compared to calves belonging to neighboring
tribes, Masai calves are the largest and in the best condition. This is due mainly to the generous amount of milk
the calves receive. In general, the Masai have so many cows that only a portion of the milk is needed for human
consumption, leaving an abundant supply for the calves.
Masai cattle are relatively small in size. Mature cows average about 790 lbs, and mature bulls about 880 lbs.
Frame score ranges from 3.5 to 4.5 on a 9-point scale. The breed has an unusually small, narrow head. They
are reasonably well-muscled. Coloration is quite variable, although the Masai people prefer brindle colored
animals.
The Mashona
Development in Zimbabwe
Mashona cattle originated from the Shona people of eastern Zimbabwe. Natural selection over the centuries
resulted in a hardy breed that was tolerant of the diseases and parasites of the dry area where it was raised. Like
the Tuli, the Mashona is of the Sanga type.
Improvement of the Mashona began in 1941, when F.S.B. Willoughby secured some of the best cows and a
few of the bulls that he could convince the chiefs of the Mashona tribe to part with. A herd book was
established in 1954. For registration, the Mashona Breed Society requires the following: 1) All cattle must meet
the beef conformation standards established; 2) Cows must have calved at least twice in three years and have
produced two progeny meeting breed conformation standards; and 3) Bulls must have produced an entire
season's progeny, of which 90% and a minimum of 18 calves must meet the breed conformation standards.
The Mashona Today
The breed is raised primarily for meat production, although it is reported that they can also be used as work
animals. The cattle are either solid dark red or solid black in color. Most of them are polled, but horns are not
discriminated against. When horns are present, they are relatively small, growing outward and upward from the
head. The improved Mashona is a thickly muscled animal. In size, they are quite small, with mature females
weighing only 600 to 775 lbs. Today, the Mashona is being bred in a widely spreading territory covering most
of the eastern half of Zimbabwe and an adjoining region of Mozambique.
The N'Dama
Development in Africa
The N'Dama is a humpless Bos taurus breed that originated in West Africa, in a mountainous region of the
country of Guinea. From there, N'Dama cattle spread or were exported to nearly every country of West Africa
and to parts of central Africa. The breed's expansion is primarily due to its resistance to nagana, the usually
fatal cattle disease caused by the tsetse fly.
The N'Dama Today
The N'Dama is used primarily for producing meat, because it is not a good milker. N'Dama cows yield only
about 1,000 lbs of milk in a 7 to 8 month lactation. The word N'Dama means "small cattle." It is indeed a very
small breed. Mature cows weigh only 460 to 660 lbs, and mature bulls 550 to 800 lbs. Frame score is barely 1
on a 1 to 9 scale. Color of the N'Dama varies from region to region. In general, it is beige or light yellow to
deep brown. However, some animals are red to nearly black, and a few are gray or white. The N'Dama is a
very hardy breed that thrives better than most other breeds on the low-protein grasses and rough vegetation of
tropical Africa.
The Tuli
Development in Africa
In 1942, Mr. Len Harvey, a farmer in the lowveld region of Zimbabwe, noticed that there appeared to be a
distinct type of yellow Sanga cattle amongst the ordinary native stock. These cattle seemed to be better adapted
to the harsh local environment, and were superior to other stock. As a result of Harvey's observations, the
Zimbabwe government decided to purchase some of these cattle to determine if they could be further improved
and whether they could breed true to type.
During 1945, 3000 acres were set aside in this same region for the establishment of a cattle breeding station,
and Mr. Harvey was employed as full-time officer in charge. The objective of the then-named Tuli Breeding
Station was to assist in improving the cattle in the outlying areas of Zimbabwe.
The commercial cattle raisers soon appreciated the potential of the Tuli, and for many years, breeding stock was
sold to them. The breed can now be found throughout Zimbabwe. The potential of the Tuli was also
recognized by South African cattle breeders, and numerous imports have resulted in an ever-increasing Tuli
population in South Africa. In 1961, the Tuli Breed Society was established, which included development of a
constitution and rules for registration.
The Tuli is a small- to medium-sized breed. Mature cows average about 1,100 lbs. in weight, and mature bulls
about 1,750 lbs. Most Tuli cattle are naturally polled and most of them are golden brown in color, but some are
white, pale gray, or red. The Tuli Breed Society recognizes all unicolored animals except black.
The Tuli Today
The Tuli was evaluated in Cycle V of U.S. MARC's Germ Plasm Evaluation program. Percent of unassisted
calvings (97.1%) of Tuli-sired calves was not different from Hereford x Angus calves. Weaning weight, final
slaughter weight, and carcass weight were the lowest of the breeds evaluated. Percent of carcasses grading
USDA Choice (63.8%) was the highest among the breeds evaluated, except for Hereford x Angus crosses (77.4%).
Age at puberty for Tuli-sired heifers was significantly later than Hereford x Angus crosses (365 vs. 351 days),
but there was no difference in final pregnancy rate. Tuli-sired cows were similar to Hereford x Angus cross
cows in all reproductive traits. However, average weaning weight of their calves tended to be lower (471 vs.
483 lbs.).
In summary, the Tuli provides an opportunity for producers to use a heat-tolerant breed that does not
compromise carcass quality.
AUSTRALIA
The Belmont Adapteur
Development in Australia
The Belmont Adapteur was developed by the Commonwealth Scientific and Industrial Research Organization's
(CSIRO) Division of Animal Genetics at the Belmont Station near Rockhampton in Queensland, Australia.
Development began in 1953, the same year that development of the Belmont Red breed began. The Adapteur is
the result of crosses between Herefords and Shorthorns. Selection has been primarily for increased tolerance to
heat and resistance to ticks. The foundation female of the breed was a cow with zero ticks.
The Belmont Adapteur Today
Adapteur cattle are early maturing and no more than medium in size. They are low maintenance cattle that
require relatively little care. They are mostly red in color with some white on the head, underline, and legs.
Their eyes are well pigmented. When Adapteur bulls are mated with Brahman cows to produce F1 progeny, a
substantial degree of hybrid vigor is expressed. The F1 progeny grow faster and are more fertile than Brahmans,
but have similar resistance to ticks and internal parasites. They are about 10% more efficient than Brahmans,
and have the carcass qualities of the Bos taurus. Some Adapteurs have an extremely high resistance to cattle
ticks as they carry a gene that has a major impact on resistance. The frequency of the gene is being increased
by embryo transfer and assortative mating.
The Belmont Red
Development in Australia
The Belmont Red was developed by the Commonwealth Scientific and Industrial Research Organization
(CSIRO) Division of Animal Genetics at the Belmont Station near Rockhampton in Queensland, Australia. Its
breed composition is approximately 50% Africander, 25% Hereford, and 25% Shorthorn. The breed was
developed to improve the fertility of the Bos indicus component (Africander), while still retaining the traits of
heat tolerance and tick resistance. Development began in 1953. By 1968, it had become a recognized breed. A
breed society was established in 1978.
The Belmont Red Today
Research trials in Australia and Africa have shown that the Belmont Red has higher fertility than pure Bos
indicus breeds, and better than that of most other Bos indicus x Bos taurus composites. Heat tolerance has remained
remarkably good. Tick resistance, while lower than that of pure Bos indicus cattle, is still high. As the name
implies, the coat color is red. Regarding frame size, the Belmont Red lies somewhere between the Africander
and the British breeds.
OTHER BREEDS
The Amerifax
The Beefmaker
Development in Australia
The Beefmaker was developed, starting in 1972 by the well-known Wright family on their New South Wales
properties of Wallamumbi and Jeogla. Its development involved the infusion of Simmental blood with
specially selected base Herefords from the Wright herd. The objective of the breeding program was to develop
cattle with faster growth rates, heavier carcass weights, a higher ratio of lean to fat in the carcass, maximum
fertility, and greater stress tolerance compared to the two contributing breeds.
The Beefmaker Today
After eight generations of breeding, the Beefmaker was stabilized at 75% Hereford and 25% Simmental
breeding. It established a national reputation for high feed conversion efficiency, high carcass yields, and
relatively low maintenance costs.
The Hays Converter
Development in Canada
In 1952, Harry Hays of Calgary, Alberta, a senator in the dominion government, began selecting the
individuals that were used in developing a highly productive beef animal. His goal was to breed an animal that
would efficiently convert feed to lean meat and reach a desirable market weight of 1100 lbs at 12 months of
age. Selection would be based only on performance.
Hays started by selecting a group of Hereford heifers from the reputation herd of a neighboring rancher in the
Turner Valley in southern Alberta. As a former dairyman, he was impressed with the growth potential of the
Holstein. He proceeded to select eight sons of Spring Farm Fond Hope, a Holstein bull weighing 3,120 lbs
whose progeny were known for their large size, strong constitution, and excellent feet. These bulls were from
stock that had exceptional udders and were better fleshed than the typical Holstein cow. Fond Hope himself,
having well-fleshed hindquarters, tended to be more of the type of the European Friesian than the Canadian
Holstein. The Fond Hope cows were mated to the Hereford females to produce Holstein x Hereford
progeny. From these progeny, 159 heifers were selected as the foundation cow herd of the new breed.
The second step was a backcross to the Hereford. The 159 heifers were bred artificially to the Hereford bull,
Silver Prince 7P, a Certified Meat Sire who weighed 2,640 lbs. From the resulting progeny, five of the fastest-gaining bulls were selected to go back into the foundation herd of 159 Holstein x Hereford females.
The third step was the introduction of Brown Swiss influence. Four bulls, all from dams weighing 1,800 lbs
each and which were great-grandsons of the Brown Swiss cow, Jane of Vernon, famous for having the world's
most perfect udder, were bred to 100 selected Hereford cows. For several years, the best females from the
resulting Brown Swiss x Hereford progeny were introduced into the original foundation herd, which at the time
was approximately 2/3 Hereford, 1/3 Holstein. The highest gaining bulls from these combined crosses were
then bred to the females with the best udders. At that point, the herd was closed. Subsequent selection
strategies emphasized weaning and yearling weights and, especially, the udders on replacement females. No
attention was paid to color. In 1975, the first purebred certificate of registration was issued for Hays
Converters.
The Hays Converter Today
Hays Converters are a large, rugged, well fleshed type of cattle with sound feet and legs. The cows are noted
for their excellent udders. The body is usually black, with white markings much like a Hereford. There are
some animals that are red and white rather than black and white. Mature cows in breeding condition weigh
1300 to 1400 lbs. Mature bulls weigh up to 2,800 lbs in breeding condition. Steers put on a finishing diet after
weaning weigh about 1,100 lbs at 12 to 15 months of age.
The breed registry association is located in Calgary, Alberta, and is known as the Canadian Hays Converter
Association-Hays Ranches.
The RX3
The Wagyu
Development in Japan
The word "Wagyu" refers to all Japanese beef cattle. "Wa" means Japanese or Japanese-style, and "gyu" means
"cattle." The original cattle of Japan were Turano-Mongolian animals, an ancient type of Asian cattle that are
believed to have been imported from Korea.
During the late 19th century, British and Continental breeds were imported to increase the size and draftability
of the native Japanese cattle. The Devon, Shorthorn, Jersey, and Guernsey were the first breeds to be imported.
Later, the government sponsored a program for cattle improvement, and Simmental, Brown Swiss, and Ayrshire
were brought in. There was no planned program for crossbreeding when this was done. The results of all this
miscellaneous interbreeding were considered unsatisfactory, and in 1918 a government program was initiated
for selection in accordance with recognized standards established for the Wagyu. Selection for specific traits
was dependent upon region, and extensive linebreeding was used to achieve these traits.
Three breeds evolved out of this program: 1) the Japanese Black, which accounts for about 80% of the Wagyu
population; 2) the Japanese Brown, which makes up close to 20% of the population; 3) the Japanese Polled,
which accounts for less than 1% of the Wagyu population and is black in color. However, another breed of
Wagyu was developed on the island of Kyushu that is red in color.
Introduction to America
The first Wagyu cattle were imported to the U.S. in 1976. They consisted of two Black Wagyu and two Red
Wagyu bulls. That was the only importation of Wagyu into the U.S. until 1993, when two male and three
female Black Wagyu were imported. Then, in 1994, 35 male and female cattle consisting of both black and red
genetics reached the U.S.
The Wagyu Today
The Wagyu is lighter-muscled and smaller in size than most British or Continental breeds. Mature cows weigh
about 950 lbs., and mature bulls about 1400 lbs.
The Wagyu was evaluated in Cycle VI of the Germ Plasm Evaluation program at U.S. MARC. Semen from 19
bulls obtained through the cooperation of the American Wagyu Association was used in the program. Even though
gestation length (286.9 days) was significantly longer than for the other five breeds evaluated, birth weights
were significantly lower (80.3 lbs) and unassisted calvings were 99.3%. Weaning weight (459 lbs),
postweaning average daily gain (2.69 lbs/day) and final slaughter weight (1,196 lbs) were significantly lower
than all other breeds.
Percent of carcasses grading USDA Choice (85%) was significantly higher than all other breeds, except for
Angus (88%). External fat thickness (.36 in.) was significantly lower than the average of the Hereford and
Angus (.49 in.), and percent retail product was significantly higher (62.5 vs. 60.8%). Warner-Bratzler shear
force of cooked steaks (7.82 lbs) was comparable to Angus, but significantly lower than the other breeds.
Age at puberty was younger for Wagyu-sired heifers than the average of Hereford and Angus-sired heifers (353
vs. 362 days), but final pregnancy rate was not different.
In summary, the Wagyu produces a high quality beef product, but lacks the growth rate of the European breeds.
SUMMARY OF MEAT TENDERNESS
Tenderness is economically important because it is the palatability trait that consumers rate highest when
evaluating meat quality. An accurate objective measure of tenderness is the Warner-Bratzler shear force. It is
measured in pounds of force required to slice through a core sample of cooked steak. Tables 1, 2, and 3 are
summaries of shear force values for sire breeds evaluated in Cycles V, VI, and VII of U.S. MARC's Germ
Plasm Evaluation program.
Table 1. Sire Breed Means for Shear Force of Rib Steaks From Steers in Cycle V of GPE.
Sire breed of steer    No. of steers    Warner-Bratzler shear force, lb
Hereford               106              10.6b
Angus                  101              8.9a
Average                207              9.7a
Brahman                119              13.2d
Boran                  138              11.3c
Tuli                   158              10.1b
Piedmontese            35               10.1b
Belgian Blue           143              10.7b
a,b,c,d Different superscripts denote significantly different values (P<.05).
As shown in Table 1, steaks from Brahman- and Boran-sired steers were significantly tougher than those from
other sire breeds, and Brahman-sired steaks were significantly tougher than Boran-sired steaks. Steaks from
Angus-sired steers were the most tender of all sire breeds.
Table 2. Sire Breed Means for Shear Force of Rib Steak From Steers in Cycle VI of GPE.
Sire breed of steer       No. of steers    Warner-Bratzler shear force, lb
Hereford                  86               8.38b
Angus                     88               7.87a
Wagyu                     125              7.82a
Norwegian Red             82               8.35b
Swedish Red and White     74               8.69b
Beef Friesian             132              8.70b
a,b Different superscripts denote significantly different values (P<.05).
Table 2 shows that steaks from Wagyu- and Angus-sired steers do not differ from one another, but are
significantly different from all other sire breeds.
Table 3. Sire Breed Means for Shear Force of Rib Steaks from Steers in Cycle VII of GPE.
Sire breed of steer    No. of steers    Warner-Bratzler shear force, lb
Hereford               86               9.1b
Angus                  83               8.9b
Red Angus              82               9.2b
Simmental              80               9.5a,b
Gelbvieh               81               10.0a
Limousin               73               9.5a,b
Charolais              85               9.6a,b
a,b Different superscripts denote significantly different values (P<.05).
As Table 3 shows, there are no statistically significant differences among breeds except for steaks from
Gelbvieh-sired steers, which were significantly different from those from Hereford-, Angus-, and Red Angus-sired steers.
References
Akins, Zane. 2006. National Pedigreed Livestock Council, The Villages, FL. 32159.
American Livestock Breeds Conservancy. 1993. Pittsboro, North Carolina. 27312, USA.
Briggs, Hilton M. 1969. Modern Breeds of Livestock, 3rd Edition. The Macmillan Publishing Company.
Briggs, Hilton M. 1980. Modern Breeds of Livestock, 4th Edition. The Macmillan Publishing Company.
Cundiff, L.V., F. Szabo, K.E. Gregory, R.M. Koch, M. E. Dikeman, and J.D. Crouse. 1993. Breed
Comparisons in the Germplasm Evaluation Program at MARC. Proc. Beef Improvement Federation
Meeting. May 26-29, Asheville, NC. pp. 124-136.
Cundiff, L.V., K. E. Gregory, T.L. Wheeler, S.D. Shackelford, M. Koohmaraie, H.C. Freetly, and D.D.
Lunstra. 2000. U.S. MARC Germplasm Evaluation Program Progress Report No. 19. ARS. USDA.
Cundiff, L.V., T.L. Wheeler, S.D. Shackelford, M. Koohmaraie, R.M. Thallman, K.E. Gregory, and L.D. Van
Vleck. 2001. U.S. MARC Germplasm Evaluation Program Progress Report No. 20. ARS. USDA.
Cundiff, L.V., T.L. Wheeler, K.E. Gregory, S.D. Shackelford, M. Koohmaraie, R.M. Thallman, G.D. Snowder,
and D. Van Vleck. 2004. U.S. MARC Germplasm Evaluation Program Progress Report No. 22. ARS.
USDA.
Cundiff, L.V. 2005. Performance of tropically adapted breeds in a temperate environment: calving, growth,
reproduction and maternal traits. Tropically Adapted Breeds, Regional Project 5-1013. pp. 123-135.
Felius, Marleen. 1985. Genus Bos: Cattle Breeds of the World. MSD-AC-VET, Rahway, N.J.
French, M.H. 1966. European Breeds of Cattle, Vol. I. Food and Agriculture Organization of the United Nations,
Rome.
French, M.H. 1966. European Breeds of Cattle, Vol. II. Food and Agriculture Organization of the United Nations,
Rome.
Hough, Bob. 2005. The History of Red Angus. Red Angus Assoc. of America, Denton, TX.
MacDonald, James, and James Sinclair. 1910. History of Aberdeen-Angus Cattle. Vinton & Company, Ltd.,
London.
Mason, I.L. 1960. A World Dictionary of Breeds, Types and Varieties of Livestock. Slough, England,
Commonwealth Agricultural Bureaux.
Purdy, H.R. and R.J. Dawes. 1987. Breeds of Cattle. Chanticleer Press, Inc., New York.
Ritchie, Harlan D. 2002. Historical Review of Cattle Type. Animal Science Staff Paper 390, Michigan State
University, East Lansing, MI.
Rouse, J.E. 1970. World Cattle, Vol. I. Univ of Oklahoma Press, Norman, OK.
Rouse, J.E. 1970. World Cattle, Vol. II. Univ of Oklahoma Press, Norman, OK.
Sanders, Alvin H. 1914. The Story of the Herefords. Sanders Publishing Company, Chicago, Ill.
Sanders, Alvin H. 1918. Shorthorn Cattle. Sanders Publishing Company, Chicago, Ill.
CJ Keist wrote:
> Running the bash command came back to the prompt with no errors.
>
> On 8/7/10 10:26 AM, Mark Sapiro wrote:
>> bash -c 'for p in ; do echo Huh? $p ; done'

That's the expected result. Per your immediately previous post, it seems the issue was some incompatibility with a specific (non-GNU) make. Instead of commenting the offending portion of the makefile, we could (expanding on Stephen's suggestion) make it

ifneq ($(strip $(PACKAGES)),)
	for p in $(PACKAGES); \
	do \
	    gunzip -c $(srcdir)/$$p.tar.gz | (cd $(PKGDIR) ; tar xf -); \
	    (cd $(PKGDIR)/$$p ; umask 02 ; PYTHONPATH=$(PYTHONLIBDIR) $(PYTHON) $(SETUPCMD)); \
	done
endif

however, the ifneq ... endif is a GNU make construct and that itself may not work in non-GNU makes.

--
Mark Sapiro <mark at msapiro.net>        The highway is for gamblers,
San Francisco Bay Area, California       better use your sense - B. Dylan
ship 0.3
Swiss Health Insurance Premiums.
Under Development
Currently, SHIP is under development, which is why the following instructions are meant for developers. Expect this README to grow in the future.
Installation
Create Project
mkdir ship && cd ship git clone git://github.com/seantis/ship.git .
Install SHIP
(Virtualenv or Virtualenvwrapper are highly recommended)
virtualenv -p python2.7 --no-site-packages . source bin/activate python setup.py develop
Test SHIP
python setup.py test
Usage
There’s an interactive example using IPython notebook in the “docs” folder. Read docs/example.txt for further instructions.
For now it is best to get a database running, grab a coffee and read the source.
To get a simple sqlite database running:
from ship import config config.connect('sqlite:///premiums.db') from ship import load load.all()
To understand the data read models/premium.py and db.py
Import latest data
The latest data for the Swiss health insurance premiums are not yet publicly available, but they will be soon. Currently, to get them one has to contact the Swiss government.
The data they release is a mixture of csv and xls files. To import them into ship one has to do the following:
Check if the data structure has changed.
Compare Doku_PraemienDaten.txt in the data release with ship/rawdata/doku_praemien_daten.txt. The field descriptions should match.
Copy the premiums.
Praemien_CH.csv and Praemien_EU.csv can be used without changes. Just copy them to the ship/rawdata folder, renaming them appropriately. E.g. if 2014 rename them as follows:
Praemien_CH.csv -> ship/rawdata/2014_ch.csv
Praemien_EU.csv -> ship/rawdata/2014_eu.csv
The first line (headers) may be omitted, though it should also work with the header line present.
Copy the insurers.
Open the Praemien_CH.xls file, select the “(G)” sheet, and copy the columns “G_ID” and “G_KBEZ” to the new 2014_insurers.csv file. Use semicolons as separator. When in doubt, check the insurers file of a previous year.
Copy the towns.
The towns and the regions they are in can be acquired through the following website:
From the B_NPA_2014 copy PLZ, Ortsbezeichnung, Kanton, BFS-Nr., Region and Gemeinde into a csv in the same format as the insurers in step three.
Note that the BFS-Nr. comes before the region. The column order must be as follows:
PLZ, Ortsbezeichnung, Kanton, BFS-Nr., Region, Gemeinde
Store this as ship/rawdata/2014_towns.csv
Adjust the test.
Add the newly added year to ship/tests/test_db.py and run python setup.py test. If there's a unicode error you should save the csv files using UTF-8 encoding.
License
This project is released under the GPL v3. See LICENSE.txt.
Changelog
0.3
- Re-release of 0.3rc2
0.3rc2
- Fixes data not being served when executing ‘map-run’
0.3rc1
- Includes insurance data for 2014
- Moves map example inside the module, including it on PyPI.
0.2
- Includes insurance data for 2013
0.1
- Includes insurance data for 2012
- Author: Denis Krienbühl
- License: LICENSE.txt
- Package Index Owner: seantis
- DOAP record: ship-0.3.xml
A number is given. We have to find two pairs that can represent the number as a sum of two cubes. So we have to find two pairs (a, b) and (c, d) such that the given number n can be expressed as n = a^3 + b^3 = c^3 + d^3
The idea is simple. Here each of the numbers a, b, c and d is less than n^(1/3). For every distinct pair (x, y) formed by numbers less than n^(1/3), if their sum (x^3 + y^3) is equal to the given number, we store the pair into a hash table with the sum value as key; then if the same sum comes again, we simply print both pairs.
getPairs(n):
begin
   cube_root := cube root of n
   define map with int type key and pair type value
   for i in range 1 to cube_root, do
      for j in range i + 1 to cube_root, do
         sum := i^3 + j^3
         if sum is not same as n, then skip next part, go to next iteration
         if sum is present in the map, then print the stored pair and (i, j)
         else insert (i, j) with corresponding sum into the map
      done
   done
end
#include <iostream>
#include <cmath>
#include <map>
using namespace std;

void getPairs(int n) {
   int cube_root = pow(n, 1.0 / 3.0);
   map<int, pair<int, int> > my_map;
   for (int i = 1; i < cube_root; i++) {
      for (int j = i + 1; j <= cube_root; j++) {
         int sum = i * i * i + j * j * j;
         if (sum != n)
            continue;
         if (my_map.find(sum) != my_map.end()) {
            cout << "(" << my_map[sum].first << ", " << my_map[sum].second
                 << ") and (" << i << ", " << j << ")" << endl;
         } else {
            my_map[sum] = make_pair(i, j);
         }
      }
   }
}

int main() {
   int n = 13832;
   getPairs(n);
   return 0;
}
(2, 24) and (18, 20)
A class layer to model a single cloud layer.
More...
#include <cloud.hxx>
List of all members.
This is the list of available cloud coverages/textures.
A class layer to model a single cloud layer.
Definition at line 50 of file cloud.hxx.
Constructor.
Definition at line 132 of file cloud.cxx.
get the transition/boundary layer depth in meters.
This allows gradual entry/exit from the cloud layer via adjusting visibility.
Definition at line 259 of file cloud.cxx.
repaint the cloud colors based on the specified fog_color
Definition at line 603 of file cloud.cxx.
reposition the cloud layer at the specified origin and orientation.
Definition at line 620 of file cloud.cxx.
[inline]
set the alpha component of the cloud base color.
Normally this should be 1.0, but you can set it anywhere in the range of 0.0 to 1.0 to fade a cloud layer in or out.
Definition at line 156 of file cloud.hxx.
set coverage type
Definition at line 277 of file cloud.cxx.
set the cloud movement direction
Definition at line 130 of file cloud.hxx.
set the layer elevation.
Note that this specifies the bottom of the cloud layer. The elevation of the top of the layer is elevation_m + thickness_m.
Definition at line 234 of file cloud.cxx.
set the cloud span
Definition at line 219 of file cloud.cxx.
set the cloud movement speed
Definition at line 142 of file cloud.hxx.
set the layer thickness.
Definition at line 253 of file cloud.cxx.
set the transition layer size in meters
Definition at line 265 of file cloud.cxx.
/*
 * Copyright (c) 1989, 1993
 *	The Regents of the University of California.  All rights reserved.
 */

#if defined(LIBC_SCCS) && !defined(lint)
static char sccsid[] = "@(#)crypt.c	8.1 (Berkeley) 6/4/93";
#endif /* LIBC_SCCS and not lint */

#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif
#include <limits.h>
#ifdef HAVE_PWD_H
#include <pwd.h>
#endif
#include <stdio.h>

#ifndef _PASSWORD_EFMT1
#define _PASSWORD_EFMT1 '_'
#endif

/*
 * UNIX password, and DES, encryption.
 * By Tom Truscott, trt@rti.rti.org,
 * from algorithms by Robert W. Baldwin and James Gillogly.
 *
 * References:
 * "Mathematical Cryptology for Computer Scientists and Mathematicians,"
 * by Wayne Patterson, 1987, ISBN 0-8476-7438-X.
 *
 * "Password Security: A Case History," R. Morris and Ken Thompson,
 * Communications of the ACM, vol. 22, pp. 594-597, Nov. 1979.
 *
 * "DES will be Totally Insecure within Ten Years," M.E. Hellman,
 * IEEE Spectrum, vol. 16, pp. 32-39, July 1979.
 */

/* =====  Configuration ==================== */

/*
 * define "MUST_ALIGN" if your compiler cannot load/store
 * long integers at arbitrary (e.g. odd) memory locations.
 * (Either that or never pass unaligned addresses to des_cipher!)
 */
#if !defined(vax)
#define MUST_ALIGN
#endif

#ifdef CHAR_BITS
#if CHAR_BITS != 8
#error C_block structure assumes 8 bit characters
#endif
#endif

/*
 * define "LONG_IS_32_BITS" only if sizeof(long)==4.
 * This avoids use of bit fields (your compiler may be sloppy with them).
 */
#if !defined(cray)
#define LONG_IS_32_BITS
#endif

/*
 * define "B64" to be the declaration for a 64 bit integer.
 * XXX this feature is currently unused, see "endian" comment below.
 */
#if defined(cray)
#define B64 long
#endif
#if defined(convex)
#define B64 long long
#endif

/*
 * define "LARGEDATA" to get faster permutations, by using about 72 kilobytes
 * of lookup tables.  This speeds up des_setkey() and des_cipher(), but has
 * little effect on crypt().
 */
#if defined(notdef)
#define LARGEDATA
#endif

int des_setkey(), des_cipher();

/* compile with "-DSTATIC=int" when profiling */
#ifndef STATIC
#define STATIC static
#endif
STATIC void init_des(), init_perm(), permute();
#ifdef DEBUG
STATIC void prtab();
#endif

/* ==================================== */

/*
 * Cipher-block representation (Bob Baldwin):
 *
 * DES operates on groups of 64 bits, numbered 1..64 (sigh).  One
 * representation is to store one bit per byte in an array of bytes.  Bit N of
 * the NBS spec is stored as the LSB of the Nth byte (index N-1) in the array.
 * Another representation stores the 64 bits in 8 bytes, with bits 1..8 in the
 * first byte, 9..16 in the second, and so on.  The DES spec apparently has
 * bit 1 in the MSB of the first byte, but that is particularly noxious so we
 * bit-reverse each byte so that bit 1 is the LSB of the first byte, bit 8 is
 * the MSB of the first byte.  Specifically, the 64-bit input data and key are
 * converted to LSB format, and the output 64-bit block is converted back into
 * MSB format.
 *
 * DES operates internally on groups of 32 bits which are expanded to 48 bits
 * by permutation E and shrunk back to 32 bits by the S boxes.  To speed up
 * the computation, the expansion is applied only once, the expanded
 * representation is maintained during the encryption, and a compression
 * permutation is applied only at the end.  To speed up the S-box lookups,
 * the 48 bits are maintained as eight 6 bit groups, one per byte, which
 * directly feed the eight S-boxes.  Within each byte, the 6 bits are the
 * most significant ones.  The low two bits of each byte are zero.  (Thus,
 * bit 1 of the 48 bit E expansion is stored as the "4"-valued bit of the
 * first byte in the eight byte representation, bit 2 of the 48 bit value is
 * the "8"-valued bit, and so on.)
 * In fact, a combined "SPE"-box lookup is
 * used, in which the output is the 64 bit result of an S-box lookup which
 * has been permuted by P and expanded by E, and is ready for use in the next
 * iteration.  Two 32-bit wide tables, SPE[0] and SPE[1], are used for this
 * lookup.  Since each byte in the 48 bit path is a multiple of four, indexed
 * lookup of SPE[0] and SPE[1] is simple and fast.  The key schedule and
 * "salt" are also converted to this 8*(6+2) format.  The SPE table size is
 * 8*64*8 = 4K bytes.
 *
 * To speed up bit-parallel operations (such as XOR), the 8 byte
 * representation is "union"ed with 32 bit values "i0" and "i1", and, on
 * machines which support it, a 64 bit value "b64".  This data structure,
 * "C_block", has two problems.  First, alignment restrictions must be
 * honored.  Second, the byte-order (e.g. little-endian or big-endian) of
 * the architecture becomes visible.
 *
 * The byte-order problem is unfortunate, since on the one hand it is good
 * to have a machine-independent C_block representation (bits 1..8 in the
 * first byte, etc.), and on the other hand it is good for the LSB of the
 * first byte to be the LSB of i0.  We cannot have both these things, so we
 * currently use the "little-endian" representation and avoid any multi-byte
 * operations that depend on byte order.  This largely precludes use of the
 * 64-bit datatype since the relative order of i0 and i1 are unknown.  It
 * also inhibits grouping the SPE table to look up 12 bits at a time.  (The
 * 12 bits can be stored in a 16-bit field with 3 low-order zeroes and 1
 * high-order zero, providing fast indexing into a 64-bit wide SPE.)  On the
 * other hand, 64-bit datatypes are currently rare, and a 12-bit SPE lookup
 * requires a 128 kilobyte table, so perhaps this is not a big loss.
 *
 * Permutation representation (Jim Gillogly):
 *
 * A transformation is defined by its effect on each of the 8 bytes of the
 * 64-bit input.
 * For each byte we give a 64-bit output that has the bits in
 * the input distributed appropriately.  The transformation is then the OR
 * of the 8 sets of 64-bits.  This uses 8*256*8 = 16K bytes of storage for
 * each transformation.  Unless LARGEDATA is defined, however, a more compact
 * table is used which looks up 16 4-bit "chunks" rather than 8 8-bit chunks.
 * The smaller table uses 16*16*8 = 2K bytes for each transformation.  This
 * is slower but tolerable, particularly for password encryption in which
 * the SPE transformation is iterated many times.  The small tables total 9K
 * bytes, the large tables total 72K bytes.
 *
 * The transformations used are:
 * IE3264: MSB->LSB conversion, initial permutation, and expansion.
 *	This is done by collecting the 32 even-numbered bits and applying
 *	a 32->64 bit transformation, and then collecting the 32 odd-numbered
 *	bits and applying the same transformation.  Since there are only
 *	32 input bits, the IE3264 transformation table is half the size of
 *	the usual table.
 * CF6464: Compression, final permutation, and LSB->MSB conversion.
 *	This is done by two trivial 48->32 bit compressions to obtain
 *	a 64-bit block (the bit numbering is given in the "CIFP" table)
 *	followed by a 64->64 bit "cleanup" transformation.  (It would
 *	be possible to group the bits in the 64-bit block so that 2
 *	identical 32->32 bit transformations could be used instead,
 *	saving a factor of 4 in space and possibly 2 in time, but
 *	byte-ordering and other complications rear their ugly head.
 *	Similar opportunities/problems arise in the key schedule
 *	transforms.)
 * PC1ROT: MSB->LSB, PC1 permutation, rotate, and PC2 permutation.
 *	This admittedly baroque 64->64 bit transformation is used to
 *	produce the first code (in 8*(6+2) format) of the key schedule.
 * PC2ROT[0]: Inverse PC2 permutation, rotate, and PC2 permutation.
 *	It would be possible to define 15 more transformations, each
 *	with a different rotation, to generate the entire key schedule.
* To save space, however, we instead permute each code into the * next by using a transformation that "undoes" the PC2 permutation, * rotates the code, and then applies PC2. Unfortunately, PC2 * transforms 56 bits into 48 bits, dropping 8 bits, so PC2 is not * invertible. We get around that problem by using a modified PC2 * which retains the 8 otherwise-lost bits in the unused low-order * bits of each byte. The low-order bits are cleared when the * codes are stored into the key schedule. * PC2ROT[1]: Same as PC2ROT[0], but with two rotations. * This is faster than applying PC2ROT[0] twice, * * The Bell Labs "salt" (Bob Baldwin): * * The salting is a simple permutation applied to the 48-bit result of E. * Specifically, if bit i (1 <= i <= 24) of the salt is set then bits i and * i+24 of the result are swapped. The salt is thus a 24 bit number, with * 16777216 possible values. (The original salt was 12 bits and could not * swap bits 13..24 with 36..48.) * * It is possible, but ugly, to warp the SPE table to account for the salt * permutation. Fortunately, the conditional bit swapping requires only * about four machine instructions and can be done on-the-fly with about an * 8% performance penalty. */ typedef union { unsigned char b[8]; struct { #if defined(LONG_IS_32_BITS) /* long is often faster than a 32-bit bit field */ long i0; long i1; #else long i0: 32; long i1: 32; #endif } b32; #if defined(B64) B64 b64; #endif } C_block; /* * Convert twenty-four-bit long in host-order * to six bits (and 2 low-order zeroes) per char little-endian format. */ #define TO_SIX_BIT(rslt, src) { \ C_block cvt; \ cvt.b[0] = src; src >>= 6; \ cvt.b[1] = src; src >>= 6; \ cvt.b[2] = src; src >>= 6; \ cvt.b[3] = src; \ rslt = (cvt.b32.i0 & 0x3f3f3f3fL) << 2; \ } /* * These macros may someday permit efficient use of 64-bit integers. 
 */

#define ZERO(d,d0,d1)			d0 = 0, d1 = 0
#define LOAD(d,d0,d1,bl)		d0 = (bl).b32.i0, d1 = (bl).b32.i1
#define LOADREG(d,d0,d1,s,s0,s1)	d0 = s0, d1 = s1
#define OR(d,d0,d1,bl)			d0 |= (bl).b32.i0, d1 |= (bl).b32.i1
#define STORE(s,s0,s1,bl)		(bl).b32.i0 = s0, (bl).b32.i1 = s1
#define DCL_BLOCK(d,d0,d1)		long d0, d1

#if defined(LARGEDATA)
	/* Waste memory like crazy.  Also, do permutations in line */
#define LGCHUNKBITS	3
#define CHUNKBITS	(1<<LGCHUNKBITS)
#define PERM6464(d,d0,d1,cpp,p)				\
	LOAD(d,d0,d1,(p)[(0<<CHUNKBITS)+(cpp)[0]]);	\
	OR (d,d0,d1,(p)[(1<<CHUNKBITS)+(cpp)[1]]);	\
	OR (d,d0,d1,(p)[(2<<CHUNKBITS)+(cpp)[2]]);	\
	OR (d,d0,d1,(p)[(3<<CHUNKBITS)+(cpp)[3]]);	\
	OR (d,d0,d1,(p)[(4<<CHUNKBITS)+(cpp)[4]]);	\
	OR (d,d0,d1,(p)[(5<<CHUNKBITS)+(cpp)[5]]);	\
	OR (d,d0,d1,(p)[(6<<CHUNKBITS)+(cpp)[6]]);	\
	OR (d,d0,d1,(p)[(7<<CHUNKBITS)+(cpp)[7]]);
#define PERM3264(d,d0,d1,cpp,p)				\
	LOAD(d,d0,d1,(p)[(0<<CHUNKBITS)+(cpp)[0]]);	\
	OR (d,d0,d1,(p)[(1<<CHUNKBITS)+(cpp)[1]]);	\
	OR (d,d0,d1,(p)[(2<<CHUNKBITS)+(cpp)[2]]);	\
	OR (d,d0,d1,(p)[(3<<CHUNKBITS)+(cpp)[3]]);
#else	/* "small data" */
#define LGCHUNKBITS	2
#define CHUNKBITS	(1<<LGCHUNKBITS)
#define PERM6464(d,d0,d1,cpp,p)	\
	{ C_block tblk; permute(cpp,&tblk,p,8); LOAD (d,d0,d1,tblk); }
#define PERM3264(d,d0,d1,cpp,p)	\
	{ C_block tblk; permute(cpp,&tblk,p,4); LOAD (d,d0,d1,tblk); }

STATIC void
permute(cp, out, p, chars_in)
	unsigned char *cp;
	C_block *out;
	register C_block *p;
	int chars_in;
{
	register DCL_BLOCK(D,D0,D1);
	register C_block *tp;
	register int t;

	ZERO(D,D0,D1);
	do {
		t = *cp++;
		tp = &p[t&0xf];
		OR(D,D0,D1,*tp); p += (1<<CHUNKBITS);
		tp = &p[t>>4];
		OR(D,D0,D1,*tp); p += (1<<CHUNKBITS);
	} while (--chars_in > 0);
	STORE(D,D0,D1,*out);
}
#endif /* LARGEDATA */

/* =====  (mostly) Standard DES Tables ==================== */

static unsigned char IP[] = {		/* initial permutation */
	58, 50, 42, 34, 26, 18, 10,  2,
	60, 52, 44, 36, 28, 20, 12,  4,
	62, 54, 46, 38, 30, 22, 14,  6,
	64, 56, 48, 40, 32, 24, 16,  8,
	57, 49, 41, 33, 25, 17,  9,  1,
	59, 51, 43, 35, 27, 19, 11,  3,
	61, 53, 45, 37, 29, 21, 13,  5,
	63, 55, 47, 39, 31, 23, 15,  7,
};

/* The final permutation is the inverse of IP - no table is necessary */

static unsigned char ExpandTr[] = {	/* expansion operation */
	32,  1,  2,  3,  4,  5,
	 4,  5,  6,  7,  8,  9,
	 8,  9, 10, 11, 12, 13,
	12, 13, 14, 15, 16, 17,
	16, 17, 18, 19, 20, 21,
	20, 21, 22, 23, 24, 25,
	24, 25, 26, 27, 28, 29,
	28, 29, 30, 31, 32,  1,
};

static unsigned char PC1[] = {		/* permuted choice table 1 */
	57, 49, 41, 33, 25, 17,  9,
	 1, 58, 50, 42, 34, 26, 18,
	10,  2, 59, 51, 43, 35, 27,
	19, 11,  3, 60, 52, 44, 36,
	63, 55, 47, 39, 31, 23, 15,
	 7, 62, 54, 46, 38, 30, 22,
	14,  6, 61, 53, 45, 37, 29,
	21, 13,  5, 28, 20, 12,  4,
};

static unsigned char Rotates[] = {	/* PC1 rotation schedule */
	1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1,
};

/* note: each "row" of PC2 is left-padded with bits
   that make it invertible */
static unsigned char PC2[] = {		/* permuted choice table 2 */
	 9, 18, 14, 17, 11, 24,  1,  5,
	22, 25,  3, 28, 15,  6, 21, 10,
	35, 38, 23, 19, 12,  4, 26,  8,
	43, 54, 16,  7, 27, 20, 13,  2,

	 0,  0, 41, 52, 31, 37, 47, 55,
	 0,  0, 30, 40, 51, 45, 33, 48,
	 0,  0, 44, 49, 39, 56, 34, 53,
	 0,  0, 46, 42, 50, 36, 29, 32,
};

static unsigned char S[8][64] = {	/* 48->32 bit substitution tables */
	{ /* S[1] */ },
	{ /* S[2] */ },
	{ /* S[3] */ },
	{ /* S[4] */ },
	{ /* S[5] */ },
	{ /* S[6] */ },
	{ /* S[7] */ },
	{ /* S[8] */ },
};

static unsigned char P32Tr[] = {	/* 32-bit permutation function */
	16,  7, 20, 21,
	29, 12, 28, 17,
	 1, 15, 23, 26,
	 5, 18, 31, 10,
	 2,  8, 24, 14,
	32, 27,  3,  9,
	19, 13, 30,  6,
	22, 11,  4, 25,
};

static unsigned char CIFP[] = {		/* compressed/interleaved permutation */
	 1,  2,  3,  4,  17, 18, 19, 20,
	 5,  6,  7,  8,  21, 22, 23, 24,
	 9, 10, 11, 12,  25, 26, 27, 28,
	13, 14, 15, 16,  29, 30, 31, 32,

	33, 34, 35, 36,  49, 50, 51, 52,
	37, 38, 39, 40,  53, 54, 55, 56,
	41, 42, 43, 44,  57, 58, 59, 60,
	45, 46, 47, 48,  61, 62, 63, 64,
};

static unsigned char itoa64[] =		/* 0..63 => ascii-64 */
	"./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";


/* =====  Tables that are initialized at run time  ==================== */

static unsigned char a64toi[128];	/* ascii-64 => 0..63 */

/* Initial key schedule permutation */
static C_block	PC1ROT[64/CHUNKBITS][1<<CHUNKBITS];

/* Subsequent key schedule rotation permutations */
static C_block	PC2ROT[2][64/CHUNKBITS][1<<CHUNKBITS];

/* Initial permutation/expansion table */
static C_block	IE3264[32/CHUNKBITS][1<<CHUNKBITS];

/* Table that combines the S, P, and E operations. 
*/ static long SPE[2][8][64]; /* compressed/interleaved => final permutation table */ static C_block CF6464[64/CHUNKBITS][1<<CHUNKBITS]; /* ==================================== */ static C_block constdatablock; /* encryption constant */ static char cryptresult[1+4+4+11+1]; /* encrypted result */ /* * Return a pointer to static data consisting of the "setting" * followed by an encryption produced by the "key" and "setting". */ char * crypt(key, setting) register const char *key; register const char *setting; { register char *encp; register long i; register int t; long salt; int num_iter, salt_size; C_block keyblock, rsltblock; for (i = 0; i < 8; i++) { if ((t = 2*(unsigned char)(*key)) != 0) key++; keyblock.b[i] = t; } if (des_setkey((char *)keyblock.b)) /* also initializes "a64toi" */ return (NULL); encp = &cryptresult[0]; switch (*setting) { case _PASSWORD_EFMT1: /* * Involve the rest of the password 8 characters at a time. */ while (*key) { if (des_cipher((char *)&keyblock, (char *)&keyblock, 0L, 1)) return (NULL); for (i = 0; i < 8; i++) { if ((t = 2*(unsigned char)(*key)) != 0) key++; keyblock.b[i] ^= t; } if (des_setkey((char *)keyblock.b)) return (NULL); } *encp++ = *setting++; /* get iteration count */ num_iter = 0; for (i = 4; --i >= 0; ) { if ((t = (unsigned char)setting[i]) == '\0') t = '.'; encp[i] = t; num_iter = (num_iter<<6) | a64toi[t]; } setting += 4; encp += 4; salt_size = 4; break; default: num_iter = 25; salt_size = 2; } salt = 0; for (i = salt_size; --i >= 0; ) { if ((t = (unsigned char)setting[i]) == '\0') t = '.'; encp[i] = t; salt = (salt<<6) | a64toi[t]; } encp += salt_size; if (des_cipher((char *)&constdatablock, (char *)&rsltblock, salt, num_iter)) return (NULL); /* * Encode the 64 cipher bits as 11 ascii characters. 
 */
	i = ((long)((rsltblock.b[0]<<8) | rsltblock.b[1])<<8) | rsltblock.b[2];
	encp[3] = itoa64[i&0x3f];	i >>= 6;
	encp[2] = itoa64[i&0x3f];	i >>= 6;
	encp[1] = itoa64[i&0x3f];	i >>= 6;
	encp[0] = itoa64[i];		encp += 4;
	i = ((long)((rsltblock.b[3]<<8) | rsltblock.b[4])<<8) | rsltblock.b[5];
	encp[3] = itoa64[i&0x3f];	i >>= 6;
	encp[2] = itoa64[i&0x3f];	i >>= 6;
	encp[1] = itoa64[i&0x3f];	i >>= 6;
	encp[0] = itoa64[i];		encp += 4;
	i = ((long)((rsltblock.b[6])<<8) | rsltblock.b[7])<<2;
	encp[2] = itoa64[i&0x3f];	i >>= 6;
	encp[1] = itoa64[i&0x3f];	i >>= 6;
	encp[0] = itoa64[i];

	encp[3] = 0;

	return (cryptresult);
}

/*
 * The Key Schedule, filled in by des_setkey() or setkey().
 */
#define KS_SIZE	16
static C_block	KS[KS_SIZE];

/*
 * Set up the key schedule from the key.
 */
int
des_setkey(key)
	register const char *key;
{
	register DCL_BLOCK(K, K0, K1);
	register C_block *ptabp;
	register int i;
	static int des_ready = 0;

	if (!des_ready) {
		init_des();
		des_ready = 1;
	}

	PERM6464(K,K0,K1,(unsigned char *)key,(C_block *)PC1ROT);
	key = (char *)&KS[0];
	STORE(K&~0x03030303L, K0&~0x03030303L, K1&~0x03030303L, *(C_block *)key);
	for (i = 1; i < 16; i++) {
		key += sizeof(C_block);
		STORE(K,K0,K1,*(C_block *)key);
		ptabp = (C_block *)PC2ROT[Rotates[i]-1];
		PERM6464(K,K0,K1,(unsigned char *)key,ptabp);
		STORE(K&~0x03030303L, K0&~0x03030303L, K1&~0x03030303L, *(C_block *)key);
	}
	return (0);
}

/*
 * Encrypt (or decrypt if num_iter < 0) the 8 chars at "in" with abs(num_iter)
 * iterations of DES, using the given 24-bit salt and the pre-computed key
 * schedule, and store the resulting 8 chars at "out" (in == out is permitted).
 *
 * NOTE: the performance of this routine is critically dependent on your
 * compiler and machine architecture.
 */
int
des_cipher(in, out, salt, num_iter)
	const char *in;
	char *out;
	long salt;
	int num_iter;
{
	/* variables that we want in registers, most important first */
#if defined(pdp11)
	register int j;
#endif
	register long L0, L1, R0, R1, k;
	register C_block *kp;
	register int ks_inc, loop_count;
	C_block B;

	L0 = salt;
	TO_SIX_BIT(salt, L0);	/* convert to 4*(6+2) format */

#if defined(vax) || defined(pdp11)
	salt = ~salt;	/* "x &~ y" is faster than "x & y". 
*/ #define SALT (~salt) #else #define SALT salt #endif #if defined(MUST_ALIGN) B.b[0] = in[0]; B.b[1] = in[1]; B.b[2] = in[2]; B.b[3] = in[3]; B.b[4] = in[4]; B.b[5] = in[5]; B.b[6] = in[6]; B.b[7] = in[7]; LOAD(L,L0,L1,B); #else LOAD(L,L0,L1,*(C_block *)in); #endif LOADREG(R,R0,R1,L,L0,L1); L0 &= 0x55555555L; L1 &= 0x55555555L; L0 = (L0 << 1) | L1; /* L0 is the even-numbered input bits */ R0 &= 0xaaaaaaaaL; R1 = (R1 >> 1) & 0x55555555L; L1 = R0 | R1; /* L1 is the odd-numbered input bits */ STORE(L,L0,L1,B); PERM3264(L,L0,L1,B.b, (C_block *)IE3264); /* even bits */ PERM3264(R,R0,R1,B.b+4,(C_block *)IE3264); /* odd bits */ if (num_iter >= 0) { /* encryption */ kp = &KS[0]; ks_inc = sizeof(*kp); } else { /* decryption */ num_iter = -num_iter; kp = &KS[KS_SIZE-1]; ks_inc = -sizeof(*kp); } while (--num_iter >= 0) { loop_count = 8; do { #define SPTAB(t, i) (*(long *)((unsigned char *)t + i*(sizeof(long)/4))) #if defined(gould) /* use this if B.b[i] is evaluated just once ... */ #define DOXOR(x,y,i) x^=SPTAB(SPE[0][i],B.b[i]); y^=SPTAB(SPE[1][i],B.b[i]); #else #if defined(pdp11) /* use this if your "long" int indexing is slow */ #define DOXOR(x,y,i) j=B.b[i]; x^=SPTAB(SPE[0][i],j); y^=SPTAB(SPE[1][i],j); #else /* use this if "k" is allocated to a register ... 
*/ #define DOXOR(x,y,i) k=B.b[i]; x^=SPTAB(SPE[0][i],k); y^=SPTAB(SPE[1][i],k); #endif #endif #define CRUNCH(p0, p1, q0, q1) \ k = (q0 ^ q1) & SALT; \ B.b32.i0 = k ^ q0 ^ kp->b32.i0; \ B.b32.i1 = k ^ q1 ^ kp->b32.i1; \ kp = (C_block *)((char *)kp+ks_inc); \ \ DOXOR(p0, p1, 0); \ DOXOR(p0, p1, 1); \ DOXOR(p0, p1, 2); \ DOXOR(p0, p1, 3); \ DOXOR(p0, p1, 4); \ DOXOR(p0, p1, 5); \ DOXOR(p0, p1, 6); \ DOXOR(p0, p1, 7); CRUNCH(L0, L1, R0, R1); CRUNCH(R0, R1, L0, L1); } while (--loop_count != 0); kp = (C_block *)((char *)kp-(ks_inc*KS_SIZE)); /* swap L and R */ L0 ^= R0; L1 ^= R1; R0 ^= L0; R1 ^= L1; L0 ^= R0; L1 ^= R1; } /* store the encrypted (or decrypted) result */ L0 = ((L0 >> 3) & 0x0f0f0f0fL) | ((L1 << 1) & 0xf0f0f0f0L); L1 = ((R0 >> 3) & 0x0f0f0f0fL) | ((R1 << 1) & 0xf0f0f0f0L); STORE(L,L0,L1,B); PERM6464(L,L0,L1,B.b, (C_block *)CF6464); #if defined(MUST_ALIGN) STORE(L,L0,L1,B); out[0] = B.b[0]; out[1] = B.b[1]; out[2] = B.b[2]; out[3] = B.b[3]; out[4] = B.b[4]; out[5] = B.b[5]; out[6] = B.b[6]; out[7] = B.b[7]; #else STORE(L,L0,L1,*(C_block *)out); #endif return (0); } /* * Initialize various tables. This need only be done once. It could even be * done at compile time, if the compiler were capable of that sort of thing. */ STATIC void init_des() { register int i, j; register long k; register int tableno; static unsigned char perm[64], tmp32[32]; /* "static" for speed */ /* * table that converts chars "./0-9A-Za-z"to integers 0-63. */ for (i = 0; i < 64; i++) a64toi[itoa64[i]] = i; /* * PC1ROT - bit reverse, then PC1, then Rotate, then PC2. */ for (i = 0; i < 64; i++) perm[i] = 0; for (i = 0; i < 64; i++) { if ((k = PC2[i]) == 0) continue; k += Rotates[0]-1; if ((k%28) < Rotates[0]) k -= 28; k = PC1[k]; if (k > 0) { k--; k = (k|07) - (k&07); k++; } perm[i] = k; } #ifdef DEBUG prtab("pc1tab", perm, 8); #endif init_perm(PC1ROT, perm, 8, 8); /* * PC2ROT - PC2 inverse, then Rotate (once or twice), then PC2. 
*/ for (j = 0; j < 2; j++) { unsigned char pc2inv[64]; for (i = 0; i < 64; i++) perm[i] = pc2inv[i] = 0; for (i = 0; i < 64; i++) { if ((k = PC2[i]) == 0) continue; pc2inv[k-1] = i+1; } for (i = 0; i < 64; i++) { if ((k = PC2[i]) == 0) continue; k += j; if ((k%28) <= j) k -= 28; perm[i] = pc2inv[k]; } #ifdef DEBUG prtab("pc2tab", perm, 8); #endif init_perm(PC2ROT[j], perm, 8, 8); } /* * Bit reverse, then initial permutation, then expansion. */ for (i = 0; i < 8; i++) { for (j = 0; j < 8; j++) { k = (j < 2)? 0: IP[ExpandTr[i*6+j-2]-1]; if (k > 32) k -= 32; else if (k > 0) k--; if (k > 0) { k--; k = (k|07) - (k&07); k++; } perm[i*8+j] = k; } } #ifdef DEBUG prtab("ietab", perm, 8); #endif init_perm(IE3264, perm, 4, 8); /* * Compression, then final permutation, then bit reverse. */ for (i = 0; i < 64; i++) { k = IP[CIFP[i]-1]; if (k > 0) { k--; k = (k|07) - (k&07); k++; } perm[k-1] = i+1; } #ifdef DEBUG prtab("cftab", perm, 8); #endif init_perm(CF6464, perm, 8, 8); /* * SPE table */ for (i = 0; i < 48; i++) perm[i] = P32Tr[ExpandTr[i]-1]; for (tableno = 0; tableno < 8; tableno++) { for (j = 0; j < 64; j++) { k = (((j >> 0) &01) << 5)| (((j >> 1) &01) << 3)| (((j >> 2) &01) << 2)| (((j >> 3) &01) << 1)| (((j >> 4) &01) << 0)| (((j >> 5) &01) << 4); k = S[tableno][k]; k = (((k >> 3)&01) << 0)| (((k >> 2)&01) << 1)| (((k >> 1)&01) << 2)| (((k >> 0)&01) << 3); for (i = 0; i < 32; i++) tmp32[i] = 0; for (i = 0; i < 4; i++) tmp32[4 * tableno + i] = (k >> i) & 01; k = 0; for (i = 24; --i >= 0; ) k = (k<<1) | tmp32[perm[i]-1]; TO_SIX_BIT(SPE[0][tableno][j], k); k = 0; for (i = 24; --i >= 0; ) k = (k<<1) | tmp32[perm[i+24]-1]; TO_SIX_BIT(SPE[1][tableno][j], k); } } } /* * Initialize "perm" to represent transformation "p", which rearranges * (perhaps with expansion and/or contraction) one packed array of bits * (of size "chars_in" characters) into another array (of size "chars_out" * characters). * * "perm" must be all-zeroes on entry to this routine. 
*/ STATIC void init_perm(perm, p, chars_in, chars_out) C_block perm[64/CHUNKBITS][1<<CHUNKBITS]; unsigned char p[64]; int chars_in, chars_out; { register int i, j, k, l; for (k = 0; k < chars_out*8; k++) { /* each output bit position */ l = p[k] - 1; /* where this bit comes from */ if (l < 0) continue; /* output bit is always 0 */ i = l>>LGCHUNKBITS; /* which chunk this bit comes from */ l = 1<<(l&(CHUNKBITS-1)); /* mask for this bit */ for (j = 0; j < (1<<CHUNKBITS); j++) { /* each chunk value */ if ((j & l) != 0) perm[i][j].b[k>>3] |= 1<<(k&07); } } } /* * "setkey" routine (for backwards compatibility) */ int setkey(key) register const char *key; { register int i, j, k; C_block keyblock; for (i = 0; i < 8; i++) { k = 0; for (j = 0; j < 8; j++) { k <<= 1; k |= (unsigned char)*key++; } keyblock.b[i] = k; } return (des_setkey((char *)keyblock.b)); } /* * "encrypt" routine (for backwards compatibility) */ int encrypt(block, flag) register char *block; int flag; { register int i, j, k; C_block cblock; for (i = 0; i < 8; i++) { k = 0; for (j = 0; j < 8; j++) { k <<= 1; k |= (unsigned char)*block++; } cblock.b[i] = k; } if (des_cipher((char *)&cblock, (char *)&cblock, 0L, (flag ? -1: 1))) return (1); for (i = 7; i >= 0; i--) { k = cblock.b[i]; for (j = 7; j >= 0; j--) { *--block = k&01; k >>= 1; } } return (0); } #ifdef DEBUG STATIC void prtab(s, t, num_rows) char *s; unsigned char *t; int num_rows; { register int i, j; (void)printf("%s:\n", s); for (i = 0; i < num_rows; i++) { for (j = 0; j < 8; j++) { (void)printf("%3d", t[i*8+j]); } (void)printf("\n"); } (void)printf("\n"); } #endif
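For readers tracing the long comment above, here is a small Python sketch (not part of crypt.c itself) of two of the bit-level conventions it describes: the TO_SIX_BIT packing into the "6+2" byte layout, and the Bell Labs salt permutation applied to the 48-bit E output. Bit numbering here is 0-based, whereas the comment numbers salt bits 1..24.

```python
def to_six_bit(x24: int) -> bytes:
    """Model of the TO_SIX_BIT macro: pack a 24-bit value into four bytes,
    each holding 6 payload bits shifted left by 2 (the "6+2" layout)."""
    return bytes((((x24 >> (6 * i)) & 0x3F) << 2) for i in range(4))

def apply_salt(e48: int, salt: int) -> int:
    """The Bell Labs salt: for each set bit i (0-based) of the 24-bit salt,
    swap bits i and i+24 of the 48-bit E output."""
    for i in range(24):
        if (salt >> i) & 1:
            lo = (e48 >> i) & 1
            hi = (e48 >> (i + 24)) & 1
            if lo != hi:
                # flipping both bits exchanges them
                e48 ^= (1 << i) | (1 << (i + 24))
    return e48
```

Note that apply_salt is an involution: applying it twice with the same salt restores the input, so the same conditional swap serves both encryption and decryption, which is why it can be done on-the-fly in des_cipher's inner loop.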
|
http://opensource.apple.com/source/ruby/ruby-75.2/ruby/missing/crypt.c
|
CC-MAIN-2015-48
|
refinedweb
| 4,011
| 68.6
|
The XML Style Sheet Translation (XSLT) APIs
The XSLT Packages
The XSLT APIs are defined in the following packages
XS
http://www.roseindia.net/discussion/18517-The-XML-Style-Sheet-Translation-(XSLT)-APIs.html
Writing Code to Create Logs
Since generating logs isn't automatic in LightSwitch, you need to write some code. This isn't a big effort, but it does require a few code snippets. Because you would otherwise repeat the same code in many places, one easy option is to create a shared method that can be called from anywhere in the application. In Solution Explorer, enable the File View. When LightSwitch shows the list of projects that are part of the solution, right-click the project called Common and then select Add > Class. Call the new class LogHelper. Listing 1 shows the code for the new class.
Listing 1: Implementing a shared method for generating new logs.
Public Class LogHelper

    'Create a new log
    Public Shared Sub CreateLog(taskType As String, Optional ByVal notes As String = Nothing)
        'Create a new data workspace which ensures isolation from user data
        Dim ws = LightSwitchApplication.Application.Current.CreateDataWorkspace
        'Add a new instance of the Log entity
        Dim log = ws.ApplicationData.Logs.AddNew
        log.TaskType = taskType
        log.TimeStampe = Date.Now
        'The application class must be reached including the root namespace
        log.User = LightSwitchApplication.Application.Current.User.FullName
        If notes IsNot Nothing Then log.Notes = notes
        'Save the log
        ws.ApplicationData.SaveChanges()
        'Release resources
        ws.Dispose()
    End Sub

End Class
The CreateLog method receives two arguments:
- The taskType argument is supplied by the calling code and specifies the type of activity that must be recorded.
- The notes argument is optional; it's supplied by the calling code and provides additional information.
One very important thing to consider here is that the code creates a new data workspace in the first line of the method body. Every data workspace is a unit of isolation between data. Therefore, if you write logs using the active workspace, when you save changes for your logs you would also save user changes over data. By creating a new workspace instead, you ensure that only the data in that workspace is saved, so saving logs won't interfere with data on which the user is working.
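The idea behind the separate data workspace — writing audit records through their own unit of work so that committing a log never also commits the user's half-finished edits — is not LightSwitch-specific, and can be sketched in a framework-neutral way. Everything below (the UnitOfWork class, create_log, and the list standing in for the database) is invented for illustration and is not LightSwitch API:

```python
class UnitOfWork:
    """A toy unit of work: changes stay pending until save() is called."""
    def __init__(self, store):
        self.store = store      # shared backing store (stands in for the database)
        self.pending = []       # changes local to this workspace only

    def add(self, record):
        self.pending.append(record)

    def save(self):
        # Commits only this workspace's pending changes, nothing else.
        self.store.extend(self.pending)
        self.pending = []

database = []

def create_log(task_type, notes=None, user="system"):
    # A fresh unit of work isolates the log write from any other pending edits.
    ws = UnitOfWork(database)
    ws.add({"task": task_type, "notes": notes, "user": user})
    ws.save()

# The user's own workspace, holding unsaved edits, is unaffected by the log save:
user_ws = UnitOfWork(database)
user_ws.add({"contact": "Doe"})          # still pending, not committed
create_log("Inserted a contact", "Contact ID=Doe")
```

After the call, the backing store contains only the log record; the user's edit remains pending in its own workspace, exactly the isolation the paragraph above describes.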
In Solution Explorer, revert to the Logical View; then double-click the Contacts table. What you'll do now is call the CreateLog method every time the user inserts, updates, or deletes a contact. In the Table Designer, select the Contacts_Inserted method from the Write Code drop-down in the upper-right corner. This will open the code editor, pointing to the ApplicationDataService code file. Delete the empty method stub and write the code shown in Listing 2.
Listing 2: Generating logs for user actions.
Public Class ApplicationDataService

    Private Sub Contacts_Inserted(entity As Contact)
        LogHelper.CreateLog("Inserted a contact", "Contact ID=" + entity.LastName)
    End Sub

    Private Sub Contacts_Updated(entity As Contact)
        LogHelper.CreateLog("Updated a contact", "Contact ID=" + entity.LastName)
    End Sub

    Private Sub Contacts_Deleted(entity As Contact)
        LogHelper.CreateLog("Deleted a contact", "Contact ID=" + entity.LastName)
    End Sub

End Class
There are other places where you can write logs (such as Inserting, Deleting, and Updating), but this is enough to show how the mechanism works.
Next, imagine that you want to create a log every time the user opens or closes a screen. Open the Screen Designer for the CreateNewContact screen and then click the Write Code button. Listing 3 shows how to generate a log every time the user opens or closes the screen.
Listing 3: Recording screen opening and closing.
Private Sub CreateNewContact_Created()
    LogHelper.CreateLog("Opened a screen", "Data Entry")
End Sub

Private Sub CreateNewContact_Closing(ByRef cancel As Boolean)
    LogHelper.CreateLog("Closed a screen", "Data Entry")
End Sub
In this particular case, the second argument of the CreateLog method is being populated with a value that specifies the type of screen, but you're totally free to replace it with something different.
Another common situation is recording the moment in which the user starts the application. This must be done in the so-called "application code," but it requires the Access Control feature to be enabled. (In other words, anonymous authentication doesn't fire the appropriate events.) To reach the Application's class code file, follow these steps:
- In Solution Explorer, right-click Screens.
- Select Edit Screen Navigation.
- In the designer, click the hyperlink called "Click here to view application code" (see Figure 4).
There's a method hook called Application_LoggedIn that you can handle as shown in Listing 4.
Listing 4: Recording logins.
Public Class Application

    Private Sub Application_LoggedIn()
        LogHelper.CreateLog("Logon")
    End Sub

End Class
By using these techniques, you can track anything in your applications and provide all the details that you need for further analysis.
http://www.informit.com/articles/article.aspx?p=1846585&seqNum=3
I have a method
def index
  if request.xhr?
    @users = User.find( :all )
  end
end
And then I have a test for it:
def test_index
  user = users( :foo )
  xhr :post, :index, {}, { :user_id => user.user_id }
  assert_response :success
  assert_template 'users/index'
end
The test works fine. If I put a puts in there it fires, so I know the
test is hitting the block in question.
So then when I run rcov, it produces a report that says the code block
has not yet been covered with a test.
I also tried xhr :get in my test but got the same result.
Anyone know anything about this? Is there a workaround, or another way?
Thanks,
–
Greg D.
https://www.ruby-forum.com/t/rcov-not-detecting-request-xhr-test/160815
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.