How to generate a PDF from an Ext JS app
Often I get the question whether it's possible to generate a PDF from a Sencha app. Yes, that's possible, but not with Ext JS code alone. Ext JS does have an exporter to export grid/pivot data into an XML or Excel file, and you can export charts to images; out of the box, however, the framework can't generate PDFs of full Sencha apps.
What you will need is an additional script. Often these solutions are handled on the backend, but there are also solutions which can do this client-side.
To name a few, see the list below. I didn't use them all, but I've done it before on the client-side with JavaScript, with PHP, and back in the day also once in Java.
- You can generate PDFs with Node and PhantomJS
- You can view PDFs with JavaScript on the client-side with Mozilla PDF.js or generate with JSPDF
- You can generate PDFs with PHP. For example with TCPDF, HTML2PDF, DOMPDF, Zend PDF or FPDF
- PDF generation for Java. For example: PDFBox, PDFJet, JPod or PDF Clown
- PDF generation for .NET. For example: PDF Clown, PDFJet, Aspose or Foxit
- PDF generation for Perl. For example: PDF::API2, PDF::Create or PDF::Template
Nowadays I love Node.js! So I'll show you how I recently created a generator with JavaScript for Node.js and PhantomJS. PhantomJS? PhantomJS is a headless WebKit browser originally written by (ex-Sencha employee) Ariya Hidayat. You will need to install PhantomJS in your environment, but once installed you can run PhantomJS from your command line. Why is this so powerful? Well, you can use it for:
- Headless website testing
Run functional tests with frameworks such as Jasmine, QUnit, Mocha, Capybara, WebDriver, and many others.
- Page automation & screen scraping
Access and manipulate webpages with the standard DOM API, or with the usual libraries like jQuery.
- Network monitoring
Monitor page loading and export as standard HAR files. Automate performance analysis using YSlow and Jenkins.
- Screen capturing
Programmatically capture web contents, including SVG and Canvas. Create website screenshots with thumbnail previews, etc.
The last use case is what I'm using it for: letting PhantomJS visit my Ext JS app and capture the screen by generating it to a PDF. You can run a working example via this URL: (I'm creating a PDF of this simple Ext app.)
Nice to know: Sencha uses PhantomJS inside Sencha Cmd. For example, we use it to generate images from CSS3 features that aren't supported in legacy browsers, and recently we compile Sass stylesheets with JavaScript to production-ready CSS (Fashion).

How did I do this?
Let's say you have an environment with Node.js and a web server with Express installed. How can you make a PDF from an Ext JS app? I'm not a Node/PhantomJS expert, but I can show you some simple steps which you can follow too!
You will need to create a route that listens on a URL that should invoke PhantomJS. For example:
var PdfHelp = require('./libs/pdfgenerator-help');

router.get('/pdfgenerator/', function(req, res) {
    var pdf = new PdfHelp();
    pdf.generatePdf(req, res);
});
On your environment (dev and production) you will need to install PhantomJS. You can install it via the npm package manager:
npm install phantomjs -s
Once it's installed, you can run PhantomJS JavaScript scripts from the command line:

phantomjs scriptname.js
I created a simple helper script which listens for an argument that passes in the URL of my Sencha app. This probably doesn't make much sense for your own app, but you will get the idea of how to do this.
I use a Node child process to execute PhantomJS from my environment. It passes in two arguments: the PhantomJS script to execute (generate.js - see below) and, in my case, the URL to the Sencha app.
You can find my code here
Here’s the phantomjs generate script that I used:
What’s important to know:
- I’m configuring the page, like paper size, margins, and headers and footers:
- Then I let PhantomJS open the URL to my Sencha app:
The big magic trick here is that you'll need to wait until the headless browser has loaded the Sencha app, with the framework, completely into its memory. Otherwise you would print an empty page (because index.html files in Sencha apps are usually pretty empty, since Ext JS generates the browser DOM elements).
Take a look at the waitFor() method I used. The first argument is a test function. This test function (see line 94) tries to find the Ext namespace in my Sencha app. When it's there, I still don't want to make the screenshot immediately, because maybe my stores are not loaded yet. So I wrote another evaluation:

Ext.ComponentQuery.query('grid')[0].getStore().count();

If there is data in my store, then go ahead and generate the PDF.
Again, this probably doesn’t make sense for your application, but you will get the idea.
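The waitFor() pattern itself is just a polling loop. A stripped-down version of the idea, in plain JavaScript and independent of PhantomJS (names are mine, not the PhantomJS example's):

```javascript
// Poll testFx() until it returns true (then call onReady with no error),
// or until maxWait milliseconds have passed (then report a timeout).
function waitFor(testFx, onReady, maxWait) {
  var start = Date.now();
  var timer = setInterval(function () {
    if (testFx()) {
      clearInterval(timer);
      onReady(null);
    } else if (Date.now() - start > maxWait) {
      clearInterval(timer);
      onReady(new Error('waitFor timed out'));
    }
  }, 50); // re-check every 50ms
}
```

In the PhantomJS script, testFx would page.evaluate() the two checks described above (first the Ext namespace, then the store count), and onReady would call page.render().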
You render the page with page.render('my-pdf-name.pdf'); and then you exit the PhantomJS process (phantom.exit()).
Back in my PDF helper class, I wrote the following lines to set the filename of the PDF and directly open it in the browser. It's important that you set the page headers and content type to application/pdf:
var filename = "test.pdf";
filename = encodeURIComponent(filename);

res.setHeader('Content-disposition', 'inline; filename="' + filename + '"');
res.setHeader('Content-type', 'application/pdf');

var fs = require('fs');
fs.readFile(filename, function(err, data) {
    res.contentType("application/pdf");
    res.end(data);
});
And that's it! As you can see, when using PhantomJS to visit your Sencha app you need to deal with timing issues, because by default the index.html of a Sencha app is nearly empty and the app itself is generated in the DOM.
There are lots of ways to create PDFs or images from Sencha apps. Which technologies and tricks did you use?
Update:
Ext JS 6.2 Premium ships with a data exporter package for grids and pivot grids. It makes it possible to export all the records which are visible in a grid to XML, CSV, TSV, HTML and Excel formats. Shikhir Singh recently wrote a nice post about how to extend Ext.grid.plugin.Exporter to easily export to PDF.
Source: https://www.leeboonstra.com/developer/how-to-generate-a-sencha-app-to-pdf/
Why does rospy.wait_for_message get stuck even though messages are being published?
I am using this method in my python script
rospy.wait_for_message("/my_topic", Bool, timeout=None)
The /my_topic messages are being published; I can see them when running rostopic echo /my_topic. But the script stays at the rospy.wait_for_message line as if it couldn't detect any messages. What could be the cause?
EDIT
$ rostopic echo -n1 /my_topic
data: True
---
wait_for_message is explicitly meant to get just one message on the topic. Does it detect at least one message? If you need to continuously receive the data on /my_topic, you need a Subscriber.
@curi_ROS no, it doesn't detect any message. Not even one.
What is the output of rostopic echo -n1 /my_topic?
@gvdhoorn It outputs the message.
I actually wanted to make sure that /my_topic is of type Bool, because if it isn't, things won't work. I also wanted to make sure that /my_topic actually exists and is not namespaced. Without you showing us that, we cannot help you.
@gvdhoorn ok, I didn't understand what you wanted me to do. The question is now edited with the output of this command.
Then I guess we'll need to see some of your code. Specifically how you initialise everything up to and including where you use wait_for_message(..).
It doesn't necessarily need to be your exact code, as long as it reproduces the problem you are experiencing. | https://answers.ros.org/question/323600/why-does-rospywait_for_message-get-stucked-even-though-messages-are-being-published/ | CC-MAIN-2021-43 | refinedweb | 239 | 69.48 |
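As background for the discussion above: rospy.wait_for_message is essentially a temporary subscription plus a one-shot blocking wait, which is why a wrong topic name, type, or namespace makes it block forever with timeout=None. A rough pure-Python sketch of the mechanism (simplified, not the actual rospy source; subscribe here is a stand-in for creating a rospy.Subscriber):

```python
import threading

def wait_for_message(subscribe, timeout=None):
    """Block until one message arrives, then unsubscribe and return it.

    `subscribe(callback)` must register `callback` on the topic and
    return an unsubscribe function (a stand-in for rospy.Subscriber).
    """
    received = {}
    got_one = threading.Event()

    def _callback(msg):
        received.setdefault('msg', msg)  # keep only the first message
        got_one.set()

    unsubscribe = subscribe(_callback)
    try:
        if not got_one.wait(timeout):
            raise TimeoutError('timeout exceeded while waiting for message')
        return received['msg']
    finally:
        unsubscribe()  # one message is all this helper ever delivers
```

If no publisher ever matches the subscription (wrong name or type), the wait with timeout=None never returns, which is exactly the symptom described above.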
I am currently using Mongoose with my Express application within my controllers. So far I've been importing the model and then running queries against it with success. Now I'm setting up a cron file (called cron.js) and have a file that imports it the same way; however, when I run the file from a package.json script, the code within the promise never executes; there is no return value. This is the code in the script which runs my file:
babel-node server/db/cron.js
I've tried a few things:
import User from '../models/user';

User.find({}, function(err, users) {
    // these don't run:
    console.log(err);
    console.log(users);
});

async function getUser() {
    // runs
    console.log('hi');
    let user = await User.findOne({ 'name': 'ETH' });
    // doesn't run
    console.log(user);
}
getUser();

User.findOne({ name: 'bob' })
    .exec((err, user) => {
        // doesn't run
        console.log('123');
    });
When I console.log the model (User) and the function (User.find), they have been imported successfully. I've also tried this with findOne.
You need to call .exec():
let user = await User.findOne({ 'name' : 'ETH'}).exec()
findOne() returns a Query; calling .exec() on it returns a fully-fledged Promise.
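The reason both await query and query.exec() can work is that a Mongoose Query is a "thenable", while exec() returns a real Promise. A toy illustration of that shape (plain JavaScript, not Mongoose itself; FakeQuery and demo are made-up names):

```javascript
// A Query-like object: exec() returns a real Promise, and the then()
// method makes the object itself awaitable (a "thenable").
class FakeQuery {
  constructor(result) { this.result = result; }
  exec() { return Promise.resolve(this.result); }
  then(onFulfilled, onRejected) {
    return this.exec().then(onFulfilled, onRejected);
  }
}

async function demo() {
  const q = new FakeQuery({ name: 'ETH' });
  const viaExec = await q.exec(); // explicit promise
  const viaAwait = await q;       // works because q is thenable
  return [viaExec.name, viaAwait.name];
}
```

One more thing worth checking in a standalone script like cron.js: Mongoose buffers queries until a connection is established, so if mongoose.connect() is never called in that entry point, callbacks can appear to never fire at all.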
I am using express and request-promiseI can't get it to re-render after my request-promise
I want to execute a remote SSH command in a Raspberry Pi using my Ubuntu PC for T amount of minutes and later break it
I'm trying to create a telegram bot that send message to a specific userEach time I try to send a voice message I get this error: | https://cmsdk.com/node-js/code-within-mongoose-promise-never-executes.html | CC-MAIN-2019-22 | refinedweb | 300 | 68.36 |
Openwebmail maildir jobs
We would like to create a basic IMAP server in C, capable of answering requests from IMAP clients and delivering email headers and content accordingly. * NEEDED: handle communication via socket with client, IMAP command parser, delivery of headers and emails (storage in "maildir" format) * NOT NEEDED: login mechanism, SSL connection
Droplet setup on Digital Ocean with all the server features to support my Jooml...3 • Security • Firewall
I'm using Gravity Forms, but when the form is completed, the admin and confirmation mail are not delivered to the accounts. I'm using Gmail, but also tried with another domain mail. Also, in my FTP, there is a folder named "Maildir" where all the undelivered emails are.
Fix problem "-ERR chdir Maildir failed" in ubuntu server with postfix and courier pop already installed
We have a NAS box that contains emails stored in maildir format. These are old emails that were on a Linux server using postfix and dovecot mail. The server failed so we abandoned it and started afresh using Rackspace Hosted Exchange. However, I need the old emails from the NAS box. As far as I am aware there are only two ways of doing this. I have
...server: II. Maintain a whitelist in
I need to have my webapp be able to parse the Dovecot /Maildir/cur folder where email messages are kept. I have a script that will parse the messages for bounces and update the database. Just need help with the access issue. chmod 0777 does not do it. So unless you have Dovecot experience and know exactly what is going on here, please don't bid.
I want to integrate to [login to view URL] twiter bootstrap to a perl webmail script Your task will be to 1) Integrate this twitter bootstrap backbonejs theme [login to view URL] to our openwebmail 2) Use this flatty css theme [login to view URL] to our new theme Quick Start Using
I need a Linux utility to convert mails in maildir format to outlook pst format This PST should work in all versions of outlook above outlook 2007
I Have a VPS with Virtualmin/Webmin, i've a...i've a postifx/dovecot server in this VPS, i send e-mail and the mail are delivered, but i not recived. i've configure in DNS s MX and A param. but my server not recived (in maildir directory) he recived in a spam folder spam mail. the project are a configuraztion of postfix/dovecot for recived a mail.
Hi, I'm looking for a NoSQL storage backend for postfix or any other IMAP maildir server. This is on Linux.
Hey, I'm looking for someone to quickly help me setup and configure a devcot imap (maildir) server on my Amazon EC2 instance.
...would like you to trouble shoot this issue for a few hours Description 1. Can setup qmail and dovecot. (Qmail and Dovecot have been installed on VPS in standart setup with Maildir) 2. Setup qmail for SMTP with secure authentication (user, pass) with connection security : SSL/TLS and can be accessed /opened from MUA (Outlook, Thunderbird). 3. Can send
...database 2. Get squirrelmail, roundcube, postfix admin, etc working with a custom apache install, specifically suexec/fastcgi (I can setup the apache install). 3. use maildir format rather than mbox 4. Automatically redirect to https for postfixadmin and roundcubemail 5. store mail in /home/user/mail/ rather than /var/mail/user/ (if possible)
...backdoors. You will remove malicious PHP files. One example of coding found in [login to view URL] file was: error_reporting(0); define ('WPLANG', ''); @require_once('../Maildir/tmp/[login to view URL]'); Currently the malicious coding is making the Google search results show "Order Cialis in Phoenix". If you can solve this problem, I ha...
Hi There, Looking for someone can install roundcube with myworundcube can write a script can collect contacts and write as vcard from the maildir folders
...Deliverables System Specs: * Mail Server: Postfix, configured to use dovecot for delivery. (Using virtual hosts and virtual users: not sure if this can be important) * Format: Maildir * OS: Ubuntu 10.04.4 If you need help with .xls to .csv conversion, then please consider these resources but you should test with simply changing the extension first
...SImple page to show the email sync config and allow to add or edit sync config, Thats it 7) The OfflineIMAP script will run on the Zimbra server with the Zimbra Server as the Maildir and the remote as IMAP. 8) 2 way sync via OfflineIMAP, run every 10mins on each email address. Thats it. Not hard, just want someone to doing nicely and document the above
...config, add and remove mailbox sync pairs. Thats it 5) The isync scrip will run on the Zimbra server and will be configured such that the Zimbra Server will be access using Maildir and the remote server will be via imap. 6) 2 way sync via isync 7) Also create a script to rync from a remote server to the local server via FTP. Then tar the full contents
...============================= /etc/[login to view URL] ================================ protocols = imap imaps pop3 pop3s log_timestamp = "%Y-%m-%d %H:%M:%S " mail_location = maildir:/home/vmail/%d/%n/Maildir ssl_cert_file = /etc/pki/dovecot/certs/[login to view URL] ssl_key_file = /etc/pki/dovecot/private/[login to view URL] namespace private { separator ...
Needing a Linux administrator to integrate Openwebmail to an existing Ubuntu/Postfix messaging server...Cleanup of previous integration attempts...
Need assistance with configuring Openwebmail
...that the files are available visible with the URI structure mirroring the disk structure. For example, I want URI /enron/motley-m/inbox/16 for file enron_mail_20110402/maildir/motley-m/inbox/16. Note the removal of the '.' on the end of the filename in the URI. The easiest way to do this is simply remove that from each filename on disk. We
Hi, I have an email system which uses exim, I have catchall mailboxes which receive mail for *@domain. If you send a mail to 3 recipients, Exim saves them it saves 1 messages with 3 recipieints in. e.g. Envelope-to: email1, email2 I then have a program called serialmail which picks up the messages and delivers them to the remote SMTP server.. However, this complains because of ...
Need python code/script to scan a Maildir for new messages, parse and save the text, html and attachments to another directory. Prefer if you already have the code done. Please text me via the PM board to discuss this further. Thank you.
...duplicate messages. The original program is here: [login to view URL] The program needs to be modified so that it runs on a dovecot Maildir installation. It must traverse all child folders of a mail account. The comparison of messages only needs to be done within the current folder (i.e. we are not comparing every
I have production server running old setup with qmail/vpopmail/maildir and courier-imap. Qmail is "rotting" and for some time server is not able to process spam filtering and lately I(ve discovered it have problem connecting some of DNS servers and doesnt deliver mails from time to time. I(ve decided it is tome to migrate to new server. I need research
...being complicated. Two users get a pretty unspecific error message "Internal error occured". The Dovecot Server-Log shows "IMAP(): Error open(/srv/mail/[login to view URL]) failed: Permission denied ". X and Y are of course just variables here to not publish the customers E-Mail address. Also E-Mails for those two Mailaccounts
...email files in /home/abrown/Maildir/new For example need a process that will convert: /var/spool/mail/abrown to /home/abrown/Maildir/new/[login to view URL] /home/abrown/Maildir/new/[login to view URL] /home/abrown/Maildir/new/[login to view URL] /home/abrown/Maildir/new/[login to view URL]
I require a C programmer to help write some software to deliver messages in a maildir folder to an SMTP server and log the responses to a database... with accurate logging and downtime monitoring... The code has already been written in php and C using a qmail installation of serialmail.. I expect this will be a few hours work to a good coder... This
...karmic 9.10 running Dovecot IMAP/POP3 Server, Fetchmail Mail Retrieval, Postfix Mail Server and Sendmail Mail Server. I am trying to get this up and running with roundcube or openwebmail and I am having issues. I need one user to forward all mail to a gmail account for now and give me a break down of what was done. This needs to be done asap for under 100
xserempro knows what to do. The problems with my ADSL r...running ok. (didn't change anything) 4 - When I create a new email account for, let's say, [login to view URL] it won't create the email account in the right Maildir. (/home2/macattrick/Maildir = for the default email. when i want to create a new mail it want's to create a folder in /home2)
...following: 1. Check a Pop3 mail account for new mail. 2. For any new messages, download the message details (header, subject, body, etc) and store the new messages inside a 'maildir' as compatible with postfix and the like. There is a specific python library for this ([login to view URL]) I do not mind the structure of the script
...OCR for each attachment file(s) is returned a text field in DB ## Deliverables Ideally this is:- a) Convert existing sendmail ruleset to Postfix, and mbox to maildir The VALUE ADDED EXTRAS are:- Install Postfix b) Modify MTA so that so that MIME attachments are split recursively by GMIME and a SHA-256 based hash of email
The job is to write a library that will:- a) move email messages from Postfix maildir+ (except MIME attachments) into matching fields in Postgres DB b) explode MIME attachments recursively and place the file names into the Postgres DB, ideally with a link to the file based on a long SHA-256 file signature c) alternatively, implement | https://www.freelancer.com/job-search/openwebmail-maildir/ | CC-MAIN-2018-47 | refinedweb | 1,742 | 62.48 |
convert matplotlib figures into TikZ/PGFPlots
This is matplotlib2tikz, a Python tool for converting matplotlib figures into PGFPlots (TikZ) figures for native inclusion into LaTeX documents.
matplotlib2tikz works with both Python 2 and Python 3.
The output of matplotlib2tikz is in PGFPlots, a LaTeX library that sits on top of TikZ and describes graphs in terms of axes, data etc. Consequently, the output of matplotlib2tikz retains more information, can be more easily understood, and is more easily editable than raw TikZ output. For example, the matplotlib figure
import matplotlib.pyplot as plt
import numpy as np

plt.style.use('ggplot')

t = np.arange(0.0, 2.0, 0.1)
s = np.sin(2*np.pi*t)
s2 = np.cos(2*np.pi*t)

plt.plot(t, s, 'o-', lw=4.1)
plt.plot(t, s2, 'o-', lw=4.1)
plt.xlabel('time (s)')
plt.ylabel('Voltage (mV)')
plt.title('Simple plot $\\frac{\\alpha}{2}$')
plt.grid(True)

from matplotlib2tikz import save as tikz_save
tikz_save('test.tex')
(see above) gives
% This file was created by matplotlib2tikz vx.y.z.
\begin{tikzpicture}
\definecolor{color1}{rgb}{0.203921568627451,0.541176470588235,0.741176470588235}
\definecolor{color0}{rgb}{0.886274509803922,0.290196078431373,0.2}
\begin{axis}[
title={Simple plot $\frac{\alpha}{2}$},
xlabel={time (s)},
ylabel={Voltage (mV)},
xmin=-0.095, xmax=1.995,
ymin=-1.1, ymax=1.1,
tick align=outside,
tick pos=left,
xmajorgrids,
x grid style={white},
ymajorgrids,
y grid style={white},
axis line style={white},
axis background/.style={fill=white!89.803921568627459!black}
]
\addplot [line width=1.64pt, color0, mark=*, mark size=3, mark options={solid}]
table {%
0 0
0.1 0.587785252292473
% [...]
1.9 -0.587785252292473
};
\addplot [line width=1.64pt, color1, mark=*, mark size=3, mark options={solid}]
table {%
0 1
0.1 0.809016994374947
% [...]
1.9 0.809016994374947
};
\end{axis}
\end{tikzpicture}
Tweaking the plot is straightforward and can be done as part of your LaTeX work flow. The fantastic PGFPlots manual contains great examples of how to make your plot look even better.
Installation
matplotlib2tikz is available from the Python Package Index, so simply type
pip install -U matplotlib2tikz
to install/update.
Usage
Generate your matplotlib plot as usual.
Instead of pyplot.show(), invoke matplotlib2tikz by
tikz_save('mytikz.tex')
to store the TikZ file as mytikz.tex. Load the library with:
from matplotlib2tikz import save as tikz_save
Optional: The scripts accepts several options, for example height, width, encoding, and some others. Invoke by
tikz_save('mytikz.tex', figureheight='4cm', figurewidth='6cm')
Note that height and width must be set large enough; setting it too low may result in a LaTeX compilation failure along the lines of Dimension Too Large or Arithmetic Overflow; see information about these errors in the PGFPlots manual. To specify the dimension of the plot from within the LaTeX document, try
tikz_save('mytikz.tex', figureheight='\\figureheight', figurewidth='\\figurewidth')
and in the LaTeX source
\newlength\figureheight
\newlength\figurewidth
\setlength\figureheight{4cm}
\setlength\figurewidth{6cm}
\input{mytikz.tex}
Add the contents of mytikz.tex to your LaTeX source code; a convenient way of doing so is via \input{/path/to/mytikz.tex}. Also make sure that the packages for PGFPlots and proper Unicode support are included in the header of your document:
\usepackage[utf8]{inputenc}
\usepackage{pgfplots}
Additionally, with LuaLaTeX
\usepackage{fontspec}
is needed to typeset Unicode characters. Optionally, to use the latest PGFPlots features, insert
\pgfplotsset{compat=newest}
Contributing
If you experience bugs, would like to contribute, have nice examples of what matplotlib2tikz can do, or if you are just looking for more information, then please visit matplotlib2tikz’s GitHub page.
Testing
matplotlib2tikz has automatic unit testing to make sure that the software doesn’t accidentally get worse over time. In test/testfunctions/, a number of test cases are specified. Those
- run through matplotlib2tikz,
- the resulting LaTeX file is compiled into a PDF (pdflatex),
- the PDF is converted into a PNG (pdftoppm),
- a perceptual hash is computed from the PNG and compared to a previously stored version.
To run the tests, just check out this repository and type
pytest
The final pHash may depend on any of the tools used during the process. For example, if your version of Pillow is too old, the pHash function might operate slightly differently and produce a slightly different pHash, resulting in a failing test. If tests are failing on your local machine, you should first make sure to have an up-to-date Pillow.
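For intuition, a perceptual hash of the average-hash flavor can be sketched in a few lines. This is illustrative only: the actual tests hash PNG renderings of the plots, and the exact algorithm and libraries they use may differ.

```python
def average_hash(pixels):
    """Toy average hash: one bit per pixel, set when the pixel is at
    least as bright as the image mean. Similar images produce similar
    bit strings, so a small rendering change flips only a few bits."""
    flat = [value for row in pixels for value in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if value >= mean else '0' for value in flat)

def hamming(a, b):
    """Number of differing bits: the distance a test would threshold on."""
    return sum(x != y for x, y in zip(a, b))
```

This also shows why tool versions matter: a slightly different PNG rasterization perturbs pixel values, which can nudge bits of the hash and push it past the comparison threshold.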
If you would like to contribute a test, just take a look at the examples in test/testfunctions/. Essentially a test consists of three things:
- a description,
- a pHash, and
- a function that creates the image in matplotlib.
Just add your file, add it to test/testfunctions/__init__.py, and run the tests. A failing test will always print out the pHash, so you can leave it empty in the first run and fill it in later to make the test pass.
Distribution
To create a new release
bump the __version__ number,
publish to PyPi and GitHub:
$ make publish
License
matplotlib2tikz is published under the MIT license.
On Oct 6, 2006, at 4:33 PM, Chris Shiflett wrote:
> Jonathan Vanasco wrote:
>> can't a lot of this be locked down with http referrers?
>
> Until July of this year, checking the Referer was thought to be a
> pretty
> good safeguard against CSRF, because an attacker would have to cause a
> victim to send the right Referer, which isn't so easy.
>
> Unfortunately, Amit Klein published some research in July that
> demonstrated how to do this with Flash. So, if your users use clients
> that support Flash (which most do), this is not a good safeguard.
That's rather annoying.
The steps to lock down a domain are f*ing difficult.
I don't think it's even entirely possible now... If a browser has
javascript + async, they can fake the entire sessions.
On all my projects , I've moved flash communications to their own
namespace to avoid *some* referrer forging, and I've locked down all
account / write pages to necessitate a http referrer from my site.
I say *some* in regards to flash, because a swf can still do a
loadMovie against a domain without crossdomain.xml constraints.
Beyond that though, anything that I can think of really just makes
things more inconvenient for 'hackers'. considering what flash and
javascript can do now-- especially in regards to async/callbacks/
regex/requests/everything happening silent behind-the-scenes-- there
are just so many new 'vulnerabilities'
i'm not even sure that these really are vulnerabilities though...
if a user gets a spam, clicks on the link, that link loads some site
in russia / china / czech republic that has a js file or flash file
that is used to fake refferrers, make requests, and basically be a
web spider using their session info -- all behind the scenes -- is
that necessarily a vulnerability in my website, or one in the browsers ?
I'm not sure on that.
What I am sure of, is that it took me all of 30 minutes to
'reasonably' lock down my websites under mod perl. thats damn fast. | http://fixunix.com/modperl/166451-re-csrf-xss-evasion.html | CC-MAIN-2016-18 | refinedweb | 391 | 67.38 |
Details

- Type: Bug
- Status: Open
- Priority: Critical
- Resolution: Unresolved
- Affects Version/s: X10 2.1.2
- Component/s: Language Design
- Labels: None
Description
The manual explains what should happen when an instantiation of a generic
class makes method overloading ambiguous. This explanation
("complain-on-call"), while sensible, will require us to massively rework our
C++ back end implementation. So, Bard and Igor propose we switch to a somewhat
different approach, "complain-on-instantiation".
Both concepts concern ambiguous overloadings. If we have a type C[T] with
methods def m(x:T)=1; and def m(x:Int)=2;, then C[Int] has two
methods with the same signature.
"complain-on-call" says that it is illegal to use an ambiguous method. So
we could create an instance of C[Int], but not call it's m method.
"complain-on-instantiation" says that it is illegal to use a type with
ambiguous methods. So we can't even create an instance of C[Int].
- Complain-on-instantiation is more restrictive than complain-on-call. So, if
we make this change in 2.2, we can go to complain-on-call later if we want
to.
- But it's not much more restrictive. Bard hasn't, as of the time of
writing, come up with any plausible example where complain-on-call allows
something sensible but complain-on-instantiation doesn't.
- Complain-on-instantiation can be checked in the front end, which ought to be
easier and less troublesome than the back-end matters that complain-on-call
requires.
(The original motivation for complain-on-call is worth mentioning here.
Originally Bard had written something far more restrictive in the spec –
something that was restrictive enough to exclude some vaguely useful cases.
When this was pointed out, Bard went to the other extreme, of allowing as many
calls as possible. This decision was discussed very casually, but nobody
thought deeply about it until now. In particular, the material in the spec as
of today is not there because anyone needs that behavior in particular, or
because we have any reason to think it is right – it simply seemed like a
good idea at the time. So making this change is unlikely to have much
software-engineering impact.)
Here's what the spec says now. This is complain-on-call:
A class definition may include methods which are ambiguous in some
generic instantiation. (It is a compile-time error if the methods are
ambiguous in every generic instantiation, but excluding class
definitions which are are ambiguous in some instantiation would exclude
useful cases.) It is a compile-time error to use an ambiguous method
call.
The following class definition is acceptable. However, the marked method
calls are ambiguous, and hence not acceptable.package Classes4d5e; class Two[T,U]{ def m(x:T)=1; def m(x:Int)=2; def m[X](x:X)=3; def m(x:U)=4; static def example() { val t12 = new Two[Int, Any](); // ERROR: t12.m(2); val t13 = new Two[String, Any](); t13.m("ferret"); val t14 = new Two[Boolean,Boolean](); // ERROR: t14.m(true); } }
The call t12.m(2) could refer to either definition 1 or definition 2 of m, so it is not allowed.

The call t14.m(true) could refer to either definition 1 or definition 4, so it, too, is not allowed.

The call t13.m("ferret") refers only to definition 1. If definition 1 were absent, type argument inference would make it refer to definition 3. However, X10 will choose a fully-specified call if there is one, before trying type inference, so this call unambiguously refers to definition 1.
This test case has been failing for a while.
Under the proposed new rules, the types Two[Int,Any] and
Two[Boolean,Boolean] will become illegal. The errors will be caught at the
val t12 and val t14 lines. Two[String,Any] will remain legal.
Issue Links
- is depended upon by
XTENLANG-2971 Umbrella language/front-end JIRA for X10 2.6
Activity
How will you type-check this:
class A[T]{ def m(T)=1; def m(Int)=2; static def example() { val t = new A[Int](); // you want this to be illegal, because t.m(1) is problematic example2[Int](); } static def example2[U]() { val t = new A[U](); // is this legal? will it cause a similar problem in the C++ backend? // t.m(1) is not problematic here because we resolve it to m(Int)=2 } }
For every class, we can build a list of problem instantiations (as a pseudo type constraint in terms of the type parameters, so we can catch overloading ambiguities between type parameters). Whenever classes or methods instantiate generic types based on their type parameters, they will inherit these restrictions on their type arguments as well. In this case, the error would be reported on both new A[Int]() and example2[Int]() — the first because of the direct conflict, and the second because example2() will inherit the restriction base(U)!=Int.
bulk defer of open issues to 2.2.2.
Dave, Igor: this jira is silent on why "complain-on-call" is problematic for the C++ backend.
Is it?
What are the problems?
(Bard – I don't understand your comment "Complain-on-instantiation can be checked in the front end, which ought to be easier and less troublesome than the back-end matters that complain-on-call requires." Both can be checked in the front end.)
I believe the issue is that when you instantiate a templatized C++ class, you have to instantiate all of its instance methods so the vtable can be filled in. If a particular instantiation of the class results in ambiguous methods, it doesn't matter whether or not the program actually ever calls them; the post compiler will reject the instantiation because it isn't valid C++.
We don't have this problem for static methods, since they get instantiated on a per-method basis (no vtable).
My recollection is that to support complain-on-call for instance methods we would need to give up on using the C++ object model to implement instance methods, and I don't see this as being a viable option.
I see. I agree that giving up the C++ object model is not a viable option.
Will the following definition suffice to rule out such C++ post-compilation errors? Recall that a unit is a class, struct or interface.
A unit definition U may include method definitions which are ambiguous for some choice of types for its type parameters. It is an error for a unit type to be such that all its instantiations have ambiguous members.
(Here, of course, we consider a type T to be an instance of T if T has no type parameters.)
This definition would mark Foo[S, T] as an error if it is defined thus:
class Foo[S,T]{S==T} { def m(S):void{} def m(T):void{} }
Igor –
Have you thought through the design of the type constraints necessary (cf your message of May 29, 2011). This needs to be worked out carefully.
To make it concrete, here's an example of an X10 program that can't be supported with the current C++ codegen strategy. So, ideally we would reject this in the front end. The most precise checking of this would allow the call to trouble[Int,Double]() but reject the call to trouble[Int,Int]().
class Foo[S,T] {
    def m(S):void { Console.OUT.println("S's m"); }
    def m(T):void { Console.OUT.println("T's m"); }
}

public class GenericFun {
    public static def trouble[A,B]() = new Foo[A,B]();

    public static def main(Array[String]) {
        // this is ok
        trouble[Int,Double]();

        // this is not ok.
        //
        // Generated C++ code fails postcompilation because Foo[Int,Int]
        // has two identical methods.
        // Does not matter that the methods are never called,
        // C++ needs to instantiate them to put them in the vtable.
        trouble[Int,Int]();
    }
}
The C++ level error when this class is compiled is as expected:
x10c++: In file included from GenericFun.cc:3:
/home/dgrove/x10-trunk/myTests/Foo.h: In instantiation of ‘Foo<int, int>’:
/home/dgrove/x10-trunk/myTests/GenericFun.h:71:   instantiated from ‘static x10aux::ref<Foo<x10tp__S, x10tp__T> > GenericFun::trouble() [with x10tp__A = int, x10tp__B = int]’
GenericFun.cc:23:   instantiated from here
/home/dgrove/x10-trunk/myTests/Foo.h:77: error: ‘void Foo<x10tp__S, x10tp__T>::m(x10tp__T) [with x10tp__S = int, x10tp__T = int]’ cannot be overloaded
/home/dgrove/x10-trunk/myTests/Foo.h:69: error: with ‘void Foo<x10tp__S, x10tp__T>::m(x10tp__S) [with x10tp__S = int, x10tp__T = int]’
This simple case could be caught by analyzing Foo and having the compiler add the constraint S!=T to the class invariants. One can also construct variations like:
class Foo[S,T] {
    def m(S):void { Console.OUT.println("S's m"); }
    def m(T):void { Console.OUT.println("T's m"); }
    def m(Complex):void { ... }
}
So you would need S!=T and S!=Complex and T!=Complex in the class invariants.
It might be possible for the typechecker to formulate all the necessary constraints by looking at overloaded virtual methods that have type parameters as arguments. I haven't worked out the details (and they may be too complex to make it practical), but I think that is probably the only solution that will catch all potential post-compilation errors at X10 compile time and safely allow overloading of instance methods whose formal types include a type parameter of the class.
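The failure mode can be reproduced outside the X10 toolchain with a few lines of plain C++ (a hedged sketch; the names and integer return markers are illustrative, not taken from the X10 sources):

```cpp
#include <cassert>

template <typename S, typename T>
struct Foo {
    int m(S) { return 1; }   // "S's m"
    int m(T) { return 2; }   // "T's m"; identical to m(S) when S == T
};

// Instantiating Foo<X,X> anywhere in the program is ill-formed, even if
// m() is never called, because both member declarations collapse into
// the same signature when the class is instantiated:
//   Foo<int, int> bad;   // error: m(int) cannot be overloaded

int demo() {
    Foo<int, double> ok;                 // distinct signatures: fine
    return ok.m(1) * 10 + ok.m(2.0);     // picks m(int), then m(double)
}
```

This is why complain-on-call cannot be mapped onto C++ class templates directly: the ambiguity is rejected at instantiation time, not at call time.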
So you would need S!=T and S!=Complex and T!=Complex in the class invariants.
Actually, that is not enough. Because constraints are stripped away, you would need to express (S != T) mod constraints. Last year I proposed both the != operation for types, and the RawType operator (or whatever name we choose to give it). I also proposed a CanBeOverloaded predicate on two types, with essentially the above motivating example. Both were rejected.
bulk defer of issues to 2.2.3.
bulk defer of 2.3.0 open issues to 2.3.1.
bulk defer to 2.3.2
bulk defer to 2.4.1.
By default, targeting all language design issues at next major release (X10 2.5) or later.
bulk defer features from X10 2.5 to X10 2.6.
bard_todo | http://jira.codehaus.org/browse/XTENLANG-2764?focusedCommentId=289425&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-11 | refinedweb | 1,731 | 57.87 |
M5Paper wakeup cause
Hello,
I'm playing around with the M5Paper RTC and I'm stuck on differentiating between a restart caused by pressing the reset button, and a restart caused by a shutdown() with an RTC IRQ wakeup.
I'm writing a simple clock application that updates once a minute with the following properties:
- On boot I paint a decorative frame (I actually pull it by http), connect to ntp and set up the RTC clock. Finally I draw the time, and do a shutdown(60) to wake up in a minute.
- After a minute I want to wake up, draw the time, and go back to sleep.
So I did all the initialization in setup(): initialized the RTC, set up the font cache, and then went to sleep for 60 s. But when I wake up after 60 s I need to know whether the RTC has already been initialized, and whether I have already painted the clock frame. Right now I do all the setup once a minute.
The ESP itself has a wake up "reason". But how would I access it on the M5Paper? And does it work in the context of the BM8563 RTC?
Thanks!
the RTC clock has a timer flag which when set tells you that the RTC timer has woken M5Paper; if not set then it was a regular restart. I've implemented that for M5CoreInk (which has a similar architecture) here.
BTW: The wake up reason of the ESP32 doesn't help you here. As far as the ESP32 is concerned it's a regular start in both cases as the shutdown kills the power to the ESP32 completely. (Only when the ESP32 stays powered and goes into deep sleep or light sleep you can use the ESP32 wake up reason.)
Thanks
Felix
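The check itself boils down to a single bit test. A minimal sketch, under the assumption that M5.RTC.readReg(0x01) returns the raw BM8563 Control/Status 2 byte (the BM8563 is PCF8563-compatible, where the timer flag TF is bit 2 of that register):

```cpp
#include <cstdint>
#include <cassert>

enum class WakeupCause { PowerOnOrReset, RtcTimer };

// Control/Status register 2 lives at address 0x01; bit 2 (mask 0x04) is
// the timer flag TF, which the chip sets when the countdown timer fires.
WakeupCause wakeupCause(uint8_t ctrlStatus2) {
    const uint8_t TF = 0x04;
    return (ctrlStatus2 & TF) ? WakeupCause::RtcTimer
                              : WakeupCause::PowerOnOrReset;
}
```

On the device you would read the register before M5.begin() (which clears the flag) and branch on the result.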
You could use the EEPROM to store the state in - e.g. last time the device went to sleep, last time NTP sync happened, etc. - and use that to decide which path of logic you need to follow.
For example, this could be the full flow:
setup():
- Get EEPROM, extract last NTP sync (last_ntp_sync), last boot time (last_boot), and last requested sleep length (sleep_duration)
- Get RTC time (rtc_time)
- If last_ntp_sync is empty, go to first_boot()
- If last_boot is empty, go to first_boot()
- If rtc_time - last_boot < sleep_duration (maybe add some wiggle room here, 1-2s should be fine), go to manual_wakeup()
- Otherwise, go to rtc_wakeup()
first_boot():
- Connect to WiFi
- Request date and time from NTP = new_rtc_time
- Update RTC with new_rtc_time
- Update EEPROM's last_ntp_sync and last_boot_time
- Proceed to draw_clock()
rtc_wakeup():
- Check that last_ntp_sync is not too old (e.g. you'd probably want to sync with NTP daily)
- If last_ntp_sync is too old, go to a method that syncs RTC time
- Update EEPROM's last_boot_time
- Proceed to draw_clock()
manual_wakeup():
- Do the whole NTP check dance again (probably worth organising into a separate method)
- Do whatever you want to do if the device wakeup happened due to the button press
- Proceed to draw_clock()
Obviously extend it to your liking, but by storing these values, you can compare the last stored date with the RTC time, and act accordingly. It's not as elegant as e.g. having a proper boot reason, but it works.
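The branching above can be collapsed into one pure decision function; this is a sketch only, with placeholder names and a placeholder 2-second slack:

```cpp
#include <cstdint>
#include <cassert>

enum class BootPath { FirstBoot, RtcWakeup, ManualWakeup };

// All times are epoch seconds; 0 is treated as "never stored" (empty EEPROM).
BootPath classifyBoot(uint32_t lastNtpSync, uint32_t lastBoot,
                      uint32_t sleepDuration, uint32_t rtcTime) {
    if (lastNtpSync == 0 || lastBoot == 0)
        return BootPath::FirstBoot;
    const uint32_t slack = 2;  // wiggle room for RTC drift, in seconds
    // Woke up noticeably earlier than the requested sleep => button/reset.
    if (rtcTime - lastBoot + slack < sleepDuration)
        return BootPath::ManualWakeup;
    return BootPath::RtcWakeup;
}
```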
- grundprinzip last edited by
@dov I'm very new to the M5Paper, but I'm wondering if you couldn't use esp_deep_sleep_start() instead of shutdown(), because this will keep the RTC going and shut down everything else.
In theory, you might be able to call esp_sleep_enable_timer_wakeup() (which returns an esp_err_t) with an appropriate duration and then simply continue. According to the documentation the WiFi connection data is kept as well.
I'm currently trying to work on something like this as well.
Interesting post you wrote - a nice solution to detect the reason for waking the M5Paper. But unfortunately the code you gave for M5CoreInk doesn't compile on an M5Paper.
I did some trial and error modifications and within 5 minutes I got code that does compile.
Here is just the relevant code in setup() - a sketch from an example in the M5EPD library:

#include <M5EPD.h>
// is not yet working as should be
M5EPD_Canvas canvas(&M5.EPD);

void setup() {
    // Check power on reason before calling M5.begin()
    // which calls Rtc.begin() which clears the timer flag.
    Wire1.begin(21, 22);                    // Jop: compiles
    //uint8_t data = M5.rtc.ReadReg(0x01);  // Jop: compiles NOT
    uint8_t data = M5.RTC.readReg(0x01);    // Jop: compiles
    ...
}

void loop() {
    M5.update();
    delay(100);
}
As you can see, I only had to change the capitalization of some identifiers to get it to compile.
But apparently not the right register or bits are used for detecting wakeup cause.
Maybe someone else is triggered by this to find the full solution for the M5Paper.
thank you for trying it on an M5Paper.
Try to change Wire1.begin(21, 22) to Wire.begin(21, 22). M5Paper, unlike M5CoreInk, uses Wire for its internal I2C. And having Wire and Wire1 set up to use the same set of GPIOs makes M5.RTC.begin() fail, and thus the timer flag never gets cleared.
Cheers
Felix
Good tip. It works.
Here is the updated code, based on an example from the M5EPD library:

#include <M5EPD.h>
// m5paper-wakeup-cause
// see forum:
M5EPD_Canvas canvas(&M5.EPD);

void setup() {
    // Check power on reason before calling M5.begin()
    // which calls RTC.begin() which clears the timer flag.
    Wire.begin(21, 22);
    uint8_t data = M5.RTC.readReg(0x01);
    ...
    // shut down now and wake up after 5 seconds
}

void loop() {
    M5.update();
    delay(100);
}
One extra remark (maybe for a new topic): the shutdown() routine has some overloaded variations. Unfortunately they are not very well documented, as the version with only a time (see: int shutdown( const rtc_time_t &RTC_TimeStruct); ) seems not to work as expected. If I add the date too (the last overloaded version) it works. Maybe my expectations are wrong. One would sometimes like a bit more specification from M5Stack in the documentation (it is a timesaver).
Anyway thnx for your suggestion Wire1 => Wire.
thanks for reporting back. I am happy to hear you got it working to your liking.
Re shutdown: that has been fixed in a pull-request, but unfortunately M5Stack engineers are very slow lately (or no longer allowed or interested?) to approve and use pull-requests from the community.
Essentially the following two lines need to be removed:
out_buf[2] = 0x00;
out_buf[3] = 0x00;
See here around line 232.
Thanks
Felix
Thnx for your fast response.
I commented out the two lines you mentioned, and this seems to work, but I am experiencing more problems.
This shutdown() logic of the M5Paper is rather buggy. I think some engineers at M5Stack should inspect these modules and repair them.
For example: I want my M5Paper to be woken up by the RTC every full 5 minutes; then it has to run for about 10 to 15 seconds and go into sleep mode for the rest of that 5-minute period. For that I wrote a routine goSleep() to calculate the next moment of awakening.
I tried several possibilities with the different overloaded methods of shutdown():
- calculate remaining seconds to go sleep and used overload-2
- calculate timestamp for the next 5-minute moment and used overload-3
- calculate date & timestamps for the next 5-minute moment and used overload-4
Now comes the weird thing (hence the bugs): in any of these cases the wakeup ALSO comes at every 3-minute mark, so both at every 3-minute and at every 5-minute moment.
So where I want to have this wake up schedule: hh:00 hh:05 hh:10 hh:15 hh:20 ...and so on...
I get this wake up schedule: hh:00 hh:03 hh:05 hh:08 hh:10 hh:13 hh:15 hh:18 hh:20 ...and so on...
Before your suggested patch I tried only method 1 (as 2 did not work without your patch), and then my M5Paper was woken by the RTC at every 4-minute and every 5-minute moment.
I inspected my calculation logic and it is OK. The problem is somewhere in the M5Paper library.
If necessary I can isolate the code I use to show you; but that's a bit more work.
Hi Felix,
In addition to my last post, where I wrote that the shutdown() logic looks rather buggy:
Now I have found the reason for this behavior - partly, at least; some overload variations are now OK, but one still has the problem I described in my last post.
The reason for the buggy behavior: I had included this:
#include <ESP32Time.h>
ESP32Time rtc;
to make it possible to set also the ESP32 internal RTC that's present in this processor, named by me with "rtc".
The external BM8563 I named as "RTC".
Maybe this library is interfering with the BM8563 code.
I said "partly" because Overload-2 still has the same problem I wrote about before.
have you tried to rename rtc to something else? Does that fix the issue?
@M5Stack engineers : any idea what's going on here?
Thanks
Felix
No, I didn't try renaming rtc. What difference should it make? (In C++, rtc and RTC are different identifiers, aren't they?)
The reason for introducing the ESP32's own internal rtc was to look for a simple way to set the system clock from the M5Paper RTC after awakening, without connecting to WiFi (which is very time- and battery-consuming).
The weird thing is: syncing with NTP is rather simple in the ESP32 Arduino environment, but just setting the system time equal to the 'external' BM8563 RTC time is more complicated. There is no simple routine. I found a not-so-elegant way with mktime().
Another point: I discovered bugs with waking the M5Paper when the awakening moment falls exactly at midnight, i.e. at 24:00 or 00:00.
See a next post (if I find time to describe). | https://forum.m5stack.com/topic/2851/m5paper-wakeup-cause/10 | CC-MAIN-2021-39 | refinedweb | 1,632 | 72.76 |
Rolling with Ruby on Rails, Part 2
If you remember from Part 1, once I took over the list action from the
scaffolding I no longer had a way to delete a recipe. The list action must
implement this. I'm going to add a small delete link after the name of each
recipe on the main list page that will delete its associated recipe when
clicked. This is easy.
First, edit c:\rails\cookbook\app\views\recipe\list.rhtml and add
the delete link by making it look like the following:

<html>
<head>
<title>All Recipes</title>
</head>
<body>

<h1>Online Cookbook - All Recipes</h1>
<table border="1">
 <tr>
  <td width="80%"><p align="center"><i><b>Recipe</b></i></td>
  <td width="20%"><p align="center"><i><b>Date</b></i></td>
 </tr>

 <% @recipes.each do |recipe| %>
 <tr>
  <td>
   <%= link_to recipe.title, :action => "show", :id => recipe.id %>
   <%= link_to "(delete)", {:action => "delete", :id => recipe.id},
               :confirm => "Really delete #{recipe.title}?" %>
  </td>
  <td><%= recipe.date %></td>
 </tr>
 <% end %>
</table>
<p><%= link_to "Create new recipe", :action => "new" %></p>
</body>
</html>
The main change here is the addition of this link:
<%= link_to "(delete)", {:action => "delete", :id
=> recipe.id},
:confirm => "Really delete #{recipe.title}?" %>
This is different from the previous ones. It uses an option that
generates a JavaScript confirmation dialog. If the user clicks on OK
in this dialog, it follows the link. It takes no action if the user clicks on Cancel.
Try it out by browsing to the recipe list page.
Try to delete the Ice Water recipe, but click on Cancel when the
dialog pops up. You should see something like Figure 5.
Figure 5. Confirm deleting the Ice Water recipe
Now try it again, but this time click on OK. Did you see the results shown in
Figure 6?
Figure 6. Error deleting the Ice Water recipe
Alright, I admit it; I did this on purpose to remind you that it's OK to
make mistakes. I added a link to a delete action in the view template, but
never created a delete action in the recipe controller.
Edit c:\rails\cookbook\app\controllers\recipe_controller.rb and add
this delete method:
def delete
Recipe.find(@params['id']).destroy
redirect_to :action => 'list'
end
The first line of this method finds the recipe with the ID from the link,
then calls the destroy method on that recipe. The second line merely redirects
back to the list action.
Try it again. Browse to the recipe list and try to delete the Ice Water recipe. Now it should look like Figure 7, and the Ice Water recipe should be gone.
Figure 7. Ice Water recipe is gone
Part 1 used Rails' scaffolding to provide the full range of CRUD operations
for categories, but I didn't have to create any links from our main recipe list
page. Instead of just throwing in a link on the recipe list page, I want to do
something more generally useful: create a set of useful links that will appear
at the bottom of every page. Rails has a feature called layouts, which
is designed just for things like this.
Most web sites that have common headers and footers across all of the pages do so by having each page "include" special header and footer text. Rails
layouts reverse this pattern by having the layout file "include" the page
content. This is easier to see than to describe.
Edit c:\rails\cookbook\app\controllers\recipe_controller.rb and add
the layout line immediately after the class definition, as shown
in Figure 8.
layout "standard-layout"
Figure 8. Adding a layout to the recipe controller
This tells the recipe controller to use the file
standard-layout.rhtml as the layout for all pages rendered by the
recipe controller. Rails will look for this file using the path
c:\rails\cookbook\app\views\layouts\standard-layout.rhtml, but you will
have to create the layouts directory because it doesn't yet exist.
Create this layout file with the following contents:
<html>
<head>
<title>Online Cookbook</title>
</head>
<body>
<h1>Online Cookbook</h1>
<%= @content_for_layout %>
<p>
<%= link_to "Create new recipe",
:controller => "recipe",
:action => "new" %>
<%= link_to "Show all recipes",
:controller => "recipe",
:action => "list" %>
<%= link_to "Show all categories",
:controller => "category",
:action => "list" %>
</p>
</body>
</html>
Only one thing makes this different from any of the other view
templates created so far--the line:
<%= @content_for_layout %>
This is the location at which to insert the content rendered by each recipe
action into the layout template. Also, notice that I have used links that
specify both the controller and the action. (Before, the controller defaulted
to the currently executing controller.) This was necessary for the link to the
category list page, but I could have used the short form on the other two
links.
Before you try this out, you must perform one more step. The previous recipe
view templates contain some HTML tags that are now in the layout, so edit
c:\rails\cookbook\app\views\recipe\list.rhtml and delete the
extraneous lines at the beginning and end to make it look like this:
<h1>Online Cookbook - All Recipes</h1>
<table border="1">
 <tr>
  <td width="80%"><p align="center"><i><b>Recipe</b></i></td>
  <td width="20%"><p align="center"><i><b>Date</b></i></td>
 </tr>

 <% @recipes.each do |recipe| %>
 <tr>
  <td>
   <%= link_to recipe.title, :action => "show", :id => recipe.id %>
   <%= link_to "(delete)", {:action => "delete", :id => recipe.id},
               :confirm => "Really delete #{recipe.title}?" %>
  </td>
  <td><%= recipe.date %></td>
 </tr>
 <% end %>
</table>

<p><%= link_to "Create new recipe", :action => "new" %></p>
Similarly, edit both
c:\rails\cookbook\app\views\recipe\edit.rhtml and
c:\rails\cookbook\app\views\recipe\new.rhtml to delete the
same extraneous lines. Only the form tags and everything in between
should remain.
Browse to the recipe list page,
and it should look like Figure 9.
Figure 9. Using a layout with common links
The three links at the bottom of the page should now appear on every page
displayed by the recipe controller. Go ahead and try it out!
If you clicked on the "Show all categories" link, you probably noticed that
these nice new links did not appear. That is because the category pages display
through the category controller, and only the recipe controller knows to use
the new layout.
To fix that, edit
c:\rails\cookbook\app\controllers\category_controller.rb and add the
layout line as shown in Figure 10.
Figure 10. Adding a layout to the category controller
Now you should see the common links at the bottom of all pages of the recipe
web application.
© 2016, O’Reilly Media, Inc.
FEOF(3P) POSIX Programmer's Manual FEOF(3P)
NAME
feof — test end-of-file indicator on a stream

SYNOPSIS
#include <stdio.h>

int feof(FILE *stream);
DESCRIPTION
The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1‐2008 defers to the ISO C standard.

The feof() function shall test the end-of-file indicator for the stream pointed to by stream.

The feof() function shall not change the setting of errno if stream is valid.
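A minimal sketch of the intended usage pattern: test feof() only after a read has returned EOF, to tell end-of-file apart from a read error (count_bytes is an illustrative name, not part of the standard):

```c
#include <stdio.h>
#include <assert.h>

long count_bytes(FILE *stream)
{
    long n = 0;
    int c;

    while ((c = fgetc(stream)) != EOF)   /* read until EOF or error */
        n++;

    if (feof(stream))
        return n;        /* clean end-of-file */
    return -1;           /* read error: ferror(stream) is nonzero here */
}
```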
RETURN VALUE
The feof() function shall return non-zero if and only if the end-of-file indicator is set for stream.
ERRORS
No errors are defined.

The following sections are informative.
EXAMPLES
None.

APPLICATION USAGE
None.

RATIONALE
None.

FUTURE DIRECTIONS
None.
SEE ALSO
clearerr(3p), ferror(3p), fopen(3p)
I've added a submenu in the 'File' menu, and want to give it an icon.
Logically, I used the 'icon' property, but all I get is a stacktrace.
Any idea?
Alain
<action id="zipXchange.OpenZippedProject"
    ...
    icon="/icons/minilogo.png"    <<---- WORKS FINE

...                               <<---- DOESN'T WORK
Hi Alain,
I had the same (or similar) problem with my FunkySearch plugin... not only did the icon not show, but it caused a Throwable at IDEA startup. Removing the icon=".." attribute seemed to fix it. See...
I didn't officially report it as it was only a minor inconvenience for me.
Cheers,
Andrew.
Andrew
> startup. Removing the icon=".." seemed to fix it. See...
>
Well, I do want to adorn the submenu with an icon, so it doesn't fix it
for me :(.
I wonder if this is possible at all.
Alain
Is it possible at all?
Alain
Alain Ravet wrote:
>
Is it?
I had similar problems; I have now found out that it works by specifying the absolute namespace:
You can assign an icon to an action or a group.
I'd recommend having two source roots: one for the code and the other for icons, META-INF, XMLs, etc. Something like this. Important: your icons should be placed in a source directory.
Always use a full unix-style path to your icon, starting from the source root. Say you have a resources folder marked as a source root, with a folder named icons inside, and an icon myaction.png inside the icons folder. The path to the icon in plugin.xml should then look like icon="/icons/myaction.png".
Hope this will help | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206117849-How-to-give-an-icon-to-a-submenu-group- | CC-MAIN-2020-05 | refinedweb | 269 | 76.82 |
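Putting the advice together, a hedged plugin.xml sketch (the ids, texts, and class name are illustrative) of a File submenu group carrying an icon, with the icon stored under a source root:

```xml
<!-- assumes a source root containing icons/minilogo.png -->
<group id="zipXchange.Submenu" text="Zip Xchange" popup="true"
       icon="/icons/minilogo.png">
  <add-to-group group-id="FileMenu" anchor="last"/>
  <action id="zipXchange.OpenZippedProject"
          class="com.example.zipxchange.OpenZippedProjectAction"
          text="Open Zipped Project"/>
</group>
```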
Section (3) ftw
Name
ftw, nftw — file tree walk
Synopsis
#include <ftw.h>

int nftw(const char *dirpath,
        int (*fn) (const char *fpath, const struct stat *sb,
                   int typeflag, struct FTW *ftwbuf),
        int nopenfd, int flags);

#include <ftw.h>

int ftw(const char *dirpath,
        int (*fn) (const char *fpath, const struct stat *sb,
                   int typeflag),
        int nopenfd);
DESCRIPTION
nftw() walks through the directory tree that is located under the directory dirpath, and calls fn() once for each entry in the tree. By default, directories are handled before the files and subdirectories they contain (preorder traversal).

To avoid using up all of the calling process's file descriptors, nopenfd specifies the maximum number of directories that nftw() will hold open simultaneously. When the search depth exceeds this, nftw() will become slower because directories have to be closed and reopened. nftw() uses at most one file descriptor for each level in the directory tree.

For each entry found in the tree, nftw() calls fn() with four arguments: fpath, sb, typeflag, and ftwbuf. fpath is the pathname of the entry, and sb is a pointer to the stat structure returned by a call to stat(2) for fpath.
The
typeflag
argument passed to
fn() is an integer that has one
of the following values:
FTW_F
fpath is a regular file.
FTW_D
fpath is a directory.
FTW_DNR
fpath is a directory which can't be read.
FTW_DP
fpath is a directory, and FTW_DEPTH was specified in flags. (If FTW_DEPTH was not specified in flags, then directories will always be visited with typeflag set to FTW_D.) All of the files and subdirectories within fpath have been processed.
FTW_NS
The stat(2) call failed on fpath, which is not a symbolic link. The probable cause for this is that the caller had read permission on the parent directory, so that the filename fpath could be seen, but did not have execute permission, so that the file could not be reached for stat(2). The contents of the buffer pointed to by sb are undefined.
FTW_SL
fpath is a symbolic link, and FTW_PHYS was set in flags.
FTW_SLN
fpath is a symbolic link pointing to a nonexistent file. (This occurs only if FTW_PHYS is not set.) On most implementations, in this case the sb argument passed to fn() contains information returned by performing lstat(2) on the symbolic link. For the details on Linux, see BUGS.
The fourth argument (ftwbuf) that nftw() supplies when calling fn() is a pointer to a structure of type FTW:

    typedef struct {
        int base;
        int level;
    } FTW;

base is the offset of the filename (i.e., basename component) in the pathname given in fpath. level is the depth of fpath in the directory tree, relative to the root of the tree (dirpath, which has depth 0).

To stop the tree walk, fn() returns a nonzero value; this value will become the return value of nftw(). As long as fn() returns 0, nftw() will continue either until it has traversed the entire tree, in which case it will return zero, or until it encounters an error, in which case it will return −1.

The flags argument of nftw() is formed by ORing together zero or more of the following flags:

FTW_ACTIONRETVAL (since glibc 2.3.3)
If this glibc-specific flag is set, then nftw() handles the return value from fn() differently. _GNU_SOURCE must be defined (before including any header files) in order to obtain the definition of FTW_ACTIONRETVAL from ftw.h.

FTW_CHDIR
If set, do a chdir(2) to each directory before handling its contents. This is useful if the program needs to perform some action in the directory in which fpath resides. (Specifying this flag has no effect on the pathname that is passed in the fpath argument.)

FTW_DEPTH
If set, do a post-order traversal, that is, call fn() for the directory itself after handling the contents of the directory and its subdirectories. (By default, each directory is handled before its contents.)

FTW_MOUNT
If set, stay within the same filesystem (i.e., do not cross mount points).

FTW_PHYS
If set, do not follow symbolic links. (This is what you want.) If not set, symbolic links are followed, but no file is reported twice. If FTW_PHYS is not set, but FTW_DEPTH is set, then the function fn() is never called for a directory that would be a descendant of itself.
ftw()
ftw() is an older function that offers a subset of the functionality of nftw(). The notable differences are as follows:
ftw() has no flags argument. It behaves the same as when nftw() is called with flags specified as zero.
The callback function, fn(), is not supplied with a fourth argument.
The range of values that is passed via the typeflag argument supplied to fn() is smaller: just FTW_F, FTW_D, FTW_DNR, FTW_NS, and (possibly) FTW_SL.
RETURN VALUE
These functions return 0 on success, and −1 if an error occurs. If fn() returns nonzero, then the tree walk is terminated and the value returned by fn() is returned as the result of ftw() or nftw().
ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).
NOTES
POSIX.1-2008 notes that the results are unspecified if fn does not preserve the current working directory.
BUGS

EXAMPLES
The following program traverses the directory tree under the path named in its first command-line argument, or under the current directory if no argument is supplied. It displays various information about each file. The second command-line argument can be used to specify characters that control the value assigned to the flags argument when calling nftw().

#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

static int
display_info(const char *fpath, const struct stat *sb,
             int tflag, struct FTW *ftwbuf)
{
    printf("%-3s %2d ",
           (tflag == FTW_D) ?   "d"   : (tflag == FTW_DNR) ? "dnr" :
           (tflag == FTW_DP) ?  "dp"  : (tflag == FTW_F) ?   "f" :
           (tflag == FTW_NS) ?  "ns"  : (tflag == FTW_SL) ?  "sl" :
           (tflag == FTW_SLN) ? "sln" : "???",
           ftwbuf->level);

    if (tflag == FTW_NS)
        printf("-------");
    else
        printf("%7jd", (intmax_t) sb->st_size);

    printf("   %-40s %d %s\n",
           fpath, ftwbuf->base, fpath + ftwbuf->base);
    return 0;           /* To tell nftw() to continue */
}

int
main(int argc, char *argv[])
{
    int flags = 0;

    if (argc > 2 && strchr(argv[2], 'd') != NULL)
        flags |= FTW_DEPTH;
    if (argc > 2 && strchr(argv[2], 'p') != NULL)
        flags |= FTW_PHYS;

    if (nftw((argc < 2) ? "." : argv[1], display_info, 20, flags) == -1) {
        perror("nftw");
        exit(EXIT_FAILURE);
    }
    exit(EXIT_SUCCESS);
}
SEE ALSO
stat(2), fts(3), readdir(3) | https://manpages.net/detail.php?name=ftw | CC-MAIN-2022-21 | refinedweb | 493 | 65.62 |
pbot 1.4.0
An simple site crawler with proxy support
Pbot contains two modules, Bot and Spider
Bot is a simple helper, created to save request state (cookies, referrer) between http requests. Also, it provides addional methods for adding cookies. With no dependencies this module is easy to use when you need to simulate browser.
Spider is pbot armed with lxml (required). It provides additional methods for easy website crawling, see below.
Bot is very easy to use:
from pbot.pbot import Bot

bot = Bot(proxies={'http': 'localhost:3128'})  # You can provide proxies during bot creation, or set later as bot.proxies
bot.add_cookie({'name': 'sample', 'value': 1, 'domain': 'example.com'})
response = bot.open('')  # Open with cookies and empty referrer
bot.follow('')  # Open google with example.com as a referrer
response = bot.response  # Response saved, and can be read later
bot.follow('', post={'q': 'abc'})  # You can provide post and get as keyword arguments
bot.refresh_connector()  # Flush cookies and referrer
Spider gives you special features:
from pbot.spider import Spider

bot = Spider()  # or Spider(force_encoding='utf-8') to force encoding for parser
bot.open('')
bot.tree.xpath('//a')  # lxml tree can be accessed by .tree, response will be automatically readed and parsed by lxml.html
form = bot.xpath('//form[@id="main"]')  # xpath shortcut for bot.tree.xpath
bot.submit(form)  # Submit lxml form

# Crawler, recursively crawl from target page yielding xml_tree, query_url, real_url
# (real_url - url after all redirects).
bot.crawl(self, url=None,            # Target url to start crawling
          check_base=True,           # Yield pages only on domain from url
          only_descendant=True,      # Yield only pages that urls starts with url
          max_level=None,            # Maximum level
          allowed_protocols=('http:', 'https:'),
          ignore_errors=True,
          ignore_starts=(),          # Tuple/array, ignore urls that starts with ignore_starts (exclude some parts of site)
          check_mime=())
- Author: Pavel Zhukov
- Keywords: crawling,bot
- License: GPL
- Package Index Owner: zeus
- DOAP record: pbot-1.4.0.xml | http://pypi.python.org/pypi/pbot/1.4.0 | crawl-003 | refinedweb | 312 | 50.94 |
GetStringTokenizer(string, string)
Create a string tokenizer for a given string with a specified delimiter
struct sStringTokenizer GetStringTokenizer( string sString, string sDelimiter );
Parameters
sString
The string to be split up into parts (tokens.)
sDelimiter
The delimiter used to split up the string.
Description
This function is to be used in combination with the other functions from x0_i0_stringlib.
It creates and returns a fresh string tokenizer (in form of a sStringTokenizer struct). The tokenizer is required as input for the other functions in x0_i0_stringlib in order to split up a string into parts (tokens).
GetStringTokenizer() requires two arguments: The string that is to be split up into tokens (sString) and the delimiter used to split up the string (sDelimiter). The delimiter MUST be a string containing a single character..
Keep in mind that the functions in x0_i0_stringlib are not very efficient. They do a lot of unnecessary string manipulations and unnecessary string parameter passing; string operations and string parameter passing (into or out of nwnscript functions) are amongst the most inefficient operations in nwnscript.
For the purpose of string tokenization by the functions provided in x0_i0_stringlib, a token is any substring (including empty substrings!) within the original string enclosed by the specified delimiter (e.g. every token has one delimiter to the left, one to the right and NO delimiters within). Any non-empty original string is treated as if it were enclosed within a pair of (virtual) delimiters to its left and right. Thus, unless the original string is empty, the number of tokens is always one higher than the number of delimiters contained within the original string. A non-empty string with no delimiters therefore consists out of one token, which is equal to the original string. An empty string has no tokens, although an empty token will be returned on request.
Example:
sString = "I|am|sloppy||programmer";
sDelimiter = "|";
Token[0] = "I";
Token[1] = "am";
Token[2] = "sloppy";
Token[3] = "";
Token[4] = "programmer";
sString contains five tokens and four delimiters.
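The token rule described above matches a plain keep-empty-fields split. A quick illustration of the same semantics in Python (not NWScript):

```python
def tokens(s, delim):
    # An empty string has no tokens; otherwise each delimiter separates
    # two tokens, so len(tokens) == number_of_delimiters + 1, and empty
    # tokens between adjacent delimiters are preserved.
    return [] if s == "" else s.split(delim)
```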
Known Bugs
Contrary to the description found in the include file, the delimiter must be a single character (see remarks above).
Requirements
#include "x0_i0_stringlib"
Version
???
See Also
author: motu99, editors: Mistress, Kolyana | http://palmergames.com/Lexicon/Lexicon_1_69/function.GetStringTokenizer.html | CC-MAIN-2016-07 | refinedweb | 363 | 53.61 |
So, I took my first hack at writing a super simple plugin for Sublime 3. All I needed were my initials and the current date. As you can tell, I started with the example Hello World plugin and the docs and just modified from there.
heres the code:
import sublime
import sublime_plugin
import datetime
class ExampleCommand(sublime_plugin.TextCommand):
def run(self, edit):
day = (datetime.datetime.now().strftime("%Y-%m-%d"))
sig = "OE " + day
self.view.insert(edit, 0, sig)
I also registered the following key binding:
[
{"keys": ["f5"], "command": "example" },
]
Now, when I open up a new blank file and press f5, I get:
OE 2017-10-12
which is correct.
But, when I do this in say, index.php on a new line, a file where there is plenty of existing code, the plugin won't fire, meaning that I hit f5 and NOTHING happens.
The console reports no errors, and I can run the command view.run_command("example") and get the proper output in a blank file but not in the php file.
Also, running sublime.log_input(True) to log my inputted keys is returning f5 for my keypress, so I know it's not getting misinterpreted.
How can I make this plugin run in all files?
The plugin works — it was inserting at the top of my page instead of where my cursor was placed. I would have deleted this, but the forum won't let me.
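For reference, a sketch of the cursor-position fix (untested here; the only change from the original plugin is replacing the hard-coded offset `0` — the top of the file — with the start of the first selection):

```python
import sublime
import sublime_plugin
import datetime

class ExampleCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        day = datetime.datetime.now().strftime("%Y-%m-%d")
        sig = "OE " + day
        # Insert at the caret (start of the first selection) instead of
        # offset 0, which is always the top of the file.
        self.view.insert(edit, self.view.sel()[0].begin(), sig)
```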
CFD Online Discussion Forums - FLUENT - Pulsatile Flow Boundary Conditions
Lior
February 11, 2004 12:32
Pulsatile Flow Boundary Conditions
Hi everyone! I am trying to simulate 2-D pulsatile flow of blood through arterial stenosis. I was wondering how should i define the boundary conditions for the inlet pulsatile flow waveform? The blood flow inlet has a specific waveform (according to the heart systole and diastole pattern) which repeats itself every ~1 second. I guess i should use a UDF, but how do i define it as a boundary condition in the inlet? Thanks.
solomon
February 12, 2004 04:38
Re: Pulsatile Flow Boundary Conditions
Lior,
The following is the UDF for a pulsatile velocity boundary condition which you can use as the inlet. It is one of the examples in the Fluent documentation, in the UDF section.
I do hope you have the appropriate equation to describe the arterial pulsatile velocity profile. You need to substitute the sine function here with it.
You will need to interpret or compile this UDF; instructions on how to do so are in the Fluent manual. Once compiled/interpreted, the function appears in your list of inlet boundary conditions. From there you make your selection.
#include "udf.h"
DEFINE_PROFILE(unsteady_velocity, thread, position) { face_t f; real t = CURRENT_TIME;
begin_f_loop(f, thread)
{
F_PROFILE(f, thread, position) = 5.0*sin(10.*t);
} end_f_loop(f, thread) }
Solomon
Lior
February 12, 2004 06:49
Re: Pulsatile Flow Boundary Conditions
Hi Solomon, Thank you for your clear and fast response. I will try and follow your instructions, in order to fit the inlet boundary condition. Best regards, Lior.
adi.ptb
April 20, 2014 07:26
hi, I want to simulate a pulsating unsteady turbulent flow in a pipe. Due to symmetry I generated half of its geometry, and I used a UDF to define the inlet velocity and temperature. I get the solution converged, but when I plot the area-weighted average vs flow time at the centerline I keep getting zero values for axial velocity. Can anyone help me please?
The whole point of using IronPython as opposed to just Python is that it promises access to .NET and what this really means is access to the .NET Framework.
You can create a range of project types but the basic principles of interop are the same in each. Let's start with a Windows Forms application - create a new project. You can't make use of the drag-and-drop designer to build a user interface, so you have to create all of the objects needed and wire up event handlers manually.
To create a window with a textbox and a button you would first create a form class that inherits from Form and define all of the controls needed:
```python
import clr
clr.AddReference('System.Windows.Forms')
from System.Windows.Forms import *
```
```python
class MyForm(Form):
    def __init__(self):
        self.Button1 = Button()
        self.Button1.Text = 'Click Me'
        self.Button1.Click += OnClick
        self.Controls.Add(self.Button1)
        self.Textbox1 = TextBox()
        self.Textbox1.Text = 'Ready'
        self.Textbox1.Top = 50
        self.Controls.Add(self.Textbox1)
```
The initial instructions load the assemblies and references we need to use. Next the class is defined and its constructor creates a Button and a Textbox. In a real example you would have to spend a lot more effort setting properties to achieve a reasonable layout.
To define a click event handler for the button we need to define the OnClick method:
```python
def OnClick(source, event):
    form.Textbox1.Text = 'clicked'
```
This simply has to have the correct number of parameters. Finally we need to create the instance of the MyForm class and start the whole application running:
```python
form = MyForm()
Application.Run(form)
```
Now the program will display the message in the textbox when you click on the button.
Notice that you can use the .NET objects just as if they were Python objects and the only real difference is that you have to remember to make use of the self keyword as appropriate.
To make use of any .NET classes you have to first import the relevant assemblies and namespaces; a standard Windows Forms project would need something like:

```python
import System
from System.Windows.Forms import *
```

The statement:

```python
a = System.String("A string")
form.TextBox1.Text = a
```
creates a .NET String which can be assigned to any .NET property that accepts a String. If you don't want to use a fully qualified name then you can use:
```python
from System import String
a = String("A string")
```

Normally, a statement such as:

```python
c = "A Python string"
form.TextBox1.Text = c.ToUpper()
```
will fail, as a Python string object doesn't have a ToUpper method.
However, if you try the same thing after importing System you will discover that it does work.
In other words, importing a .NET class can change the behavior of your program.
This is very odd and something you need to be aware, or perhaps beware, of.
IRC log of tagmem on 2003-06-30
Timestamps are in UTC.
18:55:20 [RRSAgent]
RRSAgent has joined #tagmem
18:58:26 [Chris]
Chris has joined #tagmem
18:58:30 [Norm]
Norm has joined #tagmem
18:58:43 [Norm]
zakim, who's on the phone?
18:58:43 [Zakim]
sorry, Norm, I don't know what conference this is
18:58:44 [Zakim]
On IRC I see Norm, Chris, RRSAgent, Zakim, DanCon, Stuart, Ian
18:58:47 [Norm]
zakim, this is tag
18:58:47 [Zakim]
sorry, Norm, I do not see a conference named 'tag'
18:58:50 [Norm]
hmph
18:59:12 [DanCon]
wierd
18:59:21 [DanCon]
ah... the conference hasn't started.
18:59:33 [Zakim]
TAG_Weekly()2:30PM has now started
18:59:39 [Zakim]
+??P0
18:59:57 [Stuart]
zakim, ??p0 is me
18:59:57 [Zakim]
+Stuart; got it
19:00:00 [Zakim]
+Norm
19:00:03 [Zakim]
-Stuart
19:00:05 [Zakim]
+Stuart
19:00:33 [Ian]
zakim, call Ian-BOS
19:00:33 [Zakim]
ok, Ian; the call is being made
19:00:34 [Zakim]
+Ian
19:01:22 [TBray]
TBray has joined #tagmem
19:01:41 [Zakim]
+Tim_Bray
19:01:44 [Stuart]
19:02:00 [Zakim]
+DanC
19:02:11 [Zakim]
+Chris
19:02:30 [Stuart]
zakim, who is here
19:02:30 [Zakim]
Stuart, you need to end that query with '?'
19:02:34 [Stuart]
zakim, who is here?
19:02:34 [Zakim]
On the phone I see Stuart, Norm, Ian, Tim_Bray, DanC, Chris
19:02:35 [Zakim]
On IRC I see TBray, Norm, Chris, RRSAgent, Zakim, DanCon, Stuart, Ian
19:03:37 [Stuart]
zakim, who is here?
19:03:37 [Zakim]
On the phone I see Stuart, Norm, Ian, Tim_Bray, DanC, Chris
19:03:38 [Zakim]
On IRC I see TBray, Norm, Chris, RRSAgent, Zakim, DanCon, Stuart, Ian
19:04:23 [Ian]
Roll call: NW, CL, SW, DC, IJ
19:04:28 [Ian]
and Tim Bray
19:05:04 [Ian]
SW (Chair), IJ (Chair)
19:05:09 [Ian]
SW (Chair), IJ (Scribe)
19:05:21 [Ian]
# Accept minutes of 30 Jun teleconference?
19:05:23 [Zakim]
+TimBL
19:05:29 [Ian]
19:05:36 [Ian]
Accepted 30 Jun minutes
19:05:45 [Ian]
# Accept this agenda?
19:05:52 [Ian]
19:05:58 [Ian]
Next meeting: 7 July.
19:06:01 [Ian]
Possible regrets: PC
19:06:03 [Ian]
Regrets: TBL
19:06:22 [timbl]
timbl has joined #tagmem
19:07:00 [Ian]
-----
19:07:13 [Ian]
# Next meeting with Voice WG?
19:07:21 [Ian]
SW: IJ and I met with some reps from Voice WG last week.
19:07:25 [Ian]
SW: They are revising some proposed text.
19:07:44 [Ian]
SW: They will circulate to IJ and me for review. If all goes well, I'd like to schedule some time with them to confirm it.
19:08:24 [Ian]
Proposed: Voice WG expected to join our 7 July teleconf for a small piece.
19:08:56 [Ian]
SW: If we have material from them we'll try to include them in next week's call.
19:08:59 [Ian]
----------
19:09:10 [Ian]
Proposed three-week summer break: No meeting 18 Aug, 25 Aug, 1 Sep
19:09:17 [Ian]
DC: Ok.
19:09:27 [timbl]
Break: ok by me
19:09:27 [Ian]
TBL: Ok
19:09:37 [DanCon]
"6. Proposed three-week summer break: No meeting 18 Aug, 25 Aug, 1 Sep"
19:09:48 [Ian]
Regrets 11 July: SW, TB
19:11:12 [TBray]
Likewise: no certainty required
19:11:33 [Ian]
So expectation is to not meet on those dates; if enough people want to, they can schedule meetings then.
19:11:37 [Ian]
------------
19:11:42 [Ian]
27 June 2003 Working Draft of Arch Doc published.
19:11:49 [Ian]
19:12:36 [Ian]
IJ: New draft published Friday.
19:12:47 [Ian]
IJ: Comments are coming in on previous draft; I haven't read them.
19:12:53 [Ian]
IJ: TB and DC did editorial pass.
19:13:24 [Ian]
DC: Balance between story and formal spec to my liking now.
19:13:32 [Ian]
DC: I'd like to add an illustration for the travel scenario.
19:13:43 [Ian]
TB: I have discomfort on the section on authority.
19:14:12 [Ian]
TB: I don't know why we have a section if not for programmers' benefits.
19:14:31 [Ian]
DC: I think this will be connected to an issue TBL is about to raise.
19:14:42 [Ian]
(section 2.3)
19:15:10 [Ian]
SW: We are likely to be looking at this document at ftf meeting.
19:15:20 [Ian]
Action item review for Arch Doc
19:15:26 [Ian]
1. Action RF 2003/06/02: Rewrite section 5. Section 5 is expected to be short.
19:15:31 [Ian]
SW: RF said to leave open.
19:15:40 [Ian]
Completed action DO 2003/06/02: Write up a couple of paragraphs on extensibility for section 4.
19:15:43 [TBray]
For the record: I am substantially uncomfortable with
because I don't understand what normative effect it would have on the behavior of implementors. If none, lose it. If some, specifiy it.
19:16:08 [DanCon]
keep in mind web arch impacts folks that read and write documents, not just coders, tim bray
19:16:10 [Ian]
4. Action PC 2003/06/16: Send second draft of AC announcement regarding TAG's last call expectations/thoughts and relation to AC meeting feedback.
19:16:14 [Ian]
SW: I have no update on that action,.
19:16:20 [Ian]
----------
19:16:27 [Ian]
Findings
19:16:40 [DanCon]
(not to say I'm 100% happy with #URI-authority section as written)
19:16:46 [Ian]
New draft of "Client handling of MIME headers"
19:16:58 [Ian]
19:18:10 [Ian]
IJ: Next steps? Does anyone want to read before we say "We think we're done"?
19:18:32 [Ian]
CL: Has the SMIL IG been contacted?
19:18:36 [Ian]
IJ: No.
19:18:57 [TBray]
Scenario 2 in Sectio 2 has funny formatting; grey surround-box misshapen
19:19:27 [Ian]
Action CL, NW: Read this draft by next week.
19:19:30 [Chris]
i will review it (skimmed but not read in detail)
19:19:58 [Ian]
DC: Should this go to public-tag-review?
19:20:18 [Ian]
SW: I hesitate.
19:20:24 [DanCon]
public-tag-announce, that is
19:20:26 [Ian]
SW: People reading minutes will see this discussion.
19:20:41 [Chris]
if people want to discuss it that should happen on www-tag
19:21:09 [Ian]
Action IJ: Announce on www-tag that we expect to approve this finding in a week or so. Last chance for comments.
19:21:28 [Chris]
this is also relevant to the error handling issue
19:21:28 [Ian]
"How should the problem of identifying ID semantics in XML languages be addressed in the absence of a DTD?
19:21:28 [Ian]
"
19:21:46 [Ian]
19:22:05 [Ian]
CL: I haven't completely updated.
19:22:37 [Ian]
CL: But nearly done.
19:22:44 [Ian]
CL: We should update with latest info.
19:22:49 [Ian]
SW: Should we offer an opinion?
19:23:02 [DanCon]
"No conclusion is presented." --
19:23:11 [Ian]
CL: The XML Core WG has been discussing this. I don't think we should pick a favorite from the TAG.
19:23:16 [Ian]
NW: I agree with CL on that point.
19:23:28 [Ian]
NW: The Core WG is working on this.
19:23:58 [Ian]
IJ: Next steps?
19:24:18 [Ian]
Action CL: Revise this draft finding with new input from reviewers.
19:24:30 [Chris]
7 july
19:24:35 [Chris]
due date
19:24:46 [Ian]
--------------
19:24:52 [Ian]
Review of issues list
19:24:58 [Ian]
19:25:02 [Ian]
Summary from SW:
19:25:18 [Ian]
19:25:24 [Chris]
ironically the one i was working on today was on the "does not expect to discuss" ;-)
19:25:27 [Ian]
SW to TBL: We skipped over httpRange-14 last week.
19:25:30 [timbl]
Zakim, who is on the call?
19:25:30 [Zakim]
On the phone I see Stuart, Norm, Ian, Tim_Bray, DanC, Chris, TimBL
19:25:46 [Ian]
TBL: I haven't talked to RF about httpRange-14 lately.
19:26:53 [Ian]
SW: Current expectation for issue 14 is (1) not required to be closed for last call draft and (2) no plan to discuss at ftf meeting
19:27:31 [Ian]
q+
19:28:17 [Ian]
q-
19:28:23 [Stuart]
19:29:44 [Ian]
--------------
19:29:48 [Ian]
URIEquivalence-15
19:29:59 [Ian]
TB: Pending, since RFC2396bis not finished.
19:31:16 [Ian]
TB: There was never a formal expression from TAG on those drafts. But every issue that arose we hammered out.
19:32:27 [Ian]
[Question of whether we should have a finding to close off the issue]
19:32:31 [Ian]
TB: I don't think we should.
19:32:41 [Ian]
CL: Mark your drafts as obsoleted.
19:33:16 [Chris]
19:34:34 [Ian]
TBL: DC made a comment in a meeting with which I agreed - there are some axioms about resolution of relative URis that are not written in rfc2396bis.
19:34:45 [Ian]
DC: I do worry about that.
19:35:25 [Ian]
TBL: Normalization of "../" and "./" for example. Need a statement about invariants.
19:35:53 [Ian]
SW: I suggest you raise an issue with RF on the URI list.
19:37:09 [Ian]
TB: W.r.t. last call, I think we have a dependence on RFC2396bis. We are stuck with a reference to a moving target for now...
19:37:32 [Ian]
[Some agreement that not much need for ftf time on this issue.]
19:37:43 [DanCon]
15 | Yes | No # my summary
19:37:43 [Ian]
DC: If there's spare time, I'd like to, but don't squeeze something else off.
19:38:08 [Ian]
-----
19:38:16 [Ian]
# HTTPSubstrate-16
19:39:00 [Ian]
DC: I think that LM did the comparison that we asked RF to do.
19:39:12 [TBray]
19:39:42 [Ian]
DC: I think that msg merits discussion. Not sure whether in the path to last call.
19:40:07 [Ian]
DC: The business about why not create a new URI scheme is relevant here.
19:40:35 [Ian]
DC: Suppose ldap were being designed today. They could design a new protocol and make a new URI scheme. Or they could use HTTP as a substrate.
19:40:50 [Ian]
DC: The principle about don't make up new URI schemes and HTTP as substrate are related.
19:41:10 [Ian]
TB: While that's fair, I think that our comments are sufficiently general so that we don't need to change anything.
19:41:22 [Ian]
TB: If we want to provide information about when it is worth the cost, that might be ok.
19:42:14 [Ian]
16 - Resolve for last call: No. Discussion at ftf: Spare time.
19:42:29 [Ian]
---
19:42:29 [Ian]
# errorHandling-20
19:42:32 [Ian]
See notes from CL
19:42:35 [DanCon]
I'd like to know Orchard's sense of propority of HTTPSubstrate-16
19:42:50 [Ian]
19:43:02 [Ian]
CL: "Ignorability" is something I'd like to discuss.
19:43:22 [Ian]
CL: If you get a file and it has an attribute in a namespace that you are supposed to understand, then that's an error.
19:43:39 [Ian]
CL: But if you add your own attribute in your own namespace, considered good way to extend.
19:43:46 [Ian]
CL: I think we should stay clear of extensions to XML.
19:44:19 [Ian]
TB: Some errors depend on application...
19:44:40 [Ian]
TB: In 3.2.1 of latest arch doc, bullet on attention to error handling.
19:45:14 [Chris]
dan - yes, the notes say that and give examples of harm from silent recovery and attempted recovery
19:45:26 [Ian]
TB: We might put something in section on XML...I would kind of be inclined to declare victory based on what's in 3.2.1
19:45:37 [Chris]
ack dancon
19:45:37 [Zakim]
DanCon, you wanted to say I'd like "silent recovery from errors considered harmful" in this last-call draft
19:45:44 [Chris]
q+
19:45:55 [timbl]
q+ to suggest we have said a bit too much about errors - one cannot tell people what to do if they do have an error.
19:45:59 [Ian]
DC: What I want in the last call draft is that "Silent recovery from error is harmful" to be in a box; critical for last call.
19:46:46 [Ian]
CL: Notes that I sent in gave some examples of bad consequences of silent recovery.
19:47:35 [TBray]
q+ to agree with Dan about getting "silent failure considered harmful" into webarch before last call
19:47:40 [Ian]
[CL cites example of browsers that consider </p> an error and treat it as <p>, so extra vertical space]
19:48:04 [Stuart]
ack Chris
19:48:13 [DanCon]
ack timbl
19:48:13 [Zakim]
timbl, you wanted to suggest we have said a bit too much about errors - one cannot tell people what to do if they do have an error.
19:48:45 [Chris]
aha - be careful what specs say about errors, that sort of thing?
19:48:47 [Ian]
TBL: I'm concerned about going too far in direction of saying how to design an application.
19:49:05 [Chris]
carefully distinguish from errors (fatal) and warnings
19:49:13 [DanCon]
I share timbl's concern. I still stand by "silent recovery from errors considered harmful"
19:49:24 [Chris]
q+ to suggest this merits a little discussion time at f2f
19:49:37 [Ian]
[TBL cites example of inconsistent RDF; application-dependent scenarios]
19:50:13 [Chris]
at user option is no use in a batch job - good point TimBL
19:50:35 [Ian]
TBL: I don't like the SGML attitude of specifying the behavior of an agent. Just say what the tags mean.
19:51:24 [Ian]
TBL: Don't tie down specs with overly narrow error-handling requirements.
19:51:27 [Ian]
ack TBray
19:51:27 [Zakim]
TBray, you wanted to agree with Dan about getting "silent failure considered harmful" into webarch before last call
19:51:48 [Ian]
TB: I think we have consensus that "silent recovery from errors" is probably bad behavior in the context of web arch.
19:51:53 [Chris]
have separate conformance reuirements for correct docs, correct generators, and correct readers
19:52:04 [Ian]
TB: I'd like to spend some time at ftf meeting on this.
19:52:05 [Chris]
and correct user agents a s asubset of readers
19:52:26 [Ian]
TB: XML's "halt and catch fire" might have been too much...
19:52:38 [Ian]
DC: I second talking about ftf.
19:52:46 [Ian]
CL: Not sure this is in the way of last call.
19:52:55 [Ian]
CL Yes to discussion at ftf
19:53:05 [DanCon]
I think we might end up splitting it in half and closing one half.
19:53:17 [Ian]
TB: This one might not require a finding.
19:54:03 [Chris]
19:54:11 [Ian]
errorHandling-20 : What should specifications say about error handling?
19:54:30 [Ian]
CL: Specs, in the conformance section should be clear about when they are talking about documents, generators, and consumers.
19:54:42 [DanCon]
"What should specifications say about error handling?"
19:55:09 [Ian]
20: Schedule at ftf, try to close before last call.
19:55:11 [Ian]
---
19:55:13 [Ian]
xlinkScope-23
19:55:28 [Ian]
SW: Last action was to write to HTCG and XML Core WG.
19:55:36 [Chris]
q+
19:55:36 [Ian]
SW: I've had no feedback from either group.
19:55:57 [Ian]
CL: The XML CG has discussed. A task force to be created.
19:56:12 [Ian]
CL: The HTCG has discussed briefly. Some people seem interested....3/4 of a task force formed...
19:56:23 [TBray]
Suggest not on critical path for last cal
19:56:24 [Ian]
CL: Moving forward, but not much momentum.
19:57:01 [Ian]
Action CL: Ping the chairs of those groups asking for an update on xlinkScope-23.
19:57:42 [Ian]
SW: I set expectations that TAG would have a last look.
19:58:00 [Ian]
DC to TBL: Is what's going on with xlinkScope-23 consistent with your expectations?
19:58:23 [Stuart]
From:
" We believe that since we last considered this issue, there
19:58:24 [Stuart]
has been substantially more input to the discussion, and thus we will commit
19:58:24 [Stuart]
to taking up the issue once again and, should we achieve consensus, publish
19:58:24 [Stuart]
that position as our contribution to work in this area.
19:58:24 [Stuart]
"
19:59:42 [Ian]
TBL: I have the feeling that the way this will be resolved "nicely" is a new version of xlink that is simpler.
19:59:57 [timbl]
than xlink or hlink
20:00:08 [Ian]
DC: My opinion is "no" and "no".
20:00:11 [Ian]
(for 23)
20:00:32 [Ian]
[CL action stands]
20:00:45 [Ian]
CL: I agree with "no" and "no"
20:00:46 [Ian]
--------
20:00:54 [Ian]
contentTypeOverride-24
20:01:08 [Ian]
SW: I think we'll have this resolved for last call. Probably don't need to discuss at ftf.
20:01:08 [DanCon]
on 24, I suggest yes for lc, no for ftf. (what Stuart just said)
20:01:15 [Ian]
-----------
20:01:23 [Ian]
contentPresentation-26
20:01:46 [Ian]
CL: I was working on this one today.
20:02:06 [DanCon]
on 26, I guess I'm no for lc, yes for ftf
20:02:22 [Ian]
CL: I'd like to have some discussion before last call. And discussion at ftf since not yet discussed.
20:02:26 [DanCon]
(don't mind trying for 26 for lc)
20:02:34 [Ian]
CL: The finding I'm writing is a bit wordy....
20:03:19 [Ian]
CL: If we all agree, could be slipped in; but don't think it needs to be in before last call. But I'd prefer.
20:03:42 [Ian]
DC: Worth a try.
20:03:54 [Ian]
---
20:03:57 [Ian]
IRIEverywhere-27
20:04:30 [TBray]
q+
20:04:52 [Ian]
ack Chris
20:04:52 [Zakim]
Chris, you wanted to suggest this merits a little discussion time at f2f and to
20:04:54 [Ian]
ack TBray
20:05:13 [Ian]
TB: I think that after back and forth, we decided that the IRI draft was not cooked enough yet.
20:05:20 [Chris]
q+ to talk about a new and related issue
20:05:36 [Ian]
TB: I don't think we need to solve before last call.
20:05:47 [Ian]
TB: I don't think we need to discuss at ftf either.
20:06:20 [Ian]
ack Chris
20:06:20 [Zakim]
Chris, you wanted to talk about a new and related issue
20:06:38 [Ian]
CL: New and related issue - When do you use URIs for labels for things?
20:06:50 [Ian]
[Or should you use strings]
20:07:35 [Ian]
CL: I've started a writeup on this one...
20:07:48 [timbl]
q+
20:08:05 [Ian]
SW: I hear "no" and "no" for 27.
20:08:18 [Ian]
TBL: IRIs extend 15 into IRIs.
20:08:52 [Ian]
TBL: I think we could even work on this independent of IRI spec.
20:09:02 [TBray]
q+ to make a procedural suggestion
20:09:08 [Ian]
TBL: Is this urgent?
20:09:14 [Chris]
yes its urgent
20:09:15 [Ian]
ack timbl
20:09:18 [Ian]
ack DanCon
20:09:18 [Zakim]
DanCon, you wanted to wonder whether this merits ftf time
20:09:23 [Chris]
according to the XML activity
20:09:23 [Ian]
DC: I'd like ftf time on this one.
20:09:30 [Ian]
ack TBray
20:09:30 [Zakim]
TBray, you wanted to make a procedural suggestion
20:09:54 [Ian]
-----
20:09:55 [Ian]
# fragmentInXML-28
20:10:55 [Ian]
[No actions]
20:11:08 [Ian]
DC: Please add this to the pile containing 6, 37, 38
20:11:19 [Ian]
[TBL: And soon-to-be 39]
20:11:57 [Ian]
DC: "no" and "yes"
20:12:03 [Chris]
binaryXML-30 er how to discuss member-only stuff??
20:12:05 [Ian]
----
20:12:08 [Ian]
binaryXML-30
20:12:39 [Ian]
CL: I'd like to do a survey for this issue.
20:12:56 [DanCon]
why does this have a "resultion summary" if it's still open?
20:13:15 [Ian]
[Dan, it should say "draft"]
20:13:37 [Ian]
Summary from CL:
20:13:45 [Ian]
20:14:15 [Ian]
TBL: I'd like to add OGC to the entry for this issues list.
20:15:01 [DanCon]
I don't see how "draft" would resolve the apprent contradition between an issue being in "assigned" state and having a "resolution summary". not urgent.
20:15:19 [Ian]
I know.. :(
20:16:01 [Ian]
SW: No and No
20:16:12 [Ian]
----
20:16:32 [Ian]
metadataInURI-31
20:16:47 [Ian]
SW: I hope to put out for TAG review this week.
20:17:15 [Ian]
IJ: Seem slike 31 is low-hanging fruit.
20:17:32 [Ian]
SW: No, Yes.
20:17:34 [Ian]
----
20:17:54 [Ian]
TB: I suggest that 31 be a Yes before last call.
20:17:59 [Ian]
DC: there's some relevant text already.
20:18:07 [Ian]
TB: Make sure finding and arch doc in accord.
20:18:12 [Ian]
----
20:18:22 [Ian]
xmlIDSemantics-32
20:18:30 [Ian]
CL: I suggest we leave in pending.
20:18:38 [Ian]
CL: "No", "No"
20:18:42 [Ian]
---
20:18:50 [Ian]
mixedUIXMLNamespace-33
20:19:14 [Ian]
No, no.
20:19:27 [timbl]
no no
20:19:40 [Ian]
CL: I'm happy to have discussion at ftf and write that up.
20:19:48 [Ian]
TBL: Connects to composable things.
20:19:49 [Ian]
---
20:19:56 [Ian]
xmlFunctions-34
20:20:13 [Ian]
TBL: No, no
20:20:27 [Ian]
---
20:20:33 [Ian]
RDFinXHTML-35
20:20:34 [Ian]
DC: No, yes
20:20:45 [Ian]
DC: There is movement on this; I'd like some ftf time.
20:20:50 [Ian]
---
20:20:58 [Ian]
siteData-36
20:21:11 [Ian]
DC, TB: I'd like some ftf time on this.
20:21:13 [Norm]
+1
20:21:16 [Ian]
TB: I don't think impacts arch doc.
20:21:21 [Ian]
DC: Agreed
20:21:27 [Ian]
---
20:21:28 [Chris]
no,no for me
20:21:32 [Ian]
--
20:21:33 [Ian]
# abstractComponentRefs-37
20:21:42 [Ian]
and * putMediaType-38
20:21:47 [Ian]
(cluster with 6 and 28)
20:22:08 [Ian]
-----------------
20:23:11 [Ian]
----
20:23:16 [Ian]
Arch Doc
20:23:21 [Ian]
TB: Things that we need to worry about:
20:23:25 [Ian]
a) Chap 4 still missing
20:23:53 [DanCon]
"2.3. URI Authority"
20:23:53 [Ian]
TB: I think we need time at ftf to talk about sections 2.3 and 3.2.1
20:24:04 [Ian]
q+
20:24:07 [DanCon]
"3.2.1. Desirable Characteristics of Format Specifications"
20:24:37 [Chris]
3.2.2.2. Final-form v. Reusable conflicts in some ways with cp26
20:25:44 [Ian]
20:25:50 [Ian]
20:26:19 [timbl]
Ok, so we have a commitment to put it on but nothing to reference yet.
20:27:08 [Ian]
IJ: I'd prune this section.
20:27:20 [DanCon]
ack ian
20:27:22 [Ian]
IJ: Also, some of this text not specific to Web arch.
20:27:25 [Ian]
ack DanCon
20:27:25 [Zakim]
DanCon, you wanted to note xlinkScope-23 has a home in "3.2.4. Embedding Hyperlinks in Representations"
20:27:52 [Ian]
DC: I see xlinkscope has a home in 3.2.4
20:28:50 [Ian]
IJ: I think CL is working on too much stuff right now.
20:29:31 [Ian]
CL actions include: error handling, content/presentation
20:29:42 [Ian]
ADJOURNED
20:29:48 [Zakim]
-Norm
20:29:50 [Zakim]
-Tim_Bray
20:29:51 [Zakim]
-Stuart
20:29:51 [Ian]
RRSAgent, stop | http://www.w3.org/2003/06/30-tagmem-irc.html | CC-MAIN-2014-49 | refinedweb | 4,196 | 79.09 |
must define a top-level `hybridMain()` function that takes a `StreamChannel` argument and, optionally, an `Object` argument to which `message` will be passed. Note that `message` must be JSON-encodable.
For example:
import "package:stream_channel/stream_channel.dart"; hybridMain(StreamChannel channel, Object message) { // ... }.
Returns a StreamChannel that's connected to the channel passed to
hybridMain(). Only JSON-encodable objects may be sent through this
channel. If the channel is closed, the hybrid isolate is killed. If the
isolate is killed, the channel's stream will emit a "done" event.
Any unhandled errors loading or running the hybrid isolate will be emitted
as errors over the channel's stream. Any calls to
print() in the hybrid
isolate will be printed as though they came from the test that created the
isolate.
Code in the hybrid isolate is not considered to be running in a test
context, so it can't access test functions like
expect() and
expectAsync().
By default, the hybrid isolate is automatically killed when the test finishes running. If `stayAlive` is `true`, it won't be killed until the entire test suite finishes running.
**Note**: If you use this API, be sure to add a dependency on the **stream_channel** package, since you're using its API as well!
Implementation
```dart
StreamChannel spawnHybridUri(uri, {Object message, bool stayAlive = false}) { ... }
```
PEGs for Nim, another take
"Because friends don't let friends write parsers by hand"
NPeg is a pure Nim pattern matching library. It provides macros to compile patterns and grammars (PEGs) to Nim procedures which will parse a string and collect selected parts of the input. PEGs are not unlike regular expressions, but offer more power and flexibility, and have fewer ambiguities. (More about PEGs on Wikipedia)
Some use cases where NPeg is useful are configuration or data file parsers, robust protocol implementations, input validation, lexing of programming languages or domain specific languages.
Some NPeg highlights:
- Grammar definitions and Nim code can be freely mixed. Nim code is embedded using the normal Nim code block syntax, and does not disrupt the grammar definition.
- NPeg-generated parsers can be used both at run and at compile time.
- NPeg offers various methods for tracing, optimizing and debugging your parsers.
- NPeg can parse sequences of any data types, also making it suitable as a stage-two parser for lexed tokens.
- NPeg can draw cool diagrams.
Here is a simple example showing the power of NPeg: The macro `peg` compiles a grammar definition into a `parser` object, which is used to match a string and place the key-value pairs into the Nim table `words`:
```nim
import npeg, strutils, tables

type Dict = Table[string, int]

let parser = peg("pairs", d: Dict):
  pairs <- pair * *(',' * pair)
  word <- +Alpha
  number <- +Digit
  pair <- >word * '=' * >number:
    d[$1] = parseInt($2)

var words: Table[string, int]
doAssert parser.match("one=1,two=2,three=3,four=4", words).ok
echo words
```
Output:
{"two": 2, "three": 3, "one": 1, "four": 4}
A brief explanation of the above code:
- The macro `peg` is used to create a parser object, which uses `pairs` as the initial grammar rule to match. The variable `d` of type `Dict` is available inside the parser's code block captures for storing the parsed data.
- The rule `pairs` matches one `pair`, followed by zero or more times (`*`) a comma followed by a `pair`.
- The rules `word` and `number` match a sequence of one or more (`+`) alphabetic characters or digits, respectively. `Alpha` and `Digit` are pre-defined rules matching the character classes `{'A'..'Z','a'..'z'}` and `{'0'..'9'}`.
- The rule `pair` matches a `word`, followed by an equals sign (`=`), followed by a `number`.
- The `word` and `number` in the `pair` rule are captured with the `>` operator. The Nim code fragment below this rule is executed for every match, and stores the captured word and number in the `words` Nim table.
The `patt()` and `peg()` macros can be used to compile parser functions:

- `patt()` creates a parser from a single anonymous pattern.
- `peg()` allows the definition of a set of (potentially recursive) rules making up a complete grammar.
The result of these macros is an object of the type `Parser`, which can be used to parse a subject:

```nim
proc match(p: Parser, s: string): MatchResult
proc matchFile(p: Parser, fname: string): MatchResult
```
The above `match` functions return an object of the type `MatchResult`:

```nim
MatchResult = object
  ok: bool
  matchLen: int
  matchMax: int
  ...
```
- `ok`: A boolean indicating if the matching succeeded without error. Note that a successful match does not imply that all of the subject was matched, unless the pattern explicitly matches the end-of-string.
- `matchLen`: The number of input bytes of the subject that successfully matched.
- `matchMax`: The highest index into the subject that was reached during parsing, even if matching was backtracked or did not succeed. This offset is usually a good indication of the location where the matching error occurred.
The string captures made during the parsing can be accessed with:
proc captures(m: MatchResult): seq[string]
A simple pattern can be compiled with the `patt` macro. For example, the pattern below splits a string by white space:
```nim
let parser = patt *(*' ' * > +(1-' '))
echo parser.match(" one two three ").captures
```
Output:
@["one", "two", "three"]
The `patt` macro can take an optional code block which is used as a code block capture for the pattern:
```nim
var key, val: string

let p = patt >+Digit * "=" * >+Alpha:
  (key, val) = ($1, $2)

assert p.match("15=fifteen").ok
echo key, " = ", val
```
The `peg` macro provides a method to define (recursive) grammars. The first argument is the name of the initial pattern, followed by a list of named patterns. Patterns can now refer to other patterns by name, allowing for recursion:
let parser = peg "ident": lower
The order in which the grammar patterns are defined affects the generated parser. Although NPeg could always reorder, this is a design choice to give the user more control over the generated parser:
- When a pattern `P1` refers to a pattern `P2` which is defined before `P1`, `P2` will be inlined in `P1`. This increases the generated code size, but generally improves performance.
- When a pattern `P1` refers to a pattern `P2` which is defined after `P1`, `P2` will be generated as a subroutine which gets called from `P1`. This reduces code size, but might also result in a slower parser.
The NPeg syntax is similar to normal PEG notation, but some changes were made to allow the grammar to be properly parsed by the Nim compiler:

- NPeg uses prefix instead of postfix operators for `*`, `+`, `-` and `?`.
- Ordered choice uses `|` instead of `/` because of operator precedence.
- The explicit `*` infix operator is used for sequences.
NPeg patterns and grammars can be composed from the following parts:
Atoms:
```
0             # matches always and consumes nothing
1             # matches any character
n             # matches exactly n characters
'x'           # matches literal character 'x'
"xyz"         # matches literal string "xyz"
i"xyz"        # matches literal string, case insensitive
{'x'..'y'}    # matches any character in the range from 'x'..'y'
{'x','y','z'} # matches any character from the set
```
Operators:
```
P1 * P2   # concatenation
P1 | P2   # ordered choice
P1 - P2   # matches P1 if P2 does not match
(P)       # grouping
!P        # matches everything but P
&P        # matches P without consuming input
?P        # matches P zero or one times
*P        # matches P zero or more times
+P        # matches P one or more times
@P        # search for P
P[n]      # matches P n times
P[m..n]   # matches P m to n times
```
Precedence operators:
```
P ^ N   # P is left associative with precedence N
P ^^ N  # P is right associative with precedence N
```
String captures:
>P # Captures the string matching P
Back references:
```
R("tag", P)  # Create a named reference for pattern P
R("tag")     # Matches the given named reference
```
Error handling:
```
E"msg"  # Raise an exception with the given message
```
In addition to the above, NPeg provides built-in shortcuts for common atoms, corresponding to the POSIX character classes: `Alnum`, `Alpha`, `Blank`, `Cntrl`, `Digit`, `Graph`, `Lower`, `Print`, `Punct`, `Space`, `Upper` and `Xdigit`.
Atoms
Atoms are the basic building blocks for a grammar, describing the parts of the subject that should be matched.
`0` / `1` / `n`

The int literal atom `n` matches exactly n bytes. `0` always matches, but does not consume any data.
`'x'` / `"xyz"` / `i"xyz"`

Characters and strings are matched literally. If a string is prefixed with `i`, it will be matched case-insensitively.
`{'x','y'}`

Character set notation is similar to native Nim: a set consists of zero or more comma-separated characters or character ranges.

```
{'x'..'y'}    # matches any character in the range from 'x'..'y'
{'x','y','z'} # matches any character from the set 'x', 'y', and 'z'
```

The set syntax `{}` is flexible and can take multiple ranges and characters in one expression, for example `{'0'..'9','a'..'f','A'..'F'}`.
NPeg provides various prefix, infix and suffix operators. These operators combine or transform one or more patterns into expressions, building larger patterns.
P1 * P2
o──[P1]───[P2]──o
The pattern `P1 * P2` returns a new pattern that matches only if first `P1` matches, followed by `P2`. For example, `"foo" * "bar"` would only match the string `"foobar"`.
P1 | P2
o─┬─[P1]─┬─o ╰─[P2]─╯
The pattern `P1 | P2` first tries to match pattern `P1`. If this succeeds, matching will proceed without trying `P2`. Only if `P1` cannot be matched will NPeg backtrack and try to match `P2` instead. For example, `("foo" | "bar") * "fizz"` would match both `"foofizz"` and `"barfizz"`.
NPeg optimizes the `|` operator for characters and character sets: the pattern `'a' | 'b' | 'c'` will be rewritten to the character set `{'a','b','c'}`.
P1 - P2
The pattern `P1 - P2` matches `P1` only if `P2` does not match. This is equivalent to `!P2 * P1`:
━━━━ o──[P2]─»─[P1]──o
NPeg optimizes the `-` operator for characters and character sets: the pattern `{'a','b','c'} - 'b'` will be rewritten to the character set `{'a','c'}`.
(P)
Parentheses are used to group patterns, similar to normal arithmetic expressions.
!P
━━━ o──[P]──o
The pattern `!P` returns a pattern that matches only if the input does not match `P`. In contrast to most other patterns, this pattern does not consume any input. A common usage for this operator is the pattern `!1`, meaning "only succeed if there is not a single character left to match", which is only true at the end of the string.
&P
━━━ ━━━ o──[P]──o
The pattern `&P` matches only if the input matches `P`, but will not consume any input. This is equivalent to `!!P`, which is why it is denoted by a double negation in the railroad diagram; not very pretty, unfortunately.
?P
╭──»──╮ o─┴─[P]─┴─o
The pattern `?P` matches if `P` can be matched zero or one times, so it essentially succeeds whether `P` matches or not. For example, `?"foo" * "bar"` matches both `"foobar"` and `"bar"`.
*P
╭───»───╮ o─┴┬─[P]─┬┴─o ╰──«──╯
The pattern `*P` tries to match as many occurrences of pattern `P` as possible; this operator always behaves greedily. For example, `*"foo" * "bar"` matches `"bar"`, `"foobar"`, `"foofoobar"`, etc.
+P
o─┬─[P]─┬─o ╰──«──╯
The pattern `+P` matches `P` at least once, but also more times. It is equivalent to `P * *P`; this operator always behaves greedily.
@P
The `@P` operator searches for pattern `P` using an optimized implementation. It is equivalent to `*(1 - P) * P`, which can be read as "try to match as many characters as possible not matching `P`, and then match `P`":

```
   ╭─────»─────╮
   │   ━━━     │
o─┴┬─[P]─»─1─┬┴»─[P]──o
   ╰────«────╯
```

Note that this operator does not allow capturing the skipped data up to the match; if this is required you can manually construct a grammar to do this.
`P[n]`

The pattern `P[n]` matches `P` exactly `n` times. For example, `"foo"[3]` only matches the string `"foofoofoo"`:
o──[P]─»─[P]─»─[P]──o
`P[m..n]`

The pattern `P[m..n]` matches `P` at least `m` and at most `n` times. For example, `"foo"[1..3]` matches `"foo"`, `"foofoo"` and `"foofoofoo"`:
╭──»──╮ ╭──»──╮ o──[P]─»┴─[P]─┴»┴─[P]─┴─o
Note: This is an experimental feature, the implementation or API might change in the future.
Precedence operators allow for the construction of "precedence climbing" or Pratt parsers with NPeg. The main use for this feature is building parsers for programming languages that follow the usual precedence and associativity rules of arithmetic expressions.
`P ^ N` (left associative with precedence `N`):

```
<1< o──[P]──o
```

`P ^^ N` (right associative with precedence `N`):

```
>1> o──[P]──o
```
During parsing NPeg keeps track of the current precedence level of the parsed expression; the default is `0` if no precedence has been assigned yet. When the `^` operator is matched, one of the next three cases applies:
- `P ^ N` where `N > 0` and `N` is lower than the current precedence: the current precedence is set to `N` and parsing of pattern `P` continues.
- `P ^ N` where `N > 0` and `N` is higher than or equal to the current precedence: parsing will fail and backtrack.
- `P ^ 0`: resets the current precedence to 0 and continues parsing. The main use case for this is parsing sub-expressions in parentheses.
The heart of a Pratt parser in NPeg would look something like this:
exp
More extensive documentation will be added later; for now, take a look at the example in `tests/precedence.nim`.
Captures

```
  ╭╶╶╶╶╶╶╶╮ s
o────[P]────o
  ╰╶╶╶╶╶╶╶╯
```
NPeg supports a number of ways to capture data when parsing a string. The various capture methods are described here, including a concise example.
The capture examples below build on the following small PEG, which parses a comma-separated list of key-value pairs:

```nim
const data = "one=1,two=2,three=3,four=4"

let parser = peg "pairs":
  pairs <- pair * *(',' * pair) * !1
  word <- +Alpha
  number <- +Digit
  pair <- word * '=' * number
```
String captures
The basic method for capturing is marking parts of the peg with the capture prefix `>`. During parsing NPeg keeps track of all matches, properly discarding any matches which were invalidated by backtracking. Only when parsing has fully succeeded does it create a `seq[string]` of all matched parts, which is then returned in the `MatchResult.captures` field.
In the example, the `>` capture prefix is added to the `word` and `number` parts of the `pair` rule, causing the matched words and numbers to be appended to the resulting capture `seq[string]`:

```nim
let parser = peg "pairs":
  pairs <- pair * *(',' * pair) * !1
  word <- +Alpha
  number <- +Digit
  pair <- >word * '=' * >number
```
let r = parser.match(data)
The resulting list of captures is now:

```
@["one", "1", "two", "2", "three", "3", "four", "4"]
```
Code block captures
Code block captures offer the most flexibility for accessing matched data in NPeg. This allows you to define a grammar with embedded Nim code for handling the data during parsing.
Note that for code block captures, the Nim code gets executed during parsing, even if the match is part of a pattern that fails and is later backtracked.
When a grammar rule ends with a colon (`:`), the next indented block in the grammar is interpreted as Nim code, which gets executed when the rule has been matched. Any string captures that were made inside the rule are available to the Nim code in the injected variable `capture[]` of type `seq[Capture]`:

```nim
type Capture = object
  s*: string   # The captured string
  si*: int     # The index of the captured string in the subject
```
The total subject matched by the code block rule is available in `capture[0]`. Any additional explicit `>` string captures made by the rule or any of its child rules will be available as `capture[1]`, `capture[2]`, ...
For convenience there is syntactic sugar available in the code block capture blocks:
- The variables `$0` to `$9` are rewritten to `capture[n].s` and can be used to access the captured strings. The `$` operator uses the usual Nim precedence, thus these variables might need parentheses or different ordering in some cases; for example, `$1.parseInt` should be written as `parseInt($1)`.
- The variables `@0` to `@9` are rewritten to `capture[n].si` and can be used to access the offsets in the subject of the matched captures.
Example:

```nim
let p = peg "foo":
  foo <- >(1 * >1) * 1:
    echo "$0 = ", $0
    echo "$1 = ", $1
    echo "$2 = ", $2

echo p.match("abc").ok
```
Will output:

```
$0 = abc
$1 = ab
$2 = b
```
Code block captures consume all embedded string captures, so these captures will no longer be available after matching.
A code block capture can also produce captures by calling the `push(s: string)` function from the code block. Note that this is an experimental feature and that the API might change in future versions.
The example has been extended to capture each word and number with the `>` string capture prefix. When the `pair` rule is matched, the attached code block is executed, which adds the parsed key and value to the `words` table.
```nim
from strutils import parseInt
var words = initTable[string, int]()
```
```nim
let parser = peg "pairs":
  pairs <- pair * *(',' * pair) * !1
  word <- +Alpha
  number <- +Digit
  pair <- >word * '=' * >number:
    words[$1] = parseInt($2)
```
let r = parser.match(data)
After the parsing has finished, the `words` table will contain:

```
{"two": 2, "three": 3, "one": 1, "four": 4}
```
Code block captures can be used for additional validation of a captured string: the code block can call the functions `fail()` or `validate(bool)` to indicate if the match should succeed or fail. Failing matches are handled as if the capture itself failed and will result in the usual backtracking. When the `fail()` or `validate()` functions are not called, the match will succeed implicitly.
For example, the following rule checks whether a parsed number is a valid `uint8`:

```nim
uint8 <- Digit[1..3]:
  let v = parseInt($0)
  validate v >= 0 and v <= 255
```
The following grammar will cause the whole parse to fail when the `error` rule matches:
error
Note: The Nim code block runs within the NPeg parser context and could in theory access its internal state; this could be used to create custom validator/matcher functions that can inspect the subject string, do lookahead or lookback, and adjust the subject index to consume input. At the time of writing, NPeg lacks a formal API or interface for this though, and I am not sure yet what this should look like. If you are interested in doing this, contact me so we can discuss the details.
Passing state
NPeg allows passing data of a specific type to the `match()` function; this value is then available inside code blocks as a variable. This removes the need for global variables for storing or retrieving data in captures.
The syntax for defining a generic grammar is as follows:

```
peg(name, identifier: Type)
```
For example, the above parser can be rewritten using a generic parser as such:

```nim
type Dict = Table[string, int]
```
```nim
let parser = peg("pairs", userdata: Dict):
  pairs <- pair * *(',' * pair) * !1
  word <- +Alpha
  number <- +Digit
  pair <- >word * '=' * >number:
    userdata[$1] = parseInt($2)
```
```nim
var words: Dict
let r = parser.match(data, words)
```
Backreferences
Backreferences allow NPeg to match the exact string that matched earlier in the grammar. This can be useful to match repetitions of the same word, or, for example, to match so-called here-documents in programming languages.
For this, NPeg offers the `R` operator with the following two uses:
- The `R(name, P)` pattern creates a named reference for pattern `P` which can be referred to by name in other places in the grammar.
- The pattern `R(name)` matches the contents of the named reference that was earlier stored with the `R(name, P)` pattern.
For example, the following rule matches only a string that has the same character in the first and last position:
patt R("c", 1) * *(1 - R("c")) * R("c") * !1
The first part of the rule, `R("c", 1)`, will match any character and store it in the named reference `c`. The second part will match a sequence of zero or more characters that do not match reference `c`, followed by reference `c`.
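As a quick, hedged check of the rule above (the test strings are my own; behavior follows the backreference description):

```nim
import npeg

let p = patt R("c", 1) * *(1 - R("c")) * R("c") * !1
doAssert p.match("abca").ok       # 'a' at both first and last position
doAssert not p.match("abcd").ok   # first and last characters differ
```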
Repetitive inlining of rules might cause a grammar to grow too large, resulting in a huge executable size and slow compilation. NPeg tries to mitigate this in two ways:
Patterns that are too large will not be inlined, even if the above ordering rules apply.
NPeg checks the size of the total grammar, and if it thinks it is too large it will fail compilation with the error message `NPeg: grammar too complex`. Check the section "Compile-time configuration" below for more details about too-complex grammars.
The parser size and performance depends on many factors; when performance and/or code size matters, it pays to experiment with different orderings and measure the results.
When in doubt, check the generated parser instructions by compiling with the `-d:npegTrace` or `-d:npegDotDir` flags; see the section Tracing and Debugging for more information.
At this time the upper limit is 4096 rules; this might become a configurable number in a future release. For example, the following grammar will not compile, because recursive inlining will cause it to expand to a parser with more than 4^6 = 4096 rules:
let p = peg "z": f
The fix is to change the order of the rules so that, instead of inlining, NPeg will use a calling mechanism:

let p = peg "z": z
When in doubt, check the generated parser instructions by compiling with the `-d:npegTrace` flag; see the section Tracing and Debugging for more information.
Templates, or parameterized rules
When building more complex grammars you may find yourself duplicating certain constructs in patterns over and over again. To avoid code repetition (DRY), NPeg provides a simple mechanism to allow the creation of parameterized rules. In good Nim-fashion these rules are called "templates". Templates are defined just like normal rules, but have a list of arguments, which are referred to in the rule. Technically, templates just perform a basic search-and-replace operation: every occurrence of a named argument is replaced by the exact pattern passed to the template when called.
For example, consider the following grammar:

```nim
numberList <- number * *(',' * number)
wordList <- word * *(',' * word)
```

This snippet uses a common pattern twice for matching lists: `p * *(',' * p)`. This matches pattern `p`, followed by zero or more occurrences of a comma followed by pattern `p`. For example, `numberList` will match the string `1,22,3`.
The above example can be parameterized with a template like this:

```nim
commaList(item) <- item * *(',' * item)
numberList <- commaList(number)
wordList <- commaList(word)
```

Here the template `commaList` is defined, and any occurrence of its argument `item` will be replaced with the pattern passed when calling the template. This template is used to define the more complex patterns `numberList` and `wordList`.
Templates may invoke other templates recursively; for example, the above can be generalized even further:

```nim
list(item, sep) <- item * *(sep * item)
commaList(item) <- list(item, ',')
```
Composing grammars with libraries
For simple grammars it is usually fine to build all patterns from scratch from atoms and operators, but for more complex grammars it makes sense to define reusable patterns as basic building blocks.
For this, NPeg keeps track of a global library of patterns and templates. The `grammar` macro can be used to add rules or templates to this library. All patterns in the library will be stored with a qualified identifier of the form `libraryname.patternname`, by which they can be referred to at a later time.
For example, the following fragment defines three rules in a library with the name `number`. The rules will be stored in the global library and are referred to in the peg by their qualified names `number.dec`, `number.hex` and `number.oct`:

```nim
grammar "number":
  dec <- +Digit                       # illustrative rule bodies; the original
  hex <- '0' * {'x','X'} * +Xdigit    # excerpt only named the three rules
  oct <- '0' * {'o','O'} * +{'0'..'7'}
```
NPeg offers a number of pre-defined libraries for your convenience; these can be found in the `npeg/lib` directory. A library can be imported with the regular Nim `import` statement; all rules defined in the imported file will then be added to NPeg's global pattern library. For example:

```nim
import npeg/lib/uri
```
Library rule overriding/shadowing
To allow the user to add custom captures to imported grammars or rules, it is possible to override or shadow an existing rule in a grammar.
Overriding will replace the rule from the library with the provided new rule, allowing the caller to change parts of an imported grammar. An overridden rule is allowed to reference the original rule by name, which will cause the new rule to shadow the original rule. This effectively renames the original rule and replaces it with the newly defined rule, which calls the original referred rule.
For example, the following snippet will reuse the grammar from the `uri` library and capture some parts of the URI in a Nim object:

```nim
import npeg/lib/uri
```
```nim
type Uri = object
  host: string
  scheme: string
  path: string
  port: int
```
var myUri: Uri
```nim
let parser = peg "line":
  line <- uri.URI   # assumed top-level rule; lost in this excerpt
  uri.scheme <- >uri.scheme:
    myUri.scheme = $1
  uri.host <- >uri.host:
    myUri.host = $1
  uri.port <- >uri.port:
    myUri.port = parseInt($1)
  uri.path <- >uri.path:
    myUri.path = $1
```
```nim
echo parser.match("")
echo myUri  # --> (host: "nim-lang.org", scheme: "http", path: "/one/two/three", port: 8080)
```
Advanced topics
Parsing other types than strings
Note: This is an experimental feature, the implementation or API might change in the future.
NPeg was originally designed to parse strings like a regular PEG engine, but has since evolved into a generic parser that can parse any subject of type `openArray[T]`. This section describes how to use this feature.
The `peg()` macro must be passed an additional argument specifying the base type `T` of the subject; the generated parser will then parse a subject of type `openArray[T]`. When not given, the default type is `char`, and the parser parses `openArray[char]`, or more typically, `string`.
When matching non-strings, some of the usual atoms like strings or character sets do not make sense in a grammar; instead the grammar uses literal atoms. Literals can be specified in square brackets and are interpreted as plain Nim code: `[foo]`, `[1+1]` or `["foo"]` are all valid literals.
When matching non-strings, captures will be limited to only a single element of the base type, as this makes more sense when parsing a token stream.
For an example of this feature, check `tests/lexparse.nim`, which implements a classic parser with separate lexing and parsing stages.
Unlike regular expressions, PEGs are always matched in anchored mode only: the defined pattern is matched from the start of the subject string. For example, the pattern `"bar"` does not match the string `"foobar"`.
To search for a pattern in a stream, a construct like this can be used:

```
s <- p | 1 * s
```

The above grammar first tries to match pattern `p`, or, if that fails, matches any character (`1`) and recurses back to itself. Because searching is a common operation, NPeg provides the built-in `@P` operator for this.
End of string
PEGs do not care what is in the subject string after the matching succeeds. For example, the rule `"foo"` happily matches the string `"foobar"`. To make sure the pattern matches the end of the string, this has to be made explicit in the pattern. The idiomatic notation for this is `!1`, meaning "only succeed if there is not a single character left to match", which is only true at the end of the string.
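As a small sketch of this idiom (the test strings are my own, using the `patt` API shown earlier):

```nim
import npeg

# "foo" alone would also match the prefix of "foobar";
# adding !1 requires that the subject ends after "foo".
let anchored = patt "foo" * !1
doAssert anchored.match("foo").ok
doAssert not anchored.match("foobar").ok
```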
Non-consuming atoms and captures
The lookahead (`&`) and not (`!`) operators may not consume any input, and make sure that after matching, the internal parsing state of the parser is reset to as it was before the operator started, including the state of the captures. This means that any captures made inside a `&` or `!` block are also discarded. It is possible, however, to capture the contents of a non-consuming block with a code block capture, as these are always executed, even when the parser state is rolled back afterwards.
Parsing error handling
NPeg offers a number of ways to handle errors during parsing a subject string:
The `ok` field in the `MatchResult` indicates if the parser was successful: when the complete pattern has been matched this value will be set to `true`; if the complete pattern did not match the subject, the value will be `false`.
In addition to the `ok` field, the `matchMax` field indicates the maximum offset into the subject the parser was able to match. If the matching succeeded, `matchMax` equals the total length of the subject; if the matching failed, the value of `matchMax` is usually a good indication of where in the subject string the error occurred.
When, during matching, the parser reaches an `E"message"` atom in the grammar, NPeg will raise an `NPegException` with the given message. The typical use case for this atom is to combine it with the ordered choice `|` operator to generate helpful error messages. The following example illustrates this:

```nim
let parser = peg "list":
  list <- word * *(',' * word) * !1
  word <- +{'a'..'z'} | E"word"
```
The rule `word` looks for a sequence of one or more letters (`+{'a'..'z'}`). If this cannot be matched, `E"word"` matches instead, raising an exception:

```
Error: unhandled exception: Parsing error at #14: expected "word" [NPegException]
```
The `NPegException` type contains the same two fields as `MatchResult` to indicate where in the subject string the match failed, `matchLen` and `matchMax`:

```nim
let a = patt 4 * E"boom"
try:
  doAssert a.match("12345").ok
except NPegException as e:
  echo "Parsing failed at position ", e.matchMax
```
Left recursion
NPeg does not support left recursion (this applies to PEGs in general). For example, the rule

```
A <- A
```

will cause an infinite loop because it allows for left-recursion of the non-terminal `A`.
Similarly, the grammar

```
A <- B
B <- A
```

is problematic because it is mutually left-recursive through the non-terminal `B`.
Note that loops of patterns that can match the empty string will not result in the expected behavior. For example, the rule `*0` will cause the parser to stall and go into an infinite loop.
UTF-8 / Unicode
NPeg has no built-in support for Unicode or UTF-8; instead it is simply able to parse UTF-8 documents just like any other string. NPeg comes with a simple UTF-8 grammar library which should simplify common operations like matching a single code point or character class. The following grammar splits a UTF-8 document into separate characters/glyphs by using the `utf8.any` rule:

```nim
import npeg/lib/utf8
```
```nim
let p = peg "line":
  line <- *(>utf8.any)
```
```nim
let r = p.match("γνωρίζω")
echo r.captures()  # --> @["γ", "ν", "ω", "ρ", "ί", "ζ", "ω"]
```
Tracing and debugging
Syntax diagrams
When compiled with `-d:npegGraph`, NPeg will dump syntax diagrams (also known as railroad diagrams) for all parsed rules.
Syntax diagrams are sometimes helpful to understand or debug a grammar, or to get more insight into a grammar's complexity.

```
                        ╭─────────»──────────╮
                        │      ╭─────»──────╮│
     ╭╶╶╶╶╶╶╶╶╶╶╮       │      │   ━━━━     ││           ╭╶╶╶╶╶╶╶╮
inf o──"INF:"─»──[number]──»┴─","─»┴┬─[lf]─»─1─┬┴┴»─[lf]─»──[url]───o
     ╰╶╶╶╶╶╶╶╶╶╶╯                   ╰────«─────╯         ╰╶╶╶╶╶╶╶╯
```
- Optional parts (`?`) are indicated by a forward arrow overhead.
- Not-predicates (`!`) are overlined in red. Note that the diagram does not make it clear that the input for not-predicates is not consumed.
NPeg can generate a graphical representation of a grammar to show the relations between rules. The generated output is a `.dot` file which can be processed by the Graphviz tool to generate an actual image file. When compiled with `-d:npegDotDir=<dir>`, NPeg will generate a `.dot` file for each grammar in the code and write it to the given directory.
- Edge colors represent the rule relation: grey = inline, blue = call, green = builtin.
- Rule colors represent the relative size/complexity of a rule: black = <10, orange = 10..100, red = >100.
Large rules result in larger generated code and slow compile times. Rule size can generally be decreased by changing the rule order in a grammar to allow NPeg to call rules instead of inlining them.
When compiled with `-d:npegTrace`, NPeg will dump its intermediate representation of the compiled PEG, and will dump a trace of the execution during matching. These traces can be used for debugging or optimization of a grammar.
For example, the following program:
```nim
let parser = peg "line":
  space <- ' '
  line <- word * *(space * word)
  word <- +{'a'..'z'}

discard parser.match("one two")
```
will output the following intermediate representation at compile time. From the IR it can be seen that the `space` rule has been inlined in the `line` rule, but that the `word` rule has been emitted as a subroutine which gets called from `line`:

```
line:
  0: line  opCall 6 word       word
  1: line  opChoice 5          *(space * word)
  2: space opStr " "           ' '
  3: line  opCall 6 word       word
  4: line  opPartCommit 2      *(space * word)
  5:       opReturn
```
```
word:
  6: word  opSet '{'a'..'z'}'  {'a' .. 'z'}
  7: word  opSpan '{'a'..'z'}' +{'a' .. 'z'}
  8:       opReturn
```
At runtime, the following trace is generated. The trace consists of a number of columns:
- The current instruction pointer, which maps to the compile time dump.
- The index into the subject.
- The substring of the subject.
- The name of the rule from which this instruction originated.
- The instruction being executed.
- The backtrace stack depth.

```
 0|  0|one two     |line   |call -> word:6  |
 6|  0|one two     |word   |set {'a'..'z'}  |
 7|  1|ne two      |word   |span {'a'..'z'} |
 8|  3| two        |       |return          |
 1|  3| two        |line   |choice -> 5     |
 2|  3| two        | space |chr " "         |*
 3|  4|two         |line   |call -> word:6  |*
 6|  4|two         |word   |set {'a'..'z'}  |*
 7|  5|wo          |word   |span {'a'..'z'} |*
 8|  7|            |       |return          |*
 4|  7|            |line   |pcommit -> 2    |*
 2|  7|            | space |chr " "         |*
  |  7|            |       |fail            |*
 5|  7|            |       |return (done)   |
```
The exact meaning of the IR instructions is not discussed here.
Compile-time configuration
NPeg has a number of configurable settings which can be set at compile time by passing flags to the compiler. The default values should be fine in most cases, but if you ever run into one of these limits you are free to configure them to your liking:
- `-d:npegPattMaxLen=N`: The maximum allowed length of NPeg's internal representation of a parser, before it gets translated to Nim code. The reason to check for an upper limit is that some grammars can grow exponentially through inlining of patterns, resulting in slow compile times and oversized executables. (default: 4096)
- `-d:npegInlineMaxLen=N`: The maximum allowed length of a pattern to be inlined. Inlining generally results in a faster parser, but also increases code size. It is valid to set this value to 0; in that case NPeg will never inline patterns and will use a calling mechanism instead, which results in the smallest code size. (default: 50)
- `-d:npegRetStackSize=N`: Maximum allowed depth of the return stack for the parser. The default value should be high enough for practical purposes; the stack depth is only limited to detect invalid grammars. (default: 1024)
- `-d:npegBackStackSize=N`: Maximum allowed depth of the backtrack stack for the parser. The default value should be high enough for practical purposes; the stack depth is only limited to detect invalid grammars. (default: 1024)
- `-d:npegGcsafe`: A workaround for the case where NPeg needs to be used from a `{.gcsafe.}` context when using threads. This will mark the generated matching function as `{.gcsafe.}`.
NPeg has a number of compile time flags to enable tracing and debugging of the generated parser:
-d:npegTrace: Enable compile time and run time tracing. Please refer to the section 'Tracing' for more details.
-d:npegGraph: Dump syntax diagrams of all parsed rules at compile time.
These flags are meant for debugging NPeg itself, and are typically not useful to the end user:
-d:npegDebug: Enable more debug info. Meant for NPeg development debugging purposes only.
-d:npegExpand: Dump the generated Nim code for all parsers defined in the program. Meant for NPeg development debugging purposes only.
The NPeg syntax is similar, but not exactly the same as the official PEG syntax: it uses some different operators, and prefix instead of postfix operators. The reason for this is that the NPeg grammar is parsed by a Nim macro in order to allow code block captures to embed Nim code, which puts some limitations on the available syntax. Also, NPeg's operators are chosen so that they have the right precedence for PEGs.
Almost, but not quite. Although PEGs and EBNF look quite similar, there are some subtle but important differences which do not allow a literal translation from EBNF to PEG. Notable differences are left recursion and ordered choice. Also see "From EBNF to PEG" by Roman R. Redziejowski.
let parser = peg "line": exp
A complete JSON parser
The following PEG defines a complete parser for the JSON language; it will not produce any captures, but simply traverses and validates the document:

let s = peg "doc": S
Captures
The following example shows how to use code block captures. The defined grammar will parse an HTTP response document and extract structured data from the document into a Nim object:

```nim
import npeg, strutils, tables
```
```nim
type Request = object
  proto: string
  version: string
  code: int
  message: string
  headers: Table[string, string]
```
HTTP grammar (simplified)
```nim
let parser = peg("http", userdata: Request):
  # rule names partly reconstructed; some rules of the original
  # grammar (e.g. space, response) were lost in this excerpt
  proto <- >+Alpha:
    userdata.proto = $1
  version <- >(+Digit * '.' * +Digit):
    userdata.version = $1
  code <- >+Digit:
    userdata.code = parseInt($1)
  msg <- >+(1 - '\r' - '\n'):
    userdata.message = $1
  header <- >header_name * ": " * >header_val:
    userdata.headers[$1] = $2
```
The resulting data:

```
(
  proto: "HTTP",
  version: "1.1",
  code: 301,
  message: "Moved Permanently",
  headers: {"Content-Length": "162", "Content-Type": "text/html", "Location": ""}
)
```
More examples
More examples can be found in tests/examples.nim.
Future directions / Todos / Roadmap / The long run
Here are some things I'd like to have implemented one day. Some are hard and require me to better understand what I'm doing first. In no particular order:
- Design and implement a proper API for code block captures. The current API feels fragile and fragmented (`capture[]`, `$1`/`$2`, `fail()`, `validate()`), and does not yet offer solid primitives for making custom match functions; something better should be in place before NPeg goes v1.0.
Resuming/streaming: The current parser is almost ready to be invoked multiple times, resuming parsing where it left off - this should allow parsing of (infinite) streams. The only problem not solved yet is how to handle captures: when a block of data is parsed it might contain data which must later be available to collect the capture. Not sure how to handle this yet.
Memoization: I guess it would be possible to add (limited) memoization to improve performance, but no clue where to start yet.
Parallelization: I wonder if parsing can be parallelized: when reaching an ordered choice, multiple threads should be able to try to parse each individual choice. I do see problems with captures here, though.
I'm not happy about the `{.gcsafe.}` workaround. I'd be happy to hear any ideas on how to improve this.
Hi, I'm just doing some past exam papers for an exam I've got coming up, and I'm struggling with Java generics; I can't seem to find any good examples to help me out. I've found small snippets of code, but I really need a larger piece of code shown before and after generic implementation so that I can understand properly what is going on. Anyway, if no one can provide this, I do have a piece of code below; it would be nice if you could implement generic code into it and explain what you did.
Example code:
```java
public class Queue {
    private int[] items = new int[5];
    private int top = 0;

    public boolean isEmpty() {
        return top <= 0;
    }

    public int get() {
        int first = items[0];
        for (int i = 0; i + 1 < top; i++) {
            items[i] = items[i + 1];
        }
        top--;
        return first; // note: the original returned items[0] after shifting, which is a bug
    }

    public void add(int i) {
        items[top++] = i;
    }

    public void remove(int i) {
        if (i <= 0) return;
        get();
        remove(i - 1);
    }

    public static void main(String[] args) {
        Queue s = new Queue();
        s.add(3);
        s.remove(2);
    }
}
```
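For comparison, here is a sketch of what a generic version of the same queue could look like. The class name `GenericQueue`, the empty-queue check, and the `NoSuchElementException` are my additions, not part of the original exam code; the `Object[]` backing array plus an unchecked cast is the standard workaround for Java's ban on creating arrays of a type parameter (`new T[5]` does not compile):

```java
import java.util.NoSuchElementException;

public class GenericQueue<T> {
    // Java forbids "new T[5]", so we back the queue with Object[]
    private Object[] items = new Object[5];
    private int top = 0;

    public boolean isEmpty() {
        return top <= 0;
    }

    @SuppressWarnings("unchecked")
    public T get() {
        if (isEmpty()) throw new NoSuchElementException("queue is empty");
        T first = (T) items[0];          // cast back to the element type
        for (int i = 0; i + 1 < top; i++) {
            items[i] = items[i + 1];     // shift remaining elements left
        }
        top--;
        return first;
    }

    public void add(T item) {
        items[top++] = item;
    }

    public void remove(int n) {          // discard the first n elements
        if (n <= 0) return;
        get();
        remove(n - 1);
    }

    public static void main(String[] args) {
        GenericQueue<String> q = new GenericQueue<>();
        q.add("one");
        q.add("two");
        System.out.println(q.get());     // prints "one"
        System.out.println(q.isEmpty()); // prints "false"
    }
}
```

The only structural changes from the original are the type parameter `<T>` replacing `int` in the element positions, and the cast inside `get()`; the queue logic itself is untouched.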
Thanks,
~Crag | https://www.daniweb.com/programming/software-development/threads/431540/java-generics | CC-MAIN-2018-30 | refinedweb | 180 | 55.51 |
#include "afxtempl.h"
CMap<CString*, CString*, int, int> map;
CMap<CString*, CString*, int, int> map(16);
Are you are experiencing a similar issue? Get a personalized answer when you ask a related question.
Have a better answer? Share it in a comment.
From novice to tech pro — start learning today.
do you use this in a class declaration? If so this isn't possible, i.e.:
> class CTest
> {
> CMap < CString*, CString*, int, int > map1; // OK
> CMap < CString*, CString*, int, int > map2( 16 ); // Error C2059
> };
To initialize such a member you need to add it's constructor call to the classes constructors initialization list, i.e.:
> class CTest
> {
> CMap < CString*, CString*, int, int > map1; // OK
> CMap < CString*, CString*, int, int > map2; // Error C2059
> public:
> CTest();
> };
>
> CTest::CTest()
> : map2( 16 )
> {
> }
Hope that helps,
ZOPPO
Error Message syntax error : 'token'
The token caused a syntax error.
To determine the cause, examine not only the line listed in the error message, but also the lines above it. The following example generates an error message for the line declaring j, but the true source of the error appears on the line just above it.
If examining the lines yields no clue to what the problem might be, try commenting out the line listed in the error message and possibly several lines above it.
If the error message occurs on a symbol immediately following a typedef variable, check that the variable is defined in the source code.
You may get C2059 if a symbol evaluates to nothing, as can occur when you compile with /Dsymbol=.
CMap<CString*, CString*, int, int> map(16); | https://www.experts-exchange.com/questions/26433795/Which-include-s-are-necessary-for-CMap.html | CC-MAIN-2018-17 | refinedweb | 266 | 71.14 |
I am trying to send audio data to the right and left output channels of
a soundcard's output port seperately. The right and left channels should
get data from two different audio sources.. Is it possible with NAudio?
To get fully qualified path of application I have wrote a
function:
public class Generic {
public static string FullyQualifiedApplicationPath {
get { //Return variable
declaration string appPath = string.Empty;
//Getting the current context of HT
for example when i retrieve from database the word program's instead of
program's what would be shown is program�s. ' and - changes to �. how
can i fix this?
I am using a big excel file (nearly 4000 rows & 65 Columns) for VBA
Code. I am also using a lot for filters for output. It takes about a minute
to prepare number of report. It is very interesting, that when I don't
click tabs and sheets while the VBA code is executing the results are
accurate, but if I click tabs and sheets while VBA code execution, the
results are not accurate. Please help
I use this code to output all threads with a specific forum ID
$query = mysql_query("SELECT * FROM forums JOIN threads ON
threads.fid = forums.id WHERE forums.id =
".intval($_GET['forumID']));$forum = mysql_fetch_assoc($query);?><h1><a>Forums</a> >
<?=$forum['name']?></h1><?php while ($thread =
mysql_fetch_arr
My program is set up with two EditTexts where a person can put in a
number or a string, and the result is supposed to come up with a result,
which is displayed in a TextView. However, when I run the program, it
doesn't want to work. Nothing is displayed in the TextView.
int department;String name;Button search;TextView
display;@Overridepublic void o am in the process of writing a testing script that will simply run an
*.EXE file with a few arguments, and then output the result to a file. I
have one *.sh testing script that correctly runs the tests (but requires
manual updating for more tests). The lines in this script look like
this:
blah.exe arg1 arg2 arg3 > ../test/arg4/arg4.out
I have written a
I'm using regex to parse NMAP output. I want the ip addresses which are
up with the corresponding ports open. Now I've a very naive method of doing
that:
awk '/^Scanning .....................ports]/ {print
substr ($2,1,15);}' results.txtawk '/^[0-9][0-9]/ {print
substr($1,1,4);}' results.txt | awk -f awkcode.awk
where awkcode.awk contains the code to ex | http://bighow.org/tags/outputs/1 | CC-MAIN-2018-05 | refinedweb | 419 | 75.71 |
This Client Search Behavior".
Depending.
Server preferences are controlled by giving each server a preference rank number. Clients search for NIS+ servers in order of numeric preference, querying servers with lower preference rank numbers before seeking servers with higher numbers.
Thus, a client will first try to obtain namespace information from NIS+ servers with a preference of zero. If there are no preference=0 servers available, then the client will query servers whose preference=1. If no 1's are available, it will try to find a 2, and then a 3, and so on until it either gets the information it needs or runs out of servers.
Preference rank numbers are assigned to servers with the nisprefadm command as described in "Specifying Global Server Preferences".
Server preference numbers are stored in client_info tables and files. If a machine has its own /var/nis/client_info file, it uses the preference numbers stored in that file. If a machine does not have its own client_info file, it uses the preference numbers stored in the domain's client_info.org_dir table. These client_info tables and files are called "preferred server lists" or simply server lists.
You customize server usage by controlling the server preferences of each client. For example, suppose a domain has a client machine named mailer that makes heavy use of namespace information and the domain has both a master server (nismaster) and a replica server (replica1). You could assign a preference number of 1 to nismaster and a number of 0 to replica1 for the mailer machine. The mailer machine would then always try to obtain namespace information from replica1 before trying nismaster. You could then specify that for all the other machines on the subnet the nismaster server had a preference number of zero and replica1 the number 1. This would cause the other machine to always try nismaster first.
You can give the same preference number to more than one server in a domain. For example, you could assign both nismaster1 and replica2 a preference number of 0, and assign replica3, replica4, and replica5 a preference number of 1.
If.
A. Client Search Behavior". You can change this default behavior with the nisprefadm -o option to specify that a client can only use preferred servers and if no servers are available it cannot go to non-preferred servers. See "Specifying Preferred-Only Servers" for details.
This option is ignored when the machine's domain is not served by any preferred servers.
To view the server preferences currently in effect for a particular client machine, you run nisprefadm with the -l option as described in "Viewing Current you make to a machine or subnet's server preferences normally do not take effect on a given machine until that machine updates it nis_cachemgr data. When the nis_cachemgr of a machine updates its server-use information depends on whether the machine is obtaining its server preferences from a global client_info table or a local /var/nis/client_info file (see "Global Table or Local File").
Global table. The cache managers of machines obtaining their server preferences from global tables update their server preferences whenever the machine is booted or whenever the Time-to-live (TTL) value expires for the client_info table. By default, this TTL value is 12 hours, but you can change that as described in "Changing the Time-to-Live of an Object".
Local file. The cache managers of machines obtaining their server preferences from local files update their server preferences every 12 hours or whenever you run nisprefadm to change a server preference. (Rebooting the machine does not update the cache manager's server preference information.)
However, you can force server preference changes to take effect immediately by running nisprefadm with the -F option. The -F option forces nis_cachemgr to immediately update its information. See "How to Immediately Implement Preference Changes" for details. | http://docs.oracle.com/cd/E19455-01/806-1387/6jam692be/index.html | CC-MAIN-2014-23 | refinedweb | 644 | 51.68 |
Hy
I have a simple problem.
Im playing an stream ( a mp3 file) with fmod. ( My code for playing a stream is equal to the fmod examples). I’ve made a method playmp3() which init. fmod and plays the stream.
How is ist now possible to close this stream while it is playing and play another mp3 file. I always get a messsage like : Segmentation failed.
Thank u
Rain
- rain asked 14 years ago
- You must login to post comments
Thank u for your help:
Here is my code:
[code:1owmo9v8]
// INCLUDES..............
using namespace std;
int channel = -1;
FSOUND_STREAM *stream;
FSOUND_SAMPLE *sptr;
char key;
signed char endcallback(FSOUND_STREAM *stream, void *buff, int len, int param)
{
if (buff) { printf("\nSYNCHPOINT : \"%s\"\n", buff); } else { printf("\nSTREAM ENDED!!\n"); } return TRUE;
}
void playtrack()
{
if 0
else
stream = FSOUND_Stream_OpenFile("test.mp3", FSOUND_NORMAL | FSOUND_MPEGACCURATE | FSOUND_LOOP_OFF, 0); if (!stream) { printf("Error!\n"); printf("%s\n", FMOD_ErrorString(FSOUND_GetError())); FSOUND_Close(); }
endif
FSOUND_Stream_SetEndCallback(stream, endcallback, 0); FSOUND_Stream_SetSynchCallback(stream, endcallback, 0); sptr = FSOUND_Stream_GetSample(stream); if (sptr) { int freq; FSOUND_Sample_GetDefaults(sptr, &freq, NULL, NULL, NULL); } key = 0; do { if (channel < 0) { // ====================== // PLAY STREAM // ====================== channel = FSOUND_Stream_PlayEx(FSOUND_FREE, stream, NULL, TRUE); FSOUND_SetPaused(channel, FALSE); } if (kbhit()) { key = getch(); if(key == 's') { FSOUND_Stream_Close(stream); FSOUND_Sample_Free(sptr); stream =NULL; sptr = NULL; playtrack(); }
}
Sleep(10);
}while (key != 27);
FSOUND_Stream_Close(stream); FSOUND_Close();
}
void initfsound()
{
if (!FSOUND_Init(44100, 32, 0))
{
printf("%s\n", FMOD_ErrorString(FSOUND_GetError()));
FSOUND_Close();
}
}
int main(int argc, char *argv[])
{
ifdef _WIN32
FSOUND_SetOutput(FSOUND_OUTPUT_WINMM);
elif defined(linux)
FSOUND_SetOutput(FSOUND_OUTPUT_OSS);
endif
// ========================================================================================== // SELECT DRIVER // ========================================================================================== { long i,driver=0; char key; // The following list are the drivers for the output method selected above. printf("---------------------------------------------------------\n"); switch (FSOUND_GetOutput()) { case FSOUND_OUTPUT_NOSOUND: printf("NoSound"); break; case FSOUND_OUTPUT_WINMM: printf("Windows Multimedia Waveout"); break; case FSOUND_OUTPUT_DSOUND: printf("Direct Sound"); break; case FSOUND_OUTPUT_A3D: printf("A3D"); break; case FSOUND_OUTPUT_OSS: printf("Open Sound System"); break; case FSOUND_OUTPUT_ESD: printf("Enlightenment Sound Daemon"); break; case FSOUND_OUTPUT_ALSA: printf("ALSA"); break; }; printf(" Driver list\n"); printf("---------------------------------------------------------\n"); for (i=0; i < FSOUND_GetNumDrivers(); i++) { printf("%d - %s\n", i+1, FSOUND_GetDriverName(i)); // print driver names } printf("---------------------------------------------------------\n"); // print driver names printf("Press a corresponding number or ESC to quit\n"); do { key = getch(); if (key == 27) exit(0); driver = key - '1'; } while (driver < 0 || driver >= FSOUND_GetNumDrivers()); FSOUND_SetDriver(driver); // Select sound card (0 = default) } // ================= // INITIALIZE initfsound(); playtrack(); return 0;
}
[/code:1owmo9v8]
In the main methde i call fsoundinit() and the playtrack() and the file is played by fmod. This works. But if you press while the file is playing the key ‘s’ playtrack() is called and the programm doesn’t work anymore. -> Segmentation fault.
I took many of this code from the fmod examples. I want to stop a mp3 while it is played and then to start playing a other mp3. I use the linux version of fmod.
please excuse my bad grammar
Thank u
Rain
I’m not 100% sure if this will fix your problem, but it looks like the reason the stream doesn’t reset itself when you press the ‘s’ key is that you only call FSOUND_Stream_PlayEx() if channel is less than zero. You initialize it to -1 at the beginning of the program, so the MP3 plays the first time. But when you stop the song and call playtrack() again, channel still has its old (positive non-zero) value, and the stream doesn’t get played again. You want to be setting channel to zero after closing the stream and freeing sptr.
As for the segmentation fault, I [i:3n169wf0]suspect[/i:3n169wf0] it might have something to do with infinite recursion…each time you call playtrack(), you get stuck in a do-while loop waiting for the value of key to be 27. When ‘s’ is pressed, playtrack() calls itself, and you end up in an identical do-while loop, one level deeper. Anyway, I don’t know the specifics of your kbhit() function, but I’m guessing that something isn’t being reset correctly, and so kbhit() is always returning true. That way, when you press ‘s’, playtrack() calls itself, which calls itself again and again and again until you blow your stack and cause a segmentation fault. To see if this is happening, try putting a printf(“entering playtrack\n”) statement at the beginning of playtrack().
If this doesn’t fix the problem, could you use a debugger and tell us exactly where the segmentation fault is occuring?
- cort answered 14 years ago
Thank u.
I’ve made a few changes in my code and now it works.
Rain | http://www.fmod.org/questions/question/forum-4291/ | CC-MAIN-2017-30 | refinedweb | 746 | 66.47 |
Context:
I'm using an Ajax call to return some complex JSON from a python module. I have to use a list of keys and confirm that a list of single-item dicts contains a dict with each key.
Example:
mylist=['this', 'that', 'these', 'those']
mydictlist=[{'this':1},{'that':2},{'these':3}]
Simple code is to convert your search list to a set, then use differencing to determine what you're missing:
missing = set(mylist).difference(*mydictlist)
which gets you
missing of
{'those'}.
Since the named
set methods can take multiple arguments (and they need not be
sets themselves), you can just unpack all the
dicts as arguments to
difference to subtract all of them from your
set of desired keys at once.
If you do need to handle duplicates (to make sure you see each of the
keys in
mylist at least that many time in
mydictlist's keys, so
mylist might contain a value twice which must occur twice in the
dicts), you can use
collections and
itertools to get remaining counts:
from collections import Counter from itertools import chain c = Counter(mylist) c.subtract(chain.from_iterable(mydictlist)) # In 3.3+, easiest way to remove 0/negative counts c = +c # In pre-3.3 Python, change c = +c to get the same effect slightly less efficiently c += Counter() | https://codedump.io/share/gzeFLFJgxOsZ/1/ensure-list-of-dicts-has-a-dict-with-key-for-each-key-in-list | CC-MAIN-2017-26 | refinedweb | 220 | 66.17 |
Hey gang,
I'm doing something that appears to be a bit strange and I'd appreciate any
advice anyone might have.
I have two areas within my app -- "admin" and "user". So I have <package
name="user" namespace="/user"> and <package name="admin"
namespace="/admin">. That seems simple enough.
The problem I'm running into stems from an earlier decision to "hide" my
JSPs within /WEB-INF/. So I have, for instance, (greatly truncated),
<package name="user" extends="struts-default" namespace="/user">
<action name="index">
<result>/WEB-INF/jsp/index.jsp</result>
</action>
</package>
Well -- this innocuous bit of code throws a 404, because the Dispatcher
appears to be looking under myapp/user/WEB-INF/jsp/index.jsp. Obviously it
ain't there. But I can't persuade the Struts to use a different context
root than /user to hunt for the JSPs.
Any suggestions as to how I can work around this problem without yanking all
my JSPs up and out of WEB-INF?
--
Jim Kiley
Technical Consultant | Summa
[p] 412.258.3346 [m] 412.445.1729 | http://mail-archives.apache.org/mod_mbox/struts-user/200805.mbox/%3C366601530805161150r6496239l8d1e03476ffeae08@mail.gmail.com%3E | CC-MAIN-2019-51 | refinedweb | 179 | 67.45 |
See for a description of the problem.
Here's the description that I copy/pasted from the Spring forums:
In Websphere I was able to wire an interceptor around all of my controllers and set thread-bound DB credentials based on the path of the current request. In my tcServer Developer edition (2.3.3 M1) this no longer works. My datasource is wired as a
org.springframework.jdbc.datasource.UserCredential sDataSourceAdapter
I've verified that I'm calling the getConnection(String username, String password) method with the correct credentials. However, the connection that is returned is of type
ProxyConnection[PooledConnection[oracle.jdbc.driver.T4CConnection@6ab4a7]]
with the same default credentials as those defined in server.xml. Is there another way to set the database credentials at runtime?
Do you have a code example to show how you receive this error? Or perhaps even a JUnit test?
(In reply to comment #2)
> Do you have a code example to show how you receive this error? Or perhaps even
> a JUnit test?
Take a look at the source code for getConnection. It was included in a reply to my original post in the forums (reproduced below).
---
This is an issue with the new jdbc-pool implementation, it internally always uses the default configured username/password. I suggest raising a bug on the tomcat(jdbc-pool) issue tracker.
The actual code.
Code:
public class DataSourceProxy implements PoolConfiguration {
...
/**
* {@link javax.sql.DataSource#getConnection()}
*/
public Connection getConnection(String username, String password) throws SQLException {
return getConnection();
}
....
}
hi Tim, this isn't really a bug, but how the pool operates.
This is a pool, meaning it caches connections with a predefined set of credentials as defined in the configuration.
If we allowed the username/password to be passed, the cache would have to be way more sophisticated, since it could mean that we would have to close a connection, and open a new one with the provided credentials.
So in the worst case scenario, you'd really not have a pool.
I started working on a simple patch, that would simply reuse the existing cache, and see if the connection had the requested credentials already.
This way, the speed is not compromised on the cache, but could lead to unnecessary disconnect/reconnects.
I can show you the patch, attached. (just needs some additional calls to complete it)
But my gut feel, is that this isn't really a bug. If implemented as a feature, it would need some serious though to not compromise the performance of the existing implementation.
Created attachment 26363 [details]
Beginning of a patch to handle DataSource.getConnection(username,password) for a pool
Created attachment 26455 [details]
Patch to allow the call DataSource.getConnection(username,password)
Here is the patch for allowing DataSource.getConnection(username, password) to be used. In the case of a connection that is connected, is used with a different username, password it is reconnected.
I'm hesitant to apply this patch at this time as it does slow down the pool doing the passwords checks every time a connection is requested.
I will experiment with a flag to enable/disable the check
Fixed in version 1.0.9.1 | https://bz.apache.org/bugzilla/show_bug.cgi?id=50025 | CC-MAIN-2018-34 | refinedweb | 530 | 57.77 |
ok did you under stand my example though?, if say starting small I just want to make a java applet that opens up that website in a jpanel with it included inside of it.
Bowman 2 - Free Online...
Type: Posts; User: centralnathan
ok did you under stand my example though?, if say starting small I just want to make a java applet that opens up that website in a jpanel with it included inside of it.
Bowman 2 - Free Online...
I have a program, it makes a giant text box in the middle (and a couple of buttons in a jpanel).
Where this text box is I want it replaced with say a game or something
Bowman 2 - Free Online...
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import javax.swing.event.*;
public class MyPanel extends JPanel {
private JButton jcomp1;
private JButton jcomp2;
...
can you help now
This is the new code with the .exists() and it comes up with this error:
Exception in thread "main" java.io.FileNotFoundException: user.txt (The system cannot find the file specified)
at...
I tried the .exists(); first but i come up with the same error i posted above. i may have been useing it wrong. good you guys post an example of how to use it correctly? (either with this program or...
unfortunately i have tried .exist() but it still came up with the same error,
please note i program in netbeans on a public computer that does not have access to command prompt
there you go
so the problem with my code is it will not run unless i manually make a text file, i want it to check if the text files exist and if they do not (and they wont) then to create them, but i always come... | http://www.javaprogrammingforums.com/search.php?s=ac85f8dd99aa0d1ab869ad86326de9ab&searchid=1514799 | CC-MAIN-2015-18 | refinedweb | 300 | 81.33 |
Compute Itti & Baldi surprise over video frames. More...
#include <jevoisbase/Components/Saliency/Surprise.H>
Compute Itti & Baldi surprise over video frames.
This component detects surprising events in video frames using Itti & Baldi's Bayesian theory of surprise., the observation of clouds is said to carry a high surprise. Itti & Baldi further specify how to compute surprise by using Bayes' theorem to compute posterior beliefs in a pricipled way, and by using the Kullback-Leibler.
In this component, we compute feature maps and a saliency map. These will provide some degree of invariance and robustness to noise, which will yield more stable overall results than if we were to compute surprise directly over RGB pixel values (see next point);
We then compute surprise in each pixel of each feature map. This is similar to what Itti & Baldi did but simplified to run in real time on the JeVois smart camera. Each pixel in each feature map will over time gather beliefs about what it usually 'sees' at that location in the video. When things change significantly and in a surprising way, that pixel will emit a local surprise signal. Because surprise is more complex than just computing an instantaneous difference, or measuring whether the current observation simply is an outlier to a learned distribution, it will be able to handle periodic motions (foliage in the wind, ripples on a body of water), periodic flickers (a constantly blinking light in the field of view), and noise.
This approach is related to [R. C. Voorhies, L. Elazary, L. Itti, Neuromorphic Bayesian Surprise for Far-Range Event Detection, In: Proc. 9th IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS), Beijing, China, Sep 2012.]().
Definition at line 75 of file Surprise.H.
Constructor.
Definition at line 21 of file Surprise.C.
References itsSaliency.
Virtual destructor for safe inheritance.
Definition at line 28 of file Surprise.C.
Compute surprise from a YUYV video frame and return the surprise value in wows.
Definition at line 559 of file Surprise.C.
References itsSaliency.
Parameter.
Parameter.
Definition at line 89 of file Surprise.H.
Referenced by process(), and Surprise(). | http://jevois.org/basedoc/classSurprise.html | CC-MAIN-2017-43 | refinedweb | 353 | 55.74 |
Apart from syntax, there are some differences when writing scripts in C# or Boo. Most notable are:
1. Inherit from MonoBehaviour
All behaviour scripts must inherit from MonoBehaviour (directly or indirectly). This happens automatically in Javascript, but must be explicitly done inside C# or Boo scripts. If you create your script inside Unity through the Asset -> Create -> C Sharp/Boo Script menu, the created template will already contain the necessary definition.
public class NewBehaviourScript : MonoBehaviour {...} // C#class NewBehaviourScript (MonoBehaviour): ... # Boo
2. Use the Awake or Start function to do initialisation.
What you would put outside any functions in Javascript, you put inside Awake or Start function in C# or Boo.
The difference between Awake and Start is that Awake is run when a scene is loaded and Start is called just before the first call to an Update or a FixedUpdate function. All Awake functions are called before any Start functions are called.
3. The class name must match the file name.
In Javascript, the class name is implicitly set to the file name of the script (minus the file extension). This must be done manually in C# and Boo.
4. Coroutines have a different syntax in C#.
Coroutines have to have a return type of IEnumerator and you yield using yield return ... ; instead of just yield ... ;.
using System.Collections;using UnityEngine;public class NewBehaviourScript : MonoBehaviour { // C# coroutine IEnumerator SomeCoroutine () { // Wait for one frame yield return 0; // Wait for two seconds yield return new WaitForSeconds (2); }}
5. Don't use namespaces.
Unity doesn't support placing your scripts inside of a namespace at the moment.
This requirement will be removed in a future version.
6. Only member variables are serialized and are shown in the Inspector.
Private and protected member variables are shown only in Debug Mode.
Properties are not serialized or shown in the inspector.
7. Avoid using the constructor or variable initializers.
Never initialize any values in the constructor or variable initializers in a MonoBehaviour script. Instead use Awake or Start for this purpose.
Unity automatically invokes the constructor even when in edit mode. This usually happens directly after compilation of a script
because the constructor needs to be invoked in order to retrieve default variable values. Not only will the constructor be called at unforeseen times,
it might also be called for prefabs or inactive game objects.
Using the constructor when the class inherits from MonoBehaviour, will make the constructor to be called at unwanted times and in many cases might cause Unity to crash.
Only use constructors if you are inheriting from ScriptableObject.
In the case of eg. a singleton pattern using the constructor this can have severe consequences and lead to seemingly random null reference exceptions.
So if you want to implement eg. a singleton pattern do not use the the constructor, instead use Awake.
Actually there is no reason why you should ever have any code in a constructor for a class that inherits from MonoBehaviour. | http://unity3d.com/support/documentation/ScriptReference/index.Writing_Scripts_in_Csharp_26_Boo.html | crawl-003 | refinedweb | 492 | 66.64 |
CLH News - THE LEADING PUBLICATION FOR THE INDEPENDENT HOSPITALITY SECTOR

ISSUE 238 | APRIL/MAY 2021

Inside this issue:
Editor's Viewpoint - Page 2
Microwaves - Pages 22-23
Hygiene & Infection Control - Pages 24-27
Hospitality Technology - Pages 28-31
Drinks Dispense - Pages 32-34
Outdoor Spaces - Pages 35-39
Products and Services - Page 40
Design and Refit - Pages 41-44
Property and Professional - Pages 45-47
Prime Minister Backtracks on Vaccine Passports Following Backlash
Prime Minister Boris Johnson has backtracked on earlier comments that pub landlords could be forced to ask customers to provide a negative Covid-19 test result or show their "vaccine passport" before they were admitted inside the premises. The Prime Minister has now said that vaccine certification may only be introduced once all British adults have been offered coronavirus jabs, after an angry backlash from the hospitality sector.

In discussions which took place on March 24 regarding the 'roadmap' out of lockdown, Conservative MP William Wragg asked the Prime Minister if "Covid vaccine certification for domestic use" could be required for pub-goers. The Prime Minister responded, telling the Liaison Committee that individual pub landlords should decide whether they will ask for a certificate of vaccination before they allow a customer to enter their pub.

"I think the public have been thinking deeply about it and I have the impression that there is a huge wisdom in the public's feeling about this," the Prime Minister said.
Editor's Viewpoint
EDITOR
Peter Adams

A report has revealed that pubs and restaurants are seeing a huge surge in outdoor table bookings as we lead up to the April 12 partial reopening of the sector. Almost 5 million people are making reservations for the first two weeks from April 12, and I suspect over the next two weeks that number will increase considerably, maybe even double!

Even better news for the sector and the Treasury is that £2.4 billion could be injected into the economy in the first month from April 12, with the average hospitality spend being £167 per person. Bear in mind that is just for outdoor dining; the sector will still be under restrictions until June 21, when we hope all restrictions will be lifted.

Fantastic cause for celebration. It is tempered, however, by the ludicrous suggestions, rumours and comments that we may need vaccine passports before we can enter a pub or restaurant, although after an angry backlash the PM has backtracked. To quote Kate Nicholls, CEO of UKHospitality, "it is simply unworkable". And that is it in a nutshell!

Am I the only one asking the question: why pubs? Or have I missed the Prime Minister's comments when he said we would need vaccine passports to visit supermarkets, cinemas, garden centres, art galleries, or to take the dog to the vet? Why make the comments now, just as the sector prepares for its long-awaited opening?

Prior to this "bonkers" suggestion, Wetherspoon's Tim Martin was forthright when he spoke of "the idiocy of outdoor drinking indoors", which in effect is what will happen with some of the larger marquees. Don't get me wrong: if we have to endure partial opening, then Housing and Communities Secretary Robert Jenrick's decision to permit pubs, bars, restaurants and other hospitality venues to put up gazebos and marquees without planning permission from April 12, as part of a new £56 million "Welcome Back" fund, is most welcome. But really, in the grander scheme of things, I agree with Tim Martin when he said: "The pub trade and the country need less weekly, barmy ideas from Boris in the future, and more review by parliament, consultation with industry and common sense." Which once again begs the question: why give the sector so much support and then hamper it days later with a clearly unworkable proposal?

With bookings rolling in, and as the public champs at the bit to get back to normality, with the added bonus of staycations this year, July or August would be great months for Chancellor Rishi Sunak to reintroduce what was the best government initiative in years, "Eat Out To Help Out".
It does not necessarily have to be at the same level as before, but given how the public have complied and sacrificed this past year, a little bit of goodwill on the government's behalf would go a very long way!

Please do email us with any of your news and views; we would be delighted to print them. I can be contacted at edit@catererlicensee.com
Prime Minister Backtracks on Vaccine Passports Following Backlash
Apr/May 2021
“Human beings instinctively recognise when something is dangerous and nasty to them, they can see that Covid is collectively a threat, and they want us as their government and me as the Prime Minister to take all the actions I can to protect them”. However, following criticism from industry bodies and the wider public, the Prime Minister has since clarified his earlier comments saying: “There is going to be a role for certification,” but he added: “You might only be able to implement a thoroughgoing vaccination passport scheme, in the context of when absolutely everybody had been offered a vaccine.” UKHospitality Chief Executive Kate Nicholls warned that such a scheme risked creating a two-tier system of viability among businesses, and a situation in which young staff members, due to be vaccinated last, are able to work in a pub but not able to visit it socially.
“Over the past year our sector has been devastated and businesses have only known forced closure or the most severe restrictions. We need to avoid any further measures that give rise to the potential of further restrictions.

“A vaccine passport system may be useful in opening up international travel more quickly and it might play a role at large-scale events in the near future, but it should not be used for day-to-day hospitality. A vaccine passport scheme in pubs and the wider hospitality sector would not be the liberating move the Prime Minister believes it to be. It would see further restrictions imposed at the worst possible time.

“Pubs and other hospitality businesses have spent a significant amount of time, energy and money ensuring their premises are safe and ready to welcome customers back in April and May. We need to throw off the shackles of Coronavirus in line with the Government’s roadmap; not impose more checks on our ability to socialise and do business.”

Chris Jowsey, chief executive of Admiral Taverns, which operates 1,000 pubs in the UK, said that his reaction to the idea “could not be printed”. “I think it would be extremely difficult to enforce. I think it is discriminatory in many respects. It’s likely to cause potential conflict between people working in the pub and people visiting, and it goes against the whole ethos of pubs being a welcoming and communal place,” he added.

Steven Alton, BII CEO, commented: “Public Health England proved last year that despite over 60 million visits a week in the summer of 2020 to hospitality venues, there was no discernible rise in Covid rates caused by people safely socialising in our pubs. To demand even more of them at a time when their businesses are teetering on the edge, and at the same time excluding members of their communities who may not have the option of vaccination, would be devastating for our sector.

“The recovery of our nation’s economy will rely heavily on our vibrant and vital sector, which contributes almost £60 billion to the treasury each year. To introduce this unworkable solution at a point where our pubs need to be open and trading freely and fully, will seriously endanger the survival of our pubs and materially damage the communities, high streets, livelihoods, suppliers & brewers and local employment that rely so heavily upon them.”

LEGAL IMPLICATIONS

The move may also lead to legal implications for employers, according to Emma Swan, head of commercial employment at Forbes Solicitors:

“There is a concern among businesses that the eventual rolling back of restrictions, vital to business survival, may be linked to the use of a vaccine passport scheme. That cannot be allowed to happen. It would put business owners in a hugely invidious position and has the potential to effectively impose further unnecessary restrictions on businesses that cannot or will not operate a passport scheme.”
“No Jab, No Job” - The Latest Advice
CLH News
By Daniel Stander, employment lawyer in the London office of international law firm, Vedder Price
With the outdoor hospitality sector set to reopen from 12 April, Daniel Stander, employment lawyer at Vedder Price LLP, breaks down the issue of mandatory employee vaccination.
1) “JAB / JOB”

As over 50% of all UK adults have now received their first dose of the vaccine, the simplest scenario an employer might face is reopening with a workforce that is already fully vaccinated. In these cases, employers will not need to engage with the question of mandatory vaccination. However, employers who wish to confirm and record the vaccination status of their employees should be mindful of the special category data protections that are applicable to health data. A data privacy impact assessment looking at how the business holds and processes such data should be conducted.
2) “NO JAB / NO JOB”

Despite the above statistic, recent YouGov data suggests that up to 21% of people still do not intend on being vaccinated. Concerns around the level of vaccine take-up have led to the Government announcing a policy of mandatory vaccination for workers within the care sector, a move without modern precedent, and only time will tell whether this is the beginnings of a sea change in vaccination law, or whether the peculiarities of the care sector make it a unique case for government intervention. Given this uncertainty, employers should keep in mind that having a policy requiring vaccination is one thing, but being able to enforce that policy is another entirely. The relevant question is whether the requirement to be vaccinated amounts to a “reasonable request” that the employer can make of the employee. Employers will need to justify why vaccination is necessary, considering the balance between the employee’s individual liberties and the benefit to colleagues and customers in reducing risk of transmission and infection in the workplace.
It is important to remember that this notion of reasonableness is fact-specific. When considering close contact or public-facing roles, such as those in the hospitality sector, there may be a stronger argument in favour of requiring vaccination to enable employers to comply with their health and safety obligations and maintain relationships with their customers. That being said, the issue of mandatory vaccination remains untested at Employment Tribunal level and an untempered reliance on vaccination status when hiring, disciplining, or dismissing employees may well leave employers open to legal action.
3) BEWARE OF DISCRIMINATORY PRACTICES

Blanket policies of compulsory vaccination overlook the reality that employees may have a legitimate reason for not being vaccinated. This can be the case where the employee is pregnant, has a disability, or is simply not yet in the relevant age bracket to receive the vaccine. An inflexible approach that fails to engage with these realities is likely to invite discrimination claims under the Equality Act. There has been discussion around whether such policies risk discriminating against certain religious groups. However, the availability of vaccines free from animal products fetters the likelihood of such claims. That said, it remains to be seen whether Equality Act protections should be afforded to those who argue being anti-vax is a philosophical belief.

Employers faced with a discrimination complaint would need to be able to justify their vaccination policy as an objective means of achieving a legitimate aim. Hospitality businesses are rightly concerned about protecting the health of their workers and their customers – but consideration must also be given to what the measure would achieve in reality and whether the aim could be achieved through less intrusive means. Given that no vaccine is known to be 100% effective, and that social distancing is likely to remain part of our lives for some time to come, it may not be justified to discriminate where an employee could be re-deployed or otherwise abide by social distancing guidelines. In these difficult cases, it is important that employers’ actions are considered, proportionate and take full account of the facts at hand.
CONCLUSIONS

Ultimately, employers are recommended to avoid heavy-handed practices and instead approach vaccination in a cautious and considered way: leading with empathy, encouraging their workforces to be vaccinated and, in so doing, educating them about the benefits that vaccination would bring for the individual, the business and the public at large.
Pubs And Restaurants Will Be Allowed To Erect A Marquee WITHOUT Planning Permission

Pubs, bars, restaurants and other hospitality venues will be allowed to put up gazebos and marquees without planning permission from April 12, as the sector begins its move out of lockdown, as part of a new £56 million “Welcome Back” fund which aims to help England’s high streets and coastal towns safely reopen as coronavirus restrictions are eased toward June 21, when it is anticipated that all restrictions will be removed.
Any venue, even if it is listed, can put up a marquee or structure of any size on its land without planning permission, and keep it up until September, as the government looks to get the industry up and running as quickly as possible. Side panels will need to be open for ventilation, but paperwork is not required.
Part of the funding pot will be exclusively allocated for coastal areas, with all English seaside resorts to receive support, under the Government’s plans to help holidaymakers this year, as the country embraces “staycations”, with much of Europe now entering a third lockdown and travel restrictions still in place.
The funding will allow councils to improve the “look and feel” of local areas, creating more outdoor seating areas as well as markets and popup food stalls. Pubs and restaurants, including premises in listed buildings, will be given the flexibility to provide more outdoor space for customers for the whole summer rather than the current 28 days permitted, with figures suggesting that the move will allow 9,000 additional venues to open next month rather than having to wait for indoor serving in May.
There are also to be restrictions on private parking fines, in a bid to attract drivers and give them confidence when visiting towns.
Mr Jenrick said: “As we move to the next stage on the roadmap out of lockdown we are all looking forward to being reunited with friends and family outdoors and making a safe and happy return to our favourite shops, cafes, pubs and restaurants.”

This should no longer be an issue after ministers ordered councils to back down on threats to cripple the reopening with red tape. An estimated 70 councils will also receive “targeted, hands-on support” from the Government’s High Streets Task Force, described as “an elite team of high street experts who will advise them on how to adapt to meet changing consumer demands so they can thrive in the years ahead”.
Pork Scratchings Named Favourite Pub Snack

Pork scratchings are the UK’s favourite pub snack according to a recent poll, with peanuts (salted and dry roasted) second and third, followed by salt and vinegar crisps. Pub snacks have long been a traditional favourite, often bought on impulse, and are considered the perfect partner with a drink, with pork scratchings, traditionally made from pork rind, delivering a distinctive combination of hard crunch and a rich soft texture, and seen as the perfect “meat and drink” combination for pub goers, with 83% of pork snacks consumed with a drink.

The past 12 months have provided unprecedented challenges for the pub sector. Now, as pubs prepare to open their outdoor spaces on April 12, with indoor spaces open from May 17, 2,000 pubgoers were asked to choose their favourite snacks, with no limit on their selections. 37% named pork scratchings among their favourites, while 34 per cent named peanuts. The survey, which separated crisps by flavour, surprisingly revealed that only 4 per cent of pubgoers named cheese and onion among their top pub snacks, with salt and vinegar coming third while ready salted came fourth. Bacon fries made the top 20, as did Bombay mix, Scotch eggs and pickled eggs. Olives, salami sticks and wasabi peas were also among the preferred snacks.

According to food historians, butchers started selling pork scratchings in the 1930s, calling it crackling; however, it is believed they first went to market almost 100 years ago in the West Midlands as a working-class snack, arising from the tradition of families keeping their own pig at home to be fattened for slaughter.

Matt Smith, Marketing Director at Tayto Group, which owns the UK’s top three pork snack brands, says: “We’ve seen sales of scratchings soar – even through lockdown – as people ‘take the pub taste home’. We know that snack sales increase by up to 80% when they are made more visible. If customers can see your range they are more likely to buy, so stocking the right range and ensuring it is highly visible is crucial to capturing those profitable incremental impulse sales. We have developed some fantastic POS including bar runners, beer mats and display solutions specifically to support on-trade sales. We’re thrilled that it’s ‘official’ – when it comes to the ULTIMATE PUB SNACK, there’s no matching a scratching!”

Rob Parkin of SCT-SCT commented that scratchings “are back in fashion!”
1-In-3 Customers Will Return While ‘Outdoor Only’
1-in-3 UK adults say they will visit pubs and restaurants when they re-open with outdoor seating and service only, according to a snap poll by KAM Media. The research found that Generation Z and younger Millennials are least likely to be put off by the prospect of ‘outdoor seating only’ with 43% of 18-34 year olds saying they’ll return, compared with 24% of over 55 year olds.
The research found that a further 26% of UK adults intend to wait for pubs and restaurants to re-open with indoor seating and service before they visit them. And a further 33% said they are not planning on visiting pubs or restaurants at all for the foreseeable future due to Covid-19. This figure was 42% for over 55 year olds.

Katy Moses, MD, KAM Media: “It’s positive that such a large proportion of potential customers are happy to dine and drink outside. I’ve heard examples of incredibly high numbers of bookings where operators are lucky enough to have outdoor space. And it’s not a huge surprise that younger customers will be the first back into our beer gardens and outdoor dining areas. This reflects what we saw last July too when hospitality first reopened. The weather will obviously have a huge impact too.”

“Most of our research throughout the pandemic has pointed to the fact that older customers are being more cautious, for obvious reasons. It seems that despite the vaccination programme, these customers are still less likely to return to hospitality right away. It is likely that many have also got used to staying at home, helped along by all the fantastic new ‘hospitality at home’ options; operators will not only need to make them feel safe but also remind them what they’ve been missing,” commented Moses.

“Overall, UK consumers are considerably less fearful coming out of lockdown 3 compared with lockdown 1. They are however much more frustrated and bored, which is a huge opportunity for hospitality to be a saviour in the eyes of its customers and give them something to smile about once lockdown measures lift again.”

Lockdown Legacy: A Year Of Pain For The Hospitality Sector

One year on from the first lockdown, a shattered industry calls for the Government to be guided by “data not dates” and ease restrictions. Exactly a year since the first national lockdown, UKHospitality has revealed the terrible toll of the Covid-19 pandemic on a devastated sector that has experienced more than eight months of closure, costing more than 600,000 jobs, 12,000 business failures and lost sales of £86bn. This delay means even more jobs are in danger and even more businesses are facing ruin. Until restrictions are lifted, pubs, bars, restaurants, hotels and leisure facilities will not be able to break even and, with the expectation that consumer confidence will take time to recover, trading is unlikely to return to anything like normal levels for at least six months.
UKHospitality says that it’s critical that the Westminster Government sticks to its plans for:
• Hotels with self-contained rooms to be able to open alongside other self-contained accommodation on 12 April
• Earlier re-opening of children’s indoor play areas (currently set for 17 May)
• People to be able to order via a hatch or outdoor till on outdoor re-opening in April
• People to be able to order at the bar from indoor opening in May, and for customers to be allowed to consume drinks while standing outdoors
• Covid-secure weddings and receptions indoors from April, with an increase in guest numbers from May 17 in line with sporting and other events.
A Reckless Policy; Mandatory Vaccination
By Helen Jamieson, Founder and MD at Jaluch HR & Training
COVID-19 vaccination is an essential tool to help stop the pandemic, and businesses have already been encouraging employees to take up the offer as soon as the opportunity arises. Some have even gone a step further. For example, Barchester Healthcare has announced a ‘No Jab, No Job’ policy for new hires. Others are considering across-the-board mandatory vaccinations for all staff. The intention of such policies is clear: to protect employees and clients. However, the issue of ‘forced’ vaccination is a legal minefield. For care leaders, this is an especially pertinent dilemma. If any sector should adopt a mandatory vaccination policy, surely it should be the one that supports our most vulnerable?
A RECKLESS POLICY

In my view, as an HR adviser for some 30+ years, it would be a brave, even reckless business that employs a ‘no jab, no job’ policy. Existing legislation and recent government statements have made it clear that vaccination is not mandatory. While vaccinations have been administered to millions of individuals, there are many who are refusing, including many who work with the most vulnerable, such as in care homes. Whether for religious or spiritual reasons, health concerns, fear of needles or mistrust of vaccinations, there are many reasons why an employee might decline a vaccination.

For health and social care employers this causes a myriad of issues. Bear in mind that under RIDDOR and the Health and Safety at Work Act 1974, employers have a duty to take all reasonable steps to ensure the health and safety of staff and clients. That means that where an employee refuses vaccination, the employer has to revisit and reconsider how it can operate, given this duty of care. So let’s take a look at this issue in more detail...
ENCOURAGEMENT FIRST

For employees whose hesitancy over vaccination is borne of fear, mistrust or misinformation, as an employer the best place to start is with a mission to inform and encourage. This means taking a proactive approach to communicating the benefits of vaccination to staff - carrot rather than stick. The CIPD (Chartered Institute of Personnel & Development) suggests running an awareness campaign based on NHS information. It also suggests employers:

- Offer employees consistent, accessible and factual safety data which promotes the genuine achievement of science in producing an effective vaccine.
- Consider counteracting misinformation and conspiracy theories spread through social media.
- Encourage staff to read up about COVID-19 vaccinations via official and reliable sources.

Staff should feel able to raise anxieties about vaccination with their managers. Finding out the reasons why they are hesitant means you’re able to point them towards the right information, from trusted sources. Encouraging employees to make the choice to receive the vaccination without force or coercion will always deliver a better outcome.
A REASONABLE REQUEST

Given the nature of health and social care clientele, it may be considered a reasonable request to ask your staff to have a vaccine as a condition of their employment. Failure to adhere to a reasonable request can, in theory, lead to grounds for disciplinary action and ultimately dismissal. But be very careful. Disciplinary action and dismissal in normal circumstances can be fraught, with all the checks, balances, procedures and legislation we have in place to ensure fairness. COVID-19 is no different, other than the fact that with no tribunal cases having been brought yet, there is no precedent and therefore no direction on the possible outcome. Do you want to be the test case? And how will your stance of requiring vaccination for Covid stack up, when considering fairness, when you do not require vaccinations for flu or shingles, etc?

Rather than rush to disciplinary action, first and foremost make sure you understand the reasons why they are objecting. Are they allergic, or do they have a medical condition that prevents them? Under equality and discrimination law they could be protected from any such action - even a fear of needles could be covered under this. Where this is the case, you could consider redeployment from ‘front line’ duties or take other steps to ensure Covid-secure health and safety - additional PPE, for example.
WHAT ABOUT SO-CALLED ‘ANTI-VAXXERS’?

There is some debate about whether the refusal of a vaccine can be considered a protected philosophical belief under the Equality Act 2010. It is possible that if it came to tribunal, an employer could successfully argue that an ‘anti-vax’ opinion does not constitute a philosophical belief. With no case law to draw from in this area, is it a risk worth taking? Again, conversation, communication and information should be the preferred strategy in any such circumstance. If that fails to shift views you may need to consider not allowing unvaccinated employees to work and ultimately, consider a dismissal. Any employers considering this should always seek professional advice.
CHANGING EMPLOYMENT CONTRACTS

For existing employees, any unilateral change to the contract by the employer could lead to resignations and claims of constructive dismissal. It is possible for employers to ask for agreement to vary the contract to include a mandatory vaccination clause. Whether or not agreement is given, practically speaking there’s still no way for the employer to legally force employees to have a vaccination. This would be akin to criminal assault! Some employers have taken the approach that only new starters will have a mandatory vaccine clause in their contract. This avoids issues around changing the terms of employment of existing workers but only half solves the problem.
DATA PROTECTION AND PRIVACY

As care providers it may be reasonable for you to ask employees about their vaccination status. However, bear in mind that the vilification of those working in the NHS who have refused the flu jab for years has, by and large, driven the non-vaccination group underground, and they are therefore likely to strongly refute that it is reasonable for you to ask for such sensitive data. If you do get the data, all such information must be treated sensitively and in line with data protection measures.

Proof, of course, is another matter - will you take their word for it? With the government naysaying vaccination passports, what proof can you ask for and what proof will be reliable? Already there are fake vaccination certificates springing up on the internet for travellers who are looking for a workaround to airline requirements. At present there is no real ‘proof’ you can ask for, other than making a specific application to a GP for confirmation, which may or may not be given, depending on each GP surgery’s approach to providing access to such information. If employees refuse to answer, we return to the issue of whether they are refusing to comply with a reasonable request - again a minefield given the lack of case law on this issue.

Remember, health-related data is subject to strict data protection regulations: GDPR and the Data Protection Act 2018. You will need to consider your current data protection and privacy policies and amend them accordingly, bearing in mind the sensitivity and classification of medical data under these rules.
AND FINALLY...

The subject of the COVID-19 vaccination programme can be contentious and lead to the expression of strong opinions. However, employees must remain responsible and respectful when communicating with their colleagues about COVID-19 vaccinations. Any employee who is offended by, or concerned about, a colleague's behaviour in this regard should feel able to raise the matter with management and/or raise a formal complaint via a clear grievance procedure. Calm, responsible leadership will be key here to minimising time-consuming and costly complaints, as well as maintaining good employee relations and trust.

As you’re beginning to see, the issue of ‘mandatory’ COVID-19 vaccination is multi-faceted. In determining your approach, other considerations include dealing with time off for vaccination appointments, self-isolation following vaccinations and interpersonal issues between staff. The list goes on... Health and social care vaccinations are pushing ahead at speed. With so many concerns and potential conflicts arising from this, it is vital that you have a clear and comprehensive COVID-19 vaccination policy in place as soon as possible. Support, advice and help to determine this policy is available. Don’t wait for conflict to rear its head; get informed and be prepared!
UK Removes EU Cap on Covid Grants for Struggling Businesses

The government has tripled the size limit of Covid-19 grants accessible to businesses, after being criticised for complying with EU state aid regulations months after the end of the Brexit transition period. The UK was still applying temporary EU measures, which locked some businesses out of the £4.6 billion emergency coronavirus grant package announced in January. During the Brexit transition period, which expired at the end of December, the UK government had signed up to the European Commission’s “Temporary Framework” for state aid, which restricted individual companies from receiving more than €4 million in Covid-19 crisis support. The state aid cap on government grants has now been raised to £10.9m, meaning larger hospitality groups will be able to gain access to more support.
Business minister Paul Scully, whose responsibilities include restaurants and pubs, announced the move, saying: "We continue to back businesses of all sizes through the pandemic and I'm delighted to see the cap on Covid-19 support grants raised to £10.9m. Extending our support will help retail and hospitality chains and the thousands of staff they employ."

Commenting on the increase, Kate Nicholls, CEO of UKHospitality, said: “The increase in the value of subsidies permitted to businesses is a positive move by Government and will allow more businesses to access the grants that they so desperately need. While this cut-off means that some businesses will continue to miss out on parts of the funding that Government has announced, it is a big step forward and provides certainty for business. This increase must be communicated to local authorities urgently to ensure that funds are paid out. Government could go further
and explore uncapped grants in respect of Covid-19 in line with EU subsidy rules. “The Business Secretary has rightly recognised that these companies are significant employers and that 230,000 people’s jobs were potentially at risk if this emergency funding had not been provided. “Government must now look urgently at the arbitrary £2 million cap imposed on business rates relief in Wednesday’s Budget. This will see many mid-sized businesses facing full rates bills in July, just days after reopening. This limit on support for hard-pressed hospitality businesses is deeply damaging and could threaten the survival of jobs and businesses in the sector, as mid-sized companies are forced to prioritise paying tax over paying wages. We urge Government to take the same pragmatic and sensible approach to rates relief as with subsidies and review their approach on business rates support.”
More Than 7,500 of Britain’s Licensed Premises Lost in the Year of COVID

Figures from the forthcoming edition of the Market Recovery Monitor from CGA and AlixPartners reveal intensifying closures in 2021 and a devastating toll on independents. Britain has 7,592 fewer licensed premises than it did before the COVID-19 pandemic hit, according to a new report to be published next week by analysts CGA and advisory firm AlixPartners.

The March edition of the Market Recovery Monitor, published as the UK marks 12 months since its first lockdown, sets out the full devastating impact of the pandemic and lockdowns on hospitality, including a rapid acceleration in closures since the start of 2021. Britain’s total licensed premises fell by 2,713 over January and February—equivalent to 46 closures a day, or one every 31 minutes. In total, Britain had 107,516 sites at the end of February 2021, down by 7,592 or 6.6% from 115,108 in March 2020.

The Market Recovery Monitor reveals how independent businesses have borne the brunt of closures. A total of 5,112 have been lost since March 2020, including 1,971 in January and February alone. This reflects the vulnerability of small and family-run businesses by comparison to well-invested restaurant and pub groups, which have recorded 1,229
closures—fewer than a quarter of the independent sector’s number. Karl Chessell, CGA’s business unit director for hospitality operators and food, EMEA, said: “While hospitality finally has a roadmap out of lockdown, these figures show that dozens more businesses are being pushed to collapse every day. Losing Christmas sales had a shattering impact on many entrepreneurial restaurants, pubs and bars, who add so much colour to our high streets and enrich communities up and down Britain. Hospitality is a vibrant sector that can help to kickstart the UK’s economic recovery this summer, but in the meantime support is desperately needed to avoid thousands more business failures.” Graeme Smith, AlixPartners’ managing director, said: “These figures show what a truly devastating 12 months it has been for the hospitality sector. All segments of the market have been impacted, but the dynamic independent sector has borne the brunt of closures. The pandemic has reshaped the market for many years to come and unfortunately there are likely to be further casualties before businesses are permitted to trade without restrictions this summer. With many businesses unable to trade before 17 May, further support is needed for the industry, which is creaking at the seams.”
UKHospitality: Business Rates Relief Cap Could “Strangle Sector Recovery”
UKHospitality is warning that the business rates relief cap will jeopardise the futures of thousands of hospitality venues, which will face full rates bills within weeks following the unrestricted opening of the sector planned for June. Remaining ratepayers will also begin having to pay rates bills before they are able to afford to do so. UKHospitality analysis shows almost 8,000 business venues, employing about 343,000 people, will be paying full rates in July, despite Budget measures to soften the acute rates burden. A further 1,850 venues would face the same situation before the end of September. This will likely prompt businesses to look at slashing costs, such as closing unviable sites, cutting jobs or holding back investment. The Chancellor announced in the Budget a full business rates holiday for all hospitality businesses for the first quarter of the financial year (Apr-Jun) and then two-thirds off for the remainder of the year (Jul 21-Mar 22). However, a cap of £2 million on relief available to individual firms means
that a significant proportion of the sector will miss out on the benefits. The new cap will typically affect businesses with either large sites or those companies that have grown to multiple sites, along with businesses in high rental areas such as high streets and city centres. It is also likely to penalise operators who have previously invested to improve their sites, therefore resulting in higher rates bills under the current system. As a solution, UKHospitality is calling on the Treasury to extend the period for which the 100% rates relief (uncapped) would apply from three to six months. It proposes that this move is balanced by a reduction in the level of relief for the remainder of the year to 50%. The move would support cashflow for all sizes of operation, as well as assisting those businesses that will have limited demand during the summer. Kate Nicholls, Chief Executive of UKHospitality, said: “While the Budget was broadly positive for the hospitality sector with a range of welcome measures, the cap on business rates support really took the shine off things, by excluding so many potential recipients. The cap comes into effect just days after trading restrictions are due to be lifted and will put a major economic drag on the businesses affected and risk the jobs that they support. “For all ratepaying hospitality businesses, their bills will begin landing in June, with demands for payment before they are back on their feet. July is simply too early for businesses to be expected to start repaying rates after a devastating year of closure, restrictions and accumulation of debt. “Hospitality stands ready to play its part in creating new jobs and boosting our communities across the country, but this policy risks strangling the recovery in its infancy. Our proposed solution can unleash greater economic activity and we urge the Treasury to follow Wales and Scotland’s lead and provide greater relief.”
CLH News
Weymouth Pubs Donate All Surplus Stock To Local Charities
Keith Treggiden, General Manager of Rendezvous & Royal Oak and Slug and Lettuce on St Thomas Street in Weymouth, has donated all his venues’ surplus stock to the local Weymouth Community Fire Station, Weymouth Community Hospital and the local paramedics. All three bars’ surplus crisps, snacks, nuts, juices and soft drinks were donated to the invaluable community services to show appreciation and ensure the people who keep them running could enjoy a break with a snack whenever they needed one. Keith has been active throughout all three lockdowns, from helping to organise the Virtual Quayfest online festival that raised over £13,000 for the NHS, to ensuring the well-being of his three teams by checking in and delivering small gifts to them on his bicycle. Keith said: “I was absolutely delighted to be able to give something back to the people who have kept Weymouth safe over the last year. It’s not much, but if we can show our appreciation with the small token of ensuring that each of these services has a free drink and a snack when they need one, then I am a happy man. “We can’t wait to begin to reopen next month. It’s been a hard year, but as with everyone across the nation, we hope that this will be the last lockdown and we can start moving towards a new normal where we can see our family and friends and enjoy a great British Summer.” Rendezvous & Royal Oak is set to reopen on Monday 12 April, with Slug and Lettuce following suit on Monday 17 May.
A Year On, How the Hospitality Sector Is Fighting the Odds
By Kunal Sawhney, CEO of Kalkine
It is exactly 12 months since the Boris Johnson government first announced a lockdown shutting down pubs, bars, restaurants and eateries, among other establishments. At the time, the hospitality sector, which is the third-biggest employer in the nation, faced one of the worst crises in its history. According to market observers, 2020 was a bewildering year that impacted every business in the hospitality sector and forced some to shut up shop permanently. The worst part is that it is not over yet. According to recent government data, a 68 per cent slump in output was recorded between February 2020 and January 2021. In the last year, around 10,000 pubs, restaurants and eateries were forced to close, while around 4.8 million people lost their jobs due to the uncertainties. According to data and research consultancy firm CGA, the third lockdown had the worst effect, with the total number of licensed premises falling by 2,713 in January and February, which comes to 46 closures every day. Industry trade body UKHospitality has stated that on-trade sales for the sector dropped from £133.5 billion (US$182bn) in 2019 to £61.7bn (US$84bn) in 2020. The first lockdown itself cost the sector and the high street a whopping £45 billion in revenue, a joint study by Cambridge and Newcastle Universities noted. This is equivalent to £1.4 billion per week and £200 million per day. The night-time economy was hit as most outlets could not operate due to the lockdown restrictions. UKHospitality data shows at least 7,659 licensed premises shut permanently between December 2019 and January 2021. Independent businesses were the worst hit, and small and family-run businesses were more vulnerable to closures than bigger pubs and restaurants with stronger roots.
THE LICENSED PREMISES IN BRITAIN FROM MARCH 2020 TO FEBRUARY 2021
Month            Number
March 2020       115,108
June 2020        115,004
August 2020      113,025
October 2020     111,914
December 2020    110,229
February 2021    107,516
(Data Source: Market Recovery Monitor from CGA and AlixPartners)
Though the government launched schemes to keep the sector afloat, many firms had to lay off their staff when the crisis hit pubs, bars, restaurants and hotels. In most cases, profits turned into losses. As the sector is highly dependent on the seasonal rush, experts estimate around 200,000 jobs tied to Easter, the summer holidays and Christmas were lost last year, creating a severe void.
GOVERNMENT HELP
The UK Government began pouring in assistance as soon as the first lockdown was announced. It first brought in the furlough scheme and then cut VAT and business rates. The government also announced grants based on the rateable value of premises. In Budget 2021, the government extended the VAT cut and the furlough scheme, besides continuing with the business rates holiday.
LIGHT AT THE END OF THE TUNNEL
The UK was one of the first European countries to roll out a vaccination drive. By 19 March, around 26.8 million Britons had received their first dose of the vaccine, and almost 2.1 million had received both doses. Banking on the fast vaccine roll-out and the falling rate of infections, 2021 looks promising, despite the odds. The Boris Johnson government has already chalked out a phased reopening. As per the plan, from April 12 beer gardens and outdoor cafes can reopen, with customers at an eatery or restaurant in groups of no more than six. Hotels, hostels and B&Bs will also be allowed to reopen. From May 17, restaurants will be allowed to serve meals and drinks inside their premises, while nightclubs will be allowed to open from June 21.
WHERE FROM HERE?
Though the market is buoyant about the vaccine rollout and the government’s recent measures to support businesses, it is still very difficult to predict the exact way out of these uncertain times, with added issues such as disruption owing to Brexit. With reports that the UK economy is expected to shrink by 4.2 per cent in the current quarter and that the monthly bill for the current lockdown might be around £5 billion, it remains to be seen whether the government’s efforts will bear fruit. The industry view is that the hospitality sector may lead the country’s economic recovery by providing jobs and bringing down the unemployment numbers significantly. For that to come true, the sector must be allowed to operate without any further limitations and, ideally, no more lockdowns or tiered restrictions.
“Indoor Drinking Outdoors” - Another “Idiocy” Says Tim Martin
As the country moves towards the first stage of the “roadmap” out of lockdown, Wetherspoon chairman Tim Martin has hit out at what he calls the government’s “weekly barmy ideas”.
Politicians, the press and the public should note the tribalism and division implicit in the message from Downing St, he says: the government wants to “strip power from town hall busybodies and stop them blocking moves, which could help pubs recover”.
Like kings of old, Boris and the “quad” are bypassing Parliament, he says, with tacit approval from the evanescent opposition, and are foisting “oven-ready” idiocies on the public.
“There is no doubt that some town halls can be bureaucratic, from time to time, but they’ve generally been helpful to pubs, and have been cooperative in granting permission for outside seating over the years.”
One Lucky Hospitality Business To Win A Game-Changing £30,000 ‘Reopening Package’
Yoello, the hospitality order and pay solution, has launched a £30k reopening giveaway to support a UK hospitality business. After one of the most challenging years in the hospitality trade, Yoello has worked with other hospitality professionals to create a support package that will give a massive boost to any hospitality business opening its doors again in the coming weeks. Thanks to support from partners and friends of Yoello, including Square, Touchpoint, Stampede, Flawsome, The Pop Up Bar Hire Co, Cygnet, Aircharge, Flowerhorn, Stint and The Restaurant Collective, the package is worth more than £30,000. The prize includes two tablet devices, a Square register, a pop-up outdoor bar set-up for a month, Aircharge docking stations, a Covid-safe temperature checker, a sanitisation station, bar stock from Flawsome, Flowerhorn Brewery and Cygnet Distillery, as well as a lifetime Yoello order and pay solution with £10,000 of free transactions - which means that customers can order and pay without downloading an app. The deadline to enter the competition is midnight on April 6th, with the lucky winner announced on April 7th. It is hoped that the prize package will then be delivered before English hospitality businesses open their doors on the 12th.
Scott Waddington, Director at Yoello and former CEO of S.A. Brain, said: “It’s been a really tough journey for hospitality businesses over the last year having to navigate, adapt and endure the Covid restrictions. But the industry is still standing and there is certainly a growing feeling of optimism in the sector as we see a roadmap for reopening and a very successful roll out of vaccines across the country. “Through this giveaway, Yoello wanted to give businesses the chance to win a support package that could be really game-changing, helping to get them ahead with reopening and starting to rebuild their business.” Businesses can easily enter Yoello’s grand £30,000 hospitality reopening giveaway via giveaway.yoello.com; the deadline for businesses to enter is midnight on Saturday 3rd April. Yoello, the Wales-based, award-winning fintech, has worked on various initiatives to support the hospitality sector since launching its order and pay platform last year. One such initiative was the Castle Quarter Cafe project in Cardiff, which was run with FOR Cardiff and Cardiff Council. Designed to encourage trade back into the city centre after the first lockdown, the project gave traders the opportunity to serve additional tables through a 240-seat pop-up food court that brought an additional quarter of a million pounds of revenue into the city centre in the first 10 days of opening.
MPs Show Support for Tax Changes to Help Pubs
MPs showed support for long-term reforms and COVID support packages to help pubs and brewers during a Parliamentary debate, which included CAMRA’s proposal for a preferential rate of duty for draught beer. MPs were debating support for the hospitality industry during the Covid-19 pandemic, with Parliamentarians from all corners of the UK taking part.
Selaine Saxby MP for North Devon, who secured the debate, said in her opening remarks that “a draught beer duty would be targeted, quickly-actioned support, and could play a crucial role in stopping so many of our vibrant pubs and other hospitality businesses from going under”.
Her support for a new draught beer duty rate, which CAMRA has long campaigned for, was echoed by MPs from throughout the UK. MPs also called for more support for brewers during the lockdown restrictions. Charlotte Nichols MP, Chair of the Pubs APPG, said that the loss of trade for brewers due to closed pubs “represents 10 years of lost growth for the sector”, and called for more compensation and support to help them recover.
Speaking after the debate, CAMRA National Chairman Nik Antona said: “We were thrilled to see so many MPs from all parties and across the nations of the UK take part in the debate last night, displaying just how important pubs, clubs and breweries are within the hospitality industry and wider communities. They not only boost local economies and create jobs, but are also a key part of our social fabric, tackling loneliness and social isolation.
“It is clear that there is support across parties for further support to help pubs, clubs and brewers recover from the effects of this crisis, and that there is strong support for a preferential rate of duty for draught beer.
“Further support for our brewers is a must - they have been denied a dedicated support package so far, and we were pleased that several MPs called on the Government to reverse plans to change Small Brewers Relief, which would cause small businesses to pay more tax. This would be a devastating blow, at what is already a time of great financial uncertainty, and we thank the MPs who raised this.
“Some provisions were made to help the industry during the Chancellor’s Budget earlier this month, including the 5% VAT rate being extended until September, and we would like to see this extended further - pubs have not benefitted thus far from the 5% rate due to closures, and will not have long to benefit once restrictions lift. The VAT cut must also include alcohol, in order to help wet-led pubs and social clubs.
“Thank you to all the MPs who took part in the debate. The industry needs more support to ensure it can not only survive, but thrive once restrictions are lifted. The impact of the pandemic on pubs, clubs, and the brewers and cider makers that supply them, will continue to be felt long beyond reopening, and it is vital that this is reflected in the steps taken by Government.”
Brits Rally Behind Pubs Left Adrift For A Further 3 Months
March saw the relaunch of the next phase of the #PubsMatter campaign, with thousands of Brits taking to social media to call on the Chancellor, Rishi Sunak, to rescue our fragile pubs with an emergency package of support.
Thousands of Brits have taken to social media to urge the Government to further support pubs, with thousands of social media posts and emails flooding MPs’ inboxes describing how integral the local pub is to their communities.
In December 2020, the #PubsMatter campaign was launched by a coalition of industry partners including the British Institute of Innkeeping (BII), the Campaign for Real Ale (CAMRA), the British Beer and Pub Association (BBPA), the Society of Independent Brewers (SIBA), the Independent Family Brewers of Britain (IFBB) and UKHospitality (UKH), to remind politicians just how important pubs are to local communities across the UK.
The government’s roadmap for reopening leaves our nation’s pubs closed for a further 3 months, and sees small businesses across the country teetering on the edge of collapse with no customers and, for the majority, no way to open their doors again until at least 17th May. When friends and family can finally come together once more to reconnect and celebrate the end of lockdown, without further support, our much-loved pubs may not have survived to welcome them back.
A spokesperson for the campaign said: “Pubs and breweries across the UK have been amongst the hardest hit businesses in the pandemic, but we also know that they will be the most needed, with the public desperate to get back to those places that allow them to celebrate, commiserate and reconnect with each other once restrictions are lifted.
“Whilst the roadmap from the Prime Minister gives us all hope for a return to more normal life in the summer, our pubs and the supply chain of businesses that support them cannot hold on until then without a further urgent package of support.
“Our sector will be one of the first to bounce back, making it a key part of the economic recovery of our nation, and will support employment for the thousands of people who have lost their jobs over the course of the last 12 months.
“We have also proven that as an industry, we can keep the public safe, with huge amounts of time and money invested in making our pubs Covid-secure. Last summer, 60 million visits a week to hospitality venues without a discernible rise in infection rates showed not only just how safe it was to visit the pub, but how the Great British Pub is at the heart of our communities up and down the UK.
“The Chancellor holds the fate of the nation’s pubs in his hands as we face a critical turning point. The package of measures must support all businesses, including our traditional wet-led pubs, otherwise a large part of the UK’s heritage will be lost forever.”
To find out more and see some highlights from the campaign, visit
One Year On: 2,000 Pubs Lost - 2.1 Billion Pints in Beer Sales Lost - £8.2 Billion in Trade Wiped Out by June 21st
One year on from the Prime Minister ordering the first COVID-19 lockdown, which forced pubs to close, the British Beer & Pub Association has revealed the devastation the UK’s brewers and pubs have faced. The trade association estimates that 2,000 pubs have been lost forever to date, 2.1 billion pints in beer sales lost due to a full year of either forced closure or trading under severe restrictions, and £8.2 billion in trade value wiped out from the sector in beer sales alone.
Since the first lockdown in March 2020, pubs have faced a further two lockdowns. They have also faced severe restrictions to their trade during other periods of being “open”, including tier restrictions that ultimately forced many to stay shut regionally, or to open under conditions that made their trade unviable due to the 10pm curfew and substantial meal rules. Last month the Prime Minister unveiled the Government’s roadmap indicating that pubs will reopen outdoors only from April 12th at the earliest, followed by indoors from May 17th at the earliest, with all restrictions lifted by June 21st at the earliest.
Whilst the BBPA has welcomed continued support for the sector in the most recent Budget, in the form of £2 billion worth of measures including grants and furlough support, it stated that longer-term investment in the sector was still needed. It also expressed concern for wet-led pubs, which would not be able to take advantage of the VAT cut for hospitality, which only applied to food, soft drinks and accommodation, and urged the Government to provide more support for these community pubs. It says it is now all the more important that pubs can operate without restrictions from June 21st, as stated in the Government’s roadmap for reopening, to aid their recovery and the economic fightback after the virus.
This comes as new research by the think tank Localis revealed that pubs have a vital role to play in the COVID-19 recovery and the Government’s own levelling up agenda, but that to do so they must reopen fully.
Emma McClarkin, Chief Executive of the British Beer & Pub Association, said pubs must be “reopened on June 21st as indicated in the roadmap. This is when their recovery will really start and until then we stand to lose more pubs and community assets.”
Nick Mackenzie, CEO of Greene King, commented on the year anniversary of pub closures: “It’s hard to believe that on 20 March it will be a year since we first closed our pubs. Looking back, we had no idea of the challenges we would face including three national lockdowns, tiered and regional reopening strategies, a myriad of different trading restrictions as well as the financial impact on the business. While many of our 40,000 team members have been furloughed and struggled with the uncertainty, I have been continually amazed by their positivity, resilience and sense of community. The stories of how they have pulled together throughout the crisis, in particular supporting their local communities and volunteering for good causes, have been inspiring. “The last year has taught us how important pubs are to communities, and I know our customers are keen to reconnect with their friends and family over a pint or a meal, so it is imperative that we are able to operate without restrictions from 21 June so we can all enjoy the summer. Whilst there will still be challenges in the months ahead, we stand ready to open our doors and welcome people back to the Great British pub experience they know and love.”
Undercover Police To Be Deployed In Clubs and Bars To Keep Women Safe
Proposals are under consideration for plain clothes police officers to be deployed in bars and nightclubs around the country as part of plans to protect women from “predatory” offenders. Following a meeting of the Government’s Crime and Justice Taskforce, chaired by the Prime Minister, the government said it was taking a series of “immediate steps” to improve security and safety. These may include plain clothes police officers attending areas around clubs and bars to identify predatory and suspicious offenders and to “help women feel safer in the night-time economy as we build back from the pandemic”. Pilots of ‘Project Vigilant’, expected to be rolled out across the country, will also see increased patrols as people leave venues at closing time. Among other measures unveiled by the government, policing minister Kit Malthouse will hold a summit in the coming weeks with police and industry representatives from the night-time economy on preparations to protect women as pandemic restrictions lift. The announcements were made against the backdrop of demonstrators taking to the streets of central London to protest against the police treatment of women who attended a vigil for Sarah Everard on Saturday evening.
Preparing for the Reopening of Hospitality
With the reopening of the hospitality sector on the (long-awaited) horizon, Hayden Hibbert, Director of Client Relations at allmanhall, the independently owned food procurement expert, provides advice on procedures and operational changes to help ease the pressure faced by caterers and enable the safety of staff and customers, as venues start welcoming back guests.
With that milestone rapidly approaching, experienced caterer allmanhall’s Hibbert advises that, after a period of closure, operators should plan carefully around labour, social distancing and one-way systems to optimise their venues and make this a happy time for all.
Hospitality Expects Long Term Restrictions as Price of Reopening
Hospitality operators expect increased safety measures such as social distancing and wearing masks to be trading requirements for the foreseeable future, according to new research by workforce management specialist Bizimply.
While most operators anticipate strong pent-up demand from customers once restrictions are lifted, the Bizimply survey also flags up significant concerns about staffing. The ‘double whammy’ of the impact of Brexit on the labour force, and employees who may still have concerns over workplace Covid safety, means 40% of operators do not expect to have enough suitably trained or experienced staff when the business is fully able to reopen. Responses were received from owners and senior managers of hospitality businesses including restaurants, bars, coffee shops and hotels, both managed and franchise-based, representing hundreds of outlets in total.
Bizimply CEO Conor Shaw says: “Our survey shows that the UK hospitality sector is more than ready to meet the government half-way when it comes to lifting the lockdown. The vast majority, 93%, say they expect measures such as social distancing, wearing masks while not seated, and hand gel on entry, to be a trading requirement not just when they reopen, but in the longer term.
“That tallies with consumer surveys which show that the public will also expect such measures. It’s clear that both operators and their customers are pragmatic. They accept that the world has changed over the past year, and the safety measures needed to prevent Covid spreading are going to be part and parcel of everyday life, including going out to eat and drink, for some time to come.”
With the survey showing that 86% of operators anticipate high demand from customers once hospitality fully reopens, Shaw adds: “People are very ready to get back out into hospitality, and are prepared to live with increased restrictions if that’s the trade-off. That sends a strong message as the Government continues to show caution about the timetable for reopening.”
The majority of businesses would also welcome a vaccine passport scheme, in the form of a physical card or a simple online check, to enable them to ensure customers have been inoculated against Covid.
However, the issue of vaccine passports is more complex, believes Shaw. “Although 53% of operators told us they would welcome a card or online check to confirm a customer has been vaccinated, that still leaves a substantial minority that are uncertain.
“Politicians continue to debate the issue, but it would take time to set up a reliable system. Many hospitality operators are looking to the government for a clear message now on whether passports are the right way to go, so they can prepare.”
Despite expectations that employment will rise over the next year, the Bizimply survey also flags up a range of operator concerns about staff. Shaw says: “The Office for National Statistics has reported that many EU workers have returned to their home countries over the past year as hospitality shut down due to the pandemic, and the reality of Brexit became apparent.
“The vast majority of operators, 93%, believe that Brexit will have an impact on the availability of staff, and 40% do not expect to have enough suitably trained or experienced employees to call on when the business is able to fully reopen post lockdown. Businesses expect some staff to be reluctant to return to work, due to factors such as their perceived Covid risk, despite the measures operators are putting in place.
“While hospitality has relied on the skills and experience of migrant EU workers for a long time, and it’s not always possible to make a simple like-for-like replacement with UK staff, the survey shows very clearly that operators are taking the steps needed to address the issues they face,” says Shaw.
Findings of the survey include:
• 80% of operators have made or are planning changes to their recruitment and training to address post-lockdown staff requirements;
• 86% are investing in reassuring staff through Covid-specific measures such as PPE, and increased hygiene;
• 40% expect to increase staff numbers when they can fully reopen;
• 25% are increasing investment in technology to better manage their staffing.
Shaw sums up: “Competition for the best people will be as strong as ever. The minimum response is for hospitality operators to ensure they have a workforce plan in place that will enable the business to fully reopen once restrictions are lifted, supported by robust systems that give them a clear understanding of their business in terms of labour requirements and costs.”
Prepare for Recovery by Offering Outstanding Customer Service with Online.
Outdoor Hospitality Bookings Soar as Millions Book Visits for the First Two Weeks of Reopening
Pubs and restaurants have reported a huge surge in bookings for outdoor tables after the announcement of hospitality reopening from 12 April, as new research from Caterer.com finds more than 4.7 million people are making reservations for the first two weeks.
Following four months of lockdown and time away from hospitality venues, the survey reveals that as much as £2.4bn could be injected into the economy in the first month that it reopens, with people planning on splashing out an average of £167 in the first month to make up for lost time.
Since the 22 February roadmap announcements, Caterer.com has seen job adverts increase by 84% as businesses prepare to meet demand from people eager to make up for lost time. Data from the job board reveals that there has been a significant rise in vacancies right across the UK, with the fastest growth in vacancies coming from Wales with a 103% increase between February and March, followed by the South East (97%), North West (94%) and South West (83%).
For many people (34%), pubs, restaurants and hotels being closed has been one of the hardest things about lockdown, and over a quarter (29%) of people say not being able to experience hospitality has affected their mental health.
Neil Pattison, Director at Caterer.com, hospitality’s online recruitment solutions partner, said: “Hospitality businesses have been unfairly subjected to tighter restrictions than other sectors throughout the pandemic and our research shows just how eager people are to get back into hospitality venues. As we’ve seen over the last year, businesses have gone to great lengths to ensure the safety of customers. Many have remodelled to allow for more outdoor space enabling them to remain open within safety guidelines. The recovery of the sector is crucial to the wider economy of the UK and at Caterer.com we’re already seeing green shoots appear with more jobs being advertised as businesses gear up for reopening and the prospect of a busy summer – we’re not alone in saying we can’t wait for the sector to reopen!”
The insights from Caterer.com reveal that the majority of people in the UK (56%) think that hospitality venues have higher cleanliness and Covid-19 safety precautions than other industries and public spaces, such as supermarkets. Nearly a third (32%) of people think the hospitality sector should be allowed to re-open indoors sooner than the 17th May.
Hospitality businesses around the country are reporting an influx of bookings ahead of the public being able to get back to enjoying hospitality. Matt Fleming, MD of Vagabond Wines, commented: "Vagabond has seen a fantastic response since we reopened our bookings. Not only have we already booked out a large percentage of all our spaces, but the uplift in pre-booked packages and wine flights has been very strong too, suggesting a large number of people are ready to get out and have exciting experiences once again."
Safety of Customers and Staff a Priority for Hospitality Sector
UKHospitality has welcomed the Government’s commitment of additional funding to provide safer public spaces for women. The trade body has also reiterated its, and the sector’s, commitment to ensuring that the safety of customers and staff is rigorously upheld.
UKHospitality Chief Executive Kate Nicholls said: “We welcome the fact that this issue is being addressed by the Government and that funding will be forthcoming to ensure that spaces are made safer for women.
“It should be highlighted that the hospitality sector already works incredibly hard on this issue. We have numerous partnership schemes in place to ensure the safety of customers on a night out. Schemes like Best Bar None, Pub Watch, Drinkaware and Ask for Angela have been adopted by the sector to ensure that safety is central to the running of businesses. UKHospitality is a signatory to the Women’s Night Safety Charter and we engage regularly with the Mayor of London’s Office to make sure that nights out are fun and safe. The welfare of our customers and our staff is a priority.
“In recent years, our businesses have invested considerable sums of money in their security and surveillance to ensure that venues are safe. This must be matched by resources on the streets from local police and other civic bodies.
“That does not mean we can relax our efforts. We must continue to work hard to provide safe environments and we will regularly review our practices to see how we can be even more effective. We will be in contact with Thames Valley Police to learn from their experiences from Project Vigilant and share best practice throughout the industry, and to share our own expertise. It is critical that all stakeholders are involved in making our streets safe for everyone.”
For more information on hospitality partnership schemes please visit
Why Vaccine Passports Could Prove Much More Than A Customer Admission Problem
CLH News
Apr/May 2021
Requiring customers to produce vaccine passports could also create employment law problems for hospitality businesses, according to Emma Swan, head of commercial employment at Forbes Solicitors.
Stonegate’s We Love Sport Announces Fan Of The Year 2021

In January 2021, Stonegate’s We Love Sport and their partner, Coca-Cola, went on a search to find the Football Fan of the Year. After more than 20,000 votes, Paul Squire has been voted the 2021 We Love Sport Fan of the Year. 2020 proved that football is nothing without fans, and the Fan of the Year competition recognises football fans that have gone above and beyond to support their club throughout the last 12 months in what has been a truly difficult year for the beautiful game. The winner received £500 in cash and an Ultimate Football Fan Experience, courtesy of Coca-Cola.
Stephen Cooper, Stonegate’s Sports Marketing Manager, said: “With pubs being closed for the majority of 2020, it was more important than ever that We Love Sport continued to unite sports fans. We wanted to recognise those incredible fans that had gone above and beyond to support their club throughout the last 12 months in what has been a truly difficult year for the beautiful game. It could have been someone who has helped their club out in the community, that season ticket holder who never misses a game but has been forced to the confinement of their living room, or maybe they have had a tough year and their beloved club has got them through the difficult lockdown months.”

This year’s winner, Paul, is the father of Finley ‘The Mighty Fin’ Williams. Together they aim to raise as much money as possible to grant magical wishes to families. To date, they have raised over £10,000 for the Make-A-Wish Foundation. Fin has a rare syndrome called Mowat-Wilson Syndrome and Hirschsprung’s Disease, but he gives joy to thousands via his Facebook page ‘The Mighty Fin’s I Have No Voice’. Nothing makes Fin happier than watching his beloved Seagulls and, before the COVID-19 pandemic, Paul would travel 600 miles from North Wales to Brighton to ensure they saw the matches.
Paul said: “The Mighty Fin comes alive during our trips. Every second is a moment to savour for me. I will probably outlive my son and so making beautiful memories is very important to me. We want as many people as possible to have the opportunity to create as many beautiful memories with their lovely children too! Many people can’t afford to do that, so that is where Make-A-Wish UK comes in.”
Pubs Lose Sales Worth 12 Million Pints and 3.6 Million Meals Due to Second Mother’s Day Lockdown in a Row

BBPA highlights the ongoing damage lockdown is causing to pubs and the communities they serve, and reiterates that the Government must ensure pubs fully reopen on June 21st.

The British Beer & Pub Association (BBPA), the leading trade association representing brewers and pubs, has today revealed that pubs will miss out on sales of 12 million pints and 3.6 million meals due to the ongoing restrictions covering this Mother's Day. Pubs across the UK will remain closed and unable to serve takeaway beer on Mothering Sunday – which falls on Sunday 14th March this year – because they remain in lockdown until April at the earliest, when they will hopefully reopen outdoors only.

Pubs were also forced to shut for Mother’s Day in 2020, which fell on 22nd March, shortly after the Prime Minister announced the first UK lockdown on March 16th. It means that for the second year in a row, families have not been able to celebrate the occasion in their local with a pub meal.

According to the BBPA, this Mother’s Day alone will result in the trade losing out on £83 million worth of sales which would have been crucial to the sector's recovery. More importantly though, it said, was the fact that thousands of communities across the UK were unable to celebrate Mother’s Day with loved ones in their local for the second year in a row. The BBPA said it made it all the more important that pubs, following limited outside opening in April and indoor opening in May, can trade fully from June 21st as stated in the Government’s roadmap for reopening. The news comes as new research by the think tank Localis revealed that pubs have a vital role to play in the COVID-19 recovery and the Government’s own levelling-up agenda, but that to do so they must reopen fully by June 21st.

Despite being unable to open and serve their communities at the pub, operators have done all they can to ‘Save Mother’s Day’ and provide the pub experience at home this Sunday. Oakman Inns, which has a number of venues across the Home Counties and Midlands, has launched Oakman At Home – a range of ‘makeaway’ meal boxes for Mother’s Day. Fuller’s, which has a number of pubs across London and the South, has created a Mother’s Day Sunday Roast Box Feast, as part of the ‘Fuller’s At Home’ range. The Box Feast includes all the ingredients and food necessary to prepare a quality roast dinner – the next best thing to having one at the pub.

The BBPA also said pubs across the UK were offering takeaway meals and cook-at-home kits to enable Brits to get the Sunday pub roast experience at home for Mother’s Day. It encouraged people to ask their local if they were offering such a service.

Emma McClarkin, Chief Executive of the British Beer & Pub Association, said: “A pub Sunday Roast on Mother’s Day is one of life's simple pleasures, yet for the second year in a row, families will not be able to celebrate the occasion at their local. “The pub is the place where we connect and spend quality time with one another, so it is a great shame they are not open for Mother’s Day again. “From a trade perspective, it does mean our pubs will miss out on some much-needed support too. On a typical Mothering Sunday they would expect to sell some 12 million pints and 3.6 million meals. That’s £83 million in lost trade. “It is becoming all the more clear that the Government must ensure all our pubs are open and able to trade fully from June 21st as indicated in the roadmap.”
The Source Trade Show Will Take Place In June 2021 with NEW Outside Space

The Source trade show has taken place at Westpoint, Exeter annually for more than 10 years, and will be one of the first trade shows for food and drink in 2021, when it takes place on the 8th–9th June. Attracting buyers from retail, hospitality and catering, it showcases the best the South West region has to offer, from artisan food & drink to essential goods and services. To meet government rules for numbers allowed at such events, Hale Events, the show’s organisers, are, for the first time, complementing the space in the hall with outside exhibiting space and features. “We are delighted that government regulations will allow this trade show to happen. It will be our first for more than a year!” says Mike Anderson, MD of Hale Events. “In order to enable plenty of social distancing and comfort for our exhibitors and visitors, we are extending this popular trade show. Catering and Show Features will be outside in 2021, alongside a brand-new outside area for exhibitors, which will extend the show and enable more people to
take part, and to attend.” Mike continues: “We know that suppliers, as well as everyone involved in food retail and hospitality, are looking forward to getting back together to network and find out what is new after a year of isolation. Source can help stimulate this sector, showcase innovation and provide a platform for producers.” Outside space at the Source trade show will be provided for companies who have their own facilities, such as trailers, gazebos or other structures, as well as for companies who need covered space provided. “We look forward to welcoming the Source trade show back to Westpoint in June. We will be working in collaboration with Hale Events to deliver a safe show which meets the prevailing government guidelines,” said Richard Maunders, Westpoint CEO. For more information about the show, to book a stand, or to register to attend, please call 01934 733433, follow the show on Twitter @sourcefooddrink, or visit.
New Guidance Urges Local Authorities To Deliver Financial Support To Forgotten Businesses
New guidance aims to ensure previously ineligible businesses are supported with targeted grant support.

UKHospitality has welcomed updated guidance on the Additional Restrictions Grant aimed at supporting businesses.

The new instructions for local authorities published today encourage them to ensure businesses impacted by the pandemic but not eligible for the Restart Grant Scheme are supported. This includes contract caterers, which tend to operate out of the properties of others, events businesses and the thousands of suppliers to the sector.

The trade body has also urged local authorities to ensure that grant support finds its way to hard-hit businesses so that the whole of the hospitality sector is properly supported.

UKHospitality Chief Executive Kate Nicholls said: “This is another pragmatic move by the Government. It is reassuring to see that Westminster understands that the impact of the COVID crisis on hospitality goes deeper than closed venues on high streets. The pain is being felt throughout the totality of the sector.

“Less visually prominent businesses like suppliers, catering businesses and event spaces have been hit just as hard. The support has to make its way through to these businesses as well. Too many of our members have reported that they have struggled to access grant support and are at risk of being left behind. This must end.

“UKHospitality has been urging the Government to acknowledge this and highlight this point to local authorities. It is good to see our voices being heard. Local authorities must now take this guidance to heart. They must act to ensure valuable businesses in their areas receive the support they desperately need.”
Hospitality Action launches ‘To Hell and Back’ Fundraising Challenge

Hospitality Action has launched the ‘To Hell and Back’ challenge to help raise money for vulnerable hospitality workers. Hospitality Action is asking people to run, cycle or walk as many miles as they can between 10–18 June 2021. In total, participants are aiming to cover 30,693 miles to virtually visit four actual hells on earth: towns called Hell in California, Michigan and Norway, and Hell Creek in Montana. Hospitality Action was launched in 1837 and aims to offer financial, physical or psychological support services to help people from hospitality get back on their feet. Participants are also encouraged to post snaps of their route and their endeavours across social media, tagging friends using #HellandBack, @hospaction.
What Makes Scratchings So Special? The Relationship Between Scratchings and the Pub

It’s official – pork scratchings are the ULTIMATE PUB SNACK! In a recent poll by the Daily Mail, pork scratchings topped the list when 2,000 pubgoers were asked to name their favourite pub snack.* Matt Smith explains, “Pubs hold a special place in British hearts as somewhere to relax, shake off the stresses of daily life and enjoy yourself. Pork scratchings have long been part of that experience. When consumers talk about scratchings, they often recall the very first time they tried them in the pub, often evoking quite vivid and emotional memories of a close friend or relative who first encouraged them to try one. A pub without scratchings really isn’t a proper pub!” Pork scratchings are undisputedly linked with drinking alcohol – 83% of pork snacks are consumed with a drink1 – and drinking is associated with the pub. The combination of a pint and a scratching is one that many have grown up with and see as fundamental to the pub experience. Pork scratchings are seen as an ‘adult’ snack, being much less likely to be consumed by children than other savoury snacks – which means that they are naturally at home in the pub. Few parents will offer their children scratchings until they are in their teens and hence ‘the first time’ is within many people’s memory. This then means that there is an intrinsic link between scratchings, the pub and good times.
WHO EATS SCRATCHINGS AND WHY?

Contrary to popular belief, pork scratchings are not the preserve of ‘old blokes’:
• 44% of consumers are women1
• 63% are under 45 and 41% are under 351
• A third of pork snacks are consumed by AB consumers and over half are consumed by ABC1 consumers1

These consumers are under no illusion that pork scratchings are anything but a treat. The fantastic taste and fond memories mean that we have a deeper love of scratchings than ‘ordinary’ crisps and snacks. Scratchings are the perfect partner to booze – as Smith explains, “There is no matching a scratching.” Matt explains, “In every piece of research we have conducted, taste is always the No1 reason for purchase as consumers recognise savoury snacks are a treat and so have to be ‘worth the calories’. This is underlined in pork scratchings, where consumers crave the unique taste so much that 1 in 5 people will simply not buy another snack if they are not available1, making them a ‘must-stock’ item!”

RE-OPENING OF HOSPITALITY SECTOR – MAXIMISING PROFITS THROUGH SNACKS

The phased re-opening of the hospitality sector can’t come soon enough after months in lockdown. However, it is unlikely to be plain sailing as it’s unclear whether people will flock back or be wary about socialising in groups – and that’s before you factor in the uncertainty of our great British weather (a real consideration until mid-May, when we can all go indoors)! Many operators are financially stretched and rightly cautious, especially when it comes to the cost of re-stocking. As a major snacks supplier, with a range of award-winning brands tailored to Foodservice such as REAL Handcooked crisps and the leading pork scratchings brands – Mr Porky and Midlands Snacks – Tayto understands these challenges. Matt Smith, Marketing Director for Tayto Group, explains, “Conserving cash and maximising sales will be key to hospitality venues as they re-open their beer gardens and, eventually, their doors. Snacks provide a brilliant opportunity to increase sales. However, we know that less than 20% of people buy a savoury snack with a drink, and the main reason for customers not doing so is ‘I just didn’t think about it’. Prompting a purchase by prominently displaying snacks and getting staff to offer them can make all the difference. Given most people either have no idea what they pay for pub snacks or expect to pay over £1 a pack, venues can easily make over 50p profit on each bag of premium crisps or scratchings they sell.”

Premium product fact – 82% of pub goers eat handcooked crisps and are willing to pay up to 30% more for premium crisps over standard products3.
CATEGORY ADVICE / MERCHANDISING TIPS

Consumers are looking for brands they can trust and re-stocking with proven sellers is key! Not all scratchings are the same and Tayto has a range of award-winning pork snacks to suit every pub:
– Midland Snacks Traditional Scratchings is our best-selling pubcard – hand cooked scratchings using a recipe that has stood the test of time.
– Mr Porky Hand Cooked Scratchings – from the most recognised name in scratchings, this Great Taste award-winning scratching is set to become the new benchmark for a premium scratching.
– Mr Porky Crispy Strips – a lighter bite, akin to crispy bacon rind, for those who want all the taste of a scratching but a less hard texture.
Given that snack sales increase by up to 80% when they are more visible4, the recent growth in pork snacks can be maximised by making use of eye-catching pubcards behind the bar and in areas of high footfall. This can help prompt those unplanned snack purchases – providing an excellent opportunity to drive high-margin, VAT-free sales.
CLOSE THE DEAL Snacks offer a simple route to incremental sales, if customers are reminded to consider buying them alongside their drinks. Thankfully there are a number of simple tips for venues to achieve this:
– Stock a range of proven, premium snacks that have been developed for the licensed sector
  o Award-winning pork scratching pubcards from Midland Snacks and Mr Porky
  o Premium REAL Handcooked crisps which are exclusive to Foodservice
– Put your snacks where customers can see them
  o Pubcards behind the bar
  o A full range of crisps on the bar
  o Include them in menus or apps for table-orders
– Get your team to prompt purchase
  o A simple ‘would you like some crisps or pork scratchings with that?’ is all it takes.
HOW IS TAYTO SUPPORTING THE ON-TRADE?

STOCK UP FOR LESS

Having heard the hospitality sector’s concerns about the cost of restocking, Tayto has lined up a range of aggressive promotional offers now, and over the coming months, that will enable venues to stock up for less on a proven range of bar snacks. In addition to strong price promotions on our premium REAL Handcooked Crisps and best-selling Midland Snacks and Mr Porky pork scratchings, our top-selling flavours of REAL Handcooked crisps will be available in mixed cases for a limited period. These flavours have been developed specifically for licensed venues and each box includes a Great Taste award-winning flavour. Visibility of snacks is vital – snack sales increase by up to 80% when they are more visible4 – so, to help maximise your sales of premium crisps, Tayto has FREE POS available at realcrisps.com/trade
UPCOMING MARKETING CAMPAIGNS

As the leading supplier of pork snacks with the top three brands (Mr Porky, Midland Snacks and Real Pork Co), Tayto is launching the biggest ever campaign for pork scratchings with the strapline “There’s no matching a scratching”. Aimed at engaging both new and lapsed consumers, the campaign sets out to hero the unique appeal of scratchings and how other snacks just don’t come up to scratch. The campaign will include national radio coverage through a partnership with TalkSport, as well as a heavyweight regional campaign in the Midlands (where sales of scratchings are particularly strong) across OOH, radio and digital. The campaign will support the ongoing digital activity that is already bringing more people into the Mr Porky brand and the wider category. This investment in the category is unprecedented and could only be delivered by the category leaders.

SOURCES: 1. Norstat, Jan 2020. 2. Norstat, Mar 2019. 3. CGA Strategy Research 2016/17. 4. HIM! Foodservice Research 2016.
What’s Keeping Operators Up At Night: New Heineken UK Research Reveals The On-Trade’s Biggest Challenges and Priorities For 2021
To better understand operators and therefore provide the right support they need, HEINEKEN has commissioned research into the on-trade’s biggest concerns, challenges and expectations ahead of reopening. This new insight – provided by real operators managing wet-led and food-led venues across the UK – will assist HEINEKEN in delivering relevant support at the right time to give on-trade businesses every chance of success in 2021. Commissioned in December 2020, HEINEKEN’s research revealed that the three main challenges facing operators are attracting new customers (20%), retaining current customers (18%) and managing costs (17%). Recruitment and retention of staff was also a key concern for food-led operators.

HEINEKEN’s research also revealed that to futureproof their businesses, operators are looking at upgrading their outdoor spaces and bolstering their food offering.

Driven by these research results and feedback from HEINEKEN’s own customers and its Star Pubs & Bars, HEINEKEN created the Benefits Bar. HEINEKEN has built and gathered all the benefits of simply working together into this virtual local, serving up the products, services and ideas operators need to help them run a profitable pub business. Offering a flavour of the vast pub expertise and insight available to its customers, HEINEKEN has shared top-line advice for futureproofing your business, in line with the key concerns outlined in the research.

HOW TO ATTRACT NEW CUSTOMERS

A digital presence is the best way of helping people discover and visit your venue – especially when 87% of people search online before they choose where to spend their money. If you don’t already have a website, or feel yours may not be working hard enough for you, we recommend Useyourlocal, which is accessible at a discounted rate via the HEINEKEN Buying Club – one of many tools that sit within the HEINEKEN Benefits Bar. Useyourlocal can help you create a great-looking website in just 20 minutes! Once your website is set up, your regulars can like or follow your pub to receive newsletters and updates to encourage repeat visits. The service also integrates with your social media channels and the team can even help with maintaining your site on an ongoing basis – all geared towards keeping your members up to date with planned events and attracting new customers.

As restrictions start to ease, your online presence becomes no less important. After months of remaining at home watching Netflix, the ‘staycation’ is likely to be a big part of 2021. With over 200,000 visits per month, Useyourlocal.com offers a great opportunity for potential customers, locals or people visiting the area to discover your pub based on their location or search criteria.

When used effectively, your social media channels can not only tempt existing customers back but help reach new prospects. Drive awareness of your offering, focusing on good food and outdoor space – the top two traits consumers will be looking for in an on-trade venue. Even before the government announcements, 53% of people said they were more likely to visit a pub if it has a beer garden, increasing to 67% among 25 to 34-year-olds. Capture attention by posting regular, relevant content with images and video and keep your tone light, upbeat and chatty – just like you would if you were talking face to face. Via The Pub Collective, HEINEKEN offers support on what type of content to share (and when) to connect with customers, as well as free social media training for operators, in partnership with our Star Pubs & Bars.

HOW TO RETAIN CURRENT CUSTOMERS

Your customers are looking forward to a perfectly poured, quality pint from their local. Reminding current customers of all the reasons they love your venue and showcasing the ways in which they will be kept safe will boost confidence and encourage them back into your business – from displaying social distancing signage and sanitising tables in between covers, to hosting events like curry or quiz nights. Establishing these lower-tempo ‘Rhythm of the Week’ activities will help drive footfall and loyalty during quieter periods, rather than relying so heavily on the weekend trade. This led to the creation of POS Direct, so our customers could access professional point of sale and digital assets, from safety and reopening POS to personalised events and promotions. You can also offer your venue as a great remote working space, with speedy Wi-Fi and a business lunch offer included, to bring in more customers. Wireless Social is a tool that sits over your existing Wi-Fi network and allows you to capture customer data then retarget them based on their demographic, likes, interests or how often they visit – helping you drive footfall. Through the HEINEKEN Buying Club you can receive a 55% discount on the annual fee of Wireless Social, helping you drive more traffic for less money.

Recent HEINEKEN consumer sentiment research shows 42% of consumers are excited to try new drinks brands and 37% plan to make their on-trade visits more special by choosing more premium drinks and food. Your customers are after a quality experience, so taking time to properly train your staff is an investment worth making – especially with new starters and people who have been on furlough and so may be less confident. From delivering the perfect serve and new hygiene protocols, to recommending dishes on your menu or great drinks pairings, this will help enhance your customer experience and encourage them to return again and again.

HOW TO UPGRADE YOUR OUTDOOR SPACE

First impressions are everything; you can drive footfall and promote dwell time with a smart, clean environment. Simple housekeeping like pressure washing the path up to your entrance or adding flowers and benches will make your space attractive to passers-by and show you care about safety and hygiene. It’s also worth investing in coverings, lighting and heating to weather-proof your space and encourage people to stay longer. To help finance these up-front costs – which pay dividends – you can access unique discounts from top suppliers like Woodberry (quality outdoor furniture) via the HEINEKEN Buying Club. Our Facebook group, The Pub Social, is also a great source of recommendations from other operators. Ensure you promote your great outdoor space on your website and via social media. Customers won’t be visiting if they don’t know it’s there! Your garden should feature at least twice in your nine most recent social posts, alongside relevant hashtags like #beergarden. Video content is key and attracts higher engagement than still photos, meaning your posts will get pushed to the top of your followers’ newsfeeds – increasing your chances of attracting customers.

This is just a taster of the operational expertise and advice shared by our Star Pubs & Bars. Through the HEINEKEN Benefits Bar, we are able to share further support for maximising any size of outdoor space, ensuring it remains compliant, and real experience and insight from Star Pubs & Bars with our customers. All easily accessible via HEINEKEN Direct or your HEINEKEN representative.

HOW TO BOLSTER YOUR FOOD OFFERING

A venue’s food offering has become a major motivator for consumers when choosing where to go. When building your food menu, start by thinking about your customers, local competitors and food trends, and plan your commercials (like setting dish RSPs and an overall target dry GP). Providing quality, locally sourced options and prioritising al fresco dining can increase the number of covers, boost profit potential and open up opportunities for events like barbecues. Challenge yourself to expand your menu and offer a point of difference – for example pulled pork or loaded hot dogs beyond simple burgers – which also adds a premium to your pricing. Wet-led venues without a kitchen could consider partnering with a local street food vendor to create a new, exciting experience. Whatever you decide to do, it’s important to ensure you deliver consistently high standards and utilise your digital presence to promote your offering to customers. In trading with HEINEKEN, you will have access to the HEINEKEN Benefits Bar and its insights, tools and advice. We are proud to share the experiences of our Star Pubs & Bars with our customers, such as how to plan a new menu, adjust an underperforming one and encourage trade-up – giving you everything you need to deliver a strong food offering in your own venue.

HOW TO MANAGE COSTS

From equipment to utilities, we understand running a pub business isn’t cheap – and now more than ever you’ll want to make financial savings where you can. Simple measures like introducing more energy-efficient light bulbs, better insulation for boilers and pipes, and using more seasonal products in your menus can all help to chip away at running costs. Reviewing your supplier base is always an opportunity to ensure you continue to get value for money, although this can take time to fully research and negotiate with suppliers. That’s why we created the HEINEKEN Buying Club: to offer our customers exclusive discounts and better commercial terms with leading suppliers to help manage costs. Sharing the combined buying power of HEINEKEN and Star Pubs & Bars saves an estimated £5,000 per pub per year, as well as time negotiating with suppliers. Whether it’s utilities, waste disposal, catering equipment or marketing tools, the HEINEKEN Buying Club has you covered. For example, you can get up to a 32% discount with wholesale foodservice supplier Brakes, 30% off catering equipment at Nisbets and 25% off outdoor furniture with Woodberry. Also, you could receive an average annual saving of £1,500 with Biffa and save 10% on business insurance.

HOW TO RECRUIT AND RETAIN STAFF

HEINEKEN and Star Pubs & Bars recommend Remit, one of the UK’s top providers of government-funded apprenticeship programmes, whose recruitment and training solutions help businesses effectively and affordably. Under the government’s #PlanforJobs, you could be eligible for a £3,000 cash injection for every apprentice you take on from 1st April 2021. Per the revised Kickstart Scheme, you could also receive £1,000 for every new hire – no longer subject to taking on 30 staff. Well-trained staff are typically happier and more loyal. However, in an industry with traditionally high staff turnover, training can be costly and time-consuming. Hello BEER is a staff training app providing courses in beer and cider quality from cellar to serve, ultimately helping you to deliver a great customer experience. Priced from just £2 per learner per year, the app is easily accessible for all staff and could help you tap into an additional £25,000 worth of profit – plus it’s available free via the HEINEKEN Buying Club.

While the past year has seen numerous challenges, the research revealed that the suppliers delivering great customer service, regular communication and flexibility came out on top. For 99% of respondents, customer service is the most important factor in choosing a drinks supplier, and it’s important to us that we deliver on this. Our UK-based contact centre, online portal and HEINEKEN representatives are all available to give practical support in a way that works best for you. Whatever your request, whatever your query, we are here to help. For more information on The HEINEKEN Benefits Bar and growing your business, visit:
British Free Range Liquid Egg

Range Farm Liquid Egg products are produced from fresh, free range British eggs. Available as Whole Egg, Egg Whites and Yolk, supplied in pallecons, BIB and cartons. To start cooking with ease, call 01249 732221 or email Adrian.Blyth@stonegate.co.uk
Creating Kerb Appeal: Food Safety Expert Gives Advice On Outdoor Dining Ahead Of Reopening
With the government gradually easing lockdown restrictions across the UK, it won’t be long until restaurants and pubs are once again rushed off their feet.
DIGITAL FOOD SAFETY
To limit physical human interaction, businesses should provide visitors with an online booking system, with table numbers clearly assigned at the point of booking. This reduces face-to-face contact and also puts a halt to people turning up unannounced.
Time at the Bar as Jobseekers Pin Hopes on Hospitality

Struggling jobseekers are pinning their hopes on getting work in the hospitality, beauty and retail sectors, as vacancies start to appear in industries expected to unlock soon, according to new research by job site Indeed. A year on from the first UK lockdown, job opportunities in the parts of the economy hardest hit by the pandemic, including in hospitality and food, remain almost 70% down on pre-pandemic levels, but there are signs postings are picking up in areas that the Government roadmap suggests will reopen soon.

The early evidence is that announcements of the UK’s roadmaps out of lockdown prompted employers to post more new jobs as they readied themselves to reopen. The sectors chosen to unlock first are among those seeing the largest growth in new postings. Postings for sport jobs have risen by 44%, ahead of outdoor sports resuming in England from March 29, while beauty and wellness roles have increased by 39% and education and instruction jobs are up by 27%.

New Job Postings Rising in Hardest-Hit Sectors

Jobseeker interest has started to grow in sectors which are set to reopen in the coming weeks and months. Interest in bar and waitressing jobs has grown by 98% and 60% respectively in the past two weeks, making them two of the top three fastest-rising search terms on the Indeed platform. Pubs and restaurants are scheduled to begin opening, albeit only outdoors, from April 12 in England and April 26 in Scotland. There is also surging interest from jobseekers in beauty therapist and hairdressing roles, and jobs in retail and gyms, which are expected to reopen on April 12 in England and Wales, and from April 5 in Scotland. Simultaneously, interest in jobs which were a lifeline for jobseekers during the depths of the pandemic, such as in supermarkets, warehouses and delivery driving, is falling fastest. Interest in delivery driver roles has fallen by 13% in the past fortnight alone, while demand for supermarket jobs is down 11%.

The Return of the Hospitality Industry

Despite the promising increase in job postings, many jobseekers will still face stiff competition. The sectors seeing the biggest jump in interest from candidates are those that require few formal qualifications, such as customer service, administration and retail. However, job postings remain much scarcer in these sectors than before the pandemic, raising the possibility of a squeeze for available jobs.

However, there remain areas of the labour market which are not being flooded with applicants. Sectors requiring high qualifications, such as software development, healthcare and engineering, still receive relatively few clicks per posting, and roles remain relatively difficult to fill.

Jack Kennedy, UK Economist at global job site Indeed, comments: “Roadmaps out of lockdown and the success of the vaccine rollout are building optimism that the labour market will bounce back, as the release of lockdown unleashes pent-up demand for jobs in the hardest-hit sectors, including beauty, gyms, retail and hospitality.

“Although consumer-facing businesses are hoping sales will be brisk immediately after lockdown restrictions ease, particularly to people who have managed to save money during the pandemic, employers will need to find ways to sustain this demand.

“The pandemic has accelerated the trend towards flexible work, and office-based workers may return to the workplace less frequently, which will impact consumer-facing businesses in cities that rely heavily on commuter footfall. It also remains unclear when restrictions on international travel will end, meaning businesses that rely on tourism will need to appeal to UK customers more than usual.”
Your FREE Trading Standards Legislation Guide

The coronavirus (COVID-19) pandemic has brought many challenges to the hospitality sector, with some businesses, particularly pubs and restaurants, forced to radically change the way they operate. Indeed, the hospitality sector’s economic output dropped as much as 92% (source: UK Parliament) between February and April 2020 during the first lockdown; however, the sector bounced back when restrictions were relaxed last summer, giving hope of a similar recovery this year. Now, one year on from the first national lockdown and with the Government’s roadmap out of lockdown in place, we look to exit the pandemic measures and return to some sense of normality. It is clear that the public will still hunger for their favourite restaurant, or a trip to the local pub that they’ve been missing for months. This gives the sector the potential for ample opportunities, recovery and even growth in 2021.
For a smooth return, business owners must get regulation right from the outset. Streamlining measures to secure consumer safety enables business owners to build their business without succumbing to regulatory hiccups. Fortunately, the Chartered Trading Standards Institute (CTSI), working in partnership with the Department for Business, Energy and Industrial Strategy (BEIS), created Business Companion, which offers free, impartial legal guidance for businesses, written by experts with years of experience. The Retail Guide Food and Drink annex brings together the tasks and measures that businesses will need to put into place prior to the return of the hospitality industry and beyond. Download your free copy from the Business Companion website. See the facing page for details.
How to Re-onboard Furloughed Employees…
By Paul Sleath, CEO at PEO Worldwide

On Wednesday 3rd March, the UK Government announced that the furlough scheme will be extended until September 2021.
This news will no doubt be welcomed by businesses across many different industries, as the furlough scheme has helped keep millions of employees’ jobs secure and avoid mass redundancies over the past 12 months.

However, the extension shouldn’t be a cue for employers to kick back and relax. Now more than ever, it’s crucial to attend to the wellbeing of your furloughed employees and ensure they’re well prepared for their return to the workplace. So, if you’re thinking of bringing your staff back to work, it’s essential to do so in the right way and at a time that’s right for both your employees and your business, whether that’s next week or in September.

HOW DO I DECIDE WHO TO BRING BACK?

There’s no prescribed way to bring employees back to work, but it’s advisable to give reasonable written notice of at least 48 hours. Remember, some staff members may still have children at home who are unable to go to school, and may need to arrange childcare. In an ideal world, you’d want to bring back ALL employees on their previous terms and conditions. However, this might not be possible yet, particularly if they cannot work from home and your office or facility isn’t big enough to allow for social distancing.

Realistically, many companies who shut down entirely also won’t be at full operations as soon as they reopen (hence why furlough is being extended beyond the hopeful return to ‘normality’ at the end of June). In these circumstances, you’ll have some difficult decisions to make about who to bring back first.

During this process, you should set out clear criteria for recalling staff. Will the decision be based merely on business need, or will you consider individual circumstances? It’s important to be fair and inclusive when making your decision, and to document your reasons (such as seniority or operational needs) to mitigate the risk of potential discrimination claims.

So, once you’ve decided who to bring back, what’s the best approach to handling the re-onboarding process?

WELCOME THEM BACK AS YOU WOULD ANY EMPLOYEE

Start with an offer letter which states all the information they need to know. The employee needs to know what’s changed (if anything) when it comes to their position, salary and benefits. For example, have wages been reduced across the board? How does being on furlough affect their sick leave or annual leave entitlement? You should also provide details about how you will be ensuring workplace safety and staff wellbeing. As an employer, you also need to understand that transitioning back to work after an extended period can come as a shock (particularly under these circumstances), so it’s essential to allow a degree of flexibility.

INTEGRATE THEM BACK INTO THE WORKPLACE CULTURE

Employees should feel they are returning to a supportive and caring environment. However, it’s also vital to recognise that the pandemic may have had an unequal impact on your workforce. Some people will have been furloughed (potentially with full pay, depending on which country they are in) while others might have had increased workloads to make up for staff shortages. These discrepancies could result in some negative feelings creeping into employee relations, so it’s important to nip any potential conflict in the bud. As an employer, you should look for opportunities to reintegrate employees into the team. For example, you could organise team-building exercises over a video call, virtual quiz nights or, depending on the size of your team, a socially distanced BBQ. You should also encourage all managers to have one-to-one meetings with every employee upon their return (even if it’s done virtually).

PROVIDE TRAINING OPPORTUNITIES

While on furlough, employees may have missed out on crucial training, so it’s important to get them back up to speed. Make sure you provide them with the tools and time they need to complete their training (this may have to be done online if they’re still working from home). If remote working isn’t possible in your industry, it’s your responsibility as the employer to create a safe work environment and promote social distancing. Re-onboarding should include efforts to educate staff in the various guidelines available, which will vary country by country.

OFFER REASSURANCE AND SUPPORT WHEN NEEDED

This is a time of high anxiety, which has been hard on everyone’s mental wellbeing. Add to that the stress and uncertainty of being placed on furlough, and there’s a chance your returning workers will have some extremely complicated feelings. So, it’s essential to be aware of this and do what you can to reassure and support them.

You should offer frequent and transparent communication about the state of the business and recovery plans, as well as an open-door policy so that employees can reach out privately with any questions or concerns. Knowing they are valued and supported by you will be pivotal to their wellbeing.
Chef Philli Cooks with The Sausage Man and Lamb Weston
The Sausage Man teams up with Lamb Weston to inspire operators with innovative, celebratory dishes for the coming summer months. Chef Philli Armitage-Mattin, MasterChef: The Professionals 2020 finalist, joins the two companies to help kick off the collaboration in style!
Think German food fused with exciting flavours. One typically traditional German dish is currywurst and fries, packed full of spice. So why not go crazy for Chef Philli’s katsu curry with crispy panko bread-crumbed Bratwurst from The Sausage Man, and togarashi fries using Lamb Weston Hot2Home or Stealth fries?
Joy is in the air as people across the land anticipate letting loose and whooping it up after lockdown is lifted. So, there’s no better time to share our exciting news: we at The Sausage Man are excited to announce our new partnership with Lamb Weston! Their high-quality range of British potato products complements our authentic German range perfectly. Street food, outdoor eating, delivery and fusion cuisine will all play a big role for operators reopening in 2021, so how can pubs, restaurants and food truck owners surprise their guests easily with tantalising new recipes? The Sausage Man and Lamb Weston have the solution, and who better than Chef Philli to cook up some of the yummiest, most irresistible and Instagrammable recipes to celebrate the partnership.
“How can you improve upon sausage and mash or a hot dog with fries? Chef Philli has dreamed up some super creative, highly flavoursome recipes for us, such as Korean Kogo – sausage and fries on a stick – that is so Instagrammable consumers and operators will love it!” Taking the quintessentially British chips of Lamb Weston and our authentic German sausages as a starting point, Chef Philli tied in complementary flavours from around the world to develop tasty plates that please the eye as well as the taste buds. This menu is easy to recreate, especially with Chef Philli’s step-by-step videos, and every recipe looks and tastes fantastic! Check out the first episode of “Cooking with Chef Philli” on the website.
Microwaves and Combis
Microwaves: Easy, Quick and Cost-Effective

For those who did not know, including us here at CLH News, the first ever microwave oven was manufactured in 1953, was five feet high and was patented by an American engineer called Percy Spencer, who had made his discovery by accident! Apparently, he was working with a radar system that used a magnetron to send out radio waves when a chocolate bar he had in his pocket melted rather too quickly. He went on to develop his idea further and the rest, as they say, is history.

Wind the clock forward 65 years and it is pretty safe to say that microwaves are now vital to almost every food service operation, playing a pivotal role in an outlet’s success, cutting down waiting times and allowing for flexible cooking. As the sector emerges from lockdown, beginning April 12 when bars, pubs and restaurants can reopen, but only outdoors, operators will be keen to turn over tables as quickly and efficiently as possible, and will also be keeping a mindful eye on waste to ensure it is kept to a minimum. Many operators may decide to offer limited menus while the country remains under restrictions, moving towards full menu offerings when restrictions are lifted.

A standard microwave-only oven can perform essential functions such as safely re-heating frozen or chilled food, which is at the heart of many menus in informal dining restaurants and pubs, or in room service for hotels. They become far more versatile as a combination microwave oven. The combination is the addition of convection hot air and a grill, which transforms a simple re-heating cabinet into a multi-function cooking oven. Furthermore, studies over the past few years have revealed that outlets serving food tend to perform better than those which don’t, hence the reason many wet-led pubs have incorporated microwaves, and pubs had to offer a “substantial meal” when serving drinks. Whether you are a pub, bar, restaurant, hotel or café, you will recognise the importance of having a strong and varied menu using quality produce, and will already recognise the importance of having the right equipment in your kitchen. The speed at which food can be prepared on premises using microwave ovens, and the quality of cooking results they deliver, means that they are a time and cost-effective way to provide quick, portion-controlled dining! See the following page for equipment for your kitchen.
High Speed Solutions

A microwave is a true kitchen essential and has a place in every kitchen. A microwave is traditionally seen as an enhancement to use alongside other items of prime cooking equipment, but their versatility is often underestimated, and they can be of particular use where space or budgets are limited.

From the standard microwave used for fast reheating and regeneration, to combination microwaves for versatility and optimum results on a wide range of products, we have a solution for every operation. R H Hall have been dominating the microwave market for over 40 years, with exclusive supply status on the Sharp and Maestrowave ranges. Sharp have a reputation for quality and reliability that is second to none. With models ranging from the everyday 1000W R21AT best-seller to the high-power R1900M work-horse used by many leading chain operators, there is a model for every user.

The Maestrowave high speed cooking range combines innovation with everyday essentials, including the affordable yet durable MW10 and MW12, the innovative iWave® automated cooking solution, which uses barcodes to ensure consistent, error-proof cooking, and the award-winning Combi Chef 7, which combines traditional cooking methods with microwave speed to provide exceptional results. For all your microwave needs, contact the Microwave Masters at R H Hall on 01296 663400 or sales@rhhall.com
WINIA Microwaves, The Answer To The Question

Regale Microwaves announces its ‘Solution’ promotion, which starts in April. But if Regale has the solution, what’s the problem? Many microwaves have a cavity designed so that a ½ Gastronorm dish can only just fit inside. However, with demand for the excellent Microsave® cavity liner growing daily, this creates a problem for the user: the ½ Gastronorm dish cannot fit inside the Microsave on these ovens, so does the site protect their oven with a Microsave, or use a smaller dish to cook in?

The solution is the exciting new range of WINIA heavy-duty commercial microwaves. The 1500W and 1850W ovens come complete with a Microsave cavity liner as standard, and the larger cavity can easily accommodate either one ½ Gastronorm dish or two 1/3 Gastronorm dishes. The ovens also come with super-bright LED interior lights, which are designed to last far longer than the outdated incandescent bulbs still found in many commercial microwaves.

Pat Bray, MD of Regale Microwaves, is rightly proud of his company’s latest product group. “We listened very closely to our customers and users to understand what was (and what was not) important to them.” He continued: “They want a well-built, high-quality microwave they can rely on. The oven must also have a Microsave cavity liner to protect their investment. With the larger cavity, chefs can easily use either one ½ or two 1/3 Gastronorm dishes, to give real versatility.”

SPECIAL OFFER

There has never been a better time to look at the WINIA brand of commercial microwaves. Starting in April, Regale Microwaves are giving two 1/3 and one ½ polycarbonate 100mm Gastronorm dishes, worth over £15.00, with the first 100 pieces of either model WINIA KOM9F50 (1500W) or KOM9F85 (1850W). Email sales@regale.co.uk or telephone 01329 285518. SEE THE ADVERT ON PAGE 15.
Rational Wins Capital Equipment Supplier of the Year Award 2021

With a turnover of £400m representing seventy dealers, the ENSE Buying Consortium is one of the foodservice equipment industry’s biggest hitters – which is why its annual awards are highly regarded and eagerly contested. At ENSE’s recent Conference the consortium announced its 2021 awards – and Rational won the prestigious Capital Equipment Supplier of the Year. “We’re absolutely thrilled to pick up this award,” says Simon Lohse, managing director of Rational UK. “It’s especially important because it’s the dealers who vote for it – so to win it is a real boost for us.” Competition was tough in the Capital Equipment Supplier
award and Rational was up against six other top suppliers. “We work very closely with ENSE and its dealer network,” says Lohse. “Being a supplier partner to the Consortium is vital to the success of our strategic dealer development. Events like this Conference let us network with our ENSE dealer partners, building relationships and addressing issues. “The award underlines the close relationship we have with ENSE and its dealer members, and I’d like to thank everyone who voted for us!”
Cleaning, Hygiene and Infection Control
Welcome To Greener, Safer, Cheaper Food Waste Recycling
ReFood is the European market leader in food waste recycling. We offer businesses of all sizes an alternative to sending unwanted food to landfill with our safe, secure, closed-loop, end-to-end solution. We improve companies’ green credentials, reduce their carbon footprint and lower their overall food waste disposal costs by up to 50%*. By combining the very best knowledge and technology with decades of experience in environmentally
sustainable practices, we deliver the ultimate recycling service to private and public sectors across the UK. Our cutting-edge Anaerobic Digestion facilities create renewable energy, as well as ReGrow, our nutrient-rich biofertiliser. To see what we could save you, call 0800 011 3214. See the advert on page 10. *Figure based on April 2020 landfill rate vs. volume weight.
Why choose SANOZONE?
■ SanOzone generates ozone and completes a deep and accurate sanitation cycle
■ Ozone sanitisation is cheaper and faster than alternatives like fogging
■ Swiftly cleans and sanitises rooms of all sizes, removing harmful microorganisms
■ Reaches every corner of a location, acting more rapidly than other disinfecting agents
■ The machine generates ozone from the air, which decomposes to oxygen after use
SANOZONE CLEANS INDOOR SPACES OF ALL SIZES FOR COVID SAFETY

Ozone sanitising is the most effective way to deep clean indoor environments of all sizes, and it is easier, quicker and more cost-effective than manual cleaning or fogging. Once in position, an easy-to-use keypad enables the operator to set the optimal ozone concentration for the size of the room. The system then automatically converts the ambient air into ozone that fills the room, sanitising floors, walls, ceilings, surfaces and equipment. The complete sanitisation of an average-sized room will take approximately two hours. This includes the production of ozone, maintaining the required concentration for total cleaning, and then returning the room to its usual habitation state. SanOzone is one of the most versatile and efficient sanitisation systems available to healthcare, commercial property owners and facilities management companies. It offers many benefits over manual cleaning, and we believe that it is three times quicker and more efficient than alternatives like fogging.
THE MAIN BENEFITS OF SANOZONE ARE:
• Highly efficient in the fight against Covid viruses
• Effective against the majority of microorganisms tested
• Requires only low volumes of ozone to kill bacteria, fungus, parasites and viruses
• A standalone system that eliminates the need for chemical substances
• More cost-effective than traditional cleaning operations or materials
• Automatic cleaning cycle; easy to move from room to room

SanOzone units are fully mobile, easy to programme for hourly or daily cleaning, and have acoustic and visual warning indicators for safe operation. As it creates its own ozone, no chemicals or additional cleaning products are required. There are no ongoing costs.
SanOzone Easybox systems are available from Barbel now, with prices starting from £1,750 ex VAT for the Easybox 5. For more information, contact Barbel on 01629 705110 or email info@barbel.net
The Hospitality Sector Needs To Heed New Cleaning Techniques

With hotels, restaurants and cafés all due to reopen, Genesis Biosciences offers an eco-benign alternative to many of the harsh chemicals commonly used for sanitisation. Genesis Biosciences is one of the few companies in the UK to have gone through rigorous external antiviral testing to validate its surface sanitiser’s effectiveness against all enveloped viruses, including COVID-19 and other coronaviruses. The sanitiser is available in 1,000L IBC, 200L, 20L, 5L and 500ml concentrates, offering a cost-effective and environmentally responsible antiviral solution for all applications. To find out more about the Evogen Professional cleaning range, or to purchase direct, visit the company’s website.
Give Patrons Peace of Mind with an Air Purifier

A new YouGov survey has revealed that customers would be more confident visiting hospitality venues that use air purifiers. Leading global air purifier experts Blueair make units using unique HepaSilent technology that removes 99.97% of airborne pollutants. It may be shocking to learn that indoor air can be up to five times more polluted than outdoor air. The survey, commissioned by Swedish global air purification experts Blueair, has recently revealed the UK’s thoughts on visiting hospitality venues during this unprecedented time, and the results demonstrate a clear trend for increased confidence in hospitality venues such as hotels and restaurants with an air purifier. The results showed:

• Two in five adults (41%) said they would be more likely to visit a restaurant with an air purifier installed.
• 40% of people would be more likely to head to a café, while 39% would stay in a hotel that offered purified air to its guests.
• 36% are more likely to visit a pub if an air purifier is in use.

There’s no doubt that purifying indoor air will give consumers more confidence about heading out to their favourite hospitality venue. As well as removing bacteria and viruses, an air purifier can help with allergies, asthma and other respiratory problems. Thanks to its HEPASilent™ technology, Blueair air purifiers remove at least 99.97% of dust and harmful particulate matter as small as 0.1 microns in size, to create a safer environment for all those visiting and working in the venue. Contact Blueair to discuss air purifiers for your hospitality venue: michael.westin@blueair.se
Please mention CLH News when responding to adverts.
High-Performing Air Purifiers

The Blueair Blue Pure 411 is the air purifier of choice for the Page 8 Hotel in London. It is a Which? Best Buy, as well as Good Housekeeping Institute and Quiet Mark approved, and Asthma & Allergy Nordic certified. With HEPASilent™ technology that removes at least 99.97% of all airborne particles as small as 0.1 micron in size, including pollen, smoke, dust, mould spores, viruses, bacteria, pet allergens and micro-plastics, alongside app connectivity, it couldn’t be easier to improve the air quality in your hospitality facility with Blueair.
Clean Rooms and Fresh Laundry Are a Top Priority for Successful Hotels & Restaurants

You’re in safe hands with MAG Laundry Equipment, one of the UK’s leading suppliers of commercial laundry machines. MAG supply commercial washing machines, tumble dryers and ironers to meet the budget and demands of any business, however large or small. Their equipment is backed up with excellent service from their nationwide fleet of engineers across the UK, who can assist with maintenance, repairs and spare parts for all brands and models of laundry machines. MAG also offers free trials and demonstrations of their Super ActivO ozone generator, which sanitises the air and surfaces of any indoor room in as little as 15 minutes. This can remove the strongest of odours and eliminates 99% of bacteria and viruses, including salmonella, e.coli, Covid-19, mites and more. Efficient, reliable and professional; ask about MAG’s competitive purchase and pay-monthly options. Telephone: 01569 690802 Email: info@laundrymachines.co.uk
Restore Customer Confidence With Wizard’s Air Cleaning Technology

Over the last year, the quality of the indoor air we breathe has become more important than ever. Wizard’s air purification devices are a state-of-the-art solution to air cleaning, offering reassurance and confidence to clients and staff within the hospitality sector.

Protecting the air you breathe

Wizard air cleaning technology provides hospitality venues with a cutting-edge solution to air cleaning, whilst suiting the needs of your business and clientele. Purifying and eliminating airborne contaminants, these cutting-edge devices help to destroy bacteria, viruses, odours and harmful pollutants in an indoor space, such as:
• Bars
• Restaurants
• Hotels
• Cafés

SCARECROW UV-C AIR PURIFICATION

Purozo’s flagship device, Scarecrow, could be the cost-effective answer to poor ventilation that your business needs. Using class-leading UV-C lamp technology and hospital-grade HEPA-13 filtration to purify indoor air, Scarecrow detoxifies spaces of up to 500m2, 24 hours a day, 365 days a year.

Completely safe to use around clients and staff, and available in a variety of colours and sizes, the ultra-smart Scarecrow units let your customers know that you’re working to create a safer, cleaner environment for them to feel comfortable in. Find out more about Scarecrow.

AIR DECONTAMINATION FROM THE NORTHERN FAIRY

We have smaller spaces covered with the Northern Fairy, the newest addition to the Wizard product range. Designed with a forced ventilation programme that consistently circulates air, the Northern Fairy cleans the air in spaces of up to 40m2 using the same powerful UV-C lamp cleaning technology as Scarecrow. The Northern Fairy can be stood or hung on the wall for ease of application and is available in five different colours.

OTHER DEVICES FROM THE WIZARD FAMILY

Suitable for all your business cleaning requirements, we also stock alternative air and surface sanitising solutions from Wizard.

SANITISE SPACES WITH LION & TOTO

High-performance ozone generators from Wizard work to sanitise surfaces and indoor air within spaces of up to 300m2. Fast and effective, the Toto and Lion reach and sanitise every nook and cranny of an area whilst simultaneously removing odours and surface stains. With a stunning design and build quality, Toto and Lion are both easy to use, either controlled manually or via the mobile app.

GET IN TOUCH

For more information and a virtual device demonstration, get in touch with us on info@purozo.co.uk or give us a call on 01594 546250. See the advert on the facing page for more details.
Sundeala SD Safety Screens and Sundeala Safe Push Door Plates

Sundeala notice boards protect the environment outside while improving the environment inside. For any more information, or to find out how we can safeguard your spaces, contact our sales team on 01453 708689 / enquiries@sundeala.co.uk
Hospitality Technology
Business Development Through Integrated Technology and CRM

By Henry Seddon, Managing Director, Access Hospitality

The advances in hospitality technology over the last year have been well documented, with delivery and takeaway, pre-ordering, order and pay at table, and contactless payments being amongst the most utilised in response to controls placed on the sector. With so many solutions available, it’s worth highlighting a couple of areas that we expect will impact most significantly on an operator’s efficiency and revenue generation.

Firstly, the importance of technology integration. As Kate Nicholls, CEO of UKHospitality, acknowledged in an Access Hospitality webinar last year: “I have never seen such a rapid rollout of technology; things that would normally take three to five years to get through. We have now got that seamless integration of technology and the acceleration towards a single portal for a customer. Where you want to use technology is in a smart way to allow your staff to be freed up to deliver better service. That's where technology comes into its own.”

Where information and data can be entered once and shared across multiple platforms, there is an obvious benefit in increasing efficiency, but also in reducing the potential for errors when keying in. Monitoring performance and development opportunities is also enhanced when key metrics are available in one view from a single sign-on, and the volume of rich data for qualitative analysis is increased.

With so much attention given to the operational benefits of technology, a second business area that shouldn’t be overlooked is customer relationship management (CRM). Customer expectations within hospitality venues remain high; in fact, they are likely to have risen in response to the post-pandemic safety and cleaning measures that have been introduced, so the ability to track, manage and build relationships with customers remains vital. To ensure that we can provide the most comprehensive and responsive service, Access Hospitality recently acquired Acteol, the UK’s leading provider of CRM software and contracted marketing managed services to the hospitality, leisure, travel and gaming sectors. Our belief is that by creating a single customer view, where all data is in one place, de-duped and enriched, multi-channel marketing becomes more efficient and effective, with more meaningful connections, greater customer engagement and resulting sales. CRM tracks customer interactions, visit frequency, preferences and spend, providing an accurate profile that customers might not even recognise about themselves. Using the information available to tailor bespoke communications and offers deepens the affinity and loyalty of customers, increasing the likelihood of them returning to the venue and redeeming any incentives provided.

For any customer interaction, measurement is priceless, and understanding what works and what doesn’t enables you to focus your efforts in the most effective way possible and make data-driven decisions. Analysis of results identifies the best channels for communication – which might include email, SMS, push notifications or direct mail – as well as conveying the best sentiments or promotions to generate a positive response.

Predictions suggest that there may be fewer out-of-home hospitality occasions taken over the next few months, but that customers will be looking for more premium experiences. The value of integrating a powerful CRM system into your technology management suite will give your hospitality operation a significant boost, and offers a compelling argument for integration with other cost-saving and revenue-generating technology. See the advert below to find out more about how Access Hospitality can help your business.
Hospitality Technology
Apr/May 2021
CLH News
Yoello - Mobile Order & Pay You Can Trust

2020 was a pivotal year for Cardiff-based fintech Yoello. Since launching their mobile order and pay solution in June, the company has gone from strength to strength, growing rapidly whilst supporting thousands of hospitality businesses across the UK during the Covid-19 pandemic. Yoello aims to disrupt the current payment networks, which are outdated and expensive. By processing payments themselves, utilising open banking regulations, they want to bring operators and customers closer together with cheaper and instantaneous transactions. The platform is currently focused on the hospitality industry, from small cafes and traditional pubs to luxury hotels and large theatres. Yoello's mobile order and pay technology also has the capability to expand into sectors such as retail and tourism. The company's aim is to improve efficiency, increase revenues and improve the customer experience through mobile technology, particularly in the current climate, with businesses operating with reduced staff numbers and customer capacity. As we head towards a cashless society and a new technology-led, post-Covid future of service, Yoello's tech will play a vital role in helping most businesses survive.

Yoello's mobile ordering solution allows customers to access digital menus simply by scanning a QR code or typing in a URL using any smartphone or web device, without needing to download an app. Customers can access table service, click & collect and delivery services all through one platform. From a merchant's point of view, it's very easy to set up and manage contactless order and pay, either alongside an existing system or through POS integration. To find out more, visit or speak to the sales team: sales@yoello.com / 07764 86 4840
ConnectSmart® from QSR
QSR Automations, the leading provider of kitchen automation, guest management, off-premise technology, and predictive analytics, announced the launch of the ConnectSmart Platform. This move strengthens and simplifies the company's product offerings. The platform effectively combines QSR Automations' software suite into a single platform:
• ConnectSmart Kitchen, an industry-leading kitchen display system
• ConnectSmart Go, an off-premise order management system
• ConnectSmart Host (formerly DineTime), a guest experience, reservation and table management solution
• ConnectSmart Insights, a business intelligence tool
• ConnectSmart ControlPoint, hardware management software
• ConnectSmart Recipes (formerly TeamAssist), a kitchen-integrated recipe viewer

By connecting the back end and integrating these solutions into one operational platform, operators can work with a solution that shares data from all its components and make holistic restaurant decisions in real time.

Flexible APIs and robust integrations allow operators to tailor the platform to their specific needs and focus on what's most important: the guest experience. "Removing the barriers between products on the back end to create the ConnectSmart Platform architecture demonstrates how our software has advanced from individual product offerings to a connected operational platform. This move is a logical progression based on an industry that's progressing just as rapidly. The pandemic accelerated what we already knew: connectivity and operational data are paramount for operational success. The platform will allow providers to make strategic, data-driven decisions that will ultimately deliver tailored guest experiences for their diners," said QSR Automations Founder and CEO Lee Leet. Restaurant owners and operators can learn more about the ConnectSmart Platform at
Future-Proof Your Operations with an Integrated Platform
KITCHEN AUTOMATION • GUEST MANAGEMENT • OFF-PREMISE TECHNOLOGY • PREDICTIVE ANALYTICS
What can the ConnectSmart® platform do for your restaurant? Learn more at:
www.qsrautomations.com/overview
Invest in the Right POS Solutions to Get You Through Reopening and Beyond

The COVID-19 pandemic has seen hospitality ventures in particular desperately searching for simple solutions that are easy to implement under an impending time crunch, to address a set of stringent guidelines introduced by the Government. Keen to abide by social distancing rules, business owners are looking for ways to give their customers the peace of mind to continue purchasing from their stores, in person and out, with many turning to new tech to keep their businesses afloat. Adopting a bespoke point-of-sale (POS) system is becoming an increasingly popular choice in light of what it offers: features to boost performance through faster transactions, expansive data collection and live stock counts, an especially useful component for responding to unexpected spikes and drops in demand during an unpredictable climate.

Opting for a click & collect service like Goodeats, offered by Goodtill, has been the perfect example of innovation in the hospitality industry as an answer to COVID-security and beyond, with its pivoting feature enabling table ordering solutions when dining in becomes an option once again. Today, a POS system integrated with a click & collect service has become essential, and is no longer limited to giant retailers but accessible to smaller ventures to help diversify revenue streams. Access to these kinds of value-add services promotes customer loyalty even in trying times.

Many of these systems were initially deemed temporary solutions to stay in business during the COVID crisis, but the staggering uptake in interest and usage of POS solutions is altering the hospitality sector from the ground up. We now see such software playing a key role within the service industry, providing establishments a stronger offering over their competitors. Businesses that have acquired a click & collect service are well placed to satisfy potential future restrictions when it is partnered with contactless payment and pickup, alongside dedicated POS software introducing affordable automation to business operations. See the advert on the facing page for details.
Hospitality and Social Media: The Good, the Bad, and the Ugly
By Harvey Morton, Digital Expert and founder of HarveyMorton.Digital ()

A disgruntled employee could do more damage on social media in a moment of anger than any good done by consistent and regular updates from your marketing team. A simple comment about the poor hygiene standards in a kitchen could set a coffee shop back many months.

Consequently, writing a social media policy for your employees is an essential tool for protecting your brand and all colleagues' best interests. While writing such a policy is akin to walking on eggshells, it is best to undertake this difficult task anyway.

What you hope to achieve with your policy

Obviously, the number one goal of your policy is to protect your brand. Setting expectations for social media use reduces the chances for error and prevents confusion about what is and what isn't seen as professional by your company.

There are positive aspects to this too. When your team post positive messages about your company, you are leveraging the potency of employee advocacy. Your business social media account might make it clear you are unique, but imagine how much more powerful it sounds from someone who works for you.

What should I include in my policy?

Your social media policy should also make clear that these accounts are owned by individuals. This is vital. The ownership of the social media account makes the individual responsible for any fallout from inappropriate or even criminal content. As a company, you want to be sure you are not seen as liable for this.

The skill in writing this policy is getting the tone right and seeking buy-in from the start. It will be worth the effort.
Technology is Key to Combatting the Food Waste Problem
By Danilo Mangano, General Manager Europe at guest experience platform SevenRooms ()

Following the release of the UN's 2021 Food Waste Index Report, Danilo Mangano, General Manager Europe at SevenRooms, discusses how technology can help the hospitality industry reduce food waste. Globally, food waste accounts for 8% of greenhouse gas emissions. If it were a country, it would be the third-largest producer of greenhouse gases after the US and China. The topic of sustainability in business is not new, and is certainly not going unnoticed by consumers, who increasingly want all of the experience with none of the eco-guilt. As hospitality reopens this spring, there is an opportunity to both renew focus on sustainability and boost profitability. Research has shown that if the average restaurant reduced food waste by just 20% a year, it would save an estimated £2,000. This is significant, especially when you consider the lost revenue restaurants will want to make up when on-premise dining restarts. There is a clear environmental and economic need for restaurants to reduce food waste, and technology can offer a solution. Many restaurants use technology to help them manage operations, from reservations to seating management, but far fewer use these platforms to gain insight into where food waste can be reduced. With this in mind, we have looked at how the hospitality industry can make the most of technology to tackle food waste while still ensuring guests go home satisfied.
USE GUEST DATA TO INFORM YOUR SUPPLY ORDERS

When it comes to ordering food, many restaurants rely on rule-of-thumb estimates. With reduced capacity likely to be the norm for the first few months after reopening, it's going to be difficult to predict exactly what they will need in terms of produce and the quantities required. Ultimately, this could cause challenges when reining in spending, especially as maximising profits needs to be front of mind upon reopening. With the right technology solution, this unpredictability could be one less thing for operators to think about. Restaurants that have fully integrated guest experience platforms in place can capture and leverage data directly, offering important insights into a diner's dietary preferences, allergies and favourite orders. Not only does this enable them to provide tailored experiences that help to build a deeper relationship, but these insights also allow operators to undertake supply ordering more accurately. Looking at the data, a restaurant manager may see that a guest was a Friday night regular, and always ordered the salmon en croute with a specific vegetable side dish. When making their next reservation, it's likely this will be the meal they choose, which an operator can use to plan when placing orders with suppliers. While lockdown has meant venues have been closed for dining on-premise, restaurants that have been facilitating their own online ordering, collection and delivery services will have continued to build their bank of customer data. This same data will prove invaluable when they are organising inventory. For instance, if a customer has ordered a specific chicken dish for delivery over the lockdown, chances are they are going to order it again when they can eat at the venue. The benefit of knowing this information is twofold. With a better idea of what diners will order, the kitchen can more accurately judge what they will need to order from suppliers.
Plus, by anticipating the customers’ wishes, an operator can continue to provide a positive diner experience that makes them feel valued and more likely to return again.
REPURPOSE YOUR SURPLUS
On some occasions, an operator will have leveraged all the available data to inform supply orders and still be left with surplus food at the end of service. So what happens then? We know how valuable the art of the quick pivot was during the lockdown. There is no reason why restaurants should leave that mindset behind now, particularly when it can have such a profound impact on food waste. The introduction of retail was a lifeline to many restaurant businesses over the pandemic, and is a great way to help operators eliminate food waste in the long term. With special relationships often required to access high-end produce, meats and more, these retail operations are often the only way for consumers to purchase these specialty items. At-home meal kits were also wildly popular over the past year – why not continue to supply these kits with surplus food? It's a simple way for customers to feel connected to their favourite dining spot if they don't feel ready to return to on-premise, while using food that would otherwise have been discarded. Many are more than happy to support their favourite restaurants, and they will feel better in the knowledge that not only will they have prevented food from going to landfill, but that they have helped a local business as it recovers and rebuilds. Tackling food waste concerns should be a priority for everyone within the hospitality space when venues reopen. It is worth noting, however, that the recommendations outlined are only possible if operators own their customer data and hold direct relationships. Without access to their email address, how will you contact them to let them know their favourite pasta sauce is available to buy and enjoy at home? If you can't access their order history, how would you know who to contact when marketing a meal kit for the dish they never fail to order? Or encourage a regular delivery diner to visit you on-site once your venue reopens?
Operators that are proactive in using intelligent, data-driven technology can only benefit, especially as costs associated with wastage decline and consumers opt for restaurants that prioritise sustainability and top-tier service.
Drinks Dispense and Cellar Management
"Raising the Bar"

After the rockiest period for many a decade, the hospitality community are eager to pick themselves up and get set for a return to trading as the UK slowly eases out of the Covid-19 restrictions, says Jeff Singer, Commercial Manager, Beer Piper.

"Despite the road map out of lockdown including a roll-out of dates for the On Trade, there are still many uncertainties. The plan has given the industry some glimmers of hope, though, and with better weather on the horizon and the vaccine roll-out ahead of schedule, it's a good time to gear up for a summer of much-needed success.

"With that in mind, this time can be used to make some changes that can have an impact on the success of pubs and bars when they are finally allowed to open their doors to the public, in April for outdoors and 17th May for indoors if all goes to plan.

"Since the first national lockdown back in March 2020, the UK's drinking habits have shifted. With the On Trade closed for much of 2020 and 2021 to date, consumers have been buying into more premium products from the Off Trade. Sales of premium beers, wines and spirits have all rocketed, as drinkers have looked for ways to treat themselves."

Jeff adds: "It's clear from research such as this that publicans and bar owners need to take on board the trend for premium beers when assessing their range before reopening."

Beer Piper's Jeff Singer provides six elements to consider when storing and serving beautiful beer.

A STELLAR CELLAR
It is vital that every pub has a well-kept cellar. Whilst it is out of sight, licensees must ensure their cellar is well maintained to prevent low quality beer and poor hygiene standards. A well-maintained cellar means a happy customer.

GOOD CELLAR MANAGEMENT
A good cellar management routine is an essential part of any pub's success, helping to produce quality pints and boost profitability, says John Gemmell, On Trade Category and Commercial Strategy Director at HEINEKEN UK.

Due to the long periods of closure over the past year, cellars will likely not have been maintained as regularly and consistently as usual, meaning a thorough, deep clean and safety assessment should take place prior to reopening. It's important to consider the way cellar equipment was left during the periods of closure to ensure the hygiene of the bar and cellar, and to re-start dispense systems correctly for the reopening phase.

"90% of beer and cider sales come from draught, making the dispense system the beating heart of any pub or bar. Now more than ever, operators need to drive footfall, prolong visits and ultimately deliver a great customer experience every time. Draught systems that contribute to this are worth the investment, taking unnecessary burdens away and giving operators time to focus on running their business. HEINEKEN SmartDispense™ is an industry-leading business solution, connecting dispense technology with service and the insights that an operator needs to improve their quality, reduce waste and save time."

John adds: "Full guidelines are available for operators on The Pub Collective, including advice about the correct line cleaning processes, preparation and tailored guidance. You can also find checklists and risk assessments to support your reopening, shared by HEINEKEN's Star Pubs & Bars managed estate."

John says that in maintaining a good cellar it is vital to:
• Ensure line cleaning is carried out to correct procedures every seven days (except HEINEKEN SmartDispense™ systems, where line cleaning can be extended to six- or even twelve-weekly, if needed at all), using brewery-recommended detergent. Remove and clean nozzles in hot water after every session, use sanitiser spray for keg couplers, keg wells and cask taps, and ensure the sump is clean and working correctly. Always wear the correct PPE when line cleaning.
• Keep cellars at a constant temperature of 11-13°C by installing a wall-mounted thermometer, regularly topping up cooling equipment with water, checking fans and condensers are free from dust and blockages and keeping a planned schedule of maintenance to avoid costly breakdowns. If the cellar is too cold, cask ales will be flat and may have a chill haze. If too warm, beer may develop a fob, which causes wastage. Inexperienced bar staff can pour good beer into the drip tray when there is too much fobbing, which affects yields and increases operational costs for your business.
• Ensure that, where possible, only one member of staff enters the cellar per session as long as restrictions are in place, to facilitate social distancing – washing their hands thoroughly before and afterwards. Consider having a dedicated person, or a couple of people, for cellar management throughout the week.
• Create more serving spaces where possible. If space allows, relieve pressure from the main bar and help the customer journey by installing a second bar that serves your most popular drinks, ideally outdoors. Make use of countertop draught systems such as BLADE, or think about renting moveable SmartDispense™ BarPro systems, to help you serve more quality pints during busier seasons, ultimately putting more money in your till.
• Focus on quality. It's clear what consumers have missed during lockdown, and that's the unbeatable pub experience and the quality of a perfectly poured pint. To deliver on quality, upskill bartenders on beer and cider service with staff training such as Hello BEER, covering everything from cellar to glass care, as well as safety and hygiene.
• Review your venue's capacity with the latest restrictions and manage your range accordingly to maintain quality. It's important to maintain throughput of at least 1 keg per week per tap so you continue to offer a great standard of beer and cider service. Upon reopening, consider starting with a smaller draught range, particularly if your capacity is restricted. For those with outdoor spaces, remember to allow for restrictions such as the rule of six and social distancing. As government restrictions phase out and capacity returns to normal, look to expand your range accordingly.
• Be particularly mindful of your cask range. Taking the time to organise your cask offering according to throughput will be crucial to maintaining the beers' good quality. Consider reducing your range during the week when trade is quieter – perhaps offering just one well-known brand to satisfy the majority of cask ale drinkers. We would recommend starting your range with an Amber ale, as these hold a 67% volume share of the category and are preferred by 41% of ale drinkers. When trade picks up at the weekend or over time, look to expand your range to include Golden ale, which is the second most popular, then a second Amber followed by Dark ale.
Cellartech Solutions
Cellartech Solutions Ltd have been supplying the independent pubs, clubs and restaurants of the Midlands region since 2010.
We are a dedicated one stop shop for all bar and cellar needs, including cellar cooling systems and drinks dispense set ups. With the current lockdown coming to an end it is vital that your drinks dispense equipment is clean and sanitised and ready to serve your waiting customers. Cellartech Solutions can offer a one-off Deep Line Clean and Cellar Services to help with this.
Please contact our office on (01572) 739364 or email office@sallytechsolutions.com.
Clear Brew - Helping Hospitality To Get Back On Its Feet

REDUCE BEER WASTAGE
We reduce your wasted ullage, meaning you have more beer to sell at full retail price to increase your profits.

NO CHEMICALS TO BUY & STORE
We bring all cleaning equipment and chemicals with us, so you don't need to buy or store them.

THE PERFECT PINT
We enable you to dispense the perfect pint to your customers time and time again, improving sales and yields.

LOCALLY BASED TECHNICIANS
Our nationwide coverage means that you will be helping keep local people in work, and in doing so keeping transport emissions down through reduced fuel miles.

SAVE WATER, GAS & ELECTRICITY
By only using a third of the water, no gas and no electricity, our system doesn't only reduce waste, it helps reduce your bills.

THE HIGHEST STANDARDS
We build ongoing relationships with you to ensure you save money, reduce waste, mitigate risk and ultimately ensure the beer you serve tastes as it should.

Book your FREE No Obligation cellar equipment check and Free Beer Line Clean.

Call the Professionals 0800 7810 577 freeclean@clearbrew.co.uk
Outdoor Spaces
Bring In Much Needed Revenue with an Outdoor Menu This Summer

With pub gardens and outdoor seating due to open from 12th April, an outdoor menu offering will provide a much needed revenue boost for hospitality venues across the UK. We have a wide range of products that will help you create the perfect outdoor kitchen, in any outside space.

Crown Verity Professional Barbecues offer a high quality, adaptable cooking solution, with a wide range of add-on accessories for a varied menu. From the compact MCB30 to the MCB72 'King of the Grill', there is a model for every operator. Simply Stainless Tabling works alongside Crown Verity to create the perfect outdoor kitchen. Working with our fabrications division, we can also offer you a bespoke stainless steel solution for any requirement. Hygiene and safety is still a huge consideration; our Mobile Hand Wash Station & Sanitiser Unit helps you to provide hygiene facilities outside, keeping all customers and staff safe.

With the 'super deduction' tax allowance introduced in the 2021 budget, businesses can also reduce their tax bill by 25p for every £1 spent on new equipment purchases, so return on investment can be gained even faster!

R H Hall offer the full package: from site visit, design and quotation to supply of the perfect outdoor kitchen!

Contact our knowledgeable sales team on 01296 663400 or sales@rhhall.com to help you choose the perfect equipment for any operation!
Make the Most of Your Outdoor Areas with the Contract Furniture Group

Contract Furniture Group have worked hard over the last year to provide their customers with the same high quality service you've come to expect from them. Despite the frustration our whole industry is feeling at the moment, Contract Furniture Group encourage you to seize this opportunity to update, repair or replace décor ahead of reopening; and to support this they are looking at putting finance packages together to spread the investment.

If you do have any questions or queries about Contract Furniture Group's products or services, stock availability or lead times, terms or available finance options, please don't hesitate to call. Most importantly, Contract Furniture Group say they hope you and your loved ones stay safe and well during these unprecedented times. For further information visit
Making the most of your outdoor areas with Tempus solutions

The Manhattan Pergola from Tempus is a simple and cost-effective way of extending the comfort of your indoor spaces into your outdoor patios, from as little as £1,400.

• Available with or without retractable sides for wind shielding
• Quick and easy to use louvered roof keeps out the rain and lets in the sunshine
• Lighting and heating packs make the sheltered spaces usable all year round, whatever the weather
• Full fitting service available through product-trained Contract Furniture Group installation team

A wide range of complementary outdoor heating, lighting and furniture is available to view on our website.

Never knowingly beaten on price!
Contract House, Little Tennis Street South, Nottingham NG2 4EU
0115 965 9030 | info@contractfurniture.co.uk | www.contractfurniture.co.uk
FOLLOW US
Boost Your Restart Grant with Black Rock Grill's GET BACK TO BUSINESS 10% DISCOUNT

Consumers are craving Pub & Restaurant hospitality NOW! An overwhelming number of punters will be heading for the pub, restaurant and cafe experience to get their lives back to some kind of normal.
Will you be ready? Restart with the Black Rock Grill WOW factor:
✓ Get ahead of your competition
✓ Serve customers within 5 mins of ordering
✓ Turn tables quickly
✓ Serve meals indoors & out
✓ Fewer staff needed to operate
✓ No meals returned
✓ Hygienic safe sizzling hot meals
✓ Rebound quickly and profitably
✓ Restart with something new and exciting for you and your guests
Hot stone cooking allows your customers to dine live at the table and cook their meal exactly how they like it, sizzling hot till the very last bite! It will create a great atmosphere within the restaurant and give you a USP for your business.
We are offering:
GET BACK TO BUSINESS 10% DISCOUNT on all Commercial Set-ups (Valid till 30th April '21, while stocks last)
Please give us a call or email to find out more information. We have lots of guidance on set-up size, menu ideas, and relaunch ideas
sales@blackrockgrill.com | 01256 359858 |
Herald Adds More To The Mix
FOOD SERVICES SUPPLIER INTRODUCES DISPOSABLE SOUP CUPS, CHICKEN BOXES AND SMOOTHIE CUPS

Quality disposables manufacturer and supplier Herald has launched three new packaging products to meet increased demand from the catering and food-to-go sectors as the market for takeouts increases.

The products consist of 8 oz, 12 oz and 16 oz kraft and white, lined, paper soup cups with lids; small, medium and large paper, recyclable chicken boxes; and 8 oz, 10 oz, 12 oz, 16 oz and 20 oz PET smoothie cups with flat, domed, or domed-with-straw-hole lids. Competitively priced, all three lines have already earned themselves a loyal customer base and Herald is expecting sales to increase going into 2021.

Other new products include a wider selection of single, double and triple wall cups and a choice of eco sip lids made from CPLA, a renewable material created from plants. These lids complement Herald's 8 oz, 12 oz and 16 oz hot paper cups, which have long been a market favourite based on quality and price. For further information on Herald and its products, log on to or call 0208 507 7900 to order a copy of the new catalogue. See the advert on page 9 for details.
Discover How a Canopy Can Increase your Profits

Pubs, Restaurants and the Leisure Industry will have a golden opportunity to make the most of their gardens to increase Summer trade when lockdown is over. By installing a Zenith Canopy Structure, Licensees and owners will have the opportunity to re-appraise their business and re-market their outdoor facilities to encourage customers to use their establishments. Ian Manners of Zenith said, "The benefits of canopy structures to Licensees is that they can be used all year round, by adding heating, lighting and sidewalls." The soft PVC walls slide like curtains, and can be easily removed or fixed in position. Alternatively, terrace screens can be fitted.

Many of Zenith's customers have opted to fund their canopy project by LEASE RENTAL. This is a financially efficient way of installing a canopy because it allows the Licensee/owner to budget on a monthly basis without a large capital outlay. After a 4- or 5-year period the licensee will own it outright. It is also very tax efficient. Ian Manners went on to say, "for example, you can seat 50-60 customers under 2 x 5m x 5m (50m²) Airone Tipo 150 structures from as little as £315 (ex VAT) per month over a 5-year period. Essentially your customers are paying for the canopy as they use it."

James Bishop is the Proprietor of The Longcross Hotel in Trelights, near Port Isaac in Cornwall. James purchased three of Zenith's Airone Tipo 150 canopies and installed glazed walls and doors. James said, "Since installation business has boomed and we now have an extra outdoor function room for up to 150 people when not being used as a day to day eating area."

Simon Vale of Drayton Manor Theme Park purchased 165 square metres of Zenith's canopies and said, "we received a very quick return on our investment and our customers are now able to benefit from an outdoor covered area in all weathers." Zenith's canopies are available in a variety of modular shapes, sizes and colours. For further information contact Zenith Canopy Structures Limited, T: 0118 978 9072 E: info@zenithcsl.com or visit
Apr/May 2021
CLH News
37
Outdoor Spaces
LeisureBench Announce New Exciting Comprehensive Website
LeisureBench Limited, one of the UK's leading suppliers and installers of high-quality outdoor furniture, is proud to announce the launch of our new exciting comprehensive website.

Among the new features installed is the introduction of automatic bulk discounts off our list prices, and stock availability is clearly shown. In these difficult times, there is also the facility to pre-order any item, to secure the products you require. There are exciting new ranges for 2021, including 100% recycled plastic products such as picnic tables, benches, planters and more.

Other products showcased on our new website include a new range of dining huts, ideal for outdoor dining whatever the weather. We can also supply and install quality awnings and sails. We also have an extensive choice of parasols and gazebos.

Among our existing product ranges for this year are wooden round and A-frame picnic tables, teak and pine benches, rattan chairs, tables and sofas, outdoor dining sets, metal furniture, a full range of polypropylene chairs in many different colours and styles, and much more. All our range represents excellent value for money.

Visit our new website, or email us at sales@leisurebench.co.uk. Telephone 01949 862920.
CambridgeStyle Canopies: made-to-measure aluminium outdoor canopy systems.

CambridgeStyle Canopies Ltd | 01353 699009 | office@cambridgestyle.org | "WE'VE GOT IT COVERED - NOBODY DOES IT BETTER"
Space-Ray - We Know Heat

..., making sure your heaters stay in top working condition no matter the weather conditions. Contact now: info@spaceray.co.uk, spaceray.co.uk, 01473 830551.

Seating with Wider Appeal from ILF Chairs

Hospitality has now got the green light to reopen outdoors from the 12th April. Will you be ready and able to cope with the rush? Will your café, restaurant or bar have the right outdoor ambience and comfort to stand out from the rest and make your customers want to come back?
Please mention the Caterer, Licensee & Hotelier News when replying to advertising
Products and Services
Simpleas Mince: The UK's First Retail Meat Substitute Made From 100% Peas

At a time of growing consumer demand for non-soya, sustainable meat substitutes, Norfolk company Novo Farina Ltd is launching Simpleas Mince, the UK's first retail meat substitute made entirely from peas. The product ticks all the "good" boxes: vegan, gluten-free, high in fibre, high in protein, non-GMO, soy-free.

In trials, consumers have been excited by the great texture and endless recipe possibilities: Simpleas Mince can be used in family meal favourites including Bolognese sauce, chilli, lasagne and cottage pie. They also responded positively to the single-ingredient labelling (100% peas) and the long shelf life - a bonus for consumers in these times of retail stock uncertainty. Simpleas Mince has an RRP of £3.99 for a 150g pack. It is also available in bulk direct from Novo Farina Ltd. Please email Vicki.Myhill@novofarina.com for more information and pricing.

Super Quick, Free Range, Super Easy

See the advert on page 19.

Snacking Sorted with the Tayto Group

As Britain's largest family-owned snacks business, Tayto Group has Snacking Sorted with our award-winning range of premium snacks, including REAL Handcooked crisps and the market-leading pork scratchings brands - Midland Snacks and Mr Porky.
- REAL Handcooked Crisps are a foodservice-exclusive range specifically developed for the hospitality sector, with distinctive, punchy flavours made with locally sourced potatoes.
- Midland Snacks Traditional Scratchings is our best-selling pubcard - award-winning hand-cooked scratchings using a recipe that has stood the test of time.
- Mr Porky is the most recognised name in scratchings, with a complete range of Great Taste award-winning pork snacks including Mr Porky Crispy Strips, which offers a lighter bite, more akin to crispy bacon rind than a traditional scratching.

As experts in snacks, our ranges are tailored for the hospitality sector with formats such as pubcards and clipstrips that show off your range and drive sales. Contact the Tayto Group Limited on Tel: 01536 204200, or see the advert on page 17 for further details.
The Infusion Solution

Hospitality and catering companies looking to provide the most cost-effective service for tea and other infusions can try out the double-patented TEAPY T-4-1 free. Check out TEAPY Ltd's claims of at least 40% reduction in labour cost and 70% reduction in storage space by obtaining a free trial set, with guidance on how to compare with your existing service(s) for tagged or untagged bags or loose leaf tea, fruit, herbal or other infusions. With over 3 years' proven performance and success in four industry awards, many have called it the biggest invention, or rather two inventions, since the tea bag.

The TEAPY mug lid keeps the infusion hot, providing the brewing conditions of a conventional teapot but with better, visible brew control, then flips and "docks" snugly to the mug as a disposal "tidy". A complete TEAPY T-4-1 tea service with TEAPY, mug, teaspoon, milk jug and optional loose leaf infuser can be carried in one hand, or with more sets on a single tray, more safely than any comparable service. Operators can build their own sets by matching their own mugs and spoons to the right TEAPY. It can also be used with hot chocolate (try mini-marshmallows served in the jug), mulled wine and coffee bags, for example from Taylor's of York - in fact, TEAPY T-4-1 is so good they patented it twice! Visit www.teapy.co.uk or see the advert on page 5.

JURA - Speciality Coffee

The GIGA X3c / X3 G2 allows JURA to impressively demonstrate ... For hotel breakfasts, restaurants, bars and seminar / conference venues; recommended maximum daily output 150 cups per day. Photos: WE8 in chrome, GIGA X3 in aluminium. Visit uk.jura.com or email sales.uk@jura.com for further information, or see the advert on page 3.

Anything's Possible with Saniflo

Saniflo is one of the most widely recognised brands in the UK plumbing market thanks to its range of pumps, lifting stations and macerators that enable domestic and commercial customers to install bathroom, kitchen and washing facilities almost anywhere - particularly when gravity drainage is not an option. As well as models that are installed indoors to pump out waste, there is now a huge choice of models that can be sited outdoors and installed underground. These robust lifting stations pump black and grey waste from single buildings or multiple small buildings. Recent additions to the range include grease traps and water salvage pumps.

Saniflo's sister company, Kinedo, manufactures shower products for domestic and commercial settings. The range includes integrated cubicles that feature internal and external panels and door, shower tray, shower valve and head in one easy-to-install package. A range of contemporary shower enclosures and premium shower trays complete the portfolio. The company has an unrivalled reputation for after-sales service, which is enhanced by its unique nationwide network of 100 service engineers supported by the technical support team based at Saniflo UK. Visit for further information.

New Cuts to NEW YORK BAR Collection

German glassware manufacturer Stölzle Lausitz has added two new design cuts to its cosmopolitan-inspired New York Bar collection. The design language of the collection focuses on elegant understatement in combination with high-quality crystal glasses to create a sophisticated bar atmosphere reminiscent of New York City's buzzing nightlife. Drawing upon the skylines and bar culture of New York, the decorative 'Manhattan' cut picks up on the straight lines of the up-and-coming district of the same name, while 'Club', a diagonal cut, gives the glasses a stimulating dynamic. As with the design itself, the brand's matt finish has been carefully executed, and the lines of the straight 'Manhattan' cut and the diagonal 'Club' cut take the lead in the eye of those drinking from the glasses, while the clear crystal glass casually corresponds with the opaque cut. Depending on the beverage in the glass and the colour of the drink, the light is playfully refracted along the individual lines, creating a unique point of difference that the eye is naturally drawn to. Made from high-quality, scratchproof glass and well-weighted for a comfortable hold, hosts are able to easily impress with the premium brilliance of the glass. See the advert on page 5.

Helping Hotels and Restaurants to Bounce Back

Aspenprint, a leading design and print agency for hotels and restaurants, has cleverly devised a whole range of branded social distancing items to help hotels bounce back this year, at the same time as keeping staff and guests safe and compliant. The various new items include disposable menus, NeverTear indestructible menus / leaflets, door seals, cutlery sleeves, branded screens, face masks, hand sanitising stations and more. Combining clever marketing with creative design and strategic thinking will give your holidaymakers long-lasting memories as well as keeping them socially distanced and safe.

One of their latest products to take the hospitality industry by storm is Aspenprint's antibacterial laminate, which can be used on a range of printed materials including menus, signage, brochures and posters. The specialist antibacterial laminate coating kills 99% of all germs which touch its surface. This laminate has been widely used by hotels and restaurants, as well as the NHS and schools, due to its ability to protect the public. Aspenprint has also embraced the huge social media market for hotels by adding giant deckchairs to their extensive list of products to ensure 2021 is a success. Aspenprint are helping many hotels and holiday parks to ensure guests have a successful staycation this summer with a range of fun printed materials, from signage to deckchairs, branded parasols, feather flags, banners and more. Visit Aspenprint's website for more information or call 01202 717418 to speak with one of the friendly team. See the advert on page 13.
Design and Refit:
CardsSafe® - Protecting Assets

The CardsSafe® system is specifically designed to securely retain customer credit, debit and ID cards while the cardholder ...
• CardsSafe® increases staff trust and improves the work environment
• CardsSafe® is easy to use with minimal training and quick to install
Compact Comfort with Tub Chairs & Small Sofas

... comfortable but won't overwhelm.

Sims - The First Port Of Call For Banquette Seating

With clever planning, seating generates a great flow for customers and staff around a pub, restaurant, cafe or club. It can be used to divide areas, create new spaces in a room and offer intimacy, allowing for the perfect social meet-up. See the advert below.

How Hotels Can Really Profit From Sustainability

Adveco specialises in creating bespoke hot water and heating applications for the hotel industry that leverage all the advantages of renewable technologies, from air source heat pumps and solar thermal to heat recovery. We can also smartly combine these with existing gas-fired ...
Euroservice Trolley Manufacturers

Celebrating 40 years of experience in the sale and manufacture of wooden trolleys for the catering trade, Euroservice trolley manufacturers have now acquired a worldwide reputation and still offer an extensive, comprehensive range of top-quality wooden trolleys manufactured in the UK. Top quality is a priority in the production of all of our products, and Euroservice are specialists in the manufacture of sturdy and beautiful-looking trolleys which will grace any environment, from the small privately owned restaurant to the splendid 3 to 5 star hotels, resorts and residential homes. Euroservice's excellence in the manufacture of wooden trolleys is backed by a personal, efficient and friendly service second to none. We are always busy researching the needs of the market and launch new ranges according to market demands. Whatever your needs, you can be assured that Euroservice can cater for them, and we look forward to your call. Freephone: 0800 917 7943, sales@euroservice-uk.com. See the advert on page 2 for further information.

Square One Interiors

... the overall design. Making furniture from scratch also had its benefits, as Jamie soon found that businesses would approach him with specific needs and requirements, meaning that he was able to provide a fully bespoke service, as well as offering design and advice. Since his humble beginnings in the garden shed, Jamie and the company now work with hospitality operators, pubs, bars and hotels, as well as some large contract furniture companies and high street names. Our portfolio and workforce are growing and we are very excited to be working on some fantastic projects moving forwards, so watch this space! For more information visit.

Capricorn Contract Furnishings

Capricorn Contract Furnishings are now firmly established as one of the country's largest stockists and suppliers of quality contract furnishings to cafes, bars, restaurants, pubs, clubs and hotels. Capricorn are based in a 40,000 square feet showroom and distribution warehouse on the outskirts of Exeter in Devon. From within the distribution area we are able to offer a next-day delivery service on thousands of products, including tables, chairs and stools. For more information or a Capricorn Contract Furnishings catalogue and price list, contact Brian Pengelly on 01395 233 320, or visit.
Property and Professional
LSS and Kickstart Scheme Helping To Support Hospitality The LSS Group has become a Kickstart Gateway to help the hospitality sector place young people in jobs at a true zero net cost to employers. With over 30yrs experience in the Hospitality and Leisure sectors, LSS Group is aware that employing staff, in a cost-effective way, is an issue as the sector begins to open up. Potential employers will receive bespoke support from LSS Group to enable them to take part, and LSS Group will handle the application process and provide ongoing support. LSS Group provides comprehensive wrap-around support to the young people taking part throughout their six-month placement including mentoring, a dedicated placement manager, skills development, help with CV-building and job applications, and post-placement support towards future employment.
Andy Merricks, CEO of LSS Group said ”Covid-19 has had a profound impact on the Hospitality and Leisure sectors as well as employment and young people entering the labour market are among the worst affected. We are pleased to be able to support the sectors and believe that our experience will enable businesses to scale up recruitment in a manageable way as well as help young people to enter the industry and gain valuable experience and earnings. It also provides an opportunity to nurture some much needed ’home grown’ talent.” The process from application to interviewing potential placements can take around 6 weeks so now is a good time to contact LSS Group either by visiting their website – or email kickstart@thelssgroup.co.uk
This will take the stress away from placement providers and allow them to focus on providing a high-quality work placement experience.
Phoenix Specialist Risk Solutions Much like the mythological bird, Phoenix Specialist Risk Solutions was born from the ashes of an industry which has grown tired and disassociated from the people it is designed to protect. Phoenix is built to be different, our main focus is you. We have built our business with care at the core of everything we do. We strive to offer a quality personalised service which is tailored to each individual’s needs — we listen to you, get to know you and aim to support you every step of the way. Your business is in most cases the biggest risk and the biggest asset you will ever have
from the initial days of worrying about business levels and cash flow, through to staff and HR issues, and then back to business levels and cash flow - a revolving cycle. Within your business you will also have your trusted partners, your accountants and bankers - but do you include your insurance broker? If not, why not? Commercial insurance should not just be about the lowest possible price; it should be with someone you can work with and trust, someone flexible to the changes your business faces and someone who can advise you of which covers you may like to consider, and not just the ones which you are legally required to have. Does your business description on your policy actually match your business? Are your sums insured reviewed and adequate? Do you have seasonal stock increases? Have you declared the accurate turnover and wage roll? We work with you to help you establish and maintain an insurance programme which meets your needs and provides the best value for money. See the advert on the inside back cover or visit
7 (of the Most Important) Ways to Control Labour Costs…
By David Hunter of the Bowden Group
Straight from the horse’s mouth … but the order of importance and significance will vary according to your particular business.
The most important ways to control Labour costs would be, in summary:

1. Match more closely the number of staff you put on duty with the Sales that you are expecting to do at that time. Use your business's trading history … previous years' business numbers … to predict what your sales are hopefully going to be in any one week, and then put onto the Rota the number of staff at any one time that you think are right for that predicted level of business. Break it down to the number of staff on duty (ie being paid) on an hour-by-hour basis … and try very hard not to have on duty any more than you need at any one time. This will certainly mean employing more people working fewer hours each … but that will pay dividends, as you have more to call on when you need extra help.

2. Think not just about Wages cost as a % of Sales … but also think about Wages cost in Money terms … how much £££ money is it costing you. And don't forget that Employer's N.I. and Employers' Pension Costs are all part of that overall cost. Lots of operators have the idea that it is JUST all about Wages % … ie what your total wages cost represents as a % of your Total Nett Sales (nett of VAT … ie after the VAT has been taken out). It is NOT just about that.

3. Train your staff on an ongoing basis … Despite belief to the contrary, staff do like being trained, as it gives them the ability to do a better job, which usually means higher earnings … AND it enables them to aspire to an ever better job next time they make a change … whether still with you, or with a new employer.

4. Think laterally about how much to pay people. Incentives and commissions always work well in motivating staff. The rate per hour, or the salary, that you pay a person depends on so many things. Yes, there is the Minimum Wage, and the Living Wage, as a guide, but it will always be worth paying a bit more if staff are not that easy to replace. It will almost always be better to pay someone what they are worth, rather than what you can get away with. This is especially true with the younger staff. If they are doing the same job, and as good a job, as a person who is over 18 … pay them as if they WERE over 18 … because … if YOU don't, someone else WILL pay them more … and you will lose them.

5. Look after your staff … it is better to retain the ones that you have spent time and money training than to need to replace them too often. Be a good employer … be nice to work for … and CARE about your staff. The best employer in Hospitality that I know actually puts his staff before even himself. When the Covid-19 virus hit hard, both he and his wife took the biggest pay cuts, to show solidarity with staff. The kind of loyalty that that behaviour earns just cannot be bought anywhere.

6. Induct your new staff fully … don't just throw them in at the deep end. How many times do we see it … a waiting person explaining to us that this is their first shift ever and that they don't know what they are doing yet. Try getting them to work WITH a more experienced team member who can ''show them the ropes'' … and who can answer the many inevitable questions that will arise in their first week or two.

7. Try to stick to a Wages Budget … which means doing up a full Budget, of course. So many people think they can run a business without a Budget. Yes, you CAN … but it IS foolhardy to try and do that.
Firstly … it is incredibly helpful to know what Sales you are expecting to take, and how much you might spend … and what the ''bottom line'' might look like. Train your staff on an ongoing basis too … make that a very real part of the culture of the organisation.
But secondly, and even more importantly … for a Bank or an Investor in that business, their confidence in you will soar if they can see that you are ''on the ball'' and have done your homework in advance.

If you want to talk through any aspects of this article or discuss how it affects you and your business, just call me: David Hunter of The Bowden Group, on 07831 407984. You can call me any time, but preferably 09.00 am – 09.00 pm on any day, weekends included. Or just send me a text message asking me to call you.
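Point 2 above — watching wages cost both as a percentage of net sales and in money terms — can be made concrete with a quick worked example. All figures below are invented purely for illustration; they are not taken from the article:

```python
# Hypothetical weekly figures for a small pub - purely illustrative.
gross_sales = 12_000.00                    # VAT-inclusive till takings for the week
vat_rate = 0.20                            # UK standard VAT rate
net_sales = gross_sales / (1 + vat_rate)   # sales with the VAT taken out

wages = 2_600.00                # gross pay for all staff
employers_ni = 250.00           # employer's National Insurance
employers_pension = 80.00       # employer's pension contributions
total_wage_cost = wages + employers_ni + employers_pension

# Track the % figure AND the money figure, as the article recommends.
wage_pct = total_wage_cost / net_sales * 100
print(f"Net sales: £{net_sales:,.2f}")              # £10,000.00
print(f"Total wage cost: £{total_wage_cost:,.2f}")  # £2,930.00
print(f"Wages as % of net sales: {wage_pct:.1f}%")  # 29.3%
```

Note how the employer's N.I. and pension contributions push the true wage cost well above the headline payroll figure, which is exactly the point the article makes.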
KICKSTART SCHEME HELPING TO SUPPORT HOSPITALITY

Your Hospitality business can benefit from a FREE 6-month work placement from a 16-24yr old. Funding for each role covers:
• 100% of age-appropriate national minimum wage for 25 hours a week
• All associated employer national insurance contributions
• Employer minimum automatic enrolment contributions

LSS Group will provide a 'wrap around' service, managing the whole process on your behalf.
Go to or call 0333 444 0630 for a ‘no obligation’ chat
Roxburgh Milkins Help Healthy Fast Food Restaurant Chain

Roxburgh Milkins are a business law firm with a personal approach. Recently they have aided Friska, a fresh food company.

Company bio: Founded in Bristol but expanding nationally, Friska offers seasonal, ethically sourced fresh food to eat in, take away or for office catering. The company received £3m of investment in 2017 from YFM Equity Partners to finance its planned roll-out.

Founders: With previous experience in food retail, economics and technology, Ed Brown and Griff Holland have been able to create an exceptional restaurant chain: great, healthy food backed up by impressive proprietary technology to run things efficiently and engage successfully with their customers.

How Roxburgh Milkins helped: Roxburgh Milkins have worked with Ed and Griff since shortly after they founded Friska in 2009. Since then Roxburgh Milkins have advised them through the various steps of their fundraising - initially funding from friends and family, then angel investment and most recently private equity.

What they say about Roxburgh Milkins: "Working with the team at Roxburgh Milkins was an absolute blast; they also added a great deal of value throughout the whole fundraising process. They are commercially astute, professionally accomplished and extremely personable. As a team they are straightforward to work with and pragmatic, which removes a lot of the tit-for-tat that is often associated with legal wrangling towards the end of the deal process." For further information visit
Anzac Bistro and Bed & Breakfast in Dartmouth Sold

Leading Licensed & Leisure Commercial Estate Agents Bettesworths are delighted to announce the sale of Anzac Bistro and Bed & Breakfast in Dartmouth. This superb lifestyle business, with its enviable location in the pretty Anzac Square, has been purchased by Mr & Mrs Sharpe. Mitch and Nicky have moved back to England after 15 years of life in the Dordogne, France. Alongside raising their two daughters, they set up a small business renting out properties to holidaymakers within the town of Sarlat. Having made a success of their business, they decided to return to their roots in the UK and embrace the new challenge of running this lovely Bistro and B&B. This enviable lifestyle business was sold off an asking price of £575,000 for the freehold interest. Genevieve Stringer, who handled the sale, commented, 'I'm sure Anzac will go from strength to strength, and all of us here at Bettesworths would like to take this opportunity to wish Mitch and Nicky every success with their latest venture in this fabulous waterfront town of Dartmouth, located in the beautiful South Hams'. For other properties available through Bettesworths, see the advert on this page.
• Substantial Freehold Property in Ashburton • Public House, Separate Cottage & Car Park with Potential for Redevelopment, Subject to Planning • Over 10,000 Sq Ft of Valuable Real Estate in Thriving Devon Town • Of Interest to Property Speculators, Developers, Investors or Operators • Viewing Highly Recommended
COMBE FLOREY, SOMERSET
ASHBURTON, DEVON
• Stunning Grade II Listed Thatched Country Inn • Very Successful Business with Far Reaching Catchment Area • Low Overheads & Two Trade Areas providing 36 Covers • Extensive Trading Areas Inside and Out • Viewing Essential
PRICE: OFFERS OVER £600,000 FREEHOLD REF: 3971
PRICE: OFFERS IN EXCESS OF £400,000+ VAT FREEHOLD REF: 3883
• Attractive and Centrally Located Period Coaching Inn & Hotel • Successful Business with Strong Wet, Food and Letting Accommodation Revenue • Well Presented Character Trading Areas Including Main Bar & Dining Room • 16 Beautifully Refurbished Letting Rooms • Owners Apartment & Customer Car Park with an Outside Smokers Area
• Character Grade II Listed 16th Century Coaching Inn • Situated in the Central Square of Historical North Devon Town of Great Torrington • Traditional Bar, Lounge Bar with Open Fire and Restaurant with 60 + Covers • 3 En-Suite Letting Rooms & 5 Bedroom Owners Accommodation • New Free of Tie Lease - Guide Rent of £35,000 Per Annum
NEW!
PRICE: £695,000
FREEHOLD
REF: 3955
KINGSWEAR, DEVON
• Newly Refurbished Coffee Shop in Fabulous Location • Rare Opportunity to Purchase a Long Leasehold Premises • Potential to Expand on Menu & Opening Hours • E Class Use Allowing for Alternative Business Uses • Viewing Highly Recommended
PRICE: £89,500
NEW!
PRICE: £330,000
TORRINGTON, DEVON
WIVELISCOMBE, SOMERSET
LEASEHOLD
REF: 3938
PRICE: NIL INGOING – NEW LEASE
NEW!
PRICE: £59,950
MORETONHAMPSTEAD, DEVON
• Fabulous Freehold Opportunity within Dartmoor National Park • Charming Café/Bistro with Shop Trading Daytimes Only • Superb 2 Bed Self-Contained Accommodation • Huge Potential to Expand on Current Trade • Suitable for a Range of Catering Styles
FREEHOLD
REF 3408
REF: 3803
TORQUAY, DEVON
• Lock Up Takeaway Premises in Sought After Harbourside • Currently Trading 6 Days a Week 9am-2pm Closed Sundays • Previously Trading Evenings with Valuable 4am Licence • Huge Potential to Increase Trade with Addition of Delivery Service • First Time in 16 Years Business has Come to the Market
LEASEHOLD
REF: 3249
EAST LOOE, CORNWALL
PRICE: £49,500
• Fabulous Quayside Bar/ Restaurant Overlooking the Estuary • Waterside Dining and Drinking for Circa 46 Internally and 30 Outside • Fully Refurbished Property with Stylish Nautical Theme • Strong Business with Booming Summer Trade • Rare Opportunity to Secure a Waterside Business in Looe
LEASEHOLD
REF: 3386
Weekly Figures Analysis & Reporting Service from David Hunter. David has now come up with a way of making his amazing Mentoring & Consultancy service more accessible to the wider market, and for a lower monthly fee. Instead of being charged for monthly consultancy, you can now access David’s knowledge and expertise via his already-established and very well-used weekly figures reporting system. He will send you weekly reports on how your business is doing and will throw in FOR NO EXTRA. If you have a Pub, Restaurant or Hotel business which is facing financial or operational challenges … why not let David have a look, and help you maximise your full potential. There is no cost to David having a look at your figures, and letting you know what COULD be achieved. Call David Hunter confidentially on 07831 407984 or on 01628 487613.
For Sale: Impressive & Completely Renovated Coastal Holiday Accommodation/B&B, Beer, East Devon

Property specialists Stonesmith are thrilled to be marketing the freehold sale of Belmont House - an attractive period, end-of-terrace residence, with origins that reputedly date back around 200 years. Constructed from the famous Beer Stone under a pitched slate roof, and having been the subject of considerable investment and improvement by our client during 17 years of ownership, Belmont House is presented to an extremely high standard and is a ready-to-trade "turnkey" opportunity. This extremely flexible and versatile property briefly comprises: 5 en-suite double bedrooms, sitting room, dining room, kitchen, utility room and cloakroom. Externally, to the front of the property is a small terrace providing a seating area, and to the rear is an enclosed gravelled garden with a timber beach hut. The sale of Belmont House represents an opportunity to purchase an extremely well
maintained and versatile property within a highly sought-after and desirable East Devon coastal village, coupled with a lucrative business opportunity. It also offers all the flexibility of a lifestyle business and investment opportunity, either as a bed and breakfast or, as it is currently trading, as a holiday let. Belmont House is situated in the heart of Beer, just off Fore Street, the main thoroughfare of the village, and is opposite the main village centre car park. Belmont House is reluctantly offered for sale due to retirement. The freehold property is on the market for offers over £450,000. Full property details can be found on our website, and viewings can be arranged by calling 01392 201262.
Description of problem:
When using registry type `local_openshift`, which is configured by default downstream, the administrator will want to whitelist the APBs in the registry that they want to make accessible. By default the adapter looks in the `openshift` namespace but does not have a whitelist. We would recommend setting the whitelist value to `['.*-apb$']` or `['*']` so that it will look through all available images in the `openshift` namespace.
Version-Release number of selected component (if applicable):
3.7.0
Additional info:
Example config:
registry:
  - type: local_openshift
    name: lo
    namespaces:
      - openshift
    white_list:
      - ".*-apb$"
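As a rough illustration of how a regex white_list like the one above filters image names — this is a sketch of the matching behaviour only, not the broker's actual implementation:

```python
import re

def is_whitelisted(image_name, white_list):
    """Return True if any whitelist pattern matches the image name.

    Assumes patterns are treated as regular expressions matched from
    the start of the name, as with the ".*-apb$" pattern above.
    """
    return any(re.match(pattern, image_name) for pattern in white_list)

white_list = [".*-apb$"]
images = ["mediawiki-apb", "postgresql-apb", "nginx", "jenkins"]
allowed = [img for img in images if is_whitelisted(img, white_list)]
print(allowed)  # ['mediawiki-apb', 'postgresql-apb']
```

With this whitelist, only images whose names end in "-apb" would be surfaced by the broker; plain images like "nginx" would be filtered out.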
Another thing to properly include in the documentation is why we suggest using the 'openshift' namespace. By default the 'openshift' namespace exposes all imagestreams to any authenticated user on the cluster. This is valuable to the Ansible Service Broker because we create a transient namespace when provisioning APBs and that dynamic service account needs to be able to pull images from the internal registry.
We want to encourage users to enable the openshift namespace by default and point them towards resources that will allow users to pull images from different projects here: | https://partner-bugzilla.redhat.com/show_bug.cgi?id=1511656 | CC-MAIN-2019-39 | refinedweb | 187 | 52.8 |
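For completeness: if an operator does want to source images from a project that is not exposed cluster-wide the way `openshift` is, pull access can be granted explicitly. A hedged sketch using the standard `oc policy` command ("my-apbs" is a hypothetical project name):

```shell
# Grant every service account in the cluster permission to pull
# images from the hypothetical "my-apbs" project's imagestreams,
# using the standard system:image-puller role.
oc policy add-role-to-group system:image-puller system:serviceaccounts \
    --namespace=my-apbs
```

This mirrors what the `openshift` namespace provides by default, which is why the documentation should steer users toward it in the first place.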
I am receiving this exception when opening a solution containing a Xamarin.Forms PCL project and its respective Android/iOS/Windows Phone projects. The assembly it cannot load (and is referencing) is the PCL project itself. In addition, it shows "Version=, Culture=neutral, PublicKeyToken=" after the assembly name in the exception message. Furthermore, I get lots of errors along the lines of "InitializeComponent doesn't exist in the current context" and "The type or namespace name 'App' does not exist in the namespace..."
Cleaning the solution and restarting Visual Studio does not help. All of this was working fine yesterday until I updated to the latest "stable" version of Xamarin.Forms.
I have checked this issue but I am not able to reproduce it.
Could you please provide sample project and build version? So that we can reproduce this issue at our end.
You can get build info from here:
Visual Studio => About Microsoft Visual Studio => Copy Info
Environment Info:
Xamarin.Forms : 1.4.4.6392
Windows: 8.1
XVS: 3.11.837
VS:2013
Microsoft Visual Studio Professional 2015
Version 14.0.23107.0 D14REL
Microsoft .NET Framework
Version 4.6.00081
Installed Version: Professional
Visual Basic 2015 00322-40000-00000-AA660
Microsoft Visual Basic 2015
Visual C# 2015 00322-40000-00000-AA660
Microsoft Visual C# 2015
Visual C++ 2015 00322-40000-00000-AA660
Microsoft Visual C++ 2015
Visual F# 2015 RC 00322-40000-00000-AA660
Microsoft Visual F# 2015 RC
Windows Phone SDK 8.0 - ENU 00322-40000-00000-AA660.
JetBrains ReSharper Ultimate 2015.2 Build 103.0.20150818.200216
JetBrains ReSharper Ultimate package for Microsoft Visual Studio. For more information about ReSharper Ultimate, visit. Copyright © 2015 JetBrains, Inc.
Microsoft Azure Mobile Services Tools 1.4
Microsoft Azure Mobile Services Tools
Microsoft MI-Based Debugger 1.0
Provides support for connecting Visual Studio to MI compatible debuggers
NuGet Package Manager 3.
SQL Server Data Tools 14.0.50717.0
Microsoft SQL Server Data Tools
Visual C++ for Cross Platform Mobile Development 1.0
Visual C++ for Cross Platform Mobile Development
Visual C++ for Cross Platform Mobile Development 1.0
Visual C++ for Cross Platform Mobile Development
Workflow Manager Tools 1.0 1.0
This package contains the necessary Visual Studio integration components for Workflow Manager.
Xamarin 3.11.837.0 (f10676f)
I get the same behavior in Visual Studio 2015 as well.
Visual Studio 2013 as well.
Do you need any additional info from me? My Xamarin installation is completely broken. I get these errors with any new project I start. Reinstalling Xamarin did not help.
Can I get a response? I am losing days of productivity here because I chose to go with the Xamarin platform. Right now, I am a very unsatisfied customer.
...
Please excuse our tardy reply. I tried to reproduce this issue and I am still not able to. If you have a chance, it might also be helpful to attach a "sample application" and "IDE logs", so that we can reproduce this issue at our end.
You can get IDE logs from here:
Visual studio => HELP => Xamarin => Zip Xamarin Logs (please attach zipped log with this issue).
Created attachment 12857 [details]
IDE Logs
Created attachment 12858 [details]
Sample project
IDE logs and sample project are attached.
I can't reproduce this, we may have to see if a developer can figure out what's going on from the logs.
Screencast:
Environment Info:
Xamarin.Forms: 1.4.4.6392
Microsoft Visual Studio Professional 2013
Version 12.0.31101.00 Update 4
Microsoft .NET Framework
Version 4.6.00057
Installed Version: Professional
Xamarin 3.11.857.0 (95267e5)
Xamarin.Android 5.1.7.0 (72e316e4cb50a1f4238c5339a76d7c4754b85c67)
Xamarin.Forms Intellisense 1.0
Xamarin.iOS 8.10.4.0 (6db87c53c073f4af2f5247fb738a27ea08c094fd)
I ended up reinstalling literally everything on my machine (including the OS) and the current installation of Xamarin seems to be behaving correctly. I simply couldn't wait any longer for a resolution to this. You can close this ticket. | https://xamarin.github.io/bugzilla-archives/33/33667/bug.html | CC-MAIN-2019-43 | refinedweb | 662 | 52.76 |
What will we cover in this tutorial?
In this tutorial we will cover the following.
- How to use Pandas Datareader to read historical stock prices from Yahoo! Finance.
- Learn how to read weekly and monthly data.
- Also how to read multiple tickers at once.
Step 1: What is Pandas Datareader?
Pandas-Datareader is an up-to-date remote data access library for pandas.
This leads to the next question. What is pandas?
Pandas is a data analysis and manipulation tool containing a great data structure for the purpose: the DataFrame.
In short, pandas can be thought of as a data structure in Python that is similar to working with data in a spreadsheet.
Pandas-datareader reads data from various sources and puts the data into pandas data structures.
Pandas-datareader has a call to return historic stock price data from Yahoo! Finance.
To use Pandas-datareader you need to import the library.
Step 2: Example reading data from Yahoo! Finance with Pandas-Datareader
Let’s break the following example down.
import pandas_datareader as pdr
import datetime as dt

ticker = "AAPL"
start = dt.datetime(2019, 1, 1)
end = dt.datetime(2020, 12, 31)

data = pdr.get_data_yahoo(ticker, start, end)

print(data)
Where we first import two libraries.
- pandas_datareader The Pandas Datareader. If you do not have it installed already in your Jupyter Notebook you can do that by entering this in a cell !pip install pandas_datareader and execute it.
- datetime This is a default library and represents a date and time. We only use it for the date aspects.
Then there are the following lines.
- ticker = “AAPL” The ticker we want data from. You can use any ticker you want. In this course we have used the ticker for Apple (AAPL).
- start = dt.datetime(2019, 1, 1) The first day for which we want historic stock price data.
- end = dt.datetime(2020, 12, 31) The end day.
- data = pdr.get_data_yahoo(ticker, start, end) This is the magic that uses Pandas Datareader (pdr) to get data from the Yahoo! Finance API. It returns a DataFrame as we know it from previous lessons.
The output of the code is as follows.
                  High         Low  ...       Volume   Adj Close
Date                                ...
2019-01-02   39.712502   38.557499  ...  148158800.0   38.505024
2019-01-03   36.430000   35.500000  ...  365248800.0   34.669640
2019-01-04   37.137501   35.950001  ...  234428400.0   36.149662
2019-01-07   37.207500   36.474998  ...  219111200.0   36.069202
2019-01-08   37.955002   37.130001  ...  164101200.0   36.756794
...                ...         ...  ...          ...         ...
2020-12-24  133.460007  131.100006  ...   54930100.0  131.773087
2020-12-28  137.339996  133.509995  ...  124486200.0  136.486053
2020-12-29  138.789993  134.339996  ...  121047300.0  134.668762
2020-12-30  135.990005  133.399994  ...   96452100.0  133.520477
2020-12-31  134.740005  131.720001  ...   99116600.0  132.492020

[505 rows x 6 columns]
Step 3: A few parameters to set
You can get multiple tickers at once by parsing a list of them.
import pandas_datareader as pdr
import datetime as dt

ticker = ["AAPL", "IBM", "TSLA"]
start = dt.datetime(2019, 1, 1)
end = dt.datetime(2020, 12, 31)

data = pdr.get_data_yahoo(ticker, start, end)

print(data)
You can get the weekly or monthly data by using the argument as follows.
import pandas_datareader as pdr
import datetime as dt

ticker = ["AAPL", "IBM", "TSLA"]
start = dt.datetime(2019, 1, 1)
end = dt.datetime(2020, 12, 31)

data = pdr.get_data_yahoo(ticker, start, end, interval='w')

print(data)
Set interval='m' to get monthly data instead of the weekly data you get with 'w'.
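Once the data is in a DataFrame, ordinary pandas operations apply. For example, daily returns can be computed from the Adj Close column with pct_change() — sketched below with a tiny hand-made frame (made-up prices) so it runs without network access:

```python
import pandas as pd

# Stand-in for the frame returned by pdr.get_data_yahoo (made-up prices).
data = pd.DataFrame(
    {"Adj Close": [100.0, 102.0, 101.0, 103.02]},
    index=pd.date_range("2020-01-01", periods=4, name="Date"),
)

# Day-over-day percentage change; the first row has no predecessor, so NaN.
returns = data["Adj Close"].pct_change()
print(returns)
```

With real data from get_data_yahoo the call is identical: data["Adj Close"].pct_change().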
Next steps?
Want to learn more?
This is part of the FREE online course on my page. No signup required and 2 hours of free video content with code and Jupyter Notebooks available on GitHub.
Follow the link and read more. | https://www.learnpythonwithrune.org/read-historical-prices-from-yahoo-finance-with-python/ | CC-MAIN-2021-25 | refinedweb | 623 | 78.85 |
Issuing HTTP GET Requests
The key classes here are HttpWebRequest and HttpWebResponse from System.Net.
The following method issues a request and returns the entire response as one long string:
static string HttpGet(string url)
{
    HttpWebRequest req = WebRequest.Create(url) as HttpWebRequest;
    string result = null;
    using (HttpWebResponse resp = req.GetResponse() as HttpWebResponse)
    {
        StreamReader reader = new StreamReader(resp.GetResponseStream());
        result = reader.ReadToEnd();
    }
    return result;
}
Remember that if the request URL includes parameters, they must be properly encoded (e.g., a space is %20, etc.). The System.Web namespace has a class called HttpUtility, with a static method called UrlEncode for just such encoding.
Issuing HTTP POST Requests
URL encoding is also required for POST requests -- in addition to form encoding, as shown in the following method:
static string HttpPost(string url, string[] paramName, string[] paramVal)
{
    HttpWebRequest req = WebRequest.Create(new Uri(url)) as HttpWebRequest;
    req.Method = "POST";
    req.ContentType = "application/x-www-form-urlencoded";

    // Build a string with all the params, properly encoded.
    // We assume that the arrays paramName and paramVal are
    // of equal length:
    StringBuilder paramz = new StringBuilder();
    for (int i = 0; i < paramName.Length; i++)
    {
        paramz.Append(paramName[i]);
        paramz.Append("=");
        paramz.Append(HttpUtility.UrlEncode(paramVal[i]));
        paramz.Append("&");
    }

    // Encode the parameters as form data:
    byte[] formData = UTF8Encoding.UTF8.GetBytes(paramz.ToString());
    req.ContentLength = formData.Length;

    // Send the request:
    using (Stream post = req.GetRequestStream())
    {
        post.Write(formData, 0, formData.Length);
    }

    // Pick up the response:
    string result = null;
    using (HttpWebResponse resp = req.GetResponse() as HttpWebResponse)
    {
        StreamReader reader = new StreamReader(resp.GetResponseStream());
        result = reader.ReadToEnd();
    }
    return result;
}
For more examples, see this page on the Yahoo! Developers Network.
21 comments:
Your code for GET request has a slight typo - instead of "response" it should say "result".
Regards
Simon.
Thanks Simon! I've fixed it now.
"params" is a keyword in C# now.
Maybe you could modify the code (POST example) to avoid using it.
Thanks Amr! I've changed it to "paramz". Hopefully, that won't become a keyword anytime soon...
Awesome! Thanks Dr. M!
Great work Elkstein
Very well written...thanks!
Nice Post Doc, will you provide some samples in Xquery
It's a better idea to use System.Uri.EscapeUriString to encode URL's instead of UrlEncode. UrlEncode doesn't encode spaces as their hex representation %20. An old post, but still relevant:. EscapeUriString MSDN doc:
Just use @param as the variable name
Make sure to capitalize "Append", "ToString", and "ContentLength". It didn't like them in lower case.
Thanks, Kyle J. I've fixed the code now.
Hi,
Thanks a lot for the tutorial. i was breaking my head before. this helped me a lot specially with C#. can i also request you to post some example methods for ALM(QC)
thanks
Bhargav
Hi,
Thanks a lot for the tutorial. i was breaking my head before. this helped me a lot specially with C#. can i also request you to post some example methods for ALM(QC)
thanks
Bhargav
Thank you for such a helpful tutorial, recently I am gathering knowledge about WS testing, your documentations helped me a lot!
Could you explain the byte lenght you're passing into the request and why this is needed?
Hello LaRae,
It's for the HTTP Content-Length request header (see here for details about it).
Thanks a lot for this amazing article!
Hello!
I don't understand the POST request.
Should I send XML to the service? (The service is written in Java.)
Or should I fill string[] paramName and string[] paramVal like a Dictionary?
Very Helpful , Thank you!
paramz.remove((paramz.length - 1), 1) should be added after the for loop in the example post request above to take off the "&" that is currently being left on the end of the StringBuilder object. | http://rest.elkstein.org/2008/02/using-rest-in-c-sharp.html?showComment=1412085870397 | CC-MAIN-2022-33 | refinedweb | 621 | 60.41 |
Code Coverage is an oft-used metric which helps measure the depth and completeness of your testing. You run tests, and a code coverage tool monitors the execution and tracks which lines of code have and haven’t been executed. You then get a report which breaks down the test coverage offered by your tests.
Here’s an example coverage report generated from NCover:
(This is an old and free version of NCover, when will Resharper include Code Coverage?!)
On the left-hand side, we get an overview per assembly, namespace, type and method. You can see that my coverage isn’t great. On the right, you see the actual code. Notice that the instantiation of my InternalCategories isn’t covered by any tests (the light-red background).
Code coverage is a useful tool, but not quite as useful as it might first seem. The fundamental problem with code coverage, and something you must always remember when using it, is that it does not measure meaningful execution. Take a look at the following method and test:
public bool HasChanged(User original)
{
    return string.Compare(original.Name, Name, true) != 0
        || string.Compare(original.Password, Password, true) != 0;
}
[Test]
public void Returns_False_If_Nothing_Has_Changed()
{
    var updated = new User { Name = "Name", Password = "Password" };
    var original = new User { Name = "Name", Password = "Password" };
    Assert.AreEqual(false, original.HasChanged(updated));
}
Running the above with code coverage will report that our code is 100% covered even though we’ve only tested part of the actual functionality. This is both misleading and dangerous. We might be tempted to just add another test:
[Test]
public void Returns_True_If_Name_Has_Changed()
{
    var updated = new User { Name = "Gamblor", Password = "Password" };
    var original = new User { Name = "Name", Password = "Password" };
    Assert.AreEqual(true, original.HasChanged(updated));
}
But we still have to add a third test to get full coverage:
[Test]
public void Returns_True_If_Password_Has_Changed()
{
    var updated = new User { Name = "Name", Password = "Password" };
    var original = new User { Name = "Name", Password = "drowssaP" };
    Assert.AreEqual(true, original.HasChanged(updated));
}
(Some might argue that we need even more, since none of these tests handles the case-insensitive comparison in our original method.)
The point is simple: code coverage cannot be used exclusively as a reliable means to measure the quality or even the coverage of your tests. On the flip side, though, you can use coverage to identify code that isn't tested. In other words, just because a method or line is covered doesn't mean it's tested, but if a line or method isn't covered, you can be absolutely sure that it isn't tested.
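The short-circuit trap above is easy to demonstrate outside C# too. Here is a minimal Python sketch (an illustration of the same idea, not code from the post):

```python
def has_changed(original, updated):
    # Both comparisons sit in one statement; statement coverage marks the
    # whole line as covered even when only the first clause is evaluated.
    return (original["name"] != updated["name"]
            or original["password"] != updated["password"])

# One test that only exercises the name clause:
a = {"name": "Name", "password": "Password"}
b = {"name": "Gamblor", "password": "Password"}
print(has_changed(a, b))  # True -- short-circuits before the password check
```

A statement-coverage tool would report this function fully covered, yet the password clause has never run; branch or condition coverage metrics are what expose the gap.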
Rather than the term ‘code coverage’, you probably want to be saying ‘statement coverage’. This then makes it obvious that there are many other sorts of coverage – if NCover measured and showed you branch and modified condition/decision coverage metrics, you’d get an indication that the different paths through your code weren’t covered, even though all statements were covered.
It isn’t rocket science – the safety-critical world has used these metrics for decades….
The traditional understanding of code coverage has been exactly what you just wrote about — ONLY a NUMBER. In some esoteric sense, you were supposed to decide if that was good, bad, or acceptable (or who gives a sh*t the boss is asking for it)…
For the first time in code coverage, NCOVER 3 will attempt to provide “MEANING” behind the numbers…
Symbol points, method visits, branch points, cyclomatic complexity — all trended and graphed to show you a deeper meaning.
Get it ALL for FREE right now… email us at: conversation [at] ncover [dot] com
I wrote about code coverage a while ago, and I had reached much the same conclusion: coverage isn’t everything. The way I look at it: coverage is part of the process, not the end result.
When I look at my coverage numbers, I use the data to learn what is NOT currently under test, as opposed to assuming that full coverage means the code is good.
As I see it, coverage is just one tool, along with others like FxCop and NDepend. And of course, the occasional manual code review.
@brian
the note re: meaningful execution was from the blog entry directly – in general I was agreeing with you in general, just bringing in a note from the original entry to illustrate my point.
@Jerry
[quote]
but, to state that code coverage in general does not test meaningful execution, when one single tool does not, is just plain silly, painting a broad stroke with a generalization.
[/quote]
That’s not what I was trying to say; I was merely pointing out that those with less experience (such as myself at one point), might be misled to believe that ‘green’ = done.
Also, when did I generalize that there was nothing meaningful in code coverage in general? I never meant to imply that there wasn’t anything meaningful in code coverage – in fact it’s the first thing I look at after writing tests to ensure what I ‘think’ covered all the test cases actually at least hit all the code I thought it would.
Jerry:
Your critique is spot on. I’ve used a few and I’m blogging based on my experience with those. Thanks for letting me (and readers of this blog) know about tools which make my assumption less accurate. Definitely gonna look into it.
@Brian
just because it’s green doesn’t mean 100% of TEST CASES are covered
there will never truly be 100% test coverage – nor is it feasible.
but, to state that code coverage in general does not test meaningful execution, when one single tool does not, is just plain silly, painting a broad stroke with a generalization.
The code coverage tool that comes with Visual Studio works pretty well at getting the extra ‘truth table’ rows – for instance it would of picked up what your 3rd party product says was 100% covered.
But yes…for people who use code coverage tools and don’t already know this – they’re not silver bullets, they’re not guarantee’s as everyone else has mentioned (or hinted at) it’s excellent for telling you where you have no tests (or little tests – Visual Studio highlights those as yellow when you only have partial coverage) – but just because it’s green doesn’t mean 100% of TEST CASES are covered.
Hi Karl,
Nice post! I agree that code coverage is not the final measure for your application. Also, if your code coverage is 96%, that should not be taken as a positive measure on its own; we have to think about what is in the 4% of the code we are not covering.
You can also hack code coverage:
Nice example, I usually see low overall coverage as a sign of a problem but high coverage does not give me confidence.
You appear to be making a generalization based on a specific instance. While I agree that code coverage should never be relied on as the only tool in the toolbox, some code coverage tools do a good job of checking every truth value and not marking something as covered until each and every truth value has been determined.
See for example:
Here's a real story. We worked on a project.
Nice article explaining the problems with code coverage.
It would be nice though if it at least mentioned other coverage criteria, like decision and condition coverage.
Nice explanation. I’ve always considered code coverage as 0 or X measure (as opposed to 0 or 1) – either you know for sure you have NOT covered a block, or you know you’ve covered something, but the raw numbers alone can’t give you enough confidence to say it’s tested. | http://codebetter.com/karlseguin/2008/12/09/code-coverage-use-it-wisely/ | crawl-003 | refinedweb | 1,286 | 57.81 |
1 year, 1 month ago.
EFM32 Giant Gecko mbed printf
Hello
I cannot get printf to work under MbedOS.
The wiki page states that you don't have to do any configuration if using printf and the virtual COM port.
My code looks like this:
#include "mbed.h"

int main() {
    while (1) {
        printf("Hei dette er en fint test \n");
        wait_ms(100);
    }
}
It does not print anything to the terminal.
My serial settings in the terminal reader program are:
115200 8 bits 1 stop bit no parity
What could be wrong?
1 Answer
1 year, 1 month ago.
Hello Thomas,
When using the global printf in mbed, the bit rate for the serial terminal shall be set to 9600.
Another option is to keep 115200 bps for the serial terminal and create a new serial object for it. Change its bit rate to 115200 and, instead of the global printf, call its printf method:
#include "mbed.h" Serial pc(USBTX, USBRX); // create a new serial object for the virtual serial port int main() { pc.baud(115200); // change the bit rate (from 9600) to 115200 while (1) { pc.printf("Hei dette er en fint test \n"); // call the printf method of pc object wait_ms(100); } }
ADVICE: To have more readable code published here (on mbed pages), try to mark it up as below (with <<code>> and <</code>> tags, each on a separate line at the beginning and end of your code):
<<code>> text of your code <</code>>
These topics describe version 3 of the Compose file format. This is the newest version.
The topics on this reference page are organized alphabetically by top-level key to reflect the structure of the Compose file itself. Top-level keys that define a section in the configuration file such as build, deploy, depends_on, networks, and so on, are listed with the options that support them as sub-topics. This maps to the <key>: <option>: <value> indent structure of the Compose file.
A good place to start is the Getting Started tutorial which uses version 3 Compose stack files to implement multi-container apps, service definitions, and swarm mode. Here are some Compose files used in the tutorial.
Your first docker-compose.yml File
Adding a new service and redeploying
Another good reference is the Compose file for the voting app sample used in the Docker for Beginners lab topic on Deploying an app to a Swarm. This is also shown on the accordion at the top of this section.
The Compose file is a YAML file defining services, networks and volumes. The default path for a Compose file is ./docker-compose.yml.

Tip: You can use either a .yml or .yaml extension for this file. They both work.
A service definition contains configuration which is applied to each container started for that service, much like passing command-line parameters to docker run.
Configuration options that are applied at build time.
build can be specified either as a string containing a path to the build context:
version: '2'
services:
  webapp:
    build: ./dir
Or, as an object with the path specified under context and optionally Dockerfile and args:
version: '2'
services:
  webapp:
    build:
      context: ./dir
      dockerfile: Dockerfile-alternate
      args:
        buildno: 1
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file. The docker stack command accepts only pre-built images.
Alternate Dockerfile.
Compose will use an alternate file to build with. A build path must also be specified.
build:
  context: .
  dockerfile: Dockerfile-alternate
Note: This option is new in v3.2
A list of images that the engine will use for cache resolution.
build:
  context: .
  cache_from:
    - alpine:latest
    - corp/web_app:3.14
Note: This option is new in v3.3
Add or drop container capabilities. See man 7 capabilities for a full list.
cap_add:
  - ALL

cap_drop:
  - NET_ADMIN
  - SYS_ADMIN
Note: These options are ignored when deploying a stack in swarm mode with a (version 3) Compose file.
Override the default command.
command: bundle exec thin -p 3000
The command can also be a list, in a manner similar to dockerfile:
command: ["bundle", "exec", "thin", "-p", "3000"]
Grant access to configs on a per-service basis using the per-service configs configuration. Two different syntax variants are supported.
Note: The config must already exist or be defined in the top-level configs configuration of this stack file, or stack deployment will fail.
The short syntax variant only specifies the config name. This grants the container access to the config and mounts it at /<config_name> within the container. The source name and destination mountpoint are both set to the config name.
The following example uses the short syntax to grant the redis service access to the my_config and my_other_config configs. The value of my_config is set to the contents of the file ./my_config.txt, and my_other_config is defined as an external resource, which means that it has already been defined in Docker, either by running the docker config create command or by another stack deployment. If the external config does not exist, the stack deployment fails with a config not found error.
Note: config definitions are only supported in version 3.3 and higher of the compose file format.
version: "3.3" services: redis: image: redis:latest deploy: replicas: 1 configs: - my_config - my_other_config configs: my_config: file: ./my_config.txt my_other_config: external: true
The long syntax provides more granularity in how the config is created within the service’s task containers.
source: The name of the config as it exists in Docker.
target: The path and name of the file that will be mounted in the service's task containers. Defaults to /<source> if not specified.
uid and gid: The numeric UID or GID which will own the mounted config file within the service's task containers. Both default to 0 on Linux if not specified. Not supported on Windows.
mode: The permissions for the file that will be mounted within the service's task containers, in octal notation. For instance, 0444 represents world-readable. The default is 0444.

The following example sets the name of my_config to redis_config within the container, sets the mode to 0440 (group-readable) and sets the user and group to 103. The redis service does not have access to the my_other_config config.
version: "3.3" services: redis: image: redis:latest deploy: replicas: 1 configs: - source: my_config target: /redis_config uid: '103' gid: '103' mode: 0440 configs: my_config: file: ./my_config.txt my_other_config: external: true
You can grant a service access to multiple configs and you can mix long and short syntax. Defining a config does not imply granting a service access to it.
Specify an optional parent cgroup for the container.
cgroup_parent: m-executor-abcd
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
Specify a custom container name, rather than a generated default name.
container_name: my-web-container
Because Docker container names must be unique, you cannot scale a service beyond 1 container if you have specified a custom name. Attempting to do so results in an error.
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
Note: this option was added in v3.3
Configure the credential spec for a managed service account. This option is only used for services using Windows containers. The credential_spec must be in the format file://<filename> or registry://<value-name>.
When using file:, the referenced file must be present in the CredentialSpecs subdirectory in the docker data directory, which defaults to C:\ProgramData\Docker\ on Windows. The following example loads the credential spec from a file named C:\ProgramData\Docker\CredentialSpecs\my-credential-spec.json:
credential_spec:
  file: my-credential-spec.json
When using registry:, the credential spec is read from the Windows registry on the daemon's host. A registry value with the given name must be located in:

HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs

The following example loads the credential spec from a value named my-credential-spec in the registry:
credential_spec:
  registry: my-credential-spec
Specify configuration related to the deployment and running of services. This only takes effect when deploying to a swarm with docker stack deploy, and is ignored by docker-compose up and docker-compose run.
version: '3'
services:
  redis:
    image: redis:alpine
    deploy:
      replicas: 6
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
Several sub-options are available:
Specify a service discovery method for external clients connecting to a swarm.
Version 3.3 only.
endpoint_mode: vip - Docker assigns the service a virtual IP (VIP) that acts as the front end for clients to reach the service on a network. Docker routes requests between the client and available worker nodes for the service. This is the default.

endpoint_mode: dnsrr - DNS round-robin (DNSRR) service discovery does not use a single virtual IP. Docker sets up DNS entries for the service such that a DNS query for the service name returns a list of IP addresses, and the client connects directly to one of these.
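As an illustration of where the option sits (service and network names below are placeholders), endpoint_mode is nested under deploy:

```yaml
version: "3.3"
services:
  wordpress:
    image: wordpress
    ports:
      - "8080:80"
    networks:
      - overlay
    deploy:
      mode: replicated
      replicas: 2
      endpoint_mode: vip

networks:
  overlay:
```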
Specify labels for the service. These labels will only be set on the service, and not on any containers for the service. To set labels on containers instead, use the labels key outside of deploy.
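By way of illustration, service-level labels sit under deploy like this (the label key and value are placeholders):

```yaml
version: "3"
services:
  web:
    image: web
    deploy:
      labels:
        com.example.description: "This label will appear on the web service"
```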
Either global (exactly one container per swarm node) or replicated (a specified number of containers). The default is replicated. (To learn more, see Replicated and global services in the swarm topics.)
version: '3'
services:
  worker:
    image: dockersamples/examplevotingapp_worker
    deploy:
      mode: global
Specify placement constraints. For a full description of the syntax and available types of constraints, see the docker service create documentation.
version: '3'
services:
  db:
    image: postgres
    deploy:
      placement:
        constraints:
          - node.role == manager
          - engine.labels.operatingsystem == ubuntu 14.04
If the service is replicated (which is the default), specify the number of containers that should be running at any given time.
version: '3'
services:
  worker:
    image: dockersamples/examplevotingapp_worker
    networks:
      - frontend
      - backend
    deploy:
      mode: replicated
      replicas: 6
Configures resource constraints. This replaces the older resource constraint options in Compose files prior to version 3 (cpu_shares, cpu_quota, cpuset, mem_limit, memswap_limit, mem_swappiness).
Each of these is a single value, analogous to its docker service create counterpart.
version: '3'
services:
  redis:
    image: redis:alpine
    deploy:
      resources:
        limits:
          cpus: '0.001'
          memory: 50M
        reservations:
          cpus: '0.0001'
          memory: 20M

Configures if and how to restart containers when they exit. Replaces restart.

condition: One of none, on-failure or any (default: any).
delay: How long to wait between restart attempts, specified as a duration (default: 0).
max_attempts: How many times to attempt to restart a container before giving up (default: never give up).
window: How long to wait before deciding if a restart has succeeded, specified as a duration (default: decide immediately).
version: "3" services: redis: image: redis:alpine deploy: restart_policy: condition: on-failure delay: 5s max_attempts: 3 window: 120s.
Configures how the service should be updated. Useful for configuring rolling updates.

parallelism: The number of containers to update at a time.
delay: The time to wait between updating a group of containers.
failure_action: What to do if an update fails. One of continue or pause (default: pause).

version: '3'
services:
  vote:
    image: dockersamples/examplevotingapp_vote:before
    depends_on:
      - redis
    deploy:
      replicas: 2
      update_config:
        parallelism: 2
        delay: 10s
Not supported for docker stack deploy

The following sub-options (supported for docker-compose up and docker-compose run) are not supported for docker stack deploy or the deploy key.
Tip: See also, the section on how to configure volumes for services, swarms, and docker-stack.yml files. Volumes are supported but in order to work with swarms and services, they must be configured properly, as named volumes or associated with services that are constrained to nodes with access to the requisite volumes.
List of device mappings. Uses the same format as the --device docker client create option.
devices:
  - "/dev/ttyUSB0:/dev/ttyUSB0"
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
Express dependency between services, which has two effects:
- docker-compose up will start services in dependency order. In the following example, db and redis will be started before web.
- docker-compose up SERVICE will automatically include SERVICE's dependencies. In the following example, docker-compose up web will also create and start db and redis.
Simple example:
version: '3'
services:
  web:
    build: .
    depends_on:
      - db
      - redis
  redis:
    image: redis
  db:
    image: postgres
There are several things to be aware of when using depends_on:

- depends_on will not wait for db and redis to be "ready" before starting web - only until they have been started. If you need to wait for a service to be ready, see Controlling startup order for more on this problem and strategies for solving it.
- Version 3 no longer supports the condition form of depends_on.
- The depends_on option is ignored when deploying a stack in swarm mode with a version 3 Compose file.
Custom DNS servers. Can be a single value or a list.
dns: 8.8.8.8

dns:
  - 8.8.8.8
  - 9.9.9.9
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
Custom DNS search domains. Can be a single value or a list.
dns_search: example.com

dns_search:
  - dc1.example.com
  - dc2.example.com
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
Version 2 file format and up.
Mount a temporary file system inside the container. Can be a single value or a list.
tmpfs: /run

tmpfs:
  - /run
  - /tmp
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.

Expose ports without publishing them to the host machine - they’ll only be accessible to linked services. Only the internal port can be specified.
expose:
  - "3000"
  - "8000"
Link to containers started outside this
docker-compose.yml or even outside of Compose, especially for containers that provide shared or common services.
external_links follow semantics similar to the legacy option
links when specifying both the container name and the link alias (
CONTAINER:ALIAS).
external_links:
  - redis_1
  - project_db_1:mysql
  - project_db_1:postgresql
Notes:
If you’re using the version 2 or above file format, the externally-created containers must be connected to at least one of the same networks as the service which is linking to them. Starting with Version 2, links are a legacy option. We recommend using networks instead.
This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
Healthcheck interval and timeout are specified as durations.
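The interval and timeout fields mentioned above belong to a service healthcheck block. A hedged sketch of one (the test command and durations are illustrative, not taken from this page):

```yaml
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost"]
  interval: 1m30s
  timeout: 10s
  retries: 3
```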
Specify a container’s isolation technology. On Linux, the only supported value is
default. On Windows, acceptable values are
default,
process and
hyperv. Refer to the Docker Engine docs for details.
Link to containers in another service. Either specify both the service name and a link alias (
SERVICE:ALIAS), or just the service name.
web:
  links:
    - db
    - db:database
    - redis
Containers for the linked service will be reachable at a hostname identical to the alias, or the service name if no alias was specified.
Links are not required to enable services to communicate - by default, any service can reach any other service at that service’s name. (See also, the Links topic in Networking in Compose.)
Links also express dependency between services in the same way as depends_on, so they determine the order of service startup.
Notes
-
If you define both links and networks, services with links between them must share at least one network in common in order to communicate.
-
This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
The default driver, json-file, has options to limit the amount of logs stored. To do this, use a key-value pair for maximum storage size and maximum number of files:
options:
  max-size: "200k"
  max-file: "10"
The example shown above would store log files until they reach a
max-size of 200kB, and then rotate them. The amount of individual log files stored is specified by the
max-file value. As logs grow beyond the max limits, older log files are removed to allow storage of new logs.
Here is an example
docker-compose.yml file that limits logging storage:
services:
  some-service:
    image: some-service
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "10"
Logging options available depend on which logging driver you use
The above example for controlling log files and sizes uses options specific to the json-file driver. These particular options are not available on other logging drivers. For a full list of supported logging drivers and their options, see logging drivers.
Network mode. Use the same values as the docker client
--net parameter, plus the special form
service:[service name].
network_mode: "bridge"
network_mode: "host"
network_mode: "none"
network_mode: "service:[service name]"
network_mode: "container:[container name/id]"
Notes
-
This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
- network_mode: "host" cannot be mixed with links.
Networks to join, referencing entries under the top-level
networks key.
services:
  some-service:
    networks:
      - some-network
      - other-network
Specify a static IP address for containers for this service when joining the network.
The corresponding network configuration in the top-level networks section must have an
ipam block with subnet configurations covering each static address, for example:

  - subnet: 2001:3984:3989::/64
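A fuller sketch of assigning a static address (the network name, subnet, and address here are illustrative assumptions, not values from this page):

```yaml
version: "3"
services:
  app:
    image: alpine
    networks:
      app_net:
        ipv4_address: 172.16.238.10
networks:
  app_net:
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
```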
pid: "host"
Sets the PID mode to the host PID mode. This turns on sharing of the PID address space between the container and the host operating system. Containers launched with this flag will be able to access and manipulate other containers in the bare-metal machine’s namespace, and vice versa.
The long form syntax allows the configuration of additional fields that can’t be expressed in the short form.
target: the port inside the container
published: the publicly exposed port
protocol: the port protocol (tcp or udp)
mode: host for publishing a host port on each node, or ingress for a swarm mode port which will be load balanced.
ports:
  - target: 80
    published: 8080
    protocol: tcp
    mode: host
Note: The long syntax is new in v3.2
Grant access to secrets on a per-service basis using the per-service
secrets configuration. Two different syntax variants are supported.
Note: The secret must already exist or be defined in the top-level secrets configuration of this stack file, or stack deployment will fail.
The short syntax variant only specifies the secret name. This grants the container access to the secret and mounts it at
/run/secrets/<secret_name> within the container. The source name and destination mountpoint are both set to the secret name.
The following example uses the short syntax to grant the
redis service access to the
my_secret and
my_other_secret secrets. The value of
my_secret is set to the contents of the file
./my_secret.txt, and
my_other_secret is defined as an external resource, which means that it has already been defined in Docker, either by running the
docker secret create command or by another stack deployment. If the external secret does not exist, the stack deployment fails with a
secret not found error.
version: "3.1"
services:
  redis:
    image: redis:latest
    deploy:
      replicas: 1
    secrets:
      - my_secret
      - my_other_secret
secrets:
  my_secret:
    file: ./my_secret.txt
  my_other_secret:
    external: true
The long syntax provides more granularity in how the secret is created within the service’s task containers.
source: The name of the secret as it exists in Docker.
target: The name of the file that will be mounted in /run/secrets/ in the service’s task containers. Defaults to source if not specified.
uid and gid: The numeric UID or GID which will own the file within /run/secrets/ in the service’s task containers. Both default to 0 if not specified.
mode: The permissions for the file that will be mounted in /run/secrets/ in the service’s task containers, in octal notation. For instance, 0444 represents world-readable. The default in Docker 1.13.1 is 0000, but will be 0444 in the future.
The following example sets the name of the my_secret secret to redis_secret within the container, sets the mode to 0440 (group-readable) and sets the user and group to 103. The redis service does not have access to the my_other_secret secret.
version: "3.1"
services:
  redis:
    image: redis:latest
    deploy:
      replicas: 1
    secrets:
      - source: my_secret
        target: redis_secret
        uid: '103'
        gid: '103'
        mode: 0440
secrets:
  my_secret:
    file: ./my_secret.txt
  my_other_secret:
    external: true
You can grant a service access to multiple secrets and you can mix long and short syntax. Defining a secret does not imply granting a service access to it.
Override the default labeling scheme for each container.
security_opt:
  - label:user:USER
  - label:role:ROLE
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
Sets an alternative signal to stop the container. By default
stop uses SIGTERM. Setting an alternative signal using
stop_signal will cause
stop to send that signal instead.
stop_signal: SIGUSR1
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
Override the default ulimits for a container. You can either specify a single limit as an integer or soft/hard limits as a mapping.
ulimits:
  nproc: 65535
  nofile:
    soft: 20000
    hard: 40000
userns_mode: "host"
Disables the user namespace for this service, if Docker daemon is configured with user namespaces. See dockerd for more information.
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
Mount host paths or named volumes, specified as sub-options to a service.
Note: The top-level volumes key defines a named volume and references it from each service’s volumes list. This replaces volumes_from in earlier versions of the Compose file format. See Use volumes and Volume Plugins for general information on volumes.
version: "3.2"
services:
  web:
    image: nginx:alpine
    volumes:
      - type: volume
        source: mydata
        target: /data
        volume:
          nocopy: true
      - type: bind
        source: ./static
        target: /opt/app/static
  db:
    image: postgres:latest
    volumes:
      - "/var/run/postgres/postgres.sock:/var/run/postgres/postgres.sock"
      - "dbdata:/var/lib/postgresql/data"
volumes:
  mydata:
  dbdata:
Note: See Use volumes and Volume Plugins for general information on volumes.
Optionally specify a path on the host machine (
HOST:CONTAINER), or an access mode (
HOST:CONTAINER:ro).
The long form syntax allows the configuration of additional fields that can’t be expressed in the short form.
type: the mount type, volume or bind
source: the source of the mount, a path on the host for a bind mount, or the name of a volume defined in the top-level volumes key
target: the path in the container where the volume will be mounted
read_only: flag to set the volume as read-only
bind: configure additional bind options
propagation: the propagation mode used for the bind
volume: configure additional volume options
nocopy: flag to disable copying of data from a container when a volume is created
version: "3.2"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    networks:
      webnet:
    volumes:
      - type: volume
        source: mydata
        target: /data
        volume:
          nocopy: true
      - type: bind
        source: ./static
        target: /opt/app/static
Note: The long syntax is new in v3.2
When working with services, swarms, and
docker-stack.yml files, keep in mind that the tasks (containers) backing a service can be deployed on any node in a swarm. The example below uses a named volume in order to persist the data on the swarm, and the service is constrained to run only on manager nodes. Here is the relevant snippet from that file:
version: "3"
services:
  db:
    image: postgres:9.4
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - backend
    deploy:
      placement:
        constraints: [node.role == manager]
Here is an example of configuring a volume as
cached:
version: '3'
services:
  php:
    image: php:7.1-fpm
    ports:
      - 9000
    volumes:
      - .:/var/www/project:cached
Full detail on these flags, the problems they solve, and their
docker run counterparts is in the Docker for Mac topic Performance tuning for volume mounts (shared filesystems).
no is the default restart policy, and it will not restart a container under any circumstances. When
always is specified, the container always restarts. The
on-failure policy restarts a container if the exit code indicates an error. The unless-stopped policy always restarts a container, except when the container has been stopped.
restart: "no"
restart: always
restart: on-failure
restart: unless-stopped
Each of these is a single value, analogous to its docker run counterpart.
user: postgresql
working_dir: /code
domainname: foo.com
hostname: foo
ipc: host
mac_address: 02:42:ac:11:65:43
privileged: true
read_only: true
shm_size: 64M
stdin_open: true
tty: true
While it is possible to declare volumes on the fly as part of the service declaration, this section allows you to create named volumes (without relying on
volumes_from) that can be reused across multiple services, and are easily retrieved and inspected using the docker command line or API. See the docker volume subcommand documentation for more information.
See Use volumes and Volume Plugins for general information on volumes.
Here’s an example of a two-service setup where a database’s data directory is shared with another service as a volume so that it can be periodically backed up:
version: "3"
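The example itself did not survive extraction; a minimal sketch of such a two-service setup might look like the following (the image names and the backup command are assumptions, not from this page):

```yaml
version: "3"
services:
  db:
    image: postgres:9.4
    volumes:
      - dbdata:/var/lib/postgresql/data
  backup:
    image: postgres:9.4
    volumes:
      - dbdata:/var/lib/postgresql/data
    command: pg_dump -U postgres -f /var/lib/postgresql/data/backup.sql postgres
volumes:
  dbdata:
```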
Specify a list of options as key-value pairs to pass to the driver for this volume. Those options are driver-dependent - consult the driver’s documentation for more information. Optional.
driver_opts:
  foo: "bar"
  baz: 1
External volumes are always created with docker stack deploy
External volumes that do not exist will be created if you use docker stack deploy to launch the app in swarm mode (instead of docker compose up). In swarm mode, a volume is automatically created when it is defined by a service. As service tasks are scheduled on new nodes, swarmkit creates the volume on the local node. To learn more, see moby/moby#29976.
Docker defaults to using a
bridge network on a single host. For examples of how to work with bridge networks, see the Docker Labs tutorial on Bridge networking.
The
overlay driver creates a named network across multiple nodes in a swarm.
For a working example of how to build and use an
overlay network with a service in swarm mode, see the Docker Labs tutorial on Overlay networking and service discovery.
For an in-depth look at how it works under the hood, see the networking concepts lab on the Overlay Driver Network Architecture.
Specify a list of options as key-value pairs to pass to the driver for this network. Those options are driver-dependent - consult the driver’s documentation for more information. Optional.
driver_opts:
  foo: "bar"
  baz: 1
Enable IPv6 networking on this network.
A full example:
ipam:
  driver: default
  config:
    - subnet: 172.28.0.0/16
Note: Additional IPAM configurations, such as
gateway, are only honored for version 2 at the moment.
By default, Docker also connects a bridge network to it to provide external connectivity. If you want to create an externally isolated overlay network, you can set this option to
true.
If set to
true, specifies that this network has been created outside of Compose.
docker-compose up will not attempt to create it, and will raise an error if it doesn’t exist.
external cannot be used in conjunction with other network configuration keys (driver, driver_opts, and so on).
The top-level
configs declaration defines or references configs which can be granted to the services in this stack. The source of the config is either
file or
external.
file: The config is created with the contents of the file at the specified path.
external: If set to true, specifies that this config has already been created. Docker will not attempt to create it, and if it does not exist, a config not found error occurs.
In this example,
my_first_config will be created (as
<stack_name>_my_first_config) when the stack is deployed, and
my_second_config already exists in Docker.
configs:
  my_first_config:
    file: ./config_data
  my_second_config:
    external: true
Another variant for external configs is when the name of the config in Docker is different from the name that will exist within the service. The following example modifies the previous one to use the external config called
redis_config.
configs:
  my_first_config:
    file: ./config_data
  my_second_config:
    external:
      name: redis_config
You still need to grant access to the config to each service in the stack.
The top-level
secrets declaration defines or references secrets which can be granted to the services in this stack. The source of the secret is either
file or
external.
file: The secret is created with the contents of the file at the specified path.
external: If set to true, specifies that this secret has already been created. Docker will not attempt to create it, and if it does not exist, a secret not found error occurs.
In this example,
my_first_secret will be created (as
<stack_name>_my_first_secret) when the stack is deployed, and
my_second_secret already exists in Docker.
secrets:
  my_first_secret:
    file: ./secret_data
  my_second_secret:
    external: true
Another variant for external secrets is when the name of the secret in Docker is different from the name that will exist within the service. The following example modifies the previous one to use the external secret called
redis_secret.
secrets:
  my_first_secret:
    file: ./secret_data
  my_second_secret:
    external:
      name: redis_secret
You still need to grant access to the secrets to each service in the stack.
replace the signal mask, and then suspend the process
#include <signal.h> int sigsuspend( const sigset_t *sigmask );
The sigsuspend() function replaces the process's signal mask with the set of signals pointed to by sigmask and then suspends the process until delivery of a signal whose action is either to execute a signal-catching function (then return), or to terminate the process.
A value of -1 is returned (if it returns at all), and errno is set.
/*
 * This program pauses until a signal other than
 * a SIGINT occurs. In this case a SIGALRM.
 */
#include <stdio.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

sigset_t set;

int main( void )
{
    sigemptyset( &set );
    sigaddset( &set, SIGINT );
    printf( "Program suspended and immune to breaks.\n" );
    printf( "A SIGALRM will terminate the program"
            " in 10 seconds.\n" );
    alarm( 10 );
    sigsuspend( &set );
    return EXIT_FAILURE;    /* not reached: SIGALRM terminates the process */
}
POSIX 1003.1
errno, pause(), sigaction(), sigpending(), sigprocmask() | https://users.pja.edu.pl/~jms/qnx/help/watcom/clibref/qnx/sigsuspend.html | CC-MAIN-2022-33 | refinedweb | 145 | 68.47 |
Red Hat Bugzilla – Bug 25731
strcpy breaks with -O2 -fno-inline
Last modified: 2016-11-24 09:53:58 EST
$ cat foo.c
#include <string.h>
int main(void)
{
char foo[4];
strcpy(foo, "foo");
return 0;
}
$ gcc -O2 -fno-inline foo.c
/tmp/ccLwrlUH.o: In function `main':
/tmp/ccLwrlUH.o(.text+0x5c): undefined reference to `__strcpy_small'
collect2: ld returned 1 exit status
$ rpm -q glibc gcc
glibc-2.2.1-3
gcc-2.96-71
Pass -O2 -fno-inline -D__NO_STRING_INLINES then. Really, there is nothing
glibc can do about this (apart from exporting all those inlines as static
functions but that makes no sense) and gcc does not pass any macros
which would tell whether -finline or -fno-inline was passed, so it has to use
__OPTIMIZE__ (and has done that for ages). | https://bugzilla.redhat.com/show_bug.cgi?id=25731 | CC-MAIN-2018-34 | refinedweb | 135 | 72.76 |
xsh - XML Editing Shell
xsh [options] script_or_command
xsh [options] -al script [arguments ...]
xsh [options] -aC command [arguments ...]
xsh [options] -p commands < input.xml > output.xml
xsh [options] -I input.xml -O output.xml commands
xsh [options] -P file.xml commands
xsh [options] -cD compiled_script.pl ...
See the XSH2 manual page for a description of the scripting language. Command-line arguments are passed to the script in @ARGV (in the XML::XSH2::Map namespace). If used with --commands or without --load, the first argument should contain XSH commands and the rest are passed via @ARGV.
Indicate that the command-line arguments should be treated as input filenames. If used with --commands or without --load, the first argument should contain XSH commands. An XSH script specified with --load or XSH commands specified with --commands (or both in this order) are evaluated repeatedly on each input file. For example, running
$ xsh -l script.xsh -FC command file1.xml file2.xml ...
is equivalent to
$ xsh --stdin -aC command file1.xml file2.xml ...
for $INPUT_FILENAME in { @ARGV } {
  $INPUT := open $INPUT_FILENAME;
  . "script.xsh";
  command
}
^D
RE: Index rebuilding
Well, actually, I believe there is clearly is no argument about the definition of a balanced B-tree. I think the guy who defined it in 1972, Rudolf Bayer, gets to keep his word.
See: (National Institute of Standards and Technology) for the full information, or if you want the full glory of the ancient paper, try Rudolf Bayer and Edward M. McCreight, Organization and Maintenance of Large Ordered Indices, Acta Informatica, 1:173-189, 1972.
So balance, by official definition, includes both the number of levels from
the root AND the distribution amongst the nodes. Clearly Oracle has
implemented something a bit different. IF you do not define your terms, it
is common usage to assume reference to discrepancy in height only. However,
the m/2 rule has to do with the property of maintaining minimal height. Now
Oracle, by implementing grow by root split, uniformly increases the number
of levels to all levels. This indeed allows for divergence from the minimal
possible height to a node or set of nodes, but it saves a heck of a lot of
shuffling time.
(For the record, B*-trees are a variant of B-trees where only some component of the value required to navigate to the next layer is kept except for the final level "leaf nodes" where the full key and pointer(s) to data rows must be available.) Oracle also added in forward and backward leaf node pointers, which isn't strictly a feature of balanced B-trees, but I'm sure glad they did.
Anyway, by height Oracle B-tree indexes are always balanced by nature of implementation. By data distribution they have clearly decided it is better to defer all shuffling work except the most opportunistic cheap things such as noticing that a node is completely empty. Without the code, you'd have to engage in accidental discovery to find other cases where they might tune up the distribution on the fly. I always thought that one of the points of Richard's very fine paper was that Oracle made a very good choice in this regard. It is in tune with their general implementation philosophy that if you can defer work, defer it. That makes sense because there is a very good chance you'll never need to do the work at all. They diverge from that philosophy, where I've had the chance to talk to developers about the algorithms, only when doing the "cleanup" or deferable work is as cheap or nearly as cheap as not doing it.
So I'm completely in agreement that bastardization of carefully defined terms should be avoided.
Since there is a bias of usage toward only considering "balance" to mean number of levels, I believe I have personally always been careful to note when I refer to lack of balance as meaning bad distribution of keys in the leaf nodes or violation of the m/2 rule. In the late seventies there was actually a bit of a spat in the industry regarding whether you were allowed to call your indexing scheme a balanced B-tree. My recollection is that the rule was relaxed to m/4, and the maximum difference in levels traversed from root to leaf was settled on as something like 2 or 3 (where strictly it should be 1) but I can't find anything in print about that. Anyway, for a long time now we've called things B-trees that are a lot like B-trees but which don't follow the rules which even now still persist in the NIST definitions. Without the m/2 rule, by reductio ad absurdum, one row per leaf node qualifies as a balanced B-tree, and is in fact perfectly balanced. Even with the m/2 rule, if you choose 2 for m you get a binary tree. Clearly that is not what we want for optimal performance.
Maybe we should talk about Oracle index trees and unequal fullness of leaf
nodes, but folks mostly refer to what Oracle has implemented as balanced
B*trees.
Since they are implemented height balanced by artifact of splitting the root to grow in height, I guess folks that talk about an Oracle index being unbalanced are either talking about distribution of keys in the leaf nodes or they are ignorant, but I hope folks think about it a bit before they make an assumption.
Regards,
mwf
-----Original Message-----
From: oracle-l-bounce_at_freelists.org
[mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Cary Millsap
Sent: Sunday, November 14, 2004 11:46 AM
To: oracle-l_at_freelists.org
Subject: RE: Index rebuilding
Doesn't it come down to making sure you've defined your terms? A lot of the argument seems to be an implicit disagreement over what the word "balanced" means. In Knuth and other computer science texts that discuss indexes, I believe the definition of "balanced" is "an index is balanced iff (if and only if) all leaf nodes have the same distance to the root." By this definition, Oracle B*-tree indexes are ALWAYS balanced, and NEVER un-balanced. This point is not in contention, correct?

I think what's happening is that people who are complaining about un-balanced-ness are redefining the word "balance" to mean something completely different.

In general, I think it's sloppy to change the meaning of a scientific word in a discussion or "white paper." When I say "scientific word," I mean one that has been carefully defined and used in a specific context for--in this case--decades. It's one of the things that drives me nuts about the Oracle culture, this bastardization of carefully defined, well-established terms for the convenience of some Oracle author who writes more than he reads. :)

I guess the problem is analogous to the one being solved in the XML world by the implementation of XML namespaces. Maybe instead of the term "balanced", we should use the term "knuth:balanced" or "choose-an-author:balanced". In this case, I would suggest that the default namespace should be set to "knuth".

Alex
Sent: Friday, November 12, 2004 5:44 PM
To: DGoulet_at_vicr.com; oraclel_at_weikop.com; oracle-l_at_freelists.org
Subject: RE: Index rebuilding
I agree with Dick! Always and never are to be used in cases like "the sun always rises in the east" or "I've never enjoyed working with Oracle more than I do now" :)
Regards!
> Looked at Richard Foote's paper. Don't know about that. I did prove to
> OTS several years ago that a block could get "lost" in an index due to
> deletion/updates that left it empty. I believe that got finally fixed
> in Oracle 8i. I've still seen cases of indexes becoming unbalanced. I
> know the docs say it's impossible, but it does happen without the index
> height increasing. And I still believe that index deletes don't get
> flushed so efficiently, as Richard suggests. If that was the case then
> I can't explain why an index rebuild can cause an index to shrink by 30%
> or more. And recent experience still shows that a rebuild can cause
> significant performance improvement. And Oracle has provided the
> capability to rebuild indexes, which is not trivial. Therefore, NEVER
> use the word "never" unless you're absolutely certain that under all
> circumstances it will be absolutely true. And in the current context,
> that is the truth, that is, never can never be an absolute.
>
> Dick Goulet
> Senior Oracle DBA
> Oracle Certified 8i DBA
>
> -----Original Message-----
> From: Jared Still [mailto:jkstill_at_gmail.com]
> Sent: Friday, November 12, 2004 12:44 PM
> To: oraclel_at_weikop.com
> Cc: oracle-l_at_freelists.org; steve_at_trolltec.co.uk
> Subject: Re: Index rebuilding
>
> On Fri, 12 Nov 2004 11:49:46 +0100, Karsten Weikop
> <oraclel_at_weikop.com> wrote:
> > Please read the excellent paper from Richard Foote (which can be
> > downloaded from Miracle's site):
> >
> > Conclusion from this paper: Never Rebuild, but find the cause of the
> > problem.
>
> Never?
>
> I think you will find that statement as difficult to support as
> 'always rebuild'.
>
> --
> Jared Still
> Certifiable Oracle DBA and Part Time Perl Evangelist
Received on Sun Nov 14 2004 - 12:59:54 CST
#include <sys/stream.h>
#include <sys/strsun.h>

mblk_t *mexchange( queue_t *wq, mblk_t *mp, size_t size, uchar_t type,
    int32_t primtype );
Solaris DDI specific (Solaris DDI).
Optionally, write queue associated with the read queue to be used on failure (see below).
Optionally, the message to exchange.
Size of the returned message.
Type of the returned message.
Optionally, a 4 byte value to store at the beginning of the returned message.
The mexchange() function exchanges the passed in message for another message of the specified size and type.
If mp is not NULL, is of at least size bytes, and has only one reference (see dupmsg(9F)), mp is converted to be of the specified size and type. Otherwise, a new message of the specified size and type is allocated. If allocation fails, and wq is not NULL, merror(9F) attempts to send an error to the stream head.
Finally, if primtype is not -1 and size is at least 4 bytes, the first 4 bytes are assigned to be primtype. This is chiefly useful for STREAMS-based protocols such as DLPI and TPI which store the protocol message type in the first 4 bytes of each message.
A pointer to the requested message is returned on success. NULL is returned on failure.
This function can be called from user, kernel or interrupt context.
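As an illustrative sketch only (this is kernel STREAMS context, so it cannot run standalone; the DLPI primitive and helper name are assumptions, not from this page), a driver might acknowledge a request by recycling the request message:

```c
#include <sys/stream.h>
#include <sys/strsun.h>
#include <sys/dlpi.h>

/* Acknowledge a DLPI request by exchanging the request mblk. */
static void
ack_ok(queue_t *wq, mblk_t *mp)
{
	mblk_t *ackmp;

	/*
	 * On allocation failure mexchange() sends an error upstream
	 * for us, because wq is non-NULL.
	 */
	ackmp = mexchange(wq, mp, sizeof (dl_ok_ack_t), M_PCPROTO, DL_OK_ACK);
	if (ackmp != NULL)
		qreply(wq, ackmp);
}
```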
STREAMS Programming Guide | https://docs.oracle.com/cd/E36784_01/html/E36886/mexchange-9f.html | CC-MAIN-2018-13 | refinedweb | 225 | 75.4 |
Hi everyone
When I used brew to update python3, it installed Python 3.5.2. However,
$ python3 --version
Python 3.4.1
Then I try to update it.
$ brew install python3
I got this warning
Warning: python3-3.5.2_1 already installed, it's just not linked
How can I remove the link to 3.4.1. and relink it to 3.5.2???
I also installed matplotlib. It installed successfully. However, I cannot import it when I used python3. I get this kind of warning.
import matplotlib
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/username/Library/Python/3.4/lib/python/site-packages/matplotlib/__init__.py", line 122, in <module>
from matplotlib.cbook import is_string_like, mplDeprecation, dedent, get_label
I suspect matplotlib is linked to Python3.5.2. Therefore, when I use python3 which is actually Python3.4.1, I get this kind of error.
If anyone knows how to link Python 3.5.2 to python3 on OS X, please let me know.
Thank you | http://python-forum.org/viewtopic.php?f=30&t=21045&sid=b509f68ff3b8a8afd7895a0a4f3d1a7d | CC-MAIN-2017-34 | refinedweb | 177 | 79.67 |
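For reference, Homebrew's own link commands are the usual way out of this state; a hedged sketch of the sequence (exact output and paths will vary by machine):

```shell
$ brew unlink python3            # remove the stale 3.4 symlinks
$ brew link python3              # link the installed 3.5.2 bottle
$ brew link --overwrite python3  # only if leftover files block the link
$ hash -r                        # make the shell forget the cached path
$ python3 --version
```

Note also that the traceback shows matplotlib living under ~/Library/Python/3.4, i.e. installed for the old interpreter; it would need to be reinstalled once python3 points at 3.5.2 (for example with python3 -m pip install matplotlib).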
When the check if(yesorno != yes) is true, I want the program to go back and ask again. Please post how I can do this, or a revised version of the code.
#include <stdafx.h>
#include <iostream>
#include <cstdlib>

using namespace std;

int main()
{
    //This states all of the variables that will be used in the code.
    int division;
    int addition;
    int multiplication;
    int subtraction;
    int yesorno;
    int operation;
    int number1;
    int number2;

    //This states the forumula for each variable that will not be defined later
    division = number1 / number2;
    addition = number1 + number2;
    multiplication = number1 * number2;
    subtraction = number1 - number2;

    //Asks the user what they want to do
    cout << "Would you like to add, subtract, multiply, or add?";
    cout << "\n";
    cin >> operation;

    //Double checks with the user with what they want to do
    cout << "So you would like to ";
    cout << operation;
    cout << "?";
    cout << "YES/NO";
    cout << "\n";
    cin >> yesorno;

    //This decides the course of the user
    if(yesorno != yes)
        //What Do I put here to make the program to go back to line 7?
    if(yesorno = yes)
        cout << "What is the number you want to start out with?";
This post has been edited by CodingNewb: 09 December 2009 - 07:39 PM | http://www.dreamincode.net/forums/topic/144579-is-it-possible-to-return-to-a-certain-line/ | CC-MAIN-2017-30 | refinedweb | 184 | 65.76 |
Michael McCandless commented on LUCENE-1278:
--------------------------------------------
{quote}
I had a look at today's patch, but I stopped at DocumentsWriter because it contains a lot
of layout changes, so it's hard to focus on the functional differences.
{quote}
I also stopped at DocumentsWriter: it seems like nearly all the
changes are cosmetic. SegmentTermEnum is also hard to read.
In general it's best to not make cosmetic changes (moving around
import lines, changing whitespace, re-justifying whole paragraphs in
javadocs, etc.) at the same time as a "real" change, when possible. I
do admit there is a strong temptation ;)
Also, indentation should be two spaces, not tab. A number of sources
were changed to tab in the patch.
> | http://mail-archives.apache.org/mod_mbox/lucene-dev/200805.mbox/%3C639239521.1209988915628.JavaMail.jira@brutus%3E | CC-MAIN-2017-26 | refinedweb | 119 | 63.39 |
Using Bootstrap 4

Bootstrap 4 beta is “right around the corner,” and I’m sure there are plenty who are excited to start using it with Vue. Well, they already can, using bootstrap-vue. However, be warned that bootstrap-vue (like Bootstrap 4) is not yet stable, and usage may change between releases.
Installation
As usual, bootstrap-vue can be installed from NPM or Yarn. You’ll also want to install the normal bootstrap package for styles.
# Yarn
$ yarn add bootstrap-vue bootstrap

# NPM
$ npm install bootstrap-vue bootstrap --save
Then, in your app’s main file, enable the BootstrapVue plugin.
import Vue from 'vue';
import BootstrapVue from 'bootstrap-vue/dist/bootstrap-vue.esm';
import App from 'App.vue';

// Import the styles directly. (Or you could add them via <link> tags.)
import 'bootstrap/dist/css/bootstrap.css';
import 'bootstrap-vue/dist/bootstrap-vue.css';

Vue.use(BootstrapVue);

new Vue({
  el: '#app',
  render: h => h(App)
});
NOTE: Styles are injected globally and may affect other components. Use with care.
Components
NOTE: You don’t need to worry about including Bootstrap’s JS, interactivity is handled by the components.
Usage
Just use the various components in your app as normal! Non-interactive elements are still handled via CSS, so don’t get too worried about it deprecating all your current Bootstrap knowledge.
<template>
  <b-card no-block>
    <b-tabs>
      <b-tab title="Tab 1">
        Tab 1 Contents
      </b-tab>
      <b-tab title="Tab 2">
        Tab 2 Contents
        <b-button>Boop</b-button>
      </b-tab>
      <b-tab title="Tab 3" disabled>
        Tab 3 Contents
      </b-tab>
    </b-tabs>
  </b-card>
</template>
Documentation
Obviously there’s not a lot here to help you write a complete app, so consult the docs for Bootstrap Vue, and Bootstrap 4. | https://www.digitalocean.com/community/tutorials/vuejs-using-bootstrap4 | CC-MAIN-2020-34 | refinedweb | 288 | 57.06 |
How to override completely the create method of pos_session class?
Hello all,
I just want to override completely the create method of pos_session class in my custom module.
But because my create method ends with return super(), I can't get this to work: the new create method always falls back to the old one because of the super call.
I don't want to use the original create method anymore.
Here is a part of my override code :
class pos_session(osv.osv):
    _inherit = 'pos.session'

    def create(self, cr, uid, values, context=None):
        [...]
        res = super(pos_session, self).create(cr, uid, values, context=context)
        return res
Thanks for any help
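One way to get this behaviour (a sketch, not tested against any particular Odoo version) is to skip the immediate parent in Python's method-resolution order by calling the base class's method directly. The plain-Python classes below, with purely illustrative names, show the pattern:

```python
# Base/Child/GrandChild stand in for osv.osv / pos.session / the custom
# module; the field names are made up for the demonstration.
class Base:
    def create(self, values):
        return ("base", values)

class Child(Base):
    def create(self, values):
        values = dict(values, touched_by_child=True)
        return super().create(values)

class GrandChild(Child):
    def create(self, values):
        # Call Base.create directly, bypassing Child.create entirely.
        return Base.create(self, values)

print(GrandChild().create({"name": "s1"}))
# -> ('base', {'name': 's1'})
```

In the snippet above, that would mean writing `return osv.osv.create(self, cr, uid, values, context=context)` instead of the super() call. Be aware that skipping pos.session's original create also skips whatever setup it performs, which is usually the reason it exists.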
Library: XML
Package: DOM
Header: Poco/DOM/NamedNodeMap.h
Description.
A NamedNodeMap returned from a method must be released with a call to release() when no longer needed.
Inheritance
Direct Base Classes: DOMObject
All Base Classes: DOMObject
Known Derived Classes: DTDMap, AttrMap
Member Summary
Member Functions: getNamedItem, getNamedItemNS, item, length, removeNamedItem, removeNamedItemNS, setNamedItem, setNamedItemNS
Inherited Functions: autoRelease, duplicate, release
Destructor
~NamedNodeMap
virtual ~NamedNodeMap();
Member Functions
getNamedItem
virtual Node * getNamedItem(const XMLString & name) const = 0;
Retrieves a node specified by name.
getNamedItemNS
virtual Node * getNamedItemNS(const XMLString & namespaceURI, const XMLString & localName) const = 0;
Retrieves a node specified by namespace URI and local name.
item
virtual Node * item(unsigned long index) const = 0;
Returns the index'th item in the map. If index is greater than or equal to the number of nodes in the map, this returns null.
length
virtual unsigned long length() const = 0;
Returns the number of nodes in the map. The range of valid child node indices is 0 to length - 1 inclusive.
removeNamedItem
virtual Node * removeNamedItem(const XMLString & name) = 0;
Removes a node specified by name. When this map contains the attributes attached to an element, if the removed attribute is known to have a default value, an attribute immediately appears containing the default value.
removeNamedItemNS
virtual Node * removeNamedItemNS(const XMLString & namespaceURI, const XMLString & localName) = 0;
Removes a node specified by namespace URI and local name.
setNamedItem
virtual Node * setNamedItem(Node * arg) = 0;
Adds a node using its nodeName attribute. If a node with that name is already present in this map, it is replaced by the new one.
setNamedItemNS
virtual Node * setNamedItemNS(Node * arg) = 0;

Adds a node using its namespaceURI and localName attributes.
I wanted to add an animated pie chart to my previous post. The samples from the Toolkit are terrific, but sometimes it is difficult to find the easiest, most cookbook-like process; so for those of you who might want to do the same, here is an annotated walk-through of creating this animated pie chart:
I began by opening Visual Studio and creating a new Silverlight Application, and saying no to the offer to create a Web application.
The UI, created in Page.xaml consists of the header and the Chart, placed in a Grid. You can easily do this in Blend or, in this case the layout is so simple, I hard-coded it in Xaml:
<UserControl
    xmlns:chartingToolkit="clr-namespace:System.Windows.Controls.DataVisualization.Charting;assembly=System.Windows.Controls.DataVisualization.Toolkit"
    x:Class="Pi.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <Grid x:Name="LayoutRoot">
        <Grid.RowDefinitions>
            <RowDefinition Height="50" />
            <RowDefinition Height="*" />
        </Grid.RowDefinitions>
        <TextBlock Text="Division of Effort Among Activities"
                   Margin="20,10,0,0"
                   Grid.Row="0" />
        <chartingToolkit:Chart x:Name="ActivityChart"
                               Margin="20"
                               Grid.Row="1">
            <chartingToolkit:Chart.Series>
                <chartingToolkit:PieSeries IndependentValueBinding="{Binding Name}"
                                           DependentValueBinding="{Binding Value}" />
            </chartingToolkit:Chart.Series>
        </chartingToolkit:Chart>
    </Grid>
</UserControl>
The key Xaml here is the Chart control which contains a Series which in turn contains a PieSeries. The PieSeries has an Independent and a Dependent set of values, as explained in some detail here.
To have a set of values to bind to, I created a data class that I named Activity. The Activity class has three properties of interest:
- Name – used to hold the independent value
- Value – used to hold the dependent value
- Activities – returns a list of Activity objects
1: using System;
2: using System.Collections.Generic;
3:
4: namespace Pi
5: {
6: public class Activity
7: {
8: public string Name { get; set; }
9: public double Value { get; set; }
10:
11: // static property to retrieve
12: // List of Activity objects
13: public static List<Activity> Activities
14: {
15: get
16: {
17: // value of 5 and names of activities are hard coded
18: // Generalizing is left as an exercise for the ambitious
19: var percentages = FillPercentages(5);
20: var activitiesList = new List<Activity>()
21: {
22: new Activity() {Name = "RiaServices", Value = percentages[0]},
23: new Activity() {Name = "DataGrid", Value = percentages[1]},
24: new Activity() {Name = "Behaviors", Value = percentages[2]},
25: new Activity() {Name = "VSM", Value = percentages[3]},
26: new Activity() {Name = "SampleData", Value = percentages[4]}
27: };
28: return activitiesList;
29: }
30: }
31:
32: // fill List<Double> with n doubles that sum to 1.0
33: // where n = numDoubles
34: private static List<Double> FillPercentages(int numDoubles)
35: {
36: var pctgs = new List< Double >();
37: var r = new Random();
38: double total = 0.0;
39:
40: for (int i = 0; i < numDoubles-1;)
41: {
42:
43: double val = r.NextDouble();
44: if ( val + total < 1.0)
45: {
46: pctgs.Add(val);
47:                 total += val; ++i;   // keep the running total up to date
48: }
49: }
50: pctgs.Add( 1.0 - total ); // final value
51: return pctgs;
52: }
53:
54: } // end class
55: } // end namespace
On line 13 we start the definition of the Activities property (only a get accessor is implemented)
On line 19 we delegate to a helper method generating 5 random values between 0 and 1 that together sum to 1. These will be treated as the relative percentages reflected in the pie chart.
Lines 20-27 fill our five hard-coded activities with the generated percentages and on line 28 we return the List of Activity objects we just created.
The code-behind for MainPage uses a dispatch timer to call a helper method FillPie every 4 seconds.
The helper method sets the ItemSource property on the PieSeries to whatever is returned by the static Activities property of the Activity class. Retrieving that property causes the percentages to be regenerated, and the chart is redrawn.
1: using System;
2: using System.Windows.Controls;
3: using System.Windows.Controls.DataVisualization.Charting;
4: using System.Windows.Threading;
5:
6: namespace Pi
7: {
8: public partial class MainPage : UserControl
9: {
10: public MainPage()
11: {
12: InitializeComponent();
13: Animate();
14: }
15:
16: private void Animate()
17: {
18: var timer = new DispatcherTimer();
19: timer.Start(); // Run once for display
20: // lambda syntax - same as
21: // timer.Tick +=new EventHandler(FillPie);
22: // but then you'd need FillPie to take an object and event args
23: timer.Tick +=
24: ( ( s, args ) => FillPie() ); // every tick call FillPie
25:
26: //
27: timer.Interval = new TimeSpan( 0, 0, 4 ); // 4 seconds
28: timer.Start();
29: }
30:
31:
32: private void FillPie()
33: {
34: var cs = ActivityChart.Series[0] as PieSeries;
35: if ( cs != null )
36: {
37: // generating the data is handled by the static property
38: cs.ItemsSource = Activity.Activities;
39: }
40: else
41: {
42:             throw new InvalidCastException( "Expected Series[0] to be a PieSeries" );
43: } // end else
44: } // end method
45: } // end class
46: } // end namespace
Notice that on lines 23 and 24 we register the FillPie method with the event using lambda notation; this makes short work of using a method that does not happen to need the standard arguments (object, eventArgs).
Timer.Start is called on line 19 to cause an immediate drawing of the pie, and then again on line 28 to implement the new time interval. | http://jesseliberty.com/2009/08/27/pie-chart-easy-as%E2%80%A6/ | CC-MAIN-2017-04 | refinedweb | 859 | 51.28 |
04, 2008 05:59 PM
RubyConf '08 featured many talks about the various Ruby VMs. The talks varied from in-depth technical coverage of implementation details to tech demos and general talks about Ruby. One of them covered Ricsin, which mixes C code directly into Ruby; the sample source from the presentation slides should make the concept clear:
def open_fd(path) fd = _C_(%q[ /* C code */ return INT2FIX(open(RSTRING_PTR(path), O_RDONLY)); ]) raise 'open error' if fd == -1 yield fd ensure raise 'close error' if -1 == _C_(%q[ /* C Code */ return INT2FIX(close(FIX2INT(fd))); ]) end
It differs from RubyInline by allowing C snippets to be placed inside a Ruby method. The current version received special support in YARV. Ricsin's SVN repository is publicly available.
Further projects are a Ruby to C compiler (at around 28:40), followed by Atomic-Ruby, which tries to shrink Ruby by allowing some parts to be excluded.
At 41:00 the status of the Ruby MVM project is explained as well (InfoQ reported about Ruby MVM).
Evan Phoenix' talk on Rubinius explained the status of the Rubinius project and of its C++ VM.
The reasons for the rewrite of the old C VM as a C++ VM are explained: type safety, and the ways C++ helped make the code simpler and removed many manual checks.
At 18:04 the current state of the implementation of primitives is discussed (how to write primitives in C). At 26:30, method dispatch and the strategies used to make it fast are discussed. At 35:00, the current state of MethodContext ("stack frames") allocation is explained. In older versions of Rubinius, these were heap allocated, but the current version tries to allocate them on a stack (in a dedicated memory region) to reduce allocation cost. However, MethodContexts can be kept alive beyond the lifetime of the methods that they were created for (e.g. by a closure). These objects are kept alive - but since the memory region dedicated to MethodContexts is limited in size, it will fill up at some point, which triggers a GC run to clean up MethodContexts that are no longer referenced.
At 38:40 follows a discussion of Rubinius' support for Ruby extensions, which is important for running C extension gems such as hpricot, mongrel, and the mysql and sqlite drivers. Rubinius' handling of the problems extensions pose is also explained (e.g. problems with the generational GC or segfaulting C code).
Another talk about these topics is "How Ruby can be fast" by Glenn Vanderburg. While not a VM implementer, he discusses some reasons why Ruby is or can be slow. At 07:00 an explanation of Garbage Collection gives an overview of the theory of Generational GCs, followed by an explanation of performance optimizations for method dispatch at 19:35.
"Ruby Persistence in MagLev" by Bob Walker, Allan Ottis, explained the current status of MagLev. Bob Walker explains the basic benefits of Gemstone's persistence model, which allows to simply persist object graphs instead of having to map them to a relational model, thus avoiding a whole lot of issues with ORM libraries and tools.
At 14:30 Allan Ottis follows with a detailed explanations of the inner workings of MagLev, starting with the implementation of the object persistence. At 18:50 he continues with an overview of similarities and differences between Smalltalk and Ruby and what problems they pose for implementing Ruby on Smalltalk. At 22:29 the compilation process is explained, as well as the current parsing solution (ie. a Ruby server to parse Ruby code and use the ParseTree gem to return an AST formatted as ParseTree s-exprs).
At 25:00 the execution modes, both interpretation and native code generation are explained. At 30:30 the implementation of the allocation of Contexts ("stack frames") is shown, followed by the details of the object memory (heap) and the garbage collector.
The questions start at 44:20, covering distributed caching, future parsing strategies (using ruby_parser), and the behavior of transactions across the distributed object memory.
Some of the more demo oriented Ruby VM talks were
Finally, this is rounded off by Brian Ford's talk on RubySpec, where he explains the RubySpec project which now has tens of thousands of tests which define Ruby's behavior - a crucial tool of every alternative Ruby implementation.
Server Gated Cryptography | http://www.infoq.com/news/2008/12/rubyconf08-videos-rubyvms | crawl-002 | refinedweb | 740 | 50.36 |
Word Embedding and NLP with TF2.0 and Keras on Twitter Sentiment Data
Word Embedding and Sentiment Analysis
What is Word Embedding?
Natural Language Processing (NLP) refers to computer systems designed to understand human language. Human language, like English or Hindi, consists of words and sentences, and NLP attempts to extract information from these sentences.
Machine learning and deep learning algorithms only take numeric input, so how do we convert text to numbers?
A word embedding is a learned representation for text where words that have the same meaning have a similar representation. Embeddings translate large sparse vectors into a lower-dimensional space that preserves semantic relationships: individual words of a domain or language are represented as real-valued vectors in a lower-dimensional space. The sparse-matrix problem of the bag-of-words (BOW) model is solved by mapping high-dimensional data into a lower-dimensional space, and BOW's lack of meaningful relationships is addressed by placing vectors of semantically similar items close to each other. This way, words that have similar meanings are separated by similar distances in the vector space. For example, “king is to queen as man is to woman” is encoded in the vector space, as are verb tense and country–capital relationships.
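The analogy arithmetic can be made concrete with a toy example (hand-picked 2-d vectors, purely illustrative — real embeddings are learned and have hundreds of dimensions):

```python
import numpy as np

# Toy 2-d embeddings chosen by hand to illustrate the idea.
vec = {
    "king":  np.array([0.9, 0.8]),
    "queen": np.array([0.9, 0.2]),
    "man":   np.array([0.1, 0.8]),
    "woman": np.array([0.1, 0.2]),
}

# "king is to queen as man is to woman":
result = vec["king"] - vec["man"] + vec["woman"]
print(np.allclose(result, vec["queen"]))  # True for these toy vectors
```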
Dataset
This is the sentiment140 dataset. It contains 1,600,000 tweets extracted using the twitter api.
We are going to use 4000 tweets for training our model. The tweets have been annotated (0 = negative, 1 = positive) and they can be used to detect sentiment.
You can download the modified dataset from here.
Here we are importing the necessary libraries.

- pandas is used to read the dataset.
- numpy is used to perform basic array operations.
- Tokenizer is used to split the text into tokens.
- pad_sequences is used to pad the sequences if necessary.
- train_test_split from sklearn is used to split the data into training and testing sets.
- The other components are imported to build the neural network.
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Embedding, Activation, Dropout
from tensorflow.keras.layers import Conv1D, MaxPooling1D, GlobalMaxPooling1D
import numpy as np
from numpy import array
import pandas as pd
from sklearn.model_selection import train_test_split
read_csv is used to load the data into a dataframe. df.head() can be used to see the first 5 rows of the dataset.
df = pd.read_csv('twitter4000.csv')
df.head()
Now we will see the distribution of
sentiment in out dataset. The
value_counts() function is used to get a Series containing counts of unique values. In
df there are 2000 positive sentiment reviews and 2000 negative reviews.
df['sentiment'].value_counts()
1 2000 0 2000 Name: sentiment, dtype: int64
Now we will get the text data, i.e. the tweets, in the form of a list.
text = df['twitts'].tolist()
text[:10]
['is bored and wants to watch a movie any suggestions?', 'back in miami. waiting to unboard ship ', "@misskpey awwww dnt dis brng bak memoriessss, I thnk I'm sad. LoL", 'ughhh i am so tired blahhhhhhhhh', "@mandagoforth me bad! It's funny though. Zachary Quinto is only there for a few though. & to reply just put the @ symbol before the name!", "brr, i'm so cold. at the moment doing my assignment on Huntington's Disease, which is really depressing ", "@kevinmarquis haha yep but i really need to sleep, i feel like crap lol cant sleep when he's away god i'm pathetic!", "eating some ice-cream while I try to see @peterfacinelli's followers numbre raise...not working sadly ", '@phatty84 just hella bored at work lol', 'Food poisoning blowssss ']
We will get the labels in y.
y = df['sentiment']
Now we will use the Tokenizer() class to convert the data from text to numbers.
token = Tokenizer()
token.fit_on_texts(text)
token
<keras_preprocessing.text.Tokenizer at 0x1dfb8bae6a0>
word_index is a word -> index dictionary, so every word gets a unique integer value. Indexing starts at 1 (index 0 is reserved for padding), so we add 1 to its length to get the vocab_size.
vocab_size is the total number of unique words in our dataset.
vocab_size = len(token.word_index) + 1
vocab_size
10135
index_word is the inverse index -> word dictionary, mapping each unique integer back to its word. We can see the first 100 key-value pairs of the dictionary.
import itertools
print(dict(itertools.islice(token.index_word.items(), 100)))
{1: 'i', 2: 'to', 3: 'the', 4: 'a', 5: 'my', 6: 'and', 7: 'you', 8: 'is', 9: 'it', 10: 'in', 11: 'for', 12: 'of', 13: 'me', 14: 'on', 15: 'so', 16: 'that', 17: "i'm", 18: 'have', 19: 'at', 20: 'but', 21: 'just', 22: 'was', 23: 'with', 24: 'not', 25: 'be', 26: 'this', 27: 'day', 28: 'up', 29: 'now', 30: 'good', 31: 'all', 32: 'get', 33: 'out', 34: 'go', 35: 'no', 36: 'http', 37: 'today', 38: 'like', 39: 'are', 40: 'love', 41: 'your', 42: 'quot', 43: 'too', 44: 'lol', 45: 'work', 46: 'got', 47: "it's", 48: 'amp', 49: 'do', 50: 'com', 51: 'u', 52: 'back', 53: 'going', 54: 'what', 55: 'time', 56: 'from', 57: 'had', 58: 'will', 59: 'know', 60: 'about', 61: 'im', 62: 'am', 63: "don't", 64: 'can', 65: 'one', 66: 'really', 67: "can't", 68: 'we', 69: 'oh', 70: 'well', 71: 'still', 72: '2', 73: 'some', 74: 'its', 75: 'miss', 76: 'want', 77: 'see', 78: 'when', 79: 'home', 80: 'think', 81: 'an', 82: 'as', 83: 'if', 84: 'night', 85: 'need', 86: 'again', 87: 'new', 88: 'there', 89: 'morning', 90: 'here', 91: 'how', 92: 'her', 93: 'much', 94: 'thanks', 95: 'or', 96: 'they', 97: '3', 98: 'last', 99: 'off', 100: 'more'}
If we consider x = 'i to the a and' to be our text, then using token it will be encoded as shown below.
x = ['i to the a and']
token.texts_to_sequences(x)
[[1, 2, 3, 4, 6]]
Now we will encode text, which contains all the tweets.
encoded_text = token.texts_to_sequences(text)
print(encoded_text[:30])
[[8, 304, 6, 345, 2, 191, 4, 236, 254, 3079], [52, 10, 1019, 206, 2, 3080, 3081], [3082, 1197, 668, 1955, 3083, 1956, 3084, 1, 3085, 17, 115, 44], [1957, 1, 62, 15, 192, 3086], [3087, 13, 113, 47, 328, 136, 3088, 3089, 8, 101, 88, 11, 4, 285, 136, 48, 2, 448, 21, 277, 3, 3090, 218, 3, 449], [3091, 17, 15, 315, 19, 3, 892, 164, 5, 1459, 14, 3092, 3093, 386, 8, 66, 1460], [3094, 110, 366, 20, 1, 66, 85, 2, 108, 1, 117, 38, 536, 44, 182, 108, 78, 346, 207, 305, 17, 3095], [450, 73, 537, 569, 295, 1, 316, 2, 77, 3096, 367, 3097, 1461, 24, 187, 893], [3098, 21, 1958, 304, 19, 45, 44], [409, 3099, 3100], [3101, 132, 609, 79, 3, 193, 368, 17, 131, 3, 158, 199], [3102, 127, 1, 139, 226, 2, 1020, 9, 29, 1, 222, 74, 55, 2, 3103, 16, 3104], [67, 894, 423], [1959, 119, 52, 56, 211, 159, 387, 669, 48, 68, 255, 1462, 3, 3105, 71, 570, 5, 1959, 329], [1960, 3106, 3107, 46, 3108, 3109], [3110, 1463, 70, 19, 227, 17, 28, 2], [3111, 1, 245, 212, 1961, 51, 72, 36, 146, 246, 3112, 1, 538, 20, 74, 507, 1962, 410, 1, 1198, 219, 787], [3113, 69, 1, 1021, 5, 3114, 33, 2, 1199, 451, 263, 12, 9, 388, 1, 143, 76, 2, 316, 1464, 73, 159, 1465], [3115, 3116, 31, 12, 39, 3117, 9, 20, 96, 39, 24, 1466, 3118, 1200, 386, 507, 369, 15, 68, 571, 32, 2, 1022, 2, 51], [3119, 165, 88, 35, 64, 49, 1963, 6, 24, 52, 10, 572, 306, 1467, 176, 152, 75, 7], [3120, 15, 788, 78, 9, 610, 95, 91, 20, 1, 3121, 789, 370, 7, 91, 39, 7, 91, 18, 7, 102], [54, 8, 28, 23, 5, 3122], [52, 3123, 86, 37, 611, 15, 473, 452, 82, 1201, 101, 719, 153, 1468, 790, 11, 188, 3124], [3125, 69, 1, 670, 83, 3, 1964, 14, 3, 1469, 8, 720, 2, 34], [3126, 7, 347, 42, 3127, 11, 573, 42, 1, 22, 1965, 1966], [3128, 87, 671, 8, 101, 3129, 153, 207, 256, 16, 8, 1023], [1470, 3, 3130, 38, 35, 223], [539, 1, 18, 4, 389, 10, 5, 1967, 43], [1, 612, 2, 18, 255, 5, 1024, 1471], [52, 56, 3, 424, 3131, 102, 4, 148, 55, 508, 1, 98, 57, 3132, 3133, 3134, 3135, 227, 3136, 8, 14, 92, 116, 2, 791, 13]]
We can see that the length of each tweet is different. The lengths of all encoded tweets must be the same before feeding them to the neural network. Hence we use pad_sequences, which pads zeros after tweets shorter than 120 tokens.
max_length = 120
X = pad_sequences(encoded_text, maxlen=max_length, padding='post')
print(X)
[[    8  304    6 ...    0    0    0]
 [   52   10 1019 ...    0    0    0]
 [ 3082 1197  668 ...    0    0    0]
 ...
 [ 1033   21 1021 ...    0    0    0]
 [10134  134    7 ...    0    0    0]
 [   94   11  226 ...    0    0    0]]
Now we can see that we have 4000 tweets all having the same length of 120.
X.shape
(4000, 120)

Now we split the data into training (80%) and testing (20%) sets, stratified on the sentiment label.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 42, test_size = 0.2, stratify = y)
A Sequential() model is appropriate for a plain stack of layers where each layer has exactly one input tensor and one output tensor.

The Embedding() layer is initialized with random weights and will learn an embedding for all of the words in the training dataset. It requires 3 arguments:

- input_dim: The size of the vocabulary in the text data, which is 10135 in our case.
- output_dim: The size of the vector space in which words will be embedded. It defines the size of the output vectors from this layer for each word. We have set it to 300.
- input_length: The length of the input sequences, when it is constant. In our case it is 120.
Conv1D() is a 1D convolution layer. This layer is very effective for deriving features from a fixed-length segment of the overall dataset, where it is not so important where in the segment the feature is located. In the Conv1D() layer we learn a total of 64 filters with a convolutional window of size 8, using the ReLU activation function. The rectified linear activation function, or ReLU for short, is a piecewise linear function that outputs the input directly if it is positive; otherwise, it outputs zero.

MaxPooling1D() downsamples the input representation by taking the maximum value over the window defined by pool_size, which is 2 in this network.
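As a toy illustration of what max pooling with pool_size=2 does (plain NumPy, not part of the tutorial's model):

```python
import numpy as np

# 1-D max pooling with pool_size=2: each pair of neighbouring values
# is reduced to its maximum.
x = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 0.5])
pooled = x.reshape(-1, 2).max(axis=1)
print(pooled)  # [3. 5. 4.]
```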
Dropout() randomly sets the outgoing edges of hidden units to 0 at each update of the training phase. The value passed to Dropout specifies the probability at which outputs of the layer are dropped.

GlobalMaxPooling1D() downsamples the input representation by taking the maximum value over the time dimension.

Dense() is the regular densely connected neural network layer. The output layer is a Dense layer with 1 neuron because we are predicting a single value. The sigmoid activation is used because its output lies between 0 and 1, which makes it suitable for binary classification.
vec_size = 300

model = Sequential()
model.add(Embedding(vocab_size, vec_size, input_length=max_length))
model.add(Conv1D(64, 8, activation='relu'))
model.add(MaxPooling1D(2))
model.add(Dropout(0.2))
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(16, activation='relu'))
model.add(GlobalMaxPooling1D())
model.add(Dense(1, activation='sigmoid'))
Here we are compiling the model and fitting it to the training data. We will train for 5 epochs.

model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
%%time
model.fit(X_train, y_train, epochs = 5, validation_data = (X_test, y_test))
Train on 3200 samples, validate on 800 samples
Epoch 1/5
3200/3200 [==============================] - 8s 2ms/sample - loss: 0.6937 - acc: 0.4919 - val_loss: 0.6870 - val_acc: 0.5188
Epoch 2/5
3200/3200 [==============================] - 6s 2ms/sample - loss: 0.6588 - acc: 0.6212 - val_loss: 0.6328 - val_acc: 0.6425
Epoch 3/5
3200/3200 [==============================] - 5s 2ms/sample - loss: 0.5100 - acc: 0.7625 - val_loss: 0.6255 - val_acc: 0.6787
Epoch 4/5
3200/3200 [==============================] - 6s 2ms/sample - loss: 0.3110 - acc: 0.8763 - val_loss: 0.7330 - val_acc: 0.6925
Epoch 5/5
3200/3200 [==============================] - 6s 2ms/sample - loss: 0.1663 - acc: 0.9394 - val_loss: 0.7949 - val_acc: 0.6775
Wall time: 33.8 s
Now we will test the model by predicting the sentiment of unseen tweets. We will use get_encoded() to pre-process the tweets in the same way as the training data. We can then predict the class for new data instances with the predict_classes() function.
def get_encoded(x):
    x = token.texts_to_sequences(x)
    x = pad_sequences(x, maxlen=max_length, padding='post')
    return x

x = ['worst services. will not come again']
model.predict_classes(get_encoded(x))
array([[0]])
x = ['thank you for watching'] model.predict_classes(get_encoded(x))
array([[1]])
We can increase the accuracy of the model by training it on the entire dataset of 1,600,000 tweets. We can also apply more pre-processing techniques, like correcting spelling mistakes, collapsing repeated letters, etc.
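One such pre-processing step can be sketched as follows — collapsing letters repeated three or more times, a common trick for noisy tweet text. The exact rule (keep two letters) is a choice of this sketch, not something the tutorial specifies:

```python
import re

def squeeze_repeats(text):
    # "soooo" -> "soo", "blahhhhhhhhh" -> "blahh"; unaffected words pass through.
    return re.sub(r"(.)\1{2,}", r"\1\1", text)

print(squeeze_repeats("ughhh i am so tired blahhhhhhhhh"))
# -> 'ughh i am so tired blahh'
```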
Packages of C++ programming tools centred on a compiler are becoming so big that it is no longer possible to do a comprehensive review of a single product, let alone try to do a comparative review.
The ubiquitous benchmarks are rather meaningless because they do not compare like with like. Perhaps when the C++ language settles down to a reasonably stable and mature language it will be possible to provide some sort of measure for comparison, but for now any benchmark that will run on all the major contenders will not be doing the more up-to-date products justice.
For example, it is unfair to compare executable size between a compiler supporting exceptions and RTTI with one that does not. Actually, many writers do not understand the issue of code size; they think that you just need to look at the size of the executable file. If you believe that this tells you anything about a program's resource requirements, I suggest that you go and do your homework. It doesn't even tell you how much disk space you will need for a fully functioning release version of the program (there may be a whole bundle of support files needed by the executable, such as DLLs).
When you buy a compiler with, possibly, other tools in the package you need to know what you intend to do with it. Most programmers will be working in a well defined area where they have a need for some special facilities.
One of the things that puzzles me is that programmers do not place a higher priority on reliable compilation. The hobbyist may enjoy trying to identify bugs in the compiler but for the rest of us they are an expensive menace. We need to be able to trust our compilers to compile our code correctly. Bugs in a compiler are not a joke, nor are compilers that are based on a misinterpretation of whatever standard, working paper or draft that they claim to comply with.
Another feature that must be important for C++ (though not for C, as it is currently stable) is a belief that the compiler will continue to develop and sup-port changes in the language. For preference I would like my compiler to continue to compile legacy code as it did before while supporting new code correctly. If it cannot continue to support legacy code it should issue appropriate error or warning messages.
What I want to do in the remainder of this article is to point to those C++ compilers for MSDOS/MSWindows that I know about and suggest reasons why some of you might want to look closer at individual products. I would also like to hear from you about the product you are using, particularly if I have left it out.
If you do not use MSDOS/MSWindows please write in about the compiler package(s) you use on your platform highlighting any particular features you find important.
Please note that I cannot possibly know all about even a single product, so just because I do not mention a specific feature does not mean that it is missing from a package.
Those of you involved in numerical computing probably know that Salford Software are one of the world leaders among producers of FORTRAN compilers and libraries. This means that they have an excellent understanding of numerical specialists' needs, and their C++ compiler has excellent facilities for using FORTRAN libraries.
Unfortunately, as a relatively small company they have not involved themselves with the C and C++ standards. This means that they have sometimes got their C wrong, and their C++ is based rather too much on hearsay instead of hard facts.
Don't get me wrong - their product is good and the advantages to numerical specialists are considerable but the rest of us might be better off looking else-where.
The implementation team that came with the product when Clarion absorbed JPI is one of the most tal-ented in the world and had an excellent C++ compiler as well as some very innovative facilities. You note that I have written in the past tense. Clarion's main priority is their high quality Database Development package and much of their recent development efforts have been in this direction.
C++ has moved on over the last three years. The Clarion C++ package is still an excellent one but little has been done to it recently. This goes to show just how able their implementation team are - few C++ compilers could retain any value if left with only minimal maintenance over the last couple of years.
The question that I have to ask is how long this situa-tion will continue. With a stable language such as C their compiler will be good for quite a number of years but this is not the case with C++. Without sub-stantial work the C++ compiler is going to rapidly loose ground over the next eighteen months.
If you are involved in database development work and can take advantage of a well integrated database and compilers Clarion's TopSpeed products are still worth the effort of mastering. If you have a need for mixed Pascal, Assembler, Modula 2 and C/C++ pro-gramming the product is still on the must check list.
The great advantage of the GNU C++ compiler is that it comes with the complete source code. For the en-thusiast who wants to tinker this is an outstanding property.
This is not the product for those who want a nice cheap compiler to learn C/C++ programming. By the time you have paid for acquiring it, paid for all the other tools you will need you will have spent the sub-stantial part of £100.
The GNU products are great value for money for those that have the expertise to use them, for the rest they are a mountain of frustration.
The other problem with G++ is that it is designed for easy conversion to many platforms, but its starting place is on UNIX type systems. This means that the design is for 32-bit flat memory. You can guess that this needs more than a little fixing to work on a stan-dard PC.
Definitely not for the inexperienced.
Like G++ this is a multi-platform product with an expectation that you will be using 32-bit flat memory. As a commercial product the only acceptable fix is to provide a DOS-extender. Unfortunately the DOS compiler needs an extender but it is not part of the standard package.
This is a very high quality compiler but with very little else in the package. It is one of the most up to date C++ compilers available (the only commercial one that supports 'namespace' on a DOS platform.
The front ends for a very wide range of platforms are as identical as possible (sometimes a limitation of a platform results in some resource having to be dum-mied) and for those that have a need for multi-platform high quality development this is a product that should be on the inspection list.
However, the product is very expensive by PC terms, though cheap for UNIX. If you need versions for more than one platform you should talk to Metaware about prices as they sell each version separately.
If you want a single package for the whole range of Microsoft platforms as well as OS/2 then this is probably the most cost effective solution. Watcom have always produced high quality compilers coupled with a number of support tools. Until this last release they have relied on command line use. Those with the expertise to use an editor such as ED or MicroEmacs could integrate much for themselves but the remain-der have had to flounder around. Fine for the dedi-cated 'real programmer' but tough for the rest who just want to knock together a quick solution to to-day's problem.
With release 10 Watcom have done two important things. They bundled all the versions for different platforms (DOS, OS/2, MSWindows and Windows NT) on to one CD and provided support for 32-bit as well as 16-bit code and for provision of executables for all the above as well as DOS text, DOS extended and WIN32s.
They have also provided their first cut at an IDE. Pretty primitive still but at least a step in the direction that many programmers want.
The price without printed manuals (you do get a 'Getting Started' on paper) makes their product one worth spending time on.
One outstanding feature of their compiler is its debug support for release code. I wish all the others would emulate it.
Serious programmers should take some time to find out about this product even if you eventually decide it is not for you.
The ancestry of Symantec C++ should tell you to expect a really innovative product, at least a year ahead of its rivals. Unfortunately history will also lead you to expect a badly bugged product. That Symantec (like Zortech whom they bought) has one of the better technical support departments doesn't make up for lost time discovering that the problem is not in your code but in the compiler.
The new release which is about to come out (may be already out by the time you read this) will be very attractively priced and will include features that most programmers only dream of. A parser that has been uncoupled from the compiler so that your code can be parsed prior to compilation. After parsing an ex-cellent graphical browser allows you to inspect your code much more effectively at the time when such inspection is of most value. You can even modify inheritance hierarchies in the browser and get auto-matic modification of the code.
Incremental compilation and compilation distributed over a network are just two more of the goodies that wait for you to explore.
The one thing that I cannot know about is the reliability of the compiler. If Symantec have got that right this time then this will be the compiler that oth-ers will have to match.
The only problem is that it is only for Microsoft plat-forms.
Symantec C++ 7.0 might be cost effective for nov-ices but that will depend on the degree to which Sy-mantec have managed to clean up their IDE.
Definitely one to watch. I hope this one succeeds be-cause it has so much that I want to use.
I have no doubt that for the novice Turbo C++ or Turbo C++ for Windows is the cost effective point of departure. You will get an easy to use IDE and a very solid compiler. This is the product that has been de-signed for such users and for those for whom cost is a primary consideration. At the moment I just do not see another competitor in the market. (I think that those who think that Visual C++ standard edition meets such needs should look carefully at its limita-tions - those that want to do pure Windows pro-gramming simply should be looking at Visual Basic).
Borland C++ had become an excellent and very reli-able product by the time it reached release 3.1. They then tried to do a single massive leap forward and support almost all the contents of the C++ Working Paper as of Summer '93.
There were two dire consequences. They invested many resources in trying to solve interpretation and definition problems that WG21/X3J16 are still working on (this has an unfortunate consequence that Borland now have a decided motivation in seeing the WP adopt their solutions even if careful consideration suggest a different one). The other problem was that they produced a compiler that was buggy. I know that the compilers of most of their rivals are also bug rid-den but the users of such products already expected this. Users of Borland C++ 3.1 had come to expect a solid and stable product and it has been a rude shock to enter the real world of poor quality compilers - great features, a pity it doesn't compile my code to run the way I wrote it.
Release 4.02 did much to fix the bugs but the damage had been done. Many Borland devotees had lost con-fidence in the product.
A new release, 4.5, is due to ship soon. It doesn't add very much more apart from OLE2 support. What I hope is that it will put Borland back on course as producers of reliable and easy to use development tools.
Note that the Turbo products have retained their reli-ability because they have not tried to be 'leader of the pack' with the result that they are the leaders for those they are targeted at.
I should say one other brief thing, and that concerns OWL. Both the original product that was based on a method that WG21/X3J16 eventually rejected and the newer OWL2 were fine added value products unfor-tunately they have not captured the third party market which means that if you do not use a Borland com-piler you will not use OWL 1/2. This makes switch-ing between compilers more difficult.
Up until the recent release of 2.0, VC++ has been based on a rapidly dating specification for C++. This problem will remain for those that want to continue programming in 16-bit environments. They have no choice as 2.0 requires Windows NT 3.5 (or Chicago, sorry, Windows 95 when that finally makes it out of the box). On a long term basis Microsoft's strategy is perfectly sound. In three years time we will be wondering how anyone ever dreamt of managing with a DX33, 4 Mbytes of RAM and a paltry half gigabyte hard-drive. Unfortunately we have to get from here to there. Most ACCU members do not have the resources to run Windows NT 3.5 so VC++ 2.0 is not available to them.
For C++ programmers, the earlier versions of VC++ do not support language features that most of us now expect.
The strong point of these packages has been the growing adoption of MFC by third parties. The early versions do a reasonable job of encapsulating the Windows API. For technical reasons programs using MFC are likely to leak resources. What's new? :-)
The newest versions of MFC are more robust and I would expect them to work more reliably in an ex-ception handling environment.
Microsoft tell me that something like 80% of their developers use hardware that can run Windows NT 3.5. I am not sure what this means as I would expect that a far smaller percentage of C++ programmers use such equipment. Perhaps that reflects the differ-ence between being a C++ programmer and being an MSWindows developer. | https://accu.org/index.php/journals/603 | CC-MAIN-2017-39 | refinedweb | 2,498 | 68.4 |
read_reading, fread_reading - Read a trace file into a Read structure.
#include <Read.h> Read *read_reading( char *filename, int format);
Read *fread_reading( FILE *fp, char *filename, int format);
These functions read trace files into a Read structure. A variety of formats are supported including ABI, ALF and SCF. (Note that the first two are only supported when the library is used as part of the Staden Package.) Additionally, support for reading the plain (old) staden format files and Experiment files is included. Compressed trace files may also be read. Decompression is performed using either gzip -d or uncompress and is written to a temporary file for further processing. The temporary file is then read and removed.
When reading an experiment file the trace file referenced by the LN and LT line types is read. The QL, QR (left and right quality clips), SL and SR (left and right vector clips) are taken from the Experiment file to produce the cutoff information held within the Read structure. The orig_trace field of the Read structure will then contain the pointer to the experiment file structure and the orig_trace_format field will be set to TT_EXP.
The functions allocate a Read structure which is returned. To deallocate this structure use the read_deallocate() function.
read_reading() reads a trace from the specified filename and format. Formats available are TT_SCF, TT_ABI, TT_ALF, TT_PLN, TT_EXPand TT_ANY. Specifying format TT_ANY will attempt to automatically detect the corret format type by analysing the trace file for magic numbers and composition. The format field of the structure can then be used to determine the real trace type.
fread_reading() reads a trace from the specified file pointer. The filename argument is used for setting the trace_name field of the resulting structure, and for error messages. Otherwise the function is identical to the read_reading() function.
The Read structure itself is as follows. typedef uint_2 TRACE; /* for trace heights */
typedef struct
{
int format; /* Trace file format */
char *trace_name; /* Trace file name */
int NPoints; /* No. of points of data */
int NBases; /* No. of bases */
/* Traces */
TRACE *traceA; /* Array of length `NPoints' */
TRACE *traceC; /* Array of length `NPoints' */
TRACE *traceG; /* Array of length `NPoints' */
TRACE *traceT; /* Array of length `NPoints' */
TRACE maxTraceVal; /* The maximal value in any trace */
/* Bases */
char *base; /* Array of length `NBases' */
uint_2 *basePos; /* Array of length `NBases' */
/* Cutoffs */
int leftCutoff; /* Number of unwanted bases */
int rightCutoff; /* Number of unwanted bases */
/* Miscellaneous Sequence Information */
char *info; /* misc seq info, eg comments */
/* Probability information */
char *prob_A; /* Array of length 'NBases' */
char *prob_C; /* Array of length 'NBases' */
char *prob_G; /* Array of length 'NBases' */
char *prob_T; /* Array of length 'NBases' */
/* The original input format data, or NULL if inapplicable */
int orig_trace_format;
void *orig_trace;
} Read;
On successful completion, the read_reading() and fread_reading() functions return a pointer to a Read structure. Otherwise these functions return NULLRead (which is a null pointer).
write_reading(3),
fwrite_reading(3),
deallocate_reading(3),
scf(4),
ExperimentFile(4) | http://www.makelinux.net/man/3/F/fread_reading | CC-MAIN-2014-35 | refinedweb | 482 | 52.9 |
My post tries to kill 2 birds with 1 stone. Sorry in advance for the ignorance.
I'm trying to create an array of strings that I can
index[0] or use
ptr++ to advance the array. I'm not sure if I should create an array of
char pointers, or a pointer to a
char array. The variables will be stored in a struct. Forgive the ignorance, I'm just having a hard time with the order of precedence of when and where to use
(). I understand a basic struct, it was when I started using a pointer to a string when I started to loose syntax structure. If I can understand the syntax for this, I could apply it further to dimensional structures of arrays.
Assuming I had the assignment of the variables correct, I think I rather use
ptr++ in regards to something like
printf("%s", ptr++). If I understand correctly,
ptr++ would move the pointer to the next string, or some for of
ptr++ could. This correct? Seems like that would be faster for many, many things.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
struct Umbrella {
char *name[10];
} Umbrella;
int main ()
{
struct Umbrella * ptr;
// Not understanding this way...
ptr->name[0] = "Some name";
// or this way. Well name++ as long as it wasn't the first string.
ptr->name++ = "Some name";
return 0;
}
Boot note: I have read "C Primer Plus" by Prata. He does well in explaining things, it is just when you start bending things to your will when you start to fall short on applying the syntax. In example, it never covered using pointers to structures to access multidimensional arrays, and it didn't cover pointer arithmetic in a manner of syntax where you would actual use it. Can anyone recommend another book that might at least braze by such approaches?
P.S. This is my second post, and I forgot to say I really like this sites text input design. Had to say it :-).
Well, there's
char *name[10] which is really just something like :
char *name0; char *name1; char *name2; // .. etc
Accessing it as
ptr->name[0] will just pick the
ptr->name0 as a
char*.
ptr->name++ = "asdf"; is a pretty bad idea here. What it basically does is :
*(ptr->name) = "asdf"; ptr->name += 1;
Of course, you can't increase name by one here (it's an array, not a pointer) so the compiler won't allow it.
The
++ operator can be useful when iterating past objects. Example :
ptr->name[9] = nullptr; // Make sure the last element is a NULL pointer. // Take the first element char **it = ptr->name; char *current; // Loop until we reach the NULL while ((current = *(it++)) != nullptr) { printf("%s\n", current); }
The above is a (pretty ugly) way of iterating through an array.
Inserting things in a pre-allocated array:
char **it = ptr->name; // Start at the first element *(it++) = "Hi!"; *(it++) = "This is the second message."; *(it++) = "Hello world!"; *(it++) = nullptr; // End the array
Of course, all of this iteration stuff is from a dark past: nowadays we have C++ which takes care of most of these things for us, via
std::vector, etc. | http://www.dlxedu.com/askdetail/3/63aa72ba9787c5f75aed70d478d35b7d.html | CC-MAIN-2018-39 | refinedweb | 534 | 73.47 |
Question
Before performing any calculations, do you think that CVX and YUM would be good stocks to combine into a portfolio? (Explain.)
Answer to relevant QuestionsCalculate the expected return and variance of a portfolio comprised of 50% Chevron and 50% Yum! Brands. Would some investors find the 30/50/20 portfolio preferable to holding Chevron, Yum! Brands, Johnson & Johnson or the S&P 500 index alone? Draw an XY scatterplot of the expected return (y-axis) and standard deviation ...Calculate the future value of $1000 invested for 3 years at an 8% annual rate of interest with annual compounding. You would like to deposit funds in an investment account and make equal, annual withdrawals of $75,000 per year, beginning exactly 1 year from today and continuing for 15 years, after which time the value of the account will ...You've been shopping for a truck. You have $2000 to use as a down payment, and you've been working with your bank to get the best financing rate possible. The bank recently quoted you an APR of 4.5% for 48 months, but they ...
Post your question | http://www.solutioninn.com/before-performing-any-calculations-do-you-think-that-cvx-and | CC-MAIN-2017-09 | refinedweb | 188 | 64.71 |
Just a few more points on Windows Live Folders:
Also, I admit that I used the codename "Live Drive" as opposed to the correct name "Sky Drive" as the former has a well defined (if wrong) concept outside of Microsoft. We tried before to correct this error but it didn't work. Blame Ray Ozzie for that one :)
Previously:Windows Live Folders beta - more info "Live Drive" is almost here - Windows Live Folders beta
Meh, should still be called Live Drive. Using Homail for storage/using Live Folders for storage seems to be the same to me- online storage. It should just be Live Drive: simple, and catchy. Correct me if I'm wrong.
Yesterday, I signed in and up/downed a file, w/o any signup needed.
It only worked with signing in my live ID before visiting the folders website, though.
LiveSide has got three informative posts on the going-ons of one of the most anticipated Windows Live services, Windows Live Folders, the posts can be found here:
Hi,
Sorry about the trackbacks, I didn't tell it to do that, it just did it automatically. Feel free to remove them.
And thanks for the update about WLFolders, been waiting for some info for ages. Now all we need is WLCalendars. :)
Cheers,
Decipher.
Hey Decipher,
That's absolutely fine with the trackbacks. We like when those show up! As for Calendar, well I think we've all been waiting for that one. :)
The much-vaunted storage-in-the-cloud service from Microsoft's Windows Live is soon to see the light
Los Havros,
You are neither right or wrong - but of course that's your opinion. I hapoen to agree - Windows Live Drive is a great name.
-Jamie
Wow this sound really exciting.. can't wait to see it in action.
Hopefully they will have a small client program for easy uploading.. Kind of like Flickr for photos.
Oh would be nice if they also had a way to sync up a folder on your desktop.
I CANT WAIT!
Zlinko,
Do you not think that they already have the perfect client in Windows Explorer?
yeah...they've used Windows Explorer before...extending it with a namespace that ads like "MSN Groups" or "MSN Storage" back in the day.
It didn't always work so well...and the paltry 3MB storage limits easily curtailed usage. ;-)
Lookin forward to this going public as well... :)
Windows Live Folders and Windows Live Photo Gallery are set to begin limited managed betas, according | http://www.liveside.net/main/archive/2007/05/12/windows-live-folders-part-iii.aspx | crawl-002 | refinedweb | 421 | 74.9 |
On Thu, Oct 25, 2007 at 03:01:11PM -0700, Steve Langasek wrote: > On Tue, Oct 23, 2007 at 12:17:56PM +0300, Damyan Ivanov wrote: > > [d-release CC-ed for oppinion] > > [please CC at least debian-perl] > > > > > On 2007-06-23 Matthias Klose wrote: > > > > > > Package: libdbi-perl > > > > > > This package has been indentified as one with header files in > > > > > > /usr/include matching 'long *double'. Please close this bug report > > > > > > if it is a false positive, or rename the package accordingly. > It appears that libdbi-perl is included in the list for the ldbl transition > because of the file /usr/lib/perl5/auto/DBI/dbipport.h. I don't believe > that anything in Debian builds against this file, but then I also don't know > why it's in the package at all. If other packages do build against this > header, then there is at least potentially an ABI change that needs to be > handled. If this header is dead weight, then no changes need to be made to > the libdbi-perl package for this bug report (not even "depending on a perl > that is compiled with the new glibc/gcc"). The libdbd-*-perl packages do build against this file, through DBIXS.h. The file provides a compatibility API for older versions of Perl so that XS modules can use features introduced in newer versions. See Devel::PPPort(3perl) for more information. In this case, the relevant block is #ifndef NVTYPE # if defined(USE_LONG_DOUBLE) && defined(HAS_LONG_DOUBLE) # define NVTYPE long double # else # define NVTYPE double # endif which provides the NVTYPE definition when building with an old Perl that doesn't know about it. Now, 'perl dbipport.h --api-info NVTYPE' says Supported at least starting from perl-5.6.0. Support by dbipport.h provided back to perl-5.003. so this isn't used on Debian. Even if it were, our perl is compiled without the 'uselongdouble' configuration parameter (see #430322), so the 'long double' alternative wouldn't be used anyway. 
I'm closing the bug, please reopen if I have missed something. In that case, #430264 should probably be reopened too. Cheers, -- Niko Tyni ntyni@iki.fi | http://lists.debian.org/debian-perl/2007/11/msg00000.html | CC-MAIN-2013-48 | refinedweb | 357 | 63.19 |
LittleORM::Tutorial - what is and how to use LittleORM
LittleORM is an ORM. It uses Moose. It is tested to work with PostgreSQL 8.x and 9.x. It is also tested to work in persistent environment, such as mod_perl 2.x.
I used it in my projects for abt a year and it probably does all you need it to.
The main drawback I am aware of is that it is heavy if you need to process tenths of thousands of records, as every record gets created as an object.
Important: There are at least 2 things LittleORM does not do, which means that you have to do yourself:
- Create your tables, actually writing SQL yourself. You do it only once.
- Write Moose class representing your model. Although you could use inheritance mechanisms to simplify that some. You do it only once, you write your model.
- Connect to your DB with DBI. You connect to database, then initialize ORM with valid connected $dbh.
Did I say 2 things? OK, I meant 3.
Continuing with this tutorial I assume that you're more or less familiar with Moose. If not, then get acquainted before moving on. Moving on.
A model is your table description in terms of LittleORM. You create a model by subclassing from LittleORM::Model class. Or other class, which in turn, is a subclass of LittleORM::Model.
Suppose we have following table:
$ \d author Table "public.author" Column | Type | Modifiers --------+-----------------------+----------------------------------------------------- id | integer | not null default nextval('some_seq') name | character varying | email | character varying | login | character varying | pwdsum | character varying(32) | active | boolean | rctype | smallint | $
We'll call it MyModel::Author. So, let's write:
package MyModel::Author; use Moose; extends 'LittleORM::Model'; # the first column, is PK, id: has 'id' => ( metaclass => 'LittleORM::Meta::Attribute', isa => 'Int', is => 'rw', description => { primary_key => 1 } );
Note
description => { ... } attribute. It is how you tell LittleORM things about your columns.
metaclass => 'LittleORM::Meta::Attribute' should be included along and is required for Moose to process our extra description.
Now, as id column PK is pretty common, I ship base class for it with LittleORM. So we re-write our model:
package MyModel::Author; use Moose; extends 'LittleORM::GenericID';
OK, now we need to tell our model which table in database we work with. Redefine
sub _db_table for that:
package MyModel::Author;
use Moose;

extends 'LittleORM::GenericID';

sub _db_table { 'author' }

# Now the other columns:
has 'name'   => ( is => 'rw', isa => 'Str' );
has 'email'  => ( is => 'rw', isa => 'Str' );
has 'login'  => ( is => 'rw', isa => 'Str' );
has 'pwdsum' => ( is => 'rw', isa => 'Str' );
has 'active' => ( is => 'rw', isa => 'Bool' );
has 'rctype' => ( is => 'rw', isa => 'Int' );
NOTE: You would want to write Maybe[Str], Maybe[Int] if your columns can have NULL values in them.
Moving on.
As it is a Moose class you're writing, you're not limited to attributes that are present in your table. You can add more attributes and methods. A somewhat artificial example is a valid_email attribute:
has 'valid_email' => ( is          => 'rw',
                       isa         => 'Bool',
                       metaclass   => 'LittleORM::Meta::Attribute',
                       lazy        => 1,
                       builder     => '_is_valid_email', # your sub
                       description => { ignore => 1 } );
Note the description => { ignore => 1 } attribute. The attribute is not present in the table, so LittleORM must ignore it; this description tells it to.
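For completeness, here is what such a builder sub might look like. This is only a sketch: the _is_valid_email name comes from the attribute above, and the regex is a deliberately naive assumption, not anything LittleORM provides.

```perl
# Naive builder for the lazy valid_email attribute above.
# Real e-mail validation is much hairier than this.
sub _is_valid_email
{
    my $self = shift;

    my $email = $self -> email();
    return 0 unless defined $email;

    return ( $email =~ /^[^@\s]+@[^@\s]+\.[^@\s]+$/ ) ? 1 : 0;
}
```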
Up to now we have been describing our model; now it's time to manipulate it, actually working with the database.
As was said before, LittleORM does not connect to the DB for you. You have to connect and initialize it first. Now, in my web project I have $dbh available to me via a &dbconnect() function. It connects once and then returns $dbh to any client script that requires it.
NOTE: $dbh is supposed to be the db handle returned by DBI -> connect().
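Such a &dbconnect() helper is not part of LittleORM; you write it yourself. One possible shape, with a placeholder DSN and credentials, might be:

```perl
use DBI;

my $cached_dbh;

# Connect once, hand out the same handle afterwards.
# The DSN, user and password here are placeholders.
sub dbconnect
{
    $cached_dbh ||= DBI -> connect( 'dbi:Pg:dbname=mydb',
                                    'user',
                                    'password',
                                    { RaiseError => 1,
                                      AutoCommit => 1 } );
    return $cached_dbh;
}
```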
So we write:
use strict;
use MyModel::Author;

# ...

LittleORM::Db -> init( $dbh );

# selecting a single record is done with get():
my $author = MyModel::Author -> get( id => 100500 );
print $author -> name();

# selecting multiple records is done with get_many():
my @active = MyModel::Author -> get_many( active => 1 );
# now, @active is an ARRAY of MyModel::Author objects

# selecting a count is done with count() and returns an integer:
my $active_cnt = MyModel::Author -> count( active => 1 );
Every MyModel::Author object you get is the MyModel::Author you described in your model, with all the properties and methods you wrote.
NOTE: Always remember to do LittleORM::Db -> init()! Well, an assert will remind you to, but still.
If you're afraid that someone might have been tinkering with your record since you selected it, you can reload:
my $author = MyModel::Author -> get( id => 100500 );

# ...

$author -> reload();
Update is simple. You set the new value, then call update():
use strict;
use MyModel::Author;

# ...

LittleORM::Db -> init( $dbh );

my $author = MyModel::Author -> get( id => 100500 );

$author -> name( "New Name For This Author" );
$author -> update();
Insert is actually simple too:
# This will throw an assert on error:
my $new_author = MyModel::Author -> create( name => 'Mad Squirrel',
                                            # other attrs
                                          );
print $new_author -> id();
Now, you might want to create a new record only if it does not yet exist:
my $author = MyModel::Author -> get_or_create( name => 'Mad Squirrel',
                                               # other attrs
                                             );
print $author -> id();
And you might want to create a copy:
my $author = MyModel::Author -> get( id => 100500 );
my $new_one = $author -> copy();
Delete can be dangerous. Remember that.
my $author = MyModel::Author -> get( id => 100500 );
$author -> delete();

# same as:
MyModel::Author -> delete( id => 100500 );

# deletes all authors:
MyModel::Author -> delete();
It's safer to call delete() from an instance, not from the package.
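If class-level delete() makes you nervous, nothing stops you from adding a small application-level guard. This is my own defensive pattern, not a LittleORM feature:

```perl
# Refuse class-level deletes that carry no conditions at all,
# so a stray call cannot wipe the whole table.
sub safe_delete
{
    my ( $class, %filter ) = @_;

    die "refusing to delete all rows of $class" unless %filter;

    return $class -> delete( %filter );
}

# safe_delete( 'MyModel::Author', id => 100500 ); # fine
# safe_delete( 'MyModel::Author' );               # dies
```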
All the LittleORM calls mentioned are translated to SQL at some (close to final) point, and you might want to see what that SQL looks like.
Every LittleORM method which works with the DB - get(), get_many(), count(), delete(), update() - supports a _debug => 1 argument. If _debug => 1 is passed, the ORM does not touch the database, but builds the SQL it is about to execute and returns it as one plain string scalar.
my $author = MyModel::Author -> get( id => 100500,
                                     _debug => 1 );
print $author;

# Might produce something resembling:
# SELECT author.id,author.name,... FROM author WHERE id='100500'
Up to this point we have only used simple exact selection filters, like an exact id or an exact active field. Life is usually more complicated than that.
Note that the filtering clause syntax is the same in the get(), get_many(), count(), delete(), clause() and filter() methods. There will be more about the latter two later.
use strict;
use MyModel::Author;

# Don't forget:
LittleORM::Db -> init( $dbh );

# Several IDs:
my @ids_i_want = ( 123, 456, 789 );
my @authors = MyModel::Author -> get_many( id => \@ids_i_want );

# ID more than:
my @authors = MyModel::Author -> get_many( id => { '>', 100500 } );

# Name like:
my @authors = MyModel::Author -> get_many( name => { 'LIKE', 'Mad%' } );

# Combined with AND:
my @authors = MyModel::Author -> get_many( name => { 'LIKE', 'Mad%' },
                                           active => 0,
                                           id => { '>', 100500 } );

# Combined with OR:
my @authors = MyModel::Author -> get_many( name => { 'LIKE', 'Mad%' },
                                           active => 0,
                                           id => { '>', 100500 },
                                           _logic => 'OR' );
Sorting is done with the _sortby argument. We still want the active ones:
my @authors = MyModel::Author -> get_many( active => 1,
                                           _sortby => [ id => 'ASC',
                                                        created => 'DESC' ] );

# ... ORDER BY author.id ASC,author.created DESC ...
Oops, the author table does not contain a created column in our example. Anyway, you get the idea.
Both get() and get_many() support the following system arguments:
( _limit => Int ) - How many records we want to get with get_many() at once (translates to SQL LIMIT).

( _offset => Int ) - Starting from this offset (translates to SQL OFFSET).

    my @authors = MyModel::Author -> get_many( active => 1,
                                               _limit => 50,
                                               _offset => 0,
                                               _sortby => [ id => 'ASC',
                                                            created => 'DESC' ] );

( _distinct => 1/0 ) - Select only distinct records (SQL DISTINCT).

( _clause => $c ) - Pass a clause. If $c is an ARRAYREF it is assumed to be args for the clause() method.

( _logic => 'AND'/'OR' ) - Join all clauses with this logic. Default is 'AND'.

( _sortby => 'attr' ) or ( _sortby => { 'attr' => 'ASC' / 'DESC', ... } ) or ( _sortby => [ 'attr', 'ASC', ... ] ) - Sort.

( _dbh => $dbh ) - Pass another $dbh. Will be used as the default if no other was seen before.

( _where => 'RAW SQL' ) - Be cautious.
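Put together, _limit and _offset give you straightforward paging over a large result set. The page size and loop shape below are my own choices, not anything LittleORM prescribes:

```perl
my $page_size = 50;
my $offset    = 0;

while ( 1 )
{
    # Fetch one page; a stable _sortby keeps pages consistent
    # between queries.
    my @page = MyModel::Author -> get_many( active  => 1,
                                            _limit  => $page_size,
                                            _offset => $offset,
                                            _sortby => [ id => 'ASC' ] );
    last unless @page;

    # ... process @page here ...

    $offset += $page_size;
}
```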
LittleORM::Clause is a way to create and store selection clauses in an object. This object may then be used in get(), get_many(), and filter().
It can simplify the arguments to the get() methods. It can also help you separate building the selection arguments from selecting and processing the records. Be sure to look at LittleORM::Filter too.
They (Clause objects) can also be combined flexibly.
my $c1 = MyModel::Author -> clause( cond => [ id => { '>', 91 }, # anything that can be passed
                                              # to get() funcs, see
                                              # "MORE ON SELECTION CLAUSES"
                                              id => { '<', 100 } ] );

my $c2 = MyModel::Author -> clause( cond => [ id => { '>', 100 },
                                              id => { '<', 110 } ] );

my $c3 = MyModel::Author -> clause( cond => [ $c1, $c2 ],
                                    logic => 'OR' );

my $debug = MyModel::Author -> get( _clause => $c3,
                                    _debug => 1 );

# same as:
my $debug = MyModel::Author -> get( _clause => [ cond => [ $c1, $c2 ],
                                                 logic => 'OR' ],
                                    _debug => 1 );

# produces:
# ... WHERE ( ( id > '91' AND id < '100' ) OR ( id > '100' AND id < '110' ) )
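One practical use of clauses is defining a condition once and reusing it across several calls. A sketch, relying only on the _clause argument described above:

```perl
# Define "active authors" once:
my $active = MyModel::Author -> clause( cond => [ active => 1 ] );

# ... and reuse it wherever needed:
my $total   = MyModel::Author -> count( _clause => $active );
my @authors = MyModel::Author -> get_many( _clause => $active,
                                           _sortby => [ id => 'ASC' ] );
```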
LittleORM works nicely with foreign keys. You just have to specify the FK in your model description. Suppose we have one more table, in addition to author:
$ \d book
                 Table "public.book"
  Column   |            Type             |                Modifiers
-----------+-----------------------------+------------------------------------------
 id        | integer                     | not null default nextval('book_id_seq')
 title     | character varying           |
 published | timestamp without time zone |
 author    | integer                     |
$
Now, the author column of table book refers to author.id. That's our FK. OK, now let's write the model for books:
package MyModel::Book;

use Moose;
extends 'LittleORM::GenericID';

has 'title' => ( is => 'rw', isa => 'Str' );

# we'll convert this to DateTime later:
has 'published' => ( is => 'rw', isa => 'Str' );

# and finally:
has 'author' => ( is => 'rw',
                  isa => 'MyModel::Author',
                  metaclass => 'LittleORM::Meta::Attribute',
                  description => { foreign_key => 'MyModel::Author' } );
And that is all. Now we can do something like:
use strict;
use MyModel::Book;

# ...

LittleORM::Db -> init( $dbh );

my $book = MyModel::Book -> get( id => 100500 );

printf( "The author of %s is %s",
        $book -> title(),
        $book -> author() -> name() );
Note that this is possible only if the relation is 1-to-1. Although the author table does not contain a book column, we could write:
has 'book' => ( is => 'rw',
                isa => 'MyModel::Book',
                metaclass => 'LittleORM::Meta::Attribute',
                description => { foreign_key => 'MyModel::Book',
                                 ignore_write => 1, # can't write it
                                 db_field => 'id',
                                 foreign_key_attr_name => 'author' } );
As you can see, there are many keywords you can use in an attribute description. All of these have proven useful over years of real-world work.
Let's list them all:
coerce_from
A subroutine that is called to convert a DB field value into your class attribute value. Remember when we wrote:
has 'published' => ( is => 'rw', isa => 'Str' );
That's not very cool. Here is how to have DateTime there:
has 'published' => ( is => 'rw',
                     metaclass => 'LittleORM::Meta::Attribute',
                     isa => 'DateTime',
                     description => { coerce_from => sub { &ts2dt( $_[ 0 ] ) } } );
With ts2dt() being something like:
sub ts2dt
{
    my $ts = shift;

    return DateTime::Format::Strptime -> new() -> parse_datetime( $ts );
}
coerce_to
The reverse of coerce_from. The previous example will fail on updating/writing, because LittleORM has no way of knowing how to convert a DateTime back to the DB format. We should either put ignore_write there, or provide coerce_to:
has 'published' => ( is => 'rw',
                     metaclass => 'LittleORM::Meta::Attribute',
                     isa => 'DateTime',
                     description => { coerce_from => sub { &ts2dt( $_[ 0 ] ) },
                                      coerce_to   => sub { &dt2ts( $_[ 0 ] ) } } );
With dt2ts():

sub dt2ts
{
    my $dt = shift;

    return DateTime::Format::Strptime -> new() -> format_datetime( $dt );
}
You can have a text or XML field, and with coerce_from / coerce_to make it appear to be something else. Anything, really.
db_field
It happens that DB column names are not always precise or appropriate. You can give an attribute in your model a name different from the DB column name:
has 'product' => ( is => 'rw',
                   metaclass => 'LittleORM::Meta::Attribute',
                   isa => 'ExampleModel',
                   description => { db_field => 'pid' } );
db_field_type
Well, this is a mechanism to determine the correct SQL operation for the underlying DB column depending on its type.
has 'attrs' => ( is => 'rw',
                 metaclass => 'LittleORM::Meta::Attribute',
                 isa => 'Str',
                 description => { db_field_type => 'xml' } );
'xml' is the only known field type currently.
do_not_clear_on_reload
There is a reload() method, remember? This keyword causes LittleORM to skip the attribute when clearing on reload().
foreign_key
This is how FKs are defined. See FOREIGN KEYS section.
foreign_key_attr_name
Normally, you don't need this. An FK is assumed to be connected to the other model's PK. But if that's not the case, you can manually specify the corresponding attribute name from the other model.
ignore
Causes LittleORM to ignore this attribute. Lets you have arbitrary attributes in your class alongside the DB-related ones.
ignore_write
LittleORM then ignores the attribute only on writing: it is read from the DB, but never updated. If you have something you present with coerce_from, you might want this.
primary_key
Tells LittleORM that this column is a PK. Most models should have a PK.
sequence
Normally, you don't need this. Just make sure your PKs are of type serial and have sequences attached to them inside DB.
But you may also specify a sequence which will be used to obtain a value for the column when creating a new record (if no value is passed, of course).
LittleORM::Filter is an advanced version of LittleORM::Clause. A filter is a set of clauses associated with a model. Filters are also the main tool for joining tables in a query.
Suppose we need to select all books from all active authors:
my $authors = MyModel::Author -> filter( active => 1 );
my @books   = MyModel::Book -> filter( $authors ) -> get_many();
OK, what about all authors with books published before 2000?
my $books   = MyModel::Book -> f( published => { '<', '2000-01-01' } );
my @authors = MyModel::Author -> f( $books ) -> get_many();
Yeah, you can write f(), not filter(). Shorter that way.
The latter example has one flaw. If there is a one-to-many correspondence between authors and books, we might get duplicate authors. To avoid that:
my @authors = MyModel::Author -> f( $books ) -> get_many( _distinct => 1 );
Note how you don't need to specify the corresponding columns between the models. That's because you declared the FK between them earlier.
But sometimes there is no FK. In that case you can specify which column the filter corresponds to, and which column is returned from the filter. The code from the previous section:
my $authors = MyModel::Author -> filter( active => 1 );
my @books   = MyModel::Book -> filter( $authors ) -> get_many();
without an FK must be written as:
my $authors = MyModel::Author -> filter( active => 1,
                                         _return => 'id' );
my @books   = MyModel::Book -> filter( author => $authors ) -> get_many();
And you can join a table to itself. Sorry for the totally artificial example:
my $f = Metatable -> f( rgroup => 100500,
                        _clause => $c1, # passing additional clause
                        f01 => Metatable -> f( rgroup => 500100,
                                               _return => 'f02' ) );
my @recs = $f -> get_many();
You can connect filters after they have been created with connect_filter(). Same as above:
my $f  = Metatable -> f( rgroup => 100500,
                         _clause => $c1 );
my $f1 = Metatable -> f( rgroup => 500100,
                         _return => 'f02' );

$f -> connect_filter( f01 => $f1 );
Public methods you inherit from LittleORM::Model or LittleORM::GenericID:
_db_table()
Specify database table name your model works with.
reload()
Reload object instance from DB.
clone()
Create an object copy. The DB record is not copied; see copy() below.
get()
Select and return one object.
values_list()
@values = Class -> values_list( [ 'id', 'name' ],
                                [ something => { '>', 100 } ] );

# will return ( [ id, name ], [ id1, name1 ], ... )
get_or_create()
Try to get a record with the passed arguments. If none is found, calls create() and tries to create it.
get_many()
Get many records/objects.
count()
Get matching records count (integer).
create()
Create new record in DB. Returns newly created object.
update()
Actually write the changes you made to the object to the DB.
copy()
Actually copy the record. A new object corresponding to the new record is returned.
delete()
Delete records from DB.
With LittleORM::Clause:
clause()
Create new clause object. See LittleORM::Clause OBJECT section.
With LittleORM::Filter:
filter()
Create a new filter object. See the LittleORM::Filter OBJECT, JOINING TABLES, and MORE JOINING TABLES sections.
f()
Shortcut to filter().
1. Remember to init:

LittleORM::Db -> init( $dbh );
2. You can pass _debug => 1 and see what is going on.
Could be a bit outdated, but still.
Look here:
Eugene Kuzin, <eugenek at 45-98.org>, JID: <gnudist at jabber.ru>
The main drawback I am aware of is that it is heavy if you need to process tens of thousands of records, as every record gets created as an object.
Please report any bugs or feature requests to
bug-littleorm at rt.cpan.org, or through the web interface at. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes.
You can find documentation for this module with the perldoc command.
perldoc Little. | http://search.cpan.org/~eugenek/LittleORM-0.07/lib/LittleORM/Tutorial.pm | CC-MAIN-2016-44 | refinedweb | 2,632 | 64.3 |
I am trying to complete this program. It is executing the second part of it, in which every 5th word needs to be replaced with an underline. But I need to print out the text without the underlines first, then have it replaced with underlines. Also, the user has to enter the name of the file "spartan.txt"; then it will produce the original file, and then it would reproduce it with the underlined section. I am not asking for the answer, JUST some guidance on what may have gone wrong. Thank you.
EXAMPLE on how it should show:
"On the banks of the ______ Cedar, There's a school ______ known to all; Its ________ is winning, And those ______ play good ball; Spartan ______ are never beaten, ...."
Only the underlined part is coming out.
#include <iostream> // include standard I/O library
#include <fstream>  // include standard file library
#include <iomanip>  // include IO manipulators
#include <string>   // include C++ string class
#include <cctype>   //

using namespace std; // access standard namespace

int main ()
{
    ifstream Input;
    string fileName;

    // variables
    char name;
    char last = ' ';
    int blank = 1;

    // book operators
    bool is5blank = false;
    bool print = false;

    // constants
    const string UNDERLINE = " ________";

    cout << "Please enter the file name with .txt at the end" << endl;
    cin >> fileName;
    cout << "\n==================" << endl;
    cout << "Cloze Version of Text" << endl;
    cout << "==================" << endl << endl;

    Input.open(fileName.c_str());
    while (!Input)
    {
        cout << "Please re-enter the file name" << endl;
        cin >> fileName;
        cout << endl;
        Input.open(fileName.c_str());
    }

    Input.clear(); // reset read pointer to beginning of file
    Input.seekg(0L, ios::beg);
    cout << endl << endl;

    while ((name = Input.get()) != EOF)
    {
        if (isalpha(name))
        {
            if (blank == 0)
            {
                if (print == false)
                {
                    cout << UNDERLINE;
                    print = true;
                }
            }
            else
                cout << name;
        }
        else
        {
            if (isalpha(last))
                blank++;
            if (blank == 5)
            {
                blank = 0;
                print = false;
            }
            cout << name;
        }
        last = name;
    }

    Input.close();
    return 0;
} | https://www.daniweb.com/programming/software-development/threads/315576/console-print | CC-MAIN-2018-47 | refinedweb | 301 | 71.75
/*
** (c) COPYRIGHT MIT 1995.
** Please first read the full copyright statement in the file COPYRIGH.
*/
The WebDAV protocol is a set of extensions to the HTTP/1.1 protocol that allows clients to perform remote web content authoring operations. This extension is defined in RFC 2518. WebDAV aims to provide an open architecture at the protocol level for developing new distributed authoring tools on the web, with special emphasis on the collaborative authoring of web pages (see RFC 2518). WebDAV defines operations over properties, collections, namespaces and overwriting protection, and for these operations it defines new methods, headers, and request and response entity bodies. Nevertheless, the versioning features present in the original proposition have been moved to the IETF Delta-V working group, which aims to extend WebDAV and HTTP/1.1 with those features (see E. J. Whitehead's paper "The future of Distributed Software Development on the Internet").
More information about WebDAV can also be found at WebDAV.org.
#ifndef WWWDAV_H
#define WWWDAV_H

The WebDAV protocol defined 7 new HTTP methods and also some new request headers. The WWWDAV library defines high-level functions to manipulate those new elements.
#ifdef HT_DAV
#include "HTDAV.h"
#endif /* HT_DAV */
#ifdef __cplusplus
} /* end extern C definitions */
#endif

#endif | http://www.w3.org/Library/src/WWWDAV.html | CC-MAIN-2013-48 | refinedweb | 200 | 50.02
Hi all,
I seem to have backed myself into a corner and I cannot easily upgrade
from a custom 0.7.4 installation to the default 0.8.2. Any help I could
get would be greatly appreciated. Below is an outline of the problem.
Current installation:
0.7.4 with Ed Anuff's custom composite comparators.
CF Comparator Type : "DynamicComposite"
New installation:
0.8.2 with Cassandra's native DynamicComposite column type.
CF Comparator Type:
"DynamicComposite))"
I cannot simply upgrade Cassandra, this fails because the comparator is
incorrectly defined for version 0.8.1 and on. My issue is that the
column family definition has changed, the bytes that are stored are
compatible, I simply need to re-define the CF and migrate the column
data over.
Initially I was thinking I could perform a read from the rows and
columns from a single 0.7 node, then insert them into the 0.8 cluster,
however I cannot have 2 different versions of the thrift API running in
the same java JVM due to namespace conflicts.
Idea is to perform the following steps, if anyone has any better
suggestions, they would be greatly appreciated.
1. Back up all my CFs that use the dynamic composite, and copy the
SSTable out of my Keyspace's data directory to tmp
2. Drop all CFs that use dynamic composite
3. Re-create the CFs with the new comparator definition
4. Using an external program directly read the 0.7 SSTables (without
starting the cassandra daemon) and insert the rows and columns into the
0.8 cluster via thrift.
Can anyone point me at a good example for reading rows and columns
directly from the SSTables without using the thrift api?
Thanks,
Todd | http://mail-archives.apache.org/mod_mbox/incubator-cassandra-user/201108.mbox/%3C1312333592.29615.11.camel@greenlantern.local%3E | CC-MAIN-2015-32 | refinedweb | 290 | 58.99 |
Mike Vowels continues his ‘journey of discovery’ - Goal is helping injured veterans
- Written by Lisa Allen, Valley View Editor

Courtesy photo: Mike Vowels on the ski slopes of Sun Valley last spring.

"I have found there’s no expiration date on what you are capable of doing," Mike Vowels said recently over coffee at Duvall Starbucks.
One point he was making was, despite his own physical limitations, he has been able to live a full life and is continuing to challenge himself, along with developing plans to help others similarly afflicted.
But what really changed his focus and got him on a new path, he pointed out, was the fact that he had undergone a recent epiphany of sorts, starting with a bout of depression, which made him realize he could, and should, do more that he thought he could, or even wanted to do.
But first, a little background.
For those who missed out on Connie Berg’s two-part story earlier this year on Mike’s life so far, it explained how he is known locally for creating a beautiful and "green" landscaped yard and home, all from a wheelchair that he has had to use since a skiing accident at age 29.
The two articles told of Mike’s trip to Sun Valley last spring, and with it a return (using adaptive equipment) to skiing, a sport he once loved and dominated as a freestyle skier but had completely turned his back on after the accident.
Now 58, he is certain he will accomplish the next goals he has set for himself.
But that doesn’t mean he isn’t nervous.
"Right now, I’m a bit afraid but exhilarated at the same time," he explained. "When I went to Sun Valley, it was like I cracked the door open and looked in. But now I feel there is so much ahead of me with the Mount Rainier thing (he plans to use a self-powered ascent-sled to climb the mountain and then mono-ski down. He would be the first to mono-ski the Muir snowfields)."
That adventure is planned for late spring or early summer of 2014, with a team. He will be self-ascending nearing 5,000 feet to the 10,080 foot elevation of Mount Rainier with Camp Muir as the destination. But before that journey is undertaken, another trip to Sun Valley is planned for next week, and will be much different from the last. He used a mono-ski there for the first time, and he was surrounded by dozens of fellow skiers and friends.
This time he is determined to do it all with as little help as possible. The only one going with him will be his friend John Tardiff, a former Alpental ski instructor.
"When I went to Sun Valley before, I had plenty of people to help me up when I fell," he said. "It was wonderful on a social level and life-changing for me and all of them, but this next time I want to be able to pick myself up all by myself, and I will be going down a steeper slope so that should make it easier. Learning this is like being a little kid again.
"It’s like being a time traveler – after I got hurt I put that skier away in a glass bottle and for me to come back after almost three decades, it’s like taking it out one piece at a time. And over the years they have made great progress on the mono-skis. I will be using a donated one from K2 Sports.
"I can’t wait to fall in love with skiing again."
Mike and John will meet up with a filmmaker in Sun Valley they have been working with where they will continue their filming of "Return to Paradise," a documentary ski film that demonstrates that "recovery has no deadlines and no expiration date; it is never too late," as Mike explains in a statement on the crowd funding site Projekt Karma which he is using to promote the film.
He held a successful fundraiser after his first trip to raise money to pay for the filmmaker, travels and lodging. When the film is done, all net revenue raised from showings and lectures will go to fund a non-profit adaptive skiing program, earmarked for injured vets from back as far as the Vietnam conflict to those injured in recent wars. They will learn how to ski using adaptive equipment that is customized to meet their individual needs, Mike says.
"That’s my cause," he said. "I never served (in the military) and I missed Vietnam so this is my chance to serve. I taught skiing for fourteen years (before the injury). I love teaching and mentoring. I want to serve a purpose. I want to make this film interesting and marketable – everyone can identify with soldiers after 9/11. "
Even the name of the website seems made-to-order. According to the online dictionary Wikipedia, "Karma" is defined as "A word meaning the result of a person’s actions as well as the actions themselves," certainly appropriate in this case.
Anyone who wishes to learn more about this cause, to donate or watch the inspiring film trailer can visit, or check Mike’s Facebook page. | http://www.nwnews.com/index.php?option=com_content&view=article&id=9037:acoustic-cadence-celtic-trio-to-play-at-alexas-cafe&catid=37:events&Itemid=83 | CC-MAIN-2016-44 | refinedweb | 893 | 74.93 |
A seemingly simple change to fix a small bug lead me to some interesting software design choices. I’ll try to explain.
In the new beta of coverage.py, I had a regression where the “run --append” option didn’t work when there wasn’t an existing data file. The problem was code in class CoverageScript in cmdline.py that looked like this:
if options.append:
self.coverage.combine(".coverage")
self.coverage.save()
If there was no .coverage data file, then this code would fail. The fix was really simple: just check if the file exists before trying to combine it:
if options.append:
if os.path.exists(".coverage"):
self.coverage.combine(".coverage")
self.coverage.save()
(Of course, all of these code examples have been simplified from the actual code...)
The problem with this has to do with how the CoverageScript class is tested. It’s responsible for dealing with the command-line syntax, and invoking methods on a coverage.Coverage object. To make the testing faster and more focused, test_cmdline.py uses mocking. It doesn’t use an actual Coverage object, it uses a mock, and checks that the right methods are being invoked on it.
The test for this bit of code looked like this, using a mocking helper that works from a sketch of methods being invoked:
self.cmd_executes("run --append foo.py", """\
.Coverage()
.start()
.run_python_file('foo.py', ['foo.py'])
.stop()
.combine('.coverage')
.save()
""", path_exists=True)
This test means that “run --append foo.py” will make a Coverage object with no arguments, then call cov.start(), then cov.run_python_file with two arguments, etc.
The problem is that the product code (cmdline.py) will actually call os.path.exists, and maybe call .combine, depending on what it finds. This mocking test can’t easily take that into account. The design of cmdline.py was that it was a thin-ish wrapper over the methods on a Coverage object. This made the mocking strategy straightforward. Adding logic in cmdline.py makes the testing more complicated.
OK, second approach: change Coverage.combine() to take a missing_ok=True parameter. Now cmdline.py could tell combine() to not freak out if the file didn’t exist, and we could remove the os.path.exists conditional from cmdline.py. The code would look like this:
if options.append:
self.coverage.combine(".coverage", missing_ok=True)
self.coverage.save()
and the test would now look like this:
self.cmd_executes("run --append foo.py", """\
.Coverage()
.start()
.run_python_file('foo.py', ['foo.py'])
.stop()
.combine('.coverage', missing_ok=True)
.save()
""", path_exists=True)
Coverage.combine() is part of the public API to coverage.py. Was I really going to extend that supported API for this use case? It would mean documenting, testing, and promising to support that option “forever”. There’s no nice way to add an unsupported argument to a supported method.
Extending the supported API to simplify my testing seemed like the tail wagging the dog. I’m all for letting testing concerns inform a design. Often the tests are simply proxies for the users of your API, and what makes the testing easier will also make for a better, more modular design.
But this just felt like me being lazy. I didn’t want combine() to have a weird option just to save the caller from having to check if the file exists. I imagined explaining this option to someone else, and I didn’t want my future self to have to sheepishly admit, “yeah, it made my tests easier...”
What finally turned me back from this choice was the principle of saying “no.” Sometimes the best way to keep a product simple and good is to say “no” to extraneous features. Setting aside all the testing concerns, this option on Coverage.combine() just felt extraneous.
Having said “no” to changing the public API, it’s back to a conditional in cmdline.py. To make testing CoverageScript easier, I use dependency injection to give the object a function to check for files. CoverageScript already had parameters on the constructor for this purpose, for example to get the stand-in for the Coverage class itself. Now the constructor will look like:
class CoverageScript(object):
"""The command-line interface to coverage.py."""
def __init__(self, ..., _path_exists=None):
...
self.path_exists = _path_exists or os.path.exists
def do_run(self, options, args):
...
if options.append:
if self.path_exists(".coverage"):
self.coverage.combine(".coverage")
self.coverage.save()
and the test code can provide a mock for _path_exists and check its arguments:
self.cmd_executes("run --append foo.py", """\
.Coverage()
.start()
.run_python_file('foo.py', ['foo.py'])
.stop()
.path_exists('.coverage')
.combine('.coverage')
.save()
""", path_exists=True)
Yes, this makes the testing more involved. But that’s my business, and this doesn’t change the public interface in ways I didn’t like.
When I started writing this blog post, I was absolutely certain I had made the right choice. As I wrote it, I wavered a bit. Would missing_ok=True be so bad to add to the public interface? Maybe not. It’s not such a stretch, and a user of the API might plausibly convince me that it’s genuinely helpful to them. If that happens, I can reverse all this. That would be ok too. Decisions, decisions...
Have you considered not using a mock at all? This code looks to me (to use the language of Gary Bernhardt) like it falls into the outer “procedural glue” of Coverage, not the functional core on the inside, and that therefore you should simply test it as part of your handful of integration tests where the code runs real I/O against real files.
Another way to look at it is that, aside from the do_run method and one use of os.path.exists, cmdline.py is functionally pure. It takes the command-line arguments and builds a work plan or outputs useful diagnostics/docmentation.
The test is painful to add because os.path.exists exercises a side effect to pull in the enormous super-global variable that is system state. It's painful and out-of-place in the existing tests because the rest don't need it. This object isn't about producing side effects, it's about parsing user input.
Imagine if the data of what work to do were separated from the side-effectful methods that read system state and kick off test jobs. You'd have your existing tests of all the command-line arg parsing that's going to call methods with different args and assert on their return values, and for the new code you'd have a new test file that's going to pass those data objects in, mock the outside world, and assert that those mocks were called correctly (maybe with a few assertions on return values).
The library "Click" has good support for just such tests as above, e.g. invoking random CLI commands with temporary filesystems etc.
Probably overkill to switch your entire app to a different command-line argument parser for a single test, but thought it might be useful for others...
Also, another way to fix your .combine() call would be to simply create an empty file if there isn't a .coverage already. Also kind of a distasteful hack. Yet another would be to add an "(append=True)" option to .save() -- which might be easier for others using coverage via Python instead of the CLI.
@brandon, I guess I am not as well-versed in the teachings of Gary as I should be. :) I don't understand what makes this code ill-suited to mocking? I certainly don't understand why its place in the coverage.py world makes it need integration tests. As Peter points out, it's nearly pure-functional.
On a purely practical level, the command-line parsing has a number of combinations to try, it would add a lot of time to do them all with integration tests.
@Peter: I'm not following your description of a better way to approach this. It sounds kind of like what I've already done, so I must be missing something.
Let me unpack my comment a bit. If we want to test a pure function — say add_pure, which simply returns the sum of its arguments — we can test it by passing in various arguments and asserting on the returned value. "Pure" functions like add_pure depend only on their arguments and change nothing else in the system. Such a function is "referentially transparent": we could cache or replace any call to it with the sum of its arguments.
If instead we want to test an effectful variant — say add_effectful, which also writes the result somewhere — we're in a whole new world of testing, because add_effectful has "side effects". This side effect writes to the filesystem and is not referentially transparent. Our test can't just assert on the return value. In an isolated test we'd have to mock and assert that it was called properly; in an integrated test we'd have to check the effect happened (e.g. the file now exists with the right contents).
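The code snippets this comment refers to ("test this") were evidently lost when the page was converted to text. A minimal reconstruction of the two styles being contrasted — only the names add_pure and add_effectful come from the comment; every other detail here is an assumption — might look like:

```python
import os
import tempfile
from unittest import mock

def add_pure(a, b):
    # Depends only on its arguments and changes nothing else:
    # referentially transparent.
    return a + b

def add_effectful(a, b, path):
    # Side effect: writes to the filesystem, the "super-global" state.
    total = a + b
    with open(path, "w") as f:
        f.write(str(total))
    return total

# Pure function: pass arguments, assert on the return value.
assert add_pure(2, 3) == 5

# Effectful function, integrated style: check the effect really happened.
with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "sum.txt")
    add_effectful(2, 3, p)
    with open(p) as f:
        assert f.read() == "5"

# Effectful function, isolated style: mock the outside world and assert
# the side effect was requested correctly.
with mock.patch("builtins.open", mock.mock_open()) as m:
    add_effectful(2, 3, "ignored.txt")
    m.assert_called_once_with("ignored.txt", "w")
    m().write.assert_called_once_with("5")
```

Either way, the effectful version needs noticeably more test machinery than the pure one — which is the point of the comment.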
This side effect is really obvious because it crosses the process boundary by writing to the filesystem, but it's still a side effect if it updated a global variable, ran a query (even a SELECT) against the database, or did anything else that another method might detect as a change in state.
The filesystem is really tricky because it feels solid and standard, but it is actually a super-global variable. It is state that is accessible from anywhere in your program, and even from other runs of your program.
I went and skimmed coverage.py and saw that only do_run and one use of os.path.exists have side effects. do_run is effectful because it's calling arbitrary code that could do anything (side effects are transitive - if you call a method with side effects, then to any external observer you are considered to have side effects). os.path.exists is effectful because it's reading external state.
If not for those two bits of code, you could test all of coverage.py in the simple style where you pass args and assert, rather than integrate or set up mocks and expect.
The code you contemplated adding was also effectful, and that seemed to be the thing you were struggling with. I think the whole piece of code would benefit from separating out sections that have side effects.
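One way to act on that suggestion is to split the work into a pure "planning" step and a thin effectful "execution" step. This is only a toy sketch — every name in it is invented here, not coverage.py's real API:

```python
from dataclasses import dataclass

@dataclass
class Plan:
    combine: bool    # should coverage.combine() be called?
    data_file: str

def build_plan(append_requested, data_file_exists):
    # Pure: both facts about the world arrive as plain values,
    # so no filesystem state is consulted here.
    return Plan(combine=append_requested and data_file_exists,
                data_file=".coverage")

def execute(plan, coverage):
    # All side effects live in this thin glue layer.
    if plan.combine:
        coverage.combine(plan.data_file)
    coverage.save()

# The decision logic can now be tested with plain asserts, no mocks:
assert build_plan(True, True).combine is True
assert build_plan(True, False).combine is False
assert build_plan(False, True).combine is False
```

execute() would still need a mock or an integration test, but the surface area needing that treatment shrinks to a few lines.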
If you want more examples and a longer explanation, I gave a talk on this earlier this year. I used Ruby/ActiveRecord examples, but if you squint it applies equally well to Python/Django ORM.
@Peter: I see your point, but I'm not quite with you on the conclusion. I can definitely see that os.path.exists is a side-effect, and weighed the pros and cons of using it in this blog post. Sounds like you would have changed the public API to make the testing simpler.
I'm not sure how to remove the rest of the side-effects from CoverageScript. After all, it's entire purpose is to actually *do* things, like run your program. I could make it purely functional, in that it would write some sort of other program that would then get executed. That seems like a long way to go to get nice testing. It would essentially move the funky mocking sketch thing I have in test_cmdline.py into cmdline.py itself. What is the value in that?
To me the missing_ok API feels slightly better, because it doesn't rely on the Look Before You Leap nature of os.path.exists().
In practice the race condition between the existence check and the file maybe disappearing seems unlikely to cause pain.
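For what it's worth, that race can also be sidestepped without adding an API parameter, by asking forgiveness instead of permission. A generic sketch (not coverage.py's actual code; combine here is just any callable that opens the file):

```python
import os

def combine_lbyl(data_file, combine):
    # Look Before You Leap: racy if the file can disappear
    # between the check and the use.
    if os.path.exists(data_file):
        combine(data_file)

def combine_eafp(data_file, combine):
    # Easier to Ask Forgiveness than Permission: just try, and
    # treat a missing file as "nothing to combine".
    try:
        combine(data_file)
    except FileNotFoundError:
        pass

calls = []
reader = lambda path: calls.append(open(path).read())

combine_eafp("definitely-missing.coverage", reader)
combine_lbyl("definitely-missing.coverage", reader)
assert calls == []  # the missing file was handled gracefully both ways
```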
| https://nedbatchelder.com/blog/201508/small_choices_big_decisions_coverage_run_append.html | CC-MAIN-2020-05 | refinedweb | 1,940 | 66.64
I'm learning how to read and write to external text files with C#(yes, this is classwork, getting that out of the way now), and I've kind of hit a wall. See, the first part of this assignment was to create a program that can write customer records to an external file, after creating a Customer class with ID number, name, and current balance fields, which I did.
My predicament is that I now have to create a program to search the file I created with that first one, and print out each line to the console with a balance greater than or equal to a minimum balance supplied by a user. I'm drawing a blank on how to actually do this.
This is as far as I've gotten:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using static System.Console;
using System.IO;
namespace FindCustomerRecords2
{
class Program
{
static void Main(string[] args)
{
//Intialize variables
double minBal;
string line;
//Prompt user to enter minimum balance and store in minBal
Write("Please enter a minimum balance to pull up\n" +
"all customers that owe at least that much.");
minBal = Convert.ToDouble(ReadLine());
//Open data stream to CustomerRecords.txt
StreamReader file = new StreamReader(@"H:\C#\Visual C# 2015\Ch. 14 - Files and Streams\CustomerRecords.txt");
//Loop through CustomerRecords.txt, find values
//greater than or equal to minBal, and display
//each line with such a value
while((line = file.ReadLine()) != null)
{
}
}
}
}
fields = recordIn.Split(DELIM);
id = Convert.ToInt32(fields[0]);
name = fields[1];
balance = Convert.ToDouble(fields[2]);
Take a closer look inside of your while loop:
while((line = file.ReadLine()) != null) { }
at this point inside of the loop, you should have the following data:
45454, Glenn Matthews, 54.87
If not, then you created your CSV file incorrectly. Let's assume for now that you created it correctly. How would you go about getting the value out of your line variable in order to compare it to your minBal variable? If you can figure that part out then you can probably solve the rest on your own.
What tools (such as string methods) do you have at your disposal that could help do this? | https://codedump.io/share/gTQX7fOsH6tE/1/need-help-pulling-data-from-a-file | CC-MAIN-2018-22 | refinedweb | 374 | 66.94 |
i need on this coding to add two variable data which i have in csv file format. please help me.
i need on this coding to add two variable data which i have in csv file format. please help me.
Okay. So if your values are stored in a CSV file, you are going to need to load that file. You can use
loadStrings() to do that.
Then you will need to parse the loaded data. You will need to loop over each line. Use a
for loop.
I assume that each line will have two numbers on it, separated by a comma (as it is a CSV file). You can use
split() to parse the line and get the two strings that represent the values. Then you can cast those values to numbers, using either
int() or
float().
Next, you will probably want to associate each pair of values with a Ball object. You could pass these values as parameters to the
new Ball() constructor call. Then you would just need to modify the
Ball class (which you have, I assume? You haven’t posted it…) to accept these two parameters.
You would then modify the
Ball class to use the stored values… maybe as a size or color value when drawing a Ball? That’s up to you.
Look in the reference for each of the keywords and key functions I have mentioned to get a feel for how to use them. See how much of this process you can get done yourself, and post the code of your attempt for more help.
import toxi.geom.*; //DECLARE - store all the balls/global variable ArrayList ballCollection; // Setup the Processing Canvas void setup() { size(600, 600); smooth(); String [] data; //d //INITIALIZE ballCollection=new ArrayList(); for (int i = 0; i < 100; i++) { Vec3D origin = new Vec3D (random(width), random(130), 0); Ball myBall = new Ball(origin); ballCollection.add(myBall); } } // Main draw loop void draw() { background(0); //CALL FUNCTIONALITY for (int i = 0; i < ballCollection.size(); i++) { Ball mb = (Ball) ballCollection.get(i); mb.run(); } } class Ball { // GLOBAL VARIABLES - LOCATION SPEED Vec3D loc = new Vec3D (0, 0, 0); Vec3D speed = new Vec3D(random(-2, 2), random(-2, 2), 0); Vec3D acc = new Vec3D(); Vec3D grav = new Vec3D(0, 0.2, 0); //CONSTRUCTOR - HOW DO YOU BUILD THE CLASS - GIVE VARIABLES A VALUE Ball(Vec3D _loc) { loc = _loc; } //FUNCTIONS - BREAK DOWN COMPLEX BEHAVIOUR INTO DIFFERENT MODULES void run() { display(); move(); bounce(); //gravity(); //Create a line between the balls lineBetween(); //flock = complex behaviour. 
Craig Reynolds Boids //flock(); } /* void flock(){ //3 functions of flock : based on vector maths separate(5); cohesion(0.001); align(1); } void align(float magnitude){ Vec3D steer = new Vec3D(); int count = 0; for(int i = 0; i < ballCollection.size();i++) { Ball other = (Ball) ballCollection.get(i); float distance = loc.distanceTo(other.loc); if(distance > 0 && distance < 40) { steer.addSelf(other.speed); count ++; } } if(count > 0){ steer.scaleSelf(1.0/count); } steer.scaleSelf(magnitude); acc.addSelf(steer); } //cohesion =opposite of seperate - keep together - void cohesion(float magnitude){ Vec3D sum = new Vec3D(); int count = 0; for(int i = 0; i < ballCollection.size();i++) { Ball other = (Ball) ballCollection.get(i); float distance = loc.distanceTo(other.loc); if(distance > 0 && distance < 40) { sum.addSelf(other.loc); count++; } } if (count > 0){ sum.scaleSelf(1.0/count); } Vec3D steer = sum.sub(loc); steer.scaleSelf(magnitude); acc.addSelf(steer); } void separate(float magnitude){ Vec3D steer = new Vec3D(); int count = 0; for(int i = 0; i < ballCollection.size();i++){ Ball other = (Ball) ballCollection.get(i); float distance = loc.distanceTo(other.loc); if(distance > 0 && distance < 30){ //move away from another ball // calculate a vector of difference Vec3D diff = loc.sub(other.loc); //increases smoothness of the steer diff.normalizeTo(1.0/distance); steer.addSelf(diff); count++; } } if (count > 0){ steer.scaleSelf(1.0/count); } steer.scaleSelf(magnitude); acc.addSelf(steer); } */ void lineBetween() { //BallCollection for (int i = 0; i < ballCollection.size(); i++) { Ball other = (Ball) ballCollection.get(i); float distance = loc.distanceTo(other.loc); if (distance > 0 && distance < 100) { stroke(255, 0, 0); strokeWeight(0.4); line(loc.x, loc.y, other.loc.x, other.loc.y); } } } void gravity() { speed.addSelf(grav); } void bounce () { if (loc.x > width) { speed.x = speed.x * -1; } if (loc.x < 0) { speed.x = speed.x * -1; } if (loc.y > height) { speed.y = 
speed.y * -1; } if (loc.y < 0) { speed.y = speed.y * -1; } } void move() { //steering behaviours need accelartion :store the movements for the ball speed.addSelf(acc); speed.limit(2); loc.addSelf(speed); acc.clear(); } void display() { stroke(0); ellipse(loc.x, loc.y, 5, 5); } }
sir this is full code. how to implement the function of coding. help me please.
Alright. It’s good to have code to start out with. We didn’t actually need to see the Ball class yet, as all of the data loading and parsing is probably going to happen in
setup().
Since you have an array of Strings called
data handy, we can use that to load the CSV file’s contents into.
data = loadStrings( "your_filename.csv" );
You can even look at the first line of the loaded data:
println( data[0] );
Try using
split() on that line to get both of those values! Try it yourself! Post the code of YOUR ATTEMPT at this. Don’t think that you can get away with not trying to do it yourself! It’s not cunning or sneaky or even polite…
Attempt a
for loop too if you are feeling bold…
void setup() { size(600, 600); String [] data; data = loadStrings( "Spellman.csv" ); println("there are " + data.length + " data"); for (int i = 0; i < data.length; i++) { println(data[i]); } }
Sir is this correct way i’m doing? Guide me sir. thank you lot previous guide.
Kindly help me above coding…
So far so good. Now, you probably see some output, right? Something like:
46.383,62.939, 39.73,6.828, ...
Not this exact output, of course… but the contents of your CSV file. A bunch of numbers, separated by commas.
So, as I have said, the next step is to use
split(). If you took the time to find the Reference page for this function, you would already have all the information you need.
Again, try to understand what the
split() function does, and then attempt to use it in your own code yourself. The goal here is to get just ONE number per line!
void setup() { size(600, 600); String [] data; data = loadStrings( "Spellman.csv" ); println("there are " + data.length + " data"); for (int i = 0; i < data.length; i++) { // Tell which line we are looking at now. println( "Now looking at line # " + i + ", which is " + data[i] ); // Parse this line... somehow... float[] numbers_on_this_line = ???; // Try to fill in the rest of this line. // Show every number we found on this line. for( int j=0; j < numbers_on_this_line.length; j++){ println( "" + i + ", " + j + " => " + numbers_on_this_line[j] );} } }
So far i get one issues on this. I’m stuck on this issue help me.
The file "Spellman.csv" is missing or inaccessible, make sure the URL is valid or that the file has been added to your sketch and is readable.
What is the name of your CSV file? Is it in your sketch’s data folder?
The file name is
Spellman.csv. In sketch’s data folder.
Okay, so it’s not a missing file then.
Is the folder accessible?
Did you add the file to your sketch?
Is the file readable?
I mean, these are things that the error message you got suggested you try…
That folder can accessible.
How to add the to my sketch? For double confirmation i have click sketch option and select show sketch folder its shown that place i have save.
that file is readable.
Alright. Try saving your sketch and restarting Processing.
i have restart but still same issues.
In the menus, click on “Sketch” > “Add File…”, and try adding it again to make sure your file has been added to your sketch properly.
Make sure you have the name and location right.
“Spellman.csv” is not the same as “SpellMan.csv” or “Spellman.CSV”!
Thank you sir. i get the result right now based on this coding.
void setup() { size(600, 600); String [] data; data = loadStrings( "Spellman.csv" ); println("there are " + data.length + " data"); for (int i = 0; i < data.length; i++) { println(data[i]); } }
The output result is
0.35887,0.1546 0.3588,0.13535 0.3588,0.18143 0.3588,0.04211 0.3588,0.24413 0.3588,0.08708 0.35864,0.18746 0.35849,0.13674 0.35785,0.1546 0.35768,0.09907 0.35768,0.13535 0.35768,0.24413 0.35768,0.08708 0.35768,0.02179 0.35768,0.08708 0.35768,0.18746 0.35768,0.11214 0.35768,0.06695 0.35768,0.06695 0.35768,0.2143 0.35768,0.11214 0.35768,0.13535 0.35768,0.08708 0.35768,0.12725 0.35768,0.08708 0.35768,0.15433 0.35768,0.08708 0.35768,0.09907 0.35768,0.09907 0.35722,0.09907 0.35711,0.09907 0.35711,0.17422 0.35711,0.08708 0.35711,0.10662 0.35711,0.17106 0.35711,0.06607 0.35694,0.09907 0.35694,0.04211 0.35694,0.08708 0.35694,0.13535 0.35694,0.09907 0.35694,0.08708 0.35678,0.10662 0.35662,0.07215 0.35662,0.13535 0.35623,0.09907
Based on this
split() function. I’m confuse on it why i need split data.? Because i need implement two variable data into
class ball coding.
Right now your data is still tied up in a String. That is, a line of your data is, for example, this:
"0.32768,0.11214"
This is not two numbers! It is one String. First, you want to separate that String into two Strings, each of which represents a number. This is what split does.
"0.32768" and "0.11214"
Then, as I have said before, you will want to turn these Strings into numbers. Since they are numbers with decimals, you will want to use floats, or floating point, numbers.
String line_with_numbers = "0.32768,0.11214"; String[] split_line = ???; float a,b; a = float( ??? ); b = float( ??? ); println( "" + a " " + b );
Go back to this post now that your CSV file loads… Agents coding work but need add on two variable data
I have make something
split function code.
void setup() { size(600, 600); String [] data; data = loadStrings( "Spellman.csv" ); println("there are " + data.length + " data"); for (int i = 0; i < data.length; i++) { //data = int(split(stuff[0], ',')); //println(data[i]); println( "Now looking at line # " + i + ", which is " + data[i]); // Tell which line we are looking at now. String [] splitNums = split(data, " " ); String data = splitNums.length; for (int j=0; j < splitNums.length; j++) { myNums.append(splitWords[j]; } }
But i never get any output result on this coding. Please help me. | https://discourse.processing.org/t/agents-coding-work-but-need-add-on-two-variable-data/234 | CC-MAIN-2022-27 | refinedweb | 1,830 | 78.04 |
#;
}
if(playerLocation == 2) {
cout << "You jumped down and landed with a big splash in the crystal water\n While you are in the water you see a small cave\n";
cout << "Do you enter the cave or do rise to the top of the water and walk away?\n 1: Enter the cave\n 2: Rise out of the water and walk away";
cin >> userInput;
if(userInput == 1) playerLocation = 4;
else if(userInput == 2) playerLocation = 5;
}
if(playerLocation == 3) {
cout << "You walk away from the cliff and find yourself walking down a hill.\nYou suddenly stop and look behind you where you see a lion starting to chase you\nTo your right there is a shallow hole you could jump down that the lion couldnt fit\n OR you could run straight ahead to some vines hanging down infront of another cliff\n\n";
cout << "Do you go and hide in the hole or do you jump and swing on the vines?";
cin >> userInput;
if(userInput == 1) playerLocation = 6;
else if(userInput == 2) playerLocation = 5;
}
}
return 0;
}
1234567891011121314151617
//wait.cpp
//include for _getch()
#include <conio.h>
PressEnterToContinue()
{
std::cout << "Press [ ENTER ] to continue...";
char key = '\0';
// 13 in ascii for enter. Look below.
while( key != 13 )
{
key = _getch();
}
} | http://www.cplusplus.com/forum/beginner/90297/ | CC-MAIN-2016-26 | refinedweb | 208 | 64.24 |
I am trying to record my screen while checking individual pixels in the recording for colors. I have successfully created a live recording of my screen, but when I try to check the pixels of this recording I get this error:
TypeError: 'Image' object is not subscriptable. Here is my code:
import cv2
import numpy as np
from mss import mss
from PIL import Image

mon = {'left': 500, 'top': 850, 'width': 450, 'height': 30}

with mss() as sct:
    while True:
        screenShot = sct.grab(mon)
        img = Image.frombytes(
            'RGB',
            (screenShot.width, screenShot.height),
            screenShot.rgb,
        )
        px = img[10, 25]
        print(px)
        cv2.imshow('test', np.array(img))
        if cv2.waitKey(33) & 0xFF in (ord('q'), 27):
            break
If anyone has any ideas about what is wrong, please tell me. Thanks!
Not too long ago I wrote about sending emails in an Ionic Framework app using the Mailgun API. To get you up to speed, I often get a lot of questions regarding how to send emails without opening the default mail application from within an Ionic Framework application. There are a few things that could be done. You can either spin up your own API server and send emails from your server via an HTTP request or you can make use of a service.
To compliment the previous post I wrote for Ionic Framework, I figured it would be a good idea to demonstrate how to use Mailgun in an Ionic 2 application.
Mailgun, for the most part, is free. It will take a lot of emails before you enter the paid tier. Regardless, you’ll need an account to follow along with this tutorial.
Let’s start by creating a new Ionic 2 project. From the Command Prompt (Windows) or Terminal (Mac and Linux), execute the following:
ionic start MailgunApp blank --v2 cd MailgunApp ionic platform add ios ionic platform add android
A few important things to note here. The first thing to note is the use of the
--v2 tag. This is an Ionic 2 project, and to create Ionic 2 projects, the appropriate Ionic CLI must be installed. Finally, you must be using a Mac if you wish to add and build for the iOS platform.
This project uses no plugins or external dependencies. This means we can start coding our application.
Starting with the logic file, open your project’s app/pages/home/home.ts file and include the following code:
import {Component} from '@angular/core'; import {Http, Request, RequestMethod} from "@angular/http"; @Component({ templateUrl: 'build/pages/home/home.html' }) export class HomePage { http: Http; mailgunUrl: string; mailgunApiKey: string; constructor(http: Http) { this.http = http; this.mailgunUrl = "MAILGUN_URL_HERE"; this.mailgunApiKey = window.btoa("api:key-MAILGUN_API_KEY_HERE"); } send(recipient: string, subject: string, message: string) { var requestHeaders = new Headers(); requestHeaders.append("Authorization", "Basic " + this.mailgunApiKey); requestHeaders.append("Content-Type", "application/x-www-form-urlencoded"); this.http.request(new Request({ method: RequestMethod.Post, url: "" + this.mailgunUrl + "/messages", body: "[email protected]&to=" + recipient + "&subject=" + subject + "&text=" + message, headers: requestHeaders })) .subscribe(success => { console.log("SUCCESS -> " + JSON.stringify(success)); }, error => { console.log("ERROR -> " + JSON.stringify(error)); }); } }
There is a lot going on in the above snippet so let’s break it down.
First you’ll notice the following imports:
import { Http, Request, RequestMethod } from "@angular/http";
The Mailgun API is accessed via HTTP requests. To make HTTP requests we must include the appropriate Angular dependencies. More on HTTP requests can be seen in a previous post I wrote specifically on the topic.
Next we define our Mailgun domain URL and corresponding API key. For prototyping I usually just use the sandbox domain and key, but it doesn’t cost any extra to use your own domain. However, notice the following:
this.mailgunApiKey = window.btoa("api:key-MAILGUN_API_KEY_HERE");
We are making use of the
btoa function that will create a base64 encoded string. Having a base64 encoded string is a requirement when using the authorization header of a request.
Finally we get into the heavy lifting. The
send function is where all the magic happens.
The
send function expects an authorization header as well as a content type. The Mailgun API expects form data to be sent, not JSON data. This data is to be sent via a POST request.
This brings us into our UI. To keep things simple our UI is just going to be a form that submits to the
send function. Open your project’s app/pages/home/home.html file and include the following code:
<ion-header> <ion-navbar> <ion-title>Mailgun App</ion-title> </ion-header> <ion-content <ion-list> <ion-item> <ion-label floating>Recipient (To)</ion-label> <ion-input</ion-input> </ion-item> <ion-item> <ion-label floating>Subject</ion-label> <ion-input</ion-input> </ion-item> <ion-item> <ion-label floating>Message</ion-label> <ion-textarea [(ngModel)]="message"></ion-textarea> </ion-item> </ion-list> <div padding> <button block (click)="send(recipient, subject, message)">Send</button> </div> </ion-content>
The above form has two standard text input fields for the recipient and subject. There is a text area for the message body and a button for submitting the form data. The input elements are all bound using an
ngModel. This allows us to pass the fields into the
send function.
Give it a try, you should be able to send emails now via HTTP.
You just saw how to send emails without opening an email client on the users device. This could be useful for sending logs to the developer, or maybe a custom feedback form within the application. Previously I wrote about how to use Mailgun in an Ionic Framework application, but this time we saw how with Ionic 2, Angular, and TypeScript. | https://www.thepolyglotdeveloper.com/2016/05/send-emails-ionic-2-mobile-app-via-mailgun-api/ | CC-MAIN-2019-26 | refinedweb | 818 | 57.57 |
Searching – All built-in collections in Python implement a way to check element membership using in.. Learn More here.
Searching for an element
All built-in collections in Python implement a way to check element membership using in.
List
alist = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
5 in alist # True
10 in alist # False
Tuple
atuple = ('0', '1', '2', '3', '4')
4 in atuple # False
'4' in atuple # True
String
astring = 'i am a string'
'a' in astring # True
'am' in astring # True
'I' in astring # False
Set
aset = {(10, 10), (20, 20), (30, 30)}
(10, 10) in aset # True
10 in aset # False
Dict
dict is a bit special: the normal in only checks the keys. If you want to search in values you need to specify it. The same if you want to search for key-value pairs.
adict = {0: 'a', 1: 'b', 2: 'c', 3: 'd'}
1 in adict # True - implicitly searches in keys
'a' in adict # False
2 in adict.keys() # True - explicitly searches in keys
'a' in adict.values() # True - explicitly searches in values
(0, 'a') in adict.items() # True - explicitly searches key/value pairs
Section 71.2: Searching in custom classes: contains and iter
To allow the use of in for custom classes the class must either provide the magic method contains or, failing that, an iter-method.
Suppose you have a class containing a list of lists:
class ListList:
def init(self, value):
self.value = value
Create a set of all values for fast access
self.setofvalues = set(item for sublist in self.value for item in sublist)
def iter(self):
print('Using iter.')
A generator over all sublist elements
return (item for sublist in self.value for item in sublist)
def contains(self, value):
print('Using contains.')
Just lookup if the value is in the set return value in self.setofvalues
Even without the set you could use the iter method for the contains-check:
return any(item == value for item in iter(self))
Using membership testing is possible using in:
a = ListList([[1,1,1],[0,1,1],[1,5,1]])
10 in a # False
Prints: Using contains.
5 in a # True
Prints: Using contains.
even after deleting the contains method:
del ListList.contains
5 in a # True
Prints: Using iter.
Note: The looping in (as in for i in a) will always use iter even if the class implements a contains method.
Searching: Getting the index for strings: str.index(), str.rindex() and str.find(), str.rfind()
String also have an index method but also more advanced options and the additional str.find. For both of these there is a complementary reversed method.
astring = 'Hello on StackOverflow'
astring.index('o') # 4
astring.rindex('o') # 20
astring.find('o') # 4
astring.rfind('o') # 20
The difference between index/rindex and find/rfind is what happens if the substring is not found in the string:
astring.index(‘q’) # ValueError: substring not found astring.find(‘q’) # -1
All of these methods allow a start and end index:
astring.index('o', 5) # 6
astring.index('o', 6) # 6 - start is inclusive
astring.index('o', 5,7)#6
astring.index('o', 5,6)# - end is not inclusive
ValueError: substring not found
astring.rindex(‘o’, 20) # 20
astring.rindex(‘o’, 19) # 20 – still from left to right
astring.rindex('o', 4, 7) # 6
Searching: Getting the index list and tuples: list.index(), tuple.index()
list and tuple have an index-method to get the position of the element:
alist = [10, 16, 26, 5, 2, 19, 105, 26]
alist[1] # 16
alist.index(15)
ValueError: 15 is not in list
But only returns the position of the first found element:
atuple = (10, 16, 26, 5, 2, 19, 105, 26)
atuple.index(26) # 2
atuple[2] # 26
atuple[7] # 26 - is also 26!
Searching key(s) for a value in dict
dict have no builtin method for searching a value or key because dictionaries are unordered. You can create a function that gets the key (or keys) for a specified value:
def getKeysForValue(dictionary, value):
foundkeys = []
for keys in dictionary:
if dictionary[key] == value:
foundkeys.append(key)
return foundkeys
This could also be written as an equivalent list comprehension:
def getKeysForValueComp(dictionary, value):
return [key for key in dictionary if dictionary[key] == value]
If you only care about one found key:
def getOneKeyForValue(dictionary, value):
return next(key for key in dictionary if dictionary[key] == value)
The first two functions will return a list of all keys that have the specified value:
adict = {'a': 10, 'b': 20, 'c': 10}
getKeysForValue(adict, 10) # ['c', 'a'] - order is random could as well be ['a', 'c']
getKeysForValueComp(adict, 10) # ['c', 'a'] - dito
getKeysForValueComp(adict, 20) # ['b']
getKeysForValueComp(adict, 25) # []
The other one will only return one key:
getOneKeyForValue(adict, 10) # 'c' - depending on the circumstances this could also be 'a'
getOneKeyForValue(adict, 20) # 'b'
and raise a StopIteration-Exception if the value is not in the dict:
getOneKeyForValue(adict, 25)
StopIteration
Searching: Getting the index for sorted sequences:
bisect.bisect_left()
Sorted sequences allow the use of faster searching algorithms: bisect.bisect_left()1:
import bisect
def index_sorted(sorted_seq, value):
"""Locate the leftmost value exactly equal to x or raise a ValueError"""
i = bisect.bisect_left(sorted_seq, value)
if i != len(sorted_seq) and sorted_seq[i] == value:
return i
raise ValueError
alist = [i for i in range(1, 100000, 3)] # Sorted list from 1 to 100000 with step 3
index_sorted(alist, 97285) # 32428
index_sorted(alist, 4) # 1
index_sorted(alist, 97286)
ValueError
For very large sorted sequences the speed gain can be quite high. In case for the first search approximately 500 times as fast:
%timeit index_sorted(alist, 97285)
100000 loops, best of 3: 3 µs per loop %timeit alist.index(97285)
1000 loops, best of 3: 1.58 ms per loop
While it’s a bit slower if the element is one of the very first:
%timeit index_sorted(alist, 4)
100000 loops, best of 3: 2.98 µs per loop %timeit alist.index(4)
1000000 loops, best of 3: 580 ns per loop
Searching: Searching nested sequences
Searching in nested sequences like a list of tuple requires an approach like searching the keys for values in dict but needs customized functions.
The index of the outermost sequence if the value was found in the sequence:
GoalKicker.com – Python® Notes for Professionals 360 efficient approach. | https://codingcompiler.com/searching/ | CC-MAIN-2022-27 | refinedweb | 1,063 | 63.39 |
Note
Note: I'm generally a fan of Apple and own many of their products. My primary computer is a MacBook Pro, which I wouldn't trade for anything. But Apple has really screwed this one up.
Apple has hurt its users who develop in Java by declaring an end to Java support but continuing to update Java 6 using its automatic update program. This means that installing Oracle Java 7 JDK is a hassle to begin with, and if you blindly accept all of Apple's software updates (that is, you don't remember to uncheck Java updates when they appear) you'll have to re-do parts of your Java 7 installation from time to time because Apple's Java updates reset all the symlinks to point to its own Java 6. In any case, here's what you need to do to install and use Java 7 on Mac OS X and fix Apples "updates" if they slip by you. At the time of this writing the current Java 7 is update 17 (jdk1.7.0_17).
Get the Java 7 JDK (not JRE) from here:. You want the Mac OS X x64 download (I couldn't provide a direct link because the page I linked in the previous sentence requires you to accept a license agreement). It's a .dmg (disk image) containing a package installer. Double click the package installer, enter your password when prompted, and your new Java 7 JDK will be installed in a few minutes. Kind of. This step only places the JDK on your hard disk. Now you have to set a few symlinks so that you can actually use it.
Note
To understand the remaining steps you need to understand that Mac OS X is a Unix operating system, and you have to know a little Unix to get certain things done. You may want to read some background information first.
Apple installs its JDK in /System/Library/Java/JavaVirtualMachines. If Apple's JDK is the active one and the symlinks are set up for it (perhaps after you've followed these instructions but then allowed Apple's Software Update to install a Java update), you can find out with java -version or by seeing where the symlink at /usr/bin/java points. If java -version prints something like "Java 1.6" (the important part being the "1.6"), or the symlink at /usr/bin/java points to /System/Library/Java/JavaVirtualMachines (note that Apple installs under /System/Library, while Oracle installs under /Library), then you need to follow the rest of these instructions.
If you use an older version of Java, like Apple's Java 6, and try to write programs that use Java class files that were compiled with a newer version of Java, you may see an error like this:
warning: ./Location.class: major version 51 is newer than 50, the highest major version supported by this compiler. It is recommended that the compiler be upgraded.
This message means you're using Java 1.6 (version 50) with pre-compiled .class files that were compiled with Java 1.7 (version 51). In general, you can use the Java disassembler, javap, to find out which version of Java was used to compile a class:
Note
Note: the '$' character is the shell prompt. In the example shell interactions in this guide, the commands you type will appear after a shell prompt.
$ javap -verbose Location.class | head
Classfile /Users/chris/Downloads/PacmanSkeleton/Location.class
  Last modified Mar 8, 2013; size 1241 bytes
  MD5 checksum 2e22b98aa3c1fb2bb3a06e5cd4f2fd24
  Compiled from "Location.java"
public class Location
  SourceFile: "Location.java"
  minor version: 0
  major version: 51
  flags: ACC_PUBLIC, ACC_SUPER
Constant pool:
Note that I piped the output of javap -verbose through head because it prints a ton of information.
Note
Note: The following steps remove Apple's Java 6 so that Software Update stops reinstalling it over your Oracle JDK.
Remove Apple's JVMs:
$ sudo rm -rf /System/Library/Java/JavaVirtualMachines/
Remove installer records:
$ sudo rm /private/var/db/receipts/com.apple.pkg.JavaForMacOSX*
Remove installer receipts by editing /Library/Receipts/InstallHistory.plist and removing any <dict>...</dict> entries that contain references to Apple's Java. You can recognize these dict entries because they'll have child elements that contain com.apple. You can leave Oracle's installation receipts alone. It's a bit tedious editing this file. I found the dict elements by searching for ava. Note that you'll need to edit this file as the superuser, for example by doing:
$ sudo emacs /Library/Receipts/InstallHistory.plist
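Hand-searching the plist for those dict entries can be fiddly. Here is a hedged sketch of a pre-check with grep: it prints the line numbers of entries that mention Apple's Java packages so you know roughly where to edit. The stand-in file below is invented for the demonstration; on a real system you would point PLIST at /Library/Receipts/InstallHistory.plist (with sudo if needed).

```shell
# Sketch: find the line numbers of Apple Java receipt entries before editing.
# A stand-in plist is created here so the pipeline can be shown end-to-end;
# the package id com.apple.pkg.JavaForMacOSX comes from the receipts above.
PLIST=$(mktemp)
printf '%s\n' \
  '<dict>' \
  '  <string>com.apple.pkg.JavaForMacOSX10.7</string>' \
  '</dict>' \
  '<dict>' \
  '  <string>com.oracle.jdk7u17</string>' \
  '</dict>' > "$PLIST"

# Only the Apple entry with "Java" in its id should match:
grep -n 'com\.apple' "$PLIST" | grep -i java
```

The matching line numbers tell you which <dict>...</dict> regions to delete in your editor; the Oracle entry is left alone.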
Once you've removed all these traces of Apple's Java 6 install, Apple's software update should not (re)install Java 6 and you should only need to reset your symlinks when you install a new JDK from Oracle.
$ sudo rm /usr/bin/java
$ sudo ln -s /Library/Java/JavaVirtualMachines/jdk1.7.0_17.jdk/Contents/Home/bin/java /usr/bin/java
That's it! Now you should get this:
$ java -version
java version "1.7.0_17"
Java(TM) SE Runtime Environment (build 1.7.0_17-b02)
Java HotSpot(TM) 64-Bit Server VM (build 23.7-b01, mixed mode)
$ javac -version
javac 1.7.0_17
Some Java software, like Tomcat, requires that you set an environment variable named JAVA_HOME. The convention in Mac OS X is to make a symlink named /Library/Java/Home and point your JAVA_HOME environment variable at it. That way you don't have to remember to update your JAVA_HOME environment variable when you install a new JDK, but you do have to update the symlink. The following assumes you already have a /Library/Java/Home symlink (Apple's installer used to set it, I think. Maybe it still does.)
$ sudo rm /Library/Java/Home
$ sudo ln -s /Library/Java/JavaVirtualMachines/jdk1.7.0_17.jdk/Contents/Home /Library/Java/Home
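To see why the extra level of indirection works, here is a small, self-contained sketch of the /Library/Java/Home mechanism. It builds a throwaway stand-in for the JDK directory layout (the paths and the fake java script are invented for the demo); on a real system JAVA_HOME is simply the /Library/Java/Home symlink.

```shell
# Demonstrate the JAVA_HOME symlink indirection with a stand-in JDK layout.
root=$(mktemp -d)
mkdir -p "$root/jdk1.7.0_17.jdk/Contents/Home/bin"

# A fake "java" executable standing in for the real JDK binary:
printf '#!/bin/sh\necho "stand-in java 1.7.0_17"\n' \
  > "$root/jdk1.7.0_17.jdk/Contents/Home/bin/java"
chmod +x "$root/jdk1.7.0_17.jdk/Contents/Home/bin/java"

# This symlink plays the role of /Library/Java/Home:
ln -s "$root/jdk1.7.0_17.jdk/Contents/Home" "$root/Home"

# Programs that honor JAVA_HOME never need to know the versioned path:
JAVA_HOME="$root/Home"
"$JAVA_HOME/bin/java"    # prints: stand-in java 1.7.0_17
```

When a new JDK arrives, only the symlink changes; JAVA_HOME and everything that reads it stay the same.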
Apple provides a fairly decent terminal emulator in /Applications/Terminal.app. Go ahead and put this in your dock. If you're a CS major, you'll use it every day. Terminal provides you with a command shell known as BASH (Bourne Again Shell). A shell is a program that allows a user to interact directly with the operating system. You're already familiar with graphical shells that include things like file explorers, start menus, and control panels. Command line shells, at least the ones on Unix, are far more powerful than graphical shells, so you need to learn one, and BASH is by far the most popular. The rest of these instructions are BASH commands that take place in Terminal.
Unix (and Windows) maintains a set of global variables, called environment variables, that are accessible to all programs. You can get a list of them with the env command (I've only shown a few interesting ones here):
$ env
TERM_PROGRAM=Apple_Terminal
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
JAVAFX_HOME=/Library/Java/Home
PWD=/System/Library/Java
JAVA_HOME=/Library/Java/Home
PS1=\[\033[32;1m\][\u@\h \w]\n$ \[\033[0m\]
HOME=/Users/chris
LOGNAME=chris
The important environment variable to note for now is the PATH variable. The PATH variable contains a colon-separated list of directory paths that contain executable files (commands). When you type a command at the shell prompt, the directories in PATH are searched for a match. You can find out exactly which executable file will be executed in response to a particular command with the which command:
$ which java
/usr/bin/java
As you can see, the java command on Mac OS X is contained in the /usr/bin directory, but it's a symbolic link to the actual location of the java executable. You can see this by getting a "long" directory listing of it (the -l switch to the ls command gives detailed info about directory entries).
$ ls -l /usr/bin/java
lrwxr-xr-x 1 root wheel 72 Mar 13 23:36 /usr/bin/java -> /Library/Java/JavaVirtualMachines/jdk1.7.0_17.jdk/Contents/Home/bin/java
That "->" in the directory listing means it's a symbolic link.
A link is a pointer to a file or a directory, kind of like a reference variable. A hard link is a directory entry that points to a file on disk. Every file has (at least) one hard link. A symbolic link, or soft link, is a directory entry that points to a hard link, like an alias. You create symbolic links with the ln -s command. (You can also create hard links, but don't.) When you remove the last hard link to a file, you can no longer access the file. This is what generally happens when you delete a file. The directory entry for it (hard link) is gone, but the contents remain on disk until they are overwritten. When you delete a directory entry that's a symlink, only the symlink is gone. The file and its hard link remain.
Here's a simple example. We create a text file named foo.txt containing the text "bar" and a symlink to foo.txt named baz.txt. The general form of the ln -s command is ln -s <hard-link> <soft-link>.
$ echo "bar" > foo.txt $ cat foo.txt bar $ ln -s foo.txt baz.txt $ ls -l total 16 lrwxr-xr-x 1 chris staff 7 Mar 13 22:46 baz.txt -> foo.txt -rw-r--r-- 1 chris staff 4 Mar 13 22:45 foo.txt $ cat baz.txt bar
As you can see, the symbolic link acts just like the original hard link.
The Unix file system is organized as a tree. There is a root directory, /, and a tree of directories under / [1]. Just like in other operating systems' file systems, every directory entry has an owner, a group, and access permissions. The important thing you need to know for now is that. | http://www.cc.gatech.edu/~simpkins/teaching/gatech/cs2340/guides/java7-macosx.html | CC-MAIN-2014-35 | refinedweb | 1,626 | 65.32 |
Why Block At All? Thoughts on threading and sockets) { Console.WriteLine("Lock 1 acquired."); ManualResetEvent mre = new ManualResetEvent(false); WaitCallback callback = new WaitCallback(ThrowAwayTheKey); ThreadPool.QueueUserWorkItem(callback, mre); // wait till other thread has lock on // someOtherObject mre.WaitOne(); lock(someOtherObject) { Console.WriteLine("I never get called."); } } } void ThrowAwayTheKey(object resetEvent) { lock(someOtherObject) { Console.WriteLine("Lock 2 acquired."); ManualResetEvent mre = resetEvent as ManualResetEvent; if(mre != null) mre.Set(); //original thread can continue. lock(someObject) { Console.WriteLine("Neither do I"); } } }
Calling the method LockItUp will cause a deadlock and the application will hang until you kill it. Although this example is a bit contrived, you'd be surprised how easy it is in a sufficiently large and complicated system with multiple developers for you to run into this situation in a more roundabout manner. I see this often enough because using the lock statement is the path of least resistance. Instead, try using the TimedLock struct.
Another situation this type of thing comes up is with socket programming. Often I see code like this:
using System.Net.Sockets; //... other stuff ... byte[] _buffer = new byte[4096]; public void Listen(Socket socket) { int bytesRead = socket.Receive(_buffer, 0, 4096 , SocketFlags.None); //You're sitting here all day. }
If the remote socket isn't forthcoming with that data, you're going to be sitting there all day holding that thread open. In order to stop that socket, you'll need another thread to call Shutdown or close on the socket. Contrast that with this approach:
byte[] _buffer = new byte[4096]; public void BeginListen(Socket socket) { socket.BeginReceive(_buffer, 0, 4096 , SocketFlags.None , new AsyncCallback(OnDataReceived) , socket); //returns immediately. } void OnDataReceived(IAsyncResult ar) { Socket socket = ar.AsyncState as Socket; int bytesRead = socket.EndReceive(ar); //go on with your bad self... }
BeginListen returns immediately and OnDataReceived isn't called until there's actual data to receive. An added benefit is that you're not taking up a thread from the ThreadPool, but rather you're using IO completion ports. IO Completion ports is a method Windows uses for asynchronous IO operations. When an asynchronous IO is complete, Windows will awaken and notify your thread. The IO operations run on a pool of kernel threads whose only task in life is to process I/O requests.
Since BeginListen returns immediately, you're free to close the socket if no data is received after a certain time or in response to some other event. This may be a matter of preference, but this is a more elegant and scalable approach to sockets.
For more on asynchronous sockets, take the time to read Using an Asynchronous Server Socket and related articles. | http://haacked.com/archive/2004/08/06/why-block-at-all.aspx/ | CC-MAIN-2016-07 | refinedweb | 441 | 51.95 |
Table Of Contents
reStructuredText renderer¶
New in version 1.1.0.
reStructuredText is an easy-to-read, what-you-see-is-what-you-get plaintext markup syntax and parser system.
Note
This widget requires the
docutils package to run. Install it with
pip or include it as one of your deployment requirements.
Warning
This widget is highly experimental. The styling and implementation should not be considered stable until this warning has been removed.
Usage with Text¶
text = """ .. _top: Hello world =========== This is an **emphased text**, some ``interpreted text``. And this is a reference to top_:: $ print("Hello world") """ document = RstDocument(text=text)
The rendering will output:
Usage with Source¶
You can also render a rst file using the
source property:
document = RstDocument(source='index.rst')
You can reference other documents using the role
:doc:. For example, in the
document
index.rst you can write:
Go to my next document: :doc:`moreinfo.rst`
It will generate a link that, when clicked, opens the
moreinfo.rst
document.
- class
kivy.uix.rst.
RstDocument(**kwargs)[source]¶
Bases:
kivy.uix.scrollview.ScrollView
Base widget used to store an Rst document. See module documentation for more information.
background_color¶
Specifies the background_color to be used for the RstDocument.
New in version 1.8.0.
background_coloris an
AliasPropertyfor colors[‘background’].
base_font_size¶
Font size for the biggest title, 31 by default. All other font sizes are derived from this.
New in version 1.8.0.
colors¶
Dictionary of all the colors used in the RST rendering.
Warning
This dictionary is needs special handling. You also need to call
RstDocument.render()if you change them after loading.
colorsis a
DictProperty.
document_root¶
Root path where :doc: will search for rst documents. If no path is given, it will use the directory of the first loaded source file.
document_rootis a
StringPropertyand defaults to None.
goto(ref, *largs)[source]¶
Scroll to the reference. If it’s not found, nothing will be done.
For this text:
.. _myref: This is something I always wanted.
You can do:
from kivy.clock import Clock from functools import partial doc = RstDocument(...) Clock.schedule_once(partial(doc.goto, 'myref'), 0.1)
Note
It is preferable to delay the call of the goto if you just loaded the document because the layout might not be finished or the size of the RstDocument has not yet been determined. In either case, the calculation of the scrolling would be wrong.
You can, however, do a direct call if the document is already loaded.
New in version 1.3.0.
preload(filename, encoding='utf-8', errors='strict')[source]¶
Preload a rst file to get its toctree and its title.
The result will be stored in
toctreeswith the
filenameas key.
resolve_path(filename)[source]¶
Get the path for this filename. If the filename doesn’t exist, it returns the document_root + filename.
show_errors¶
Indicate whether RST parsers errors should be shown on the screen or not.
show_errorsis a
BooleanPropertyand defaults to False.
source¶
Filename of the RST document.
sourceis a
StringPropertyand defaults to None.
source_encoding¶
Encoding to be used for the
sourcefile.
source_encodingis a
StringPropertyand defaults to utf-8.
Note
It is your responsibility to ensure that the value provided is a valid codec supported by python.
source_error¶
Error handling to be used while encoding the
sourcefile.
source_erroris an
OptionPropertyand defaults to strict. Can be one of ‘strict’, ‘ignore’, ‘replace’, ‘xmlcharrefreplace’ or ‘backslashreplac’.
text¶
RST markup text of the document.
textis a
StringPropertyand defaults to None.
title¶
Title of the current document.
titleis a
StringPropertyand defaults to ‘’. It is read-only.
toctrees¶
Toctree of all loaded or preloaded documents. This dictionary is filled when a rst document is explicitly loaded or where
preload()has been called.
If the document has no filename, e.g. when the document is loaded from a text file, the key will be ‘’.
toctreesis a
DictPropertyand defaults to {}.
underline_color¶
underline color of the titles, expressed in html color notation
underline_coloris a
StringPropertyand defaults to ‘204a9699’. | https://kivy.org/doc/master/api-kivy.uix.rst.html | CC-MAIN-2020-34 | refinedweb | 652 | 62.34 |
For much of healthcare data analytics and modelling we will use NumPy and Pandas as the key containers of our data. These libraries allow efficient manipulation and analysis of very large data sets. They are very considerably faster than using a ’pure Python’ approach.
NumPy and Pandas are distributed with all main scientific Python distributions.
NumPy
NumPy is a library for supporting work with large multi-dimensional arrays, with many mathematical functions. NumPy is compatible with many other libraries such as Pandas (see below), MatPlotLib (for plotting), and many Python maths, stats and optimisation libraries.
When we import NumPy as as library it is standard practice to use the as statement which allows it to be referenced with a shorter name. We will import as np:
import numpy as np
Pandas
Pandas is a library that allows manipulation of large arrays of data. Data may be indexed and manipulated based on index. Data is readily pivoted, reshaped, grouped, merged.
We will use pandas alongside numpy. Generally, NumPy is faster for mathemetical functions, but Pandas is more powerful for data manipulation.
As with numpy, we import with a shortened name:
import pandas as pd
One thought on “20. NumPy and Pandas” | https://pythonhealthcare.org/2018/03/28/20-numpy-and-pandas/ | CC-MAIN-2018-51 | refinedweb | 200 | 57.06 |
Thanks, I got it.After referring to arm64 and risc-v, we try to refine our code, such asremoving unneeded checking and refining syscall restart flow. Wehope these modifications can enhance the reliability and readability.However, the following 2 files which you had acked are included inthis modification.1. arch/nds32/include/asm/nds32.h (patch: Assembly macros and definitions) The definition of macro tbl and why are removed. - Now, we use pt_reg->syscallno instead of 'why' to determine whether entering kernel is via syscall or not. Therefore, macro 'why' is unneeded.--- a/arch/nds32/include/asm/nds32.h+++ b/arch/nds32/include/asm/nds32.h@@ -66,10 +66,6 @@ static inline unsigned long CACHE_LINE_SIZE #endif /* __ASSEMBLY__ */-/* tbl and why is used in ex-scall.S and ex-exit.S */-#define tbl $r8-#define why $r8- #define IVB_BASE PHYS_OFFSET2. arch/nds32/kernel/ex-scall.S (patch: System calls handling) a. Define macro tbl - The marco tbl is used only in this file. So, I move its definition from arch/nds32/include/asm/nds32.h to here. b. Remove 'set why = 0' when issuing syscall number is invalid c. Adjust input arguments of syscall_trace_enter--- a/arch/nds32/kernel/ex-scall.S+++ b/arch/nds32/kernel/ex-scall.S...+#define tbl $r8 /* * $r7 will be writen as syscall nr- * by retrieving from $ITYPE 'SWID' bitfiled */ .macro get_scno lwi $r7, [$sp + R15_OFFSET]@@ -54,7 +49,6 @@ ENTRY(eh_syscall) get_scno gie_enable-ENTRY(eh_syscall_phase_2) lwi $p0, [tsk+#TSK_TI_FLAGS] andi $p1, $p0, #_TIF_WORK_SYSCALL_ENTRY@@ -71,7 +65,6 @@ jmp_systbl: jr $p1 ! no return _SCNO_EXCEED:- movi why, 0 ori $r0, $r7, #0 ori $r1, $sp, #0 b bad_syscall@@ -81,8 +74,7 @@ _SCNO_EXCEED: * context switches, and waiting for our parent to respond. */ __sys_trace:- move $r1, $sp- move $r0, $r7 ! trace entry [IP = 0]+ move $r0, $sp bal syscall_trace_enter move $r7, $r0 la $lp, __sys_trace_return ! 
return addressIf you think these modifications in acked files are not permitted,we will recover it.We verify all modifications by LTP 2017 related cases and glibc2.26 testsuite. We plan to add it in the next version patch andhope you can give us some comments as before.ThanksVincent2018-01-24 19:13 GMT+08:00 Arnd Bergmann <arnd@arndb.de>:> On Wed, Jan 24, 2018 at 1:56 AM, Vincent Chen <deanbo422@gmail.com> wrote:>> 2018-01-18 18:30 GMT+08:00 Arnd Bergmann <arnd@arndb.de>:>>> On Mon, Jan 15, 2018 at 6:53 AM, Greentime Hu <green.hu@gmail.com> wrote:>>>> From: Greentime Hu <greentime@andestech.com>>>>>>>>> This patch adds support for signal handling.>>>>>>>> Signed-off-by: Vincent Chen <vincentc@andestech.com>>>>> Signed-off-by: Greentime Hu <greentime@andestech.com>>>>>>> I never feel qualified enough to properly review signal handling code, so>>> no Ack from me for this code even though I don't see anything wrong with it.>>> Hopefully someone else can give an Ack after looking more closely.>>>>>>> Dear Arnd:>>>> We'd be glad to improve signal handling code to meet your requirement.>> Could you>> tell us which part we need to refine or which implementation is good>> for us to refer?>> No, as I said, the problem is on my side, I just don't understand enough of it.> I would assume that the arm64 and risc-v implementations are the most> thoroughly reviewed, but haven't looked at those in enough detail either.> If your code does something that risc-v doesn't do, try to understand whether> there should be a difference or not.>> Arnd | https://lkml.org/lkml/2018/2/6/9 | CC-MAIN-2018-43 | refinedweb | 580 | 59.09 |
I’m having some really weird problems with an app I’m writing.
Everything is working as expected on my development machine (OSX,
Rails 0.14.4) but is cacking out with weird errors on the Textdrive
demo site (FreeBSD, Rails 1.0.0). It complains about missing
variables, with errors like this:
undefined local variable or method `rawcode’ for
#<#Class:0x8efa1ec:0x8efa138>
This was when I added in rawcode to the locals for the partial for
debugging. Again, everything works fine on the development machine,
and I get the rawcode displayed in the browser as expected.
At one stage I had the following code:
<% unless county.nil? %>
foo
<% else %
bar
<% end %>
…and I didn’t assign county a value at all if I didn’t know it. I
checked to see if county was set by checking if it was nil. When I
first noticed it was giving me problems on the demo server, I
explicitly set it to nil (see the code for the last partial listed)
and that stopped the error; but now it didn’t seem to be picking up
any changes to the local variable ‘county’ (again, still working as
expected on my development machine).
It doesn’t seem that the county variable is being set. Here is the
code for the relevant AJAX action:
def localform
rawcode = request.raw_post || request.query_string
@county = County.find( rawcode.sub( /^(\d+)\D.*$/ ) { |s| $1 } )
if ( @county.country.code == “IE” )
render :partial => “roilocaladdress”, :layout => false, :locals
=> { :county => @county.id, :rawcode => rawcode }
else
render :partial => “nielocaladdress”, :layout => false
end
end
and the relevant code for the contacts/roilocaladdress partial:
and the relevant code for the parent partial, contacts/irishaddress:
Does anyone have any ideas?
Thanks,
David B.
–
Site: | https://www.ruby-forum.com/t/bizarre-problems-with-ajax-missing-variables/50960 | CC-MAIN-2018-47 | refinedweb | 288 | 63.09 |
Written by Paul Hudson @twostraws
If a user clicks a web link in your app, you used to have two options before iOS 9.0 came along: exit your app and launch the web page in Safari, or bring up a new web view controller that you've designed, along with various user interface controls. Exiting your app is rarely what users want, so unsurprisingly lots of app ended up creating mini-Safari experiences to browse inside their app.
As of iOS 9.0, Apple allows you to embed Safari right into your app, which means you get its great user interface, you get its access to stored user data, and you even get Reader Mode right out of the box. To get started, import the SafariServices framework into your view controller, like this:
import SafariServices
Now make your view controller conform to the
SFSafariViewControllerDelegate protocol, then give it a try:
let urlString = "" if let url = URL(string: urlString) { let vc = SFSafariViewController(url: url, entersReaderIfAvailable: true) vc.delegate = self present(vc, animated: true) }
That's all it takes to launch Safari inside your app now – cool, huh? We need to assign ourselves as the delegate of the Safari view controller because when the user taps "Done" inside Safari we should dismiss it and take any other appropriate action.
To do that, add this method to your view controller:
func safariViewControllerDidFinish(_ controller: SFSafariViewController) { dismiss(animated: true) }
Available from iOS 9.0 – see Hacking with Swift tutorial 32. | https://www.hackingwithswift.com/example-code/uikit/how-to-use-sfsafariviewcontroller-to-show-web-pages-in-your-app | CC-MAIN-2018-09 | refinedweb | 247 | 56.08 |
CoinList: Stubbing Network Requests
This episode is part of a series: Testing iOS Applications.
Modeling the Coins Response
In the API response we saw last time, all of the coins are returned in a dictionary underneath the
data key. We'll start by modeling that.
```swift
enum CodingKeys : String, CodingKey {
    case response = "Response"
    case message = "Message"
    case baseImageURL = "BaseImageUrl"
    case baseLinkURL = "BaseLinkUrl"
    case data = "Data"
}
```
We'll provide a custom nested struct to represent this data, decoding with a dynamic
CodingKey implementation (since our keys are dynamic):
```swift
struct Data : Decodable {
    private struct Keys : CodingKey {
        var stringValue: String
        init?(stringValue: String) {
            self.stringValue = stringValue
        }

        var intValue: Int?
        init?(intValue: Int) {
            self.stringValue = String(intValue)
            self.intValue = intValue
        }
    }

    private var coins: [String : Coin] = [:]

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: Keys.self)
        for key in container.allKeys {
            coins[key.stringValue] = try container.decode(Coin.self, forKey: key)
        }
    }

    func allCoins() -> [Coin] {
        return Array(coins.values)
    }

    subscript(_ key: String) -> Coin? {
        return coins[key]
    }
}
```
This references a new type
Coin. I want this to be somewhat isolated from the rest of the application, so this will be a nested type inside of
CoinList:
```swift
struct Coin : Decodable {
    let name: String
    let symbol: String
    let imagePath: String?

    enum CodingKeys : String, CodingKey {
        case name = "CoinName"
        case symbol = "Symbol"
        case imagePath = "ImageUrl"
    }
}
```
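As a quick sanity check of the dynamic-key decoding, here's a standalone sketch you could run in a playground (the JSON and the image path are illustrative; `CoinList.Data` and `CoinList.Coin` are the nested types above):

```swift
import Foundation

let json = """
{
    "BTC": { "CoinName": "Bitcoin", "Symbol": "BTC", "ImageUrl": "/media/btc.png" },
    "ETH": { "CoinName": "Ethereum", "Symbol": "ETH" }
}
""".data(using: .utf8)!

// Symbols become dictionary keys via the dynamic Keys type, and the
// missing ImageUrl on ETH decodes to a nil imagePath.
let coins = try JSONDecoder().decode(CoinList.Data.self, from: json)
print(coins.allCoins().count)         // 2
print(coins["BTC"]?.name ?? "?")      // Bitcoin
print(coins["ETH"]?.imagePath as Any) // nil
```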
Now that we have this in place, we can write another test that we can successfully parse these coins and access them by their symbol.
Testing that we can parse coins
We'll start by copying the basic structure of making the request and setting up the expectation that we can wait on later:
```swift
func testCoinListRetrievesCoins() {
    let exp = expectation(description: "Received response")
    client.fetchCoinList { result in
        exp.fulfill()
        // ...
    }
    waitForExpectations(timeout: 3.0, handler: nil)
}
```
Then we can write some assertions that we are able to pull out a coin successfully:
```swift
switch result {
case .success(let coinList):
    XCTAssertGreaterThan(coinList.data.allCoins().count, 1)
    let coin = coinList.data["BTC"]
    XCTAssertNotNil(coin)
    XCTAssertEqual(coin?.symbol, "BTC")
    XCTAssertEqual(coin?.name, "Bitcoin")
    XCTAssertNotNil(coin?.imagePath)
case .failure(let error):
    XCTFail("Error in coin list request: \(error)")
}
```
And if we run this test, it passes. 🎉
But we probably don't want to keep running our tests against a live API, do we?
Setting up OHHTTPStubs
To stub out network calls, we'll use a library called OHHTTPStubs.
We'll integrate this into our test target in our
Podfile:
```ruby
platform :ios, '11.2'

target 'CoinList' do
  use_frameworks!

  target 'CoinListTests' do
    inherit! :search_paths
    pod 'OHHTTPStubs/Swift'
  end
end
```
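With the Podfile saved, install the pod and — as in the video — switch from the project to the generated workspace (the workspace name assumes the project is called CoinList):

```shell
pod install
open CoinList.xcworkspace
```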
Failing tests that hit the network
The first step is for us to not allow any request to hit the network in our tests. We can make exceptions to this rule, but it's a good thing to set up initially.
At the top of our test we'll import the library:
import OHHTTPStubs
Then we can add this code to the
setup() method:
```swift
override func setUp() {
    super.setUp()
    OHHTTPStubs.onStubMissing { request in
        XCTFail("Missing stub for \(request)")
    }
}
```
Now if we run our tests they will all fail because they are hitting the network.
Intercepting requests and returning fake data
We will use the
curl command in Terminal to fetch the API response and save it to a JSON file.
We can then add a new bundle to our test target called
Fixtures.bundle.
Then we can create a class to read these file and use them as the body of stub responses. We'll call this
FixtureLoader:
```swift
class FixtureLoader {
    static func reset() {
        OHHTTPStubs.removeAllStubs()
    }

    static func stubCoinListResponse() {
        stub(condition: isHost("min-api.cryptocompare.com") && isPath("/data/all/coinlist")) { req -> OHHTTPStubsResponse in
            return jsonFixture(with: "coinlist.json")
        }
    }

    private static func jsonFixture(with filename: String) -> OHHTTPStubsResponse {
        let bundle = OHResourceBundle("Fixtures", FixtureLoader.self)!
        let path = OHPathForFileInBundle(filename, bundle)!
        return OHHTTPStubsResponse(fileAtPath: path, statusCode: 200, headers: nil)
    }
}
```
Then, back in our test, we can set up a stub by calling:
FixtureLoader.stubCoinListResponse()
And make sure we unset it in our
tearDown() method to avoid one stub interfering with a different test.
```swift
override func tearDown() {
    super.tearDown()
    FixtureLoader.reset()
}
```
Now we can continue to run our tests, but they will use this dummy response instead of actually hitting the network.
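The same mechanism makes failure cases cheap to exercise — states that can be awkward to produce against a real server. As a sketch (this helper is hypothetical, not part of the episode's code), a companion stub could force the endpoint to return a 500 so a test can assert that `fetchCoinList` reports `.failure`:

```swift
import OHHTTPStubs

extension FixtureLoader {
    // Hypothetical helper: route the coin-list endpoint to a server error.
    static func stubCoinListServerError() {
        stub(condition: isHost("min-api.cryptocompare.com") && isPath("/data/all/coinlist")) { _ in
            OHHTTPStubsResponse(data: Data(), statusCode: 500, headers: nil)
        }
    }
}
```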
So our coinListResponse needs to model
a couple of additional properties.
One of them is baseImageURL, and this is going to be
a partial URL that all the rest of the coins
will be referenced against
in order to compute the image for each coin.
There's also a baseLinkURL, which we can use as well.
If we look at the CryptoCompare documentation,
and we go over to the all coins list,
we can see that it's BaseImageUrl and BaseLinkUrl like that,
so we can create our cases here,
baseImageURL = "BaseImageUrl," and then we'll do
the same thing for BaseLinkURL like that.
So, we could write a test for this,
but this is basically essentially what we already had,
and I'm not really that concerned with this.
The next key is going to be a thing called Data.
And if we look at that, Data is an object that,
inside of that object, has a list of coins,
and each key, and this is basically a dictionary,
each key in the dictionary refers to the symbol,
and then there's the detail about that coin within it.
So, we need to have some sort of object,
and I'm going to use a nested type here, a Data,
which is also going to be Decodable,
and then we can say that we have a data
on this CoinList response, and the case for data is Data.
So now we've got a data that we can use,
and we need to have a coding key that is,
because the keys here are all dynamic,
the coding key itself can't just be an enum.
It's got to be its own type.
So, what we're going to do here is have a class here
that we're going to call Keys,
and this can actually be, this can actually
be a private struct here called Keys,
and this will CodingKey implementation,
and that's going to require us to implement
a couple of initializers and a couple of properties.
So, here we need to implement one that takes a string value,
in which case we can just set the stringValue
to the value that was passed in.
And then for the intValue, we can pass in the stringValue
as the string representation of that intValue,
that's fine, and then self.intValue = intValue,
and we're not actually using intValue,
but the coding key protocol forces us to implement that.
So, this is the basic struct we need
to have just sort of a dynamic coding key
that can have any type of value,
and then here in out init(from decoder),
the decoder implementation, which throws,
we need to implement this ourself so that we can loop over
all the keys and construct a coin from each one of those.
So, we're going to have to have,
I'm going to make this private for now.
We'll call this coins, which is going to be
a String to a Coin instance.
This coin, I want to make this also an extension on this
CoinList, so we're going to make this a struct Coin here,
because I want my own type later on to be called Coin,
and this Coin is going to be, like,
if we take a look at one of these coins,
notice that the Url is a relative URL,
and the imageUrl is a relative URL.
When I deal with this inside of my application,
I'm going to want that to be a fully fledged URL.
But we can't build that yet.
We have to first parse the response exactly how this is,
and then later, we can translate these or morph these
into our own types that will resemble these types,
but maybe they have a different structure.
So, another thing to consider here is the fact
that there's no price information here,
and we may want some sort of notion of like,
the last coin price.
That may be something that we want to do on our own model,
but it's not returned in the API like that.
So I think it's, it's important when you're dealing
with applications like this that we model
our response models as closely to the API as possible
and then translate those into objects
that we want to work with inside of our own application.
So, with that in mind, our coin is now nested
underneath the CoinList structure,
and that keeps it isolated from the rest of our application.
A CoinList coin will look exactly like this,
but our coin model may look a little bit different.
Okay, so our struct Coin here is going to have a few things.
Let's take the CoinName, the Symbol,
and the ImageUrl.
So, we will have a name, which'll be a String,
a symbol, which will be a String,
and then we'll have the imagePath, which will be a String.
And then we will have our enum CodingKeys to map those.
That's going to be CoinName, and then symbol will be Symbol.
And then the imagePath will be the ImageUrl.
Okay, so now we've modeled our Coin properly.
Now we just need to get the container from the decoder,
and we'll say container(keyedBy: the coding key protocol,
or what we called Keys, like that,
and then we can loop over all the keys.
So we can say for key in container.keys, allKeys.
Then we just need to add our coins here,
so we just need to have a list of these coins.
We can initialize that to an empty dictionary
right up here and just append to it, that might work.
So we can say coins for key.stringValue.
Remember, the key is going to be this PPC,
which is going to be the string value,
and then we want to decode the coin itself here,
so we can say container.decode(Coin.self, forKey: key).
We also want to be able to inspect this type,
and right now, our coin's property here is a dictionary,
which represents what we just parsed,
but I also want to just be able to get a list of all coins.
So what we can do here is have a func called allCoins,
which returns an array of Coin.
And here, we just want to return coins.values,
and then we need to convert that to an array.
And then we might want to have a subscript here
that will take a key which is a String and return
a Coin optionally, and then that can just reach
into our coins array for that key.
Okay, so we've got our data type,
and we've got it compiling, and it looks like it works,
but let's go ahead and take a look at our test.
What I want to test is that we can make a call,
testRetrievesCoins, and I want to say
CoinList retrieves coins.
So here I'm going to again copy this and instead
of asserting that we got a successful response here,
I now want to make an assertion on the actual coins
that we got back, so I can say
XCTAssertEqual(coinList.data.allCoins().count),
and then we're going to assert that we got the appropriate
number of these, and I'm not really sure how many there are.
But it looks like hundreds.
Let's just double-check that we got some.
Actually, we could just say XCTAssertGreaterThan,
and make sure that the coins are greater than one.
So we at least got one, okay?
I also want to make sure that I can get a specific coin,
so in this case, I will say let coin = coinList.data.,
or, indexed with BTC.
I want to make sure that that's not nil.
XCTAssertNotNil,
and then XCTAssert, and we can assert the few properties
that we care about on this one.
We assert that the coin's symbol is BTC.
We assert, and this needs to be AssertEqual,
XCTAssertEqual the coin's name is equal to Bitcoin,
and then XCTAssertEqual,
or I want to say AssertNotNil,
that the coin's imagePath is not nil.
Okay, let's go ahead and run this
and make sure that we can get all the coins
and we can index the specific coin from the list.
Okay, we've got a failure here,
and this is probably a decoding error.
And it could be quite hard to see the actual
error message here because Xcode doesn't actually show it
in a way that I can actually read it,
but if we go over here to the build time errors list,
this actually shows us test failures as well,
and we can see the error in coin list response,
the response format is invalid.
We can't actually read this error message very easily,
but we can see the fact that it's returning the body.
But it didn't actually return the error message,
and that should have been left for us in the console,
so let's go ahead and look inside of the console instead.
Okay, so, we can see here that we got a decoding error
and the path was CoinList.CoinList.CodingKeys.data,
and inside of there, we had a stringValue for ROS,
and there was no ImageUrl in that particular string.
And if we go look at the API in here,
if we scroll down, well, it's not going to be easy
for us to find ROS in there.
But basically, the issue
is that it couldn't find the ImageURL, and we modeled
that as a required property here in our CoinList coin.
So, imagePath here is actually going to be optional,
which will allow it to decode it and leave a nil in there
in the cases where we don't have an image.
So let's go ahead and run this again, okay.
And our test succeeded.
So now we've got a test that validates a little bit more
of this API and it's validating parts of the response,
and then we decided that from JSON properly.
Okay, so I mentioned that it is less than desirable
to have our unit test hit a live API.
On the one hand, it allows us to get started quickly
and actually see that our results are working,
but the downsides are, we could get banned from this API
by running our test too often, and these tests won't work
if we have intermittent connectivity or if we're working
on a train or an airplane or something.
And so, having these hit the live API
is not desirable.
So what I want to do is tackle that problem next
before we go any further.
To do that, I want to implement a library
called OHHTTPStubs, which we can get through CocoaPods.
So I'm going to create a pod file in this folder,
and then we will open that up,
and I'm going to set our platform to 11.2, I think,
which is the version that we're using,
and then, let me get rid of these comments.
Okay, so in our pod file, notice that our target
for CoinList is the outer target, and then we can add
our pods in here, like if we have a pod
that we want to use here, we can use that,
but then there's a nested one for our tests.
And so we can have separate pods just for testing.
In this case, I want to use OHHTTPStubs.
And this particular project has an Objective-C version
and a Swift version, so we're going to use the /Swift pod,
which is only going to include the Swift-related things.
Okay, so with that added, I'm going to save and quit,
and we're going to run pod install.
And this is going to download the pod,
integrate it with our project, and I actually need
to close Xcode, and I want to open up the workspace instead.
I've got a shortcut for that, xc, which is going to open up
the first workspace it finds in this folder,
and then we have got some warnings here
that I want to take care of now,
because I don't want to leave these alone.
So in our Debug target for our tests,
Debug and Release target for our test project,
we have this ALWAYS_EMBED_SWIFT_STANDARD_LIBRARIES set,
and I need to unset that.
So let's go ahead and go into the build settings,
and this always embeds Swift standard libraries.
I'm just going to hit delete, which is going to make that
not bold, which means it's back at the default setting.
And to verify that I did fix that warning,
I want to run pod install again and just make sure
that we don't get those warnings again.
Okay, looks good.
Okay, so at this point, now we've got a pod in here,
OHHTTPStubs, and we can use this to stub out the networking
or the actual response that we got from an API
and allow us to get a consistent response back.
And this is also useful if we want to hit the API,
and we expect to get back
an error state, so we can actually model and fake out
getting back an error state from the API,
which may be something that's hard to set up in real life.
And so now that we have that OHHTTPStubs,
we're going to go back into our test project,
and we need to set up a way to stub out calls to this API.
Okay, so the first step I need to do is import
OHHTTPStubs at the top of my test,
and then in the setup for my test,
I want to say OHHTTPStubs.onStubMissing,
and I want to fail a test any time a network call is made
and the stub isn't present.
So here, we're given the request that was made,
and we can see something like XCTFail with the message
Missing stub for, and then say \(request).
Maybe we just output it like that.
And so this is going to hook into the networking system,
and basically reroute requests through OHHTTPStubs,
and this is a useful mechanism for allowing us to interact
still with URLSession and the foundation APIs
for networking, but then intercept those at a specific point
and say I want to return this response
or I want to return that response.
For now, I just want to have anything, a big safety net
that says nothing should go
out of the system without us saying it's okay.
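As a sketch, the safety net described above might look like this in the test's setUp (the test class name is an assumption, and the exact OHHTTPStubs API spelling can vary; newer releases rename the type to HTTPStubs):

```swift
import XCTest
import OHHTTPStubs
@testable import CoinList

class CryptoCompareClientTests: XCTestCase {
    override func setUp() {
        super.setUp()
        // Fail loudly whenever a request escapes to the network
        // without a matching stub having been registered.
        OHHTTPStubs.onStubMissing { request in
            XCTFail("Missing stub for \(request)")
        }
    }
}
```

With this in place, running the tests before any stubs exist should fail every test that touches the network, which is exactly what the transcript demonstrates next.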
So now if I run the test, I expect them all to fail,
because we're missing those stubs.
So it tells us we're missing a stub for that particular
endpoint, and notice that all three tests are failing.
This means they're not hitting the network anymore;
they're all failing right here.
Okay, so, what we need to do is stub a specific request.
So I want to go over here, and I want to interact
with that API and pull down a live response.
So we're going to copy this URL here,
and then I'm going to go back to the terminal,
and I'm going to use the curl command to pull down this data.
So, this is a lot of data, and we can actually
pull that down into a coinlist.json file,
and we can do this for any of the responses that we want,
and we can even edit this file if we want to.
And now I want to bring this file into my test target
underneath something called a bundle.
So we're going to add a new file here,
and I want to scroll down to where it says Settings Bundle,
so I'm going to use that, and we're going to call this Fixtures.
So, test fixtures are basically sort of defined responses
that we can use, and we can actually
delete the stuff that's inside of that already,
and then we can start adding our own files.
So, I'm going to drag in a file
that I already have made from another project,
and we're going to copy these over
and make sure it's added to our test target.
So now we've got a fixtures bundle
that has this CoinListResponse.
I'm not going to click on it here,
because Xcode does not do well with large .json files,
so it may act a little bit strange.
But then I've got some other responses
that I might need later on.
So, now I've got a fixtures bundle
that has the stubbed response that I want in it,
and we just need to load that.
So, I want to create a new group here called Support,
and this is going to be like supporting stuff for my tests.
And we're going to create a new class called FixtureLoader.
We're going to import OHHTTPStubs,
and we're going to have this class FixtureLoader.
So we need to set up our fixture loader such that in a test,
we can say I want you to load this fixture
for this type of request, and then at the end of the test
in the tearDown method, I want to be able to reset those.
So let's start with the reset method,
and I'm just going to make these a static func reset.
That's going to call OHHTTPStubs.removeAllStubs.
So at the end of every test, we're going to remove all stubs,
and then we want to have some static functions
to stub the things that we want.
So in this case, I want to stub the coinListResponse,
and the way this works is we need to tell it
when to stub that particular response.
If you have other networking things in your application,
say you have something like Crashlytics,
it's going to make a call to the Crashlytics API
when it first launches, then you don't necessarily
want to stub those responses.
You only want to stub the ones that you're working with,
and you may want to allow certain ones to go through.
So OHHTTPStubs provides this method stub,
and it's got a condition and a response.
So the condition is a block,
and the response is also a block.
So, if we look at the condition,
we can say I want to look at the request
and then return true or false depending on whether I want to stub this thing.
So, if we look at the request, it's just a URL request,
so we can check the URL,
and we can check the host parameter of the URL.
We can check the path and things like that.
And there's actually some really easy helper blocks
that correspond to this interface.
One of those is isHost, and then basically, you want to say,
you could say isHost and then pass in a string.
So in this case, it would be min-api.cryptocompare.com.
And so that thing returns a block that is sufficient
to go in this stub condition argument here.
It's a little bit hard to describe,
but basically, if we take a look at isHost,
note that this does return that same block,
and then we also want to check,
is this the right path?
And in that case there's an isPath helper here,
and we can check to see
if that is data/all/coinlist like that.
And you can chain these together with &&,
because that operator's overloaded to evaluate
both conditions with the provided request.
Okay, so we have our condition here,
isHost and isPath, and then in this case,
our response is going to be some sort of response block.
So we're given the request,
and then we need to return a response.
We're going to be loading stuff from this fixtures bundle a lot.
I'm going to create a private static jsonFixture with filename,
and then that is going to return an OHHTTPStubs response.
Basically this same response,
we're going to return that from this method,
because I want to be able to say, return jsonFixture,
and then with the filename,
and the filename in this case was coinlist.json.
We'll add the func keyword there.
So, loading the jsonFixture here is going to rely
on a few helper methods from OHHTTPStubs.
So, we're first going to get a path to the bundle,
which is OHResourceBundle, and the bundle base name here
is Fixtures, and then inBundleForClass
is going to be FixtureLoader.self.
So, whatever bundle the FixtureLoader class is defined in,
the fixtures are going to be loaded in that same way.
Now, once we have that bundle,
we can get the path, which is OHPathForFileInBundle,
and we can pass in the file name and the bundle we just got.
And in this case, we want to force unwrap that,
because I'd like this to crash if the files aren't on disk,
and then finally, we can return an OHHTTPStubsResponse,
and this is going to take a handful of parameters.
The data, the status code, the headers,
basically anything that we want our response to look like.
So in this case, we can grab the file at this path.
statusCode I'm going to assume is 200 in this case,
and headers are going to be nil.
And until we decide that we want to stub
a different type of response, we can do that.
And of course it's looking for a non-optional path,
so let's go ahead and force unwrap that as well.
So, again, if we pass in the wrong path here,
it's going to blow up, and we're going to fix it.
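Putting the pieces described so far together, the FixtureLoader might be sketched like this (names follow the transcript; exact OHHTTPStubs signatures may vary slightly between versions):

```swift
import OHHTTPStubs

// Test-support class: registers stubbed responses for specific
// requests, loading JSON payloads from the Fixtures bundle.
class FixtureLoader {
    // Called from tearDown so no stub leaks into the next test.
    static func reset() {
        OHHTTPStubs.removeAllStubs()
    }

    static func stubCoinListResponse() {
        // Only stub this exact host + path; anything else still
        // trips the onStubMissing safety net.
        stub(condition: isHost("min-api.cryptocompare.com")
                     && isPath("/data/all/coinlist")) { _ in
            return jsonFixture(with: "coinlist.json")
        }
    }

    private static func jsonFixture(with filename: String) -> OHHTTPStubsResponse {
        // Locate the Fixtures bundle relative to whichever bundle
        // defines this class, then force-unwrap: a missing fixture
        // file should crash the test immediately.
        let bundle = OHResourceBundle("Fixtures", FixtureLoader.self)
        let path = OHPathForFileInBundle(filename, bundle!)!
        return OHHTTPStubsResponse(fileAtPath: path,
                                   statusCode: 200,
                                   headers: nil)
    }
}
```

The force unwraps are deliberate here: in test-support code, crashing on a missing fixture is more useful than silently returning an empty response.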
Okay, so, what we need to do now is call
this stubCoinListResponse in our tests when we're looking
at our API tests here, in our CryptoCompareClientTests.
We need to call FixtureLoader.stubCoinListResponse,
and then we also want to have a tearDown method
which calls FixtureLoader.reset.
And the idea here is we don't want to have any loaded fixtures
or loaded stubbed response hanging around for the next test.
Okay, let's go ahead and run this
and see if we get a passing test.
Okay, and we do get a passing test, and if we were
to put our Mac in Airplane Mode and run our tests,
they would still work.
So these tests are still going to work
on the bus or an airplane.
They're going to work on a CI environment,
and they're not going to count against our rate limit
when it comes to consuming that API.
And the downside of this approach is now we have
this sort of fixed response, and if the API ever changes,
for instance, if they add new coins,
or let's say they add a new attribute,
I'm going to have to delete this .json file
and refresh it with a new one.
And I think that is a small price to pay,
because that's not going to happen very often,
and we're going to run these tests very, very often.
So, this is definitely a massive improvement in testing,
making sure that we don't actually hit the network
and that we test with known states and things like that.
One additional case here, and that is that
we want to be able to stub things that return errors.
So if we go back over to fixture loader,
I want to have a coinListResponse that throws an error.
So we're going to create a static func,
stubCoinListReturningError, and let's just say
the server had some sort of error there.
We're going to have the same sort of stub
condition requirement, and then the block for what to do,
in this case, we can return an OHHTTPStubs response
that has Data, statusCode, and headers.
Oh, in this case, let's say that the data
is going to be Server Error,
and we can say data(using: .utf8) to encode the string.
We can pass in this data, say that that's a status code
of 500, and then say that empty headers are returned.
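The error stub described above might be sketched as another static function on FixtureLoader (same condition helpers as the success stub; the 500 body text is arbitrary):

```swift
// Stubs the same coin-list endpoint, but returns an HTTP 500
// so tests can exercise the error path deterministically.
static func stubCoinListReturningError() {
    stub(condition: isHost("min-api.cryptocompare.com")
                 && isPath("/data/all/coinlist")) { _ in
        let data = "Server Error".data(using: .utf8)!
        return OHHTTPStubsResponse(data: data,
                                   statusCode: 500,
                                   headers: [:])
    }
}
```

Because OHHTTPStubs evaluates the most recently added matching stub first, registering this in a test takes precedence over any earlier success stub for the same endpoint.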
So now we've got a case where we can say I want to do
the exact same request to the same path, but this time,
I want to return an error.
And what are we going to validate in this case?
I want to validate that when I fetch the CoinListResponse,
CoinListResponse returns ServerError.
So, in this case, it's going to be a very similar test here.
So we've been copying and pasting a lot of code in our test,
and certainly, we could stand to refactor these
to make them more readable.
But I also think it's important to stress that
sometimes, you want duplication in your test,
because you don't want the test to share a lot of state,
and I want to make sure that the test
is still easily readable from start to finish.
So, in this case, we want to check to make sure
that the result is not success.
So if we get a successful result,
we actually want to fail,
and in this case, we want to assert that the error
is of a given type.
So we could say if case let, or if case .serverError = error,
and that probably should be ApiError.serverError,
and if it doesn't match, we can XCTFail here, "Expected a server error
but got \(error)" like that.
We don't care about the coinList property there.
So we can use an underscore, and the serverError here,
if we jump to the definition there,
we could pass in the status as an Int,
if that were an important piece of data.
And so down here when we're setting the serverError,
we can return HTTP.statusCode in there,
and then here, we can say let status, like that.
And now we can just say XCTAssert that the status was set.
So status, we expect that to be 500.
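The error-path test being assembled here might be sketched as follows (the client property, the fetchCoinList signature, and the ApiError enum are assumptions based on the transcript):

```swift
func testCoinListServerError() {
    // Register the 500 stub instead of the success fixture.
    FixtureLoader.stubCoinListReturningError()

    let exp = expectation(description: "Received response")
    client.fetchCoinList { result in
        switch result {
        case .success:
            XCTFail("Expected a server error, but the request succeeded")
        case .failure(let error):
            // Pattern-match the associated status code out of the error.
            guard case ApiError.serverError(let status) = error else {
                XCTFail("Expected a server error but got \(error)")
                return
            }
            XCTAssertEqual(status, 500)
        }
        exp.fulfill()
    }
    waitForExpectations(timeout: 1)
}
```

The tearDown call to FixtureLoader.reset() then removes this stub so it cannot bleed into the next test.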
Okay, so now we've got the error
that it should have returned an error but it didn't.
That's 'cause we're expecting one,
but we never set up the stub.
So in this test, we can say
FixtureLoader.stubCoinListReturningError,
and when we run the test again, this time,
the error stubbed response should take precedence,
and now we've got a passing test that tests,
did we receive a serverError,
even if the actual server's not returning an error.
Okay, so that's how we use OHHTTPStubs to stub out
network calls but still allow us
to make assertions against our networking code. | https://nsscreencast.com/episodes/335-stubbing-network-requests | CC-MAIN-2019-39 | refinedweb | 5,253 | 76.15 |
Deep dive on SAP S/4HANA Migration Cockpit – Direct Transfer
HOT NEWS (18.02.2022):
SAP S/4HANA Migration Cockpit – Direct Transfer: Top 5 FAQs (as of Feb 2022) | SAP Blogs
HOT NEWS (18.05.2021):
We have released new material on the following topics:
- Transport capabilities in detail and system modifiability:
- How to use a project as rollout template; copy projects, copy objects:
- How to influence the performance / reduce the system downtime in a MC DT project. Technical and non-technical means. LINK to KBA
- How to handle migration object updates delivered by SAP (by an upgrade, a support pack/feature pack, a TCI note):
HOT NEWS (23.03.2021):
We have released a new video. It demonstrates the enhanced error analysis capabilities which come with 2020 FPS1 for file/staging and direct transfer.
Different views on messages are possible:
Activity – which messages occurred?
Message – which instances are concerned? NEW WITH 2020 FPS1
Migration object instance – which messages occurred?
If you are interested in more deep dive material on the Migration Cockpit, pls. visit our landing page!
HOT NEWS (02.02.2021):
We have released a video: How to create an own mapping rule (OnPrem 2020): LINK
Enjoy 🙂
HOT NEWS (11.11.2020):
We have released a lot of material (PDF, videos, click-troughs) on our Migration Cockpit landing page: landing page
Click on News => get to the development news for Onprem 2020
Click on Training and Education => some examples:
Deep dive Direct Transfer LTMOM => get real deep insights on how to use the modelling environment
Customer/Vendor integration in Direct Transfer
… take your time to surf all the material!
The SAP S/4HANA Migration Cockpit offers different approaches. Since 1909 there is a third approach available: transfer data directly from an SAP system.
In this blog and the linked blog posts I will provide deep dive material on the Direct Transfer approach.
If you are looking for more general or overview information pls. go to the SAP S/4HANA Migration Cockpit starter blog: LINK
There is an openSAP course available which gives insight into all three approaches – on a very detailed level! It also treats the topic of how to create your own migration objects. You can download and/or view a huge amount of PPTs & videos. The hands-on experience is still possible for another 2 months; afterwards you can use the so-called “Fully activated appliance” in order to practice.
Migrating Your Business Data to SAP S/4HANA – New Implementation Scenario
With this course, you’ll get an introduction to data migration with SAP S/4HANA, and where it fits in with respect to the different transition scenarios. The course will focus on the new implementation scenario, with a deep dive into the SAP S/4HANA migration cockpit and the migration object modeler. Week 3 (of 4 in total) exclusively treats the “Direct Transfer” approach – including an example of how to create your own migration object. We’ll also offer optional hands-on exercises so you can better familiarize yourself with the different migration approaches. For these exercises, you can use a system image from the SAP Cloud Appliance Library. For the Direct Transfer there is one E2E exercise where you migrate “Activity Types”. In a second exercise you adapt the selection from the source system.
Table of contents
1. General: Can a non-unicode SAP system be the source system for the Direct Transfer?
2. LTMOM: Can I create all kinds/types of transfer rules (1909)?
3. MC & LTMOM: SAP S/4HANA Migration Cockpit – Direct Transfer – Value mapping (1909) – see LINK
4. MC: Can I repeat the selection (1909)? See LINK
5. LTMOM: How can I influence the selection (1909)? See LINK
6. LTMOM: How to create your own migration object (1909)?
7. LTMOM: How to exclude a field from being migrated in (1909)?
8. LTMOM: How to add a Z-field to a Migration Object?
9. LTMOM: How can I transfer completely self-defined custom developments, for example a Z table?
ANSWERS
1. General: Can a non-unicode SAP system be the source system for the Direct Transfer?
Yes, a non-unicode system should also work as source system. The MC uses RFC to connect the two systems, and the RFC layer should do the conversion automatically.
This blog post might be of use: . Some of the notes mentioned there might be of interest.
2. LTMOM: Can I create all kinds/types of transfer rules (1909)?
In 1909 you can create:
- transfer rules of type “move” – meaning you move the content of this field unchanged from the source field to the target field
- transfer rules of type “fixed value” – examples: posting date; a migration account as offset account for initial postings in the target
- transfer rules of type “value mapping” – map a source value 1:1 to a new value in the target system. Pls. note that the automatic creation of mapping proposals does not work in 1909, which means you have some additional manual steps to do (see next question).
In 1909 it is not possible to create your own source code rules.
6. LTMOM: How to create your own migration object (1909)?
This video shows how to create your own migration object.
I use the example: new migration object “Cost Center Texts”.
Slides shown in the video: LINK
7. LTMOM: How to exclude a field from being migrated in (1909)?
To exclude fields from the transfer you have to remove the field level mapping of data for the respective field that you do not want to migrate. It depends on the respective API used for this migration object if this approach works. It only works if the API supports migration without the respective fields being mapped.
Starting at 27:08 you can see how to maintain the “field mapping” in the Direct Transfer approach for a migration object. Link demo video create own migration object (Direct Transfer)
8. LTMOM: How to add a Z-field to a Migration Object?
If Z fields exist in the source system (i.e. an SAP standard table has been enhanced by a custom field), the Migration Cockpit, Direct Transfer, recognizes them and these fields are automatically shown in LTMOM as source fields. In case matching target fields exist, the Z source fields can be mapped to the target fields in the field mapping in LTMOM.
Pls. note: only primary structures (parent and child tables) of the data model can be adapted.
There may be isolated cases where Z fields are not covered by the API used in the migration object. In this case pls. open a ticket on component CA-DT-MIG.
9. LTMOM: How can I transfer completely self-defined custom developments, for example a Z table?
Create your own API and use it in your own migration object. The prerequisites for your own API are essentially the same as the BAPI definitions in the BAPI Programming Guide (Implementing the Function Module). The API must not execute ‘COMMIT WORK’ commands.
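A minimal skeleton for such an API might look like the following ABAP (the table, parameter, and message names are purely illustrative; the real interface must follow the BAPI Programming Guide conventions):

```abap
FUNCTION z_mig_create_ztable_entry.
*"--------------------------------------------------------------------
*"  IMPORTING
*"     VALUE(is_data) TYPE zmy_table
*"  EXPORTING
*"     VALUE(et_return) TYPE bapiret2_t
*"--------------------------------------------------------------------
  " Validate the record before touching the database, so only
  " logically consistent instances are created.
  IF is_data-key_field IS INITIAL.
    APPEND VALUE #( type = 'E' id = 'ZMIG' number = '001' ) TO et_return.
    RETURN.
  ENDIF.

  " Insert the record. Note: no COMMIT WORK here, since the
  " Migration Cockpit controls the commit itself.
  INSERT zmy_table FROM is_data.
  IF sy-subrc <> 0.
    APPEND VALUE #( type = 'E' id = 'ZMIG' number = '002' ) TO et_return.
  ENDIF.
ENDFUNCTION.
```

The function module must be RFC-enabled so the Migration Cockpit can call it, and errors are reported back via the return table rather than by raising exceptions or committing.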
Heike Jensen
Product Management SAP S/4HANA Migration Cockpit
Nice blog. Thanks.
Hi Heike,
2. LTMOM: Can I create all kinds/types of transfer rules (1909)?
Is in the Roadmap (future FSP or 2010/2009 ) to include other type of transformation rule? E.g. If field not blank then MOVE or even better we can create our own custom rules?
Thanks
Hi John,
is it perhaps possible that you enroll to this openSAP course “Migrating Your Business Data to SAP S/4HANA – New Implementation Scenario”? The course is still available. The material is too much to be uploaded here.
In week 3 the topic is “Direct Transfer“. I have 7 units there (always PPT & demo video), units 5 & 6 are about creating own objects. In the unit 6 video I demonstrate the field mapping and there you see that the Trule type "move" is automatically assigned. We know that "custom rules" is a requested topic but unfortunately, currently I cannot give you a concrete statement about future FPS/releases.
Best regards,
Heike
For more information about SAP Data Management and Landscape Transformation (DMLT) see this page or read the solution brief and the SAPPI Success Story. You can contact the global SAP DMLT team by email sap_dmlt_gce@sap.com. They provide services for Selective Data Transition, New Implementation and System Conversion that can also be delivered remotely.
Hi Expert,
One of our clients is interested in exploring the above-suggested option for a DM project. But due to high scrutiny on the integrity of data, we have an additional requirement for an automated reconciliation tool to reconcile the data between source & target systems post migration.
Q1: Are there any dashboards/standard reports available for summary/field-based reconciliation with this option?
Q2: Is there any other approach we can use which will satisfy our requirements?
Thanks In Advance
BR,
Shobhit Taggar
Hello Shobhit,
unfortunately, there is no standard functionality available in the Migration Cockpit for comparing/reconciling data between source and target system.
The SAP consulting unit (see post above from Olivia Ghebrezghi) offers these kind of services.
No, there is no standard approach available. You can of course use SAP standard reports e.g. for master data, balances, and so on to compare the data.
Best regards,
Heike
Thanks Heike ! We plan to build the custom recon tool to cover the client requirements.
Hi Heike;
Excellent post. I have a question regarding point 9: is it possible to migrate a “Z” program or a complete RICEF object from ECC 6.0 to S4 1909 with direct transfer?
Regards
Hello Nelson,
thanks for you positive feedback!
No, the Migration Cockpit does not transfer objects from the ABAP environment.
It is meant to do a New Implementation and thus, we only offer migration objects to transfer the necessary application data for a "new start" in S/4. This means: migration objects for master data and "necessary" transaction data such as FI open items, FI balances and so on.
You find the complete list of available migration objects for each approach and release on the help portal
The modelling environment (transaction LTMOM) of the Migration Cockpit offers the possibility to change migration objects or create own migration objects. It is not possible to transfer programs or the like.
Best regards,
Heike
Hello Heike,
Is it possible to copy a rule completely, and not just the entries of a rule?
The reason is the following: before we carry out a migration, we first want to run some tests. Unfortunately, it looks like with Direct Transfer we cannot copy our settings in the Migration Object Modeler into a new project. Only the entries can be handled via up- and download. It would be good if a completed project could be copied. Is there perhaps such an option?
Hello Dietmar,
could you perhaps open a ticket on CA-LT-MC, leave your email address for me there and ask for it to be forwarded?
In 1909 this transport and copy function does not exist yet. This functionality is not quite as trivial as it may appear. Various generated objects have to be handled.
This function, as well as various other new functions, will be available in version 2020. Release to customer is on 07.10. It is best to go to our landing page, where you will find the complete DevNews for the Migration Cockpit 2020.
Regards, Heike
Hello Heike,
thanks for your reply. I have opened a ticket (649104 / 2020 Migration Cockpit Direct Transfer).
Regards
Dietmar
Hello Heike,
For point #9 (transferring Z tables), we have 100+ tables in the partner namespace that need to be copied from ECC to S4. It will be a 1:1 transfer. If we have to create an FM for each table, it would be a huge development activity. Is there a simpler way to do this instead of a BAPI as the target?
Note that we are not upgrading ECC to S4; it's a new S4 implementation, but we need to copy the data from the partner namespace tables.
Regards, Parag.
Hello Parag,
The SAP S/4HANA Migration Cockpit is able to migrate custom developments such as
The design of the Migration Cockpit is for “instance-based” processing. Means, each(!) line of the root table in the source structure definition is treated as “instance”.
Therefore, the “instance-based” migration means, that each sender record becomes an “instance” in the migration cockpit and is also treated as single instance while transferring the data. This makes sense for processing of an instance as a whole entity (usually an instance consists of several source tables, which entries have a relation to each other via foreign key relations). The target API is doing plausibility checks for each instance before creating the instance on DB level. So, it is an “all or nothing” principle per instance to create/update only logically consistent instances.
If the intention of your Z migration object is to migrate the Z table from the SAP ERP source system 1:1 to the S/4HANA target system, it is a simple table copy in the end.
Due to the design of the migration cockpit, the tool is of course able to fulfill such a migration, but in the end, the runtime of such a transfer will never be as expected for a simple 1:1 table transfer as each line of the table is treated individually (for example having an own log).
If you have for example 3.000.000 entries in your Z table this results in 3.000.000 single instances in the migration cockpit, which will again be processed one by one, just to fill the target table via the API (= single function call per line).
So if you want to do a simple 1:1 table transfer without the need for any transformations, plausibility or consistency checks, you should consider developing your own RFC-enabled function module (the currently used APIs are just RFC-enabled function modules) and a Z program which retrieves the data from the Z table and transfers it 1:1 from sender to target. This will be the easiest and fastest solution.
I hope this information helps.
Best regards, Heike
Hi Heike,
I attended your course on open SAP, it was a real help. Thank you!
I was wondering if for direct transfer there exists something similar to /1LT/DS_MAPPING from the staging table approach.
Best regards,
Cristina
Hello Cristina,
thanks for your positive feedback 🙂
You can upload and download mapping entries for some or all mapping tasks in LTMOM and, since release 2020, also in the Migration Cockpit (Fiori). For direct transfer, there is no other way to access the mapping. Why are you looking for /1LT/DS_MAPPING-like structures? Is it because of the amount of mappings, do you want to fill it with a program, or what is the reason behind it?
Best regards,
Heike
Thank you for your quick reply.
We are investigating the possibility of extracting data from 3 ECC systems to S4/HANA 1909 using Direct Transfer, but we need to reconcile, validate and harmonize the data. Having the mapping tables, we would like to create a program/script to help us do the reconciliation of data.
Best regards,
Cristina
Hello Cristina,
cleansing & harmonizing data after having selected it into the Migration Cockpit / Direct Transfer is not possible. You can adjust the selection (using transaction LTMOM) and skip dedicated items from further processing (Fiori, instance list; with 2020: mass processing in instance list) after you have selected them.
With 2020, you have a so-called skip-rule available with which you can skip items from the selection by using your own coding.
Cleansing, golden record and so on are not possible using the Migration Cockpit/DT.
If harmonization is needed, you have to do it in the source system.
Mapping in our terminology means: map field values old (selected) to new values and then pass this transformed data record to the API for creation/posting into the target system. E.g. the selected cost center is 567 with responsible Müller, you map 567 to 987 and Müller to Jensen.
Best regards,
Heike
Hello Heike,
is it possible to create custom source code mappings for the direct transfer approach? In case it is not possible: is this planned for the future?
Best regards
Peter
Hello Peter,
starting with release onprem 2020 FPS0 it is possible to create own source-code rules.
Best regards,
Heike
Hello Heike,
Is it possible to migrate a huge volume of data, like 10 million articles?
Best Regards,
Vajravel
Hello Shanmugavel,
The AFS Migration solution in the Migration Cockpit is designed to handle heavy loads of data. Based on the system configuration you have (with respect to the number of jobs and so on), you should definitely be able to migrate 10 million AFS materials as articles into the S/4HANA FVB system.
Thanks and Regards,
Sreejith Rajashekaran
Hello Vajravel,
from a technical point of view it is possible. As always: it is a question of runtime and the respective cutover plan. You should check which instances might be loaded upfront (during uptime).
Further, you should prepare your value mapping offline, because dealing with a huge amount of data in the Fiori app might be very time-consuming. That's why it would be advisable to prepare it ready for upload.
Best Regards,
Claudia
Dear Vajravel,
based on our experience, if this is a complex table structure object (e.g. material master), less than 1 million is better, as the data selection with generation of all mapping tasks can take a very long time and may not finish. If this is a simple table structure (e.g. change history), up to 10 million may be OK, but we still need to split into different packages for data selection and data migration.
Hope this could help you.
Chen Jun
Hello,
I have a question about the SAP S/4HANA Migration Cockpit: could it be used to migrate company codes, master data and transaction data from an existing country-based ECC system into an existing global S/4HANA system? In other words, would it be possible to consolidate an existing S/4HANA system with new data from an existing SAP ECC? The idea would be to migrate the data from ECC only once, and after that decommission ECC.
According to the FAQ, it is not recommended: ''The SAP S/4HANA Migration Cockpit is designed for a new implementation of an SAP S/4HANA system, therefore, data records can only be migrated once.'' Too bad ...
But then, SAP Note 2684818 seems to open the door on this possibility, depending on the use case: ''
So, could the Migration Cockpit be used to consolidate an existing S/4HANA system from country-specific ECC data?
Thank you
Hello Raoul,
The SAP S/4HANA Migration Cockpit migrates master data and transactional data as e.g. open items, balances, inventory but no historical data (not reproducing the whole history).
Pls. see the full list of available migration objects here. Pls. make sure to set your release at the top of the page.
You can use the staging approach (upload files or fill staging tables directly) or the direct transfer approach (use RFC connection to SAP source system).
Of course, you can migrate into an existing SAP S/4HANA system for example in order to migrate a new/another company code from an SAP ERP system.
But the MC is not meant for a perpetual update of a system or to keep systems in sync.
For example, you cannot update records which already exist in the S/4 system. Suppose you have migrated a cost center master record already and now you want to update this master record because someone made changes in the source/old system. This is not supported by the MC.
There are some objects which have several views as e.g. product. In the staging approach there exist migration objects which can extend an existing record by new views, e.g. migration object “Product - extend existing record by new org levels”.
Pls. check the migration object list (see link above) in order to figure out how far the delivered migration objects cover your requirements.
The MC also provides a modelling environment so that you can enhance or create own migration objects.
Best regards,
Heike
Thank you very much Heike, really appreciated !
You are welcome!
Hi Heike Jensen,
Objects for real estate in the Migration Cockpit - Direct Transfer 2020 are not translated into German.
Is there a correction note?
Best regards,
Desislava
Hello Desislava,
in older releases there are some objects not translated. Unfortunately, there is no correction note for that.
Best regards,
Heike
Hello Desislava,
I want to add that with SAP S/4HANA 2020 SP04 respectively with SAP S/4HANA 2021 FPS02, migration object translations should be complete.
Best regards,
Heike
Hi Heike,
Thanks a lot for another very valuable post. Especially point 9 was a great help for me in my current task.
However, I still have an open item: I should store the identifiers from KNA1 to BUT0ID.
This means converting 4 fields from 1 record of KNA1 to 4 lines in BUT0ID.
Then mapped BUT0ID to the structure and fields where I should use it.
Besides it being an ugly solution, it does not even work, as the new records are added to BUT0ID in the target system, so the mapping will not work.
I also have tried to add a Z-table, but that was not even accepted in the Source Tables segment...
So my current solution is point 2-3, creating the entries in the target system BUT0ID, and in the next migration step I created a Z function module to map the old KUNNR with the new BP Partner id in BUT0ID. But this will not work if the values should be stored/updated in multiple tables, therefor we have the API/BAPIs.
Any solution for this scenario?
Thanks
Kind regards
Gyöngyi
Hello Gyöngyi,
Since you mentioned that you have added BUT0ID ( Identification ) to the Data model tables , I'm assuming you have Business partner activated( BUPA ) in the source system.
In such scenario , first you should use the Business partner migration object then followed by the customer and Supplier object. The Business partner object will migrate BUTOID data.
Please refer customer migration object prerequisite section
Customer Documentation
Business Partner Documentation
Business Partner
Regards,
Arun
Thanks Arun, will check with the consultants
Hello Gyöngyi,
Pls. also see our deep dive slide deck about customer/vendor/integration/migration Direct transfer.
Best, Heike
Thanks a lot Heike, will share this link with the team!
Hello ,
Thank your for support . we want to read LONG text (READ_TEXT) from source system with Direct transfer approach.
we defined
Manuel Defined tablefor long text in the source table accordingly SAP examples .
SAP standart : CNV_OT_APPL_PE_S4_ASEL_FIAA ,CNV_OT_APPL_PE_S4_ASEL_DUMMY2
Custom Manuel Table : ZV_OT_APPL_PE_S4_ASSET_TEXT
Firstly ,I couldn't find how can I read long text data for the table then i understood that it is include .
1- Should we create ZV_OT_APPL_PE_S4_ASSET_TEXT include in the source system or Migration Cockpit system. is it correct approach ?
2-I couldn't find how can Migration cockpit create a function group in the source system ? Because we created migration project in the S4D system and I saw some function can not be generated due to some include is missing some include ( in the source system (MK2) .
error message: include-report "cnv_ot_appl_pe_s4_asel_long" nichtgefunden
regards
Yahya
Hello Yaha,
pls. first use the Migration Cockpit Note analyzer SAP Note 2596411 to check if you have all current notes in the system. Standard includes should be there.
Regarding your questions: yes, the selection include has to be created in the source system. The other questions go too deep into the consulting direction, so I can unfortunately not answer them. You can contact our consulting department (sap_dmlt_gce@sap.com) if you want to get billable consulting services.
Best regards,
Heike
Hello Heike,
we are using Migration Cockpit direct transfer aproach.
we don't want to transfer data exclude some item line in the source table and we have to check two fields together. .
Our Migration cockpit version doesn't have badi to filter selection criteria .Do you have any suggestion to not read some item from source table directly.
Example
we don't want to read below data (SKIP)
We want to read below data (NOT SKIP)
PS :We watched all video and it doesn't have same screen in our system.
Our system version
S4HANA ON PREMISE 2020 03 (11/2021) sap.com SAP S/4HANA 2020
MDG_FND 805 0003 SAPK-80503INMDGFND MDG Foundation
MDG_APPL 805 0003 SAPK-80503INMDGAPPL MDG Applications
S4CORE 105 0003 SAPK-10503INS4CORE S4CORE
regards
Yahya
Hi Yahya,
either you can create a filter table or manually define table to filter in source system
Chen Jun | https://blogs.sap.com/2020/03/19/deep-dive-on-sap-s-4hana-migration-cockpit-direct-transfer/ | CC-MAIN-2022-40 | refinedweb | 4,132 | 62.88 |
table of contents
NAME¶
ualarm - schedule signal after given number of microseconds
SYNOPSIS¶
#include <unistd.h>
useconds_t ualarm(useconds_t usecs, useconds_t interval);
ualarm():
Since glibc 2.12:
(_XOPEN_SOURCE >= 500) && ! (_POSIX_C_SOURCE >= 200809L)
|| /* Glibc since 2.19: */ _DEFAULT_SOURCE
|| /* Glibc <= 2.19: */ _BSD_SOURCE
Before glibc 2.12:
_BSD_SOURCE || _XOPEN_SOURCE >= 500
DESCRIPTION¶.
RETURN VALUE¶
This function returns the number of microseconds remaining for any alarm that was previously set, or 0 if no alarm was pending.
ERRORS¶
ATTRIBUTES¶
For an explanation of the terms used in this section, see attributes(7).
CONFORMING TO¶
4.3BSD, POSIX.1-2001. POSIX.1-2001 marks ualarm() as obsolete. POSIX.1-2008 removes the specification of ualarm(). 4.3BSD, SUSv2, and POSIX do not define any errors.
NOTES¶.
SEE ALSO¶
alarm(2), getitimer(2), nanosleep(2), select(2), setitimer(2), usleep(3), time(7)
COLOPHON¶
This page is part of release 5.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | https://dyn.manpages.debian.org/unstable/manpages-dev/ualarm.3.en.html | CC-MAIN-2022-33 | refinedweb | 171 | 62.14 |
Create a CMS-Powered Blog
So you’ve just launched your Vue.js website, congrats! Now you want to add a blog that quickly plugs into your website and you don’t want to have to spin up a whole server just to host a Wordpress instance (or any DB-powered CMS for that matter). You want to just be able to add a few Vue.js blog components and some routes and have it all just work, right? What you’re looking for is a blog that’s powered entirely by API’s you can consume directly from your Vue.js application. This tutorial will teach you how to do just that, let’s dive in!
We’re going to quickly build a CMS-powered blog with Vue.js. It uses ButterCMS, an API-first CMS that lets you manage content using the ButterCMS dashboard and integrate our content API into your Vue.js app. You can use ButterCMS for new or existing Vue.js projects.
Install
Run this in your commandline:
npm install buttercms --save
Butter can also be loaded using a CDN:
<script src=""></script>
Quickstart
Set your API token:
var butter = require('buttercms')('your_api_token');
Using ES6:
import Butter from 'buttercms'; your blog posts. Your account comes with one example post which you’ll see in the response.
Display posts
To display posts we create a
/blog route (using Vue Router) in our app and fetch blog posts from the Butter API, as well as a
/blog/:slug route to handle individual posts.
See the ButterCMS API reference for additional options such as filtering by category or author. The response also includes some metadata we’ll use for pagination. } ] })
Then create
components/BlogHome.vue which will be your blog homepage that lists your most recent posts.
<script> import { butter } from '@/buttercms' export default { name: 'blog-home', data() { return { page_title: 'Blog', posts: [] } }, methods: { getPosts() { butter.post.list({ page: 1, page_size: 10 }).then(res => { this.posts = res.data.data }) } }, created() { this.getPosts() } } </script>
<template> <div id="blog-home"> <h1>{{ page_title }}</h1> <!-- Create `v-for` and apply a `key` for Vue. Here we>
Here’s what it looks like (note we added CSS from for quick styling):
Now create
components/BlogPost.vue which will be your Blog Post page to list a single post.
<script> import { butter } from '@/buttercms' export default { name: 'blog-post', data() { return { post: {} } }, methods: { getPost() { butter.post.retrieve(this.$route.params.slug) .then(res => { this.post = res.data }).catch(res => { console.log(res) }) } }, created() { this.getPost() } } </script>
<template> <div id="blog-post"> <h1>{{ post.data.title }}</h1> <h4>{{ post.data.author.first_name }} {{ post.data.author.last_name }}</h4> <div v-</div> <router-link {{ post.meta.previous_post.title }} </router-link> <router-link {{ post.meta.next_post.title }} </router-link> </div> </template>
Here’s a preview:
Now our app is pulling all blog posts and we can navigate to individual posts. However, our next/previous post buttons are not working.
One thing to note when using routes with params is that when the user navigates from
/blog/foo to
/blog/bar, the same component instance will be reused. Since both routes render the same component, this is more efficient than destroying the old instance and then creating a new one.
Be aware, that using the component this way will mean that the lifecycle hooks of the component will not be called. Visit the Vue Router’s docs to learn more about Dynamic Route Matching
To fix this we need to watch the
$route object and call
getPost() when the route changes.
Updated
<script> section in
components/BlogPost.vue:
<script> import { butter } from '@/buttercms' export default { name: 'blog-post', data() { return { post: null } }, methods: { getPost() { butter.post.retrieve(this.$route.params.slug) .then(res => { this.post = res.data }).catch(res => { console.log(res) }) } }, watch: { $route(to, from) { this.getPost() } }, created() { this.getPost() } } </script>
Now your app has a working blog that can be updated easily in the ButterCMS dashboard.
Categories, Tags, and Authors
Use Butter’s APIs for categories, tags, and authors to feature and filter content on your blog.
See the ButterCMS API reference for more information about these objects:
Here’s an example of listing all categories and getting posts by category. Call these methods on the
created() lifecycle hook:
methods: { // ... getCategories() { butter.category.list() .then(res => { console.log('List of Categories:') console.log(res.data.data) }) }, getPostsByCategory() { butter.category.retrieve('example-category', { include: 'recent_posts' }) .then(res => { console.log('Posts with specific category:') console.log(res) }) } }, created() { // ... this.getCategories() this.getPostsByCategory() }
Alternative Patterns
An alternative pattern to consider, especially if you prefer writing only in Markdown, is using something like Nuxtent. Nuxtent allows you to use
Vue Component inside of Markdown files. This approach would be akin to a static site approach (i.e. Jekyll) where you compose your blog posts in Markdown files. Nuxtent adds a nice integration between Vue.js and Markdown allowing you to live in a 100% Vue.js world.
Wrap up
That’s it! You now have a fully functional CMS-powered blog running in your app. We hope this tutorial was helpful and made your development experience with Vue.js even more enjoyable :) | http://semantic-portal.net/vue-cookbook-create-a-cms-powered-blog | CC-MAIN-2021-39 | refinedweb | 860 | 58.58 |
Best described as a "web Application Server for PyThon".
This WiKi is in ZWiki, which runs on ZoPe. Running on zope-space that I'm renting (not dedicated hardware).
Supports a text format called Structured Text.
A key product for Zope is the ZopeCMF.
I've been unhappy with the zope books I've read/browsed. Probably better off with the developer's guide
Jeffrey P Shell recommendations (Jul'03):
I try to do a lot of "Product" based development, and I strive to do smaller components with a clean inheritance model. I think they key is just doing smart Python development. Or just smart OOP development, meaning - smaller classes, lots of collaborators. My primary critique of Zope 2 has long been its inheritance tree mess. Zope 3 aims to fix this.
Since Products are written in clean Python, they're a bit easier to do unit tests. Instead of testing the web side, you test the Model (or at least, the Model parts), preferably by aping the Controller parts (the methods that will interact with web forms). The [CMF] has a large body of unit tests, none of which is dependent on a web setup or even a persistent ZODB connection. It takes some work to set up a test harness, but it can be done. There should be enough examples floating through the Zope and [CMF] core to provide examples of how to do that.
For testing web stuff, I know they've come up with some Functional Tests for Zope 3. Richard Jones has released [WebUnit] (I believe) that can set up web requests and test responses.
ZPT is a heavily established Zope 2 technology. It's been in the core since Zope 2.4 and works great. I love it. I can't imagine any other way of templating. (Template System)
Formulator product (does this work happily with SQL databases?)
I would recommend avoiding [Z Classes]. I'd also get familiar with the [OFS] package, particularly the differences between [OFS].[Simple Item].Item (not persistent) and [OFS].[Simple Item].[Simple Item] (persistent); learn [Object Manager]; learn the classes that make up a Folder; and become familiar with [Class Security Info] (should be documented in the [Zope Developers Guide]). This is useful even for non-CMS related stuff. I'm making applications now out of fairly simple persistent objects that load all their page templates off of the file system (out of the product) instead of the ZODB. The objects in question are the View/Controller pair of the application and then talk to other classes and objects inside the Product to do validation, LDAP communication, etc. In total - there are five objects that are put into the ZODB that make the application work.
Docs to help learn:
the [Zope Book], available in print and in free HTML. Seems a little content-mgmt focused to me
Nov'02 article - a bit [LinUx]-focused
links to intro dev pieces
process recommendations - stay away from the [ZMI] for any real development.
Jeffrey P Shell on the ZoPe inheritance tree
Mar'02 tutorialcheckout/Packages/[Job Board Ex]/Tutorial.html on building "job list/board" app with Zope3 - business classes vs ZPT, etc.
Basic intro/weird notes:
strange restriction on spacing for
dtml-let -
dtml-let value = "expression" does not work, but
dtml-let value="expression" works fine
One of the big questions when using Zope as an application server instead of just a content management system is which kind of custom code to use: search
[Python Script]-s: can only use a subset of python libraries
[ZClass]-es - see Death to zClasses vs this note on using them as starting point
Products - use a minimal facade if appropriate
External Method-s: *gets context information in an obscure and often confusing way* (intro )
SQL interaction (using an RDBMS instead of ZODB)
Ian Bicking isn't happy with it
when you want to pass the
REQUEST bits as an argument, plus some other variables
Zope support via email
main/general list
search via yahoo group but don't post that way - email to mailto:zope@zope.org
Recovering Corrupted Data.fs
Debugging, testing
import zLOG zLOG.LOG(subsystem, severity=0, summary, message)
TextPad tips
Archetypes framework for developing new content types
Zope3 will have a new architecture. It's being done in pieces. Here are some links:
roadmap/schedule as of Jan'03, plan was for beta Jun'03, release Sept'03. Migration tools will come after that!
oops z2004-11-09- Zope3 Out
"Five Project" aims to provide some ZoPe-3 features in ZoPe-2.
Dec'2005 - Jeffrey P Shell offers some v3 quick-start links
[Open Flow] --2003/09/08 02:33 [GMT]
a very interesting zope product is openflow -- Josef Davies-Coates | http://webseitz.fluxent.com/wiki/ZoPe | crawl-002 | refinedweb | 793 | 63.49 |
throwing some bits of python code to jython to see how near
it is to be usable to my purposes. It has been fun by now, as the
failed tests forced me to start looking on the Jython code :).
One thing that I discovered is that even when an import fails jython
saves the result on sys.modules, setting the new entry to Py.None
(arround the middle of imp.import_name) . That's different to what
CPython does (AFAICS, import.c:mark_miss only saves failed relative
imports), and in the practice it doesn't allow the following trick,
which works under CPython2.3 :
try:
import foo
except ImportError:
download_from_somewhere('foo.zip')
sys.path.append('foo.zip')
import foo
What's the reason behind this behaviour? Performance alone? Can we
change it to mimic CPython behaviour?
--
Leo Soto M. | https://sourceforge.net/p/jython/mailman/message/10848094/ | CC-MAIN-2017-47 | refinedweb | 138 | 77.03 |
Hi all,
I hope that this isn't a repost, but i couldn't find anything related on this forum.
I am trying to build an Android app and to include the OrgModeParser for C++ () which also requires qt5. So i installed qt5 via homebrew since I'm on OSX and installed OrgModeParser as described in the repository's Readme. Just to try out if including headers works with Xcode projects, i made a new JUCE-project with the Introjucer and created the exporters Android-Studio and Xcode. I added
#include "OrgFile.h"
to the MainComponent.h file. For Xcode the following setting in the "Header search paths" field worked out for me:
/usr/local/include/OrgModeParser/ /Users/danielhoepfner/Desktop/Android_Project/OrgModeParser/** /usr/local/Cellar/qt5/5.5.1_2/*
I pasted the above into the same fields for Debug und Release configuration of the Android exporter and I get the error
'QCoreApplication' file not found
from which i conclude that the first (non-recursive) search path seems to work well, but not the other two recursive ones. Does anyone know how to set this for Android-Studio inside the Introjucer?
Cheers and thanks for help in advance
Daniel | https://forum.juce.com/t/recursive-header-search-paths-for-android-exporter/16675 | CC-MAIN-2018-34 | refinedweb | 198 | 53.71 |
First time here? Check out the FAQ!
I just ran the code on a HP Elitebook with a Quadro 3000M running CUDA 5.0. Initiation time is now only 2040 ms. It's probably driver related as well.
Thanks for this clarification. I understand the multiple run requirement, which will indeed be the case for this match set-up as well (i.e. normally I will run at least 9 to 49 templates against the same image).
Your suggestion does not lead to any significant improvement, however. Initiation time stays around 26 seconds, even if I use smaller input images. I still do not understand whether this is specific to OpenCV gpu implementation. I am running a pure-CUDA phase correlation based on cuFFT, which gives similar results as matchTemplate, but does not seem to have an excessive initiation penalty.
It runs in 1710 ms total (330 ms for the GPU part) with the same images as before.
Dear All,
I am interested in using template matching on large (satellite) images (at least 8192 by 8192 pixels), using templates from reference image sets that are typically 256 by 256 or 512 by 512 pixels in size. A normal use case is matching N by N templates against the image (N=5,7,9...).
I am using OpenCV 2.4.6 with CUDA 4.2. I managed to get the gpu version of matchTemplate going, but ran into the initiation timing issue. This causes the gpu version to be slower than the cpu version, when used in a single image/single template match. I have done careful timing analysis (see code below) and find that the code is spending 98% of the time on initiation. I know that this has to do with the JIT compilation of the CUDA related code, but the reference to check this further in the documentation on the nvcc compiler and the CUDA_DEVCODE_CACHE environment variable is leading nowhere to a solution (I set the environment variable, but nothing improves).
This should be a compile once, run often code case, so if someone got the code caching working correctly, I'd appreciate if that knowledge could be shared.
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/gpu/gpu.hpp"
#include <iostream>
#include <stdio.h>
using namespace std;
using namespace cv;
/// Global Variables
Mat img;
Mat templ;
Mat result;
int match_method;
/** @function main
Stripped down version, without GUI functionality
*/
int main( int argc, char** argv )
{
/// Load image and template
img = imread( argv[1], 1 );
templ = imread( argv[2], 1 );
match_method = atoi(argv[2]);
int result_cols = img.cols - templ.cols + 1;
int result_rows = img.rows - templ.rows + 1;
result.create( result_cols, result_rows, CV_32F);
size_t t0 = clock();
try
{
gpu::printCudaDeviceInfo(gpu::getDevice());
gpu::resetDevice();
}
catch (const std::exception& e)
{
//no GPU, DLL not compiled with GPU
printf("Exception thrown: %s\n", e.what());
return 0;
}
size_t t1 = clock();
printf("GPU initialize: %f ms\n", (double(t1 - t0)/CLOCKS_PER_SEC*1000.0));
gpu::GpuMat d_src, d_templ, d_dst;
d_templ.upload(templ);
printf("GPU load templ: %f ms\n", (double(clock() - t1)/CLOCKS_PER_SEC*1000.0));
d_src.upload(img);
printf("GPU load img: %f ms\n", (double(clock() - t1)/CLOCKS_PER_SEC*1000.0));
//d_templ.upload(templ);
//printf("GPU load templ: %f ms\n", (double(clock() - t1)/CLOCKS_PER_SEC*1000.0));
d_dst.upload(result);
printf("GPU load result: %f ms\n", (double(clock() - t1)/CLOCKS_PER_SEC*1000.0));
/// Do the Matching
size_t t2 = clock();
printf("GPU memory set-up: %f ms\n", (double(t2 - t1)/CLOCKS_PER_SEC*1000.0));
gpu::matchTemplate( d_src, d_templ, d_dst, match_method );
size_t t3 = clock();
printf("GPU template match: %f ms\n", (double(t3 - t2)/CLOCKS_PER_SEC*1000.0));
/// Localizing the best match with minMaxLoc
double minVal; double maxVal; Point minLoc; Point maxLoc;
Point matchLoc;
gpu::minMaxLoc( d_dst, &minVal, &maxVal, &minLoc, &maxLoc);
size_t t4 = clock();
printf("GPU minMaxLoc: %f ms\n", (double(t4 - t3)/CLOCKS_PER_SEC*1000.0));
/// For SQDIFF and SQDIFF_NORMED, the best matches are lower values. For all the other methods, the ...
Even after installing sphinx, the doc build will not work after a new cmake. There is a problem in cmake/OpenCVDetectPython.cmake. The line
if(SPHINX_OUTPUT MATCHES "^Sphinx v([0-9][^ \n]*)")
does not work correctly. Only if I force SPHINX_VERSION to 1.2 (my version) the documentation will built (both HTML and PDF).
The problem is related to executing sphinx-build, which returns a multi-line response, which includes the Sphinx version as the second line. I guess the MATCHES statement does not handle that correctly (I am not a cmake expert, though).
I am using the git procedure to build from source (2.4).
After a bit of extra work, I got the groovy problem solved as well! For some reason the System.loadLibrary(CORE.NATIVE_LIBRARY_NAME) does not work well with Groovy.
I wrote the following (somewhat trivial) Java class:
import org.opencv.core.Core;
public class LibLoader {
public static void load() {
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
}
}
Compile with:
javac -cp ../build/bin/opencv-245.jar:. LibLoader.java
LibLoader.load()
/*
* Detects faces in an image, draws boxes around them, and writes the results
* to "faceDetection.png".
*/
LibLoader.load()
println("\nRunning DetectFaceDemo")
// Create a face detector from the cascade file in the resources directory.
// Note: I got rid of the getClass.getResource.getPath construction (lazy)
def faceDetector = new CascadeClassifier("resources/lbpcascade_frontalface.xml")
def image = Highgui.imread("resources/AverageMaleFace.jpg")
// Detect faces in the image.
// MatOfRect is a special container class for Rect.
def faceDetections = new MatOfRect()
faceDetector.detectMultiScale(image, faceDetect.
def filename = "faceDetection.png"
println(String.format("Writing %s", filename))
Highgui.imwrite(filename, image)
Compile and run with:
groovyc -cp ../build/bin/opencv-245.jar:. DetectFace.groovy
java -cp ../build/bin/opencv-245.jar:/usr/local/groovy/embeddable/groovy-all-2.1.4.jar:. -Djava.library.path=../build/lib DetectFace
Results:
Running DetectFaceDemo
Detected 1 faces
Writing faceDetection.png
I reckon I will get groovy to work without the need for compilation first (there seems to be a problem with java.library.path).
What's neat about groovy is the simplicity of the code. I used to work with Java Advanced Imaging, but now that OpenCV has gpu-enabled routines, I want to go this road.
I get this error ONLY when I try to run as a groovy script, but not when I use the java classes. I use OpenCV 2.4.5 (installed via git as suggested in the intro to Java development OpenCV 2.4.5 document). Instead of using the ant build, I simply compile and then run SimpleSample.java which works fine, as in:
javac -cp ../build/bin/opencv-245.jar:. SimpleSample.java
java -cp ../build/bin/opencv-245.jar:. -Djava.library.path=../build/lib SimpleSample
My goal is to run this in groovy. I have groovy-fied SimpleSample.java as follows:
import org.opencv.core.Core
import org.opencv.core.Mat
import org.opencv.core.CvType
import org.opencv.core.Scalar
System.loadLibrary(Core.NATIVE_LIBRARY_NAME)
println("Welcome to OpenCV " + Core.VERSION)
def m = new Mat(5, 10, CvType.CV_8UC1, new Scalar(0))
println("OpenCV Mat: " + m)
...
I compile this with groovyc and then run it with java, which throws the exception.
groovyc
groovyc -cp ../build/bin/opencv-245.jar:. TestGroovy.groovy
java -cp ../build/bin/opencv-245.jar:/usr/local/groovy/embeddable/groovy-all-2.1.4.jar:. -Djava.library.path=../build/lib TestGroovy
Welcome to OpenCV 2.4.5.0
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.opencv.core.Mat.n_Mat(IIIDDDD)J | http://answers.opencv.org/users/3870/ggl/?sort=recent | CC-MAIN-2019-18 | refinedweb | 1,240 | 51.85 |
Opened 7 years ago
Closed 7 years ago
Last modified 7 years ago
#13235 closed (wontfix)
Better manage.py: import django after importing settings
Description
Sometimes it may be necessary to use custom versions of libs for certain project. They may be placed to special dir which should be first in
sys.path. This dir may be inserted to
sys.path at first lines in settings.py
import os import sys PROJECT_ROOT = os.path.realpath(os.path.dirname(__file__)) PROJECT_LIBS = os.path.realpath(os.path.join(PROJECT_ROOT, '..', 'lib')) if not PROJECT_LIBS in sys.path: sys.path.insert(0, PROJECT_LIBS)
But! We can't place custom
django to that dir, because
manage.py imports
django first, then imports
settings. It seems, the more preferred way is to import settings first.
#!/usr/bin/env python try: import settings # Assumed to be in the same directory. except ImportError: import sys) from django.core.management import execute_manager if __name__ == "__main__": execute_manager(settings)
The example above is based on original manage.py, but with one line moved from top to bottom:
from django.core.management import execute_manager
Change History (2)
comment:1 Changed 7 years ago by
comment:2 Changed 7 years ago by
Sometimes it may be necessary to use custom versions of libs for certain project
A solution exists for this. It's called virtualenv.
To my mind, f you need to do sys path modification, the right place to do that isn't settings.py -- it's manage.py itself.
Marking wontfix; if you feel particularly passionate about this, please start a discussion on django-developers. | https://code.djangoproject.com/ticket/13235 | CC-MAIN-2017-09 | refinedweb | 264 | 52.05 |
NAME
flock - apply or remove an advisory lock on an open file
SYNOPSIS
#include <sys/file.h>

int flock(int fd, int operation);
DESCRIPTION
Apply or remove an advisory lock on the open file specified by fd. The argument operation is one of the following:

LOCK_SH  Place a shared lock. More than one process may hold a shared lock for a given file at a given time.

LOCK_EX  Place an exclusive lock. Only one process may hold an exclusive lock for a given file at a given time.

LOCK_UN  Remove an existing lock held by this process.

A call to flock() may block if an incompatible lock is held by another process. To make a nonblocking request, include LOCK_NB (by ORing) with any of the above operations.
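A minimal sketch of typical blocking usage follows; the file path, mode, and helper name are illustrative and not part of this page:

```c
#include <sys/file.h>
#include <fcntl.h>
#include <unistd.h>

/* Acquire an exclusive advisory lock on `path`, hold it while
 * "working", then release it.  Returns 0 on success, -1 on error. */
int update_under_lock(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd == -1)
        return -1;

    if (flock(fd, LOCK_EX) == -1) {   /* blocks until the lock is free */
        close(fd);
        return -1;
    }

    /* ... critical section: read/modify the file safely ... */

    flock(fd, LOCK_UN);               /* explicit release */
    close(fd);                        /* the lock also goes away when the last
                                         descriptor on this open file
                                         description is closed */
    return 0;
}
```

Because the lock is advisory, it only coordinates processes that also call flock(); it does not stop an uncooperative process from writing to the file.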
RETURN VALUE
On success, zero is returned. On error, -1 is returned, and errno is set appropriately.

NOTES
Since kernel 2.0, flock() is implemented as a system call in its own right, rather than being emulated in the GNU C library as a call to fcntl(2). This yields classical BSD semantics: there is no interaction between the types of lock placed by flock() and fcntl(2), and flock() does not detect deadlock. (Note, however, that on some modern BSDs, flock() and fcntl(2) locks do interact with one another.)
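The nonblocking variant surfaces contention through errno rather than blocking; a small sketch (the helper name is illustrative, not from this page):

```c
#include <sys/file.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>

/* Try to take an exclusive lock without blocking.
 * Returns 1 if the lock was acquired, 0 if an incompatible lock is
 * already held elsewhere, -1 on any other error. */
int try_lock(int fd)
{
    if (flock(fd, LOCK_EX | LOCK_NB) == 0)
        return 1;
    if (errno == EWOULDBLOCK)
        return 0;          /* lock is held by another process */
    return -1;             /* genuine error: bad fd, interrupted, ... */
}
```

On Linux, EWOULDBLOCK is the value LOCK_NB failures report (it is the same value as EAGAIN), so checking it is the portable way to distinguish "busy" from a real error.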
SEE ALSO
Documentation/filesystems/locks.txt in the Linux kernel source tree (Documentation/locks.txt in older kernels)
COLOPHON
This page is part of release 3.53 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at http://www.kernel.org/doc/man-pages/.
Introduction
A user agent is a string of information identifying browser, operating system, the web server, and several other details. The browser sends user agent to the websites it loads. Whenever a browser connects to the website, it sends user agent’s information string in its HTTP header. The value of the user agent is different for each browser. The purpose of the user agent is to:
- to load the webpage as per the particular browser
- display webpage content as per the device
- helps in getting the statistics about the browser and operating systems in use by their users
Changing to a mobile user agent
When we are connecting to a website using a browser, we can trick the site to think that a different browser is loading it. Changing the user agent can do the work. Here, in this tutorial, we will see how we can run the WebDriver script using a mobile user agent like iPhone. This means although we will use Chrome or Firefox browser for the automation code, the test site will think its iPhone mobile browser.
Chrome Settings
Firstly, we need to do the setting in the browser to procure the user agent string which we can pass from the script. Let us see the setting for the Chrome browser.
- Goto Chrome > More tools > Extensions > Chrome web store and add any user agent switcher. There are several switchers available. I am using the first search result which is Chrome UA Spoofer user agent switcher. The purpose of it is to get the user agent string.
- Now to get the string, go to Extensions and User Agent Switcher > Details > Extension options and click.
3. Copy the user agent string for iPhone which for me is “Mozilla/5.0 (iPhone; CPU iPhone OS 6_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/6.0 Mobile/10A5376e Safari/8536.25“.
We can pass this text string from Selenium code to run the script on the iPhone browser using Chrome browser of our system.
Code for Chrome browser
For passing the user agent string, we will make use of ChromeOptions class. With the help of this class, we can send details regarding the browser session by using the addArguments() method. Please look at the following code.
package seleniumAutomationTests; import org.openqa.selenium.WebDriver; import org.openqa.selenium.chrome.ChromeDriver; import org.openqa.selenium.chrome.ChromeOptions; public class ChromeUserAgent { public static void main(String[] args) { // TODO Auto-generated method stub //Set the system property for Chrome driver System.setProperty("webdriver.chrome.driver","C://softwares//drivers//chromedriver.exe"); ChromeOptions options = new ChromeOptions(); options.addArguments("--user-agent=Mozilla/5.0 (iPhone; CPU iPhone OS 6_0 like Mac OS X) AppleWebKit/536.26 (KHTML, " + "like Gecko) Version/6.0 Mobile/10A5376e Safari/8536.25"); options.addArguments("--start-maximized"); WebDriver driver = new ChromeDriver(options); driver.get(""); //get the title of the page in a string variable String pageTitle = driver.getTitle(); //print the page title on console System.out.println(pageTitle); driver.close(); } }
When we execute this code, it will load the application under test on the chrome with the user agent setting for iPhone. Next, it will fetch the page title and print it on the console.
Firefox Setting
On the Firefox browser, we first need to set up the user agent extension.
- Go to the Firefox browser, open Add-ons > Extensions, and add the User Agent Switcher extension.
- From the extension icon, select any browser of your choice, and then copy the user agent string.
We will pass this string from the code to send the user agent details to the website under test.
Code for Firefox browser
Please consider the code given below. We will make use of a FirefoxProfile to set the user agent preference for the session, and pass the profile to the driver session using FirefoxOptions.
package seleniumAutomationTests;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxOptions;
import org.openqa.selenium.firefox.FirefoxProfile;

public class FirefoxUserAgent {

    public static void main(String[] args) {
        // Set the system property for Gecko driver
        System.setProperty("webdriver.gecko.driver", "C://softwares//drivers//geckodriver.exe");

        FirefoxProfile profile = new FirefoxProfile();
        profile.setPreference("general.useragent.override",
                "Mozilla/5.0 (iPhone; CPU iPhone OS 11_0_1 like Mac OS X) AppleWebKit/604.1.38 (KHTML, like Gecko) Version/11.0 Mobile/15A402 Safari/604.1");

        FirefoxOptions options = new FirefoxOptions();
        options.setProfile(profile);

        WebDriver driver = new FirefoxDriver(options);
        // load the webpage
        driver.get("");

        // get the title of the page in a string variable
        String pageTitle = driver.getTitle();
        // print the page title on console
        System.out.println(pageTitle);

        driver.close();
    }
}
When we execute this code, it will load the application under test on the Firefox browser, but with the user agent setting for iPhone. Next, it will fetch the page title and print it to the console.
Conclusion
When we want to check a webpage on a particular browser without installing it or using the actual device, we can make use of user agents. Selenium WebDriver provides the ChromeOptions and FirefoxOptions classes, with which we can send the user agent information to the web application under test. This provides the experience of testing the webpage on mobile browsers from the local browser.
Example and Exercise Files
There are two Visual Studio solutions and one XML file containing Caché class definitions associated with this tutorial:
PhonebookSoln — This solution contains the completed code for the tutorial projects. The solution contains two projects: PhonebookObj (Part II of the tutorial) and Phonebook (Part III of the tutorial). To build and execute these projects, complete the following steps:
Using Studio, install, compile, and populate the Caché data classes. Click here for instructions.
In Visual Studio, set the startup project by clicking Project –> Set as StartUp Project. Do this for either the PhonebookObj or Phonebook project, depending on which one you are working on.
In Visual Studio, add a reference to the Caché Managed Provider. See Part I for instructions.
For PhonebookObj, use CacheNetWizard to create .NET proxies for the Caché data classes. See Part I for instructions.
In Visual Studio, add the appropriate namespace, user, and pwd information to the configuration file (Phonebook.exe.config or PhonebookObj.exe.config).
In Visual Studio, build the project by clicking Build –> Compile.
In Visual Studio, launch the project by clicking either Debug –> Start Debugging or Debug –> Start Without Debugging.
Phonebook — This solution contains the skeleton code for the tutorial projects. The solution contains two Visual Studio projects: PhonebookObj (Part II) and Phonebook (Part III).
Contacts.xml — This file contains the Caché class definitions for the tutorial projects. Click here for instructions on installing and populating the Caché classes.
I'm trying to plot, using imshow(), only the middle hundred rows of an image I have. I was wondering if there are any numpy commands that can slice only the middle hundred rows of my image's array. If not, can I use some variation on imshow() itself to be able to select and show only the middle hundred rows?
What you are looking for is:

pic[np.shape(pic)[0]//2-50 : np.shape(pic)[0]//2+50, np.shape(pic)[1]//2-50 : np.shape(pic)[1]//2+50]

(Note the integer division //, so that the slice indices stay integers in Python 3.)
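If you want only the middle hundred rows while keeping every column, the same slicing idea applies to just the first axis; a small sketch (using a random array as a stand-in for your image):

```python
import numpy as np

pic = np.random.rand(300, 300)          # stand-in for your image array

mid = pic.shape[0] // 2                 # center row index
middle_rows = pic[mid - 50 : mid + 50]  # 100 middle rows, all columns

print(middle_rows.shape)                # (100, 300)
```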
example code:
import numpy as np
import matplotlib.pyplot as plt

pic = np.random.rand(300, 300)

fig1 = plt.figure()
fig1.suptitle('Full image')
plt.imshow(pic)

cropped = pic[np.shape(pic)[0]//2-50 : np.shape(pic)[0]//2+50,
              np.shape(pic)[1]//2-50 : np.shape(pic)[1]//2+50]
fig2 = plt.figure()
fig2.suptitle('middle 100 rows and columns cropped')
plt.imshow(cropped)

plt.show()
Result:
Hi, I have been set an assignment for university which involves me creating a "basic" applet using Java. I have been going at this for several hours now and can't seem to make any progress. The main problem is displaying the text which I type into the textfield within the applet. Here is the basic outlines for what I have to do, any help would be greatly appreciated:
Have a Textfield entry component which processes ActionEvents.
In the actionPerformed() the string must be placed into a character array.
The string must be displayed in the applet in the color blue. However to count down on plagarism, the following proviso holds: If the string typed in contains the last letter of your first name then that particular letter should always be displayed in red, and if the string contains the last letter of your surname then that letter should always be displayed in green.
This link shows an example of what I have to create:
Here is the code I have so far:
Code Java:
import java.awt.Graphics;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.JTextField;

public class MainClass extends JPanel implements ActionListener {

    JTextField jtf = new JTextField(15);

    public MainClass() {
        add(jtf);
        jtf.addActionListener(this);
    }

    // Show text when user presses ENTER.
    public void actionPerformed(ActionEvent ae) {
        System.out.println(jtf.getText());
    }

    public static void main(String[] args) {
        JFrame frame = new JFrame();
        frame.getContentPane().add(new MainClass());
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setSize(200, 200);
        frame.setVisible(true);
    }
}
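One possible direction for the coloring requirement (a sketch, not the assignment solution; 'k' and 'm' are placeholder trigger letters — substitute the last letters of your own first name and surname): map each character to a color with a small helper, then paint the character array one character at a time inside a paint method, advancing by each character's width.

```java
import java.awt.Color;
import java.awt.FontMetrics;
import java.awt.Graphics;

public class ColoredText {

    // Placeholder trigger letters; substitute the last letters of your own names
    static final char FIRST_NAME_LETTER = 'k';
    static final char SURNAME_LETTER = 'm';

    // Pure helper: decide the color for a single character
    public static Color colorFor(char c) {
        if (c == FIRST_NAME_LETTER) return Color.RED;
        if (c == SURNAME_LETTER) return Color.GREEN;
        return Color.BLUE;
    }

    // Called from paintComponent: draw the char array one character at a time,
    // advancing x by each character's width so the colors line up
    public static void drawColored(Graphics g, char[] chars, int x, int y) {
        FontMetrics fm = g.getFontMetrics();
        for (char c : chars) {
            g.setColor(colorFor(c));
            g.drawString(String.valueOf(c), x, y);
            x += fm.charWidth(c);
        }
    }
}
```

In actionPerformed you would store jtf.getText().toCharArray() in a field and call repaint(), then have paintComponent call drawColored with that array.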
Thanks.
Image Segmentation DeepLabV3 on iOS¶
Reviewed by: Jeremiah Chung

Semantic image segmentation models are useful in applications such as autonomous driving and scene understanding.
In this tutorial, we will provide a step-by-step guide on how to prepare and run the PyTorch DeepLabV3 model on iOS, taking you from the beginning of having a model you may want to use on iOS to the end of having a complete iOS app using the model. We will also cover practical and general tips on how to check if your next favorite pre-trained PyTorch models can run on iOS, and how to avoid pitfalls.
Note
Before going through this tutorial, you should check out PyTorch Mobile for iOS and give the PyTorch iOS HelloWorld example app a quick try. This tutorial will go beyond the image classification model, usually the first kind of model deployed on mobile. The complete code repo for this tutorial is available here.
Learning Objectives¶
In this tutorial, you will learn how to:
- Convert the DeepLabV3 model for iOS deployment.
- Get the output of the model for the example input image in Python and compare it to the output from the iOS app.
- Build a new iOS app or reuse an iOS example app to load the converted model.
- Prepare the input into the format that the model expects and process the model output.
- Complete the UI, refactor, build and run the app to see image segmentation in action.
Steps¶
1. Convert the DeepLabV3 model for iOS deployment¶
The first step to deploying a model on iOS is to convert the model into the TorchScript format.
Note
Not all PyTorch models can be converted to TorchScript at this time because a model definition may use language features that are not in TorchScript, which is a subset of Python. See the Script and Optimize Recipe for more details.
Simply run the script below to generate the scripted model deeplabv3_scripted.pt:
import torch

# use deeplabv3_resnet50 instead of deeplabv3_resnet101 to reduce the model size
model = torch.hub.load('pytorch/vision:v0.8.0', 'deeplabv3_resnet50', pretrained=True)
model.eval()

scriptedm = torch.jit.script(model)
torch.jit.save(scriptedm, "deeplabv3_scripted.pt")
The size of the generated deeplabv3_scripted.pt model file should be around 168MB. Ideally, a model should also be quantized for significant size reduction and faster inference before being deployed on an iOS app. To have a general understanding of quantization, see the Quantization Recipe and the resource links there. We will cover in detail how to correctly apply a quantization workflow called Post Training Static Quantization to the DeepLabV3 model in a future tutorial or recipe.
2. Get example input and output of the model in Python¶
Now that we have a scripted PyTorch model, let’s test with some example inputs to make sure the model works correctly on iOS. First, let’s write a Python script that uses the model to make inferences and examine inputs and outputs. For this example of the DeepLabV3 model, we can reuse the code in Step 1 and in the DeepLabV3 model hub site. Add the following code snippet to the code above:
from PIL import Image
from torchvision import transforms

input_image = Image.open("deeplab.jpg")
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
input_tensor = preprocess(input_image)
input_batch = input_tensor.unsqueeze(0)
with torch.no_grad():
    output = model(input_batch)['out'][0]

print(input_batch.shape)
print(output.shape)
Download deeplab.jpg from here and run the script above to see the shapes of the input and output of the model:
torch.Size([1, 3, 400, 400])
torch.Size([21, 400, 400])
So if you provide the same image input deeplab.jpg of size 400x400 to the model on iOS, the output of the model should have the size [21, 400, 400]. You should also print out at least the beginning parts of the actual data of the input and output, to be used in Step 4 below to compare with the actual input and output of the model when running in the iOS app.
3. Build a new iOS app or reuse an example app and load the model¶
First, follow Step 3 of the Model Preparation for iOS recipe to use our model in an Xcode project with PyTorch Mobile enabled. Because both the DeepLabV3 model used in this tutorial and the MobileNet v2 model used in the PyTorch HelloWorld iOS example are computer vision models, you may choose to start with the HelloWorld example repo as a template to reuse the code that loads the model and processes the input and output.
Now let’s add deeplabv3_scripted.pt and deeplab.jpg used in Step 2 to the Xcode project and modify ViewController.swift to resemble:
class ViewController: UIViewController {
    var image = UIImage(named: "deeplab.jpg")!

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    private lazy var module: TorchModule = {
        if let filePath = Bundle.main.path(forResource: "deeplabv3_scripted", ofType: "pt"),
            let module = TorchModule(fileAtPath: filePath) {
            return module
        } else {
            fatalError("Can't load the model file!")
        }
    }()
}
Then set a breakpoint at the line return module and build and run the app. The app should stop at the breakpoint, meaning that the scripted model in Step 1 has been successfully loaded on iOS.
4. Process the model input and output for model inference¶
After the model loads in the previous step, let’s verify that it works with expected inputs and can generate expected outputs. As the model input for the DeepLabV3 model is an image, the same as that of the MobileNet v2 in the HelloWorld example, we will reuse some of the code in the TorchModule.mm file from HelloWorld for input processing. Replace the predictImage method implementation in TorchModule.mm with the following code:
- (unsigned char*)predictImage:(void*)imageBuffer {
    // 1. the example deeplab.jpg size is 400x400 and there are 21 semantic classes
    const int WIDTH = 400;
    const int HEIGHT = 400;
    const int CLASSNUM = 21;
    at::Tensor tensor = torch::from_blob(imageBuffer, {1, 3, WIDTH, HEIGHT}, at::kFloat);
    torch::autograd::AutoGradMode guard(false);
    at::AutoNonVariableTypeMode non_var_type_mode(true);

    // 2. convert the input tensor to an NSMutableArray for debugging
    float* floatInput = tensor.data_ptr<float>();
    if (!floatInput) {
        return nil;
    }
    NSMutableArray* inputs = [[NSMutableArray alloc] init];
    for (int i = 0; i < 3 * WIDTH * HEIGHT; i++) {
        [inputs addObject:@(floatInput[i])];
    }

    // 3. the output of the model is a dictionary of string and tensor, as
    // specified at
    auto outputDict = _impl.forward({tensor}).toGenericDict();

    // 4. convert the output to another NSMutableArray for easy debugging
    auto outputTensor = outputDict.at("out").toTensor();
    float* floatBuffer = outputTensor.data_ptr<float>();
    if (!floatBuffer) {
        return nil;
    }
    NSMutableArray* results = [[NSMutableArray alloc] init];
    for (int i = 0; i < CLASSNUM * WIDTH * HEIGHT; i++) {
        [results addObject:@(floatBuffer[i])];
    }
    return nil;
}
Note
The model output is a dictionary for the DeepLabV3 model so we use toGenericDict to correctly extract the result. For other models, the model output may also be a single tensor or a tuple of tensors, among other things.
With the code changes shown above, you can set breakpoints after the two for loops that populate inputs and results and compare them with the model input and output data you saw in Step 2 to see if they match. For the same inputs to the models running on iOS and Python, you should get the same outputs.
All we have done so far is to confirm that the model of our interest can be scripted and run correctly in our iOS app as in Python. The steps we walked through so far for using a model in an iOS app consumes the bulk, if not most, of our app development time, similar to how data preprocessing is the heaviest lift for a typical machine learning project.
5. Complete the UI, refactor, build and run the app¶
Now we are ready to complete the app and the UI to actually see the processed result as a new image. The output processing code should be like this, added to the end of the code snippet in Step 4 in TorchModule.mm - remember to first remove the line return nil; temporarily put there to make the code build and run:
// see the 20 semantic classes link in Introduction
const int DOG = 12;
const int PERSON = 15;
const int SHEEP = 17;

NSMutableData* data = [NSMutableData dataWithLength: sizeof(unsigned char) * 3 * WIDTH * HEIGHT];
unsigned char* buffer = (unsigned char*)[data mutableBytes];
// go through each element in the output of size [WIDTH, HEIGHT] and
// set different color for different classnum
for (int j = 0; j < WIDTH; j++) {
    for (int k = 0; k < HEIGHT; k++) {
        // maxi: the index of the 21 CLASSNUM with the max probability
        int maxi = 0, maxj = 0, maxk = 0;
        float maxnum = -100000.0;
        for (int i = 0; i < CLASSNUM; i++) {
            if ([results[i * (WIDTH * HEIGHT) + j * WIDTH + k] floatValue] > maxnum) {
                maxnum = [results[i * (WIDTH * HEIGHT) + j * WIDTH + k] floatValue];
                maxi = i; maxj = j; maxk = k;
            }
        }
        // note: WIDTH here, not a lowercase width variable, which is undefined
        int n = 3 * (maxj * WIDTH + maxk);
        // color coding for person (red), dog (green), sheep (blue)
        // black color for background and other classes
        buffer[n] = 0; buffer[n+1] = 0; buffer[n+2] = 0;
        if (maxi == PERSON) buffer[n] = 255;
        else if (maxi == DOG) buffer[n+1] = 255;
        else if (maxi == SHEEP) buffer[n+2] = 255;
    }
}
return buffer;
The implementation here is based on the understanding of the DeepLabV3 model which outputs a tensor of size [21, width, height] for an input image of width*height. Each element in the width*height output array is a value between 0 and 20 (for a total of 21 semantic labels described in Introduction) and the value is used to set a specific color. Color coding of the segmentation here is based on the class with the highest probability, and you can extend the color coding for all classes in your own dataset.
After the output processing, you will also need to call a helper function to convert the RGB buffer to an UIImage instance to be shown on UIImageView. You can refer to the example code convertRGBBufferToUIImage defined in UIImageHelper.mm in the code repo.
The UI for this app is also similar to that for HelloWorld, except that you do not need the UITextView to show the image classification result. You can also add two buttons Segment and Restart as shown in the code repo to run the model inference and to show back the original image after the segmentation result is shown.
The last step before we can run the app is to connect all the pieces together. Modify the ViewController.swift file to use the predictImage, which is refactored and changed to segmentImage in the repo, and helper functions you built as shown in the example code in the repo in ViewController.swift. Connect the buttons to the actions and you should be good to go.
Now when you run the app on an iOS simulator or an actual iOS device, you will see the following screens:
Recap¶
In this tutorial, we described what it takes to convert a pre-trained PyTorch DeepLabV3 model for iOS and how to make sure the model can run successfully on iOS. Our focus was to help you understand the process of confirming that a model can indeed run on iOS. The complete code repo is available here.
More advanced topics such as quantization and using models via transfer learning or of your own on iOS will be covered soon in future demo apps and tutorials. | https://pytorch.org/tutorials/beginner/deeplabv3_on_ios.html | CC-MAIN-2021-31 | refinedweb | 1,894 | 58.72 |
{-# LANGUAGE FlexibleInstances, MultiParamTypeClasses #-}
-----------------------------------------------------------------------------
-- |
-- Module      :  XMonad.Layout.OnHost
-- Copyright   :  (c) Brandon S Allbery, Brent Yorgey
-- License     :  BSD-style (see LICENSE)
--
-- Maintainer  :  <allbery.b@gmail.com>
-- Stability   :  unstable
-- Portability :  unportable
--
-- Configure layouts on a per-host basis: use layouts and apply
-- layout modifiers selectively, depending on the host.  Heavily based on
-- "XMonad.Layout.PerWorkspace" by Brent Yorgey.
-----------------------------------------------------------------------------

module XMonad.Layout.OnHost (-- * Usage
                             -- $usage
                             OnHost
                            ,onHost
                            ,onHosts
                            ,modHost
                            ,modHosts
                            ) where

import XMonad
import qualified XMonad.StackSet as W
import XMonad.Layout.LayoutModifier
import Data.Maybe (fromMaybe)
import System.Posix.Env (getEnv) -- 'System.Posix.Env.

-- | Specify one layout to use on a particular host, and another
--   to use on all others.  The second layout can be another call to
--   'onHost', and so on.
onHost :: (LayoutClass l1 a, LayoutClass l2 a)
       => String  -- ^ the name of the host to match
       -> (l1 a)  -- ^ layout to use on the matched host
       -> (l2 a)  -- ^ layout to use everywhere else
       -> OnHost l1 l2 a
onHost host = onHosts [host]

-- | Specify one layout to use on a particular set of hosts, and
--   another to use on all other hosts.
onHosts :: (LayoutClass l1 a, LayoutClass l2 a)
        => [String]  -- ^ names of hosts to match
        -> (l1 a)    -- ^ layout to use on matched hosts
        -> (l2 a)    -- ^ layout to use everywhere else
        -> OnHost l1 l2 a
onHosts hosts l1 l2 = OnHost hosts False l1 l2

-- | Specify a layout modifier to apply on a particular host; layouts
--   on all other hosts will remain unmodified.
modHost :: (LayoutClass l a)
        => String                          -- ^ name of the host to match
        -> (l a -> ModifiedLayout lm l a)  -- ^ the modifier to apply on the matching host
        -> l a                             -- ^ the base layout
        -> OnHost (ModifiedLayout lm l) l a
modHost host = modHosts [host]

-- | Specify a layout modifier to apply on a particular set of
--   hosts; layouts on all other hosts will remain
--   unmodified.
modHosts :: (LayoutClass l a)
         => [String]                        -- ^ names of the hosts to match
         -> (l a -> ModifiedLayout lm l a)  -- ^ the modifier to apply on the matching hosts
         -> l a                             -- ^ the base layout
         -> OnHost (ModifiedLayout lm l) l a
modHosts hosts f l = OnHost hosts False (f l) l

-- | Structure for representing a host-specific layout along with
--   a layout for all other hosts. We store the names of hosts
--   to be matched, and the two layouts. We save the layout choice in
--   the Bool, to be used to implement description.
data OnHost l1 l2 a = OnHost [String] Bool (l1 a) (l2 a)
  deriving (Read, Show)

instance (LayoutClass l1 a, LayoutClass l2 a, Show a) => LayoutClass (OnHost l1 l2) a where
    runLayout (W.Workspace i p@(OnHost hosts _ lt lf) ms) r = do
      h <- io $ getEnv "HOST"
      if maybe False (`elemFQDN` hosts) h
        then do (wrs, mlt') <- runLayout (W.Workspace i lt ms) r
                return (wrs, Just $ mkNewOnHostT p mlt')
        else do (wrs, mlt') <- runLayout (W.Workspace i lf ms) r
                return (wrs, Just $ mkNewOnHostF p mlt')

    handleMessage (OnHost hosts bool lt lf) m
      | bool      = handleMessage lt m >>= maybe (return Nothing) (\nt -> return . Just $ OnHost hosts bool nt lf)
      | otherwise = handleMessage lf m >>= maybe (return Nothing) (\nf -> return . Just $ OnHost hosts bool lt nf)

    description (OnHost _ True  l1 _)  = description l1
    description (OnHost _ _     _ l2)  = description l2

-- | Construct new OnHost values with possibly modified layouts.
mkNewOnHostT :: OnHost l1 l2 a -> Maybe (l1 a) -> OnHost l1 l2 a
mkNewOnHostT (OnHost hosts _ lt lf) mlt' =
    (\lt' -> OnHost hosts True lt' lf) $ fromMaybe lt mlt'

mkNewOnHostF :: OnHost l1 l2 a -> Maybe (l2 a) -> OnHost l1 l2 a
mkNewOnHostF (OnHost hosts _ lt lf) mlf' =
    (\lf' -> OnHost hosts False lt lf') $ fromMaybe lf mlf'

-- | 'Data.List.elem' except that if one side has a dot and the other doesn't, we truncate
--   the one that does at the dot.
elemFQDN :: String -> [String] -> Bool
elemFQDN _  []     = False
elemFQDN h0 (h:hs)
  | h0 `eqFQDN` h = True
  | otherwise     = elemFQDN h0 hs

-- | String equality, possibly truncating one side at a dot.
eqFQDN :: String -> String -> Bool
eqFQDN a b
  | '.' `elem` a && '.' `elem` b = a == b
  | '.' `elem` a                 = takeWhile (/= '.') a == b
  | '.' `elem` b                 = a == takeWhile (/= '.') b
  | otherwise                    = a == b
How To Write Python Comments: The 2020 Guide
For programmers, it is important that your code is easily understood by outside users.
While professionals might not find it difficult to understand, for someone who is trying to learn Python, including comments in the code can be very helpful.
I have seen many bad practices within Python comments over the years.
In this article, I will explain the best way to write Python comments for developers in 2020 (and beyond!).
What Are Python Comments?
Comments can be understood as lines of code that allow a layman to read and understand the code.
These lines are skipped by the compilers and interpreters when the code is executed. Said differently, code instructs computers, while comments instruct humans.
There are many reasons why developers include comments in their code.
Depending on the length of the program or the purpose of it, comments can be used to make notes for a reader or yourself. Sometimes they are also used to help another programmer understand your code.
It is always a good idea to include comments in your code when writing new code or updating an old one because you might forget your thought process later on.
Understanding the Syntax
Python comments start with a hash sign (#) and a whitespace character. Anything written after that, until the end of the line, counts as a comment.
For instance:
# this is a comment
Comments do not appear on the screen when you run a program because they do not execute. They appear in the source code for humans to get a better understanding of it. The computer, however, does not execute it.
Have a look at the code below.
print ("This will show on the screen") # This will not run
When the above code is executed, you will see the output "This will show on the screen". Everything else will be ignored.
You can make comments anywhere in the code followed by a hash mark. However, it is suggested that comments be made at the same indent as the code that the comment is referencing.
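For instance, a comment describing an indented block sits at that block's indent level:

```python
def greet(name):
    # Indented to match the code it describes
    return "Hello, " + name + "!"

print(greet("World"))
```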
A Simple Use of Python Comments in the “Hello World” Program
The first program you ever make when learning a new language is “Hello, World!”.
Here’s how you write the code for it in Python along with comments.
# Print "Hello, World!" to console print ("Hello, World!")
Let's work through another example.
Consider a for loop in Python that iterates over a list of items. To explain the program's functionality in the code, you could write the following:
# Define fruits variable as a list of strings
fruits = ['apple', 'mango', 'banana', 'grapes', 'strawberry', 'orange']

# Print each item in the list
for fruit in fruits:
    print(fruit)
Python Multiline Comments
The above examples were for single-line comments. There will also be times when you will need to include longer Python comments that cover multiple lines. These are called multiline comments.
Unlike C or Java, there is no syntax to write multiline comments in Python:
/* If you do this in Python
   it will raise a SyntaxError */
However, all is not lost. While there is no native multiline commenting functionality in Python, there are two workarounds that you can take advantage of.
The first method is to simply press the Enter key after each line and add hashmark at the beginning of the new line:
# This is a good example of
# how you can include a multiline
# comment in your Python code
Because each line beginning with a hash mark is ignored by the program, this is perhaps the most logical way to include multiline comments in your code. However, it is not the method that I use.
There is another strategy that you can do to write comments that go on for multiple lines. Namely, you can wrap your entire comment inside a set of triple quotes, as shown below:
""" If you hate to type so many hash marks at the beginning of each line, you can just wrap it in triple quotes like this instead """
For anyone that has familiarity with Java programming, this is very similar to comments in Java. However, you must know that it, technically, is not a comment.
Instead, multiline comments created with the """ operator are strings that are not assigned to any variable; they are therefore never called or referenced by the program when it runs. Such a string does not appear in the bytecode, so it functions effectively as a comment.
At the same time, you need to be careful where you place multiline comments created with triple-quotes in your program. You might turn them into docstrings if placed incorrectly, as shown in the example below:
def my_function(): """Demonstrate docstrings""" return None print "Using __doc__:" print my_function.__doc__
The output, in this case, will be:
Using __doc__:
Demonstrate docstrings
This is because a triple-quoted string placed directly below a class, method, or function declaration becomes a docstring. Docstrings are used to store documentation in Python modules, functions, classes, and methods.

Choosing between the # character and the """ characters when creating multiline comments can be confusing. I have one recommendation: if you are doubtful, just put a hash mark before the beginning of each comment line to avoid any conflicts with docstrings in your program.
Why Write Python Comments?
Commenting is unquestionably a significant part of any large Python application.
Now that you know how to write simple and complex comments while keeping them separate from docstrings, I wanted to discuss when and why to use Python comments in your software development.
Understanding Your Own Code
Programmers forget what their code does all the time, especially when codes are long, and deadlines are tight. I am personally very guilty of this!
What is the source of this problem?
There may be times when you did not name your variables properly because you were in a rush. I run into this problem regularly, especially when I'm in the development stage of a new project.
Separately, the complex logic of a mature software application can be confusing if you haven't recently worked on it - even if all the variables have been named properly.
Not including any Python comments to tell you "what is what" or "what does what" can be a nightmare. Completing a project and going back to include comments in it never really works because, by the time you are done developing, you're usually ready to move on to the next project. Those comments never get included.
Therefore, to make sure that your code is understood by you (and others!) at any point in the future, the best way is to comment as you go.
Helping Others Understand your Code
Imagine working on a code that turns out to be more than 20,000 lines and you have to collaborate with other developers to complete it. Comments are vital for any new collaborators to understand what has already been completed.
In a properly-commented application, new developers can skim through your code and use its comments to understand what you have written and how it works. It can also help to ensure a smooth transition if you need to hand over the project to another developer completely.
What NOT to do When Writing Python Comments
While there is no limitation to how you write comments, there are some practices that must be avoided at all costs to make your code easy to understand while not wasting much of your time writing them. We'll discuss these next.
Do Not Repeat Yourself
The main function of comments is to explain something that is not obvious. In any software application, there are certain functions that are self-explanatory. You should avoid including comments explaining obvious concepts.
I am guilty of wanting to over-comment my code on some occasions. Because of this, I think a blatant example is helpful for understanding when you should not include comments in your code:
return a # Returns a
It is very clear in the above line of code that a is returned. Stating the same thing in the comment makes it redundant and wastes the time of the programmer as well as the reader. It also makes the code less readable.
If while writing the code, you write some comments for your own help, make sure you go back and delete them when code is running properly.
Stay Away From Smelly Comments
Smelly comments are defined as comments that hint at a deeper problem with the code. Comments are supposed to support your code and not explain it. Said differently, smelly comments are those that explain how your code works, instead of why it is performing a certain task.
Why are smelly comments bad?
An unnecessary explanation of the code signifies that it is trying to mask any underlying issue and there are good chances that code is written poorly. No amount of commenting can fix that and it will only make the code either buggy or bulky.
Avoid Rude Comments
When working with other developers, you might need to rewrite the code and make changes to correct it. While doing so, here is what you must not do:
# Put this here to fix Mike's really stupid error - what an idiot...
While this might be humorous when you're in the development stage, such comments can accidentally be left in the code and shipped to production. This will look really, really bad on your part and create problems for you and within your organization.
Header Comments in Python
Python files usually starts with a few lines of comments that state the information about the project, details about the programmer, software license used for the code, and the purpose behind creating the file.
Below is an example of what that might look like.
# -----------------------------------------------------------
# demonstrates how to run a program
#
# (C) 2020 Nick McCullum, Fredericton, New Brunswick, Canada
# Released under Public License ABC
# email nicholasmccullum@gmail.com
# -----------------------------------------------------------
In larger organizations, this is helpful because it demonstrates who should be contacted if a code file begins to malfunction.
The Bottom Line
Writing comments is not and should not be a tedious task.
It helps programmers and non-technical people understand the code. Even if you revisit your code at a later date, you will not feel lost.
In this article, we demonstrated how to write Python comments properly, which is an extremely important component of your software development toolkit. Feel free to refer back to this article if you ever get stuck in the future. | https://nickmccullum.com/how-to-write-python-comments/ | CC-MAIN-2021-31 | refinedweb | 1,761 | 69.41 |
Activity ideas
From OLPC
Ideas for Sugar activities to use in the field. For more, see Summer of Code ideas, School server ideas, and the related category.
Mediawiki stuff
I have seen a fair amount of discussion of the possibility of having school-level wikipedias. The issues involved are offline browsing (static content, caching, at 2 levels: global<->school server<->xo) and editing (multilevel synchronization - a problem that probably cannot be "solved" but can be attacked). This person would need to have architectural vision and PHP skills. If there were such a proposal, I would suggest that they could spend a little extra time supporting/mentoring my Summer of Content proposal for a multilingual wiki. Homunq 12:18, 2 March 2008 (EST)
Shared wikis for projects
Some ideas:
- Keep a shared page for each collaborative project or the activity itself that is shared across a school and/or class: automatically generate pages/namespaces for class + activity + project where help notes, reports, and progress are tracked. Define how these namespaces interact across classes, schools, and at a global level on WikiEducator / Wikiversity / similar sites
- Define how to link together a set of related work into a report : linking to a project/file/record, customized to launch a specific activity via wiki markup.
- Work on the interface between MikMik and a MediaWiki server.
Alternative desktops
Interactive Desktop
Hi, this is Nishant. I would like to develop interactive desktop software which can be used to learn the system and can also act as a friend. I would like to use AI to enhance and simulate this, and I want to make the desktop like a person who can handle particular tasks assigned to it.
Mesh Networking
Tools development
As we reach increasingly large numbers of nodes participating in the mesh network testbed, it is evident that sophisticated methods and tools for monitoring, logging, and debugging will become necessary. Project deliverables include:
- Maintain our mesh network testbed
- Review different methods for controlling and monitoring large numbers of machines (control over wireless vs. control over Ethernet, stored logs vs. online logging, etc)
- Implementation of network application + GUI to remotely control, configure and analyze logs from mesh network experiments on large testbed
Visualization development
A few visualizations of the mesh have been developed so far: the default random visualization, and a roughly signal-strength based visualization that shows other XOs at a distance inversely proportional to signal strength. What other visualizations would be useful or interesting? How do these ideas scale to thousands of XOs or a number of school clusters?
Health Tools
- Design software to interact with the different health peripherals.
- Integration of other FLOSS software like OpenMRS.
- Display and interpret bio-signals (e.g. EKG, EMG)
- Help in Projects/TeleHealth Database
- Hello, my name is Chao Zhang, and I am a graduate student of computer science focused on bioinformatics. I have always paid close attention to OLPC and I am very interested in developing some health tools for OLPC. It is a good chance to contribute my knowledge to open source projects. Since I have more than 5 years of experience developing finance and enterprise management systems and more than 8 years of experience in Java programming, I am also interested in the simple financial planning tools. I hope to chat with mentors to get more details on those ideas. My email: chaozhang.mu@gmail.com
- Dear mentors,
My name is Joana Cabral and I am an enthusiastic supporter of the OLPC project. I graduated last September in Biomedical Engineering and I am currently following a PhD program in Computational Neuroscience in Barcelona. During my studies I became familiar with bio-signals, not only EKG and EMG but also more chaotic signals like EEG, so I believe I may fit your needs.
I think this kind of software will be extremely useful, especially in developing countries, where health services are sparse and insufficient and cannot afford to buy health-tools software. Nevertheless, we must underline that special care must be given to the reliability and robustness of health tools, since they may be used for diagnosis and remote monitoring of child patients.
I would like to discuss with the mentors some more ideas about this project and let you know a little more about me. Please contact me to: juanitacabral@hotmail.com
- Cheers
Code libraries
some core sugar, some xo-optimization
Pygame/sdl
Add as much support as possible using the Geode graphics processor.
xo3d
Develop the xo3d library based on work started by User:Wade. This is a flat shaded software 3D renderer with support for objects, lighting & clipping, exposed to Python. It also features a matrix and vector math library.
Publication and Journal sharing
Incorporate the Distribute activity into the shell. At the basic level this will require:
- A way to initiate a transfer (a button in the Journal, a contextual option on an object, a drag'n'drop operation)
- Notification to the receiver
- Some method to indicate progress to sender and receiver, with a way to cancel
- A journal entry for the receiver containing the resulting file
Necessary future support:
- Transparent support for interrupted connections
WebJournal Proposal
This is a proposal to build a modification to the Journal activity that provides a publication and sharing feature. This new feature will make children's productions accessible on the web.
Web Page Building Tool
This activity is related to the Networked Blogging Project above. While the focus of that project is the system-level implementation, including server-side web applications and posting to public blog tools, the focus of this piece is the client-side interface. We need a simple GUI for kids to build web pages. It should have hooks to easily post to the full tool described above. It should also have an easy way for kids to see the web page in a browser and then edit it again based on that layout. The target is an update to the existing Write application with a post-to-blog option and a view-in-browser option.
- The fundamental solution requirements are similar to those from last year:
Requiremientos_Para_XO
Specific Activities
This list of desirable activities is largely a grab-bag, meant to spark ideas. There are plenty more at Category:Software ideas. Part of the work of doing your SoC application would be to do a preliminary evaluation of existing open source options in a domain and their adaptability to OLPC. Python and/or GTK-based programs are the easiest to adapt. Also present in the platform are Javascript, C/C++ (of course), and Smalltalk (squeak). See Sugar and Developers/Stack for further info.
Applications should show serious thought about what can be achieved in the short time available. Whether you are starting from scratch or adapting an existing app, fewer well-implemented/adapted features are far preferable to many poorly-implemented ones. In either case, but especially in the case of an adaptation, a solid foundation makes it easy to add (back) in more features later.
Flash Card creator
The student and mentor would evaluate open source flash card programs together, and then either port or adapt one to the XO. The flash card program would be developed with a Sugar-specific UI and features. It would feature one of the well-known flash card memorization algorithms for tracking student progress through each deck of cards. (see Drill and test software)
- Hello, my name is Jon Volkman, and I am fascinated about working on the creation of an open source flash card program for XO. I have several preliminary ideas, and find what research (and efforts on other platforms) that have been put into this concept very intriguing.
- Hi, my name is Adam Goldstein and I recently applied and submitted what thoughts I have for writing a flexible and complete flash card creator for XO. I would really like to take what lessons I've learned from both making and using various programs on multiple platforms to develop a solid tool for study. I'm very excited to explore implementation possibilities and would really enjoy a chance to discuss.
- Hi, my name is Urko Fernandez and I've recently started coding a new activity to create and use flashcards. I wanted to give it a constructivist touch, so I've added tagging to cards, and I'm trying to encourage the user to create, rewrite, or reformulate the given questions and answers in her own words, and let some sort of popularity decide which is the best way to ask and answer a card. I'm going to use Leitner's system, but as flashcards are related according to their tags and the content of their answers, I may insert an apparently unrelated question from time to time.
- More info: Code is already on the git server: . Tturk (2008)
- There was lots of interest in this last year, but no completed projects. This would be a great type of project to pick up this SOC, and I'd be glad to mentor such a project. --Sj talk 12:49, 16 March 2009 (UTC)
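One of the "well known memorization algorithms" mentioned above is the Leitner system: a card moves up a box when answered correctly and drops back to box 1 when missed, and higher boxes are reviewed less often. A minimal sketch, with illustrative class and method names (not code from any existing activity):

```python
class LeitnerDeck:
    """Tracks flash cards across numbered boxes (box 1 = reviewed most often)."""

    def __init__(self, cards, boxes=3):
        self.boxes = boxes
        # Every card starts in box 1.
        self.box_of = {card: 1 for card in cards}

    def record_answer(self, card, correct):
        # A correct answer promotes the card one box (capped at the top);
        # a wrong answer demotes it all the way back to box 1.
        if correct:
            self.box_of[card] = min(self.box_of[card] + 1, self.boxes)
        else:
            self.box_of[card] = 1

    def due_cards(self, session):
        # Box n is reviewed every 2**(n-1) sessions: box 1 every session,
        # box 2 every other session, box 3 every fourth session.
        return [c for c, box in self.box_of.items()
                if session % (2 ** (box - 1)) == 0]
```

With this scheduling rule, a card answered correctly twice is only quizzed every fourth session, which keeps practice focused on the weakest cards.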
Master Mind (game)
Implement the well known board game:
- Hello, my name is Ian, and I am interested in working on this project, as it seems similar to other projects I have worked on. I would be interested in learning more about this project in detail if possible. Please email me at imperialisthobo@yahoo.com.
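The core of a Mastermind implementation is the feedback rule: count exact matches ("black pegs") first, then count colour-only matches ("white pegs") among the remaining positions. A possible sketch in Python (not tied to any existing activity code):

```python
from collections import Counter

def score_guess(secret, guess):
    """Return (black, white) peg counts for a Mastermind guess."""
    # Black pegs: right colour in the right position.
    black = sum(s == g for s, g in zip(secret, guess))
    # White pegs: right colour, wrong position. Count colour overlaps
    # between the two codes, then subtract the exact matches.
    overlap = sum((Counter(secret) & Counter(guess)).values())
    return black, overlap - black
```

For example, score_guess("RGBY", "RYGB") yields (1, 3): one colour is placed correctly and the other three appear elsewhere in the code.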
Typing Tutor
A game-like typing tutor activity would be developed by the student. Existing open source projects would be evaluated for ideas. Features would include adapting to student progress, support for all XO keymaps (take a look at Keyboard#Languages_other_than_English for information on supported keyboard layouts) and written languages, progress tracking graphs, the ability to locally customize the program, etc.
See related 2008 work from: Prakhar, called LetsType, and from K. Scheppke, both working with wade.
- Hello, my name is Kelly. I'm most interested in writing a game to teach basic numbers and math. I'm also interested in this typing tutorial, and I'm willing to work wherever you think I'd be most useful. I know Scheme, C, and Python; I'm familiar with lex and yacc. Mentors please email me at kekenned@gmail.com
- Hello, I am Shree Kant. I'm interested in writing a typing tutor based on video and audio; I have been thinking about something of this sort for quite a long time and would like to do it this summer for OLPC. Email me at shreekantbohra@gmail.com
Finance
The student would develop a simple financial planning program, basically the simplest possible version of Quicken. It would provide a simple income / expense register, monthly tracking, budget planning, expense & income categories, and a loan calculator.
This activity idea came from a request by the Nepal deployment.
- I have put up the detailed proposal here, and the abstract can be found here -dkd903 : dkd903@gmail.com
- I'm interested in creating an easy to use finance program in java or python, although I wouldn't mind expanding its functionality to more than is described here. Trying to look for OLPC GSoC related items in IRC to no avail, but my email is mpoon@mit.edu. -mpoon
- My name is David Wong and I am an undergraduate at U.C. Berkeley studying Business administration and EECS. I would love to create a financial accounting program! I could write the program in Java, C, or even Scheme. Please email more details to david_wong@berkeley.edu.
- My name is Tamil and I am currently a Masters student at Georgia Institute of Technology. I have created rudimentary fuel economy and optimization programs in FORTRAN. I also know Java and C++ and am currently teaching myself Python. I am an active member of my school's Investments and Finance Club, so I am astute in all things financial (plus I use Quicken on a regular basis). I would love to help out with OLPC on this, and I do have other ideas. Please contact me at tamil@gatech.edu.
- I'm Sergiu, and I am an undergraduate at the Gheorghe Asachi Technical University of Iasi, in Romania. The financial system I would like to build for you will be web based (PHP or Java, your choice), so the same system can be accessed from any location. Also, no matter which language is used for the web part, I can build a desktop application for accessing the database from the server, thus including the programs other students might develop. Security won't be a problem, as I have developed applications verified and signed by VeriSign, one of the world's best-known authorities in security.
- Please contact me at dogaru.sergiu@zrgiu.com, or zrgiu@yahoo.com
System Dynamics modeling
Create a system dynamics (SD) model editor and simulation engine, expanding on the current OpenSim work. The advantages to this would be:
- Models are visual and mathematical representations of a system, which allows for a different form of visual programming than that of Turtle Art
- SD is used in international development planning and to teach systems thinking to K-12 students
- Simulation engine could be accessible from other programs, like Micropolis
- Programs like Micropolis could have their core logic in SD models and access it through the simulation engine, allowing people to switch to a visual representation of the program logic to understand and change it.
-Bobby Powers (potential mentor)
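The heart of a system dynamics engine is small: stocks are accumulated by integrating their net flows over time. A toy simulation loop using Euler integration (a sketch only; the data layout and names are illustrative, and OpenSim's actual engine differs):

```python
def simulate(stocks, flows, dt=0.25, steps=40):
    """Integrate stock levels forward with Euler's method.

    stocks: dict of stock name -> initial level
    flows:  dict of stock name -> function(state) returning net inflow rate
    """
    state = dict(stocks)
    history = [dict(state)]
    for _ in range(steps):
        # Evaluate every rate against the *same* state, then update,
        # so dict ordering does not affect the result.
        rates = {name: rate(state) for name, rate in flows.items()}
        for name, r in rates.items():
            state[name] += r * dt
        history.append(dict(state))
    return history

# Example: exponential population growth, dP/dt = 0.1 * P
run = simulate({"population": 100.0},
               {"population": lambda s: 0.1 * s["population"]})
```

A visual editor would generate the stock and flow definitions, and other activities (like Micropolis) could call the same engine with their own models.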
Inferno
There's a variety of work that is left to be done in Inferno on OLPC, things that might be best done by a student include:
- fontfs - mapping OLPC fonts to inferno native fonts
- metafs - mapping file system permissions to OLPC model
- camera/audio support
- new window manager for Inferno which better matches OLPC paradigm
- integration with OLPC collaboration framework
- integration with OLPC internationalization mechanisms
- edutainment applications written in Limbo for OLPC
GIS activity for XO
Engineers Without Borders, Timepedia, and the International Symposium on Digital Earth want to work with OLPC to create community-based mapping data collection systems that will feed into global mapping and analysis projects, which will then feed back to the children and their communities. Environment, health, agriculture...
Deducto
Deducto is a board game based on pattern recognition, initially developed by Walter Bender at the MIT Media Lab using the Perl language. The game has been re-written in Python by the founding members of Open Source Community-NSIT, India. An HTML version of the game is available at [Deducto]. The project is also available at [GIT Repository]. Areas for development include adding a feature where users can generate their own levels of the game, UI development, and a re-design using PyGTK.
Food Force Project
World Food Programme's Food Force Project [Windows and Mac] to be re-designed for the Sugar environment. Please visit Food_Force/Design_Document.
PlayGo
Go is a great game which promotes connectivity and cultural exchange, not to mention critical thinking. The PlayGo activity ([1]) has already begun an implementation. It would be nice to bring this project into phases 2 and 3.
- I am very interested in this particular project. My name is Brandon Wilson, and if I could be of help in this project please contact me at [2].
- My name is Artem Kaznatcheev and I am also interested in this project. I was curious if you desired future development to follow the "PlayGo" phase 2 and 3 goals exactly, or if we could expand and split from there; some ideas could include: AI opponents (GnuGo, AnyGo and other open source players), "learning" mode, "puzzle" mode, and variants of Go (Zen Go, etc). Any information would be welcome.
ANN - Artificial Neural Networks
ANN is an activity where children can design, build, and test artificial neural networks (ANNs). Each 'experiment' will have a particular task ranging in difficulty from switching on and off a light to controlling a paddle in a game of pong. Children will design and build an ANN that they can then test in a simulated environment. If you have any comments, please contact me at bjgraham@udel.edu
Hi, this is Smriti Garg. I am a student doing a Masters of Science in Computer Applications and am currently exploring ANNs, so I would love to help out with this. As far as experience is concerned, I have developed a few web applications and have good hands-on experience with Java. Here is my e-mail id: smritigarg87@gmail.com
Puzzles
Jigsaw puzzle. play and share.
- Hi, I am Omar Arana. I study computing science. I like developing 3D games and animations. I have experience developing applications of this kind, cross-platform for Windows/Linux. Here you have a sample of what I can do: I've used SDL before. I'd love to create "Jigsaw Puzzles". I have also made 3D graphics using only basic 2D graphic primitives. Mentors, please email me at ao_indy@yahoo.com
- Hi, I'm Omar Mestas. I've been interested in developing educational games for children, since childhood is the best time to learn. I am good with languages like C/C++ and Delphi/Kylix. I'd like to create 3D games that catch children's eyes and help the learning process. I can also work with tools related to graphics development. I'm good at working in groups and under pressure, so I can complete my aims. Please contact me at my mail: omar_23@hotmail.com
- Hello, I am Smriti Garg, currently a student. I have worked on some web development projects and am quite interested in putting effort into game development, so I would love to work on something like this. I am doing a Masters of Science in Computer Applications, and I am sure that any project I take in hand will turn out as expected.
Water Wonders
A game targeted at 8 – 12 year old children. The objective of the game is to teach children about how pollution, global warming and mismanagement of the water supply is affecting the world’s water resources. This is accomplished using an intriguing storyline, fun characters, a variety of interesting environments to explore, and an appealing graphical interface. The game adheres to a high level of scientific accuracy and realistically depicts many aspects of the job of an environmental scientist.
- Lin linszhou@gmail.com
Education ToolKit". For more information goto
Hi, I am a student from the Indian Institute of Information Technology and Management, Gwalior, India, interested in this type of software because I have worked on this area for some time. I have created interactive e-learning software for children. Let me know how I can help OLPC in this particular process. Contact me at tejapv@gmail.com. Thanks...
Hi, my name is Waseem. I am a Masters student at the Royal Institute of Technology (Stockholm, Sweden); my major is Software Engineering of Distributed Systems. I have an idea (which I have been working on for a while) that is closely related to this topic. The main idea is to develop an interactive application that will take any form of text (story, scientific essay, news, or any other composed text) and extract those parts that could be transformed into multiple-choice and short questions, which could then be presented to the user. The main purpose of the application is to help children with exam preparation, as they could make their own exam paper on the fly and evaluate their preparation. I would like to discuss this project in more detail. I can be reached by email at exactlypinpoint@gmail.com.
With best regards, Waseem Shaukat
Board & card games
A suite of board and/or card games would be developed by the student including things like Chess, Checkers, Othello, Mancala. They would all be built on a common framework so that more games could be developed easily. Features would include multiplayer tournaments (including chat & spectator support), good computer AI, interactive game teaching, game recording & playback, etc.
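A "common framework" like the one described might expose a small turn-based core that each game subclasses. The sketch below is illustrative, not the API of any existing Sugar activity; recording moves as shown is what would later enable game playback.

```python
class TurnBasedGame:
    """Minimal base class a common board-game framework could provide."""

    def __init__(self, players):
        self.players = players
        self.turn = 0
        self.moves = []          # recorded for playback

    def current_player(self):
        return self.players[self.turn % len(self.players)]

    def play(self, move):
        if not self.is_legal(move):
            raise ValueError(f"illegal move: {move!r}")
        self.apply(move)
        self.moves.append(move)  # enables game recording & playback
        self.turn += 1

    # Subclasses (Chess, Checkers, Othello, Mancala...) override these:
    def is_legal(self, move):
        return True

    def apply(self, move):
        pass
```

Multiplayer support, chat, spectators, and AI would then be written once against this interface rather than per game.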
- Hi, I am Preeti, from New Delhi. I am very keen on working with OLPC on developing Board & Card Games. I have already done work in this aspect, in C++, by making several games such as Brainvita, Solitaire and Scrabble. I want to contribute to the Google Summer of Code in this project. Please let me know how I can help..kspreeti.13@gmail.com
- My name is Christopher Hall. I study computing science at Glasgow University. I have already written Othello in Java and would love to be selected to write more games on a standard framework. I will answer any questions about myself or my interest in this project at 0406503h(@)student.gla.ac.uk
- Hello, I'm Lucien Pereira, a computer science student. I'm interested in contributing a chess game in the GSoC context. I have good skills in Python, C++ and Java. Mail me at lperei04[@t]etudiant.univ-mlv.fr.
- Hello, my name is Pedro Marcos. I'm a computer engineering student at ITA (which stands for, in English, Aeronautics Technology Institute). I'm Brazilian and I'm very interested in developing a checkers game for OLPC written in Java. It could possibly include a multiplayer mode and recorded history, as well as an implementation of a Sudoku game (also in Java) with different levels of difficulty and size. I have already written a game (SuperTrunfo) in Java for a class, and I would love to see one of my programs being a part of OLPC. For more information please email me at petrol101@gmail.com
- Hello, my name is Dommaraju Sandeep. I am pursuing an M.Sc (Hons) in Mathematics and a B.E (Hons) in Computer Science Engineering at BITS Pilani, Goa campus, India. I would like to work on developing an "Unscramble words" game for the OLPC programme, which I am sure will be both a recreational and informative game for children. I have programming experience in C, C++ and Java. I would also like to help develop games like Zatacka, Arkanoid, etc. If you like, you can contact me at sandeepdommaraju@gmail.com
- Hello, my name is Ray Hogan and I will be graduating in October with a degree in Computing and Computer Games Development. I am interested in making casual games (board, card, etc.) in Java. I have a broad knowledge of the preparation, design and production of games, as it has been a key topic in the past three years of my studies. I am currently making a Hangman, Tic Tac Toe and Connect 4 game in Java using Java Swing components. Please feel free to contact me at rayhogan@gmail.com or send me a memo on the freenode IRC network, my nickname being H0gan.
- Greetings. My name is Joseph Austin. I am finishing an undergraduate degree in English at Middle Tennessee State University while working on another in Computer Science. I will be out of classes this summer, so I would like to spend those idle hours with GSoC. Specifically, I have a game idea in mind that has never been implemented, called "The Diamond Watch". I have experience in C and C++, specifically using Allegro. My website ( ) contains a game created by me, completed in under a month. What I am offering is a game that is fun, intuitive, and works the logical centers of our users' developing brains without exhausting them.
The board works as a clock that changes each turn, with a diamond on both ends. The user may move to any node (marked purple), but may not go back a step. The player also has the option of standing still for a turn. This game is played like rock-paper-scissors, in that both moves are made privately before the board updates. If both players move to the same location, they each must go the opposite direction. If one player "rests" a turn and the other attempts to move into the same space, only the other player has to move in the opposite direction. The object is to claim both diamonds, much like capture the flag.
Feel free to contact me at jba2g@mtsu.edu. I will also be on the IRC channel as JBLovloss.
3D Software Renderer & Game
A simple flat shaded 3D graphics library would be developed by the student for the XO platform, with an accompanying game. The game would be something exciting and multiplayer but non-violent, I'm thinking about a first person firefighting simulator (where you shoot water at animated fires and rescue victims) or something like that. The game would be designed by the student with direction from the mentor.
- I may be able to do this firefighting simulator game using python/pygame. I'm not sure about the 3D graphics library; I'd request some details from the mentor. --krish
- Hi, my name is Nataly. I am Peruvian; I am working in 3D reconstruction now and I am interested in this project. My email is nzapana@gmail.com. Mentors, please tell me how I can help.
XO Smart Kid
It's a single-player game where the player takes the role of a child (an XO user). The storyline revolves around the life of the child, and hence the stages include missions like: 1. going to school, 2. organising items in a room, 3. getting a list of things from the mall, and more. The game intends to teach the player (child) elements of social behaviour as well as cautious and careful living. Examples: 1. the player avoids any contact with strangers on the way to school; 2. the player needs to cross roads cautiously; 3. the player must remember routes to school, home, and the mall.
The player will be awarded game points like chocolates/pastries; since most children love them.
The learning from this game can be applied to real social life.
- I had proposed this idea on the games mailing list. Looking for a mentor. Suggestions/comments welcome - raja.aishwarya@gmail.com
Micropolis (SimCity)
I have an older list of interesting ways to develop Micropolis (aka SimCity), which I have written about on my blog!
The source code is on Google Code, and I've been working on finishing up all the grunt work that requires familiarity with the code and would be hard for others to do, to enable other people to work on the higher-level stuff that depends on it.
There are two Micropolis projects:
- The old "micropolis-activity" which is the original TCL/Tk version of SimCity for Unix, which I ported to Linux and adapted to the OLPC.
- The new "MicropolisCore" C++/SWIG/Python module that I've cleaned up and I have started developing a user interface.
It would be best to put effort into developing the new MicropolisCore code for the long term, although there are some small tasks that could be done with the old TCL/Tk code for the short term. -Don Hopkins
- I've been brainstorming some new sort of work connecting the MC engine with other libraries, and recently got PacMan to work with pacman traversing city streets and eating up traffic as dots... [Jan 2009] [3]
Update: We have made a lot of progress on the new Python based version of Micropolis! The GTK/Cairo/Pango version is playable, and I'm developing a web based version with Python/TurboGears/SQLAlchemy/Genshi/Cairo/Pango on the web server, and OpenLaszlo/Flash on the client side. More information on the project at [4].
Improve DrGeo
The DrGeo activity (interactive geometry) port needs to be finished and improved in several areas.
Parts to be written
- implementation of the macro-construction system. It is a system to record a set of constructions as a function the user can save and use repeatedly. See the original implementation.
- implementation of the script system. A script within DrGeo is code hooked to an interactive sketch; it is used to perform calculations. See the original implementation in Scheme. The script language will be Smalltalk based.
Parts to improve
- Improve the load time; it is currently unacceptable for the user and makes DrGeo ill-suited for the OLPC.
- Define a journal type entry to save/load.
- Improve the user interface, particularly the access to the construction tools.
- Improve the locus sampling; it is currently suboptimal.
Other suggestions for improvements, see the DrGeo tracker.
Comment: I'd love to participate in DrGeo. I'm Anna Wrochna (Lilavati); I am studying math and CS at the University of Warsaw. I speak French, have used Smalltalk, and loved geometry in high school, so I'm the right person for the job. Please contact me at a.wrochna(a)gmail.com .
Develop a light, functional and usable email client
- log children on automatically.
- cache things locally, both for writing and for reading.
- interest: Shikhar is interested in developing an email activity, see proposal outline Email client
- Hello, I'm Pedro Marcos, and I have already shown my interest in developing a project for OLPC (the board and card games project), but this project also interests me a great deal. While I was an intern at a web development and web hosting company called Secrel (), I developed a light email manager written in ASP. It could download emails via SMTP and categorize them according to the body of the message, and more. Contact me at petrol101@gmail.com for further information. Thanks for the opportunity :)
Mind mapping activity
A few teachers (including the teacher in Arahuay) have requested a mind-mapping activity. MindMeister and similar suites have offered us some of their toolchains.
Elements. 2D physics simulation
Making 2D rigid-body physics easily accessible and implementable with python/pygame on the XO laptop. The project has already started as 'Elements'.
Computer Vision with OpenCV
OpenCV is a computer vision library developed by Intel that greatly simplifies complex tasks like object recognition and tracking and image manipulation. Possible uses for it include vision based games, gesture recognition, and video chat with low bandwidth cartoon characters substituted for video.
Nirav Patel started working with OpenCV and face recognition on the XO laptop.
XOradio and XOtv
The idea is to supply an easy way (one click) to broadcast content (audio and video) from an OLPC laptop, and then put all the available channels together in an OLPC global channel of lectures, howto sharing, p2p online help, video-connected classrooms/sessions, etc. The page in the OLPC Austria wiki is broken at the moment.
Virtual Garden
It will be a virtual garden where children can grow and breed different plants. The plants' characteristics are going to be defined by the user, but the main idea of the project is that every characteristic of the plant is genetically represented; therefore, to create beautiful, interesting plants, children would need to understand the way in which genes affect the plant, and how they can use inheritance to produce the desired offspring. In the creation of the artificial plants, the work by Przemyslaw Prusinkiewicz and Aristid Lindenmayer in their book "The Algorithmic Beauty of Plants" will be used, with an addition: the flowers will be based on the superformula by Johan Gielis.
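The Lindenmayer systems from "The Algorithmic Beauty of Plants" drive plant shape from a tiny "genome": an axiom plus rewrite rules, applied repeatedly. A sketch of the rewriting step (rendering the resulting string as turtle graphics is a separate concern):

```python
def l_system(axiom, rules, generations):
    """Expand an L-system string; characters without a rule are copied as-is."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's classic algae example: A -> AB, B -> A
# generation 0: "A", 1: "AB", 2: "ABA", 3: "ABAAB", ...
```

A child "breeding" plants would then effectively be editing the rules dictionary, and inheritance could mix rules from two parent plants.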
Hi, my name is Venture M and I am pursuing my Masters in Information Technology at RMIT, Melbourne, Australia. I have seven years of programming experience, of which four are in the software industry. I have been programming in OpenGL for the last five years as a hobby. Although I have not read the ideas behind this project, I am sure that I will be able to grasp and implement them. Please contact me at venturecoder@gmail.com.
Hi, I'm User:Jairtrejo, Mechatronics student from Mexico City, and original proponent of the Garden activity. A detailed description of my proposal lives here: Garden Activity.
Language Learning
Foreign language learning
Focusing on English: A tool to learn foreign languages would be a great addition for the XO. At it's most rudimentary form, it can start with an interactive [[dictionary, but something advanced would be preferred, perhaps along the lines of LingoTeach.
- I've created a wiki pages with my ideas. Language Learning -Steven Mohr
- My name is Kelly. I'm interested in this task and other OLPC tasks, and want to talk to a mentor. Please email me at kekenned@gmail.com
- I am interested in this task and in other linguistic/language oriented tasks. I would like to talk to a mentor. E-mail me at shwayd@brandeis.edu Thanks. ~Kobey
- Hello, I have drafted my idea for a English vocabulary improving activity at WordNet Game. Regards Nikola
- Hello,i have an idea of an application where children can "talk" to the computer.Application uses speech recognition and speaks out words and involves small vocabulary building games.-Vidya Email me at (vidyareddy.b@gmail.com)
twext :)
Speech synthesis
Listen and Spell : A simple game to help children learn to spell words correctly using speech synthesis technology. Words will be spoken, and the child will be expected to correctly spell it.
The game can have the following features:-
- Difficulty Level - Easy/Medium/Hard
- Multiple Dictionary Sources
- Contextual Dictionary Lookups - The application can lookup words related to specific keywords, speak out a small description of the word, and then expect the child to spell it.
- Mesh Challenge - Children can collaborate over the Mesh Network and challenge each other in a multiplayer game. The child will type the word on his XO, this will be spoken on the other XO, and the player must spell it correctly.
A very basic activity draft that can be suitably scaled is available at talkntype
Hi, I am Assim Deodia. I have originally posted this idea on olpc mailing list and gsoc list. I have extented this idea and created a wiki page here Listen_and Spell. I am looking for a mentor who is interested in speech synthesis and language learning activity for the XO. email: assim.deodia@gmail.com
I'm really interested in this project and need to contact mentor. e-mail:sachith.ponnamperuma@gmail.com
Hi, I'm very interested in this project and I need to contact the mentor about some ideas that I had :). Please contact me: diogolr@gmail.com
Coding Tutor
Coding Tutor is a Programming Language & Technique learning tool, inspired by the Hackety Hack software which is a browser cum compiler for learning hacking in Ruby. Initially a C/C++ & Python Tutor with the purpose of C/C++ as system/base languages and Python as the modern easy to use, friendly programming language! Some key features include Speech Interactivity, Error Highlighter, Hints, Multi-Language Support, Network Sharing, Quiz Sessions etc. Coding Tutor supports the idea of "Learning while having fun" and thus will be a software intended to make the chore of learning coding a fun task.
Hi, I am Rahul Bagaria, NSIT, India. I have proposed this project for GSoC and have created a basic wiki page describing the project in detail at Coding Tutor. I am looking for a mentor interested in Programming Environments and speech based learning for XO.
Misc / needs work
My FilmCity(@INDOKLEY)
Record your songs and upload. Download the songs that is give by the "Music" Teacher. Make Playlist from the list of Songs(here we can resirict the list of songs according to the MUSIC TEACHERS)
- Upload Video
- Download Video
Embed twexter into activities
twexter software formats twin text (twext) for language learners.. twexter can work with all kindsa tools/activities like moodle or scratch or mediawiki or wixi so we can grow multilingual.. twexter can also annote same language text, for example by translating complex english to "basic english"
hi,I am kinda interested in this project. I am a master student from China, whose research interest is natural language processing. I need more information about this. I guess my research experience may do some contribution to this project.Who is the mentor? could you please contace me. my email: wenjuan1239@gmail.com. thanks a lot.
synxi
synxi wants to make it easier to add timed text SLS to video..
- speech to text
- closed captioning
- timed text (syllabic level karaoke)
- sync w/ audio video
synxi will help us 2.) learn language, 1.) teach language, 0.) share language
Meta activities
Activity Translation Activity
Similar to the functionality offered by the "View Source" key, an activity should ideally allow the user to translate it. A Translate activity would allow the user to translate any given activity, and optionally let the user share the translation, so that it can be reused by other users in the mesh.
Eclipse plugin for Activities
An [Eclipse] plugin which would allow software developers to easily write Python based activities for Sugar. Some of the features can be
- Integration with an Xnest/Xephyr window which would run the activity being developed in Sugar
- Easily accessible developer documentation
after three days playing around with XO, Sugar, xephyr ... i realize that we are no need another eclipse plugin for developing sugar activities. Just install PyDev plugin. start Xephyr and run eclipse "inside" Xephyr
Sugar Factory
Sugar Factory is an automated method for Sugarizing non-Python applications. Albert Cahalan has some of this working now.
Privacy and Parental Control
- Access control of the students
- Trace of Student activity and alert if open illegal and unauthorized websites and contents.
- Remote control of student laptop by if he is in home network
- Activiy log of the student and daily usage of laptop
- Daily,monthly analysis of the student usage (what they have used like fun,studies ,games etc)
Hi, I am Maria, a master student at University of California. I am interested to help XO program by developing a tool for parental control. I have great knowledge in C++, Perl and Java and my previous research include some privacy preserving software development. | http://wiki.laptop.org/go/Activity_ideas | CC-MAIN-2014-42 | refinedweb | 6,326 | 53.51 |
second part of the series of 12 LightSwitch tutorials. In this part we will look into how to make the data entry screen.
You can read the other parts of the LightSwitch tutorial as under
The article assumes that, we have LightSwitch 2011 installed in our system
Let us follow the below steps for creating a New Data Entry Screen for inserting records to the database
Step1: Choosing LightSwitch Application
Open VS2010 and from the project templates, choose LightSwitch Application (Visual C#)
Click OK button. We will be presented with an empty project with only two folders named "Data Sources" and "Screens".
We can either create a new table or connect to the external data source. In this tutorial we will go with "Create new table" option.
Step 2: Create a new Table
Click on "Create new table" option.Once clicked, we will get the below screen
So a default table by the name Table1Items has been created in the ApplicationData database.We will rename our table to tblEmployee.Next we will add some fields to it
N.B.~Rules for creating the Field names
N.B.~Some tit bits
We can even insert a new property, move the Properties up and down, add and edit relationships as shown under
So finally our Employee Table looks as under
As can be make out that, we have used some new data types such as Email Address, Phone Numbers etc.For every name field, the property dialog looks as under
N.B.~LightSwitch uses SQL Server Express for its internal database. The ApplicationDatabase.mdf file will be available in the bin\data folder of our project
Step 3: Create Data Entry Screen
Now we will add the dataentry screen. For this either we can click on the Screens Folder , right click and choose "Add Screen" Or Click on the Screen Icon or Press CTRL + SHIFT + E.
The "Add New Screen" window opens as shown under
We will choose the New Data Screen. Let us give a suitable screen name and the data source. That’s all
Click "Ok" to continue. This will create a new UI screen for our application to insert new data record by end user. Now, in the solution explorer we can see that, the “Data Sources” folder has one database named “ApplicationData” and it has a table named "tblEmployee". We can find the "EmployeeDataEntryScreen" in the "Screens" folder.We can change the design of the UI from the below screen:
Step 4: Running the application
Build the application.Press F5 to run the application. The below screen will appear
Enter valid values and click on the Save button and the record will be saved
N.B.~A few words about field validations
Moving the mouse in the Emp Name field shows the correct message for validation.
Upon entering the valid records and clicking on the save button the next record will be saved and it's ID will be incremented as under
Now let us try to close the window without saving the record.In that case the system will ask for a confirmation message as to whether save the changes or to quit the application without doing so.
We can however, customize the screen.For that let us click on the Design Screen Customize button as show below
It will ask for "Your unsaved data on the screen will be lost? Continue". Click OK.
The customization screen appears.
Here we can set the values of the various properties for our screen customization. In this case we changed the Horizontal Alignment to Left and Vertical Alignment to Top for the sizing property. Next click on the save button.And we can figure out that our data entry screen has been changed
If we want to close any unsaved changes, that can be done by clicking on the tab close button
Sometime we may need to work with the computed column.We can add computed column by clicking on the Computed Property or by using Ctrl + Shift + C as shown under
This will add a computed property by the name Property1.Change the name to EmpFullName. Hence, our table design looks as under
Click on the Edit Method of the Columns property as shown uder
Add the below code
namespace LightSwitchApplication
{
public partial class tblEmployee
{
partial void EmpFullName_Compute(ref string result)
{
// Set result to the desired field value
result = string.Concat(this.EmpFirstName, " ", this.EmpLastName);
}
}
}
So, our computed column will concat the First and the Last Employee Names. If we run the application we can see that.
Till now we say the validation provided by the LightSwitch framework. But we may need to create our own validation. Suppose we want to make the EmpFirst Name column to accept only alphabets and spaces.
For doing so, first click on the "Custom Validation"
And add the below code in the EmpFirstName_Validate method
partial void EmpFirstName_Validate(EntityValidationResultsBuilder results)
{
Regex r = new Regex(@"^[a-zA-Z\s]*$");
if (!(r.IsMatch(this.EmpFirstName)))
{
results.AddPropertyError("First name can contain only alphabets and spaces.");
}
}
If we now run the application and enter some wrong value in the EmpFirstName column, we will get the below error message
We will now see as how to add an image in our application
First we have to add a Image Data Type as shown under
Now Save the table. We'll want to be able to edit the picture as well, so double click the SearchEmployee screen. First expand the DataGridRow | Employee and then click the + Add item:
So the layout will look as under
Now run the application
We can figure out that the "EmpPhoto" field is empty since we have not uploaded any photo. Click on the EmpFirstName link
Now we have the option to upload an image
After the image is uploaded, click on the Save button and the record will be saved.
Now if we perform a search, we will get the record with image uploaded
N.B.~Search Screen is described here
So in this part we have seen how to work with the Data Entry Screen for inserting records to the database. Also we have seen how to perform custom validation,create computed column and screen customization.In the next article we will look into the search data screen.
Thanks for reading the article.Happy lightning with LightSwitch.
Latest Articles
Latest Articles from Niladri.biswas
Login to post response | http://www.dotnetfunda.com/articles/show/1575/part-211-rapport-with-new-data-screen-in-lightswitch | CC-MAIN-2016-50 | refinedweb | 1,066 | 61.77 |
>>."
Sounds like Scala (Score:3, Insightful)
From a quick glance it looks like Scala with a more Java-like syntax... I wonder what added benefit they hope to bring.
I'd be very interested to see an in-depth comparison of the two.
Re:Sounds like Scala (Score:4, Informative)
OK, replying to myself because I obviously didn't have enough coffee yet:
They list as the benefits over Scala
- Extensible type system
- Easy transition from Java
- Reified Generics
From those 3 points, only the last one sounds useful...
Re: (Score:2)
I haven't read the docs yet, but how can they claim reified generics and full Java compatibility?
Re: (Score:2)
OK, replying to myself because I obviously didn't have enough coffee yet:
They list as the benefits over Scala - Extensible type system - Easy transition from Java - Reified Generics
From those 3 points, only the last one sounds useful...
Except that reified generics are available in Scala using manifests [scala-blogs.org]. I am not sure what the "extensible type system" means but there are various ways of adding to a type system in Scala. However I agree on not being an "Easy transition from Java". Scala is not really an easy transition from anything!
Re: (Score:2)
No, Scala doesn't have reified generics. You can get something like reified generics in Scala using manifest that covers some cases. In addition, Scala manifest are experimental, and not a stable part of the language and there is no schedule for when they will be (I still use them though.)
True, you have to implement reified generics yourself using manifest. I have not found any limitations myself yet, but I have to admit that I have only done simple reified type collections. ey
Re: (Score:2)
Why do you need it in a Java-like?
Re: (Score:2)
Well if it were a superset of Java then the answer to that should be obvious. I sense people are getting seriously fed up with Oracle / Sun's glacial development schedule as well as all the legal shenanigans. Java 7 is the Duke Nuke Em of language iterations. If someone produces a Java with extensions (almost like C++ was C with classes originally) then they might jumpstart development a
Re: (Score:2): (Score:2)
As long as you're not writing GW-Basic or COBOL in any language, I think you're not too far gone.: (Score:3, Insightful)
My point is, if you give me sufficient encapsulation tools, I can write code that is concise, readable and manipulable. And it is something anyone can do.
Re:Another Language (Score:4, Insightful)
But, VB makes writing bad code trivial, and writing good code challenging.
Re: (Score:2)
it isn't the language, you can write good or bad code in any language But, VB makes writing bad code trivial, and writing good code challenging.
COBOL even more so.
Re: : (Score:2)
Re: (Score:3, Insightful): (Score:2)
But then again I think of it as a bit like wine-tasting. Tasting as many langauges as possible gives me the ability to identify what goes best with what, how do I know if a task is more suited to Perl than assembler - only because I have used both. That said there has to be limits, if I have been using Java exclusively for a few months, it may be a lot quicker for me do some data mangling in Java even though it may require 10x as many lines of code the equivalent Perl (simply because I'll have to switch my
Re: (Score:2)
You can write good code in any language but can (a) other people and (b) the compiler understand what you meant?
It's assembly, you don't need a compiler.
:)
With comments and well-named labels and macros, assembly isn't too hard for a human to understand.
Re: (Score:2)
mov ax, 4c01
int 21h
Re: (Score:3, Insightful)
A compiler translates code INTO assembly (or machine code). An assembler translates assembly code into machine code.
Geez, the kids these days are spoiled with their fancy IDEs, and don't even know what assemblers and linkers are. So sad.
Re: (Score:2)
"there's no way any normal person can write easily in C what is possible in Haskell."
Oh rubbish. Do you think haskell is magic or perhaps it uses special CPU opcodes no other language knows about? Languages like Haskell just make certain concepts and constructs quicker to write, not possible to write. And since a lot of haskell interpreters are written in C ergo whatever you can do in Haskell you can do in C.
Re: (Score:2) [wikipedia.org]
Don't be ridiculous. "All Turing complete languages are Turing complete" is not an argument, its a tautology.
... anything. [psu.edu]
The question is how easily a concept can be expressed and reasoned about. By your logic, because diophantine equations can encode a lisp interpreter, they are a reasonable way to implement
Re: .
:)
Run for the hills! (Score:2)
Its new age cobol. I can hear the PHB now. If we use and instead of && our secretary will understand how to code and we'll save milions!
Re: (Score:2)
If we use and instead of && our secretary will understand how to code and we'll save milions!
Watch out for when your secretary learns to fly, then - for she's coding in Python and she's gonna import antigravity!
Re: (Score:2)
If we use and instead of && our secretary will understand how to code and we'll save milions!
Watch out for when your secretary learns to fly, then - for she's coding in Python and she's gonna import antigravity!
Well I hope she remembers to indent correctly.
Re:Run for the hills! (Score:4, Funny)
As a geek, you should actually prefer "and", because it is quicker to type than "&&" - it does not engage Shift, and it does not require pressing the same key twice (which requires waiting for release after the first press). Same for "or" vs "||".
Re: (Score:3, Funny)
var1&&var2 works but var1andvar2 doesn't
Re: (Score:2)
var1&&var2 works but var1andvar2 doesn't
Well yes, you can write it all on one line as well. But if you really want to go that way, nothing beats APL.
Re: (Score:2)
Try using spaces
Re:Run for the hills! (Score:4, Insightful)
Re: (Score:3, Interesting)
No, as I geek I prefer because "readability counts". Gains in productivity from key presses are irrelevant compared to the gains in productivity from understanding code as you skim it.
Re: (Score:2)
Very neat (Score:2)
Yuck!!! (Score:2, Interesting)
Phonetically gosu sounds identical to the Portuguese word "gozo", which literally means cum (as in ejaculation).
Re: (Score:2)
Only in Brazil. It has no such connotation in Portugal.
Re: .
Epic type system fail - universal covariance (Score:5, Informative)].
Re: (Score:2)
The really funny thing is that in practice all generics really need to do is prevent you from having to repeat casts everywhere, catch errors moderately soon, and aid in documentation. Which is what these do. The real 'trap' here is thinking that something has to be theoretically perfect to be useful or convenient.: (Score:2)
Re: (Score:2)
* @param {?string} input string that may be null
* @return {!string}
**/
function makeDefaultIfNull(input) {
return (input) ? input : "Default";
}
Re: (Score:2)? Please advise
;-)
Additionally, and regardless of how smart/strict a compiler is, are you saying that you would prefer another constant such a
Re: (Score:3, Informative)?
No, that's not it. The initial value of an unassigned local variable is "unassigned", not null, which is why the compiler won't let you use it unless all code paths leading to the current point assign or initialize it. But you can still explicitly initialize with null, and then everything I said applies.
Unassigned fields are initialized to null, and can be used as such with no explicit assignment/initialization.
Additionally, and regardless of how smart/strict a compiler is, are you saying that you would prefer another constant such as say 'unassigned' (which in fact cannot be assigned explicitly) as a value for unassigned references? Then, if an 'unassigned' value pops up from nowhere at runtime after your program has aborted with a thrown exception, you at least know that it was not a null pointer but that you're using a compiler that cannot tell where an unassigned reference is being used? Just thinking aloud...
No. The point is that the type system should ensure that a reference that is possibly null cannot
Re:
Re: (Score:3, Informative)
Which makes me wonder, what do all these language-design people have against contravariance?
It depends on which of the two we're talking about: variance of method argument and return types, or variance of generics.
For generics, the logic goes roughly like this:
1. Generics are mostly used for collections.
2. Intuitively, people expect a collection of objects of derived class to be substitutable for a collecton of objects of base class, even if it is not sound.
3. In practice, most uses of collections per #2 are sound (i.e. only use the covariant methods, not contravariant ones).
4. So, just default to
Re: (Score:2)
Their main 'competitor' Scala (in my mind) has both covariant and contravariant type system.
So in the language comparison, they left out covariant and contravariant..
*Sigh*
Re: .
call be back.... (Score:5, Insightful)
>> The language itself is not yet open source,
ok, call me back once it is. I don't really need another programming language, let alone a closed-source once.
Gosu is an unrefined mix (Score:2)
Gosu is an unrefined mix of cobalt oxide, sodium and other minerals mined in China.
Yup, so very right.
YASBTJ (Score:2)
Re: (Score:2)
Unless Sun/Oracle finally gets its act together and implements some language improvements, we will see another bunch of those languages. Properties have been requested for ages, closures have been discussed for how long. Dynamic reloading of classes for real hotswapping still is a pain in the arse. Java has done so many things right, but like many other sun technologies it falls short by 5% and then it takes ages to get it in out of the fear of breaking compatibility. I personally wonder if it would not be
Re: (Score:2)
Properties have been requested for ages, closures have been discussed for how long.
AFAIK properties are not on the radar, but lambdas are coming in Java 8. You can track the work (both design and implementation - there's already code to try out) here. [java.net]
To me the Dalvik VM from a modern standpoint makes more sense on bytecode level than the registere based JVM.
It's Dalvik which is register-based (which makes fast interpreters easier to implement). JVM is stack-based..
It is case-insensitive... (Score:2)
Re: (Score:2)
0RLY? N4h, wh47 w3 w4n7 15 N07 c453 1n53n5171v3, 8u7 4 1337 c4p4813 14n6u463...
Programmers are excused from learning proper English language capitalization, punctuation and spelling (and other written language semantics (like proper "quote usage", and avoiding nested paren
Cool. Next, fix the VM (Score:2)
Re: (Score:2)
The problem with generics are not on vm level, it is more the problematic implementation on javas side. The JVM after all is just assembler with high level constructs for classes and data types to some degree, it can scale to any generic implementation you can think of.
Wow... (Score:2)
Not real (Score:2)
Not real until you post its operational semantics!
Looks very interesting (Score:2)
But I really like my semicolons (as much as lispers like their parenthesis)
Interesting (Score:2)
After all, a programming language syntax does not need to be "encrypted" to be effective.
Scalars weakly typed? (Score:2)
A quick look at the documentation shows that objects/classes are strongly typed, but scalars (i.e. integers) apparently are not. In my experience, you're much less likely to add Apples to Oranges, than you are to add Count of Apples to Count of Oranges. And that also holds true for scalar values used for array indexing, etc.
So it seems to me that no language should be called 'strongly typed' if it doesn't include a complete type system for scalar types.
Re: J
Re: (Score:3, Interesting)
It is rather questionable if JIT related patents can hold up in a courtcase, JIT compilation has been around since the 70s Smalltalk and Lisp have been using it for decades.
Re: (Score:2)
But look at the picture, if you recognize half those languages, you are pretty good. The languages most of use daily weren't even dreamed of when that picture was made. And so it goe
Re: (Score:2)
Ada is named after a real person, Ada Byron, Countess of Lovelace, therefore should not be typed in all caps. thx
Re: (Score:2)
why in the hell do we need to keep inventing new programming languages?
On one hand, the ones we have are not perfect. On the other hand, there is as yet no agreement as to what a perfect one would look like. Hence many people take many different ways towards what they see as perfection.
On the other hand, there are still quite a lot unresolved language design problems - mainly to do with type systems. Until those are dealt with, it's not clear if perfection is even attainable.
Re: (Score:3, Informative)
if a type is EVER inferred, then the language is NOT statically typed. just because some preprocessor interpreter assigned a static type heuristically doesn't mean the language has anything to do with static typing... in fact, if the language ever infers type, that has EVERYTHING to do with DYNAMIC typing.
You might want to go tell the authors and users of ML (incl. OCaml) and Haskell that they're using dynamically typed languages. Somehow I'm sure they will be very open to this idea.
Re: (Score:2)
Slashdot used to appeal to the technically literate. *sigh*
Re: (Score:2)
Apart from the way the names 'Gosu' and 'Go' overlap and are both derived from the board game, what similarities do you see between the gosu-lang.org and the golang.org websites?
Re: (Score:2)
After briefly glancing through the docs, this language has absolutely nothing to do with Go. I'm not even sure what you mean by "lift of the website" in this context. The design is completely different, and so is the contents.
Re: (Score:2)
Re: (Score:2)
Jepp the dreadful lisp syntax drove me away from clojure, how can anyone write a big system with such a mess of a syntax. There are functional languages which are actually readable, lispish languages are definitely not one of those.
Re: (Score:2)
Replacing Java with another JVM language doesn't help you one bit if you have a problem with relying on Oracle software.
Re: (Score:2): (Score:3, Informative)
Stop spreading the old Java FUD. Please do some research if you feel so strongly about how someone else chooses to do their work. I don't care what languages you use, why should you care what I use?
This is a mature 3D library + engine: [jmonkeyengine.com]
Re: (Score:3, Informative)
We just released our language, and are excited about it.: (Score:2)
No, this is from Guidewire Software. It's says right there in the first two words. It's licensed under the Apache license.
But it's still free (in both senses of the word), so I don't get GP's complaint.
Re: (Score:2)
"But it IS a piece of Gosu!"
Don't worry: the language will probably be cancelled before it has a chance to really get going.
Anyway, that was my first thought too, then I wondered if it was an invitation to Oracle's lawyers...?
Re: (Score:3, Informative)
The JVM runs in more systems than the CLR. Assuming it's not too big, you could possibly use it on Android in the near future.
Re: (Score:2)
"How many times have you really used this fall through feature?"
All the time actually. Almost every switch I write has multiple cases which all require the same code executed (or not). If you've never come across this scenario then all I can think is you must just do toy coding.
Re: (Score:2) pract insis.
Re: (Score:3, Funny)
It's happening because languages are nowhere near maturity yet. Give it another 150 years and we'll be down to about 5 or 6 commonly used languages. Right now no one has it adequately right. unificati
Inexperienced Programmers (Score:3, Insightful)
Think of all the talent locked up in someone who has done language A for 10 years but is totally useless to you because your project uses language B? The concepts are the same, yet people's knowledge is arbitrarily walled off in this development environment or that environment. How can this be considered good?
Innovation doesn't mean re-inventing the wheel. librari s | http://slashdot.org/story/10/11/09/0510258/Gosu-Programming-Language-Released-To-Public | CC-MAIN-2013-48 | refinedweb | 2,883 | 71.95 |
Thanks again for your responses, guys. To answer the question,the features I'd love to see in a Python IDE are: * First and foremost, Vim editing behavior. Let me keep my fingers on the homerow. I'm lazy. Point and click and CTRL + SHIFT has its moments, but text editing is not one of them. * Graphical symbolic debugger: the course I'm auditing, Software Carpentry, by Greg Wilson of University of Toronto, devoted a whole lecture to debuggers. See . So now I want to try this crazy thing. I love the idea of being able to watch the values of variables change in "realtime" as the program runs, from the convenience of a little side window. I also love the concept of not having to insert debugging code into the production code--just a click in the left column and you set the debugging command. Keep the production code clean by putting the debugging commands outside the program. * Source browser: the ability to jump back and forth between specific blocks of code very quickly, and to see the overall layout of the file in terms of classes, methods, functions, etc. I want the big picture in a side window to keep me on task and remind me of how far I've come when I start feeling bogged down in details. * Autocompletion: PythonWin by ActiveState has nice autocompletion. When I import a module, it can dive down into those namespaces and allow autocompletion on them. That's a nice, productive feature. * Usage tips/tooltips: Also something I found in PythonWin. During the writing of the method, a little tip box pops up advising me what the inputs are for a method or an instance construction for a class. Very nice, very productive. * Linux compatibility: Nothing against Microsoft, or Apple, I just like to use a Linux box more. It seems like the IDEs I've looked at have most of the features, but none do Vim. Crazy. I agree that you can do all your coding using just Vim. That's how I've been doing it. 
But following along with Greg Wilson's Software Carpentry has made me realize that I could be more productive using the additional, apparently now-standard tools of a good IDE. I just don't want to sacrifice productivity in in keystrokes. It just seems like a compromise programmers shouldn't have to make. the other Chris Chris Lambacher wrote: > I would second that. I use Vim for editing. I find I don't need an IDE (not > even for C/C++). Vim does everything I need. If I want a debugger I will use > the shell debugger. Most other things can be added to Vim, though I tend to > run with very few plugins. > > -Chris > > > On Tue, Oct 18, 2005 at 05:12:30PM +0000, Ron Adam wrote: > > What features are you looking for. I think most Vim users just add what > > they want to Vim. > > | http://mail.python.org/pipermail/python-list/2005-October/345430.html | CC-MAIN-2013-20 | refinedweb | 494 | 73.37 |
The QTableWidgetItem class provides an item for use with the QTableWidget class.
#include <QTableWidgetItem>
The QTableWidgetItem class provides an item for use with the QTableWidget class.
Table items are used to hold pieces of information for table widgets. Items usually contain text, icons, or checkboxes.
The QTableWidgetItem class is a convenience class that replaces the QTableItem class in Qt 3. It provides an item for use with the QTableWidget class.
Items are usually constructed with a table widget as their parent, then inserted at a particular position specified by row and column numbers:
QTableWidgetItem *newItem = new QTableWidgetItem(
    tr("%1").arg(pow(row, column+1)));
tableWidget->setItem(row, column, newItem);
Each item can have its own background color which is set with the setBackgroundColor() function. The current background color can be found with backgroundColor(). The text label for each item can be rendered with its own font and text color. These are specified with the setFont() and setTextColor() functions, and read with font() and textColor().
Items can be made checkable by setting the appropriate flag value with the setFlags() function. The current state of the item's flags can be read with flags().
See also QTableWidget.
Constructs a table item of the specified type that does not belong to any table.
See also type().
Constructs a table item with the given text.
See also type().
Destroys the table item.
Returns the color used to render the item's background.
See also textColor() and setBackgroundColor().
Returns the checked state of the table item (see Qt::CheckState).
See also setCheckState() and flags().
Creates an exact copy of the item.
Returns the item's data for the given role.
See also setData().
Returns the flags used to describe the item. These determine whether the item can be checked, edited, and selected.
See also setFlags().
Returns the font used to render the item's text.
See also setFont().
Returns the item's icon.
See also setIcon().
Reads the item from stream in.
See also write().
Sets the item's background color to the specified color.
See also backgroundColor() and setTextColor().
Sets the check state of the table item to be state.
See also checkState().
Sets the item's data for the given role to the specified value.
See also data().
Sets the flags for the item to the given flags. These determine whether the item can be selected or modified.
See also flags().
Sets the font used to display the item's text to the given font.
See also font(), setText(), and setTextColor().
Sets the item's icon to the icon specified.
See also icon() and setText().
Sets the item's status tip to the string specified by statusTip.
See also statusTip(), setToolTip(), and setWhatsThis().
Sets the item's text to the text specified.
See also text(), setFont(), and setTextColor().
Sets the text alignment for the item's text to the alignment specified (see Qt::AlignmentFlag).
See also textAlignment().
Sets the color used to display the item's text to the given color.
See also textColor(), setFont(), and setText().
Sets the item's tooltip to the string specified by toolTip.
See also toolTip(), setStatusTip(), and setWhatsThis().
Sets the item's "What's This?" help to the string specified by whatsThis.
See also whatsThis(), setStatusTip(), and setToolTip().
Returns the item's status tip.
See also setStatusTip().
Returns the table widget that contains the item.
Returns the item's text.
See also setText().
Returns the text alignment for the item's text (see Qt::AlignmentFlag).
See also setTextAlignment().
Returns the color used to render the item's text.
See also backgroundColor() and setTextColor().
Returns the item's tooltip.
See also setToolTip().
Returns the type passed to the QTableWidgetItem constructor.
Returns the item's "What's This?" help.
See also setWhatsThis().
Writes the item to stream out.
See also read().
Returns true if the item is less than the other item; otherwise returns false.
Assigns other's data and flags to this item. Note that type() and tableWidget() are not copied.
This function is useful when reimplementing clone().
See also data() and flags().
The default type for table widget items.
See also UserType and type().
The minimum value for custom types. Values below UserType are reserved by Qt.
See also Type and type().
This is an overloaded member function, provided for convenience. It behaves essentially like the above function.
Writes the table widget item item to stream out.
This operator uses QTableWidgetItem::write().
See also Format of the QDataStream Operators.
This is an overloaded member function, provided for convenience. It behaves essentially like the above function.
Reads a table widget item from stream in into item.
This operator uses QTableWidgetItem::read().
See also Format of the QDataStream Operators. | http://doc.trolltech.com/4.0/qtablewidgetitem.html | crawl-001 | refinedweb | 781 | 72.02 |