text stringlengths 20 1.01M | url stringlengths 14 1.25k | dump stringlengths 9 15 ⌀ | lang stringclasses 4
values | source stringclasses 4
values |
|---|---|---|---|---|
QWizardPage Class Reference
The QWizardPage class is the base class for wizard pages. More...
#include <QWizardPage>
This class was introduced in Qt 4.3.
Properties
Public Functions
- 221 public functions inherited from QWidget
- 29 public functions inherited from QObject
- 13 public functions inherited from QPaintDevice
Signals:
- initializePage() is called to initialize the page's contents when the user clicks the wizard's Next button. If you want to derive the page's default from what the user entered on previous pages, this is the function to reimplement.
- cleanupPage() is called to reset the page's contents when the user clicks the wizard's Back button.
- validatePage() validates the page when the user clicks Next or Finish. It is often used to show an error message if the user has entered incomplete or invalid information.
- nextId() returns the ID of the next page. It is useful when creating non-linear wizards, which allow different traversal paths based on the information provided by the user.
- isComplete() is called to determine whether the Next and/or Finish button should be enabled or disabled. If you reimplement isComplete(), also make sure that completeChanged() is emitted whenever the complete state changes..
Property Documentation
subTitle : QString.
By default, this property contains an empty string.
Access functions:
See also title, QWizard::IgnoreSubTitles, and Elements of a Wizard Page.
title : QString
This property holds the title of the page.
The title is shown by the QWizard, above the actual page. All pages should have a title.
The title may be plain text or HTML, depending on the value of the QWizard::titleFormat property.
By default, this property contains an empty string.
Access functions:
See also subTitle and Elements of a Wizard Page.
Member Function Documentation
QWizardPage::QWizardPage ( QWidget * parent = 0 )
Constructs a wizard page with the given parent.
When the page is inserted into a wizard using QWizard::addPage() or QWizard::setPage(), the parent is automatically set to be the wizard.
QString QWizardPage::buttonText ( QWizard::WizardButton which ) const().
void QWizardPage::cleanupPage () [virtual]
This virtual function is called by QWizard::cleanupPage() when the user leaves the page by clicking.
void QWizardPage::completeChanged () [signal]().
QVariant QWizardPage::field ( const QString & name ) const [protected]
Returns the value of the field called name.
This function can be used to access fields on any page of the wizard. It is equivalent to calling wizard()->field(name).())); }
See also QWizard::field(), setField(), and registerField().
void QWizardPage::initializePage () [virtual]
This virtual function is called by QWizard::initializePage() to prepare the page())); }
The default implementation does nothing.
See also QWizard::initializePage(), cleanupPage(), and QWizard::IndependentPages.
bool QWizardPage::isCommitPage () const
Returns true if this page is a commit page; otherwise returns false.
See also setCommitPage().
bool QWizardPage::isComplete () const [virtual]
This virtual function is called by QWizard to determine whether the Next or Finish button should be enabled or disabled.
The default implementation returns true if all mandatory fields are filled; otherwise, it returns false.
If you reimplement this function, make sure to emit completeChanged(), from the rest of your implementation, whenever the value of isComplete() changes. This ensures that QWizard updates the enabled or disabled state of its buttons. An example of the reimplementation is available here.
See also completeChanged() and isFinalPage().
bool QWizardPage::isFinalPage () const.
int QWizardPage::nextId () const [virtual]
This virtual function is called by QWizard::nextId() to find out which page to show when the user clicks the Next button.
The return value is the ID of the next page, or -1 if no page follows.
By default, this function returns the lowest ID greater than the ID of the current page, or -1 if there is no such ID.().
QPixmap QWizardPage::pixmap ( QWizard::WizardPixmap which ) const.
void QWizardPage::registerField ( const QString & name, QWidget * widget, const char * property = 0, const char * changedSignal = 0 ) [protected]().
void QWizardPage::setButtonText ( QWizard::WizardButton which, const QString & text )().
void QWizardPage::setCommitPage ( bool commitPage )().
void QWizardPage::setField ( const QString & name, const QVariant & value ) [protected]
Sets the value of the field called name to value.
This function can be used to set fields on any page of the wizard. It is equivalent to calling wizard()->setField(name, value).
See also QWizard::setField(), field(), and registerField().
void QWizardPage::setFinalPage ( bool finalPage ).
void QWizardPage::setPixmap ( QWizard: the entire wizard using QWizard::setPixmap(), in which case they apply for all pages that don't specify a pixmap.
See also pixmap(), QWizard::setPixmap(), and Elements of a Wizard Page.
bool QWizardPage::validatePage () [virtual]().
QWizard * QWizardPage::wizard () const [protected]
Returns the wizard associated with this page, or 0 if this page hasn't been inserted into a QWizard yet.
See also QWizard::addPage() and QWizard::setPage().
No notes | http://qt-project.org/doc/qt-4.8/qwizardpage.html | crawl-003 | en | refinedweb |
Description
Long quicklinks get truncated and are hardly readable especially for sidebar-themes.
Steps to reproduce
Add a page name with more than 25 chars to your quicklinks.
Details
This Wiki. All themes are affected.
Workaround
Patch MoinMoin/themes/__init__.py to use shortenPagename-method for links in navibar (patch generated in local svn repos: rev.100 corresponds to moin-1.5.4-release):
{{{Index: C:/Sources/moin-aca-trunk/MoinMoin/theme/init.py
=================================================================== --- C:/Sources/moin-aca-trunk/MoinMoin/theme/init.py (revision 100) +++ C:/Sources/moin-aca-trunk/MoinMoin/theme/init.py (revision 101) @@ -278,7 +278,7 @@
- title = wikiutil.escape(title) + title = self.shortenPagename(wikiutil.escape(title))
link = '<a href="%s">%s</a>' % (pagename, title) return pagename, link
@@ -292,7 +292,7 @@
- thiswiki = request.cfg.interwikiname if interwiki == thiswiki:
- pagename = page
- title = page + title = self.shortenPagename(page)
- else:
- return (pagename,
- self.request.formatter.interwikilink(True, interwiki, page) +
}}}
- Another way to handle this "bug" (if moin itself will be not changed which seems to be the current plan): override the method "splitNavilink" in your theme.
Discussion
- What is the official opinion on this one? If the patch is not accepted because long names are OK for the modern theme, it would be nevertheless worthwhile adding an optional argument in order to reduce the length of the quicklinks for sidebar themes.
It is just incorrectly filed (rather a patch/feature request, not a hard bug). Besides the incorrect order of function calls in the first patch hunk (it needs to be escaped after shortening it), it looks ok. It does not meet our PatchPolicy, though. -- AlexanderSchremmer 2006-10-09 18:37:47
- Patch was uploaded a while ago when 1.5.4 was current. ...if that was the point.
If "Long quicklinks get truncated and are hardly readable", then why do you want to truncate the title?
- Because they don`t fit into the sidebar of the right-sidebar theme and it's even worth with left sidebar themes.
This theme-specific problem can be solved in the theme by overriding the splitNavilink-method (e.g. as done in DavidLinke/Sinorca4Moin. So it is not a real moin bug.
Plan
- Priority:
- Assigned to:
- Status: 1.6 shortens long quicklinks | http://www.moinmo.in/MoinMoinBugs/NavibarDoesNotShortenPagenamesForQuicklinks | crawl-003 | en | refinedweb |
Storage for 3 collinear points to serve as 1-D projective basis. More...
#include <vgl_1d_basis.h>
Storage for 3 collinear points to serve as 1-D projective basis.
This class is (the unit point) (1,1), and the third one (point at infinity) (1,0).,1) and (1,0).!)
Definition at line 92 of file vgl_1d_basis.h.
Definition at line 100 of file vgl_1d_basis.h.
Construct from three collinear points (projective basis).
It will serve as origin (0,1), unity (1,1) and point at infinity (1,0). The points must be collinear, and different from each other.
Note that there is no valid default constructor, since any sensible default heavily depends on the structure of the point class T, the template type.
Note that there is no way to overwrite an existing vgl_basis_1d; just create a new one if you need a different one. Hence it is not possible to read a vgl_basis_1d from stream with >>.
Definition at line 10 of file vgl_1d_basis.txx.
Construct from two points (affine basis).
It will serve as origin (0,1) and unity point (1,1). The points must be different from each other, and not at infinity. This creates an affine basis, i.e., the point at infinity of the basis will be the point at infinity of the line o-u in the source space.
Definition at line 17 of file vgl_1d_basis.txx.
Definition at line 106 of file vgl_1d_basis.h.
Definition at line 105 of file vgl_1d_basis.h.
Definition at line 103 of file vgl_1d_basis.h.
Projection from a point in the source space to a 1-D homogeneous point.
Definition at line 24 of file vgl_1d_basis.txx.
Definition at line 107 of file vgl_1d_basis.h.
Definition at line 104 of file vgl_1d_basis.h.
Write "<vgl_1d_basis o u i> " to stream.
normally false; if true, inf_pt_ is not used: affine basis
Definition at line 98 of file vgl_1d_basis.h.
The point to be mapped to homogeneous (1,0)
Definition at line 97 of file vgl_1d_basis.h.
The point to be mapped to homogeneous (0,1)
Definition at line 95 of file vgl_1d_basis.h.
The point to be mapped to homogeneous (1,1)
Definition at line 96 of file vgl_1d_basis.h. | http://public.kitware.com/vxl/doc/release/core/vgl/html/classvgl__1d__basis.html | crawl-003 | en | refinedweb |
export GRAILS_HOME=/path/to/grailsto your profile
My Computer/Advanced/Environment Variables
bindirectory to your
PATHvariable:
export PATH="$PATH:$GRAILS_HOME/bin":
grails.views.enable.jsessionid=true
PROJECT_HOME/pluginsdirectory by default. This may result in compilation errors in your application unless you either re-install all your plugins or set the following property in
grails-app/conf/BuildConfig.groovy:
grails.project.plugins.dir="./plugins"
Ant.property(environment:"env") grailsHome = Ant.antProject.properties."env.GRAILS_HOME"includeTargets << new File ( "${grailsHome}/scripts/Bootstrap.groovy" )
grailsScriptmethod to import a named script:
includeTargets << grailsScript( "Bootstrap.groovy" )
Antshould be changed to
ant.3) The root directory of the project is no long on the classpath, the result is that loading a resource like this will no longer work:
def stream = getClass().classLoader.getResourceAsStream("grails-app/conf/my-config.xml")
basedirproperty:
new File("${basedir}/grails-app/conf/my-config.xml").withInputStream { stream -> // read the file }
run-app-httpsand
run-war-httpscommands no longer exist and have been replaced by an argument to run-app:
grails run-app -https
static mapping = { someEnum enumType:"ordinal" }
parseRequestargument inside a URL mapping:
"/book"(controller:"book",parseRequest:true)
resourceargument, which enables parsing by default:
"/book"(resource:"book")
grailscommand which is used in the following manner::
web-app/index.gspfile. You will note it has a detected the presence of your controller and clicking on the link to our controller we can see the text "Hello World!" printed to the browser window..
grails run-app
server.portargument:
grails -Dserver.port=8090 run-app test-app
build.xmlwhich can also run the tests by delegating to Grails' test-app command:
ant test
These are merely for your convenience and you can just as easily use an IDE or your favourite text editor.For example to create the basis of an application you typically need a domain model:
grails create-domain-class book
grails-app/domain/Book.groovysuch as:
class Book { }
create-*commands that can be explored in the command line reference guide.
generate-*commands such as generate-all, which will generate a controller and the relevant views:
grails generate-all Book | http://www.grails.org/doc/1.1.2/guide/2.%20Getting%20Started.html | crawl-003 | en | refinedweb |
Created on 2011-10-26 15:43 by amyodov, last changed 2011-10-27 12:24 by python-dev. This issue is now closed.
The extended version of assert statement has a strange violation of documented behaviour.
According to the, "assert expression1, expression2" should be equivalent to "if __debug__: if not expression1: raise AssertionError(expression2)". Nevertheless, it is not so for the following scenario:
class A(object):
def __str__(self):
return "str"
def __unicode__(self):
return "unicode"
def __repr__(self):
return "repr"
expression1 = False
expression2 = (A(),)
That is, when expression2 is a single-item tuple, assert statement prints the str-evaluation of the item itself, rather than of the tuple.
This occurs in 2.x branch only, seems fixed in 3.x, and it would be great to have it backported for consistency.
New changeset 7bef55ae5753 by Benjamin Peterson in branch '2.7':
don't let a tuple msg be interpreted as arguments to AssertionError (closes #13268) | http://bugs.python.org/issue13268 | crawl-003 | en | refinedweb |
The QtSoapHttpTransport class provides a mechanism for transporting SOAP messages to and from other hosts using the HTTP protocol. More...
#include <QtSoapHttpTransport>
Inherits QObject..
QtSoapHttpTransport usage().toLatin1().constData() << endl; return; } const QtSoapType &returnValue = response.returnValue(); if (returnValue["temperature"].isValid()) { cout << "The current temperature is " << returnValue["temperature"].toString().toLatin1().constData() << " degrees Celcius." << endl; }
See also QtSoapMessage and QtSoapType.
Constructs a QtSoapHttpTransport object. Passes parent to QObject's constructor.
Destructs a QtSoapHttpTransport.
Returns the most recently received response SOAP message. This message could be a Fault message, so it is wise to check using QtSoapMessage::isFault() before processing the response.
Returns a pointer to the QNetworkAccessManager object used by this transport. This is useful if the application needs to connect to its signals, or set or read its cookie jar, etc.
Returns a pointer to the QNetworkReply object of the current (or last) request, or 0 if no such object is currently available.
This is useful if the application needs to access the raw header data etc.
This signal is emitted when a SOAP response is received from a remote peer.
See also getResponse().
This signal is emitted when a SOAP response is received from a remote peer. The received response is available in response. This signal is emitted in tandem with the argument-less responseReady() signal.
See also responseReady().
Sets the HTTP header SOAPAction to action.
Sets the host this transport should connect to. The transport mode will be HTTP, unless useSecureHTTP is set, in which case it will be HTTPS. This transport will connect to the well-known ports by default (80 for HTTP, 443 for HTTPS), unless a different, non-zero port is specified in port.
Submits the SOAP message request to the path path on the HTTP server set using setHost(). | http://doc.trolltech.com/solutions/4/qtsoap/qtsoaphttptransport.html | crawl-003 | en | refinedweb |
SYNOPSIS
#include <sys/socket.h>
int socketpair(int domain, int type, int protocol,
int socket_vector[2]);
DESCRIPTION unspec-
ified.
ERRORS
The socketpair() function shall fail if:
EAFNOSUPPORT
The implementation does not support the specified address fam-
ily. pro-
tocol.
The following sections are informative.
EXAMPLES
None.
APPLICATION USAGE
The documentation for specific address families specifies which proto-
cols each address family supports. The documentation for specific pro-
tocols specifies which socket types each protocol supports.
The socketpair() function is used primarily with UNIX domain sockets
and need not be supported for other domains.
RATIONALE
None.
FUTURE DIRECTIONS
None.
The Open Group Standard, the original IEEE and The Open Group Standard
is the referee document. The original Standard can be obtained online
at . | http://www.linux-directory.com/man3/socketpair.shtml | crawl-003 | en | refinedweb |
#include <MP_set.hpp>
Inheritance diagram for flopc::MP_set:
This is one of the main public interface classes. One uses this when constructing MP_domains, and subsets. It is frequent that one would directly construct sets of indices, then use expressions to subset or slice the data.
term: dimension is the number of indices used to reference into it.
there is a distince 'empty' MP_set
Definition at line 79 of file MP_set.hpp.
constructs a set with specific cardinality.
Definition at line 82 of file MP_set.hpp.
Constructs an MP_domain on the stack given an index expression into the set.
Implements flopc::MP_set_base.
Definition at line 87 of file MP_set.hpp.
constructs an MP_domain from the MP_set.
Implements flopc::MP_set_base.
Definition at line 91 of file MP_set.hpp.
References flopc::MP_index::MP_domain_set.
constructs a domain by subsetting this MP_set where the MP_boolean evaluates to 'true'
Definition at line 97 of file MP_set.hpp.
References flopc::MP_index::MP_domain_set, and flopc::MP_domain::such_that().
setter for 'cyclic' property
Definition at line 103 of file MP_set.hpp.
References flopc::MP_set_base::Cyclic.
getter for the cardinality of this MP_set.
Implements flopc::MP_set_base.
Reimplemented in flopc::MP_subset< nbr >.
Definition at line 107 of file MP_set.hpp.
References cardinality.
Definition at line 110 of file MP_set.hpp.
References cardinality.
gets the distinct 'empty' MP_set.
Reimplemented from flopc::MP_index.
Reimplemented from flopc::MP_index.
Definition at line 116 of file MP_set.hpp.
Definition at line 117 of file MP_set.hpp.
Referenced by last(), and size(). | http://www.coin-or.org/Doxygen/Smi/classflopc_1_1_m_p__set.html | crawl-003 | en | refinedweb |
Tell us what you think of the site.
Are the pre-rotation options (in DOF) exposed to Python? Can’t seem to locate them in the helpfile.
Any pointers will be great!
-Johan
I’m using only OR SDK, but did you look in PropertyList of FBComponent?
KxL
Ok, so you think that with
FBProperty Find ( const char pPropertyName,
bool pInternalSearch = True,
bool pMultilangLookup = True
)
I could find the pre-rotation values… it’s worth a try… or am I totally lost here… really hate the python manual, probably makes a lot of sense to real programmers :)
-Johan
I think that you can find it, but you will have to know the exact name for it. I think that it is the same as in OR, so:
“PreRotation”
“PostRotation”
but it is always good write a loop and display all of properties that component have.
Hope this will help
KxL
It does help!! I’m wondering one thing though, how do you know the exact name, in the OR documentation there’s no mentioning of PreRotation (at least searching didn’n yield any results).. is there a list somewhere, or a method to display all properties? I probably also need to set DOF to on on the objects.
Thanks!!
-Johan
I have write for you a litte script in python (btw. my first script!), so it will print in the python console all properties for a selected model (they for FBModel will be the same, but I hope this will be some kind of help for others when they will have problems with finding a interesting value)
from pyfbsdk import FBModelList, FBGetSelectedModels, FBPropertyManager
def PrintProperty( pModel ):
lPropMgr = pModel.PropertyList
for lProp in lPropMgr:
print lProp.GetName()
lModelList = FBModelList()
# Get the selected models.FBGetSelectedModels( lModelList )
for lModel in lModelList:
PrintProperty(lModel)
# Cleanup.del( lModelList, PrintProperty )del(FBModelList, FBGetSelectedModels,FBPropertyManager)
KxL
Oh this is great! I’m a proficient maxscript/php/javascript user, but I’m having a hard time trying to understand this python implementation in Motionbuilder or maybe python overall :) , with the helpfile only showing the API in a structured way but for a non-programmer a difficult to thing to grasp. So any scripts other then the examples is a great help!
Thanks!!!
p.s. Since you seem to grasp all this programming concepts, maybe you can start a thread about the way python is integrated in MB. Why we need to import objects and instance them before using them...(if that’s even the right terminology) the “whole workflow” of doing stuff… How to use the manual would be great too, that is offcourse if you feel like it and have some time on your hands! But I think there’s a lot of other people out there, including me :), appriciating stuff like that.
Python integration is made by using Boost_phyton library, so if something is exposed from OR, it will behave in the same way. And that’ the only reason I manage to help you, becouse I’m only using OR, and never needed to use python. About the manual...hmmm..I must say that MotionBuilder is written by great programmers, and what great programmers doesn’t do? ... They never write good documentation ;) That’s the only explanation that I have for this. Maybe someone from Autodesk can explain this?
KxL, thanks for your help.
I managed to write a script to copy the rotation values to the prerotation and reset the rotation values back to 0,0,0. Thanks for answering, I have a much better grip on the helpfile already, but it would be great if it got updated to a more artist friendly helpfile. I now realize that it’s only an API document and doesn’t say a word about python itself or it’s functionality.
So I have so reading up to do, and see if this Python beast can be tamed :P
Thanks so much for your insights,
and if anybody is interested in the script get it here
-Johan
Hi,
I’ve had the same issue and your function actually help a lot the only problem I have is that I cannot access the “Enable Rotation DOF” in python ...
Anyone have a clue?
Best,
Technical Art Director @ Funcom Montreal
Sneaky name:
lModel.PropertyList.Find("RotationActive").Data = True
Stev
Hey thanks!
It was the only i didn’t try :) | http://area.autodesk.com/forum/autodesk-motionbuilder/python/urgent-pre-rotation-exposed | crawl-003 | en | refinedweb |
morphological dilation with circular element More...
#include <vil/vil_image_resource.h>
Go to the source code of this file.
morphological dilation with circular element
Dilation is a morphological operation that replaces a pixel with the maximum value of its surrounding pixels, in a certain neighbourhood. Here, the neighbourhood is circular, with an arbitrary (float) radius, which is to be be passed to the constructor.
Note that the function max(DataIn,DataIn) is being used; for non-scalar data types (like colour pixels) an appropriate max() function must thus be supplied.
Note also the implicit use of DataOut::DataOut(DataIn), which you probably will have to provide when DataIn and DataOut are not the same type. It could even be argued that these types should always be the same!
Modifications 12/97 updated by Tboult to use new codegen form and have valid (public agreed) ctor and to use preop and postop to define/destroy the mask. Peter Vanroose - 20 aug 2003 - changed parameter and return types from vil_image_view_base_sptr to vil_image_resource_sptr
Definition in file vepl2_dilate_disk.h.
morphological dilation with circular element.
Definition at line 10 of file vepl2_dilate_disk.cxx. | http://public.kitware.com/vxl/doc/release/contrib/tbl/vepl2/html/vepl2__dilate__disk_8h.html | crawl-003 | en | refinedweb |
Waitress
--------
Waitress.
Usage
-----
Here's normal usage of the server:
.. code-block:: python
from waitress import serve
serve(wsgiapp, listen='*:8080')
This will run waitress on port 8080 on all available IP addresses, both IPv4
and IPv6.
.. code-block:: python
from waitress import serve
serve(wsgiapp, host='0.0.0.0', port=8080)
This will run waitress on port 8080 on all available IPv4 addresses.
If you want to serve your application on all IP addresses, on port 8080, you
can omit the ``host`` and ``port`` arguments and just call ``serve`` with the
WSGI app as a single argument:
.. code-block:: python
from waitress import serve
serve(wsgiapp)
Press Ctrl-C (or Ctrl-Break on Windows) to exit the server.
The default is to bind to any IPv4 address on port 8080:
.. code-block:: python
from waitress import serve
serve(wsgiapp)
If you want to serve your application through a UNIX domain socket (to serve
a downstream HTTP server/proxy, e.g. nginx, lighttpd, etc.), call ``serve``
with the ``unix_socket`` argument:
.. code-block:: python
from waitress import serve
serve(wsgiapp, unix_socket='/path/to/unix.sock')
Needless to say, this configuration won't work on Windows.
Exceptions generated by your application will be shown on the console by
default. See :ref:`logging` to change this.
There's an entry point for :term:`PasteDeploy` (``egg:waitress#main``) that
lets you use Waitress's WSGI gateway from a configuration file, e.g.:
.. code-block:: ini
[server:main]
use = egg:waitress#main
listen = 127.0.0.1:8080
Using ``host`` and ``port`` is also supported:
.. code-block:: ini
[server:main]
host = 127.0.0.1
port = 8080
The :term:`PasteDeploy` syntax for UNIX domain sockets is analagous:
.. code-block:: ini
[server:main]
use = egg:waitress#main
unix_socket = /path/to/unix.sock
You can find more settings to tweak (arguments to ``waitress.serve`` or
equivalent settings in PasteDeploy) in :ref:`arguments`.
Additionally, there is a command line runner called ``waitress-serve``, which
can be used in development and in situations where the likes of
:term:`PasteDeploy` is not necessary:
.. code-block:: bash
# :ref:`runner`.
.. _logging:
Logging
-------
`:
.. code-block:: python
import logging
logger = logging.getLogger('waitress')
logger.setLevel(logging.INFO)
Within a PasteDeploy configuration file, you can use the normal Python
``logging`` module ``.ini`` file format to change similar Waitress logging
options. For example:
.. code-block:: ini
[logger_waitress]
level = INFO
Using Behind a Reverse Proxy
----------------------------:
1. You can pass a ``url_scheme`` configuration variable to the
``waitress.serve`` function.
2. You can configure the proxy reverse server to pass a header,
``X_FORWARDED_PROTO``, whose value will be set for that request as
the ``wsgi.url_scheme`` environment value. Note that you must also
conigure ``waitress.serve`` by passing the IP address of that proxy
as its ``trusted_proxy``.
3. You can use Paste's ``PrefixMiddleware`` in conjunction with
configuration settings on the reverse proxy server.
Using ``url_scheme`` to set ``wsgi.url_scheme``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can have the Waitress server use the ``https`` url scheme by default.:
.. code-block:: python
from waitress import serve
serve(wsgiapp, listen='0.0.0.0:8080', url_scheme='https')
This works if all URLs generated by your application should use the ``https``
scheme.
Passing the ``X_FORWARDED_PROTO`` header to set `.
Using ``url_prefix`` to influence ``SCRIPT_NAME`` and ``PATH_INFO``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can have the Waitress server use a particular url prefix by default for all
URLs generated by downstream applications that take ``SCRIPT_NAME`` into
account.:
.. code-block:: python.
Using Paste's ``PrefixMiddleware`` to set ``wsgi.url_scheme``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~:
.. code-block:: python :term:`PasteDeploy` configuration file too, if your web framework uses
PasteDeploy-style configuration:
.. code-block:: ini
[app:myapp]
use = egg:mypackage#myapp
[filter:paste_prefix]
use = egg:PasteDeploy#prefix
[pipeline:main]
pipeline =
paste_prefix
myapp
[server:main]
use = egg:waitress#main
listen = 127.0.0.1:8080
Note that you can also set ``PATH_INFO`` and ``SCRIPT_NAME`` using
PrefixMiddleware too (its original purpose, really) instead of using Waitress'
``url_prefix`` adjustment. See the PasteDeploy docs for more information.
Extended Documentation
----------------------
.. toctree::
:maxdepth: 1
design.rst
differences.rst
api.rst
arguments.rst
filewrapper.rst
runner.rst
glossary.rst
Change History
--------------
.. include:: ../CHANGES.txt
.. include:: ../HISTORY.txt
Known Issues
------------
- Does not support SSL natively.
Support and Development
-----------------------
The `Pylons Project web site | http://docs.pylonsproject.org/projects/waitress/en/latest/_sources/index.txt | CC-MAIN-2017-09 | en | refinedweb |
Red Hat Bugzilla – Bug 210705
kickstart.py throws exception in wrong BIOS disk condition (weekly build 10/12/06)
Last modified: 2007-11-30 17:07:35 EST
Description of problem:
anaconda throws exception when it actually COULD find a BIOS disk.
The method "doPartition" in kickstart.py contains the followings:
def doPartition(self, id, args):
KickstartHandlers.doPartition(self, args)
pd = self.ksdata.partitions[-1]
uniqueID = None
if pd.onbiosdisk != "":
pd.disk = isys.doGetBiosDisk(pd.onbiosdisk)
if pd.disk != "":
raise KickstartValueError, formatErrorMsg(self.lineno,
msg="Specified BIOS disk %s cannot be determined" % pd.disk)
Version-Release number of selected component (if applicable):
anaconda-11.1.0.108-1 ( weekly build 10/12/06 )
How reproducible:
Always with BIOS disk option in ks.cfg.
Steps to Reproduce:
1.Use ks.cfg with BIOS disk option
2.
3.
Actual results:
The following error messages are printed:
(F6) Error Parsing Kickstart Config
The following error was found while parsing your kickstart
configuration:
The following problem occurred on line 35 of the kickstart file:
Specified BIOS disk sda cannot be determined
(Button)Reboot
Expected results:
Install should go on without any error
Additional info:
This bug is blocking Dell's internal test efforts. Can we get the fix at least
in the next weekly build and preferably a test rpm even before that to verify
that the issue has been fixed?
--onbiosdisk takes a number corresponding to a BIOS disk like 80, 82, etc.
Perhaps pykickstart should be modified to detect this.
Why is it NOTABUG?
Because you're giving --onbiosdisk the wrong information (the string "sda"
apparently, instead of an integer), and it is rightly displaying an error
message to tell you so. I will modify the error checking for --onbiosdisk to
expect an integer instead of a string, however, to make it more clear what the
problem is.
Not sure where you got the info about the content of the ks.cfg. But below
is what ks.cfg contained...
part /boot --fstype ext3 --size=100 --onbiosdisk=80 --asprimary
part pv.23225 --size=0 --grow --onbiosdisk=80
Note, onbiosdisk has 80.
The issue here is below,
pd.disk = isys.doGetBiosDisk(pd.onbiosdisk)
if pd.disk != "":
pd.disk here should contain string such as "sda" and you are raising
exception when you have a non NULL string...
I was getting the info about ks.cfg from the error message you pasted into the
bug report, since I didn't see any piece from your kickstart config posted. I
was trusting the error message to be correct, but now that I see what you're
doing I know that's not the case. I'll commit this for the next build of
anaconda. Thanks.
Is this fix confirmed for beta2?
Yes, it looks like the fix made it into the beta 2 tree.
Fix verified. Please close. Thanks! | https://bugzilla.redhat.com/show_bug.cgi?id=210705 | CC-MAIN-2017-09 | en | refinedweb |
I've tried lots of solution that posted on the net, they don't work.
>>> import _imaging
>>> _imaging.__file__
'C:\\python26\\lib\\site-packages\\PIL\\_imaging.pyd'
>>>
from PIL import Image, ImageDraw, ImageFilter, ImageFont
im = Image.new('RGB', (300,300), 'white')
draw = ImageDraw.Draw(im)
font = ImageFont.truetype('arial.ttf', 14)
draw.text((100,100), 'test text', font = font)
ImportError: The _imagingft C module is not installed
File "D:\Python26\Lib\site-packages\PIL\ImageFont.py", line 34, in __getattr__
raise ImportError("The _imagingft C module is not installed")
Your installed PIL was compiled without libfreetype.
You can get precompiled installer of PIL (compiled with libfreetype) here (and many other precompiled Python C Modules): | https://codedump.io/share/fgTgm6AbowxN/1/python-the-imagingft-c-module-is-not-installed | CC-MAIN-2017-09 | en | refinedweb |
For those of you interested in this double use of annotations, you will find an example in the code download. But just to give you a taste, here is what the Address class would look like:
@Entity
@Table(name = "t_address")
@XmlType(propOrder = {"street", "zipcode", "city", "country"})
@XmlAccessorType(XmlAccessType.FIELD)
public class Address {
@XmlTransient
@Id @GeneratedValue
private Long id;
private String street;
@Column(length = 100)
private String city;
@Column(name = "zip_code", length = 10)
@XmlElement(name = "zip")
private String zipcode;
@Column(length = 50)
private String country;
@XmlTransient
@ManyToMany(cascade = CascadeType.PERSIST)
@JoinTable(name = "t_address_tag",
joinColumns = {@JoinColumn(name = "address_fk")},
inverseJoinColumns = {@JoinColumn(name = "tag_fk")})
private List<Tag> tags = new ArrayList<Tag>();
// Constructors, getters, setters
}
Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled.
Your name/nickname
Your email
WebSite
Subject
(Maximum characters: 1200). You have 1200 characters left. | http://www.devx.com/Java/Article/34069/0/page/5 | CC-MAIN-2017-09 | en | refinedweb |
In the financial industry, Black Scholes is one of the methods used to valuate options. This one is faster than Binomial Options. In this post, I will walk you through a C++ AMP implementation of Black Scholes.
main – Program entry point
Let’s start with main(), where an instance of the blackscholes class is created, an option price is computed, and it is then verified against results of a CPU implementation. The “blackscholes” class implements both the C++ AMP kernel and the CPU implementation. The constructor generates random data and initializes parameters.
blackscholes::execute
This is the function where the C++ AMP kernel is used for computation. To start with, the input data is stored in a concurrency::array. The computation parameters are captured in separate array and passed to the kernel. This kernel schedules tiles of size BSCHOLES_TILE_SIZE and there is one thread per option price computation. The calculated options are stored in a separate output arrays. This implementation takes advantage of many cores on GPU to run threads in parallel. After the computation completes, the results are copied out to host memory for verification.
blackscholes::verify
This function validates the results computed on GPU. The same input data is used to calculate results on the CPU (which is implemented in function “blackscholes::blackscholes_CPU”), then the results of CPU and GPU are compared using “blackscholes::sequence_equal”.
blackscholes::cnd_calc
This is an interesting function to mention because of its restrict(amp, cpu) modifier. This function can be called from a CPU function as well as from a GPU kernel. In this sample this function is called from “blackscholes::blackscholes_CPU“ as well as kernel in “blackscholes::execute”.
Download the sample
Please download the attached sample of the Black Scholes that we discussed here and run it on your hardware, and try to understand what the code does and to learn from it. You will need, as always, Visual Studio 11.
The CND function is a typical example where MAD (multiply and add) would be useful, but I don't see fast_math having such a function. Does the compiler optimize ordinary math, or is this instruction ignored?
Hi Dmitri,
(1) Compiler is allowed to optimize multiplication + addition using the mad instruction if you use /fp:fast (However, this is no guarantee). If you use /fp:precise or /fp:strict (which are mapped to the /Gis switch of the HLSL compiler), it will not do that.
(2) User has control to explicitly requiring "mad". In the concurrency::direct3d namespace, we offer a "mad" function (blogs.msdn.com/…/direct3d-namespace-and-hlsl-intrinsics-in-c-amp.aspx) that will be guaranteed to map to HLSL's intrinsic mad. Please read the "Remarks" section of msdn.microsoft.com/…/ff471418(v=vs.85).aspx, to see what HLSL mad offers.
(3) In the precise_math namespace (blogs.msdn.com/…/math-library-for-c-amp.aspx), we offer:
float fmaf(float x, float y, float z);
float fma(float x, float y, float z);
double fma(double x, double y, double z);
It returns (x * y) + z, rounded as one ternary operation. In our current implementation, the double version is implemented by mapping to HLSL fma intrinsic directly (msdn.microsoft.com/…/hh768893(v=vs.85).aspx). There is no float version of fma in HLSL, so it's up to us on the implementation, as long as the round-off error is small enough.
Thanks,
Weirong
The example code does not have a license. Could you clarify how it may be used?
Hi dar,
This sample is released with Apache 2.0 License. The code zip file has been updated with license headers in the source.
Thanks,
Lingli | https://blogs.msdn.microsoft.com/nativeconcurrency/2012/03/16/black-scholes-using-c-amp/ | CC-MAIN-2017-09 | en | refinedweb |
Created 7 October 2009
This was a presentation I gave at the DevDays Boston conference in
2009. I don't have much text to go with it, but these are the slides.
If you need them larger, there is a zip file of .pngs.
Joel's suggestion was to explain Peter Norvig's Spell Corrector
line-by-line. I've made a few small edits for expository purposes.
Here's the code:
import re, collectionsdef words(text): return re.findall('[a-z]+', text.lower())
def train(words): model = collections.defaultdict(int) for w in words: model[w] += 1 return modelN))
Here's the completed Nango code:
import reclass
2009,
Ned Batchelder | http://nedbatchelder.com/text/devdays.html | CC-MAIN-2017-09 | en | refinedweb |
Following on from Part 1 , we will further enhance our script to trace incoming connections through the kernel to the application.
Before doing that we will tweak the script to provide a CSV output. We do this using a BEGIN probe to print the header and then update the probes. We will also split the probes. Where there is a match, multiple probes are triggered in the order listed in the script. We can therefore say we are only interested in a specific port and then do the heavy lifting later on.
It is also important that you clear variables once you finish with them so as not to fill up the kernel buffers (you will loose traces if you don't; given enough probes firing).
#!/usr/sbin/dtrace -Cs
#pragma D option quiet
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/ip.h>
BEGIN
{
printf("TS,Date,Action,Source IP,Source Port,Dest Port,Q0,Q,Qmax\n");
}
fbt:ip:tcp_conn_request:entry
{
self->tcpq = *(tcp_t**)((char*)arg0+0x28);
self->lport = ntohs(*(uint16_t*)((char*)self->tcpq->tcp_connp+0x10a));
}
fbt:ip:tcp_conn_request:entry
/self->lport == $1/
{
this->srcAddr = 0;
this->srcPort = 0;
printf("%d,%Y,%s,0x%08x,%d,%d,%d,%d,%d\n",
timestamp, walltimestamp, "syn",
this->srcAddr, this->srcPort,
self->lport,
self->tcpq->tcp_conn_req_cnt_q0,
self->tcpq->tcp_conn_req_cnt_q,
self->tcpq->tcp_conn_req_max
);
}
fbt:ip:tcp_conn_request:return
/self->lport/
{
self->tcpq = 0;
self->lport = 0;
}
Whilst in tcp_conn_request() we don't actually have a connection from the client; the kernel is about to build one.
It is the message in the call that is actually the incoming packet to process. IP has dealt with it; now it is TCP's turn. Remember that the tcpq we have used so far is the listener's connection (the one that is in the listen state), not the eager's (new connection); so looking at that is not going to be productive.
We could probe another function or we could decode the message; tcp_accept_comm() is called by tcp_conn_request() to accept the connection providing a few things are satisfied such as Qmax not been reached. If we are only interested in connections that aren't dropped due to, e.g. Q==Qmax, then this would be a good place. However, if we also want to capture failed connections, we need to do the processing ourselves.
So, let's decode the message :D
Looking at the source for tcp_conn_request() we can see that arg1 ( mp , the mblk_t ) is the message. Not surprising as this is a STREAMS module. Further down we can see where the data is and the format of the data. i.e. we have a pointer to the IP header here.
ipha = (struct ip *)mp->b_rptr;
As mblk_t is a standard structure we can easily map this within our DTrace script. We can therefore change the setting of srcAddr to the following:
this->mblk = (mblk_t *)arg1;
this->iphdr = (struct ip*)this->mblk->b_rptr;
this->srcAddr = this->iphdr->ip_src.s_addr;
If we then run the script and connect to the server running on 172.16.170.133 from client 172.16.170.1 we get the following:
root@sol10-u9-t4# ./tcpq_ex7.d 22
TS,Date,Action,Source IP,Source Port,Dest Port,Q0,Q,Qmax
180548302609318,2016 Aug 11 14:37:02,syn, 0x85aa10ac ,0,22,0,0,8
As you can see, the hex address is that of the destination and not the source. Given that an IP header is a standard format (defined in rfc791 ) this is a bit surprising. So, what is going on?
A Slight Detour
The IP header is defined in /usr/include/netinet/ip.h . It is as expected. Whilst I won't go into all the steps to find the cause, here is the quick version.
The DTrace compiler uses the C pre-processor, but compiles it's own code for the kernel to use. DTrace should be able to import C headers to allow you to use names, etc, but clearly something has gone wrong.
One test I did is to create my own header using the constituent parts and a key tweak; one struct doesn't use fields (bitmasks) but both are the same size.
#ifndef MYIP_H
#define MYIP_H
#include <sys/types.h>
typedef struct {
uchar_t c1;
uchar_t c2,c3,c4;
ushort_t s1,s2,s3,s4;
uint32_t src,dst;
} myip_t;
typedef struct {
uchar_t c1_1:4,c1_2:4;
uchar_t c2,c3,c4;
ushort_t s1,s2,s3,s4;
uint32_t src,dst;
} myip2_t;
#endif
Then, if we run this through a simple C program that just prints the sizes all looks good:
#include <stdio.h>
#include "myip.h"
int main(int argc, char *argv[]) {
printf("myip_t : %u\n", sizeof(myip_t));
printf("myip2_t: %u\n", sizeof(myip2_t));
return 0;
}
Which gives the following output:
root@sol10-u9-t4# ./a.out
myip_t : 20
myip2_t: 20
Then within the BEGIN probe of the DTrace we do the same (not forgetting to include the header):
printf("myip_t : %d\n", sizeof(myip_t));
printf("myip2_t: %d\n", sizeof(myip2_t));
Which gives the following output:
root@sol10-u9-t4# ./tcpq_ex8.d -I `pwd` 22
TS,Date,Action,Source IP,Source Port,Dest Port,Q0,Q,Qmax
myip_t : 20
myip2_t: 24
This is the problem. A few more tests show that the DTrace compiler cannot handle fields (bitmasks) in the C structs correctly, at least in the situations I've tested (me'thinks a bug). I will leave it as an exercise for the reader to analyse this further.
Back to the Task at Hand
As the IP header is well-defined we can just define the offset without further analysis.
this->srcAddr = *(uint32_t*)((char*)(this->iphdr)+ 0xc );
This yields the correct IP address:
root@sol10-u9-t4# ./tcpq_ex9.d 22
TS,Date,Action,Source IP,Source Port,Dest Port,Q0,Q,Qmax
181523558976884,2016 Aug 11 14:53:17,syn, 0x01aa10ac ,0,22,0,0,8
As the header length of the IP header and the TCP header has the bitmask after the port, we should be able to extract the port quite easily (setting srcPort to the result):
this->ihl = 4 * ((int)*((char*)(this->iphdr)) & 0xf);
this->tcphdr = (struct tcphdr*)((char*)(this->iphdr)+(this->ihl));
this->srcPort = ntohs(this->tcphdr->th_sport);
This yields the following output:
root@sol10-u9-t4# ./tcpq_ex10.d 22
TS,Date,Action,Source IP,Source Port,Dest Port,Q0,Q,Qmax
181903328660708,2016 Aug 11 14:59:37,syn,0x01aa10ac, 40950 ,22,0,0,8
Which is confirmed via netstat:
root@sol10-u9-t4# netstat -an | fgrep 40950
172.16.170.133.22 172.16.170.1. 40950 29312 0 49232 0 ESTABLISHED
Completion of the handshake
As you can guess, there are various locations that we can probe the kernel in order to mark events. In the case of the completion of the three-way handshake, I'm going to go for tcp_send_conn_ind() in usr/src/uts/common/inet/tcp/tcp_tpi.c . The key function is actual tcp_rput_data() in usr/src/stand/lib/tcp/tcp.c where the state is changed to EST.
Anyway, in tcp_send_conn_ind() arg0 is the listener and arg1 is a mblk_t containing a pointer to a struct T_conn_ind . However, as we have already processed the original syn packet, we also have an eager tcp_t . We can extract that, and use the offsets we know to get the address and port of the peer.
We can combine probes where there is common processing. In this particular kernel arg0 is the same.
fbt:ip:tcp_conn_request:entry,
fbt:ip:tcp_send_conn_ind:entry
{
self->tcpq = *(tcp_t**)((char*)arg0+0x28);
self->lport = htons(*(uint16_t*)((char*)self->tcpq->tcp_connp+0x10a));
}
We can now predicate all the other probes within the kernel thread to where the port maps what we are after. This includes a simple way to label the action. We can also change the variable context from 'this' (lexical scope) to 'self' (thread scope) to pass the parameters that are in different probes but the same thread.
First, tcp_conn_request() , which should be familiar:
fbt:ip:tcp_conn_request:entry
/self->lport == $1/
{
self->action = "syn";
this->mblk = (mblk_t *)arg1;
this->iphdr = (struct ip*)this->mblk->b_rptr;
this->ihl = 4 * ((int)*((char*)(this->iphdr)) & 0xf);
this->tcphdr = (struct tcphdr*)((char*)(this->iphdr)+(this->ihl));
self->srcAddr = *(uint32_t*)((char*)(this->iphdr)+0xc);
self->srcPort = ntohs(this->tcphdr->th_sport);
}
Then tcp_send_conn_ind() :
fbt:ip:tcp_send_conn_ind:entry
/self->lport == $1/
{
self->action = "syn-aa";
this->mblk = (mblk_t *)arg1;
this->tconnind = (struct T_conn_ind*)(this->mblk->b_rptr);
this->tcpp = *(tcp_t**)(((char*)this->tconnind)+(this->tconnind->OPT_offset));
self->srcPort = ntohs(*(uint16_t*)((char *)this->tcpp->tcp_connp+0x108));
self->srcAddr = *(uint32_t*)((char*)this->tcpp->tcp_connp+0x104);
}
The printout then is common to both, since all of it is data driven from the previous stages:
fbt:ip:tcp_conn_request:entry,
fbt:ip:tcp_send_conn_ind:entry
/self->lport == $1/
{
printf("%d,%Y,%s,0x%08x,%d,%d,%d,%d,%d\n",
timestamp, walltimestamp,
self->action,
self->srcAddr, self->srcPort,
self->lport,
self->tcpq->tcp_conn_req_cnt_q0,
self->tcpq->tcp_conn_req_cnt_q,
self->tcpq->tcp_conn_req_max
);
}
Finally we clear the buffers to tidy up:
fbt:ip:tcp_conn_request:return,
fbt:ip:tcp_send_conn_ind:return
/self->lport/
{
self->tcpq = 0;
self->lport = 0;
self->action = 0;
self->srcAddr = 0;
self->secPort = 0;
}
If we run this on a connection we can see the two parts. Notice the value of Q0. On entry it is one as we have an embryonic connection which is about to transition to Q.
root@sol10-u9-t4# ./tcpq_ex11.d 22
TS,Date,Action,Source IP,Source Port,Dest Port,Q0,Q,Qmax
185255847646301,2016 Aug 11 15:55:29,syn,0x01aa10ac,41010,22,0,0,8
185255847845639,2016 Aug 11 15:55:29,syn-aa,0x01aa10ac,41010,22, 1 ,0,8
If we fill up Q so it equals Qmax (using my test script from a previous blog) and retry, accepting a connection on the queue part way through to free the kernel backlog we get this, clearly showing the remote servers tcp retries.
root@sol10-u9-t4# ./tcpq_ex11.d 2000
TS,Date,Action,Source IP,Source Port,Dest Port,Q0,Q,Qmax
185458485779874 ,2016 Aug 11 15:58:52,syn,0x01aa10ac,50066,2000,0,2,2
185459488194513,2016 Aug 11 15:58:53,syn,0x01aa10ac,50066,2000,0,2,2
185461492294103,2016 Aug 11 15:58:55,syn,0x01aa10ac,50066,2000,0,2,2
185465496397317,2016 Aug 11 15:58:59,syn,0x01aa10ac,50066,2000,0,1,2
185465496517076 ,2016 Aug 11 15:58:59,syn-aa,0x01aa10ac,50066,2000,1,1,2
As you can see, the max'ed out queue resulted in the connection taking 185465496517076 minus 185458485779874 = 7010737202 ns (or 7.01 seconds) to establish the connection.
In the next part we will look at extending this into the application, using both syscall tracing and userspace tracing.
As always, please can you provide feedback so I can improve the blog. | http://126kr.com/article/491441o7p0u | CC-MAIN-2017-09 | en | refinedweb |
Scale-out file sync and share Deploying owncloud and Red Hat Storage Server on
- Liliana James
- 1 years ago
- Views:
Transcription
1 Technology detail Scale-out file sync and share Deploying owncloud and Red Hat Storage Server on HP ProLiant SL4540 Servers Abstract Scale to 42,000 users on a single HP ProLiant SL4540 Gen8 two-node server, with linear scalability. Consolidate compute, storage, and database layers for cost and operational efficiencies without compromising performance. Secure enterprise files in a highly-available, on-premise solution.. Table of contents 2 Introduction 2 Architectural overview and results owncloud: Universal access to files from any device at any time Red Hat Storage Server HP ProLiant SL4540 Gen8 Two-Node server 10 Performance testing environment Server, operating system, and storage configuration Testing process 13 Solution implementation guidance Configuring the load generator and load balancer Installing and configuring Red Hat Storage Server Installing and configuring the application layer Configuring database server nodes Additional tuning and optimization 16 Conclusion 17 References linkedin.com/company/red-hat
2 The combination of owncloud and Red Hat Storage Server running on HP ProLiant SL4540 Gen8 servers presents a unique on-premise sync and share solution. Introduction Enterprise mobility, collaboration, and anytime/anywhere access to files and data is creating a need for secure file sync and share capabilities. Cloud-based solutions can be compelling to fulfill this need; and mobile users are often drawn to these options when solutions are not available from their IT departments. Unfortunately, the use of some cloud-based solutions can represent a sort of shadow IT, and skirting corporate data control and security policies can have consequences in the event of data loss or security breaches. To address these challenges while lowering risk, owncloud, Red Hat, and HP have worked together to provide a cost-effective and scalable file sync and share solution that is deployed on-premise, and managed by the enterprise. With owncloud and Red Hat Storage Server, organizations can scale their private file sync and share solutions similar to a public cloud growing to meet the most strenuous enterprise demands. Further, by deploying owncloud and Red Hat Storage Server converged on compact and high-performance HP ProLiant SL4540 Gen8 two-node servers, enterprises can dramatically reduce the total cost of ownership (TCO) of a file sync and share solution through lower hardware, licensing, and maintenance costs. Cost advantages come from the ability to collapse multiple datacenter layers by consolidating application, storage, and even database tiers on capable two-node HP ProLiant SL4540 Gen8 servers clustered with RHS. This approach has distinct advantages in terms of cost, performance, and scalability, including the following: Converging compute, storage, and database tiers onto a smaller number of systems simplifies deployment, and speeds communication between solution elements. Testing has demonstrated significant performance with up to 42,000 owncloud users supported on a single HP ProLiant SL4540 Gen8 two-node server clustered with Red Hat Storage Server. Additional server resources can be added to the solution for greater scalability; and the Red Hat, owncloud, and HP solution exhibits multi-dimensional scalability at the application, storage, and database layers. The Red Hat, owncloud, and HP solution also provides a lower cost per terabyte (TB) than competing approaches, with up to a 52% reduction in up-front storage costs and a 20% reduction in operating costs. 1 This document provides a reference architecture along with tested configurations and results. Implementation guidance is also provided for building a similar system. Architectural overview and results A traditional approach to applications such as owncloud is to deploy a separate set of servers to accommodate the various tiers of the application (Figure 1). In this approach, the stateless application tier is positioned in front of replicated storage, while the database tier sits behind the application layer. 1 IDC, The Economics of Software-based Storage, 2
3 Having separate hardware tiers is virtually ensured when standard, standalone network-attached storage (NAS) devices are used, as they don t allow co-location of software on the NAS device itself. Unfortunately, this approach creates redundant infrastructure layers for each part of the application stack; and the large number of deployed hardware platforms increases complexity along with maintenance and licensing costs. Client 1 Client n App tier: owncloud, PHP, Apache Storage tier: NAS devices Database tier: MySQL Figure 1. Traditional NAS-based approaches to tiered infrastructure often result in highly redundant hardware configurations that raise complexity and cost. In contrast, Red Hat Storage Server can be deployed on industry-standard storage servers, offering far greater flexibility and integration potential, and a consolidated architecture that can collapse the required software layers onto fewer physical servers. This approach is greatly enhanced when combined with powerful HP ProLiant SL4540 Gen8 twonode storage servers. With each server providing considerable compute power and supporting up to 25 disk drives per compute node (50 disk drives per server), a complete, powerful, and highly available on-premise owncloud infrastructure can be delivered in a mere four rack units (RU). By creating redundant application tiers with fewer physical systems, organizations can save on licensing, hardware, and maintenance costs. Because owncloud operates as a stateless entity accessing common storage and database instances, no clustering tasks need to be completed. High availability is achieved by simply placing the own- Cloud application servers behind a load balancer. In testing performed by owncloud and Red Hat, both a distributed configuration and a consolidated configuration were evaluated to understand performance and scalability. In the distributed configuration: owncloud and Red Hat Storage Server were installed together (converged) on a two-node HP ProLiant SL4540 Gen8 server, providing two duplicate application and storage server nodes (Figure 2). A MySQL Network Database (NDB) cluster was installed on two separate HP ProLiant DL380 G5 servers, storing user and file metadata. 3
4 owncloud is an enterprise-grade file sync and share solution that is hosted in your datacenter, on your servers, using your storage. For testing, an HP ProLiant DL380 G5 server was used as a load balancer with a second server used as a load generator. Scripts were executed on the load generator to simulate client activity. When tested, the load generator saturated the distributed configuration with 42,000 active users, simulating steady-state owncloud in operation. owncloud integrates seamlessly into your IT infrastructure. You can leave data where it lives and still deliver file sharing services that meet your data security and compliance policies. Client load generator All network connections are 10 Gigabit Ethernet Load balancer App/storage node 1 MySQL NDB cluster App/storage node 2 HP ProLiant SL4540 Gen8 two-node server HP ProLiant DL380 G5 Intel Xeon X5460, 3.16 GHz 2x 8192 MB 8x 146 GB RAID 1+0 Red Hat Enterprise Linux 6 HP ProLiant DL380 G5 Intel Xeon X5460, 3.16 GHz 2x 8192 MB 8x 146GB RAID 1+0 Red Hat Enterprise Linux 6 HAProxy HP ProLiant SL4540 Gen8 server (each node) 2x Intel Xeon E5-2407, 2.2 GHz 6x 8192MB HP SmartArray P x 2 TB dual port Apache, owncloud, Red Hat Storage Server HP ProLiant DL380 G5 servers Intel Xeon X5460, 3.16 GHz 2x 8192MB 8x 146GB RAID 1+0 Red Hat Enterprise Linux 6 MySQL NDB Figure 2. The distributed configuration provided separate HP ProLiant DL380 G5 servers running a MySQL NDB cluster, with app and storage server functionality on the HP ProLiant SL4540 G8 two-node server. In the consolidated configuration: The MySQL NDB cluster was consolidated on the same HP ProLiant SL4540 Gen8 two-node server that housed the owncloud software and Red Hat Storage Server (Figure 3). Each of the compute nodes on the HP ProLiant SL4540 Gen8 server included Apache, PHP, the owncloud application code, a Red Hat Storage Server node, and a MySQL NDB cluster node. The load generator and load balancer remained, distributing client requests via a roundrobin algorithm. Testing indicated an estimated 40,000 active users could be supported in the consolidated architecture in steady-state operation. 4
5 owncloud provides: Enterprise file sync and share software, on-premise, managed by IT teams. Anytime, anywhere access to files from any device. File access auditability and control. Integration with existing IT infrastructures. Enterprise extensibility with open, flexible APIs. Management to existing data security and compliance policies. Client load generator All network connections are 10 Gigabit Ethernet HP ProLiant DL380 G5 Intel Xeon X5460, 3.16 GHz 2x 8192 MB 8x 146 GB RAID 1+0 Red Hat Enterprise Linux 6 Load balancer HP ProLiant DL380 G5 Intel Xeon X5460, 3.16 GHz 2x 8192 MB 8x 146GB RAID 1+0 Red Hat Enterprise Linux 6 HAProxy App/storage/db node 1 App/storage/db node 2 HP ProLiant SL4540 Gen8 two-node server HP ProLiant SL4540 Gen8 server (each node) 2x Intel Xeon E5-2407, 2.2 GHz 6x 8192MB HP SmartArray P x 2 TB dual port Apache, owncloud, Red Hat Storage Server, MySQL NDB Figure 3. The consolidated configuration deployed app, storage server, and database functionality entirely on the HP ProLiant SL4540 Gen8 two-node server, collapsing multiple layers of infrastructure. Both tested configurations were able to sustain substantial simulated loads. Importantly, the solution architecture displays a modular and multi-dimensional scalability that can let organizations adapt to changing requirements. The architecture is also inherently highly available because: With owncloud, the application layer supports dynamic expansion through additional servers, with high availability provided by stateless owncloud servers behind a load balancer. With Red Hat Storage Server, the storage layer can dynamically expand to meet storage needs, is able to scale based on back-end hardware, and provides high availability through IP fail-over. With MySQL NDB clustering, the database layer can also be scaled through additional SQL/ storage nodes, with high availability provided behind the load balancer. owncloud: Universal access to files from any device at any time owncloud is an enterprise-grade file sync and share solution that, while as easy to use as consumergrade products, is hosted in the datacenter using on-premise servers and storage. owncloud integrates seamlessly into existing IT infrastructures, allowing delivery of file sync and share services, data security and compliance policies, tools, and procedures. owncloud provides: Clean, professional user interfaces. Universal access to files through a web interface, native mobile apps, and a desktop sync client, as well as via a standard WebDAV API. A platform to connect to existing authentication mechanisms, including LDAP-, AD-, and SAMLbased environments. Enhanced logging of user activity, embedded virus scanning, at-rest encryption, and an application level file firewall. 5
6 owncloud installation has minimal server requirements, doesn t need special permissions, and is quick to perform. owncloud is extendable via a simple but powerful API for server side applications plugins, as well as external APIs and mobile libraries to enable integration for more complex use cases and integrations. owncloud delivers several key file sync and share capabilities to the business that: Empower users. Consumer-based file sync and share solutions are in the enterprise for one reason: Employees, partners, and customers need simple, easy access to the right files at the right time. owncloud empowers users with mobile apps, desktop sync clients, and a web interface for accessing files so they can get the job done efficiently all while keeping IT teams in control of sensitive corporate information. Leave files where they are. Years of investments in existing file stores result in multiple locations across the enterprise where data is stored. owncloud can provide consolidated access to these storage locations through a single logical access point, providing users the ability to access their data from any system at any time, wherever they are all based on policies set and managed by the IT team. Provide IT control. Existing IT environments are not greenfield implementations, they are typically complex, heterogeneous environments built over years to be compliant with company policy and relevant industry regulations. Rather than bypassing all of the existing infrastructure in favor of a public cloud, owncloud enables customers to use past investments to properly control, monitor, and audit file sync and share in the context of their businesses. Scale with users. File sync and share solution demand grows exponentially in the enterprise as users add more files, and share files and file versions. owncloud is built on a standard n-tier application architecture, enabling simple and rapid scaling of storage and user count as demand grows. This approach helps ensure fast access to corporate files in the most demanding environments. Sync data. owncloud lets users keep files and data synchronized across all of their devices. Users can access the most recent version of their files with the desktop and web clients, or use the mobile app of their choosing, at any time. Share data. owncloud enables users to share their files with other system users. Sharing can be done to IT policy both publicly and privately with automated expirations, password protection, and granular access permissions. In addition, owncloud provides many of the standard file sync and share features that consumers have come to expect, including mobile sharing, user avatars, file previews, conflict handling, activity streams, an intuitive UI, and the ability to undelete files that were accidentally deleted. For more information on owncloud, visit Red Hat Storage Server Red Hat Storage Server is a software-defined, open source solution that is designed to meet unstructured, semi-structured, and big data storage requirements. At the heart of Red Hat Storage Server is a secure, open source, massively-scalable distributed file system that allows organizations to combine large numbers of storage and compute resources into a high-performance, virtualized, and centrally-managed storage pool (Figure 4). 6
7 Red Hat Storage Server s GlusterFS geo-replication provides a continuous, asynchronous, and incremental replication service from one site to another over local area networks (LANs), wide area network (WANs), and across the Internet. Red Hat Storage Server is designed to achieve several major goals, including: Elasticity. Storage volumes are abstracted from the hardware and managed independently. Volumes can grow or shrink by adding or removing systems from the storage pool. Even as volumes change, data remains available without interrupting the application. Petabyte scalability. Today s organizations demand scalability from terabytes to multiple petabytes. Red Hat Storage Server lets organizations start small and grow to support multi-petabyte repositories as needed. Organizations that need substantial amounts of storage can deploy massive scale-out storage from the outset. High performance. Red Hat Storage Server provides fast file access by eliminating the typical centralized metadata server. Files are spread evenly throughout the system, eliminating hot spots, I/O bottlenecks, and high latency. Organizations can use commodity disk drives and 10 Gigabit Ethernet to maximize performance. Reliability and high availability. Red Hat Storage Server provides automatic replication that helps ensure high levels of data protection and resiliency. In addition to protecting from hardware failures, self-healing capabilities restore data to the correct state following recovery. Industry-standard compatibility. For any storage system to be useful, it must support a broad range of file formats. Red Hat Storage Server provides native Portable Operating System Interface (POSIX) compatibility as well as support for common protocols including common Internet file system (CIFS), network file system (NFS), and hypertext transfer protocol (HTTP). The software is readily supported by off-the-shelf storage management software. Unified global namespace. Red Hat Storage Server aggregates disk and memory resources into a single common pool. This flexible approach can simplify management of the storage environment and eliminates data silos. Global namespaces may be grown and shrunk dynamically, without interrupting client access. Red Hat Storage Server volume Brick Brick Brick Brick Node 1 Node 2 HP ProLiant SL4540 Gen8 two-node server Figure 4. With two logical drives (bricks) per server node, the storage resources of the HP ProLiant SL4540 Gen8 two-node server are combined into a centrally-managed pool. 7
8 Red Hat Storage Server provides distinct technical advantages over other technologies, including: Software-defined storage. Storage is a software problem that cannot be solved by locking organizations into a particular storage hardware vendor or a particular hardware configuration. Instead, the solution is designed to work with a wide variety of industry-standard storage, networking, and compute server solutions. Open source. Red Hat Storage Server is based on the GlusterFS project. A worldwide community of developers, customers, and partners test and update GlusterFS in a wide range of environments and workloads, providing continuous and unbiased feedback to other users. This project is certified, secured, and made enterprise-ready in the Red Hat Storage Server distribution. Complete storage operating system stack. The storage product delivers more than just a distributed file system, adding distributed memory management, I/O scheduling, software RAID, selfhealing, local n-way synchronous replication, and asynchronous long-distance replication. User space. Unlike traditional file systems, Red Hat Storage Server operates in user space, rather than kernel space. This makes installing and upgrading Red Hat Storage Server significantly easier, and greatly simplifies development efforts since specialized kernel experience is not required. Modular, stackable architecture. Red Hat Storage Server is designed using a modular and stackable architecture approach. Configuring it for highly-specialized environments is a simple matter of including or excluding particular modules. Data stored in native formats. Data is stored on disk using native formats (XFS) with various selfhealing processes established for data. As a result, the system is extremely resilient and files stay naturally readable, even without the Red Hat Storage Server software. There is no proprietary or closed format used for storing file data. No metadata with the elastic hash algorithm. Unlike other storage systems with a distributed file system, Red Hat Storage Server does not create, store, or use a separate index of metadata in any way. Instead, it places and locates files algorithmically. The performance, availability, and stability advantages of not using metadata are significant, and in some cases produce dramatic improvements. HP ProLiant SL4540 Gen8 Two-Node server The HP ProLiant SL4500 Gen8 server is the industry s first ever purpose-built server for big data. With support for up to 540 drives and over 2 PB per rack, these servers are ideal for building dense scale-out storage infrastructure. At the same time, the server features a converged and balanced architecture that is ideal for a range of workloads, with an efficient chassis that scales easily as data grows. The server is designed with workload optimization in mind, offering scalable performance and throughput demanded by those workloads. The storage solution described in this document is based on the HP ProLiant SL4540 Gen8 twonode server (Figure 5). The two-node chassis supports up to 25 disk drives per node (up to 50 drives per chassis). Importantly, each node within the chassis is electrically isolated enhancing high availability solutions. Power and cooling are shared, reducing both CAPEX and OPEX required for the entire system. HP ProLiant SL4540 Gen8 servers can be configured to order. 8
9 More information on HP ProLiant Servers can be found at Figure 5. The HP ProLiant SL4540 Gen8 two-node server offers 25 disk drives per node. HP ProLiant SL4540 Gen8 servers offer key enterprise features. Cluster management is supported using the HP Insight Cluster Management Utility. Sophisticated power management is provided including Dynamic Power Capping and Asset Management with SL Advanced Power Manager. Ninety-four percent efficient Platinum Plus power supplies offer power discovery. A host of additional enterprise-class features make the HP ProLiant SL4540 Gen8 server ideal for large-scale deployments in both enterprise settings and cloud service provider environments, including: Serviceability. In complex datacenters, scale-out data deployments need each node to be serviceable, and service times need to be quick. With HP ProLiant SL4540 Gen8 servers, compute nodes are serviceable from the front (hot aisle) side of the chassis. No cables need to be disconnected from the rear of the unit to service motherboard components. The I/O module is serviceable from the rear of the chassis and the I/O module and PCI cards can be swapped without having to walk to the front of the unit. No cables cross a service zone, and the hard disk drive (HDD) backplanes are cable-less (and can be rapidly swapped as well). Performance. Matching processor cores to spindles is one aspect of performance that is important for being able to configure workload-specific server infrastructure. Another important aspect is the relationship of network performance to compute or storage performance. HP ProLiant SL4540 Gen8 servers provide a truly balanced architecture, with 40 Gbps of raw bandwidth on the network side and 48 Gbps of raw bandwidth on the Serial-Attahed SCSI (SAS) controller side. Network bandwidth requirements vary, so various I/O options are provided including Gigabit Ethernet, 10 Gigabit Ethernet, and InfiniBand compatibility. HP SmartCache technology. HP SmartCache technology accelerates access times to most frequently-accessed data (hot data). With support provided on the HP SmartArray P420i controller, HP SmartCache technology caches hot data on solid-state devices (SSDs). This approach combines low cost, high capacity spinning media with fast, low-latency SSD devices to dramatically accelerate performance over spinning media alone, at a fraction of the cost of an end-to-end solidstate solution. (Note: Red Hat Storage Server does not support SSDs or HP SmartCache technology as of June 2014). 9
10 Performance testing environment owncloud performance tests demonstrate that it scales best in an n-tier web architecture, with several owncloud app servers and an n-node MySQL database cluster. The sections that follow detail both the hardware and software configuration for the system tested by owncloud and Red Hat. Server, operating system, and storage configuration For testing, an HP ProLiant SL4540 Gen8 two-node server was used to host Red Hat Storage Server. In this configuration, each node within the server controlled 25 disk drives for a total of two Red Hat Storage Server nodes in the storage cluster. The compute nodes were used in the tested architecture as consolidated app servers, storage servers, and database servers depending on the configuration. Each node in the two-node chassis was configured as follows: Two Intel Xeon Processor E at 2.2 GHz. 48GB of Dual Rank x4 PC3L 10600R CAS-9 low-voltage memory. 10GB Ethernet I/O module kit. Twenty-five 2TB dual-port disk 6G SAS 7.2K RPM large form factor (LFF) drives for storage. Two 500GB 6G SATA 7.2K RPM disk drives for the node operating environment. Red Hat Storage Server 2.1. Each node within the HP ProLiant SL4540 Gen8 two-node server was configured with an HP Smart Array B120i controller to manage the two 500 GB drives serving the operating environment. An HP Smart Array P420i controller was used to control the storage drives. Figure 6 illustrates the BIOS configuration of the B120i Smart Array controller. Figure 6. BIOS configuration of the HP Smart Array controller contained in the HP ProLiant SL4540 Gen8 server. 10
11 The HP Array Configuration Utility was then used to configure the 25 drives available on each node within the HP ProLiant SL4540 Gen8 server (Figure 7): Two logical drives (bricks) were created on each node, with each brick consisting of twelve 2 TB SAS drives configured as an 18.2 TB RAID 6 logical volume. One drive on each node was left unassigned as a spare. Figure 7. Each node within the two-node HP ProLiant SL4540 server was configured with two 18.2 TB RAID 6 logical volumes (bricks). In addition, HP ProLiant DL380 G5 servers were used for load generator and load balancer functionality, as well as for additional MySQL NDB nodes in the distributed configuration. These servers were configured as follows: Intel Xeon Processor X5460 at 3.16 GHz. Two 8192 MB DIMMs. Eight 146 GB disk drives configured with RAID 1+0. Red Hat Enterprise Linux 6.4. Software configuration The following software versions were used in the testing: Apache PHP 5.3 owncloud Enterprise Edition MySQL
12 The extra packages for enterprise Linux repository for Red Hat Enterprise Linux was also included as it provides additional packages that are required by owncloud. Some of the components were used because they are the default and supported Red Hat Enterprise Linux packages. Significant performance improvements can be expected when switching to PHP 5.5, for example. For the owncloud installation, server-side encryption was disabled and the owncloud loglevel was set to FATAL only. The default owncloud 6 Enterprise Edition apps were activated, including: Deleted files First run wizard Image viewer Provisioning API Share files Text editor User account migration Versions owncloud instance migration Testing process The testing performed was based on the average owncloud user model, which is the result of several years of working with owncloud customers in production environments. With this model, each user was assumed to have 1.2 desktop clients syncing actively with the server at any time. In other words, for every five users on owncloud, four were syncing with one desktop client and one was syncing with two desktop clients. In addition, it was assumed that each user accessed the server once every hour with a mobile app to browse the file list, and that the user also downloaded one new file and uploaded two new files every hour. The test process itself contained two key measurements: 1. The median time for one request without concurrent access. 2. The number of requests per second using concurrent requests. In tuning server performance, reference measurements were taken accessing a simple textfile. The configuration was tuned for owncloud performance (including database and storage access) for the most-used test scenarios, including: HTTP PUT requests for placing a file on the server. HTTP GET requests for reference measurements on small text files or owncloud folder views. HTTP PROPFIND requests to retrieve data from the server (such as lists of file in a given folder). Upload of files. Download of files 12
13 Information on the Red Hat subscription model can be found at subscription. Solution implementation guidance Building the tested configuration requires installing, configuring, and tuning software on the various servers in the solution architecture. While specific detailed step-by-step instructions are beyond the scope of this document, the sections that follow provide an overview of this process with references to appropriate material for more in-depth information. Configuring the load generator and load balancer Instructions for installing Red Hat Enterprise Linux 6.4 can be found at https:// access./site/ documentation/en-us/red_ Hat_Enterprise_Linux/6/ pdf/installation_guide/ Red_Hat_Enterprise_Linux-6- Installation_Guide-en-US.pdf. Creating a scalable environment at the application layer introduces the need for a load balancer system. In this solution architecture, the stateless application servers are simply replicated behind a load balancer to provide a scalable and highly available environment. Because the compute and storage elements have been consolidated, load balancing can simply be applied at the application layer, as each application layer instance has its own fully-replicated database and storage. The application layer will also provide error handling. For example, in the event one application node loses its storage node, the application tier will no longer respond to the load balancers high-availability check and will no longer receive traffic as a result. In testing performed by owncloud and Red Hat, HAProxy was chosen as the load balancer and configured for HTTPS traffic. The documentation can be found at HAProxy servers were configured with a heartbeat and IP shift to fail over the IP address should one server fail. The HAProxy load balancer is not strictly required, and any commercial load balancer will suffice (e.g., F5 s BIG-IP load balancers). The HP ProLiant DL380 G5 servers used as load generator, load balancer, and optional MySQL database servers all ran Red Hat Enterprise Linux. When configuring load balancing for this type of application stack, there the following suggested options are related to managing application sessions. Sticky sessions. Sticky sessions can be used to route or attempt to route a user to the same server in the stack each time (based on a given identifier), storing all of the user session information locally on that server. The drawback of this approach is that failover will result in a session reset, forcing the user to log back in. The other potential drawback is power user domination of resources. Round-robin scheduling. For testing purposes, owncloud and Red Hat engineers chose to employ a round-robin approach to load balancing. This approach requires the installation of memcache to act as the session manager. Memcache was installed on each node in the cluster, and the application layer was directed to write session information to each instance of memcache. The result is a system where round-robin scheduling can be performed at the transaction level spreading the workload evenly across all of the nodes. Installing and configuring Red Hat Storage Server As mentioned, the consolidated app/storage layer of the solution architecture consists of at least one HP ProLiant SL4540 Gen8 two-node server running Red Hat Storage Server with GlusterFS, which is pre-configured as a part of the Red Hat Storage Server offering. Both nodes on the server were installed with the Red Hat Storage Server 2.1 ISO image that is available at Red Hat Network (rhn.). This release is based on Red Hat Enterprise Linux 6.4 and GlusterFS 3.4. All the documentation for installing and configuring the Red Hat Storage Server can be found on the Red Hat Customer Portal (see Figure 8) at Red_Hat_Storage. 13
14 Figure 8. Documentation for installing Red Hat Storage Server is available on the Red Hat Customer Portal. For the purposes of scale and high availability, a distributed replicated GlusterFS volume was configured with IP failover. The storage was configured on a separate subnet with bonded NICs at the application server level. Engineers decided to address the storage using GlusterFS Native Client protocol for high availability, extensibility, and performance advantages. This approach allows simply adding more bricks to the storage volume, backed by additional physical disk. It is worth noting that several options are available for storage configuration, but those are beyond the scope of this document. Red Hat Storage Server is installed by following the instructions in the installation guide. Once the system is up and running, the storage bricks are created using the rhs-server-init.sh script. This script performs the following: Creates a physical volume. Creates a logical volume. Creates an XFS file system on the logical volume. Runs a tuned performance profile for high throughput. Once bricks have been created and mounted on all Red Hat Storage Server nodes, the next step is to create a storage cluster by entering the following command on the first node (rsh01 in this case): rhs01 # gluster peer probe rhs02 Next, confirm that the two storage nodes are in a cluster by typing the command: rhs01 # gluster peer status If successful, the result will show that rhs01 is connected to rhs02. If the command is entered from rhs02, then it will be shown connected to rhs01. 14
15 Next, the GlusterFS volume is created using all of the bricks available on the two storage nodes that will serve as back-end storage for the owncloud data, in a volume called ocdata. rhs01 # gluster volume create ocdata replica 2 rhs01:/rhs/brick1/ocdata rhs02:/rhs/brick1/ocdata rhs01:/rhs/brick2/ocdata rhs02:/rhs/brick2/ocdata rhs01 # gluster volume start ocdata rhs01 # gluster volume info The volume creation command above creates a distributed-replicated volume where the bricks from nodes rhs01 and rhs02 are synchronously replicated. This configuration ensures high data availability and continued owncloud service even if one of the nodes becomes unavailable. Installing and configuring the application layer Once Red Hat Storage Server is installed and configured on both nodes of the HP ProLiant SL4540 Gen8 server, the next step is to install Apache web server software as well as the owncloud PHP application. Finally, owncloud is configured to access the GlusterFS volume. In testing performed by owncloud and Red Hat, the following components were installed on each application server: Apache PHP 5.4.x PHP-GD, PHP-XML, PHP-MYSQL, PHP-CURL SMBCLIENT Once these components are in place, owncloud can be installed by visiting: download. The owncloud software is then installed on Red Hat Storage nodes rhs01 and rhs02 under /var/www/html/owncloud. The next step is to mount the GlusterFS volume as an owncloud data store. The GlusterFS native client protocol is the recommended method for accessing Red Hat Storage Server volumes when high concurrency and high write performance is required. The GlusterFS volume should be mounted under the /var/www/owncloud/data directory on both rhs01 and rhs02 nodes using the GlusterFS native protocol. This is accomplished by entering the following command on both server nodes: rhs01 # mount -t glusterfs rhs01:/ocdata /var/www/owncloud/data rhs02 # mount -t glusterfs rhs02:/ocdata /var/www/owncloud/data Configuring database server nodes To assure database performance and high availability, Red Hat and owncloud engineers chose MySQL clustering using the NDB storage engine. Two database nodes made up the cluster, whether using two separate servers in the distributed configuration example or running the database cluster on the two HP ProLiant SL4540 Gen8 server nodes in the consolidated configuration. In both cases, each node was configured as both a data storage node and a management node creating a fully redundant database stack. The MySQL NDB cluster allows for writing data to the data storage node, with real-time replication to the other node(s) in the cluster. The cluster was configured based on the documentation found at 15
16 The benefit of MySQL NDB clustering is that both nodes can accept writes to the database. A memory-based engine is used to perform data replication to the other nodes in the cluster. This approach gives each owncloud application instance its own database node. The distributed configuration involved configuring two redundant MySQL NDB SQL/storage nodes on HP ProLiant DL380 G5 servers. In the consolidated configuration, the redundant MySQL nodes were deployed on the same HP ProLiant SL4540 Gen8 server nodes running the app server and Red Hat Storage Server. In either case, the owncloud application servers connect to the database via the MySQL Proxy, which distributes traffic among the various SQL/storage nodes. Note: Red Hat virtualization technology in the form of the Kernel-based Virtual Machine (KVM) technology is recommended when deploying a MySQL NDB cluster consolidated on the app/storage nodes. Containing the MySQL NDB cluster instances with virtualization allows control of database resources and provides a greater level of control over resource consumption for any individual element of the application stack. Additional tuning and optimization While extensive tuning advice is beyond the scope of this document, engineers made a number of optimizations in the course of testing the configuration, including: Live connections. To support a large number of concurrent users, the number of live connections needs to be adjusted to 1,000 or more, which has implications for multiple elements of the architecture. Red Hat Enterprise Linux, the Apache web server, and the MySQL database all must have sufficient connection resources available to support thousands of connections. Enabling the Alternative PHP Cache (APC). The Alternative PHP Cache is a free and open opcode cache for PHP. Turning on the APC on the app servers increases performance for owncloud and is recommended. Placing PHP code into RAM disk. For faster execution of owncloud code, a RAM disk can be used to store the owncloud data directory. Applying MySQL Indexes. Creating indexes for most of the active MySQL tables is also recommended. Those tables minimally include oc_group_user, oc_share, oc_filecache, oc_files_versions and oc_files_trashsize. Conclusion On-premise file sync and share solutions give organizations vital control over the security and availability of their data, without the inherent risks presented by cloud-based solutions. The unique combination of owncloud with Red Hat Storage Server on HP ProLiant SL4540 Gen8 two-node servers provides a scalable and high-performance solution that lets users collaborate effectively. Unlike traditional NAS solutions, Red Hat Storage Server lets organizations collapse layers of datacenter infrastructure by consolidating owncloud application servers, storage server, and even the MySQL database onto the same physical platform. The solution is scalable, available, and cost effective, while providing the significant performance needed to address growing enterprise mobility and enhance essential collaboration and productivity for mobile users. 16
17 Technology Detail Scale-Out File Sync and Share: Deploying owncloud on Red Hat Storage Server References Learn more about: owncloud: Red Hat Storage Server: HP ProLiant SL4540 Gen8 servers: its customers businesses. linkedin.com/company/red-hat NORTH AMERICA REDHAT1 EUROPE, MIDDLE EAST, AND AFRICA ASIA PACIFIC LATIN AMERICA # _KVM _0614
RED HAT GLUSTER STORAGE ON THE HP PROLIANT SL4540 GEN8 SERVER Deploying open scalable software-defined storage
TECHNOLOGY DETAIL ON THE HP PROLIANT SL4540 GEN8 SERVER Deploying open scalable software-defined storage TABLE OF CONTENTS 2 INTRODUCTION Storage, growth, innovation, and cost HP and Red Hat have designed
CONSOLIDATING THE STORAGE TIER FOR PERFORMANCE AND SCALABILITY Data Management and Protection with CommVault Simpana and Red Hat Storage
TECHNOLOGY DETAIL CONSOLIDATING THE STORAGE TIER FOR PERFORMANCE AND SCALABILITY Data Management and Protection with CommVault Simpana and Red Hat Storage TABLE OF CONTENTS 2 INTRODUCTION 2 COMMVAULT SIMPANA
HP RA for Red Hat Storage Server on HP ProLiant SL4540 Gen8 Server
Technical white paper HP RA for Red Hat Storage Server on HP ProLiant SL4540 Gen8 Server Deploying open scalable software-defined storage Table of contents Executive summary... 2 Introduction... 2 Storage
CHOOSING THE RIGHT STORAGE PLATFORM FOR SPLUNK ENTERPRISE
WHITEPAPER CHOOSING THE RIGHT STORAGE PLATFORM FOR SPLUNK ENTERPRISE INTRODUCTION Savvy enterprises are investing in operational analytics to help manage increasing business and technological complexity.
RED HAT STORAGE SERVER An introduction to Red Hat Storage Server architecture
TECHNOLOGY DETAIL RED HAT STORAGE SERVER An introduction to Red Hat Storage Server architecture ABSTRACT During the last decade, enterprises have seen enormous gains in scalability, flexibility, and affordability
owncloud Architecture Overview
owncloud Architecture Overview owncloud, Inc. 57 Bedford Street, Suite 102 Lexington, MA 02420 United States phone: +1 (877) 394-2030 owncloud GmbH Schloßäckerstraße 26a 90443
owncloud Architecture Overview
owncloud Architecture Overview Time to get control back Employees are using cloud-based services to share sensitive company data with vendors, customers, partners and each other. They are syncing data
Introduction to Gluster. Versions 3.0.x
Introduction to Gluster Versions 3.0.x Table of Contents Table of Contents... 2 Overview... 3 Gluster File System... 3 Gluster Storage Platform... 3 No metadata with the Elastic Hash Algorithm... 4 A Gluster
Private cloud computing advances
Building robust private cloud services infrastructures By Brian Gautreau and Gong Wang Private clouds optimize utilization and management of IT resources to heighten availability. Microsoft Private Cloud functionality: advantages and Disadvantages
Whitepaper RED HAT JOINS THE OPENSTACK COMMUNITY IN DEVELOPING AN OPEN SOURCE, PRIVATE CLOUD PLATFORM Introduction: CLOUD COMPUTING AND The Private Cloud cloud functionality: advantages and Disadvantages
Reference Design: Scalable Object Storage with Seagate Kinetic, Supermicro, and SwiftStack
Reference Design: Scalable Object Storage with Seagate Kinetic, Supermicro, and SwiftStack May 2015 Copyright 2015 SwiftStack, Inc. swiftstack.com Page 1 of 19 Table of Contents INTRODUCTION... 3 OpenStack
Scala Storage Scale-Out Clustered Storage White Paper
White Paper Scala Storage Scale-Out Clustered Storage White Paper Chapter 1 Introduction... 3 Capacity - Explosive Growth of Unstructured Data... 3 Performance - Cluster Computing... 3 Chapter 2 Current
Maxta Storage Platform Enterprise Storage Re-defined
Maxta Storage Platform Enterprise Storage Re-defined WHITE PAPER Software-Defined Data Center The Software-Defined Data Center (SDDC) is a unified data center platform that delivers converged
RED HAT OPENSTACK PLATFORM A COST-EFFECTIVE PRIVATE CLOUD FOR YOUR BUSINESS
WHITEPAPER RED HAT OPENSTACK PLATFORM A COST-EFFECTIVE PRIVATE CLOUD FOR YOUR BUSINESS INTRODUCTION The cloud is more than a marketing concept. Cloud computing is an intentional, integrated architecture
EMC SYNCPLICITY FILE SYNC AND SHARE SOLUTION
EMC SYNCPLICITY FILE SYNC AND SHARE SOLUTION Automated file synchronization Flexible, cloud-based administration Secure, on-premises storage EMC Solutions January 2015 Copyright 2014 EMC Corporation. All
RED HAT CLOUD SUITE FOR APPLICATIONS
RED HAT CLOUD SUITE FOR APPLICATIONS DATASHEET AT A GLANCE Red Hat Cloud Suite: Provides a single platform to deploy and manage applications. Offers choice and interoperability without vendor lock-in.
POWER ALL GLOBAL FILE SYSTEM (PGFS)
POWER ALL GLOBAL FILE SYSTEM (PGFS) Defining next generation of global storage grid Power All Networks Ltd. Technical Whitepaper April 2008, version 1.01 Table of Content 1. Introduction.. 3 2. Paradigm
High Performance MySQL Cluster Cloud Reference Architecture using 16 Gbps Fibre Channel and Solid State Storage Technology
High Performance MySQL Cluster Cloud Reference Architecture using 16 Gbps Fibre Channel and Solid State Storage Technology Evaluation report prepared under contract with Brocade Executive Summary As CIOs
Enterprise Private Cloud Storage
Enterprise Private Cloud Storage The term cloud storage seems to have acquired many definitions. At Cloud Leverage, we define cloud storage as an enterprise-class file server located in multiple geographically
3 Red Hat Enterprise Linux 6 Consolidation
Whitepaper Consolidation EXECUTIVE SUMMARY At this time of massive and disruptive technological changes where applications must be nimbly deployed on physical, virtual, and cloud infrastructure, Red Hat
(Scale Out NAS System)
For Unlimited Capacity & Performance Clustered NAS System (Scale Out NAS System) Copyright 2010 by Netclips, Ltd. All rights reserved -0- 1 2 3 4 5 NAS Storage Trend Scale-Out NAS Solution Scaleway Advantages
Introduction to Red Hat Storage. January, 2012
Introduction to Red Hat Storage January, 2012 1 Today s Speakers 2 Heather Wellington Tom Trainer Storage Program Marketing Manager Storage Product Marketing Manager Red Hat Acquisition of Gluster What
How Solace Message Routers Reduce the Cost of IT Infrastructure
How Message Routers Reduce the Cost of IT Infrastructure This paper explains how s innovative solution can significantly reduce the total cost of ownership of your messaging middleware platform and
Achieving Real-Time Business Solutions Using Graph Database Technology and High Performance Networks
WHITE PAPER July 2014 Achieving Real-Time Business Solutions Using Graph Database Technology and High Performance Networks Contents Executive Summary...2 Background...3 InfiniteGraph...3 High Performance
Understanding Microsoft Storage Spaces
S T O R A G E Understanding Microsoft Storage Spaces A critical look at its key features and value proposition for storage administrators A Microsoft s Storage Spaces solution offers storage administrators
RED HAT STORAGE SERVER TECHNICAL OVERVIEW
RED HAT STORAGE SERVER TECHNICAL OVERVIEW Ingo Börnig Solution Architect, Red Hat 24.10.2013 NEW STORAGE REQUIREMENTS FOR THE MODERN HYBRID DATACENTER DESIGNED FOR THE NEW DATA LANDSCAPE PETABYTE SCALE
New Features in SANsymphony -V10 Storage Virtualization Software
New Features in SANsymphony -V10 Storage Virtualization Software Updated: May 28, 2014 Contents Introduction... 1 Virtual SAN Configurations (Pooling Direct-attached Storage on hosts)... 1 Scalability
We look beyond IT. Cloud Offerings
Cloud Offerings cstor Cloud Offerings As today s fast-moving businesses deal with increasing demands for IT services and decreasing IT budgets, the onset of cloud-ready solutions has provided a forward
Parallels Cloud Storage
Parallels Cloud Storage White Paper Best Practices for Configuring a Parallels Cloud Storage Cluster Table of Contents Introduction... 3 How Parallels Cloud Storage Works... 3 Deploying
Selecting the Right NAS File Server
Selecting the Right NAS File Server As the network administrator for a workgroup LAN, consider this scenario: once again, one of your network file servers is running out of storage space. You send
SQL Server Storage Best Practice Discussion Dell EqualLogic
SQL Server Storage Best Practice Discussion Dell EqualLogic What s keeping you up at night? Managing the demands of a SQL environment Risk Cost Data loss Application unavailability Data growth SQL Server
Qualcomm Achieves Significant Cost Savings and Improved Performance with Red Hat Enterprise Virtualization
Qualcomm Achieves Significant Cost Savings and Improved Performance with Red Hat Enterprise Virtualization Fast facts Customer Industry Geography Business challenge Solution Qualcomm Telecommunications
Archive Data Retention & Compliance. Solutions Integrated Storage Appliances. Management Optimized Storage & Migration
Solutions Integrated Storage Appliances Management Optimized Storage & Migration Archive Data Retention & Compliance Services Global Installation & Support SECURING THE FUTURE OF YOUR DATA w w w.q sta
Introduction to NetApp Infinite Volume
Technical Report Introduction to NetApp Infinite Volume Sandra Moulton, Reena Gupta, NetApp April 2013 TR-4037 Summary This document provides an overview of NetApp Infinite Volume, a new innovation in
Cloud for Your Business
Whitepaper Red Hat Enterprise Linux OpenStack Platform A Cost-Effective Private Cloud for Your Business Introduction The cloud is more than a marketing concept. Cloud computing is an intentional, integrated
Accelerating and Simplifying Apache
Accelerating and Simplifying Apache Hadoop with Panasas ActiveStor White paper NOvember 2012 1.888.PANASAS Executive Overview The technology requirements for big data vary significantly
HadoopTM Analytics DDN
DDN Solution Brief Accelerate> HadoopTM Analytics with the SFA Big Data Platform Organizations that need to extract value from all data can leverage the award winning SFA platform to really accelerate
Unified Computing Systems
Unified Computing Systems Cisco Unified Computing Systems simplify your data center architecture; reduce the number of devices to purchase, deploy, and maintain; and improve speed and agility. Cisco Unified to Choose your Red Hat Enterprise Linux Filesystem
How to Choose your Red Hat Enterprise Linux Filesystem EXECUTIVE SUMMARY Choosing the Red Hat Enterprise Linux filesystem that is appropriate for your application is often a non-trivial decision due to
ENTERPRISE STORAGE WITH THE FUTURE BUILT IN
ENTERPRISE STORAGE WITH THE FUTURE BUILT IN Breakthrough Efficiency Intelligent Storage Automation Single Platform Scalability Real-time Responsiveness Continuous Protection Storage Controllers Storage
HP AppSystem for SAP HANA
Technical white paper HP AppSystem for SAP HANA Distributed architecture with 3PAR StoreServ 7400 storage Table of contents Executive summary... 2 Introduction... 2 Appliance components... 3 3PAR StoreServ
MONITORING RED HAT GLUSTER SERVER DEPLOYMENTS With the Nagios IT infrastructure monitoring tool
TECHNOLOGY DETAIL MONITORING RED HAT GLUSTER SERVER DEPLOYMENTS With the Nagios IT infrastructure monitoring tool INTRODUCTION Storage system monitoring is a fundamental task for a storage administrator.
Big data management with IBM General Parallel File System
Big data management with IBM General Parallel File System Optimize storage management and boost your return on investment Highlights Handles the explosive growth of structured and unstructured data Offers
LSI MegaRAID CacheCade Performance Evaluation in a Web Server Environment
LSI MegaRAID CacheCade Performance Evaluation in a Web Server Environment Evaluation report prepared under contract with LSI Corporation Introduction Interest in solid-state storage (SSS) is high, and
OPENSTACK IN THE ENTERPRISE Best practices for deploying enterprise-grade OpenStack implementations
WHITEPAPER OPENSTACK IN THE ENTERPRISE Best practices for deploying enterprise-grade OpenStack implementations Vinny Valdez INTRODUCTION 64% of IT managers have OpenStack on their technology roadmaps.
The Design and Implementation of the Zetta Storage Service. October 27, 2009
The Design and Implementation of the Zetta Storage Service October 27, 2009 Zetta s Mission Simplify Enterprise Storage Zetta delivers enterprise-grade storage as a service for IT professionals needing
A virtual SAN for distributed multi-site environments
Data sheet A virtual SAN for distributed multi-site environments What is StorMagic SvSAN? StorMagic SvSAN is a software storage solution that enables enterprises to eliminate downtime of business critical
Integrated Grid Solutions. and Greenplum
EMC Perspective Integrated Grid Solutions from SAS, EMC Isilon and Greenplum Introduction Intensifying competitive pressure and vast growth in the capabilities of analytic computing platforms are driving
Contributions for this vendor neutral technology paper have been provided by Blade.org members including NetApp, BLADE Network Technologies, and Double-Take Software. June 2009 Blade.org 2009 ALL
Cloud Storage. Parallels. Performance Benchmark Results. White Paper.
Parallels Cloud Storage White Paper Performance Benchmark Results Table of Contents Executive Summary... 3 Architecture Overview... 3 Key Features... 4 No Special Hardware Requirements...
Platfora Big Data Analytics
Platfora Big Data Analytics ISV Partner Solution Case Study and Cisco Unified Computing System Platfora, the leading enterprise big data analytics platform built natively on Hadoop and Spark, delivers
Intel RAID SSD Cache Controller RCS25ZB040
SOLUTION Brief Intel RAID SSD Cache Controller RCS25ZB040 When Faster Matters Cost-Effective Intelligent RAID with Embedded High Performance Flash Intel RAID SSD Cache Controller RCS25ZB040 When Faster
SummitStack in the Data Center
SummitStack in the Data Center Abstract: This white paper describes the challenges in the virtualized server environment and the solution Extreme Networks offers a highly virtualized, centrally manageable
A High-Performance Storage and Ultra-High-Speed File Transfer Solution
A High-Performance Storage and Ultra-High-Speed File Transfer Solution Storage Platforms with Aspera Abstract A growing number of organizations in media and entertainment, life sciences, high-performance
Getting More Performance and Efficiency in the Application Delivery Network
SOLUTION BRIEF Intel Xeon Processor E5-2600 v2 Product Family Intel Solid-State Drives (Intel SSD) F5* Networks Delivery Controllers (ADCs) Networking and Communications Getting More Performance and Efficiency
SUN HARDWARE FROM ORACLE: PRICING FOR EDUCATION
SUN HARDWARE FROM ORACLE: PRICING FOR EDUCATION AFFORDABLE, RELIABLE, AND GREAT PRICES FOR EDUCATION Optimized Sun systems run Oracle and other leading operating and virtualization platforms with greater
Quantum StorNext. Product Brief: Distributed LAN Client
Quantum StorNext Product Brief: Distributed LAN Client NOTICE This product brief may contain proprietary information protected by copyright. Information in this product brief is subject to change without
Simplify Data Management and Reduce Storage Costs with File Virtualization
What s Inside: 2 Freedom from File Storage Constraints 2 Simplifying File Access with File Virtualization 3 Simplifying Data Management with Automated Management Policies 4 True Heterogeneity 5 Data Protection
HP ProLiant Storage Server family. Radically simple storage
HP ProLiant Storage Server family Radically simple storage The HP ProLiant Storage Server family delivers affordable, easy-to-use network attached storage (NAS) solutions that simplify storage management
The Ultimate in Scale-Out Storage for HPC and Big Data
Node Inventory Health and Active Filesystem Throughput Monitoring Asset Utilization and Capacity Statistics Manager brings to life powerful, intuitive, context-aware real-time monitoring and proactive
Unitrends Recovery-Series: Addressing Enterprise-Class Data Protection
Solution Brief Unitrends Recovery-Series: Addressing Enterprise-Class Data Protection 2 Unitrends has leveraged over 20 years of experience in understanding ever-changing data protection challenges in
Clustering Windows File Servers for Enterprise Scale and High Availability
Enabling the Always-On Enterprise Clustering Windows File Servers for Enterprise Scale and High Availability By Andrew Melmed Director of Enterprise Solutions, Sanbolic, Inc. April 2012 Introduction Microsoft
Easier - Faster - Better
Highest reliability, availability and serviceability ClusterStor gets you productive fast with robust professional service offerings available as part of solution delivery, including quality controlled | http://docplayer.net/889038-Scale-out-file-sync-and-share-deploying-owncloud-and-red-hat-storage-server-on.html | CC-MAIN-2017-09 | en | refinedweb |
#include "petscdraw.h" PetscErrorCode PetscDrawLGCreate(PetscDraw draw,PetscInt dim,PetscDrawLG *outlg)Collective on PetscDraw
Notes: The MPI communicator that owns the PetscDraw owns this PetscDrawLG, but the calls to set options and add points are ignored on all processes except the zeroth MPI process in the communicator. All MPI processes in the communicator must call PetscDrawLGDraw() to display the updated graph.
Level:intermediate
Location:src/sys/classes/draw/utils/lgc.c
Index of all Draw routines
Table of Contents for all manual pages
Index of all manual pages | http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Draw/PetscDrawLGCreate.html | CC-MAIN-2017-09 | en | refinedweb |
I need to check in Python if the current time on the server isn't in the time range of 22:00PM - 04:00AM.
What is the correct way to write this code ?
Thanks !
Try this below:
import time def isWithinRange(): hour = time.localtime(time.time()).tm_hour return hour >= 22 or hour <= 4 | https://codedump.io/share/2om9euRoe8fo/1/how-do-i-determine-if-current-server-time-isn39t-within-a-specified-range-using-python- | CC-MAIN-2017-09 | en | refinedweb |
Opened 11 years ago
Closed 11 years ago
Last modified 11 years ago
#441 closed defect (fixed)
0.10-incompatibility: Broken.
Description
With Trac trunk (r3411) it stopped working with KeyError (check attached log). Code according to backtrace seems this section:
def _heading_formatter(self, match, fullmatch): ... anchor = self._anchors[-1] ...
I have modified it to the following (code taken from Trac Wiki formatter directly):
def _heading_formatter(self, match, fullmatch): ... anchor = fullmatch.group('hanchor') or '' ...
Seems to be working now. Hope this helps.
Attachments (8)
Change History (19)
Changed 11 years ago by
comment:1 follow-up: 2 Changed 11 years ago by
With Trac trunk (r3411) it stopped working with KeyError (check attached log).
Actually, this changed in [T3408].
def _heading_formatter(self, match, fullmatch): ... anchor = fullmatch.group('hanchor') or '' ...
No, that's not a correct fix. See the abovementioned change
for an example about how to get the
anchor.
The
hanchor group is for an optional, explicitely given
id,
which, most of the time, will not be given.
comment:2 Changed 11 years ago by
Changed 11 years ago by
comment:3 Changed 11 years ago by
comment:4 Changed 11 years ago by
Whilst MyOutlineFormatter.format is being tweaked, purely as a clarification, the following three lines could be moved outside the enclosing
for loop:
36 active = '' 37 if page == active_page: 38 active = ' class="active"'
...since neither
page nor
active_page changes from one loop iteration to the next.
comment:5 Changed 11 years ago by
comment:6 Changed 11 years ago by
cboos' macro.py suffers from inconsistent indentation schemes, and also, I think, introduces some bugs by accidentally shifting some statements to different block levels.
I've cleaned up the indentation and put the shifted statements back to their original block level, and am attaching the result as a patch....
Changed 11 years ago by
comment:7 Changed 11 years ago by
comment:8 Changed 11 years ago by
I've split up cboos' patch into multiple changes:
- fix441: Fix this ticket. (also, added the change I suggested in comment:4)
- tweaks: General refactoring.
- linewrap: Wrap overly long lines.
- wmbref: Convert to WikiMacroBase.
- 011notes: Add notes about 0.11.
(These should be applied in that order - they have interdependencies.)
Additionally, cboos' patch contained:
- Removal of trailing whitespace: I've not included that in the above patches, since it is easier for a committer to run a simple editor command, than review a patch doing the same (but it would be nice to have this done).
- Introduction of
coding: utf-8statement: I've dropped that, since it was only there to support the addition of a weird angled quote character in an added comment, which looked like a typo anyway.
- Placing of an assignment to
argsinto an
else:clause: I've removed this change, since it results in
argsbeing a different datatype (string vs. list) depending on whether args are provided, which doesn't seem right.
- A couple of odd whitespace changes within lines: I left these out.
I will now attach the 5 patch files.
comment:9 Changed 11 years ago by
Oops! There's an error in various places throughout cboos'
macro.py and my derivatives:
out.write(system_message(MESSAGE), None)
is supposed to be:
out.write(system_message(MESSAGE, None))
The error is present in the first of two uses of system_message in
macro.py. In
tweaks.patch, I accidentally spread the error to the second use too. In
linewrap.patch, there's one change which isn't a pure line-wrap: the accidental spreading is undone again.
Rather than re-attaching fixed versions of
tweaks.patch and
linewrap.patch, please just correct the placement of the parentheses as described above, in
tweaks.patch and
linewrap.patch, before applying them - thanks!
comment:10 Changed 11 years ago by
comment:11 Changed 11 years ago by
Yep, thanks for the fixes to my fixes, maxb :)
Also, the
system_message was slightly improved in [T3431].
Lastly, don't hesitate to comment and brainstorm further relative to the attachment:011notes.patch
Trac backtrace of TocMacro error | https://trac-hacks.org/ticket/441 | CC-MAIN-2017-09 | en | refinedweb |
How can I flip the origin of a matplotlib plot to be in the upper-left corner - as opposed to the default lower-left? I'm using matplotlib.pylab.plot to produce the plot (though if there is another plotting routine that is more flexible, please let me know).
I'm looking for the equivalent of the matlab command: axis ij;
Also, I've spent a couple hours surfing matplotlib help and google but haven't come up with an answer. Some info on where I could have looked up the answer would be helpful as well.
For an image or contour plot, you can use the keyword
origin = None | 'lower' | 'upper' and for a line plot, you can set the ylimits high to low.
from pylab import * A = arange(25)/25. A = A.reshape((5,5)) figure() imshow(A, interpolation='nearest', origin='lower') figure() imshow(A, interpolation='nearest') d = arange(5) figure() plot(d) ylim(5, 0) show() | https://codedump.io/share/Rrcjks288LFF/1/matplotlib-coord-sys-origin-to-top-left | CC-MAIN-2017-09 | en | refinedweb |
So far, all of our examples only did their work on page load. As you probably guessed, that isn't normal. In most apps, especially the kind of UI-heavy ones we will be building, there is going to be a ton of things the app does only as a reaction to something. That something could be triggered by a mouse click, a key press, window resize, or a whole bunch of other gestures and interactions. The glue that makes all of this possible is something known as events.
Now, you probably know all about events from your experience using them in the DOM world. (If you don't, then I suggest getting a quick refresher first.) The way React deals with events is a bit different, and these differences can surprise you in various ways if you aren't paying close attention. Don't worry. That's why you have this tutorial. We will start off with a few simple examples and then gradually look at increasingly more bizarre, complex, and (yes!) boring things.
Onwards!
OMG! A React Book Written by Kirupa?!!
To kick your React skills up a few notches, everything you see here and more (with all its casual clarity!) is available in both paperback and digital editions.BUY ON AMAZON
Listening and Reacting to Events
The easiest way to learn about events in React is to actually use them, and that's exactly what we are going to! To help with this, we have a simple example made up of a counter that increments each time you click on a button. Initially, our example will look like this:
Each time you click on the plus button, the counter value will increase by 1. After clicking the plus button a bunch of times, it will look sorta like this:
Under the covers, the way this example works is pretty simple. Each time you click on the button, an event gets fired. We listen for this event and do all sorts of React-ey things to get the counter to update when this event gets overheard.
Starting Point
To save all of us some time, we aren't going to be creating everything in our example from scratch. By now, you probably have a good idea of how to work with components, styles, state, and so on. Instead, we are going to start off with a partially implemented example that contains everything except the event-related functionality that we are here to learn.
First, create a new HTML document and ensure your starting point looks as follows:
<!DOCTYPE html> <html> <head> <title>React! React! React!</title> <script src=""></script> <script src=""></script> <script src=""></script> <style> #container { padding: 50px; background-color: #FFF; } </style> </head> <body> <div id="container"></div> <script type="text/babel"> </script> </body> </html>
Once your new HTML document looks like what you see above, it's time to add our partially implemented counter example. Inside our script tag below the container div, add the following:
var destination = document.querySelector("#container"); var Counter = React.createClass({ render: function() { var textStyle = { fontSize: 72, fontFamily: "sans-serif", color: "#333", fontWeight: "bold" }; return ( <div style={textStyle}> {this.props.display} </div> ); } }); var CounterParent = React.createClass({ getInitialState: function() { return { count: 0 }; }, style={buttonStyle}>+</button> </div> ); } }); ReactDOM.render( <div> <CounterParent/> </div>, destination );
Once you have added all of this, preview everything in your browser to make sure things work. You should see the beginning of our counter. Take a few moments to look at what all of this code does. There shouldn't be anything that looks strange. The only odd thing will be that clicking the plus button won't do anything. We'll fix that right up in the next section.
Making the Button Click Do Something
Each time we click on the plus button, we want the value of our counter to increase by one. What we need to do is going to roughly look like this:
- Listen for the click event on the button.
- When a click event is overheard, specify the event handler that will deal with it.
- Actually implement the event handler where we increase the value of our this.state.count property that our counter relies on.
We'll just go straight down the list...starting with listening for the click event. In React, you listen to an event by specifying everything inline in your JSX itself. More specifically, you specify both the event you are listening for and the event handler that will get called all inside your markup. To do this, find the return function inside our CounterParent component, and make the following highlighted change:
. . . return ( <div style={backgroundStyle}> <Counter display={this.state.count}/> <button onClick={this.increase} style={buttonStyle}>+</button> </div> );
What we've done is told React to call the increase function when the onClick event is overheard. Next, let's go ahead and implement the increase function - aka our event handler. Inside our CounterParent component, add the following highlighted lines:
var CounterParent = React.createClass({ getInitialState: function() { return { count: 0 }; }, increase: function(e) { this.setState({ count: this.state.count + 1 }); }, onClick={this.increase} style={buttonStyle}>+</button> </div> ); } });
All we are doing with these lines is making sure that each call to the increase function increments the value of our this.state.count property by 1. Because we are dealing with events, your increase function (as the designated event handler) will get access to any event arguments. We have set these arguments to be accessed by e, and you can see that by looking at our increase function's signature (aka what its declaration looks like). We'll talk about the various events and their properties in a little bit.
Now, go ahead and preview what you have in your browser. Once everything has loaded, click on the plus button to see all of our newly added code in action. Our counter value should increase with each click! Isn't that pretty awesome?
Event Properties
As you know, our events pass what are known as event arguments to our event handler. These event arguments contain a bunch of properties that are specific to the type of event you are dealing with. In the regular DOM world, each event has its own type. For example, if you are dealing with a mouse event, your event and its event arguments object will be of type MouseEvent. This MouseEvent object will allow you to access mouse-specific information like which button was pressed or the screen position of the mouse click. Event arguments for a keyboard-related event are of type KeyboardEvent. Your KeyboardEvent object contains properties which (among many other things) allow you to figure out which key was actually pressed. I could go on forever for every other Event type, but you get the point. Each Event type contains its own set of properties that you can access via the event handler for that event!
Why am I boring you with things you already know? Well..
Meet Synthetic Events
In React, when you specify an event in JSX like we did with onClick, you are not directly dealing with regular DOM events. Instead, you are dealing with a React-specific event type known as a SyntheticEvent. Your event handlers don't get native event arguments of type MouseEvent, KeyboardEvent, etc. They always get event arguments of type SyntheticEvent that wrap your browser's native event instead. What is the fallout of this in our code? Surprisingly not a whole lot.
Each SyntheticEvent contains the following properties:
These properties should seem pretty straightforward...and generic! The non-generic stuff depends on what type of native event our SyntheticEvent is wrapping. This means that a SyntheticEvent that wraps a MouseEvent will have access to mouse-specific properties such as the following:
boolean altKey number button number buttons number clientX number clientY boolean ctrlKey boolean getModifierState(key) boolean metaKey number pageX number pageY DOMEventTarget relatedTarget number screenX number screenY boolean shiftKey
Similarly, a SyntheticEvent that wraps a KeyboardEvent will have access to these additional keyboard-related properties:
boolean altKey number charCode boolean ctrlKey boolean getModifierState(key) string key number keyCode string locale number location boolean metaKey boolean repeat boolean shiftKey number which
In the end, all of this means that you still get the same functionality in the SyntheticEvent world that you had in the vanilla DOM world.
Now, here is something I learned the hard way. Don't refer to traditional DOM event documentation when using Synthetic events and their properties. Because the SyntheticEvent wraps your native DOM event, events and their properties may not map one-to-one. Some DOM events don't even exist in React. To avoid running into any issues, if you want to know the name of a Synthetic event or any of its properties, refer to the React Event System document instead.
Doing Stuff With Event Properties
By now, you've probably seen more about the DOM and Synthetic events than you'd probably like. To wash away the taste of all that text, let's write some code and put all of this new found knowledge to good use. Right now, our counter example increments by one each time you click on the plus button. What we want to do is increment our counter by ten when the Shift key on the keyboard is pressed while clicking the plus button with our mouse.
The way we are going to do that is by using the shiftKey property that exists on the SyntheticEvent when using the mouse:
boolean altKey number button number buttons number clientX number clientY boolean ctrlKey boolean getModifierState(key) boolean metaKey number pageX number pageY DOMEventTarget relatedTarget number screenX number screenY boolean shiftKey
The way this property works is simple. If the Shift key is pressed when this mouse event fires, then the shiftKey property value is true. Otherwise, the shiftKey property value is false. To increment our counter by 10 when the Shift key is pressed, go back to our increase function and make the following highlighted changes:
increase: function(e) { var currentCount = this.state.count; if (e.shiftKey) { currentCount += 10; } else { currentCount += 1; } this.setState({ count: currentCount }); },
Once you've made the changes, preview our example in the browser. Each time you click on the plus button, your counter will increment by one just like it had always done. If you click on the plus button with your Shift key pressed, notice that our counter increments by 10 instead.
The reason that all of this works is because we change our incrementing behavior depending on whether the Shift key is pressed or not. That is primarily handled by the following lines:
if (e.shiftKey) { currentCount += 10; } else { currentCount += 1; }
If the shiftKey property on our SyntheticEvent event argument is true, we increment our counter by 10. If the shiftKey value is false, we just increment by 1.
More Eventing Shenanigans
We are not done yet! Up until this point, we've looked at how to work with events in React in a very simplistic way. In the real world, rarely will things be as direct as what we've seen. Your real apps will be more complex, and because React insists on doing things differently, we'll need to learn (or re-learn) some new event-related tricks and techniques to make our apps work. That's where this section comes in. We are going to look at some common situations you'll run into and how to deal with them.
You Can't Directly Listen to Events on Components
Let's say your component is nothing more than a button or another type of UI element that users will be interacting with. You can't get away with doing something like what we see in the following highlighted line:
var CounterParent = React.createClass({ getInitialState: function() { return { count: 0 }; }, increase: function() { this.setState({ count: this.state.count + 1 }); }, render: function() { return ( <div> <Counter display={this.state.count}/> <PlusButton onClick={this.increase}/> </div> ); } });
On the surface, this line of JSX looks totally valid. When somebody clicks on our PlusButton component, the increase function will get called. In case you are curious, this is what our PlusButton component looks like:
var PlusButton = React.createClass({ render: function() { return ( <button> + </button> ); } });
Our PlusButton component doesn't do anything crazy. It only returns a single HTML element!
No matter how you slice and dice this, none of this matters. It doesn't matter how simple or obvious the HTML we are returning via a component looks like. You simply can't listen for events on them directly. The reason is because components are wrappers for DOM elements. What does it even mean to listen for an event on a component? Once your component gets unwrapped into DOM elements, does the outer HTML element act as the thing you are listening for the event on? Is it some other element? How do you distinguish between listening for an event and declaring a prop you are listening for?
There is no clear answer to any of those questions. It's too harsh to say that the solution is to simply not listen to events on components either. Fortunately, there is a workaround where we treat the event handler as a prop and pass it on to the component. Inside the component, we can then assign the event to a DOM element and set the event handler to the the value of the prop we just passed in. I realize that probably makes no sense, so let's walk through an example.
Take a look at the following highlighted line:
var CounterParent = React.createClass({ . . . render: function() { return ( <div> <Counter display={this.state.count}/> <PlusButton clickHandler={this.increase}/> </div> ); } });
In this example, we create a property called clickHandler whose value is the increase event handler. Inside our PlusButton component, we can then do something like this:
var PlusButton = React.createClass({ render: function() { return ( <button onClick={this.props.clickHandler}> + </button> ); } });
On our button element, we specify the onClick event and set its value to the clickHandler prop. At runtime, this prop gets evaluated as our increase function, and clicking the plus button ensures the increase function gets called. This solves our problem while still allowing our component to participate in all this eventing goodness!
Listening to Regular DOM Events
If you thought the previous section was a doozy, wait till you see what we have here. Not all DOM events have SyntheticEvent equivalents. It may seem like you can just add the on prefix and capitalize the event you are listening for when specifying it inline in your JSX:
var Something = React.createClass({ handleMyEvent: function(e) { // do something }, render: function() { return ( <div myWeirdEvent={this.handleMyEvent}>Hello!</div> ); } });
It doesn't work that way! For those events that aren't officially recognized by React, you have to use the traditional approach that uses addEventListener with a few extra hoops to jump through.
Take a look at the following section of code:
var Something = React.createClass({ handleMyEvent: function(e) { // do something }, componentDidMount: function() { window.addEventListener("someEvent", this.handleMyEvent); }, componentWillUnmount: function() { window.removeEventListener("someEvent", this.handleMyEvent); }, render: function() { return ( <div>Hello!</div> ); } });
We have our Something component that listens for an event called someEvent. We start listening for this event under the componentDidMount method which is automatically called when our component gets rendered. The way we listen for our event is by using addEventListener and specifying both the event and the event handler to call:
var Something = React.createClass({ handleMyEvent: function(e) { // do something }, componentDidMount: function() { window.addEventListener("someEvent", this.handleMyEvent); }, componentWillUnmount: function() { window.removeEventListener("someEvent", this.handleMyEvent); }, render: function() { return ( <div>Hello!</div> ); } });
That should be pretty straightforward. The only other thing you need to keep in mind is removing the event listener when the component is about to be destroyed. To do that, you can use the opposite of the componentDidMount method, the componentWillUnmount method. Inside that method, put your removeEventListener call there to ensure no trace of our event listening takes place after our component goes away.
The Meaning of this Inside the Event Handler
When dealing with events in React, the value of this inside your event handler is different than what you would normally see in the non-React DOM world. In the non-React world, the value of this inside an event handler refers to the element that fired the event:
function doSomething(e) { console.log(this); //button element } var foo = document.querySelector("button"); foo.addEventListener("click", doSomething, false);
In the React world (when your components are created using React.createClass), the value of this inside your event handler always refers to the component the event handler lives in:
var CounterParent = React.createClass({ getInitialState: function() { return { count: 0 }; }, increase: function(e) { console.log(this); // CounterParent component this.setState({ count: this.state.count + 1 }); }, render: function() { return ( <div> <Counter display={this.state.count}/> <button onClick={this.increase}>+</button> </div> ); } });
In this example, the value of this inside the increase event handler refers to the CounterParent component. It doesn't refer to the element that triggered the event. You get this behavior because React automatically binds all methods inside a component to this. This autobinding behavior only applies when your component is created using React.createClass. If you are using ES6 classes to define your components, the value of this inside your event handler is going to be undefined unless you explicitly bind it yourself:
<button onClick={this.increase.bind(this)}>+</button>
There is no autobinding magic that happens in ES6, so be sure to keep that in mind if you aren't using React.createClass to create your components.
React...why? Why?!
Before we call it a day, let's use this time to talk about why React decided to deviate from how we've worked with events in the past. There are two reasons:
- Browser Compatibility
- Improved Performance
Let's elaborate on these two reasons a little bit.
Browser Compatibility
Event handling is one of those things that works consistently in modern browsers, but once you go back to older browser versions, things get really bad really quickly. By wrapping all of the native events as an object of type SyntheticEvent, React frees you from dealing with event handling quirks that you will end up having to deal with otherwise.
Improved Performance
In complex UIs, the more event handlers you have, the more memory your app takes up. Manually dealing with that isn't difficult, but it is a bit tedious as you try to group events under a common parent. Sometimes, that just isn't possible. Sometimes, the hassle doesn't outweigh the benefits. What React does is pretty clever.
React never attaches event handlers to the DOM elements directly. It uses one event handler at the root of your document that is responsible for listening to all events and calling the appropriate event handler as necessary:
This frees you from having to deal with optimizing your event handler-related code yourself. If you've manually had to do that in the past, you can relax knowing that React takes care of that tedious task for you. If you've never had to optimize event handler-related code yourself, consider yourself lucky :P
Conclusion
You'll spend a lot of time dealing with events, and this tutorial threw a lot of things at you. We started by learning the basics of how to listen to events and specify the event handler. Towards the end, we were all the way in the deep end and looking at eventing corner cases that you will bump into if you aren't careful enough. You don't want to bump into corners. That is never! | https://www.kirupa.com/react/events_in_react.htm | CC-MAIN-2017-09 | en | refinedweb |
0
I can't seem to find the error in this code. I don't know if I constructed the codes correctly, assuming that the error is corrected, I could get the correct value?
/* Function for Right Triangle */ #include<stdio.h> #include<conio.h> #include<math.h> #define p printf #define s scanf double rightTriangle(double x, double y); main() { double a,b; double c; p("Input the 2 lengths of the triangle:\n"); p("a="); s("%0.2f", &a); p("b="); s("%0.2f", &b); rightTriangle(a,b); c=sqrt(rightTriangle(a,b)); p ("%0.2f", c); getch(); return 0; } double rightTrangle(double x, double y) { return ((x*x) + (y*y)); }
I'm using Dev-C++...someone could make things clear to me, I'd really appreciate it ^_^ | https://www.daniweb.com/programming/software-development/threads/450217/undefined-reference-to-function-variable-variable | CC-MAIN-2017-09 | en | refinedweb |
tittjrk
&fbtt.
4 JV
a' ,
roST-OrKICR AT C.UTIIRir, OK , A SUoin i ASS MTTIv
OM III (IF I'll I ICA1IUN ! HANRIROII AlKRl'K
ami jnjimMM
GUTIIRIR, OKLAHOMA, TUESDAY MOTJSL 0, SKPTKM BUR 5, 185)3.
NO. 230.
ft! ? KnTBRKD AT TUB
iw.nfciaWliiai"jS'.'i
jl
H.
1
. r,
K
.
C 11 i..l 1
I S
In till
.1 V dM
m 1 "y ,.;v.u
BL J
ft OF THE FIGHT
PEBBf
''V
IAVE3 COMMITS SUICIDE
HIS PRISON CBLL.
UIHS HE WAS Q3U JUCD TO D3ATH
Trjtslril .equrl In III" Mr? it Itirntiliy
ur.lir M.tt-ry- n I. -Hurl In Ills
' Wife th" l'illU ll tlltterly
AtliciM II M I'.rmemtors
Worn Out n i I "!Tiiiintd.
llf I.Ivji I p.
rhNvwi, Col..
1 h i bJr Urate
On.raf Mm. Jo
luiUctl suicide III
S'pt I -Dp T
the i mviit I po.s
l!nni' lt.ii ii tby, coin
lnscell tit tli" lounty
in r'atttrduV night, piest.injh'y by
Viking pot hoi i lie u.i found stiff
ttnd cold in dea' Ii Sum! iv in it mug by
tht "trusty" who has I) en .11 111 for
lllm. On "his ii'i)H v. is founil llic
fo'lowing It'll m
io tueliron . or 1) ioi
DfcnVc.ll. Col , n 'i IK ir Mr I'lcise ilo
not hold un uiiti,is u.i nw niniliis Thi
eauso of di al h ji v 1 r t I nil un follows
"Dtnd from li uuon Wor.i uut K
tousled ' uursi pitifully
T 't II ATI IIFH 111 AN IS M 1)
The corf so was iuit' colli w lion
fount Ni direct evidence of suicide
ai wihle, but the above letter tells
tho story There weie also letters to
Mis Unives, wife of the doctor; to
.Ittiler C. ws and tin address to tins
public. 1 hat tne pi isoner hail long
con tempi 1 ted taking his own life. Is
uviclcnt front the (Kite of thu letter,
August '.i, lust.
Anotlu r loltct was addressed to the
jailer It rend ns follows
AC..2 IHBt It would kn .1 ntntn busy 10
follow HtiM-ns and
uabluahin ly pcildli-i
or two, I o iner, ni'i I
daily pro.i n to 1 c n
Mass ihu 1 its Mate
the Conniitiiul Me
in r tliu lit s whlih 111!
ul 1 i the pi ors O-ie
u iit in uud tin v uru
I nus a un mbi r of
"I iliial sOLily iiml of
l so 11 IT I in vor
mideapi 1 itiou to 1 lio Khodu Island Milo
Medical soi n ty for aum n Ion
My U
that lnc
n.tliy, en 1
side of 1 1
Mrbut I
coward u i
on v 1
dear, 1
i
line tlie full riioipt. showing
iinn-el t'e 1 l.iti of Mil liar
,it tin i Malt un" ou . lur out
It All ltv will 'I'll' puhllr 810
u 1 1 en 1 I111111 iuo Mlianous
1 s 1 u N 01 Mi un I cm 110 ix
11 1 n . 11 iiilu.. him I must t iku
t 1 1 10 pi t it fur my wife uud
1 ,1 ttl uthor
I '1 H Tt III It GllU l.s
I -. I ittir In tho I'iiIiIK.
tier win h lr T 'llutdiir
Wished ; n to fie public 1.
. and 1-- a - 101 o
In .tves
11 'ii In
. 11 1
iUMTYiJAIlSlHENtTliB,
lc355322SSS5SSl
1 jHiUiuunBjaB'iLfnf' IJEwitj;
)i ysiib w3MpRMMtDiPto.lK'
SvMiaV. I Htf (pTMIV VHH 1 WOlm BWti WnV
nrvr"or 01 r . 1
tM-i
ifmhf
1
t'oUS. !Ull 01 UuyI.t I
null who never Ii
km I of dirty mi 1
end wbo waa aun
juon In mo ott At 1 n
vie found overytliinv n 1 1
lutoly under Ills tontr. 1 1
baJIUTa. too court offlcia.-. 1
the court the dopuilt
1udo nnd Uo Jury (Sui
learned that no man wa
lean he had Urit agreed
fromlaea of political prerii
eclvtd wore ireoly offci ed n
Forty years tto a ma
rminoctli'Ut . ud my ftvth
ovrt some land. 'I In- sou
Urn tvn, and pld tho t,
tho uaua) manner of t-ui
not know until louii iiftci
formod since the trial aoim
recclrol political uppomtn
a, d Hojio ar ' profsastuu j
I lie Jur.' In cis- vbere li
C). These tUinr are i
What posslb.o iiinor 0
atalnit Sfvuna baked l
nurchatsiblo jtiryf oni ul im
faeodod 300) am iitr ilt t
1 1
1 I
. 1 . 1
111 n 1
-ill t o
i Si
lau ami 1
1 niitn, a
ilit. miv
. himself, 1
1 1 Hi 1 lit ,
I 1 1 f 'IlO.M d '
I I 1 I l. - 1
1 r 1 mi"! l1 1 '
11 1 1 tnu 1 rkil
tin Inpurabli ' I
t .' trh w h I
11 1 li j 1 v '
p ki i 1
1 1 ' '
li 'in
; n t a
-. u .1 1
'I .! -
1 1 I
I JLUX 1
1 Pit 11 njr have
- in 111 itven,
. 1 tin 1 upm
il a.K ili "
noun u Il.. r
truiKi! 'i I
1 1I1 ajurtj nil
1 (I that .Oil
WllBOkS!
Informc-l of hor husbaiul's death
shortly before noon She wau at tlie
house of Attorney Thomas Macon,
who 1ms so nbly defended her hus
band, where she. has been stopping
for aoino lime past The poor woman
was deeply alTeetod by tho news, and
foi a time nobody could comfort her.
Accompanied by Mrs. Macon she
hurried to the jail, only to llnd that
the body had been taken to tho coro
ner' office. The new r of tho removal
of the roinains caused another alTcet
Inff scone, and tho poor woman sat in
a da?c for some time. Then she was
led to the apartments of Jailor
Crews, wlicro she remained for
some time, lnOhntng and crying
lailer Crews, in an Interview, indig
n intlv denies that Graves committed
suicide. He says hut? the doctor died
of a broken heart, and to Uho tho jail
er's words, "was murdered by the
attorneys for the state, who have liar
rassed the old man to death."
About tho first of August County
Commissioner Twomblcy went Hast to
seo the w itnesscs for the prosecution
and ascertain whether or not they
would attend tho trial. .lust before
Mr. Twomblny's return, tho doctor, in
an interview", exhibited symptoms
of beirg distuibcd over tho results
of Mr Twombly's tiip. He said in an
Intctvlow, that he believed tho prose
cution would bring here a lot of wit
nesses to slander him, and ho said it
would bo onlv fair for the county
board topty the expenses of his wit
ness s if they paid the expenses of the
witnesses for tho prosecution
Ah is well known, Dr. (Iravcs was in
prison awaiting his second trial for
the alleged murder of Mrs. .Tosephino
llarnaby of Providence, who, at tho
lime of 'her death was visiting friends
in Denver. bho died Apiil 10. 1801.
On April !l sho drank from a bottle of
w liisky that had come by mail from
ISoston and was labeled: "Wish you
a happy Xow Year. Please uc-ept
this line old whisky from your friends
in tho woods."
Tho w liisky contained a solution of
nrsonio. Dr. Graves was accused of
sonditig tho bottle. After one of the
most famous trials in tho criminal an
nals of this country, Dr. Graves was
convicted of murder in the first de
gree and sentenced to be hanged The
supreme court giunted him it new tiial
which was to have begun the latter
pars of this month
DiUrlct Attorney Stewim Tulln.
Attornoy Stevens has prepared a
length statement to tho public in de
fense of his conduct in the Graves
case, and gives out the new evidence
secured on his recent trip to the n.ist.
Tin most important is a letter, the
yvrt twot WH taflfiwty s tnai ono nimuay
jHTHenttJrlRjf,tlie month of Nbvcm-
Mlwtf lSMllHtiwM sitting tn the smoic
fWfi&Vt$!fflu. k Square station of
UStfMJBWrfHXJlt? in ail, 111 Jiusiou,
BiiPflLiM.H.!iMiuu' tii) mil said if ho
4 111- I i. 11.1 -A-t.A .. f.. 1.1...
iji'l J vvuillil will ii uuii; mi iiim
n t lie wiuiii give nun a ,iun icr, nun
what he wrote, ho sa'8, was ex
iutH tli same, woid tr word, as
what was sent with tlat bottle of
wh.shv to Mrs. Harntbr. When Mr.
Stevens was last, in ompany wltli
Comniissiouer Twom)ley, the two
visitctl Canton ami talted with llrcs
lyn, the writer of lite letter, at
lougth. His description of tho man
who had tho noto written tallied with
that of Dr. Graves, and they had no
doubt but that Mr. llrcslyn would be
able to identify Gmes
Although "he It ter n Mra. On.?'
liM not been flv thr p ',
oonttwits arc feo". ' . i . 1 tut
.W. tl'.1A LI- B' tn i i' coi
iiiim tho )itM moni , 'm , ,!4
probably ithut neiv irootl 1 ti.lt to
him. lie therefore ii 1 leu irrnJ hi,
life and tenia His irV- n,ml ...1 iaother
what ho was pouet-Mfi of in ordtsr
SIGHT.
si.t,tt--rTwR;
rvjn. voonHEts threatens
THE SILVER SENATORS.
WANTS A VOTE OH REPEAL BILL
The sllicr Men II. M n Coiifurrnpc 11111I
Ilcrlilo to rrolonir tliii, IlelMiln nil
I.tinc us riHsllilc'-Slr rcTr In
trniliicr-i 11 "oiv Sitbtruniury
hrliemn Tlio JlnrttT Wiirit-
I up on the Halo.
TUB L'.OYS IM DL'JE.
Tliey r Mnrrlitni; (in liiillnnnpolln In
the (ir.iiul Itnitiiiipineiit.
IxPiv.VAi'Ol.is, Ind , Sept I. In
dlfliinpollis IR in holiday nttire to wel
come tlie votonms of the Grand Army
of the Hopublio nnd their friends Kor
the past tlirco wooks he cltiiotm' ov
octttlvo coininitteo lias been nctlvelr
at work nndof an executive
iKMird composed of members
of tho Cninmorclttl club, making
the nritinfrements for the greatest
event In thu hlstorr of the city The
completion of these arrangements 1ms
been marked br tin exh'bitlon of at
tentHn to dotails which has lieen pro
duetivw of by frho best piuparotl
oity in which the untionnl encamp
incut Imis ever beonOiehl.
V
! EXPRESS MESSENGER CHAP
MAN MURDERED.
Wasiiixotov, Sopt 4 --A wnmiitu
was given In the Unll 1 States senate
Stnl lll'HnV flint (lint urn lalfvlif tuiilitt Ind
to force tho sonnto tr a roto on the wM l,orhaP l,rovo l!18 mosl''i-"-stroiis
repeal bill. The warn'ng was given
TWO KILLED, FORTY INJURED.
Dl-introtn strout Cnr Arclilont In CI11-
In lint t Six Hurt Ilrjniii Itcruii-ry,
CiNris.vATi, Ohio, Sept- 4 What
BOLD HOLD-UP ON THE 'FRISCO.
I'olleil In "llii-lr Mtonipt In t'upliire t'lN-
l-un-iiN stlf ,n lll-jlinrjiiieii Ituli
thi. 1'u-iseni-nM All llmlr l'lirlii-
liln ultiulili.', Alioiit 1,011(1
In All, 'lliKoii Mtiiinil Vnl-
lo., Kiiiu, tho Sioiic.
by Mr oorhoes of Indiana, chair
man of tho llnance coiimittee, in tho
form of a notice that ho would ask
the senate Monday to consider it mo
tion to begin tho tlail sessions at 11
o'clock each morning, instead of at
noon
"1 have a sort of an old fttslilonctf
idea," said Mr Voorln-ts in giving the
notice, "that wo shotiil always Bttb
mit to the will of the najoiity and for
Unit reason, I will ask for a vote of
the senate on this preposition."
Instantly tho silver senators con
strued this into meatiug that early
meeting and long se-sions would be
tho rule and that whin the spf echos
of the opposition slioulrl be exhausted,
11 demand for a vote would lid made
and if necessary an appeal to cloturo
rule be made Asa result thev held a
hurried confeienco nnl their plan of
action will ,be to always hnvo tt man
prepatcd for a bpoecl; s'o that there
may be no dangerous Intorvnl in tho
debate, similar to tint of Friday
afternoon. ,
Tho sil'-er men will lot allow a vote
to bo tnken until tliey nre unable to
hold the floor any longer. That is
the point to which the light will
finally bo brought, for the men from
the silver states will never consent to
a vote beiug reached in any other
wav. It Is doubtful if the Democratic
majority nouid cvei consent to the
adoption of a cloture rule, no matter
how lony the talk may be strung out.
What the fieo coinage men seek to
accomplish bv delay was indicated by
Mr. Vnuco in his speech He urged
the advoeatis of trco sil 1 r to hold on
a little whili 1 mger and spoke of tho
iinpioti'incnt already goin , on in bus
iness timing lout I lie t 1 ntr. Uho
silver men hope tlltft-tf th , can delay
a voto long enough the 1 oiiditiou of
the country wlU bkwjufticiently im
proved to weaiten tho demands of tho
ficople upon the senate, for action nnd
11 this way they ran Anally got tho
advocates of lepeal to consent to somo
sort of 11 compromise.
A NEW SUBTRCASURY BILL
Introduced liy Mr. I'rlli'r Hut Not Ito.ltl
tin 1'roilsluiK.
Washinoon, Siopt J. Sonator PcfTor
of Kansas has introduced a subtreaa-
ury bill. It was not lead but was ro-
1 the judlelary oonnilUee. It
street cnr accident that over happened
took place in this city tst evening
An Arondalo car packed with peoplo
dashed down tt hill at frightful speed,
left tho 1,1 aelc, bioko a telegnph polo
and sho. lto tt saloon, w ret king both
Itself and tho structure it struck As
a result of the collision two people aro
dead, six aro injured beyond recovery
and nearly forty more are hurt, many
of them dnuguroiisly.
RESULTS OF THE TORNADO.
I'ully linn Tlminuilit l.li I. ml In tlm
Storm Sucpt Illitrlit.
Cnm.K8iox, S. C, Sept 4 lteports
from the storm swept distiicts In
crease in horror I'ully 1,000 lives
were lost In nenrly all of tho
churches of Charleston collections
wcto taken up .vest onlay for the suf
ferers by tliu storm, and a huge sum
was realized. Thu pastors of tho
colored churches have tailed a mass
meeting to raise a relief fund.
'I tin Mute Until. Tin.
WsiiiiT)N Sept 4 Secrotnry
Carlisle, Speaker Crisp, Hoprosentu-tlvo-DeWitt,
Warner of N'ew York:
Representative Hall of Missouri, nnd
Represent itive Oaltrs of Alabama,
held a conference at tho treasury de
partment looking to tho repeal of tho
ten per cent tax on state banks.
It was ono of a series of such confer
ences which hits had this subject under
consideration President ( lereland Is
represented us favoring tho proposi
tion to repeal the bnnk law if a meas
ure can be framed which will ofTsc1.
thu diffcultles in the way of a rehabil
itation of sta'c banks.
Tho plan suggested is to repeal the
tnx on btitte liiuiktf and provide fin
them tt uniform currency printed and
issued by tho general govnrnmenl
un-u uu 1. ue eiiiHBcs 01 saie ami ac
ceptable bands nnd securities properly
guaranteed by statesornniniclpalitioR.
AroliliUluip Irelind Sptiki- J
Ciiicaoo, Sept I. The world's Inbor
congress last night was crowded. Tho
chief attraction was an address by
ArchbiRhop Ireland. The famous
archbishop was enthusiastically le
ceived and frequently his remarks
woro interrupted by round after
round of applau&o, 1
thi
1 tn t
.V. I
The i.
it, tl.
'all 1.
lUc
Jawla. 1.
St !-, k
1 sec 1
ot tcveral
titttltut on
amend I
i iltii
that they might live In comfort
Crt ;iol:c ooncrbss.
of th tieasmv
I tnt uid w c to the .le jommii
menu t n,000,()0 in pajx-r for ea;h
loO.oOO InUabitautm, or in otber words
at th rate of 960 rr onpliav
nils money '.
in
"sou
or th
lud .0 und jury. It aln.i, o.al'ud to r tKi
? , , L .- ... ,.n1 .1 . ..1. Hit.. .... .1 tl.,1
mm lull wiwi; m 1 . 14 . . 1 t , ti. ., - ..
he wim ".Jajlnufor mi slake
The eupnune to.iri 11 luiurjuo tn thf noi
emphatic, iwtlnii' il inttrr woids id
nouueed the trU1 unt ir 1 t and din d
and ordered 11 n mil ' imt. nmr
found 1 roof Unit n Uu ira w.
outtoouly one Yiiiiuss 11 1
iltv.t of aoiuer m i" n th n
sounded -as to li h r In . .. - ''
In r Buuimonotl m iu , 1 - tl ta lav-ir
,-le-.cu, nad thla s .line I m.mis ball
Ilia The iurv w is 1 11 kr cti.u.i "' mf nf n
unrf itllliins ul 1 11 ttl til Mld 1 . '11
hunt b ono 1 1 tin n 1 . ' 11
-"- . .. . .
ana arounu umu 1 r u ui 1
jTor m ntU i.nd m
littlo, ieraed n '
whlata eooertaiMi 1 1
have tow not h ii
t-tevena. in tin 1
lie aeknowlodoil '
and control to d
dareaolda.Moai.
tut .1 new lrl.il 1 '
tiauMaVtM to lurulsi
for. od to remain 1 i
uu try aummor until 1
.iir.l l b.okon dou
'I r. w dmtriit r 1
taclji aupolntcd. hut 1
relrti HUi more, ani
raacia aa he J ' or.'
l! o 1
INVI "T . m
ect wrltv
in m 11 '.n- ' lncas,1)
1 !, I 11 1 I m,uisiil In
in I 1 titll 1 . e. lilt liy
1 1 111 vinous una
I 1 t lonn r trial md 1
orlc IielcjTKte i'ull
train and l Ki i.j.
. i wmmittee
ou fur the thlu. con.
. vening at lhc Ujnd
. . chairman .,, ..,
ratlfittd the work dono b . , .ri.oir
selected tho following olll . ,,
Uiatrman, Judge Vf,..;"";
to supremo
Profoaaor
unt
l .1
1 11
fin
Pan
porui
4
SI, f1-'. t,
tenner bil s A
h deliver rrtTi-:'
and no Interest 1
its use b' tlie
The ttati s recc
prohibited f.om I
an interest t liar,
to be n.strlbnted
, t!0, etc., lemtl
i of tli!
st
.rl
t
ro
(
cot 1 1
Ja it-1
t-inp 1 ii v i.retir
fill I. US ill AlVfcW-
, 91 .Mt...
l. tittkit, !, (tolotiil.ia w' -s.
'"V I . ' .' 7 .'"r".- ,rWn N,
ft.I. ...ii.j 1, .. ! II 1111 T ,!- ."- .. w ..
N'1 "ii
I IIiCkjIc'mi
ner
it llJW
of cn-.l'
1 li id nJi.i ri'l to
a nun p I 11 ' '
t ' tin mil' .
11 1 uud o in1 1 11 ion
I 1 t-d. anil imi sltt'o
I . I'uu retulncd
1 1 d 1 huv Lh . ti I
,i) iron h Ut n In t
II t at s-, to il Qii
1 1 i mi inhiic
1 1. ti iniint SI. icti
1 ' . t0 up si
1 u ii as the jutln
ii in t what 1 t
to 1 At the prvi
li 1 un 1 oat to
f tho co uimi i " liooawinu
K
Krancia
i.eilurnlrul0,'or
night w.ilUnl . . it 1 oft the aiuruny
of the ti im auJ was klllod. P,tfor"
sence was not dlscorerod un "' u"
, tl u ...ftl.M1 "Ills I"" ..
'itr ti i' l'a.
lani1
lout
ody waa
win
H
flril
gies
peal
wUl-
la'
bten.
. . - im.1 t nti CM
MaeBeaf. ,,, .,.,. to ii. .11, . uiiu a tr 1
la MB - . - . . . . , . lw M..i..,.l
"""M lie attor uauix .iu,riT w..,. .
IIJmi H ill thm cijol and dot'j '
PoSfm' . 1 i prou.Uly Induce ihi.n
I'onup
1 r J
1.11
Im 1
. t u I 1
a I 1.1
I im
I Uu.1"
.iJilirti 1 1
iiuiri ThnnhoJci
itfl'SCsf .'! at
v TiiiM,rl l v
atiw&uu u
ayaaillTformed i'
3Horaui niditli
JrtStBh
IftthcrV
,fs8IlI, "
J
.1, 10 niinaienco s.i 11 ll
he 1
to
ue
,1 o( lids whole inis tte I
.11 I 1110 lallalran.eth I I I
1 ituiilnuo the hi-lit' I tl
,1 , . 1c al 1 ,hl I
1 11 inn ai. 1 ud 1
1 ' 11 i not lur I I ,
1 .''ir is enr d.s.rai a
wi r 01 in tui.1 (
... lurli iHll Kll IH
T ! "..."u "'" - ,rn ,1, .
Illl'lUiiwa ..--- .-
,w "' V4rat Kiaeti
n,sriiif ,. . -Ou
fTortt n . ih ra
i In? iinwtc1 iM
, . 1. ' . !? vt
I ) 1 aio'hpor' ot
has luadt. 1. n a r'jlatl
rlso ' c'riuuu.' 8nf1
taken i' e m w
the -t 11 ito -ii
COUltll I j 1 1 ill' . .1
inatnUr 1 jh S
bill to it.o uu.. yi p. 4m . pared a
Oil L.3 W.
of tho
t i.i nrtra.
0tc e ro-
iw under
Vw York
h in per-
U lias
itM.-r in
ek, t .-
lion in the lious-
trod ui-
11
lor IrM ahldt,
WAitHitoTitN sept t- Con
Georne W I itiiian of unnr-'ercssmaii
chairm.1' .' fie cunurtv''' ,nc new
IliioiahiaudaBd twit
anions " ' -,,,
..inner notoi' '
111 t red of it 1 ' '
Vlort ne m n!niun " -
au. IP la-t; ., '
- ...... tr . ii ineui
fflfor ,uIU'rW'IBOli "
J . 1 .pibeexpenoiua.
f " V" '...ill. .vDubrlKn
VZ: m.
or deed ''
j write
it the r c uiut -w -- I -1..
ierlo.1 'l' I '-' "" Vld
iind icre
1 1.1 i'P I up
,. 1 11 1
P'
iui lied
lit i
V uJtff 1 I
chant 1
pared
ante
'rf
c
n rci.
COUUtl Ii
me 1. -i atre on
jriu 4la cries I as pre
tt ulcli will rcceivt a favor--t
ti m h , oonimilico, pro
't.- 1 er vd'nlssion to Amor
t 1.:' oi ps built in foreign
f'rfjf-i-ree of
to le iliHrgfd
eneial piveriitne
u g tlm moi ey
tli' i v vs o!
.'.I'.i
per unnuiu. Tho presuici.t of '
United States and national treasurer,
onescuttor and tiro intuibe-H nf
tho house of ropiesenlativos uie
to be a committee, to see
each state shall choose uomm tactions
to givo bond for secure handling of 1
money' rccoivod, tho bond to be ap-1
proved by tho governoi of tho state. I
1 his iiionoy is not to be lent on landed
s ctirlty of less than an undoubted j
viluo ofSJ.dOO for every fl.OW iasttcd
aod no 0110 pel sou is to loceive more '
th.iii S'J.OUJ. Corporations uto not al- 1
lowed to loud the money The time for
which money is lent is sixteen years,
but ono fourth of the total amount ia
to be naideverv four viinm lnimiu.i m
to became duo iiniiuaHy ufter earning
it No fees or couunii.iau8 nre to bo
charged to one soliciting or procuring
a loan. All lands and improvements
forfeited for non-payment of principal
or interest aro to go into public do
main. Other money than metal now
outstanding is to bo called into tho
troa ury and destroyed. The aecrc
mr of the treasury is required to
pn i 5,000.000 fifty-eon t bills nnd the
sanit number of twenty-fivo cent bilU,
to b. Eold by postmasters.
Amendment bovouteon prohibits the
OawKnii IC1111 , Sept I. Three men,
0110 masked, hold up 'Frisco passenger
train St. 2, at Mound Valley, Kan.,
nt 3:13 o'clock yesterday morning, shot
nnd Instantly killed liprcss Messen
ger Chnrlos A. Chapman, lobbed near
ly all of the passengers and escaped.
Mound Valley s a littlo station only
fifteen miles from the Indian Terri
tory line. The bandits aro now pro
bably safe within that retreat of out
laws. W hen tho train pulled into Mound
Valley two of the bandits boarded tho
engine and ono remained upon the
platform. A moment later tho colored
porter stepped from the train to usslst
it lady in getting on A Winchester
was thrust into his face and ho was
told to throw up his hands. Instead
of complying he 1 ushed tho woman
into the car and locked the door
Conductor Mills, knowing nothing of
the potter's experience, came forward
from n rear car and met with tho
same locoptlon. lie, too, refused to
comply with the icquest to tlnow up
his hands and ran back to the sleeper.
Meanwhile Messenger Chapman had
loft his car, whether to escape or to
notify the passengers will never bo
known, for he had gone but a few
yaids when ho was . iscovored by tho
outlaws upon tho engine who opened
lire upon him. Only two shots were
filed, but one ball from n Winchester
crashed into his brain and ho stag-!
gored and fell beside the track, dead
Then the outlaws commanded the I
engineer to pull out and mil until he
was told to stop One mile and a half
down tho load ho was told to check
up and dismounting, tho men pro
ceeded to rob the train In killing
I hnpmnn they had shut themselves
on' of the Wells-Fargo safe, however,
for it was locked and successfully re
sisted their blows with a coal pick
Foiled in their nttotnpt to loot tho
safe the bandits turned their attention
to the passengers. tth tho e
ceptlon of those m tho sleeper, uvorf
mun an I woman "was robbed. Montr ,
watches, jawelpf, hits, coat, ati'i
oven a bottle ft wlilsky was taken
It is estimated thai fully 31, 000
in 0,11) and va liable ymif,
secured. Then, learlnf the tram, '
the men iliappiurfd in Hie darkm c !
It is probable that tin 3- had li j s,., ju
waiting and rode for the ter-unty
Tho train va ran lml t M nd
city, wncro 1 iiipiuau a uenu i a us
recovered, anil then
journey. io unTurumnn iiiesarng
lived at .Idplin. lie va 4 vum nl,l
and leave a wife to whom l.i, waa
recently marri.srt
hen tut ' 1 1 1 I't-acher) this place a
poTtSc wa 11 tul up nnd started in
pa suit of b ul'. . .- imi .en is
llt.le hope tnat th y wilt l i-in mred.
c lefof police of I'al.tUi. .. a- I the
llt'gro torter of u 1 tho iraiut .ad w, re
armed, and they offered no lesistanue
vhaterer
All of the pa-aengera interview etl
aay that the lohir dianlaved a ennl-
for I iiited States semTtor "ini aSSf '2. WM R"PU'- remarkable and
("rrirm vj
OUR SHOES TALK
nntl ttie nt)caW 1unn s of misi nll
fa ItraiKl ttf f)Miuit . mi ih t
ir mir imr. 11 dmsn t p i , -. .-u u ,i
fH)txenr lien tuir-. m .il m m'mh
iiU'aHanl as a at aiion Our ii U im It
otir in iten an- u11, cinu .nut mi It
01rll w
1 mi-i 1 1 Hi. (NKkMlMMik. Don'i
1 w I Hi th. th il . 1 ihiikmI) tu
"in t' i 1 wii it 1I1 me tl4
'1' tl! lis Hill lilt 111 s Hid t
'u li U u .air slim s in.1
ear iiothhir t Nr
EISENSCHMIDT & HETSCH.
FAlrlF
U jL Sl J JimI 2Lm.a
If
109 HARRISON AVENUES1
?
Everything In the DRUC
T
WALL FA1JRH
Prescriptions Filled Day or Night
CiUl .U)
AT GOBI ,
A, G, HIXOH, Prop'r.
W TI I 1 l'HONi; CONNKCTI0N.-W
ich
Sece3nd Hand Stor
5-i
Jo
Mall war M'rnlt 11 1 Strrator.
8TRf.AT1.1s, 111., Sent, 4 On Satur
day n glit terrible tf-eek occurred in
. Jie otaktriA ot tin It , -.,, H j In
, o" itnJ f- I the d alb .Vr..v' v ' 'uw '
seriiius 11111 lea u a ilc?en n nr'i r '
sons. 'J lie iilino s 'S liry ai d Nor b
am branch ol tha bin.uig.ua nt u
tbrougji a bridge
IiibiIU' Allnae.l Aniblttiilt.
Toiek, Kan. Nipt 4 It is said
'S21JH-ts4jhtv- t'r lflfiji9will be a ean-
ui'iuie iur 1 linen states senator
luii .' succeed W IVftVr, and as a
ate iiing htone to Jus Miia tonal aspira
ti'm.wUl le a caa tdate for tho lie
pnl icau nominat o'fYor governor next
)"
1 ho Hanii' Hu.o (ilir I'raiposU.
I.ommiw, Sept 1, That the homo
rule bill will be re looted by the house
that 1 of lords in short order ia certain, nnd
that an appeal to tie country will
follow is equally cerltin, but that Mr.
tilailstane will live to lend it second
light for Ireland is 114 probablo.
New Goods &taH3mi-MGCi
Seeour Gasoline Stoves
beat. Sold light DOW
STJS pairing oi Gas.Hii,.; bt
A. H. RiC i
BigtiWi
r
' ialt.
:rOMD,
Fi"-
R
Couiproiiilaeil Witl llepniltorj.
Kkvaiia. Mo , Sept i.' It is reliably
leported that a couiroinlae has been
made lctwoen tho dree tors and de
positors of tho defuuc Hnrtley bank
ing company of .larva Springs. Tho
diioctor have turned over $l?,tKM) to
pay the depositors.
I! imi 11 tlier to the Ltopln'a I'urty.
Dkkvkii, CoL, Sopt 1. The Kocky
Mountain News pubiihas a letter
from T. M. Patterson its propnotor,
now in Washington, announcing its
political allegiance lurunftor to tho
People's party on aoeoint of the silver
issue.
import 01 any puuiic money in it nre '"mu,u' ruining war at 1
vate or incorporated bnnk other lliiyi jl l"t been suuaUiiliuteil
troasury or sub-treas-
M
1
I lia
hall
1 i 1'
1 U"
1'
plllill
.lure
ltni tllv a
4:l Vuo 1
tie nil 11
MUttiknu lor 11 lliirelur.
' vMHIiit Mo., hept 4. - otspXen
".j sjbea, a well known farmer, was us-
"" ' taken for a buiglar ten miles soutlvf
nut ' bedAlia at 1 o'clock yesterday 1031,.
ill '"'I ing, and was shot 111 the heat ly
J. nank Hollaway, who Is also a fira-
li,, mi ,ir. Tlie woui'(tis not jaiai.
.1 a-o.-,
h la
'. c,
I'eoble Ml ml ml, lluv
VuKI
inst
' ,i' K.cip,,
WAy.VM.n,rt 7t i.
1 to for
juiightj
y i&capi
d 00 .11
safely
.-pt
i.'St.
lntnrt.t.l .1. ...
tnlghiandnn
ruction. it
iwthi.werliall (aifen
.lb
tho national
uries.
Amendment eighteen provides for
lb,, fro., pnlnAtr,. ..f l.nl, rv.1.1 .! .it '
.... .... bW.H.t-U WA "U.l fUll WUll ftl
vor and in ordir to carry out this
great ork additional mints aro to
bo established near tlm uiines.
AliienditMMit ID prohibits aubtreas
urcrs from buying gold or silver or re
ceiving gold or silver for deposit and
issuing substitute money therefor.
Amendment 20 divides the national
tieasury into two separate depart
ments one to receive all revenue due
tho government and disburse, and the
other to issue, and distribute money to
the states and rcdeqm mutilated bills.
At WurU on the llulo.
Wasiuxuto.v, Sopt 4 The house
returned wearily to the debate ou the
rnies baturttay morning, not over
moiubers,bjjtng on th floor wjieji
speakeryrqppo.d thr gavuL
,' M. Itoiluct T Shot.
1 A Ills Sopt 4 J. . 1,
Ing u m , n,v ijj,, , ,, , I , jr , 1 ;
,1
Trooin Orileroillto Itohjr,
HLKiiAiir, Ind, Sit 4. Recent
rumors of 11 coming war at Uoby have
and the
Klklmrt and houth l!eldmlllta;y oom-
paiueo liavo beon ordatod to lontle
vousat I,aportu. 1
Mlsmiurl r.iulllo mot Hurueil.
llooxvit.i.B, Mo.Sojt. 1. Tho Mis
souri Pnclflo depot liurned to tho
ground at 9 o'clock ye erday morning
It Is suppos d to lutv loen set on bro
by a trump. Loss, ostluutod at Sl.OiH).
NEWS NOVES.
actiili'lfe oirVn nd at fW "wwAacss.
THP EOUAL SUFPclACIBTSi.
Xliey Adopt n l'latrncin for the Umiias
CumpaiKii at 1891.
, Kaasab Cirv, Kan., Sspt A The
uausus rtqual suffiage oiivuntlon bus
adopted tho following platform
Wlicroaft, The Momon lr. copveoiwn as
semklod In Kansis City, Kan., rcoonlzo und
belluto that the sutimtsulon of the equal sut
fra e 11t11011d1ne.it at tho present tlmo l.s on ev
olution nnd not u ruiolutlon, that It Is Btuiply
one liure step In the progress of il II gororn-
linorii, und thallt Is in 11 eplrlt or holpfulnuss
iiuu uu. uuiu iiiimui iuii nu uai( ino support
of tho men of Hits uiubo thcruforo bo it
KiHoluil Tuut inasimiuh us thiroaro intho
auflriko rinks women ot 110 pollllcal parties
and uomen of no poiitkul affiliations ar.d aluo
women of nil churches and uomon of no
church nnd wheroas, those-m omen urea unit
it thoir demand lor the ballot and aro norkiiu
toji'thor tor their common causo theroforo,
bo lt further
Itesolted. That wo declare It 10 bo tho do
tormlned polio) of tho Kansas Hquul hudravo
Misooiatlon to I'ouline ln uork for the umetiil
luent Htnctly to arguments and propupnmla
fur tho enfranchtsoment of nomea It Is
not expectod, nor will lt hi nskud
of hi women of tho seiT.il partlu.s
that Vc; khould coaso their actllltles und
their .iiulous work for their respouthu pur
tie tot imi most emphatically Mutu that all
xpouUora uuil workers, under tho ausplcos of
the umnndmeut campalun committee, shall ro
train Irom ar.'Uinont for or rercremo to their
party Issuea Inasniuch as we reconlie tbo
present 1 rials und t.ie slgnlllcancu thereof nnd
the relation or this moiument to the political
partle thoretore do It
Itemiived, That all pollttcul p&rtlus of tho
stale hall bo and hereby aro asked to embody
In tbeir county and state platforms expres
sion faiorlng the adoption of tbo pending
amendments
neaolied. That we extend to the Republi
cans l'opull8tH and I'rohlbltlonints of thoiu
lounties which haio adopted uneiiultocal
equil NuSraxo planks In their platform our
hoartT thanks nnd congratulations upon their
sagacity aud iiroarounho ioaitlon.
The rally ilomil this oicnhu with on ad
draw by busau II Anthony, Mrs T J Smith,
Mrs. I.eae and others
P
UHvUwm
HEATJQlAlUiV
fci)IL88
AND SPXs?$rr-
SADDLES FROM $
yevrm
Td!8iu
Chi
uu
1115
vee our Mai 1 .ir
inembfir the number,
111,
11., .
IiaillaOil
CAPITAL CITY BOOK
fi Rniilaa m
7 ,1 c m m a k u trf n n ii. r, -m
uHaaVav
TBftf
itn'il nu wtrtpryu"
UVl.lt.l l 11 lilje f
SAUNDh -c"
c
Hon. J. Proctor Knott is spoken of
ub a sueoesaor of Mr Wount as Minis
tor Ui Hawaii.
The Hitltimoro and Ohio railroad
has decided upon a tonper cent reduc
tion of snlaiios of employes receiving
more than $130 per mou,th
St Joseph,
orso on. rue
thej Ula" l0U Pr 'nontli
tlTifSlP-, was sneedfnff hU ih
lako road the animal belted, throwing ," '
lion aft and orushlngblji 1 skull. lJ
BEADLEJS BLOCK.
I
A full line of Books. SuavnrhriS3t
mb, bSsbKsbbT 5i
aavrvr
Supplies alwftjf tfll hi
H.A. BOYLE, Propri in.
CUv
LOOK HERE I
I Am Here
Attempted to Strut 11 l'ortuue.
San Kit vxcisco, Cal., Sept. -1 The
burned steamer San Juan, on ior lust
trip from Hon? Kong to Manilla, had
on board 200,000 ounccsof silver wortii
about SS50, 000 all of which disappeared.
No signs of it can bo found by divers
working on tho wreck Chief JJa
gincer Webb and a number of ie
bluuiuor's olllcers huve beenni.-i.l
and great oxeltcrooM pr. vuils ,,1 v-
ico. thoJMJillllplneK '' - W '
over tlie tinfeovcry of
was fuunu 1 m m
Sin wnia, "i. , .
iiiua
If you uro in want of tbo Colobratod CJgsfHlMU Sale,
o ind liurgiar rrooi, .
iMn want of the ColojjaJ Aowrtooi K- -
or I'iri
If you aro'
Homo Sewing Mnohinty
If you uro in want of MF
King of Scae. S
, awl Trtcycl'eifc a,
Kowir, th Oriel,
Warwick. tVaiPM King. WeTOftfraM. t
Traveleu thJ&r Vail an4 the Boad Qtv
rotalJ,comoafla get my prices, .t Khi 1. .
to Stay!
r'ire or Huiglar Proof fefli
ir"lg9
gt-H. k n .lehen,
JrCCl
1UD
SHIVELY BROS
- -aIlM ' m m
l..l nnrllLSnn.ll..liirl mid , ".' " V. - UOlie, U
one ,, h'8w;;"-:rp' r W"U' banjlecoPIc
i ,.,. r' -nicin. Cut out this ad ver-
, i . .einent and send with your order.
tt.ia.iAun r i
& vanHIie BEST in tlie CJJ
vt 'Address The PerniH.' t f-ouls, M
l'lrst-aluss livurv bara ,(Sslar a4 Or
improved facilities for rfyini pas-euers bet'
always ready to start Ht ki i ti
1h t i '
. r
LJ2. riaJLaJOsiHBin
B 1 1 fWll'll UIBBBHBBJIi. lr"
tarn. ?
m
xml | txt | http://chroniclingamerica.loc.gov/lccn/sn86063952/1893-09-05/ed-1/seq-1/ocr/ | CC-MAIN-2017-09 | en | refinedweb |
123456789101112131415161718192021222324252627282930313233343536373839404142434445464748
#include <iostream>
using namespace std;
float moneyAmount(float quarters, float dimes, float nickles, float pennies)
{
float cents = 0;
cents = (quarters * 0.25) + (dimes * 0.10) + (nickles * 0.05) + (pennies * 0.01);
return cents;
}
int main()
{
float quarters = 0;
float dimes = 0;
float nickles = 0;
float pennies = 0;
float cents = 0;";
cents = moneyAmount(quarters, dimes, nickles, pennies);
cout << "Your total is: $"<< cents << endl;
system ("PAUSE");
return 0;
} | http://www.cplusplus.com/forum/general/99451/ | CC-MAIN-2017-09 | en | refinedweb |
Dear all,
I have tried to install SALOME (InstallWizard_6.5.0_Debian_6.0_64bit.tar.gz) on Kubuntu 12.04 64 bit. But I did not succeed unfortunately. I am hoping that someone might point out what I am doing wrong or what I can adjust to make SALOME work. I will now explain what I already have done and which errors I came across along the way.
First I extracted the tarball then I cd to th extracted directory and issued:
sudo ./runInstall -g -d /opt/salome_650
In the wizard I selected:
- Installation type: Install binaries
- Installation platform: Debian 6.0 64 bit
- Installation directory: /opt/salome_650
- Choice of products: everything
After the installation completed I got a warning that several packages were missing:
RECOMMENDED: libicuuc.so.44, libicui18n.so.44, libicudata.so.44
OPTIONAL: libcppunit-1.12.so.1
Afterwards I ran the post-install script to set the environment variables and tried to run Salome by:
cd /opt/salome_650/KERNEL_6.5.0/
source salome.sh
cd bin/salome
runSalome
I got the following errors:
Traceback (most recent call last):
File "/opt/salome_650/KERNEL_6.5.0/bin/salome/envSalome.py", line 27, in <module>
import setenv
File "/opt/salome_650/KERNEL_6.5.0/bin/salome/setenv.py", line 26, in <module>
import orbmodule
File "/opt/salome_650/KERNEL_6.5.0/bin/salome/orbmodule.py", line 31, in <module>
from omniORB import CORBA
ImportError: No module named omniORB
Then I tried to source by .bashrc and run Salome again by
source ~/.bashrc
But then I end up with some other errors:
File "/opt/salome_650/Sphinx-1.1.3/lib/python2.6/site-packages/site.py", line 73, in <module>
__boot()
File "/opt/salome_650/Sphinx-1.1.3/lib/python2.6/site-packages/site.py", line 38, in __boot
raise ImportError("Couldn't find the real 'site' module")
ImportError: Couldn't find the real 'site' module
If any additional information is needed to aid with this problem, please feel free to ask.
Hello,
You can try to source env_products.sh instead:
cd /opt/salome_650
source env_products.sh
If it still does not work, please post the result of "echo $PATH" after the source command.
Chrys
Dear Chrys,
Thank you for your reply. Before reading your post I installed the missing packages and reinstalled Salome. That worked out fine. It is up and running now. However the thing is that I need to source the shell script each and every time I boot my system. So to avoid the manual labour I put:
source /opt/salome_650/env_products.sh
into my .bashrc. However to my disliking, the sourcing takes quit a bit of time. Hence my terminal start-up takes quite long, which is quite annoying. Is there a way to source only once and to make the results of the sourcing permanent?
It's not a good idea to put the source products in .bashrc, since all your applications will be started within the Salome environment, which may contain different versions of the libraries needed by your other applications.
To avoid this you can create a file named salome with these lines:
#!/bin/bash
DIR=/opt/salome_650
source $DIR/env_products.sh
runSalome $*
Then make this file executable:
chmod +x salome
Dear Help
I have installed SALOME (InstallWizard_6.5.0_Debian_6.0_32bit.tar.gz) successfully on my Ubuntu 12.04 LTS using command line :
Trying to run it with : ~/salome_appli_6.5.00]$ runAppli , and I got some errors :
*********************************************************************************
:~/salome_appli_6.5.00]$ ./runAppli
runSalome running on lh-cq61
Searching for a free port for naming service: 2810 - OK
File "/opt/salome_650/KERNEL_6.5.0/bin/salome/runSalome.py", line 1020, in <module>
clt,args = main()
File "/opt/salome_650/KERNEL_6.5.0/bin/salome/runSalome.py", line 946, in main
searchFreePort(args, save_config)
File "/opt/salome_650/KERNEL_6.5.0/bin/salome/runSalome.py", line 876, in searchFreePort
f = open(omniorb_config, "w")
IOError: [Errno 13] Permission denied: '/home/lh/salome_appli_6.5.00/USERS/.omniORB_lh_lh-cq61_2810.cfg'
The same message error gotten when I added this line :
alias runSalome='/home/lh/salome_appli_6.5.00/.runAppli'
at .bashrc configuration file.Before installing Salome 6.5.0 in /opt it was installed in /home/ & with an alias at .bashrc configuration file. it was running correctly with now problem.....Some one can show me the way to finish building my project with Salome.
PS: When I excut : sudo ./runAppli .....Salome is launching.Thanks!
Sorry for messages, connexion problem !
Dear Chrys Lides
I also had this problem and could overcome this by your point. Now I have a question what is the difference between the instruction salome told by your instruction. In readme file :
" To set SALOME environment just type in the terminal window: cd <KERNEL_ROOT_DIR> source salome.sh or cd <KERNEL_ROOT_DIR> source salome.csh where <KERNEL_ROOT_DIR> is KERNEL binaries directory, e.g. KERNEL_6.5.0 for the 6.5.0 version."
Special thanks
Legal Information | http://www.salome-platform.org/forum/forum_9/128718636 | CC-MAIN-2017-09 | en | refinedweb |
XML - Managing Data Exchange/Business Intelligence and XML< XML - Managing Data Exchange
Contents
Learning objectivesEdit
Upon completion of this chapter, for a single entity you will be able to
- create a report specification entirely in XML for Cognos ReportNet
- update a report specification in XML format.
- identify four main sections in a report specification
IntroductionEdit
Every report created in Cognos ReportNet has a specification that is written in XML, so you can customize it using XML editor or create a report specification entirely in XML.
Report Specification FlowEdit
After you save a report and open it again, the report specification is pulled from the content store as you can see in Figure 28.1. When you edit it, the changes remain local on the client machine until you save it. When you save the report, the content store is updated.
Figure 28.1 Report Specification Flow
You can see a sample of web report in figure 28.2 and this report can be generated from XML file;
Figure 28.2 Sample of a report
XML in Report Specification StructureEdit
A report specification consists of four main sections.
- Report Section
- XML Tag:
- <modelConnection>
- Query Section
- XML Tag:
- <querySet>
- Layout Section
- XML Tag:
- <layoutList>
- Variable Section
- XML Tag:
- <variableList>
At minimum, a report specification must include the <report></report> tags, as well as authoring language and schema information.
The specification header in Report Section includes information about:
- authoring language, “en-us” indicates American English. You can use other language than English for the report
- namespace :
- package name: GSR
- model version : @name='model'
The query section includes information about:
- Cube elements are indicated by the <cube></cube> tags which can contain:
- facts (<factList></factList>. Country, First Name and Last Name are the facts.
- dimensions (<dimension></dimension>) consisting of levels(<level></level>)
- filters (<filter></filters> consiting of conditions(<condition></conditions>). Country is the filter for this report, which is equal to Germany.
- Tabular model is contained in the <tabularModel></tabularModel> tags.
- Each tabular model contains data items (<dataItem></dataItem>) consisting of fully qualified expressions (<expression></expression>)
- The query section of a report is contained in the <querySet></querySet>tags.
- The query section can include multiple queries, each of which is contained in the <BIQuery></BIQuery>tags.
Add pages to a report specification:
- You can add many pages to a report. Each page is outlined between the <pageSet> </pageSet>tags.
- Each page can consist of :
- a body ( mandatory)
- a header
- a footer
Add layout objects to a report:
- Once you have added one or more pages to the report layout, you can add a variety of layout objects, such as :
- Text items
- Blocks
- Lists
- Charts
- Crosstabs
- Tables
Specify styles for layout objects:
- You can use Cascading Style Sheets (CSS) attributes to determine the look and feel of objects in the layout.
- CSS values are specified between the <style></style> tags.
- CSS values can apply to things like font sizes, background colors, and so forth.
Add Variables to a Report:
- You can specify variables between the <variableList></variableList> tags of the report specification., and each of variable includes an expression between the <expression></expression> tags.
- We can use Variable 1 that contains a list of possible values, example value: fr for using French language;
Below is the complete XML file for the report in Figure 28.3
ExerciseEdit
The end user wants to read the report in Japanese language, so you have to add a variable for Japanese language. | https://en.m.wikibooks.org/wiki/XML_-_Managing_Data_Exchange/Business_Intelligence_and_XML | CC-MAIN-2017-09 | en | refinedweb |
Improved PHP solution stripping spaces and using a lambda for dryer code.
function stringsAreAnagrams($string1, $string2)
{
$stringToSortedArray = function ($string) {
$stringArray = str_split(str_replace(' ', '', $string));
sort($stringArray);
return $stringArray;
};
return $stringToSortedArray($string1) === $stringToSortedArray($string2);
}
Hi, here is my solution in JavaScript.
I’m sorry if my english is malformed somewhere, but it is not my native language.
This works by looping one time through both strings. Inside this loop I collect the number of occurrences of each character on each string. I collect these counts in two arrays, one for each string. In these arrays the indexes correspond to the char codes, and the values correspond to the counts.
After the loop I compare the two arrays and if they have the same count for the same characters, the strings are anagrams.
I’m assuming the strings will only contain uppercase letters and spaces.
function isAnagram (string1, string2) {
// These arrays will collect the counts.
var arr1 = [], arr2 = [];
var i = 0;
// One loop to traverse both strings.
// This loop will run n times, where n is the length of the longest string.
while (i < string1.length || i < string2.length) {
// It could happen that 'i' increases beyond the length of one of the two strings so first I check if there is a character at this index in the strings, and if this character is not a space. If it is a space it will just jump this character.
if (string1.charAt(i) != '' && string1.charAt(i) != ' ') {
// Then I increase the count for this character in the corresponding array.
// The first time a character occurs I asign 1, the next times I increase 1.
if (arr1[string1.charCodeAt(i)]) {
arr1[string1.charCodeAt(i)] += 1;
}
else {
arr1[string1.charCodeAt(i)] = 1;
}
}
// Do the same for the other string.
if (string2.charAt(i) != '' && string2.charAt(i) != ' ') {
if (arr2[string2.charCodeAt(i)]) {
arr2[string2.charCodeAt(i)] += 1;
}
else {
arr2[string2.charCodeAt(i)] = 1;
}
}
// Increase 'i'
i++;
}
// After the loop I compare the two arrays, and if they have the same counts for the same characters, the strings are anagrams.
i = ('A').charCodeAt(0); // Get the char code of the character 'A'
while (arr1[i] === arr2[i] && i <= ('Z').charCodeAt(0)) {
i++;
}
// After the loop if 'i' has increased beyond the char code of 'Z', all the counts were equal.
return (i > ('Z').charCodeAt(0) ? '1' : '0')
}
Just a reminder to please try and include a link to the code you submit in, if possible, for easier testing - this will help us test the solutions you propose and choose the best solution out of those that work.
You may still paste in raw, markdown-formatted code in your post, but we ask that you try and give us a link to your code in repl.it for ease of testing and evaluation also.
Happy coding!~CC mods
Here's my entry, using pure Javascript. My thinking is that each word or phrase is really just a set of letters, and two anagrams have matching sets of letters. My function arranges the letters so they can be compared, and then sees if they are a match.
var is_anagram = function(str1,str2){
str1 = str1.replace(/\s/g,'').toUpperCase().split('').sort().join('');
str2 = str2.replace(/\s/g,'').toUpperCase().split('').sort().join('');
if (str1 === str2){
//It is an anagram! Return 1
return "1";
}else{
//It's not a match.
return "0";
};
};
On each string: 1. remove whitespace with regex 2. convert to uppercase 3. split into an array to make it possible to sort 4. sort the array (alphabetically) 5. convert back to a string (because it's harder to compare arrays without looping)After this process, strings that are anagrams should produce the same string. If the produced strings don't match, then you don't have an anagram.
My working example is here: just saw that you prefer this so I took a few seconds to upload this
Hi everyone,
Here I present you my Python code for solving the challenge so you can read it and test it yourselves. Right after it you will find a brief explanation on how the code works and how I came up with it:
So, if you have looked at the code you would have noticed that the most substantial part is contained in the anagrams function, which contains the logic that can tell if two strings are anagrams in O(n+m).
anagrams
In order to be able to comply the time complexity constraint I assumed that both input arguments satisfied the precondition of being strings composed of uppercase ASCII letters. Then, if I wanted to accomplish the time complexity goal, I knew that I needed to iterate each argument one time at most. So, I came up with this idea of iterate over the first string and count the occurrences of each letter, and then iterate over the second string while trying to “undo” the counting I did before. This way, if the strings were actually anagrams, all my letters counts would be zero by the time I had finished iterating over the second string, and I wouldn’t have found any letter apart from the ones I encountered in the first string.
So, if we inspect the code of the anagrams function you will see that basically I use a dictionary for counting the letters occurrences. The first for loop is where I define each letter from the first word as a key in the dictionary, and I increment its value as I see them along the string:
for
for letter in first_string:
letters_count[letter] = letters_count.get(letter, 0) + 1
Later on, in the second for loop, I walk through each letter of the second string and I retrieve from the dictionary its number of occurrences:
for letter in second_string:
occurrences = letters_count.get(letter, None)
If that value is higher than 1, then I decrease it and store it back, as I already managed to match a pair of the same letter in both strings:
if occurrences > 1:
letters_count[letter] -= 1
Otherwise, if that value is exactly 1, that means I have just matched the last occurrence of that letter in the first string. Hence, I remove the letter from the dictionary as I shouldn’t be expecting to encounter this letter again while I finish my second string iteration.
if occurrences == 1:
letters_count.pop(letter, None)
Lastly, if that value is None, it means that I have found a letter that I cannot match to an occurrence of the same letter in the first string. Therefore, I can immediately conclude that the strings are note anagrams and return False:
False
if occurrences is None:
return False
By the end of the loop, if no keys are left in the dictionary (meaning that I was able to match each letter from the second string with a letter from the first string and no letter from the first string remained without match), then the words are anagrams. Otherwise, they are not.
return True if not letters_count else False
On top of that, I defined a validate_string function that runs some checks to the guarantee those assumptions I made for the anagram function inputs, and it raises some custom made exceptions in case one of those checks fails.
validate_string
anagram
Then I assembled both of the previous mentioned functions inside a wrapper function called anagram_detector, which also is responsible for translating the boolean value returned by anagram into a 1 or 0 accordingly.
anagram_detector
Finally, I added some tests cases and an interactive routine that allows the user to run the whole test suite or just type in a pair of strings for detecting if they are anagrams.
Well, that is all about the code. I hope you liked it or you find it useful.
Thanks for reading it
–
Carlos.
A post was split to a new topic: Read those docs
A post was split to a new topic: Exception handling
function isAnagram(word1, word2){
var word2Arr = word2.split("");
for(var letter of word1){
var index = word2Arr.indexOf(letter);
if(index == -1){
return "0";
}
word2Arr.splice(index,1);
}
if(word2Arr.length == 0){
return "1";
}else{
return "0";
}
}
The function isAnagram takes two arguments: word1 and word2.Then word2 gets spilled into an array of charactes called word2Arr. Then word1 gets looped through. Each letter gets queried in the word2Arr array. If a match is found, that letter gets removed from the word2Arr array. If a match is NOT found, it means word1 contains a letter that word2 does not, and is thus not an anagram of word2, and returns "0". Once the loop has finished, the length of word2Arr is checked. If all the characters have been removed (an empty array), it means the word is an anagram and "1" is returned.
My solution in Java
import java.util.Arrays;
class Input{
static int find(String x, String y){
String s1=x.replaceAll("\\s","");
String s2=y.replaceAll("\\s","");
boolean state=true;
if(s1.length()!=s2.length()){
state=false;
}else{
char[] s1Array=s1.toCharArray();
char[] s2Array=s2.toCharArray();
Arrays.sort(s1Array);
Arrays.sort(s2Array);
state=Arrays.equals(s1Array,s2Array);
}
if(state){
System.out.print("Anagrams");
return 1;
}else{
System.out.print("Not anagrams");
return 0;
}
}
}
class Anagram{
public static void main(String...s){
Input.find("TOM MARVOLO RIDDLE","I AM LORD VOLDEMORT");
}
}
Following steps are involved in the code:1. Remove the white spaces using Regex2. Convert the Strings into character arrays3. Sort the characters in the array4. Use the equals() method to check for matches5. Return "1" if they are anagrams and "0" if they are not anagrams
Since the input is guaranteed to be strings of spaces and uppercase letters, removing the spaces and comparing the sorted strings will do in Python 3:
def isanagram_slow(s1, s2):
if [a for a in sorted(s1) if a!=" "]==[b for b in sorted(s2) if b!=" "]:
return 1
return 0
# return 1 if sorted(s1)==sorted(s2) else 0
This is inefficient but compact easy version. Should whitespace be treated like any other char, the hashed line is the whole function in an even more compact form.
Intermediate difficulty version:
def isanagram(s1, s2):
A=[0 for i in range(27)]
B=[0 for i in range(27)]
# A=]
for i in s1:
if i == " ": pass
# if i == " ": A[0]+=1
else: A[ord(i)-64]+=1
for i in s2:
if i == " ": pass
# if i == " ": B[0]+=1
else: B[ord(i)-64]+=1
return 1 if A == B else 0
A & B will hold the amount of char occurencies in each string. In this case, position 0 counts spaces, others count letters. According to wikipedia, whitespaces don't count (eg "A B C" == "ABC") so I pass them, but they can be enabled by uncommenting the hashed tests.
Should list comprehension be considered to put a dent in the O(m+n) time for shorter strings, A&B arrays can be defined like the hashed examples. The code could easily be scaled to handle all the chars recognized by Python at the memory cost of 2*1,114,111 bytes (valid chr() range according to docs) for the A & B arrays.
Explanations are in comments.
Hi, my first challenge Solution on python:
def anagram_tester(s1,s2):
if len(s1)!=len(s2):
return 0
s = [0]*23
for a in s1:
s[ord(a)-ord('A')]+=1
for a in s2:
s[ord(a)-ord('A')]-=1
if s[ord(a)-ord('A')]<0:
return 0
if sum(s)==0:
return 1
else:return 0
s1,s2 = "ANAGRAMANAGRAMAMANAGR","RAMANAGRGRAMANAGAMANA"
print ("Anagram? %d" % anagram_tester(s1,s2))
It’s basically creating a list of the number of presences of each letter and increment on the first word and decrement on the second, the result should be that first there’s no negative numbers, and second the sum should be zero.
Any comments appreciated.
Link to Python code on repl.it:Anagram Tester from Anupama.
Explanation:
# 1) if length of the two strings do not match, then the strings are NOT anagrams. Stop checking. (iteration = 1) # 2) if the number of unique letters in the two strings are NOT equal, then strings are NOT anagrams. Stop checking. (iteration = 2) # 3) Convert to upper case and Sort the strings. The two strings should match at every index. If not, they are not anagrams. Stop checking. (iteration = 2 + length(string1) )
Raw code:
def anagramfn(a, b, strflag):
if(len(a) != len(b) ):
strflag = strflag + 1
elif len(''.join(sorted(set(a.lower()))) ) != len(''.join(sorted(set(a.lower()))) ):
strflag = strflag + 1
else:
a = a.upper()
a = ''.join(sorted(a))
b = b.upper()
b = ''.join(sorted(b))
ptr = 0
while(ptr < len(a)):
if(a[ptr] != b[ptr]):
strflag = strflag + 1
ptr = ptr + 1
if strflag > 0:
print("NOT Anagrams")
else :
print("Anagrams")
return;
# initialize test strings:
a = 'practi ce'
b = 'ce racpti;'
# initialize flag to indicate whether the two strings are anagrams or not.
# strflag = 1 implies the strings are true anagrams.
strflag = 0
# accept the two strings:
print("Anagram Tester")
a = input("Input String1: ")
b = input("Input String2: ")
anagramfn(a, b, strflag) ;
A post was split to a new topic: Validation for incorrect inputs
Like last week, I thought I would take a slightly more functional approach to this challenge, and like most people, I figured comparing two sorted lists is the way to go. While a lot of people just used Python’s sorted() function, I figured that if I want to make things more interesting, I could implement my own quicksort function. The quicksort I used is inspired by the elegant Haskell implementation of quicksort. Here it is:
Now for a quick explanation:
First, I define the quickSort function, which takes a pivot (the first element in a word or list) and then creates two lists, one of which contains all the values in the list after the pivot greater than or equal to the pivot, and one of those values less than the pivot. Then, it puts the pivot between these lists and sorts both of these lists using the same method recursively.
The actual anagram function is pretty straightforward. All that this does is convert the supplied words into a usable format, sorts them, and compares them to see if they are the same. If they are, then they are anagrams, otherwise they are not.
Finally, let’s talk about efficiency. I realize that this is definitely not O(M + N), since it uses quicksort, which is at best O(NlogN). This program is probably slower than this, though, due to the other stuff involved, but the purpose of this exercise for me is to practice my functional programming skills with Python, so I don’t mind the slowness.
Note: here is the wikipedia article on quicksort for anyone looking to read about it in more depth
A post was merged into an existing topic: Validation for incorrect inputs
A solution in Python which has a O(n+m) time complexity. It iterates through the first string and counts up the occurences of each character, and then does the same for the second string but decrements the count instead. If the two strings are anagrams then the count for every character should be 0.
Decided to handle all 128 ASCII codes so that the function works with all ASCII characters and not just capital letters, and it removes the need to subtract a base value from each of the ASCII values to convert it to a 0-based index.
def is_anagram(s1, s2):
# can return 0 immediately if the strings are different lengths
if len(s1) != len(s2):
return 0
# for counting the ASCII characters 0 to 127
counts = [0]*128
# increment the count for each character in the first string
for c in s1:
counts[ord(c)] += 1
# decrement the count for each character in the second string
for c in s2:
counts[ord(c)] -= 1
# the strings are anagrams if all the counts cancel each other out
return 0 if any(counts) else 1
There is a simple optimisation to check that the strings are the same length at the start of the function; if the strings are different lengths then they cannot be anagrams. Would be significant improvement for very big strings.
A totally inconsequential optimisation (but why not) is the use of the any() function instead of sum(). It is ever so slightly faster as it only needs to do a comparison with 0 instead of addition, and it does lazy evaluation, meaning it will return as soon as a non-zero value is found.
%timeit sum([0]*128)
100000 loops, best of 3: 2.48 µs per loop
%timeit any([0]*128)
100000 loops, best of 3: 2.07 µs per loop
%timeit sum([1]*128)
100000 loops, best of 3: 2.47 µs per loop
%timeit any([1]*128)
1000000 loops, best of 3: 764 ns per loop
Out of interest, I compared the runtime with a solution that use sorting to compare the strings (shown below). The time complexity for sorting is O(n log(n)), but for short strings the function below is faster. The point where the function above is faster is about strings of length 70 on my laptop.
def is_anagram(s1, s2):
return 1 if len(s1) == len(s2) and sorted(s1) == sorted(s2) else 0
I had this idea as well, but I don't think it works! Just adding up the ASCII values would mean it would see these two strings as anagrams, "AD" and "BC".
I thought maybe you can use the product but it would only work if each letter was represented by a prime number and the calculation of that product would probably make the function slow compared to the other methods, especially for large strings.
Here is my entry using Python 3.6.1 hosted on repl.it:
My first bit of thinking was to be sure to get the input from the user and prompt them as to what words they would like to compare. I then assigned another set of variables equal to the original input ran through the built-in .upper() function, to guarantee that the function would be handling uppercase letters, as per the challenge rules. Next up was converting the strings into lists with a for loop to iterate through each of the words and append each iterated item to the appropriate list. After assigning the lists to sorted versions of themselves, I then compared the sorted lists to each other and returned 1 for anagrams, and 0 for not anagrams.
Code below for those who do not wish to follow links:
def anagram():
#ask the user for words to compare
print('What two words would you like to compare and see if they are anagrams?')
print('First word: ')
word1 = str(input())
print('Second word: ')
word2 = str(input())
#converts the strings from the user to uppercase
word1Upper = word1.upper()
word2Upper = word2.upper()
#creation of empty lists for conversion from strings to lists
word1u = []
word2u = []
#for loops do the work of making lists from the strings
for char in word1Upper:
word1u.append(char)
for char in word2Upper:
word2u.append(char)
#sorting the letters into sequential order so that Python can compare the lists
word1u = sorted(word1u)
word2u = sorted(word2u)
#condition that compares the sorted lists to each other to see if they match
if word1u == word2u:
#returns 1 if they match
""" print('anagram')"""
return 1
else:
""" print('not an anagram')"""
#returns 0 if they don't
return 0
#print(word1u, word2u)
Time complexity was not taken into consideration when devising this solution as I’m still definitely a beginner.
Here is my simple code in PHP.
<?php
function find_anagram ($sample_string1, $sample_string2)
{
$string1 = str_replace(' ', '', $sample_string1);
$string2 = str_replace(' ', '', $sample_string2);
$str_len1 = strlen($string1);
$str_len2 = strlen($string2);
$flag=1;
if($str_len1!=$str_len2)
{
$flag=0;
}
else {
$c = $str_len2-1;
for($i = 0; $i<$str_len1; $i++)
{
If ($string1[$i] != $string2[$c])
{
$flag=0;
break;
}
$c--;
}
}
echo $flag . "<br />";
}
find_anagram("amma", "amma");
find_anagram("asdf", "asdf");
?> | https://discuss.codecademy.com/t/challenge-anagram-detector/83127/19 | CC-MAIN-2017-39 | en | refinedweb |
Exposing C++ backend to QML
Hi
I can see two methods: qmlRegisterType and setContextProperty
Is it possible to register backend.h in main.cpp which then will be exposed in main.qml
import io.qt.examples.backend 1.0 BackEnd { id: backend }
And then if I create "QList<QObject*> dataList" of some sort in backend.cpp or its child classes I can use this as a model for example for Repeater somewhere in qml files eg. like this:
Repeater { model: backend.someClass.dataList
The idea is to have single "entry point" from QML to C++ classes
Will this work?
Best Regards
Marek
Well this will not work.
Digging deeper, I would like some clarify this:
I have created subclassed QAbstractListModel based on this Sailfish nice example
If I register this model:
qmlRegisterType<DemoModel>("com.example", 1, 0, "DemoModel");
I can't pass any arguments during registration - according to this: QML needs to be able to instantiate the type, calling it with only parent QObject.
So if my model needs to get the data from C++ part of the application, eg. from xmlMgr that reads files from directory, it can't be exposed to QML with qmlRegisterType - am I get this right?
In case model needs to get data from C++ part I need to use
demoModel=new DemoModel(xmlMgr); demoModel->init(); view.rootContext()->setContextProperty("demoModel", demoModel);
Is this how it supposed to be, or am I missing something?
Best Regards
Marek
@Marek If you have one object which you have created in C++ you can expose that object to be used in QML with setContextProperty. If a model is based on QAbstractItemModel and can be used automatically in C++ views it should work automatically with QML views, too. So I think your last comment is correct - you need only setContextProperty for it to work.
@Eeli-K Thanks for answer.
I hoped someone will add something about qmlRegisterType used with models (first approach), because as I see it now... it pretty much useless for registering models
Best Regards
Marek
you don't need to create any instances of DemoModel in your C++. qmlRegisterType is enough, but you also need somehow to trigger update of your model. In my case I've created Q_INVOKABLE void update(); method in my model's class implementation which is being called each time data in model should be refreshed.
Of course don't forget to override:
int rowCount(const QModelIndex &parent);
QVariant data(const QModelIndex &index, int role);
methods in your model implementation.
Here you have great description of model-view-delegate concept:
I don't create DemoModel instance in C++. I know it will be created by QML and I know about Q_INVOKABLE.
Lets say you have some cpp class dataMgr - it reads data from external source like tcp socket.
This Data is needed for DemoModel, and you have DemoModel defined in cpp.
How do you call DemoModel Q_INVOKABLE void update() from QML if DemoMode does not know about dataMgr ?
DemoModel is instantiated in QML without pointer to dataMgr and you cant access DataModel from cpp.
I assume that your update() from DemoModel reads data from some source but this process is completely isolated from the rest of cpp part of application, right?
Best Regards
Marek
I have ended up with such a solution:
int main(int argc, char *argv[]) { QCoreApplication::setAttribute(Qt::AA_EnableHighDpiScaling); QGuiApplication app(argc, argv); QQmlApplicationEngine engine; BackEnd backend(&engine); engine.load(QUrl(QLatin1String("qrc:/main.qml"))); if (engine.rootObjects().isEmpty()) return -1; backend.init(); return app.exec(); }
Backend contructor is before engine load so the models are defined in QML
Backend init is after engine load to get rootObject, signals etc.
Backend - C++ part of the application
#include "backend.h" BackEnd::BackEnd(QQmlApplicationEngine *engine) { this->engine=engine; xmlMgr=new XmlMgr(this); itemModel=new ItemModel(this); xmlMgr->init(); itemModel->loadData(xmlMgr); context = engine->rootContext(); context->setContextProperty("ItemModel",itemModel); } void BackEnd::init() { rootObject=engine->rootObjects().first(); connect(rootObject,SIGNAL(itemClicked(int)),this,SLOT(itemClicked(int))); } void BackEnd::itemClicked(int item_id) { qDebug()<<"BackEnd::itemClicked item_id:"<<item_id; int catalog_id=itemModel->getCatalogId(item_id); if(catalog_id) itemModel->setSelectedCatalog(catalog_id); }
QML part
ApplicationWindow { visible: true width: 640 height: 480 title: qsTr("Hello World") signal itemClicked(int value) header: Header { height: 60; width: parent.width } SwipeView { id: swipeView anchors.fill: parent Page1 { } Page { MainItem2 { id: mainItem visible: true } } } }
This works well for me.
Best Regards
Marek | https://forum.qt.io/topic/80959/exposing-c-backend-to-qml | CC-MAIN-2017-39 | en | refinedweb |
Do you not calculate PnL or need to keep cost records in webERP? Because
stock adjustments basically will mess up all your PnL and BS. If you are
only using webERP to keep stock count, then it works but its a load of work
to do just to keep track of inventory.
--
View this message in context:
Sent from the web-erp-users mailing list archive at Nabble.com.
Hi,
I running a separate POS stock table and weberp Inventory, I been able to
import weberp stock table into the POS, now I need to update the quantity
been sold in POS to weberp....I'm thinking of using the Inventory Adjustment
to do so, item by item from POS stock report.
In other words, if I sell 1 unit in POS today, at the end of the day I will
go into weberp to -1 unit by Inventory Adjustment. Therefore my weberp
Inventory qty can always up to date.
I know this sound a bit crazy if I had few thousand item, unless someone
writing a script that can do batch Inventory Adjustment with just importing
a cvs file.
Just wanted to know is there any problem to do so?
--
View this message in context:
Sent from the web-erp-users mailing list archive at Nabble.com.
That sounds great. I am really looking forward and waiting 'impatiently' for
you release.
I will hold back the adjustments till then. I think I can still afford to do
so for a while..
--
View this message in context:
Sent from the web-erp-users mailing list archive at Nabble.com. | https://sourceforge.net/p/web-erp/mailman/web-erp-users/?viewmonth=201007&viewday=14 | CC-MAIN-2017-39 | en | refinedweb |
Law
Ask a Law Question, Get an Answer ASAP!
Hi will try and help
Hello again
hi
TO what extent have the tenant moved out? Have you had a look through the window and is the place completely abandoned. ?
yes we looked through the windows last night and it is empty they have moved to another property in the same town according to a relative of theirs
ok thanks
in these circumstances I always advise caution. Technically they are allowed back to that property until the date the notice expires (and in in fact until you get a court order) however where a tenant has voluntarily given up possession then the landlrod is allowed to re-enter the property to secure it. The reason I say exercise caution is if the tenant has "fully" vacated the proepryy and later argues that because they left they toaster there, for example, they were still in occupation.
what i suggest you do then is to send/hand deliver them a letter saying that you believe they have vacated and you are concerned over the secutiy of your preoprty, and that if you have not heard from them within, say, 48 ours, you will breack in and change the locks to ensure that yot proeprty is secure.
(terrible spelling in that one sorry)
I have sent them a text reading this- we know you have left and moved out, you need to give us notice that you have left and return the keys for us to take it back. If this is not done by the 1st of December we will seek a possession order from the court and you will be liable for December's and November's rent. we can come through today or tomorrow, if we do not hear from you today, legal proceedings will start tomorrow. then it is out of our hands which i'm sure neither of us want.
we have had no response from them and they have only paid 1 weeks rent this month which was on the 10/10
If they have actually vacated then there is no need to get a court order. You can take possession back. You just have to be very sure they have actually vacated of their own free will. I am assuming that it is more important to get the property back and rented out again, as you will unlikely recover any arrears from the errant tenants?
we are not renting again we are selling but I do not want to enter the property if we are going to break the law. they have definitely left but have left a key in the front door to stop us gaining access which would mean us breaking in
is there a back door that you or the tenants can enter?
we only had a key for the front door but yes there is a back door which can be used
we also do not have a forwarding address for them
There are two lawful ways in which a landlord can get a property back; either he serves notice and gets a court order, or the tenant hands the property back to him. If a tenant leaves, and there is no intention for him to remain or come back, and all of the evidence here points to it, then the landlord can lawfully take back possession. If you have to break in to do it because they haven't arranged to give you the keys back then that is what you will have to do. However this is why i was suggesting the written notice through the door (i don't like text messages as you cant easily evidence them in court). This is a belt and braces approach. if the worse happens, the tenant returns and says you have unlawfully evicted them you can legitimately say to the court that you had texted them, and written to them, the property was completely empty and as far as you were concerned the property had been abandoned. You were worried about security, and after the expiry of the notice you had no choice but to break in and change the locks.
so if I gain access to the property tomorrow and change the locks is there anything else I need to do. and like I said we don't have an address for them to put a letter through the door so at present that would not be possible
sorry i mean serve the notice through the door of the property. That is the last known address. i know it sounds odd as you suspect they don't live there, but what you are doing it laying the ground work so say that, if they were in occupation (or intending to return) to the property, you had done all you could to protect their interest. Let someone else deliver it do that they can give evidence that they had done so, and I would suggest leaving it longer that tomorrow to break in so tat you have given them a good 48 - 72 hours notice of your intention. It is all unlikely to be necessary if they have moved out and never intend to return, I am just giving you the best advise in case of the slim chance that they do actually return (in my experience they very rarely do)
not sure if it was yourself who spoke to me last time, but we can have a chat over the phone for an additional fee if you wish to chat over any other concerns?
ok that is great thank you for your advice again, hopefully this will be the last time
just 1 last thing is there anything specific I need to put in the notice
As I said above, something along the line of that you are lead to believe that they have vacated the property and from the lack of furniture and lack of reply to text message you believe they do not intend to return. If this is not the case ask them to contact you immediately on receipt of the note. If not then, as you are concerned about the security of the property you intend to enter the property on the .......date at such-and-such a time,to ensure that the property is secure an free from tresspassers.
that ok?
fantastic thank you
also, just FYI, in my day job my firm does conveynancing if you require my service on that
when you come to sell
thanks very much bye excellent rating
cheers. If you click on the smiley face i can be paid for my time here. All the best.
Hi your advice was great thank you we received the keys back for the house on Tuesday after pointing out they would still be liable for Decembers rent and that we would have to take them to court in order to get the house back, so thank you very much for your advice it WORKED | http://www.justanswer.co.uk/law/8sobl-hi-spoke-ago-issuing-section-21.html | CC-MAIN-2017-39 | en | refinedweb |
Java Operators
Operators are the symbols which perform the operation on the some values. These values are known as operands. Java have following operators –
- Arithmetic Operators
- Relational Operators
- Logical Operators
- Assignment Operators
- Bitwise Operators
- Misc Operators
Arithmetic Operators
2. Relational Operators
It is also known as comparison operator because it compares the values. After comparison it returns the Boolean value i.e. either true or false.
3. Logical Operators
4. Assignment Operator
5. Bitwise Operators
It performs bit by bit operation. Suppose there are two variable I = 10 and J = 20 and their binary values are
I = 10 = 0000 1010
J = 20 = 0001 0100
Misc Operators
Some misc operators are:
1. Conditional Operator ( ? : )
It is also known as ternary operator that means it consists of three operands and performs Boolean expression.
Syntax
variable i = (condition) ? value1 (if condition is true) : value2( if condition is false)
e.g.
public class Intellipaat { public static void main(String args[]){ int i, j; i = 20; j = (i <=50) ? 20: 60; System.out.println( "Value of j is : " + j ); } }
Compile and execute above program.
Output
Value of j is : 20
2. instanceof Operator:
It is used only for object reference variables. It checks that whether the object is of interface type or class type.
Syntax:
(Object_reference_variable ) instanceof (interface/class type)
e.g.
public class Intellipaat { public static void main(String args[]){ String name = "intellipaat"; boolean outcome = name instanceof String; System.out.println( outcome ); } }
Compile and execute above program.
Output
true
It returns true because name is a type of String.
Operators Precedence in Java:
"0 Responses on Java Operators" | https://intellipaat.com/tutorial/java-tutorial/java-operators/ | CC-MAIN-2017-39 | en | refinedweb |
CodePlexProject Hosting for Open Source Software
:-) You’re like the audience plant to ask just the right questions :-).
They are the same. LINQ Expr Trees v1 evolved into Expr Trees v2 which are exactly the DLR tress. All the sources you see in our codeplex project are the
sources we’re shipping in CLR 4.0 for all of the Expr Trees code. What might be confusing is that until we RTM CLR 4.0, we change the namespaces on codeplex. We need to do that so that if you have a .NET 3.5 C# app that both uses LINQ and hosts, say, IronPython,
then LINQ works as well as the DLR. You just can hand those threes back and forth, which would be a very corner case scenario if one at all. When we hit RTM, the codeplex sources will will have the same namepace, but we won’t build the Microsoft.scripting.core.dll
in our .sln file.
Bill
From: esforbes [mailto:notifications@codeplex.com]
Sent: Thursday, March 26, 2009 5:00 PM
To: Bill Chiles
Subject: Expression Trees + DLR Trees - why the duplication? [dlr:51467]
From: esforbes
Not sure if this is the right place to ask this, but here goes:
Why is there so much duplication between these two incredibly similar tasks? Why couldn't Expression Trees have been implemented as DLR Trees (or
vice versa)?
First, we’re backward compatible. Second, neither C# nor VB extended their LINQ support or lambda expressions to use any of the new ET stuff, so no LINQ
provider will ever see anything they haven’t seen before in CLR 4.0.
bill
From: esforbes [mailto:notifications@codeplex.com]
Sent: Thursday, March 26, 2009 5:11 PM
To: Bill Chiles
Subject: Re: Expression Trees + DLR Trees - why the duplication? [dlr:51467]
Oooh - hehe, okay that makes sense, especially considering all the talk I've been hearing about expression trees being augmented in .NET 4. I guess
that 'augmenting' in this case meant replacing them with DLR trees. =)
That's very good news - but what ramifications will this have on existing LINQ providers, since the expression trees they consume are getting augmented
in ways the original designers probably didn't anticipate?
Yes, if you manually do that, the provider will choke :-). Our design goal was to introduce things so that if current binaries were coded defensively, then
we should always fall into their ‘else’ or ‘default’ clause for unexpected node or property settings. We made sure that any existing nodes have no new semantics based on property values or new properties we added that current binaries would not have known
to detect and change their behavior.
Cheers,
From: esforbes [mailto:notifications@codeplex.com]
Sent: Thursday, March 26, 2009 11:21 PM
To: Bill Chiles
Subject: Re: Expression Trees + DLR Trees - why the duplication? [dlr:51467]
What if someone constructs an expression tree using these more advanced nodes, and tries to send it to a LINQ provider?
Cool! It might be worth adding that even today you can create a subtype of Expression, ball it up into a tree, and hand that to a provider who won’t know
what to do with it. That’s why they should have good defense ‘else’ and ‘default’ branches for failing gracefully already :-).
From: esforbes [mailto:notifications@codeplex.com]
Sent: Friday, March 27, 2009 10:11 AM
To: Bill Chiles
Subject: Re: Expression Trees + DLR Trees - why the duplication? [dlr:51467]
Ahh, I understand - excellent. =) I'm very much looking forward to this in .NET 4. =) Thanks for the information!
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | http://dlr.codeplex.com/discussions/51467 | CC-MAIN-2017-39 | en | refinedweb |
I'm looking at some code examples for sockets, and find one example that shows:
import socket
and another that shows:
from socket import *
I don't understand the difference, and find that I can't do both because in one condition the 'import socket' instruction works for one example, but if I do both imports, it no longer works. Can someone please tell me what's going on here.
Thanks much | http://forums.devshed.com/python-programming/950252-import-question-last-post.html | CC-MAIN-2017-39 | en | refinedweb |
How to generate an element from laplace distribution
Hi, I am wondering how to generate a random element according to laplace distribution. I tried the method RealDistribution(). But it failed.
According to the reference manual, laplace distribution is not supported....
In the manual, there are some examples showing how to deal with uniform distribution, Gaussian distribution, etc.
My questions are:
(1) How do I know exactly which distributions are supported by the RealDistribution() method?
(2) Is there anyway I can simply generate an element according to laplace distribution in sage?
Thank you in advance.
I found a way to do this by using numpy in sage....
import numpy numpy.random.laplace(loc,scal,size)
will generate an element for laplace distribution. But it is not a very "sage" way to do it. And I am still concerned about my first question. | https://ask.sagemath.org/question/26617/how-to-generate-an-element-from-laplace-distribution/ | CC-MAIN-2017-39 | en | refinedweb |
I
PingBack from
PingBack from
Unix filesystems, anyone? Your ‘post’ (or indeed that which you quote) should really have given credit to the original implementors, or at least mentioned them, shouldn’t it? Windows users, eh.
Yipee! Windows invented the symlink. Or did it already existed on Unix/Linux, something like 15 years ago?
And yet another UNIX feature shows up in Windows as an "innovation".
Are these symbolic links paths (relative or absolute) like the *nix implementation or are they richer implementations that are much more robust like the Mac OS’s aliases?
create a symlink, something i always missed on windows. what about all the other missing features in a os, seen in bsd, linux, unix, etc. maybe start here: and, etc. bleh
"Now why is this relevant to the SMB2 protocol? This is because, for symbolic links to behave correctly, they should be interpreted on the client side of a file sharing protocol (otherwise this can lead to security holes)"
Are you nuts?!
For security to be maintained the filesystem (NTFS) has to parse the link and handle it appropriately! A symlink only has meaning within the system running the filesystem.
Either someone needs a big LARTing by a clue-by-four, I have seriously misred what you wrote, or you are … not quite suitable to do this work.
I hope I seriously missed a point.
I feel it’s worth mentioning to those who are not familiar with *nix, that symlinks have been available on various file systems in that space for… well, a very long time.
It’s a long overdue addition to NTFS, and a very welcome one!
Those who don’t understand UNIX are doomed to reinvent it, poorly
So did you stole this from BSD too?
>>
Will there be SMB2.0 clients for Windows XP/2000/etc or will we need Vista in order for network clients to properly access symlinks?
brilliant idea!
WinFS = UNIX FS 1970
Seems, like so much duplicated efforts now by microsoft at their kernel level, too bad they don’t just do something like apple and use freebsd. At this point anyways, it seems like what microsoft is doing is essentially writing their own unix kernel.
But how do the new NTFS symbolic links differ from NTFS junction points?
Weren’t hard links already in NTFS? I already use them to remap entire folders from Program Files on another drive (e.g. D:)
Hi,
I’m a little bit confused :(. For me this seems a lot like hardlinks/junction points ()
Am I missing the point?
Thanks in advance
i never knew how the shortcuts in windows worked but i’m guessing that they don’t work as symbolic links then?
Why does showing symlinks to a client that doesn’t understand them cause a security hole? Wouldn’t they just either not show up or show up as plain files with an unknown type? Why can’t symlinks be flagged with a "symlink" attribute that old clients ignore, and transparently point to the destination file when acted upon by an ignorant client, just like you said about applications ("Apps… will now open the target by default, unless they explicitly ask for the symbolic link…."). Can’t that apply to clients as well?
Or are you saying that symlinks that the client doesn’t understand can lead to things like symlinking across hosts and causing people to open files they didn’t know they were opening? More of a "the user doesn’t know what their actions are doing" type thing, like when Windows hides file extensions and people think a VBS is a JPG?
finally windows will get symlinks …
That’s great, *nix has been using this successfully for years. Security issues should almost be a moot point as you can learn from others’ mistakes. Good to see you guys moving in the right direction…now if only vista would move away from the whole registry thing. 🙂
so does anyone else think Microsoft is trying to be more like linux now?
It’s good that Windows is getting symlinks. They’re really useful – just imagine being able to switch over files or directories between versions just by moving a symlink?
I was trying to remember how NFS handles them. Is that also client side? I can see the security hole of having a symlink evaluated server side. Point it to a file that you’re not meant to read and read the file. Though won’t normal security prevent that problem anyway? If you can’t read it, you can’t read it. Client side the symlink may not make sense and could also cause confusion for users. Any symlink pointing outside the "share" sounds dodgy.
In this post, you state that, a symlink neads to be resolved clientside
>> "This is because, for symbolic links to behave correctly, they should be interpreted" <<
the trueth (in my honest opinion though is),
that everything, in a protocol accession the server, should actualy be handled By the server.
because, of the ability to fake the symlink’s id to match an object request that otherwise should never be avail to that specific user)
For example: lest say im running a windows Vista server for my clients in a way that compares to a unix-shell server.
for this they are accessiong the system through SMB,
what i would try to do if i wanted unauthorized access,
Id try to reverse engeneer (like samba did) this protocol and fake a network brower’s response: telling the server that the symlike called: /user123/ (of wich i am alowed total control = read, write, change and delete)"
isn’t actualy a link to: "%datadisk%/serverdocs/users/userver123/date"
nut instead to %windir%/system32 (or even the data dir of another user).
this by self COULD becoome quite a risk if the server itself doesn’t check whether this symlink is actualy a valid one…
Now if the server DOES check this, both systems are doing exactly the same job,
Wich in a way both make it redundant, yet also questionable wether it in fact is neaded or just unnesesairy load on your computers…
Ofcaurse with just 1 computer it would hardly make a diference, but with over a 100 or even a 1000 clients it would)…
** based on how symlinks usualy work, they, as al files and folders, inherrit the access rights of thair parrent folder.. so if i gave Full-control to the …./userver123/ folder of where also this symlink whould be located. – it could get quite nasty if the administrator does not suspect this kind of exploits.
so if in fact i indead would be righ regarding this. could you please shed some light on, what will be done to prevent this from actualy be possible,
______________________
with most kind regard,
Edwin van uffelen, (IT-student)
The Netherlands…
Damn i had those in os/2 in 1999. Microsloth just got around to it?
$ fortune -m ‘condemned’
%% (fortunes)
Those who do not understand Unix are condemned to reinvent it, poorly.
— Henry Spencer
Nice… no more having to use scaaaaary junction points.
To me, Kevin Owen’s question seemed to be more about finding out if there is some relationship between the SMB2 symbolic links and the currently implemented NTFS reparse/mount points. Can you expound on this?
Why does everyone feel the need to post the EXACT same comment about UNIX. Ok – everybody knows that this new feature to Windows, is already part of UNIX, get over it.
Thinking of a server that does symlinks server side – Apache on UNIX. Apache does have checks though to ensure that the symlink doesn’t point outside the document root.
These checks are optional. Sometimes it makes sense, sometimes not. Depends on who has write access to the server. They were used in the infamous Mindcraft benchmarks to add more overhead to the Apache server.
I can’t understand how this could create a security problem?
Surely if a client machine logs into a share, then they know the log/pass for that share… if so then they’re likely to want access to all areas of that share.
If they try to access files outside of that share (via an alias..er..’symlink’) then one would naturally assume that they’ll be greeted by a prompt asking them for the log/pass details for the user-account/directory/volume they’re trying to access (if the ‘symlink’ points to a path eminating from anywhere other than the current share’s root then it must be outside the share and subject to different permissions).
…or am I missing something here?
Windows, PAH! Funny how their technological "advances" and "innovations" just seem to be long-standing features of other OS’ but with added flakeyness. 😉
That is one thing that always bugged me about reparse points in NTFS, their inability to refer to network shares.
I want to create a single filespace that I can work from to cover my entire network, but have to resort to using one drive for my local things, and another DFS root.
Don’t they already exist in Windows as ‘Junctions’? I’ve been using these for some time. These only work with directories, but work on WinXP, Windows 2000 and Windows 2003:
There is also a tool in the W2K Resource kit that will do it as well.
A typical set of comments from typical *nix people. Get over your old OS… Or at least move up to Mac. Command line blows…
the reason I’M having a hard time "getting over it" is because of bits like: "Note symbolic links are an NTFS feature"
> Note symbolic links are an NTFS feature.
I assume you’re saying "It’s implemented at the filesystem level", but this is certainly technology that exists far, far into the past in *nix systems.
Don’t pretend to reinvent the wheel, come on out and say you are bringing in *nix features.
Now if you could get a native port of the NFS client working properly. And integrated nicely with the "net use" command. Not like the poor implementation in the "Services for Unix" CD.
Well the funny thing is that SMB already supported hardlinks somehow, which I guess hardly nobody found out.
If you call CreateHardlink with a SMB source and destination, and the SMB location points to the same drive a ‘remote hardlink’ is created, so I deduce SMB supports remote creation of hardlinks.
I have played a lot with hardlinks on NTFS, and
so I am happy to hear NTFS6 will support symbolic links. In the meantime you can play with a few nifty tools related to hardlinks on
Ciao Hermann
At last! This is what they should have implemented originally instead of the horrible .lnk
You guys should patent this symlink mechanism, that’s what I think.
Don’t you know the file systems ext2/3, reiserfs, xfs, ufs. They all support symlinks, soft and hard. The only new thing is that NTFS now support it. I’m not saying it is bad that microsoft does something that already exists, in fact the *.lnk files did really suck, but it isn’t definitely something new! Bye.
"That is one thing that always bugged me about reparse points in NTFS, their inability to refer to network shares. "
In the betas of Windows 2000 (the older betas, back when it was named "NT 5") you could mount network shares to folders. The feature was killed, for some reason that’s not clear to me (because it’s the kind of thing that should have been *made to work*, because it’s extremely useful).
The reparse point mechanism is, I believe, sufficiently general that one could write a simplistic symlink reparse point without much trouble at all.
I’d think all you really need for symlinks is a generalization of junction points ("generalized" to allow them to point anywhere in the object manager namespace, and to allow them to be attached to files as well as directories). The only other thing you’d need to do is to change their "delete" semantics (it’s horrible and wrong that deleting a junction deletes the target, not the junction itself).
So, correct me if I’m wrong, but the last 217 comments are pointing out that the Un*x implementation of SMB2.0 already has symbolic links.
If so, why didnt MS just download the Un*x support library?!
IM: Samba (how *nix does SMB) is free software. In order for it to use code from Samba, they would have to release Windows as free software.
@silpheed:
"the reason I’M having a hard time ‘getting over it’ is because of bits like: ‘Note symbolic links are an NTFS feature’"
Gosh, that just couldn’t mean that it won’t work on FAT32. It must mean that Microsoft is claiming to invent it.
Do you people even think before you post?
Since this is Windows, which only natively supports NTFS and the FAT variants, the comment that symlinks are an NTFS feature implies that the feature is not available on FAT or FAT32. It does not mean that they are not a feature of any other file system for any other OS.
So once again, get over it.
This is truely mind-boggling…
As the original creators of UNIX where in the pre-implementing stage, they already thought out the symlink idea on paper..
That was in the summer of 1969, it was implementen within the same year, or the beginning of 1970, as the diskpack for the original system were delivered.
I believe we have to thank Dennis M. Richie for this one. (I’m not sure about that though)
Those who do not learn from history are doomed to repeat it. Repeatedly.
Symbolic links were invented in the Multics operating system in 1965-66, well before Unix was born. The Bell Labs Unix group chose not to implement them, instead providing much more problematic and limited (but easy to implement) "hard links". BSD Unix wisely adopted symbolic links over a decade later, correcting the error, and improving them to allow relative pathnames. It wasn’t until AT&T Unix System V disappeared that symbolic links became universally available in *nix systems. The hidebound AT&T attitude is why they weren’t in POSIX 1003.1, either. Hard links still present fundamental obstacles to hierarchy-oriented functions like quota management.
Actually, NTFS has had symbolic links for years (maybe NT 3.0?), but they’ve been hidden from Windows users and accessible only through the POSIX subsystem. They’re also used to associate symbolic names (e.g., COM1) with real devices, but Windows users have no visibility into that.
I’d guess that the "news" here is just that those API are being added to the Win32 API–in other words, little news and no novelty. I only hope that they don’t introduce gratuitous incompatibilities.
Of course, NTFS has had Security Descriptors, Access Control Lists, and system level access auditing.
Hey Unix bigot, want to explain why every Network Attached Storage device based on Unix needs to squash root access? Oh, giant gaping hole in the NFS protocol?
Pot. Kettle. Black.
Now, shut the f**k up.
What inspires so many dipsticks to post so much inaccurate garbage here? Shutup. Nobody is impressed with stupid "Unix is better" rhetoric.
Beavis: Actually you’re wrong. People are impressed by the "Unix is better" rhetoric because … well … UNIX IS BETTER. Unix is more stable, more secure, does more, scales better, runs on more hardware. Microsoft is just waking up to this reality. And time and time agein, they have to catch up with the leading *nix systems. many new "features" of Vista are already out there in Unix and Linux. This symlinks feature is just one example.
As a side note, and back to the original post–
If this is a new feature to NTFS, both Microsoft and the open source implementations of Windows file sharing should take it into consideration. But we all know that the open source community will find, patch and fix any holes before Microsoft realizes that hackers are exploiting them.
PingBack from
PingBack from
PingBack from | https://blogs.technet.microsoft.com/windowsserver/2005/10/28/smb2-protocol-what-is-a-symbolic-link/ | CC-MAIN-2017-39 | en | refinedweb |
An example of creating and calling a new form from main form of application in C++ Builder
The task
Develop the application, that calculates a volume of a prism. The formula of volume of prism is following:
where V – volume of the prism, S0 – area of basis., h – the length of prism height.
The result must be outputted as a separate window. To get the window of result we need create a new form.
Progress
Create the new project as “VCL Form Application“. Save project in any folder.
The name of module of main form leave as is – “Unit1.cpp“. The project name leave as “Project1.cbproj“.
- Developing the main form of application.
At present, we have the main form of application named “Form1“. In Object Inspector you can change the form’s name with the help of property Name (Fig. 1). Leave all as is.
Fig. 1. Property “Name” of main form of application
From tab Standard you need to place on the form such components (Fig. 2):
– component of type TButton;
– two components of type TLabel;
– two components of type TEdit.
We place components as shown in Figure 2.
Fig. 2. Placing components on the main form
As a result the objects (variables) are created with such names: Label1, Label2, Edit1, Edit2, Button1.
With the help of Object Inspector we need to set such properties of components:
– in the component Label1 property Caption = “S = “;
– in the component Label2 property Caption = “h = “;
– in the component Edit1 property Text = “” (empty string);
– in the component Edit2 property Text = “”;
– in the component Button1 property Caption = “Calculate”.
Also, you need to change the title of form. To do this you need to select the form (component Form1) and in the property Caption enter the text “Volume of prism”.
After the changes and adjusting the size of components the application form has view as shown in Figure 3.
Fig. 3. Main form of application
- Creating the new form.
In the new form you can output the result of calculation. The new form will be called after pressing on the button “Calculate”.
To create a new form you need choose following command (Fig. 4):
File -> New -> Form C++ Builder
Fig. 4. Command of creating the new form
As a result, the window of newly created form will be showed. The object (variable) named “Form2” corresponds to the new form (Fig. 5).
Fig. 5. Newly created form
After calling the command of saving
File -> Save All
will be proposed to save the file of module of form as “Unit2.cpp“. Let’s leave it as is.
Moreover, the file “Unit2.dfm” will be created. In this file is described information about settings of form: form size, background color, text color, font settings and so on.
- Creating a new form Form2.
We place on the form Form2 such components:
– component of type TLabel;
– component of type TButton.
As a result, two objects (variables) with names “Label1” and “Button1” will be created.
Also, you need correct the size of form using mouse.
As a result, the form Form2 will have view as shown in Figure 6.
Fig. 6. The newly created form with placement of components
Let’s set up some properties of form Form2. To do this, first, we need to select form Form2.
The next step is set such properties of form Form2 using “Object Inspector”:
– property Caption = “Result“;
– property BorderStyle = “bsDialog”. After that, form will look like as a dialog window.
Additionally, we make setting components Label1 and Button1 of form Form2:
– in component Label1 property Caption = “Result = “;
– in component Button1 property Caption = “OK”.
As a result, form will have view as shown in Figure 7.
Fig. 7. Newly created form after settings
- Programming an event of click on the button “OK” of form Form2.
To get the result of returning from form Form2 and close the window of form, you need to call an event of click by mouse on the button “OK”.
To do this you need to do such sequence of actions (Fig. 8):
– select the button Button1 of form Form2;
– in Object Inspector go to tab “Events”;
– in the list of events find the event OnClick;
– make the double click by mouse in text field opposite the event OnClick.
Fig. 8. Calling the event OnClick
As a result, the code of procedure of event handling will be open:
void __fastcall TForm2::Button1Click(TObject *Sender) { }
Between the braces { } we need to write code of event handler. Enter the following string of code:
ModalResult = mrOk;
Thus, the procedure of event handling will have view:
void __fastcall TForm2::Button1Click(TObject *Sender) { ModalResult = mrOk; }
In the code snippet is used a global variable of form named as ModalResult. This variable is responsible for the behavior of the form window. By default the value of variable is zero. It means, that the window of form is closed. If value of variable ModalResult becomes nonzero, then the form is closing with the returning code of this value.
In our case, in the variable ModalResult is set the value of global constant mrOk. It means, that the window of form “Form2” will be closed, with the returning code mrOk.
The global constant ModalResult can taking and other values, for example mrAll, mrCancel, mrIgnore, mrNo, mrYes, mrRetry and so on.
- Setting the connection between forms.
To call the form “Form2” from form “Form1” you need to connect the module “Unit2.cpp” in file “Unit1.cpp” with the module of main form Form1. File “Unit2.cpp” corresponds to form Form2.
It is realized by standard way for C++. In the header file “Unit1.h” of form Form1 we type the text of connecting of module “Unit2.h“:
#include "Unit2.h"
To go to the code of module “Unit1.h” we need to do the double mouse click at the name “Unit1.h” in Project Manager (see Fig. 9).
Fig. 9. The connection string of module “Unit2.h” to module “Unit1.h“
- Programming of calculation of prism volume.
To calculate the volume of prism you need to program the event of clicking on the button “Calculate” of form Form1.
To do it we do following operations:
– with the help of Project Manager we need to go to the form “Form1” by selecting file “Unit1.dfm” (Fig. 10);
– call the code of event handling of click on the button “Calculate” (see p. 5). Calling of code of handling event is realized the same as in paragraph 5;
– type the text of calculation of volume of sphere.
Fig. 10. The selection of file “Unit1.dfm” in Project Manager
Listing of event handler of clicking on the button “Calculate” is following.
void __fastcall TForm1::Button1Click(TObject *Sender) { float s, h, v; s = StrToFloat(Edit1->Text); // get the square h = StrToFloat(Edit2->Text); // get the height of the prism v = s * h; // calculation of volume Form2->Label1->Caption = "Result = " + FloatToStrF(v,ffFixed, 8,2); Form2->ShowModal(); // show the result form }
In the code snippet to convert from string type to floating point type we use function StrToFloat.
To convert back from floating type to string we use function FloatToStrF. This function gets input data of corresponding floating point value v, type of output format (ffFixed), the width of the output field (8 characters), precision of output result (2 decimal places).
Now, you can run the project and test it.
- 004 – Delphi. A new form creating and connecting it to the main form of application.
- 006 – Example of creating a new form and calling the application in C#.
- 002 – An example of creation of dialog window in MS Visual Studio 2010 – C++ (MFC). | http://www.bestprog.net/en/2016/02/25/008-c-builder-an-example-of-creating-and-calling-a-new-form-from-main-form-of-application/ | CC-MAIN-2017-39 | en | refinedweb |
Coffeehouse Thread32 posts
Forum Read Only
This forum has been made read only by the site admins. No new threads or comments can be added.
using winRT to code desktop apps
Back to Forum: Coffeehouse
Conversation locked
This conversation has been locked by the site admins. No new comments can be made.
Pagination
not.
If you write .Net you can use portable class lib other then that, I dunno.
MSDN should have information about whether a particular winrt API works in desktop or metro or both (there are some desktop-only winrt APIs, many metro-only winrt APIs and some APIs that work in both environments).
The last I looked, about 50% of the winrt APIs work in both environments (I'm not sure if that 50% number includes the XAML APIs or not though - none of the XAML APIs work in desktop apps).
No windows runtime APIs will work on Win7 - I'm not aware of any plans to support windows runtime apps on Win7. ...
the question is, can you make a winRT app on win8 desktop? And purchases and installed as metro app, just have it moveable like a regular window.<<
I am looking for the deployment from the app store, the follow me everywhere app settings and especially the siloed features that provides the user with assurance that the app will not hijack their PC in a virus like way.
Another way to put this is, I want to be able to write an app that when it is run on a windows phone looks like a windows phone app, on a tablet it looks like a metro app, on the desktop it looks like a desktop app.
Nope, the Apps store is Metro only. However, they going to list Desktop apps as well but they just link out to for download and install like it is today. The apps will be not maintained by the App store. Not even Microsoft software like Office will be delivered via the App store.
Win8 appstore is for Metro apps only
Don't know about that - but you should probably be able to fudge your own there. It might be exposed over COM (since the usermode WinRT classes are built on top of COM).
That's a kernel protection mechanism and will be available to all user-mode processes, although I'm not sure if they've announced how you're supposed to do it. This is used by IE10 for instance to "sandbox" the page rendering process.
No, you can't. Metro apps have to come from the store and will fail the validation if they try to call arbitrary Win32 APIs.
LoadLibrary is NOT there in the metro-style api 'partition', you can use LoadPackagedLibrary instead, which is very limited.
They fail the onboarding process because you imported the Win32 api. That's not to say you can't call it.
Firstly - onboarding process is only there for win8appstore - you can make your own metro apps or sideload them to avoid the onboarding process entirely if you want to. Metro doesn't stop you calling Win32, the win8appstore tries to stop you calling them.
Secondly - you can do a LoadLibrary and GetProcAddress to do anything you want - you can't import ntdll!NtOpenFile, but there's nothing to stop you doing GetProcAddress(LoadLibraryA("ntdll"), "NtOpenFile").
And thirdly literally all that usermode can do is memory operations and syscalls. Anything interesting from graphics, to virtual memory allocations to file manipulation to inter-process communication is done by syscalls. And C++ can just issue the syscalls directly. You might not be allowed to import NtOpenFile from Ntdll but they can't detect or stop you from just doing the mov eax, SYSCALL_NTOPENFILE; sysenter yourself.
That's why all of the metro app-protection is done in kernel-mode, not done by closing off Win32 functions.
It is there, you're just not being creative enough. Kernel32 is loaded into your address space and LoadLibrary is there. All you need to do is find it and call it. You're not allowed to import LoadLibrary. That doesn't mean you can't call it.
Oh and before someone else mentions it - Microsoft banned the API for good reasons and will block your app retrospectively if they find out that you're doing it. My point is that you can call Win32, not that you're allowed to.
Side-loaded Metro apps (aside from only being available to Enterprise developers) may still have to obey by the rules anyway as, from what I have heard, logic may yet be added to key internal functions like LoadLibrary to fail if loaded into a Metro process (even though it may not be there in current builds). Even if not done for Windows 8, it's perfectly possible for future versions to prohibit such behaviour, so attempting to rely on it is doomed to failure.
Except that you can't call LoadLibraryA. And you can't load arbitrary files into memory, so bypassing that is out too.
O'rly? Here's the key part of LoadLibrary: it loads the file up into memory (yes an arbitrary executable or DLL file that your app can see) and marks it as executable. You have to do your own import resolution (recursively), relocations, and calling DLLmain if you want to use it properly though. Note that GetProcAddress is in-memory only, i.e. you don't have to ask the kernel for help to write that function.
And now all you need is to copy the implementation of NtMapViewOfSection, NtCreateSection and NtOpenFile (all of which are about six assembly lines long to call the syscall).
Usermode can only do memory operations and syscalls. C/C++ can do both of those without importing anything, and hence metro apps can do anything that is not specifically denied to them by kernel mode - e.g. app-protection stuff. The onboarding process is more of a code-quality thing than a barrier.
@evildictaitor: You're still making the assumption that you have unrestricted access to the file system, there is no such guarantee that forthcoming updates to the kernel-mode parts of Metro won't prohibit that. At which point no amount of trying to circumvent it helps.
There's probably also a fair bit that can be accomplished by simply applying appropriate filesystems ACLs, since UAC has to be on, there is no elevation option and Integrity Levels are enforced at kernel level. Working on the assumption today that you'll be able to make arbitrary Win32 calls is just plain foolish, even for sideloaded apps.
I am making no such assumption. You can load stuff like ntdll and kernel32 because they're not really files, they're kernel namespace objects. Additionally, if you take sysinternal's vmmap to a metro app you'll see that ntdll and kernel32 and so on are already loaded in your address space. All your app needs to do is find it and jump to it and you'll have called a Win32 api!
Yes, there is stuff you can't do in Metro apps, but that's not because of the WinRT libraries, it's because of the LowBox tokens in kernel mode. My point was actually that the on-boarding process doesn't really mean very much when it comes to protecting your apps.
You can make arbitrary Win32 calls. They might return STATUS_ACCESS_DENIED if they interact with the kernel and the kernel says no, but that doesn't mean you can't call them.
There's a lot of terminology abuse going on around here (since WinRT seems to mean wildly different things to different people), so I'll define what I mean when I say stuff:
Win32: All of the old-fashioned APIs in user-mode libraries such as ntdll, kernel32. Specifically when I say Win32, I don't mean Win32k or anything in kernel-mode (when I want to say that I'll say Win32k or "the windows kernel"). Win32 APIs might call the kernel (e.g. VirtualAlloc calls NtAllocateVirtualMemory, but I consider "NtAllocateVirtualMemory" part of "the kernel" and "VirtualAlloc" to be a Win32-api).
Kernel-mode: processor Ring 0, or system-mode in ARM.
"The kernel": ntoskrnl.exe and win32k.sys, and any other Microsoft drivers that are designed to service requests from user-mode (like Afd.sys).
WinRT: The APIs and framework of user-mode libraries for Metro apps. WinRT is only in user-mode.
Metro: The start menu thing for apps, as opposed to "Desktop mode".
On-boarding process: The process by which you submit a binary to Microsoft and they decide if it goes into the app-store
app-protection: The kernel-mode protections to stop metro apps doing something bad at the syscall layer. app-protection is only in kernel-mode.
Hence, "WinRT" sits on top of "Win32" in user-mode, and is protected by the "the kernel" in "kernel-mode" by "app-protection".
With those definitions, a Metro app can call any WinRT or any Win32 api. Indeed any user-mode app can call any WinRT or Win32 api. If that API tries to do a protected syscall it might fail (e.g. ZwLoadDriver), but that's not to say that the API can't be called.
I'm sure someone will figure out how to "jailbreak" Win8 Metro and remove any limitations on sideloading. It probably wouldn't be protected under the DMCA exceptions like it is for phones, so the legality will be questionable in some jurisdictions. But has the DMCA really ever stopped anyone? | https://channel9.msdn.com/Forums/Coffeehouse/using-winRT-to-code-desktop-apps | CC-MAIN-2017-39 | en | refinedweb |
//--------------------------------------------------------------------------- #include <time.h> #include <sys/types.h> #include <stdio.h> #include <errno.h> #include <sstream> #include <vcl.h> #pragma hdrstop #include <windows.h> #include <iostream> #include <fstream> #include <cstdlib> //Needed for exit() #include "Shlwapi.h" #include "Teste1.h" #include <sys/stat.h> using namespace std; //--------------------------------------------------------------------------- #pragma package(smart_init) #pragma resource "*.dfm" TForm1 *Form1; //--------------------------------------------------------------------------- __fastcall TForm1::TForm1(TComponent* Owner) : TForm(Owner) { } //--------------------------------------------------------------------------- void __fastcall TForm1::Button1Click(TObject *Sender) { int hour; int day; int month; int year; hour= 98; //Ignores the 9 in the first case so it means 8 day= 28; month= 99; //Ignores the 9 in the first case so it means 9 year= 14; //working fine stringstream ss; //address and name of the files //create file name ss << "C:/Users/rodrigo/Desktop/Calendar/" << hour<< day<< month<< year<<".txt"; string address; address = ss.str(); //Working after changing the argument to the below one ofstream fout(address.c_str()); fout<<hour<<day<<month<<year; fout use the Format function:
Open in new windowBTW, I prefer to start with Year Month Day Hour. In this case the file names will have better sort order.
To check if the file exists you can use either Win32 API GetFileAttributes() function or Delphi's own FileExists() function
Open in new window
"C:/Users/rodrigo/Desktop/
so the "9"s are not ignored as stated in your comment. That's the reason you see a file not found
I'm not sure if you are using the extra "9"s as a placeholder but a better way would be to pad with leading zeros
Something like
Open in new window
hope this helps
Open in new window
Are you thinking about creating an Amazon Web Services account for your business? Not sure where to start? In this course you’ll get an overview of the history of AWS and take a tour of their user interface.
in this case:
"C:/Users/rodrigo/Desktop/
Exists or not.
Open in new window
[bcc32 Error] Teste1.cpp(47): E2034 Cannot convert 'string' to 'UnicodeString'
[bcc32 Error] Teste1.cpp(47): E2342 Type mismatch in parameter 'FileName' (wanted 'const UnicodeString', got 'string')
Open in new window
Errors:
[bcc32 Error] Teste1.cpp(47): E2285 Could not find a match for 'UnicodeString::UnicodeStr
[bcc32 Error] Teste1.cpp(47): E2031 Cannot cast from 'string' to 'UnicodeString'
Open in new window
[bcc32 Error] Teste1.cpp(47): E2235 Member function must be called or its address taken
here's a simple function I wrote.
call this function whenever you want to check for a file's existence. Alternately you can use the implementation directly
Open in new window
Open in new windownote, using wofstream instead of ofstream only means that the stream is capable to handle wide strings for filename and input. it doesn't mean that the file becomes a wide-char text file.
if you omit all '_w' and 'w' and L prefix in the above code it would use ansi character set and not wide characters but the results are the same.
Sara
Experts Exchange Solution brought to you by
Facing a tech roadblock? Get the help and guidance you need from experienced professionals who care. Ask your question anytime, anywhere, with no hassle.Start your 7-day free trial | https://www.experts-exchange.com/questions/28527267/Check-If-a-Variable-FileName-exists.html | CC-MAIN-2018-34 | en | refinedweb |
I've always been fascinated with the challenge of being able to 'read' the current time without looking at a watch. Heck, why stop at the time, I'd love to be able to get all sorts of information from my devices without having to actually look at a physical device. I never did get a chance to buy this sweet braille watch from years ago, but now that I've got a hold of Shira's Pebble, I figured I could finally hack to together a solution.
Using AutoPebble, I'm able to send a vibration pattern to the watch which means that the Pebble can effectively talk Morse Code. At the core of my solution is this JavaScriptlet which maps text to a vibration pattern:
function textToCode(text) { var code = ""; var letters = text.toUpperCase().split(""); var encoder = { A: '.-', B: '-...', C: '-.-.', D: '-..', E: '.', F: '..-.', G: '--.', H: '....', I: '..', J: '.---', K: '-.-', L: '.-..', M: '--', N: '-.', O: '---', P: '.--.', Q: '--.-', R: '.-.', S: '...', T: '-', U: '..-', V: '...-', W: '.--', X: '-..-', Y: '-.--', Z: '--..' }; // Ben's Own Numeric Mapping encoder['1'] = encoder['A']; encoder['2'] = encoder['T']; encoder['3'] = encoder['H']; encoder['4'] = encoder['F']; encoder['5'] = encoder['V']; encoder['6'] = encoder['X']; encoder['7'] = encoder['S']; encoder['8'] = encoder['E']; encoder['9'] = encoder['N']; encoder['0'] = encoder['Z']; for(var i = 0; i < letters.length; i ++) { var c = letters[i]; var e = encoder[c]; if(e) { if(code != "") { code += " "; } code += e; } } return code; } function codeToPattern(code) { var code = code.split(''); var dot = 250; var pattern = []; for(var i = 0; i < code.length; i++) { var letter = code[i]; if(letter == ' ') { continue; } else if(letter == '.') { pattern.push(dot); } else if(letter == '-') { pattern.push(dot * 3); } var peek = (i + 1) == code.length ? null : code[i+1]; if(peek == null) { continue; } else { pattern.push(peek == ' ' ? dot*7 : dot); } } return pattern.join(","); } var text = local("par1"); var code = textToCode(text); var pattern = codeToPattern(code);
There are a couple of hacks above worth noting. First off, each letter is being treated as its own word. So: Hello World would actually be sent as H E L L O W O R L D. For someone who struggles to read even the most basic Morse Code, this simplification makes sense. Maybe one day I'll graduate to parsing words.
Second of all, you'll notice that I'm not using the proper Morse Code for numbers. Instead, I'm using a mapping I made up. It's sort of logical: A = 1, as it's the first letter in the alphabet, 2 = T, as in Two, 3 = H as in tHree, and so on. I did this for a few reasons: first, Morse Code is optimized for frequently letters. So using A = 1 means that I can send 1 as • —. Had I used the proper Morse Code for 1, I'd have to send • — — — —. Secondly, if I'm going to manage to learn anything through this exercises, I'd prefer it was 10 different letters, rather than the Morse Code numbers.
Regardless, the above does the heavy lifting of converting text into dots and dashes, and from there into a Pebble vibration pattern. I wrap up the above code as an action:
Parse Morse (88) A1: JavaScriptlet [ Code: .. code from above ... Libraries: Auto Exit:On Timeout (Seconds):45 ] A2: Return [ Value:%%par2 Stop:On ]
Note that Parse Morse is an example of me figuring out how to make Tasker Actions far more modular. Parse Morse takes in two arguments: the first is the text to parse, and the second is the value to return (either code or pattern).
I then setup the Pebble Morse Time action to listen for requests from AutoPebble, invoke Parse Morse and then present the encoded time on the watch. Here's the code for this action:
Pebble Morse Time (106) Abort Existing Task A1: Perform Task [ Name:Parse Morse Priority:%priority Parameter 1 (%par1):%TIME Parameter 2 (%par2):code Return Value Variable:%code Stop:Off ] A2: Perform Task [ Name:Parse Morse Priority:%priority Parameter 1 (%par1):%TIME Parameter 2 (%par2):pattern Return Value Variable:%pattern Stop:Off ] A3: Variable Set [ Name:%newline To: Do Maths:Off Append:Off ] A4: Variable Search Replace [ Variable:%code Search: Ignore Case:Off Multi-Line:Off One Match Only:Off Store Matches In:%code Replace Matches:On Replace With:%newline ] A5: AutoPebble Text Screen [ Configuration:Full Screen: false Title: Time Text: %code Vibration Pattern: %pattern ]
The above action include string handling that turns spaces into newlines. I chose to do thiss via a Variable Search Replace action, though I'm beginning to appreciate that it may have been easier to just do this as a Javascriptlet. The multi-line approach me to render the Morse Code for the time in a large font, with one "letter" per line. Here's how it looks on the watch:
That's T Z Z Z O'Clock, or 20:00 or 8:00pm.
To polish off my no-eyes solution, I've setup two short-cuts. A long press opens up AutoPebble and another long press kicks off the Pebble Morse Time task.
It's a crude solution, but technically, it does work. I can manage to get the time buzzed to me all without ever looking at a physical device.
As for my ability to interpret Morse Code, I've still got a ways to go. For now, I'm both vibrating and showing the Morse Code of the time, and as you can tell above, the current time is included in the header of the watch itself. I find that most of the time I can properly interpret a digit or two from the vibration, and read most of the remaining digits by looking at dots and dashes.
All in all, it's a fun experiment and is just more proof at how cool AutoPebble is. It probably took longer to write up this blog post than did it to code up this solution. Good times. | http://www.blogbyben.com/2015/12/autopebble-programming-eyes-free-time.html | CC-MAIN-2018-34 | en | refinedweb |
Introduction: ROB: the Ultimate R/C NeoPixel/WS2812B LED Display Platform
About a year ago, I went out and got myself 5 meters of WS2812B LEDs, more commonly referred to as "NeoPixels" although mine aren't from Adafruit. A few weeks ago, I finally figured out how to use them. So, with my new knowledge, I went out and tried to find something fun to do with them. VU Meter? Yup, did that. Spectrum Analyzer? Well, that's a work in progress. FastLED Fire2012? That replaced any need I had for candles. LED Matrix? Oh yeah.
But what about something new, something that combined some of these things into a new thing, a better thing, something that interests me... something R/C?? Now there's an idea worth working on! My Arduino IR Robot Display Platform (R.O.M.A.N.) got disassembled (and Instructabled) last summer... Maybe it's time for Rev. 2.
Introducing "ROB", the Strobe R/C Robot.
The design is simple and completely open source: A two wheel drive rover base, with an Arduino Uno for navigation and secondary displays/extra sensors, with an Adafruit Motor Shield to power the wheels. On top we have an Arduino Mega to run the main display: a 8x10 LED Matrix made from 8 rows of 10 LEDs each, ZigZag pattern. This can be replaced by any other WS8212B/NeoPixel compatible individually addressable LED Matrix, but this is the only one I've got so it's what I'll use. Other LEDs will be in use around the edges of the robot for secondary effects as well. All of the parts (except maybe the Uno) can be switched for other components to suit individual users' needs, hence the open source part.
The remote will be a custom, Arduino Nano-based IR controller, because RF is difficult to use with Arduino and this robot is more likely to be for indoor, dim-room, or nighttime use due to the LEDs, as sunlight will diminish their awesomeness considerably.
I have calculated the approximate cost as €120 (USD $135), so this is not a cheap project unless you already have most of the parts (like I did).
So, without further ado...
Step 1: Gathering Materials
I say this in every Instructable, but it's always a good idea to have what you need when you need it (this is a great example of not-so-common Common Sense). This allows you to avoid "Gosh-Darn-It" moments and other profanities. The links are approximations of the parts I have in my tool kit already (the beauty of open source), so don't be surprised if mine look different..
Materials: You will need to mix and match some of the links to suit your needs, should you use them.
Arduino Uno (or clone) Uno
Arduino Mega (or clone) Mega
Arduino Nano (or clone) Nano
Tip: If you find an Arduino Starter Kit, it will come with a lot of the parts we use here, in addition to the Arduino board, usually for less money. They are also less of a hassle to find. Mega 2560 Kit W/ LEDs, Buttons, Breadboard and Wires
Paper
Cardboard
Wire (Solid core insulated electrical wire, Dupont jumpers and modelling wire will be needed.)
2 Switches (optional) bulk Slide Switches
3 Momentary Pushbuttons Buttons
3 Infra-Red LEDs (I harvested mine from old remotes, so I don't know the specs, but these should work okay)
3-pin IR receiver IR Receiver Board
6x Any Color 5mm clear or diffused lens LEDs (orange are what I used)
5x Red 5mm clear or diffused LEDs
Joystick module with button Joystick breakout board
Large Breadboard Breadboard
3 - 5 meters Neopixel/ WS2812B LED Strip (60 LEDs/Meter) LED StripAND/OR any NeoPixel-compatible LED Matrix 16x16 LED Matrix (not affiliate links)
Motor Driver board/shield/chip (doesn't matter which one, it just needs to be able to run 2 motors. I use the Adafruit Motor Shield V1) L293D Motor Driver (Adafruit Motor Driver V1 Clone)
Battery Boxes -you will want one with 4 AAs for the Nano and one with 6 AAs for the motors (see Step 3 of my other project for how to get recycled ones) 4AA Battery Box6AA Battery Box
A 2WD Rover/Car Base Complete Car Kit (Uno, Base, Motor Diver, 6 AA Battery Box and Sensors)
This link is for a base which I haven't tried, but it is a great option that looks cool and should work the same as the one I use: Tank-Style Treaded Base
Tip: If you go on Amazon/Gearbest and type in "Arduino Robot Car", you are guaranteed to find tons of different brands of two and four wheeled "smart", "WiFi" robot rover kits that have no WiFi capabilities whatsoever (without additional shields) and only have minimal computing power. They are also dead cheap, and come with (almost) everything you need to run them. (Some don't come with motor drivers, others without the Arduino board needed to run it.) I recommend finding one of these as a DIY kit, as it should come with some of the other needed components (Uno, Motor Driver, various sensors if you're lucky) for a lower overall cost.
Step 2: Tools and Batteries
Just like materials, it is a good idea to have all the needed tools and batteries to complete a project.
Tools Needed:
Soldering iron
Hot glue gun
Screwdriver
Scissors
Multimeter
Tape
Black Acrylic Paint
Paint Brush
Pencil
Ruler
2x Printer cable for programming Mega and Uno (should come w/ the Arduinos)
Mini USB cable for programming the Nano Mini USB to USB cable
Batteries:
Portable USB battery (for the Mega and lights.) 10,000 mAh Power Bank (2 ports)
Tip: If you get a larger battery, it will be harder to mount but will give you more run-time. The one I linked here is quite sufficient for most usage, although I use a smaller one with less battery life (RAVpower 3350 mAh found here on Amazon <= not an affiliate link btw). The one linked ^above^ will also be able to simultaneously supply power to the Uno if you prefer separate logic/motor power supplies.
10x AAs (I recommend rechargeables, this project will go through a lot of batteries otherwise)
Step 3: BOM Links
This is a complete list of links that should supply you with everything you need to complete this project. Other links from the BOM can be mixed and matched, but this is what I recommend getting..
Mega Kit w/ LEDs, breadboard, buttons, wires, and more
LED Strip (non-affiliate link to Amazon, get the 5m 60 LEDs per Meter option)
Complete Car Kit (Uno, Base, Motor Diver, 6 AA Battery Box and Sensors)
Power Bank (2 Ports, 10,000 mAh)
TOTAL COST: about €125 ( USD $140) not including shipping or tax
I built this mainly with parts I already had, so it was significantly cheaper for me. A lot of these parts can be substituted for similar items that you may have, so don't be afraid to hack a bit to get yours working without busting your wallet.
Step 4: The Rover Base
"Yeah, they see it rollin', and it's blinkin'...."
So, in order for us to have a robot, we must first have a base to mount said robot on. I decided to use a simple acrylic two-wheel drive base for mine. In the Materials list two steps back I also added a tank style alternative which will be better if you have rough or carpeted floors. The internal layout of the robot will different for that base, however, so if you choose that one you will have to figure it out yourself. (I may come back later when I have more budget and add a step or a new project for that one as well, but for now I just have the 2WD base.)
Step 1: Assemble the base.
The base kit I recommend should come with instructions for building the base; simply follow those to assemble it. If your kit did not come with instructions, follow the image above. Your base should resemble mine when finished. BTW, you can paint the wheels like I did to individualize your robot and make it look cooler. I used gloss red acrylic paint for the inside and hammered bronze spray paint for the outside on mine. It looks much better than the original yellow.
Step 2: Add the Battery Box
There should be two small screws and corresponding hex nuts left over after assembly. Use these to mount the battery box on the rear of the robot, as shown above.
Step 3: Mount the Portable USB Battery and Arduino Mega
Depending on which USB battery you use, the mount may be different. I used two hex shafts with a piece of plastic screwed on top, and the battery just slides in; the back resting on the acrylic that is used to mount the right-side motor. If you got the battery I recommended, you should be able to zip-tie it down in roughly the same place instead.
The Mega should be mounted alongside the USB battery. I used hex shafts, but you could also hot-glue it, use sticky Velcro tape, tape it, or just zip-tie it in place. Just make sure it doesn't touch or get too close to the wheel.
Step 4: Mount the Uno
First, slot the Motor Shiled V1 on top of the Uno. If you are using a different motor driver, disregard this step and mount the motor driver later.
Using sticky Velcro tape (mine is 'industrial', btw) attach the Uno to the underside of the base, in front of the motors. Alternative methods are double sided tape and zip-ties.
Step 5: Wire the Uno (Schematic in pictures)
Thread the wires from the battery box through the central hole in the base, and attach them to the Power block on the Motor Shield. Again, if you have a different motor driver, disregard this. You will need to wire the power to both Vin and GND on the Uno and VCC and GND on your motor driver, so that they run in parallel to each other.
Wire the two motors to the motor 3 and motor 4 block (on the same side) of the shield. This will be the side opposite the programming port on the Uno. If you do it backwards you will need to switch them around later (Don't worry, I do it every time).
EDIT: If you used a L298 motor driver instead, attach Motor A to pins 4 and 7, and Motor B to pins 8 and 9.
Using Female-Female jumpers, wire the IR receiver to A0, GND, and 5V on the Motor Shield/Uno. You may have to add pin headers to the motor shield to do this. Then, bend the pins down so that they are at a 90 degree angle to the board. (This is so they don't catch or scrape on the floor).
Also, note that I wired my receiver wrong and fried it, and used a spare instead, so don't follow the wiring in the pictures, use the schematic instead. Look up the specs of your part to find which pins are which if it isn't labeled.
Fasten the Wires to the middle of the underside of the base using tape, so that they won't touch the floor.
Step 6: Mount the IR Reciever
Remove the rear swivel wheel.
Using tape of some sort, fasten the wires to the underside of the base so that the receiver sticks out to the rear, facing up when the rover is right-side up.
Re-install the swivel wheel, screwing through the tape to further reinforce the receiver's position (which should be directly between the mounting posts for the wheel).
Tape down any extra length of wire in the center.
And now you are ready to move on to the Matrix!
Step 5: Building the Matrix
"Unfortunately, no one can be told what the Matrix is. You have to see it for yourself." - Morpheus (The Matrix, 1999)
Well, now we have digital cameras and YouTube, so you don't really have to experience it first person to believe it. However, no description I write here could ever do justice to the awesomeness of a well-built LED Matrix. So how about I just tell you how to build one, and then you get the experience of plugging it in and (hopefully) seeing the beauty of your handiwork bloom into a rainbow of color and fascinating oscillations of light.
Note: These steps mostly apply if you bought an LED strip; if you already have a matrix you can move on. I do suggest reading through anyway, as it will lend you a better understanding of Matrices in general. I also suggest you run the test codes to make sure your matrix works properly.
Ensure your LED strip is in fact 60 LEDs per meter, not 30. 140 LEDs per meter strips are expensive but will work about the same.
If your strip has silicone insulation, you may want to gently peel it off to make everything easier later on, as we do not need the strips to be waterproof.
Step 1: Test your LEDs
Download and install the Adafruit NeoPixel, Adafruit NeoMatrix, and Adafruit GFX libraries to your IDE. All are linked here. You will need to use the one of the newer versions of the Arduino IDE for this to work properly.
Choose your Matrix size. Mine is 8x10, so I have 80 LEDs. Keep in mind that the bigger the Matrix, the more trouble you will have powering it and the more program memory you will need on the Mega. Note that 8x10 is the perfect Matrix dimensions for the 2WD base.
Open the 'strandtest' example from the Adafruit NeoPixel examples folder (Open>libraries>Adafruit NeoPixel>examples>strandtest)
Scroll down to Line 16. Delete the number 60 and replace it with the number of LEDs you calculated earlier (in my case 80).
Upload the code to the Mega, then unplug it. (otherwise you can fry LEDs while wiring them)
Wire GND on the LED strip to GND on the Mega.
Wire Din (center pin) on the LED strip to pin 6 on the Mega. If you aren't lazy like me, you may want to put a small 300-500 ohm resistor in between these two pins to 'reduce NeoPixel burnout risk' as the code puts it.
Wire 5v on the LED strip to VInon the Mega, NOT the 5v pin (5v won't allow enough amps through, VIn will).
Again, if you aren't lazy like me, you can put a 1000 uF capacitor in between Power and GND to smooth out the power supply and again to 'reduce NeoPixel burnout risk'.
Plug the Mega back in and make sure the first 80 (or whatever number you put in place of 60) of the LEDs are working properly. Things to check for are dim, off-color, or burnt out LEDs. Mark these in some way so you remember which ones they are.
Unplug the Mega after you've watched the pretty colors for a bit.
Step 2: Cutting the Strip
Decide what orientation you want the strips in your Matrix. This will affect your code later as well. I chose Rows, as apposed to Columns. This makes scrolling text look better, and also streamlines the look of the robot.
Since my Matrix is 8 tall and 10 wide, I will be cutting 8 lengths of 10 LEDs each. Do a similar calculation for your dimensions and orientation.
Cut the LEDs as you have calculated, using the designated lines that are pre-printed on the strip.
Cut out any of the bad LEDs from the last step. If you are lucky, there won't be any.
From the leftover LEDs, cut replacements for the bad LEDs after carefully testing them with the Mega. Use extreme caution and ensure that GND never at any point becomes disconnected while Power and Din are running.
Step 3: Soldering
Carefully solder together the replacement LEDs, ensuring the data arrow is pointing the right direction. Test these after the solder cools down. I had an inordinate amount of LEDs stop working after I soldered them.
For every section of LED strip, cut 3 sections of solid core wire (stranded core doesn't seem to work, I tried), each 1.5 - 2 inches or so in length.
Strip the ends of each of these wires.
Note: You can use Dupont jumper cables instead to avoid some of the hassle.
Solder each section of LED strip together in a straight line, with the data arrows all pointing the same direction.
Test the strip after you add each section to make sure it works; if only part of it works go back and check all of your connections and LEDs again. (This part is annoying and takes forever. I spent a weekend trying to get mine to work.)
This is a good point to solder the capacitor and resistor to the first LED, if you want to be safe and reduce the risks of pixels burning out (I didn't and mine work just fine).
Once all of your sections are connected, you should have 3 wires left over. Solder these to the first LED. I used jumpers instead, because that makes it easier to connect to the Mega later.
Step 4: Make a background
On some paper, mark the length of one pixel from the cut lines on the strip. Using this length, form a grid with the dimensions of your Matrix. When aligned to the grid, all pixels should be evenly spaced in both directions.
Crease the paper along the grid lines.
Paint the paper black to match your strip, and let dry.
Cut a piece of cardboard that is about the same dimensions as your background.
Align and glue the background to the cardboard. You can also tape the edges with transparent Scotch tape to secure it better.
Step 5: Stick the LEDs
After the glue has dried, peel the backing off of the first section of LEDs in your strip.
Carefully align it to the proper crease marks on the paper, and then firmly stick it down. If you chose Rows as your orientation, your first section should be running from left to right across the very top row. If you chose Columns, your first strip should be from top to bottom along the far left side. Each pixel should be centered in a box of the grid.
Bend the next section at the wires so that it is parallel to the first strip, peel the backing off, align it to the next row/column of crease marks, and firmly stick it to the background, again making sure the pixels are centered properly.
Repeat until all the strips are neatly stuck to the background. Your Matrix should now look very similar to mine.
Step 6: Testing the Matrix
First, plug the strip into your Mega (the one with the strandtest uploaded to it) Use the same wiring as for testing the strip.
Plug in the Mega, and the Matrix should light up and run the strandtest as if it were a normal LED strip. Check to make sure all the LEDs function properly.
Now, you can start coding it like a proper Matrix. The NeoMatrix library examples are a good place to start. Choose and upload one of these, making sure to read the code and change the settings to match your Matrix.
I suggest watching a few videos on YouTube about LED matrices, as that is where most of my knowledge on the topic originates. It will also help you understand the code part better.
Step 6: Knight Rider, Beacon Lights, and More NeoPixels!
This is the rear, uh, 'casing' shall we call it, that hides the battery box and supports the rear of the Matrix.
Step 1: Cardboard (or is it cardbord? Maybe cardbored? What about cardboared? Eh, whatever.)
Cut three lengths of cardbored, each about 1 inch tall, one the width of the back of the base, and two about the length from just behind the wheels to the rear of the base.
Poke 5 evenly spaced holes in the center of the largest piece of cardbord.
Paint the three pieces of cardboared black, then glue them together to form a 'U' with the longest piece as the bottom and the smaller pieces across from each other as the sides.
Step 2: Knight Rider LEDs
Slot 5 Red LEDs into the holes in the longest piece of cardboard, and glue them in place.
Solder the cathodes (short legs) of the LEDs together.
Cut 5 lengths of wire 5 inches long. Strip the ends.
Solder the wires to the anodes (long legs) of the LEDs.
Step 3: Beacon Light (this step is the same as in my Arduino IR Robot Display Platform, because I recycled the beacon from that).
No robot worth it's bolts doesn't have a beacon light of some sort, and that includes this one. I built my beacon with six orange LEDs, wire, paper, hot glue, and a soldering iron. It can simulate a rotary beacon as well as pulse, blink and fade. You can build it with whatever color LEDs you want, but you will need 6 of them.
Sub-Step 1: Bend and Glue
Bend the legs on each LED to a 90 degree angle, making sure that the cathode is always on the left when the back of the LED is facing towards you and the leads are pointing down.
Cut out a small circle of paper, about half an inch in diameter.
Glue 2 LEDs directly across from each other on the circle.
Glue the remaining four LEDs on either side, each directly across from another, and all evenly spaced.
Fill in the gaps with hot glue (not required, but it makes everything else easier)
Bend the cathodes towards the center, and twist them together.
Sub - Step 2: Wires and Solder
Cut 7 lengths of insulated wire, each about an inch and a half long, and strip each end (solid core works best)
Solder the cathodes together, then solder on a wire from step 6
Solder the remaining wires to the anodes of the LEDs.
Arrange the unoccupied ends of the wires in a row, cathode first, then each LED in order around the circle.
Sub-Step 3: Mount and Glue
Cut a slit in the longest piece of cardboard, on the top-left corner above the red LEDs.
Insert the wires from the beacon through, then glue them in place.
Carefully bend the beacon to a 90 degree angle, so it is pointing up and away from the red LEDs. Make sure no wires are crossing.
Sub-Step 4: Solder
Cut 6 lengths of wire 5 inches long, preferably in a different color to the wires on the red LEDs. Solder these to all the wires except ground (the cathode).
Cut another wire 5 inches in length, then a small one about an inch long.
Strip the ends of these wires.
Solder the short wire in between the cathodes of the red LEDs and the beacon cathode.
Solder the long wire to the cathodes of the red LEDs
Step 4: NeoPixels!!! (Again!! Isn't this fun?)
Cut and test 2 lengths of 5 NeoPixels each. Solder the 5V pins of the first LEDs together with a long jumper wire.
Solder another jumper to 5V on only one of the strips.
Solder jumper wires to GND on each strip.
Solder jumpers to Din on each strip.
Solder the long wire from the cathodes of the LEDs to GND on one of the strips.
After the solder has cooled, test each strip.
Peel the backing off of the strips, and carefully stick them to the sides of the rear 'casing'.
Step 5: Mount the rear 'casing' (see next step for pictures)
Align the 'casing' to the back of the base, so that the beacon is pointing up and the sides hide the battery box.
Glue the back section in place.
If your sides do not align to the sides of the base, follow the below steps. Otherwise, glue your sides in place.
My sides did not align, but rather stuck out a little on each side. This is a simple thing to fix.
First, cut two pieces of cardboard about 1 inch by 3 inches.
Glue these to the underside of the back of the base, so that the 3 inch edge aligns to the bottom of the sides of your 'casing'.
Glue the bottom edge of the sides of the 'casing' to the cardboard supports, making sure that the top edge of the sides remain parallel to the base. You may want to put cardboard "spacers" in between the two to do this.
You are now ready to move on to Wiring!
Step 7: Final Wiring and a Mount for the Matrix
Wires are mystical things. They are solid, and yet they allow the magical smoke of circuitry flow though them, transmitting power and data. Whatever you do, be careful and don't let this smoke out of the circuit, else it may never work again.
Step 1: LEDs
Wire the Red LEDs in order on Mega pins A7 through A12.
Wire the Beacon LEDs in any order on Mega pins 22 through 27. You will need to change the code later to sequence these properly, but that will be easier than testing each wire by hand (trust me- I designed the code to make this easy after toiling through it with the 50-odd versions of my last robot).
Step 2: NeoPixels
Wire GND from the NeoPixel strips to the closest open GND pin on the Mega.
Wire Din from the NeoPixels to Mega pins 2 and 3.
Wire 5V from the NeoPixels to 5V on the Mega.
Step 3: Signal wires from the Uno
Using Male-Female jumpers, wire A5, A6, and GND from the Mega to A1, A2, and GND, respectively, on the Uno. To keep your wires tidy, thread them through the square hole in the center of the base and tape them down after you finish. Also, bend any wires on the Uno so that they won't touch the floor.
Step 4: The Matrix
Wire 5V and GND on the Matrix to Vin and GND, respectively, on the Mega.
Wire Din on the Matrix to pin 6 on the Mega.
And now to mount the Matrix. I used modelling wire and some cardboard to create a hinged mount that allows access to the wires and batteries underneath, but also holds the Matrix flat and steady in the closed position.
Step 1: Cardboard
Place your Matrix on top of the Uno, so that the first LED is at the front of the base, above the Mega, and the rear aligns to the back of the 'casing' we made.
Mark the location of the sides of the 'casing' on the underside of the Matrix.
Cut two pieces of cardboard the same length as your sides and about 1 cm tall.
Paint these black.
After the paint has dried, hot glue the pieces to the underside of your Matrix where you marked previously. When the glue has dried, your Matrix should be able to slide back and forth on the 'casing' but not wiggle from side to side.
Cut off any cardboard that overlaps on the NeoPixel strips when the Matrix is placed over the 'casing'.
Step 2: Modeling wire (I use 18 AWG solid copper wire, but 18 - 22 AWG steel wire should work as well)
Cut a length of about 12 inches of wire and straighten it.
Poke the wire horizontally through the cardboard back of the Matrix to form a hinge.
Center the wire so there is an equal length on each side of the Matrix, then bend it down to a 90 degree angle on either side.
Slot the wire through the base of the rover, and center the Matrix so the back and sides of the 'casing' are aligned properly and the matrix sits comfortably on top. (bend any other wires that get in the way)
Wrap and thread the wire around and through the base enough times to secure it.
Cut off any extra wire.
Step 3: USB Power
This step is optional but recommended. if you choose to skip this step, simply use a short programming cable to power the Mega.
Cut a spare programming cable about 2 inches from either end.
Expose and strip the 4 wires inside each end.
Solder the ends back together, matching the wires by color.
Use electrical tape to insulate the joints individually, and then wrap some around the whole thing as additional protection.
Plug the USB end into the portable battery. The other end we will plug into the Mega after we program it.
And done! You can now move on to make a remote to control your robot with.
Step 8: A Custom IR Remote
So, after having gotten fed up over the crappy range and the sluggish response of the little IR remotes that come with most Arduino kits, and multiple failed attempts to use remotes from old r/c helicopters, cars, and Hexbugs, I decided to just give up and build my own (and it isn't much better [yet]). I used a Nano as the processor, and decided on a console-style design for the controls. A standard breadboard is the perfect size for this, and that made everything really easy. I used a joystick breakout module and three pushbuttons in this iteration, and it wouldn't be at all hard to add a fourth button or even a second stick if I wanted to. I decided to use the stick to drive the robot, and the four buttons (there's one built into the joystick) to cycle through display modes.
So let's get cracking! If you are comfortable with schematics, follow the one above. Otherwise, I transcribed it below.
Step 1: Nano
My Nanos didn't come with pre-soldered pin headers, and I assume yours didn't either. Solder male pin headers on to the Nano, minus the 2x3 ICSP header (or whatever that thing I never use is called).
Bread the Nano on the center of the breadboard.
Step 2: Buttons
Tip: Use solid core wire in short lengths instead of Dupont jumpers to reduce the amount of extra wire in your way.
Bread 3 pushbuttons on the same side of the breadboard as the USB port of the Nano. You can put them in whatever configuration is comfortable to you, so long as no pins are overlapping.
Pick a side on each button, and wire it to the Ground rail at the top (when the USB port is pointing left, see the image above) of the breadboard.
Wire the other side of each button to pins A0 through A2 on the Nano.
Step 3: Joystick
Wire VCC and GND from the joystick to 5V and GND on the Nano.
Wire the button pin to A3 on the Nano.
Wire the X axis pin to pin A4 and the Y axis pin to pin A5 on the Nano.
Step 4: IR LEDs
Bread the LEDs in the top row of the breadboard, with the Cathodes (short legs) in the Ground row and the Anodes (long legs) in the other row.
Connect the row with the Anodes to pin 9 on the Nano. (might need changing later)
Step 5: Grounding and Power (Note: the diagram shows AAA batteries, you should have AAs)
Connect the Ground row at the top of the breadboard to Ground on the Nano.
Connect the negative (black) lead of the battery box to GND on the Nano.
(Optional) Wire a side pin of a slide switch to Vin on the Nano, and the positive (red) wire of the 4 AA battery box to the center pin of the switch.
If you chose not to follow the above, simply connect the red wire of the battery box to Vin on the Nano to turn the Nano on, and disconnect it to turn the Nano off. Leave it disconnected for now.
Finished! You can now move on to the Programming aspect of this project.
Step 9: Programming! (Finally!)
So, you now have a robot and a remote. All you need is to program them. Good luck....
Just kidding. I will provide you with the code for each, but keep in mind, there is a lot that can be improved. Some of the things I am working on currently is running two of the built in displays simultaneously, because the current version can only run one of the three at a time, so I have to choose whether I want to run the beacon and Knight Rider LEDs, the side LEDs, or the Matrix, but I can't use all at once because of conflicting delays in the functions and for() loops. Also, the displays won't deactivate when another is activated, which should be relatively easy to fix, I just haven't had the time.
Another thing that needs work is the controls. I use spam logic with my controller; it just keeps sending the signal in the hope that the receiver will get it. The way I have it set up, the controls need to be Line-of-Sight to the receiver to work as advertized. I am working on a version that uses RF transmitters and receivers so that I can get more response over longer distances. However, progress has been slow so I'm stuck improving this code until I get V2.1 up and running.
You will need to download and install the libraries I linked here for the code to work.
EDIT: I added a bit to the Uno Drive Code to account for a L298 motor driver in addition to the Adafruit one I use.
//IRcontroller.ino for the remote #include <IRremote.h> // initialize the library IRsend irsend; // Supposedly this is pin 3. On mine it ended up being pin 9. // Test the pwm pins with a red led to find which one blinks when you send a signal, // then attach your ir leds to that pin. const int stickX = A3; // the joystick pins const int stickY = A4; const int buttonUp = A0; // the number of the pushbutton pins const int buttonDown = A2; const int buttonLeft = A1; const int buttonStick = A5; //Below are all the variables that the code uses to debounce the buttons and read the joystick position. //You won't need to mess with any of these unless you add more buttons or joysticks. int up = 0; int down = 0; int right = 0; int left = 0; int stick = 0; int x = 0; int y = 0; int sum = 0; int xIn; int yIn; int xVal; int yVal; int upState; // the current reading from the input pins int downState; int leftState; int stickState; int lastUpState = HIGH; // our buttons are active low int lastDownState = HIGH; int lastLeftState = HIGH; int lastStickState = HIGH; // the previous reading from the input pins unsigned long lastDebounceUp = 0; // the last time the output pin was toggled unsigned long lastDebounceDown = 0; unsigned long lastDebounceLeft = 0; unsigned long lastDebounceStick = 0; unsigned long debounceDelay = 50; // the debounce time; increase if the output flickers void debounce(){ //Make debouncing a function to save space in the main loop. It also makes it really easy to disable the buttons. // read the state of the switch into a local variable: int readingUp = digitalRead(buttonUp); int readingDown = digitalRead(buttonDown); int readingLeft = digitalRead(buttonLeft); int readingStick = digitalRead(buttonStick); // check to see if you just pressed the button (i.e. the input went from LOW to HIGH), and you've waited long enough since the last press to ignore any noise: if (readingUp != lastUpState) { // If the switch changed, due to noise or pressing lastDebounceUp = millis(); // reset the debouncing timer } if (readingDown != lastDownState) { lastDebounceDown = millis(); } if (readingLeft != lastLeftState) { lastDebounceLeft = millis(); } if (readingStick != lastStickState) { lastDebounceStick = millis(); } if ((millis() - lastDebounceUp) > debounceDelay) { // whatever the reading is at, it's been there for longer than the debounce delay, so take it as the actual current state: if (readingUp != upState) { // if the button state has changed: upState = readingUp; if (upState == LOW) { //run this function only if th button is low and wasn't low before. goForward(); up = 0; } } } if ((millis() - lastDebounceDown) > debounceDelay) { //repeat for each button if (readingDown != downState) { downState = readingDown; if (downState == LOW) { goBackward(); down = 0; } } } if ((millis() - lastDebounceLeft) > debounceDelay) { if (readingLeft != leftState) { leftState = readingLeft; if (leftState == LOW) { turnLeft(); left = 0; } } } if ((millis() - lastDebounceStick) > debounceDelay) { if (readingStick != stickState) { stickState = readingStick; if (stickState == LOW) { lightOn(); stick = 0; } } } // save the reading. Next time through the loop, it'll be the lastButtonState: lastUpState = readingUp; lastDownState = readingDown; lastLeftState = readingLeft; lastStickState = readingStick; } void goForward(){ //these functions are what send the signals irsend.sendNEC(0x12341234, 32); // all this is is a random set of numerals I typed in with what I think is a 32 bit signature. I dunno, I didn't memorize the "readme" file for the IRremote library. digitalWrite(13, HIGH); // blink the built in LED to tell me the code was sent. delay(40); digitalWrite(13, LOW); } void goBackward(){ //same thing, different numbers. //The receiver is supposed to read these numbers the exact way they are typed here, but mine will only do it short-range due to atmospheric light distortion (I think) irsend.sendNEC(0x43214321, 32); //which means you need to go through and test the controller with the IRrecv Demo code on the receiving end to find what exactly is being received digitalWrite(13, HIGH); delay(40); digitalWrite(13, LOW); } void turnRight(){ irsend.sendNEC(0x32413241, 32); //etc. digitalWrite(13, HIGH); delay(40); digitalWrite(13, LOW); } void turnLeft(){ irsend.sendNEC(0x23142314, 32); digitalWrite(13, HIGH); delay(40); digitalWrite(13, LOW); } void lightOn(){ irsend.sendNEC(0x80808080, 32); digitalWrite(13, HIGH); delay(40); digitalWrite(13, LOW); } void readX(){ //find the analog position of the joystick xIn = analogRead(stickX); xVal = map(xIn, 0,1023, 0, 255); } void readY(){ yIn = analogRead(stickY); yVal = map(yIn, 0, 1023, 0, 255); } void determineStick(){ // and then send a signal accordingly. I haven't yet added codes for diagonal positions, so right now it's just -, <, ^, v, or >. if (xVal >= 137){ x = 0; irsend.sendNEC(0x56835683, 32); digitalWrite(13, HIGH); delay(40); digitalWrite(13, LOW); } if (xVal <= 117){ x = 0; irsend.sendNEC(0x38653865, 32); digitalWrite(13, HIGH); delay(40); digitalWrite(13, LOW); } if (yVal >= 137){ y = 0; irsend.sendNEC(0x68536853, 32); digitalWrite(13, HIGH); delay(40); digitalWrite(13, LOW); } if (yVal <= 117){ y = 0; irsend.sendNEC(0x35863586, 32); digitalWrite(13, HIGH); delay(40); digitalWrite(13, LOW); } if (xVal >= 117 && yVal >= 117 && xVal <= 137 && yVal <= 137){ //stick is centered irsend.sendNEC(0x34566543, 32); //this is the halt signal. Very important if you want to not crash. digitalWrite(13, HIGH); delay(40); digitalWrite(13, LOW); } } void sendVal(){ //this is for debugging purposes, it just prints the state of the controls to the serial monitor Serial.println(xVal); Serial.println(yVal); Serial.println(upState); Serial.println(downState); Serial.println(leftState); Serial.println(stickState); } void setup(){ //initialize everything pinMode(buttonUp, INPUT_PULLUP); pinMode(buttonDown, INPUT_PULLUP); pinMode(buttonLeft, INPUT_PULLUP); //use the internal pullup to ensure there are no 'floating' pins pinMode(buttonStick, INPUT_PULLUP); pinMode(stickX, INPUT); pinMode(stickY, INPUT); // these are analog potentiometers, so a pullup is pointless Serial.begin(9600); //debugging only } void loop() { // sendVal(); // uncomment for serial debugging debounce(); //debounce the buttons readX(); //get the joystick positions readY(); determineStick(); //send signals accordingly delay(10); //let the processor rest before looping again }
// UnoDriveCode.ino for the Uno motor driver #include <AFMotor.h> #include <IRremote.h> //initialize the libraries IRrecv irrecv(A0); //define the ir pin decode_results results; //this is where it stores the received code AF_DCMotor motorRight(3, MOTOR34_64KHZ); // initialize the motors AF_DCMotor motorLeft(4, MOTOR34_64KHZ); #define driver1 //change 1 to 2 if you use the L298 driver, 1 is for the Adafruit driver //vars for L298 motor driver //Pins for motor A const int MotorA1 = 4; const int MotorA2 = 7; //Pins for motor B const int MotorB1 = 8; const int MotorB2 = 9; int f; //these are the variables I use to determine what function to run int b; int l; int r; int h = 1; int up; int down; int stick; int left; void modeFinder(){ //this determines what displays are used by the Mega if (up == 1){ analogWrite(A1, 255); analogWrite(A2, 0); }; if (down == 1){ analogWrite(A1, 0); analogWrite(A2, 255); }; if (left == 1){ analogWrite(A1, 0); analogWrite(A2, 0); }; if (stick == 1){ analogWrite(A1, 0); analogWrite(A2, 255); }; } #ifdef driver1 void fd (){ //these are the drive methods. they don't use delays because that will interfere with the IR code motorRight.run(FORWARD); motorLeft.run(FORWARD); } void bk (){ motorRight.run(BACKWARD); motorLeft.run(BACKWARD); } void lt (){ motorRight.run(RELEASE); motorLeft.run(FORWARD); } void rt (){ motorLeft.run(RELEASE); motorRight.run(FORWARD); } void halt (){ motorLeft.run(RELEASE); motorRight.run(RELEASE); } #endif #ifdef driver2 void fd (){ //these are the drive methods. they don't use delays because that will interfere with the IR code digitalWrite (MotorA1, HIGH); digitalWrite (MotorA2, LOW); digitalWrite (MotorB1, HIGH); digitalWrite (MotorB2, LOW); } void bk (){ digitalWrite (MotorA1 , LOW); digitalWrite (MotorA2, HIGH); digitalWrite (MotorB1, LOW); digitalWrite (MotorB2, HIGH); } void lt (){ digitalWrite (MotorA1, LOW); digitalWrite (MotorA2, LOW); digitalWrite (MotorB1, HIGH); digitalWrite (MotorB2, LOW); } void rt (){ digitalWrite (MotorA1, HIGH); digitalWrite (MotorA2, LOW); digitalWrite (MotorB1, LOW); digitalWrite (MotorB2, LOW); } void halt (){ digitalWrite (MotorA1, LOW); digitalWrite (MotorA2, LOW); digitalWrite (MotorB1, LOW); digitalWrite (MotorB2, LOW); } #endif void translateIR(){ //this translates a HEX value from the IR code into something more useful switch(results.value){ //joystick stuff: case 0x6B85EA37: //these values may be different based on your transmitter and receiver. Use the IRrecv Demo code to determine what does what on your system. f = 1; // go forward b = 0; l = 0; r = 0; h = 0; break; case 0xC4BB93AF: b = 1; // go backward f = 0; l = 0; r = 0; h = 0; break; case 0xDB867814: r = 1; //turn right l = 0; f = 0; b = 0; h = 0 ; break; case 0x2A008D37: l = 1; //turn left r = 0; f = 0; b = 0; h = 0; break; // button stuff: case 0x37E66545: stick = 1; //enable a mode up = 0; left = 0; down = 0; break; case 0xD3C482CC: up = 1; //enable a different mode down = 0; stick = 0; left = 0; break; case 0x8E2CCDCC: left = 1; // etc. up = 0; down = 0; stick = 0; break; case 0xDE3E4AE7: down = 1; up = 0; left = 0; stick = 0; break; case 0x1ADA75F7: f = 0; // VERY IMPORTANT: this is the Halt function that stops the motors. Set this to the code received when the stick on the controller is in the center position. b = 0; // If nothing else, this is the part you want to work most properly. Otherwise your robot will crash. A lot. Trust me. I found out the hard way. l = 0; r = 0; h = 1; break; }; } void setup (){ irrecv.enableIRIn(); //start the receiver pinMode(MotorA1, OUTPUT); pinMode(MotorA2, OUTPUT); pinMode(MotorB1, OUTPUT); pinMode(MotorB2, OUTPUT); digitalWrite(MotorA1,LOW); digitalWrite(MotorA2,LOW); digitalWrite(MotorB1,LOW); digitalWrite(MotorB2,LOW); motorRight.setSpeed(250); //set the motor speeds. I have them at max, but I would suggest turning these down until you get used to the controls. motorLeft.setSpeed(250); pinMode(A1, OUTPUT); //set these as outputs to the Mega pinMode(A2, OUTPUT); } void loop(){ if (irrecv.decode(&results)){ // when it receives a signal, it turns it into something useful translateIR(); irrecv.resume(); } modeFinder(); // make sure we have the displays set to do something</p><p> while(f == 1){ //when the conditions are met, go forward fd(); modeFinder(); // we still want a display while driving if (irrecv.decode(&results)){ // and we want it to be able to stop and run a new code. translateIR(); irrecv.resume(); } }; while (b == 1){ // same thing, just going backwards bk(); modeFinder(); if (irrecv.decode(&results)){ translateIR(); irrecv.resume(); } }; while (r == 1){ rt(); modeFinder(); if (irrecv.decode(&results)){ translateIR(); irrecv.resume(); } }; while (l == 1){ lt(); modeFinder(); if (irrecv.decode(&results)){ translateIR(); irrecv.resume(); } }; while (h == 1){ //And here again is the important part. A robot that drives is great. A robot that drives and can stop driving is better. halt(); modeFinder(); if (irrecv.decode(&results)){ translateIR(); irrecv.resume(); } }; delay(50); // let the processor rest }
// DisplayCode.ino for the Mega /* Please note: a lot of this code is annotated from public examples and other projects I have built. It is by no means perfect, and the various parts are scattered about in such a way that even I have a hard time keeping track of what came from where and why I have it or what it is used for. Forgive me for any shortcomings on my explanations here; if you have a question, comment or suggestion, by all means, please share. This is a learning experience for most of us, anyway, even me. */ #include <Adafruit_GFX.h> #include <Adafruit_NeoMatrix.h> #include <Adafruit_NeoPixel.h> //initialize the libraries #define PIN 6 #define PIN2 2 #define PIN3 3 //define the NeoPixel pins Adafruit_NeoPixel stripLeft = Adafruit_NeoPixel(5, PIN2, NEO_GRB + NEO_KHZ800); Adafruit_NeoPixel stripRight = Adafruit_NeoPixel(5, PIN3, NEO_GRB + NEO_KHZ800); //strip settings. change these if you use different LEDs Adafruit_NeoMatrix matrix = Adafruit_NeoMatrix(10, 8, PIN, NEO_MATRIX_TOP + NEO_MATRIX_LEFT + NEO_MATRIX_ROWS + NEO_MATRIX_ZIGZAG, NEO_GRB + NEO_KHZ800); // matrix settings. Change these to accomodate your matrix size, pattern, LED density, and the position of the first LED. const uint16_t colors[] = { matrix.Color(255, 0, 0), matrix.Color(255, 135, 0), matrix.Color(255, 255, 0), matrix.Color(0, 255, 0), matrix.Color(0, 255, 255), matrix.Color(0, 0, 255), matrix.Color(255, 0, 255) }; //color presets for the text // And now, a safety warning from our friends at Adafruit Industries: //. // Thank you for your patience. You may now continue with the code. // vars for the displays. I think some are unused, but I'm not sure which. int bPin[] = {22, 23, 26, 27, 24, 25}; //the beacon pins NOTE: leave these in an array, it makes timing a lot easier int d = 55; //delay timing int rPin[] = {A12, A11, A10, A9, A8, A7}; // the scanner pins " " " " " " " " " " " " int count = 0; // this is used for the LED arrays int timer = 30; // not sure about this one // these are for the Matrix... I think. int x = matrix.width(); int pass = 0; int c = 1; int s = 1; int pc; //display mode vars: int mode = 0; int input1; int input2; void getModes(){ //find what mode the Uno is telling us to be in: input1 = analogRead(A5); input2 = analogRead(A6); if (input1 >= 127 && input2 <= 127){ mode = 0; }; if (input1 <= 127 && input2 >= 127){ mode = 1; }; if (input1 >= 127 && input2 >= 127){ mode = 2; }; if (input1 <= 127 && input2 <= 127){ mode = 3; }; } // the below function is for the simultaneous display of the beacon and the scanner. The delays here are a pain to explain, so I won't unless you specifially ask in the comments. void beacon(){ digitalWrite(bPin[0], HIGH); //light 1 on delay(d); digitalWrite(bPin[0], LOW); //light 1 off, light 2 on digitalWrite(bPin[1], HIGH); delay(d); if (c>=4 || c <= 0 ){ //basically, figure out which scanner led is on, set the next one to turn on, and reverse when it gets to one end or the other. s = -s; }; pc = c; c = c + s; digitalWrite(rPin[c], HIGH); digitalWrite(rPin[pc], LOW); digitalWrite(bPin[1], LOW); //light 2 off, light 3 on digitalWrite(bPin[2], HIGH); delay(d); digitalWrite(bPin[2], LOW); //light 3 off, light 4 on digitalWrite(bPin[3], HIGH); //So on and so forth delay(d); if (c>=4 || c <= 0 ){ s = -s; }; pc = c; c = c + s; digitalWrite(rPin[c], HIGH); digitalWrite(rPin[pc], LOW); digitalWrite(bPin[3], LOW); digitalWrite(bPin[4], HIGH); delay(d); digitalWrite(bPin[4], LOW); digitalWrite(bPin[5], HIGH); delay(d); if (c>=4 || c <= 0 ){ s = -s; }; pc = c; c = c + s; digitalWrite(rPin[c], HIGH); digitalWrite(rPin[pc], LOW); digitalWrite(bPin[5], LOW); } //Beacon single pulse void pulse(){ for(int i = 0; i <= 5; i++){ digitalWrite(bPin[i], HIGH); } delay(50); for(int i = 0; i <= 5; i++){ digitalWrite(bPin[i], LOW); } delay(1000); } //beacon triple pulse void triP); } //Beacon double pulse void dualP); } // this is the cycling rainbow for the side strips of NeoPixels. This part is also a pain to explain, and is basically the same as what is in the Strandtest code. void rainbow(uint8_t wait) { uint16_t i, j; for(j=0; j<256; j++) { for(i=0; i<stripLeft.numPixels(); i++) { stripLeft.setPixelColor(i, Wheel1((i+j) & 255)); } } for(j=0; j<256; j++) { for(i=0; i<stripRight.numPixels(); i++) { stripRight.setPixelColor(i, Wheel2((i+j) & 255)); } } stripRight.show(); //show the strips simultaneously, not one at a time. stripLeft.show(); delay(wait); } // a rather complex set of ifs for the position of the rainbow on the strip: uint32_t Wheel1(byte WheelPos) { WheelPos = 255 - WheelPos; if(WheelPos < 85) { return stripLeft.Color(255 - WheelPos * 3, 0, WheelPos * 3); } if(WheelPos < 170) { WheelPos -= 85; return stripLeft.Color(0, WheelPos * 3, 255 - WheelPos * 3); } WheelPos -= 170; return stripLeft.Color(WheelPos * 3, 255 - WheelPos * 3, 0); } uint32_t Wheel2(byte WheelPos) { WheelPos = 255 - WheelPos; if(WheelPos < 85) { return stripRight.Color(255 - WheelPos * 3, 0, WheelPos * 3); } if(WheelPos < 170) { WheelPos -= 85; return stripRight.Color(0, WheelPos * 3, 255 - WheelPos * 3); } WheelPos -= 170; return stripRight.Color(WheelPos * 3, 255 - WheelPos * 3, 0); } // a similar rainbow, but slightly different. void rainbowCycle(uint8_t wait) { uint16_t i, j; for(j=0; j<256*5; j++) { // 5 cycles of all colors on wheel for(i=0; i< stripLeft.numPixels(); i++) { stripLeft.setPixelColor(i, Wheel1(((i * 256 / stripLeft.numPixels()) + j) & 255)); } for(i=0; i< stripRight.numPixels(); i++) { stripRight.setPixelColor(i, Wheel2(((i * 256 / stripRight.numPixels()) + j) & 255)); } stripLeft.show(); stripRight.show(); delay(wait); } } void setup() { for (count=0;count<6;count++) { //set the LEDs as outputs pinMode(rPin[count], OUTPUT); } for (count=0;count<6;count++) { pinMode(bPin[count], OUTPUT); } stripLeft.begin(); // initialize all the strips stripRight.begin(); stripLeft.show(); stripRight.show(); matrix.begin(); //init the matrix matrix.setTextWrap(false); // makes text scroll nicely matrix.setBrightness(40); // more brightness = more amps needed, and it makes it painful to read, so we keep it relatively low matrix.setTextColor(colors[0]); //set the color } void loop() { getModes(); while (mode == 0){ triPulse(); // change this if you want dual or single pulse getMode(); }; while (mode == 2){ rainbowCycle(10); // you can change this for the other rainbow or any other animation you want (provided you add the code for it) getModes(); }; while (mode == 3){ beacon(); // not much to change here getModes(); }; while (mode == 1){ matrix.setTextWrap(false); //just to make sure... matrix.setTextSize(1); // with a bigger matrix, a larger size might be desirable. 1 is perfect for an 8 pixel tall matrix, as the largest letters are exactly 8 pixels tall. matrix.setRotation(2); // orientation and direction of the text scrolling. here it is set from front to back (on my robot), but you can change it if you like. matrix.fillScreen(0); matrix.setCursor(x, 0); matrix.print(F("<Warning: Fart Zone>")); // I was bored. And I ate too many beans. So why not? You will probably want to change the message anyway. if(--x < -144) { // change '144' to a smaller multiple of 12 if you have a shorter message, and change it to a larger multiple of 12 for longer messages. not sure why, but it likes 12s. x = matrix.width(); if(++pass >= 7) pass = 0; matrix.setTextColor(colors[pass]); //change the color each time through. } matrix.show(); delay(55); // make this smaller to increase scroll speed, and larger to decrease scroll speed. 55 gives a good speed with not too much 'jerkiness' in pixel transitions getModes(); }; }
Step 10: Testing Everything
So now, you should have read and/or uploaded all of the code to the appropriate boards. Now we need to test it.
Charge up your batteries, get out your goggles, it's time to get this thing rollin'!
Step 1: Power!
Insert the batteries into the battery boxes and power on the Mega with the portable USB power bank.
A display should begin running at this point.
Step 2: Modes
Press the three buttons and the joystick to cycle through modes.
If you can only get some modes to work or it jitters between modes, ensure the Uno has enough power. Check to ensure the wires are in their correct places.
Switch around the values in the Uno code if nothing else works.
Step 3: Driving
Using the joystick, test the drive methods.
If it moves the wrong direction, flip the wires around on the motor(s).
Adjust the motor speeds if it moves too fast or too slow. If you have the speed all the way up, and it still moves extremely slow, try new batteries and ensure the wires are correctly placed.
If it becomes unresponsive, or doesn't stop moving, catch it and hold it still, holding the receiver close to the transmitter, in direct line of sight, until it responds. Alternatively, reset the Uno. This is a problem I am still working on, and I will post updates as I fix it.
Step 11: Results, Potential Upgrades, Lessons Learned, and Afterthoughts
Results:
Overall, I'd call this project a success. It took me a few months to complete due to technical issues, other occupations and sideprojects I was conducting in the meantime. There are still a few issues, though.
Potential Upgrades:
As far as the displays go, the NeoPixels take up a huge amount of memory to run, especially if you want a proper image or animation. A dedicated board such as the Colorduino or the Rainbowduino would probably be better suited to run the Matrix, but as I have neither I decided to simply use the Mega.
The R/C system is the biggest candidate for improvement, as it is currently spotty at best. I am working on 433 Mhz version, but as I said earlier, work is slow. Updates will be posted as I finish them.
Lessons Learned:
NeoPixels are finicky things, especially to test and solder. Research would have been better conducted before rather than after, something I shall remember in days to come.
Wiring is best done with power disconnected, something I discovered after my first receiver went up in metaphorical smoke.
Afterthoughts:
This project would have been better with more experience on my part. My lack of coding expertise has led to some shortcomings in function, but overall the result was acceptable.
In lieu of this, I challenge you to improve this project, to make it better, to improve upon it in every way.
These are the Instructables of Dangerously Explosive, his lifelong mission "to boldly build what you want to build, and more."
You can find the rest of my projects here.
2 Discussions
Great stuff! Very entertaining, and very thorough.
Thanks! I'm glad to know you liked it. | https://www.instructables.com/id/ROB-the-Ultimate-RC-NeoPixelWS2812B-LED-Display-Pl/ | CC-MAIN-2018-34 | en | refinedweb |
.
If.
Accuracy.
Latency.
Here’s what my RepeatRule spec looks like at this point:
[<sequence>] [<nested_repetitions>] [<terminal_sequence>] [terminal <terminal>] [[[and] repeat [that]] <n> times]
- sequence: My largest collection of repeatable commands
- nested_repetitions: Commands that contain Repetition elements
- terminal_sequence: A small set of commands that contains raw dictation
- terminal: Same as terminal_sequence, except not repeated
Organization
The multiedit grammar doesn’t provide an obvious way to create commands that only run in particular contexts.. Here’s what this looks like:
class RepeatRule(CompoundRule): def __init__(self, name, repeated, terminal, context): spec = "[<sequence>] [<nested_repetition>] [<terminal_sequence>] [terminal <terminal>] [[[and] repeat [that]] <n> times]" extras = [ ... ] defaults = { ... } CompoundRule.__init__(self, name=name, spec=spec, extras=extras, defaults=defaults, exported=True, context=context)
Creating mutually exclusive contexts is the trickier part. I use some helper logic:
def combine_contexts(context1, context2): """Combine two contexts using "&", treating None as equivalent to a context that matches everything.""" if not context1: return context2 if not context2: return context1 return context1 & context2 class ContextHelper: """Helper to define a context hierarchy in terms of sub-rules but pass it to dragonfly as top-level rules.""" def __init__(self, name, context, element): """Associate the provided context with the element to be repeated.""" self.name = name self.context = context self.element = element self.children = [] def add_child(self, child): """Add child ContextHelper.""" self.children.append(child) def add_rules(self, grammar, parent_context): """Walk the ContextHelper tree and add exclusive top-level rules to the grammar.""" full_context = combine_contexts(parent_context, self.context) exclusive_context = full_context for child in self.children: child.add_rules(grammar, full_context) exclusive_context = combine_contexts(exclusive_context, ~child.context) grammar.add_rule(RepeatRule(self.name + "RepeatRule", self.element, terminal_element, exclusive_context))
I can then build a tree of these and add all the rules with a single method call:
global_context_helper = ContextHelper("Global", None, single_action) emacs_context_helper = ContextHelper("Emacs", AppContext(title = "Emacs editor"), emacs_element) global_context_helper.add_child(emacs_context_helper) ... add more helpers to the tree ... grammar = Grammar("repeat") global_context_helper.add_rules(grammar, None)
Conclusion
To see the complete code for my repeated grammar, check out _repeat.py on GitHub.
There are lots of other ways you could organize a grammar, and I’m sure I will make more improvements over time. Please add comments if you have ideas!
11 thoughts on “Designing Dragonfly grammars”
I use a set of “literalizer” words (currently a set of one, “English”) that causes the following command to not be considered the start of a command. Pretty easy to get used to.
How do you implement that? Once Dragon picks up the next word as a command, it seems like that would break up the previous Dictation element.
Lol, your question forces me into a funny realization of how our systems differ. In my case, I parse commands myself. All utterances must start as a command of some form, but my “ContinuousRule”s take the final heard utterance, and if they find commands in the designated dictation, the dictation is split, first part updated for the command to receive, and the rest is mimicked. So say I have commands c1 thru c4, and c1 has a spec “c1 ” — then “c1 some stuff c2 some stuff c3” will mimic everything after “c1 somestuff”.
So long story short, I parse and split the heard words, so with that understanding, it’s pretty easy to notice “English” and disallow interpreting what follows as a command start.
My repository (linked to here if you click my name) can be a bit complex, but if you look through it and have any questions, feel free to ask.
Woops, all my <RunOn&rt instances disappeared out of that post.
I haven’t tried this, but my intuition says you should divide things into cases by how frequently command words need to be part of dictation.
For command words that are frequently part of dictation, choose a different command word, a non-English one if necessary.
For infrequent cases, don’t worry too much about it and just provide non-chain-able versions of the dictation commands. Yes, this means in these cases you need a pause before and afterwards, but this will be much simpler than trying to worry about escape words. Because these cases are rare, the extra pause time should not matter very much and well worth a substantial savings in implementation complexity of not having to implement escape words.
I notice that Ben of VoiceCode.IO precedes all his templates by “Quinn” so for example creating an if-then-else template is “Quinn if else”. The use of the word “Quinn” avoids issues with identifiers like “if_need_title”.
Using combinations of words that don’t appear in arbitrary dictation may also work. “key if” for “if ” is much less likely to cause problems while still being memorable.
My escape word (terminal) precedes the command, so in a sense it is equivalent to defining non-chainable commands (e.g. I just think of “terminal score” as the non-chainable version of “score”).
I do something similar to Ben for templates (I use “plate”). So in one command I can say “plate if bang score found tab” for
I just want to say, I love your solution to the problem described in the first paragraph of the “Accuracy” section.
Thanks! This is one of the few grammar patterns I haven’t really changed since I first implemented it.
I mentioned this before, but I went a different way. I can run pretty much every command together, except for the few I specifically decide for whatever reason will not chain due to specific needs.
Mind you, I do have a moderate lag issue, but I think my lag is from some really deeply nested rules of many alternatives repeated so many potential times, i.e. an over complex grammar in terms of it’s rule set, and not in the manner in which I’m dispatching the chained rules.
I’m actually getting ready (meaning maybe years from the look at it :-p) at putting up a python package called “dragonflow” that let’s someone just plug in use my method. It’ll be at my chajadan github.
Alright, I’ve made a preliminary release of what is now called draganfluid (dragonflow was taking on pypi). Quick out of the box chaining support. | http://handsfreecoding.org/2015/04/25/designing-dragonfly-grammars/?replytocom=107 | CC-MAIN-2018-34 | en | refinedweb |
US6990395B2 - Energy management device and architecture with multiple security levels - Google PatentsEnergy management device and architecture with multiple security levels Download PDF
Info
- Publication number
- US6990395B2US6990395B2 US10689895 US68989503A US6990395B2 US 6990395 B2 US6990395 B2 US 6990395B2 US 10689895 US10689895 US 10689895 US 68989503 A US68989503 A US 68989503A US 6990395 B2 US6990395 B2 US 6990395B2
- Authority
- US
- Grant status
- Grant
- Patent type
-
- Prior art keywords
- power management
- energy
- data
- network
- power
-142—Define device, module description using xml format file
-/10—Energy trading, including energy flowing from end-user application to grid
Abstract
Description
This application is a continuation under 37 C.F.R. § 1.53(b) of U.S. patent application Ser. No. 10/627,244 filed Jul. 24, 2003, the entire disclosure of which is hereby incorporated by reference. U.S. patent application Ser. No. 10/627,244 is a continuation-in-part under 37 C.F.R. § 1. U.S. patent application Ser. No. 09/814,436 filed Mar. 22, 2001 now U.S. Pat. No. 6,751,562, is a continuation-in-part under 37 C.F.R. § 1.53(b) of U.S. patent application Ser. No. 09/723,564 filed Nov. 28, 2000. U.S. patent application Ser. No. 09/723,564 filed Nov. 28, 2000 U.S. Pat. No. 5,650,936, the entire disclosure of which was energy., electrical energy; and wherein the network interface is further operative to incrementally receive one of a power management command and power management data encoded as an Extensible Markup Language (XML) document from the network, the XML document being received as a plurality of segments, wherein the network interface is capable of processing at least one received of the plurality of segments and extracting the one of the power management command and power management data therefrom prior to receiving all of the plurality of segments.
In addition, energy, and further operative to generate one of a power management command and power management data related thereto; and wherein the network interface is further operative to receive the one of a power management command and power management data from the processor, incrementally generate an XML document comprising the one of a power management command and power management data, the XML document being generated as a plurality of segments, wherein each of the plurality of segments is communicated to the network as it is generated.
The preferred embodiments further relate to a method, in an energy management device, of transmitting a communication from the energy management device over a network coupled with the energy management device. In one embodiment, the method includes; generating a set of data to be communicated over the network as an XML document; transforming each of the data into an XML format as it is generated; and communicating each of the XML formatted data over the network as it is transformed; releasing at least one resource utilized by the XML formatted data from the energy management device as it is communicated; and repeating the transforming and the communicating until the entire the set of data has been communicated.
In addition, the preferred embodiments further relate to a method, in an energy management device, of receiving a communication from a network coupled with the energy management device. In one embodiment, the method includes; receiving data comprising one of a plurality of portions of an XML document from the network; as the data is received, determining when the received one of the plurality of portions comprises processable XML code; and when enough data to process has been received, processing the portion to interpret the processable XML code contained therein; and repeating the receiving, determining and processing until all of the XML document has been received..orida./power measuring devices/power measuring devices. For example this existing Internet infrastructure can be used to simultaneously push out billing, load profile, or power quality data to a large number of IED/power measurement and control devices, is typically email, Telnet, file transfer protocol (“FTP”), trivial file transfer protocol (“TFTP”) or proprietary systems, both unsecured and secure/encrypted..I Meter manufactured by ABB in Raleigh, N.C. Such conversion devices are known in the art.
A communications architecture is described requestor. Simple Mail Transfer Protocol (“SMTP”), Multipurpose Internet Mail Extensions (“MIME”) or Post Office Protocol (.
The Power Management Application 211 utilizes the architecture 100 and comprises power management application components which implement the particular power management functions required by the application 211. 211.
In one preferred embodiment the architecture 100 comprises IED's 102-109 connected via a network 110 and back end servers 120, 121, 122, 123, 124 which further comprise software which utilizes protocol stacks to communicate. IED's 102-109 can be-109-19., 131 or the customer 132, 133 of the electrical power distribution system 101. SOAP allows a program running one kind of operating system to communicate with the same kind, or another kind of operating system, by using HTTP and XML as mechanisms for the information exchange.
Furthermore, the application 211 ED.
In one embodiment, a Billing/Revenue Management component on a back end server receives the billing and revenue computations over the network 307 from the billing/revenue management component 315 c c power quality events and disturbances. In one embodiment, when the IED detects an event, it processes a where it is encrypted before email protocol packaging 321 b takes place. Once the data 330 has been encrypted and packaged, the message is passed through the remaining IP layers 326 a includes authentication or encryption, or alternately the Security Sub-layer 321 a a a. T he data is encrypted with two different encryption keys so only the utility can decrypt the power quality data and only the customer can decrypt the billing data.
In operation the IED monitors the power distribution system 300
For example, an IED 502 is connected to a power line 500 and associated load 501. The IED 502 measures power usage by the load and by converting a kWh or kVa pulse 511 into data 512 and transmits this consumption data 514 over a network 510 to a usage and consumption management application component operating on a back end server. The Usage and consumption management component receives and tracks cost and usage 516, 518 and upon receiving costs 520 and the process is complete 530. TED.
In an alternate embodiment the back end servers
Referring to
As described above,.
Using XML to transfer data within a power monitoring system, as was described above, can further simplify the interoperability among IED's and between IED's and other software or hardware systems/devices. Given the limitations of modem data communications methodologies, and that uses of these methodologies continuously push the boundaries of newer/updated technologies, it is typically not possible to communicate an entire data file, such as an XML document, substantially instantaneously from one device to another. Alternatively, where it is possible to communicate an entire data file substantially instantaneously, it may be advantageous to break up the file as described and communicated in multiple pieces.
Typically, when transferring data, such as XML documents, between devices, or otherwise receiving or transmitting such data to or from a source, such as a storage device, network or other device, etc., the entirety of the data is broken down into a collection of subsets compatible with the communications medium being used. For example, communications over the Internet are typically broken up into multiple packets. These subsets are then substantially sequentially transmitted or “streamed” over that medium to the receiver who receives the data. The data may be broken down into blocks, packets or other subsets for transmission, the sequence or collection of these subsets being referred to as a “data stream.” These transmitted subsets may then be transmitted, in order, out of order, via the same or different communications paths, to the receiver who reassembles the communication. Thereby, there is a minimum latency incurred before the first of the subsets are received and an overall delay incurred until all of the subsets have been received. Further, when all of the subsets of the data are needed to process the whole of the data, the receiving device is limited to waiting for all of the subsets to arrive prior to being able to process that data. In addition to this delay, the receiving device also incurs the resource cost of having to be capable of receiving and buffering the entire data. It will be appreciated, that while the disclosed embodiments are discussed in relation to communications over wide area networks, such as the Internet or intranets, communications over other mediums are also contemplated, such as communications over device interfaces, such as PCI, serial-ATA, USB, or IEEE 1391 (“fire wire”). In addition, the disclosed embodiments are discussed in relation to “processing” of data wherein processing may include generation/transformation of that data for subsequent transmission, in whole or in part, by the transmitting device and translation/interpretation/transformation/parsing of received data by the receiving device. Generally, it will be appreciated that the techniques disclosed herein may be equally applicable to both the transmission and reception of data.
For example, XML documents may be processed using several different methods which are typically memory or otherwise resource intensive. Using such methods, the complete XML document, or the data equivalent to the complete document, is stored/maintained in the memory of the processing system while it is undergoing such processing. As the memory capacity of the processing system is normally not a concern for most applications, such methods, which operate on the complete XML document, work just fine. Unfortunately, typical IED devices, as described above, include only a limited amount of available memory to store XML messages/documents as they are received or generated. In many cases, manufacturing costs and/or design limitations necessitate including a smaller amount of memory than is necessary to store the entirety of a typical XML document prior to or during parsing or transmitting. Further, as new and more resource intensive uses and technologies are developed, the memory capacity and processing resources of existing devices, sufficient when the device was first installed, quickly becomes insufficient. Thus there is a need for a method of incrementally generating and parsing XML documents without having to buffer the complete XML document in memory before, during or after processing. This reduces the processing delay incurred in processing the XML due to the latency of receiving the first processable increment and reduces the amount of memory and other resources that must be made available or otherwise reserved for the processing. Further, memory capacity as a limitation on future development is mitigated.
The disclosed embodiments relate to decomposing an XML document into a set of incremental processable segments or portions and “pipelining” the processing, i.e. generation or transformation, and communication, i.e. reception or transmission, of those segments through independent processing stages, such that processing of each segment can be completed independently of the processing of the other segments. The types of data that can be transferring using incrementally generated or parsed documents include load profile data, power quality data, energy data, alert messages, status information, energy management configuration information, price, cost and temperature data as well as other power management data or derivations therefrom, as described above, presently known or later developed.
An XML document is essentially a collection of ASCII characters arranged in a meaningful format according to the XML specification. As the ASCII data of an XML document is incrementally received, it initially has no meaning to the receiving system other than that it is expected to be an XML document. This subset of data is not necessarily valid or well formed XML code but may be fed to a parser or other interpreter. However, the parser or interpreter may be unable to begin the parsing or interpretation of the XML code until enough data has been received to provide minimally recognizable XML code, referred to herein as a processable segment. An example of a processable segment is an XML token which is the smallest unit of XML code. Examples of XML tokens include “<”, “>”, element names, character data, etc. An XML fragment is a subset of an XML document built from XML tokens and other XML fragments according to the syntax of XML. An XML fragment may comprise one or more processable segments.
As used herein, the terms sequential and incremental are used to refer to applications or computer operating environments/architectures which operate incrementally on data streams, i.e. operate on a portion of the data, or a subset thereof, in the stream as it is received and generating an output, before processing the next data, portion, or subset thereof. Typically, a sequential or incremental application is implemented, as described herein, as a series of processing stages, each stage consuming its input from a source or prior stage and generating an output to a sink or to the next stage before consuming and processing the next input. Although each processing stage must wait for the previous stage to generate its output before processing that output, the processing performed by the particular stage is independent of the other stages, i.e. is self-contained and does not involve the other stage, e.g. the previous stage is free to accept and process additional input. This is distinguished from the use of the term “sequential” for computer architectures to distinguish these architectures from parallel processing architectures which generally refer to the way in which computer program code is executed by the architecture, both of which may be used to implement the disclosed embodiments. Additionally, other software development techniques may also be used to implement the disclosed embodiments such as threading and concurrency.
Incrementally generated or parsed/transformed/translated XML has the advantage that neither the complete XML document, nor the representation of the XML as a data structure, need be stored in the memory of the processing device at any one time for the generation or parsing operation to be able to operate, thereby reducing the delay in commencing processing or transmission of the data and the amount of memory needed, and consequently the manufacturing cost, with the additional advantage of extending the capabilities of the IED to handle XML documents of unlimited or unspecified size. Only those parts of the document currently being processed need to be contained in the memory. Once processed, those parts can be discarded, such as by using an automated garbage collection mechanism as provided by Java, or alternatively, by manually releasing the resources using specific functions. When receiving data from a source, the unprocessed parts can be in transit or otherwise contained in the lower communication layers or maintained at the source device that is communicating the source data or XML document to the XML processing device, until needed.
Processing of the incremental data may include encrypting or decrypting the incremental data, generating authentication information, compressing or translating the incremental data into other formats and protocols. Generating authentication information can include signing the data stream to verify the identity of the originating IED and validity of the data stream to consumers of the data stream or alternatively adding a hashing function to verify that the data has not been tampered with while being transported across the intervening network. In one embodiment, the transformations are implemented using a lazy functional programming language such as Haskell, or a declarative language such as eXtensible Stylesheet Language (“XSL”) which allows for a simple implementation, i.e. the intermediate data in the chains can be deleted once it has been processed. Other examples of transformations that can be implemented include transforming from an original character set to a different character set, transforming from an internal data structure into a formatted human or machine readable structure or other transformations necessary to transmit the data over a communication link from the IED to a client computer. Some sample character sets or formats that can be used include Unicode UTF-8, Unicode UTF-16, ASCII, Latin encoding and base64 formats. The eventual transmission protocol used can also be viewed as another transformation of the data. The data can be transmitted in SMTP, MIME, HTTP, XML, wireless binary XML, SOAP, CSV and other protocols and formats as known in the art. The transformation may also be a unity transform, where the inputs arc copied to the outputs, but the transform has the side effect of computing some value based on the inputs such as a one-way hash function that can be appended onto the data stream or transmitted using other methods.
While the embodiments described below utilize Internet based electronic mail as the medium to communicate the incrementally generated XML documents, it will be appreciated that other communications protocols may also be used, such as HTTP, FTP, etc. Further, non-TCP/IP based protocols may also be used. TCP/IP based protocols offer the advantage of standardization, a packet based implementation, widespread implementation and ease of use.
Regarding the generation of documents for subsequent communication, there are known methods to incrementally generate data messages in simple formats, such as CSV. CSV formatted data messages simply consist of ASCII text or other low-level data encoding format, arranged in a prescribed record format wherein the records are defined by column delimiters, such as commas, and row delimiters, such as line-breaks. Each record is encoded independently of the other records. Incremental generation of such a file includes generating each row of data, including the data elements separated by column delimiters, followed by the row delimiter and providing that data row for subsequent processing, such as by providing it to the communications layer which subsequently transmits it to the destination, for example, in a packet making up the body of a simple email message.
Unfortunately, including data messages in the body of an electronic mail message requires additional custom logic on both the transmitting and receiving ends to insert and properly extract the message, especially where multiple separate messages are included. In addition, Internet based email messages are typically routed through many different intervening email systems and/or routers, all of which may not handle the message body in a standardized manner, potentially resulting in corruption of the message(s). This limits the complexity of the data messages that can be transmitted in this manner.
Additionally, such systems unreliably handle documents which utilize a markup-based format, such as XML. Such documents present additional complexities in that the encoding of data into the markup format at one point in time may affect the encoding of subsequent data into the markup format. For example, encoding one piece of data may require that a begin-tag be generated, for example to indicate the data type of the data to follow the tag, and further necessitating future generation of an end tag. Further, such tags may be nested. Should the body of the document be compromised, the XML code contained therein may no longer be well formed and/or valid, resulting in unreliable processing of the XML code. As discussed in more detail below, XML code is considered well formed if it is structured according to the syntax described in the XML specification. XML code is considered valid if the particular code fits within the overall XML schema. An XML schema represents the interrelationship between the attributes and elements of an XML object (for example, a document or a portion of a document). XML code can be well formed but not well valid.
In one embodiment, an incremental XML generator is coupled with an electronic mail generator which includes the incrementally generated XML document as a Multi-Purpose Internet Mail Extensions (“MIME”) attachment to the email message. MIME is an enhancement to the original electronic mail protocols allowing the exchange of messages including data files preserved in their original encoding and format. Using MIME, the data, such as the XML document, is attached to the email message as a MIME attachment. The incremental XML document generator incrementally generates an XML document and attaches the data of the document to the email message as the email message is streamed/transmitted to the email server. In this embodiment, both the communications process and the receiver of the message are unaware that the MIME attachment is being generated as it is transmitted.
By attaching the XML document as a MIME attachment, the capabilities of the MIME protocol can be utilized. For example, multiple XML documents may be attached to the same email message. Further, standardized handling of MIME encoded messages ensures that those messages arrive intact. By convention, electronic mail applications do not modify email attachments but are permitted to modify the body of the message. In addition, standardized handling of MIME encoded messages upon receipt permits automated or user directed execution of applications based on the content of the MIME attachment. This may greatly simplify subsequent processing of MIME encoded messages.
In operation, an exemplary IED, according to the present embodiment, generates some data, such as a table of voltage measurements. This data is to be ultimately supplied to a remote management system for subsequent processing. The IED passes the data values, in real time as they are generated or, alternatively, in batch, in whole or in part, to the incremental XML document generator. As the generator receives the data, the data is encoded into the XML format and transferred to the communication layer for transmission. Wherein the data is delivered to the XML generator in batch, the generator may transfer the encoded XML data, as it is encoded, to the communications layer for transmission. At the beginning of transmission, the communications layer will generate the proper email message headers, setting up the XML document as a MIME attachment, and transmit them to the recipient system. Thereafter, as each piece of data is handed off to the XML generator, encoded in XML and transferred to the communications layer, the communications layer will transmit that data as part of MIME attachment. Once the last piece of data is provide to the XML generator, encoded and transferred to the communications layer for transmission, the XML generator will indicate to the communications layer that the message has ended. The communications layer will then properly terminate the message and attachment. To the receiving system, the email message will appear to be a standard MIME encoded email containing an XML document. The receiving system is not aware that the mail message (and the included MIME attachment) has been broken into multiple pieces for transmission.
There are several different methods to incrementally generate the XML document, and all are contemplated by the disclosed embodiments. One technique is to build a tree structure that contains the data that will be represented in the final document, perform the required transformations into XML encoded format on the data in the tree structure format, and finally serialize the document from the tree structure by traversing the tree structure and incrementally outputting the data stored at each node as each node is traversed. This technique, however, requires that the source data be complete and be placed into the tree prior to the transformations. Other techniques may perform the necessary transformations as the data is generated.
Parsing XML documents normally involves reading and instantiating the complete document in memory into some sort of data structure. Once the document has been parsed, then queries can be run against the data structure and the relevant data extracted. Even if the entire contents of data structure is needed by the application, the parsed data representing the XML document may not be in the data format for easy processing by the application. In that case the entire data structure must be processed again into yet another data structure in order to simplify the processing by the application. However, as described above, in memory constrained systems, there may not be enough memory to store the complete data structures that are required in this transformation if the complete document needs to be transformed at the same time. Using incremental techniques, specific parts of the document can be incrementally transformed to extract the required data.
There are three main methods of parsing an XML document. A pull parser describes an application consuming the XML document which asks for a specific element or the next element from the XML tree and the parser provides the next available element that meets the specified criteria. A push parser describes an application which is notified when specific elements (or each new element) are experienced when reading the document. Finally tree parsers read and process the complete document into a tree structure and general queries can then be executed on the complete document to extract any set of elements requested. In the case of XML this is typically a Document Object Model (“DOM”) tree data structure. Once the DOM tree is built then queries for specific items can be run to extract the required information. Since on embedded systems memory usage is a concern in general, using DOM trees is normally not possible for large documents. Currently in embedded applications, instantiating the complete tree in memory and limiting the document size is the typical approach but with a severe size limitations on the document size. Larger documents can not be processed in embedded systems without significant memory use in current implementations. Pull and push parsers can be implemented in an incremental fashion and are preferred for uses where memory is constrained. The incremental approach may be useful for parsing data from a resource that is transitory like a HTTP data stream. Once the HTTP data stream has been closed the original data is unavailable.
Chaining multiple transformation blocks together may be used when parsing incoming data or generating outgoing data. Chaining blocks together is when the output of one block is fed into the input of the next block. The original input data is transformed by each block as it is executed. Each block does a transformation of the data to a format that can be easier to work with in the specific embedded system. Each chained transformation block can transform the data in some way from the input to the output of the transformation block. Examples of individual transformations include decrypting, decompressing or changing character set or other representations of the data, or a unity transform where the inputs are copied to the outputs but the transform has the side effect of computing one or more pieces of data such as a one-way hash function. When parsing SOAP messages, enough of the SOAP message can be parsed to determine the function to dispatch the rest of the SOAP message to. Once determined, the function can be called to finish processing of the remainder of the SOAP message. The function will return, and the remainder of the SOAP message can be processed, mainly to check for correctness for the specific data format being used. In the case of XML encoding this correctness check could be to check for well-formedness by ensuring the elements processed before dispatch to the function are correctly closed. In an alternative embodiment the remainder of the SOAP message can be ignored once the function returns.
When XML messages are parsed, validation is typically done to verify that the message is well formed and valid. Well formed means the XML document has the correct syntax. Valid XML means that the XML document has been validated against rules that apply for that document, typically described in a natural language such as English, Spanish, etc, or a schema language such as Document Type Definition (“DTD”), Schematron, Relax NG, Resource Description Framework (“RDF”) Schema or an XML Schema Definition (“XSD”) document. The DTD, Relax NG, and XSD, and the Web Service Definition Language (“WSDL”) are metadata that describe the acceptable structure of XML document. RDF Schema describes the structure of a graph that is serialized in XML in the RDF format. RDF Schema is a data model typically serialized in XML. The metadata can be included either in the document itself or as a separate document referenced from with the XML document through a schema, DTD or namespace reference. The metadata describes the rules necessary to check the XML stream content to verify that the expected tags are present in the correct order and data types. Additionally the metadata can also describe the specific of binding values of elements and attributes to the data structures contained with the XML document. For embedded devices this metadata can alternatively be compiled into the devices code that validates instance documents as they are read, and optionally copies elements or attributes into data structures of appropriate type. A further alternative embodiment can be to compile the metadata when needed for some purpose such as validation on the device itself—“lazy” compilation. In this case the device embeds compilers which transform the metadata into machine code the device can execute or some intermediate code the device can further process into executable code. Examples include Java source code or virtual machine code.
In a system where multiple transformations are to be performed on a given piece of data, implementation of the transformative functions may vary.
As noted above, incremental generation and transmission of XML encoded documents can be performed transparently to the recipient, i.e. the incremental transmission looks identical to a normal transmission using TCP/IP based network protocols, aside from the potential increase in the number of transmitted data subsets. However, on the receiving end of the transmission, there are similar disadvantages in waiting for the entire message to be received before being able to commence processing/parsing of that message, i.e. XML document. Such disadvantages include, as described above, increased memory and resource allocation and the associated costs. It is therefore advantageous to implement incremental data consumption in the receiver to reduce the amount of memory and other processing resources necessary to process the received message.
As described above with respect to incremental generation of data, there are many ways to implement incremental consumption and processing of received data. In one embodiment, each transformation block 1320 is implemented as an independent code or functional block wherein the outputs and inputs of subsequent blocks are connected together. This is similar to “piping” data between command line programs in the Unix or Windows operating system command line environments. Each command performs at least one transformation of the input data and incrementally outputs the transformed data where it is then subsequently input into another command for more processing. In an alternative embodiment, the transformation blocks can be implemented as functional code modules executing within one or more threads or processes.
In an alternative embodiment
In one example of using the disclosed embodiments to transmit an XML document to an IED, the XML document comprises a series of commands to the IED, such as configuration commands. In operation, the sender incrementally sends the XML document to the IED. As the IED receives the data comprising the XML document, it continually checks the received data to determine whether or not it has received enough data to begin processing, i.e. a processable segment. Once enough data has been received, the processable segment is transferred to the processing logic which will transform the XML code into a form which can be processed by the IED to execute the encoded commands. One the processable segment is handed off to the processing logic, the IED returns to receive the next processable segment. In one embodiment, one or more buffer memories are provided to permit substantially simultaneous reception of processable segments. The process of receiving processable segments and handing them over to the processing logic continues until there are no more processable segments to receive. In the processing logic, each processable segment is passed through one or more independent transformations which collectively translate/interpret the encoded data and otherwise transform the data into a format comprehendible by the IED to execute the desired functions. As the processable segment is transformed, the logic implementing the transformation is free to accept the next processable segment. This process continues until the entire XML document has been received and processed.
In this way, processing of the XML document may begin as soon as the first processable segment is received, rather than upon receipt of the entire XML document. As each processable segment is received and processed, buffer memory storage and subsequent processing resources may be released for processing subsequent processable segments, thereby minimizing the necessary amount of buffer memory and processing resources.
In one embodiment, flow control may be implemented allowing the IED to control the incremental flow of data being received. Such flow control may include communications back to the sender of the data or controls at the communications layer to allow the IED to manage the flow of incoming data. This may be useful where the processing of a given processable segment is taking longer than expected, thereby causing a back-up in the pipeline. Further, where the received processable segments exceed the receiving buffer space, the IED can request that the sender further decompose the XML document into smaller processable segments. Flow controls may also be implemented to allow the received to re-request entire processable segments, or portions thereof, for example, where there has been an error in transmission.
In one embodiment of the incremental XML processing system described herein, the capability to handle errors is provided. Errors may occur if one or more processable segments of the XML document are too large to fit in the IED's available memory, are corrupted or otherwise comprised or the segment contains unrecognized or improper XML code, data or commands. When incrementally processing received XML, it is possible that many processable segments will be successfully processed prior to an error occurring. For a given XML document, it may be okay that only a portion of the overall file was successfully processed. In this case, the IED can attempt to re-request the unsuccessfully processed segment(s) or continue operations without those segment(s). Typically, however, if the entire XML document cannot be processed then none of the XML document should be processed. This may be the case for an XML document which contains a set of configuration commands and data. In such a case, partial configuration may result in an inoperable IED. Therefore, in one embodiment, a mechanism is provided which permits the IED to undo any completed processing the XML document when an error occurs. In one embodiment, the IED is capable of saving the complete operational state of the device prior to commencing processing of the XML document, or a processable segment thereof, and restoring that state should an error occur. In embodiments that are incapable of reversing the partial execution of the XML document, the XML document generator generates the XML code such that errors in processing some of the code does not affect the successfully processed code.
It is therefore intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that it is the following claims, including all equivalents, that are intended to define the spirit and scope of this invention. | https://patents.google.com/patent/US6990395B2/en | CC-MAIN-2018-34 | en | refinedweb |
AVS Device SDK Release Notes
This page contains release information about the Alexa Voice Service (AVS) Device SDK, including feature enhancements, updates, and resolved issues.
- Version 1.18.0 - February 19 2020
- Version 1.17.0 - December 10 2019
- Version 1.16.0 - October 25 2019
- Version 1.15.0 - September 25 2019
- Version 1.14.0 - July 09 2019
- Version 1.13.0 - May 22 2019
- Version 1.12.1 - April 02 2019
- Version 1.12.0 - February 25 2019
- Version 1.11.0 - December 19 2018
- Version 1.10.0 - October 24 2018
- Version 1.9.0 - August 28 2018
- Version 1.8.1 - July 09 2018
- Version 1.8.0 - June 27 2018
- Version 1.7.1 - May 04 2018
- Version 1.7.0 - April 18 2018
- Version 1.6.0 - March 08, 2018
- Version 1.5.0 - February 12 2018
- Version 1.4.0 - January 12 2018
- Version 1.3.0 - December 08 2017
- Version 1.2.1 - November 11 2017
- Version 1.2.0 - October 27 2017
- Version 1.1.0 - October 10 2017
- Version 1.0.3 - September 19 2017
- Version 1.0.2 - August 23 2017
- Version 1.0.1 - August 08 2017
- Version 1.0.0 - August 07 2017
- Version 0.6.0 - July 14 2017
- Version 0.5.0 - June 23 2017
- Version 0.4.1 - June 9 2017
- Version 0.4.0 - May 31 2017 (patch)
- Version 0.4.0 - May 24 2017
- Version 0.3.0 - May 17 2017
- Version 0.2.1 - May 03 2017
- Version 0.2.0 - March 27 2017 (patch)
- Version 0.2.0 - March 99 2017
- Version 0.1.0 - February 10 2017
Version 1.18.0 - February 19 2020
Enhancements
- Added support for Bluetooth Interface 2.0. This interface adds support for multiple simultaneous connections to Bluetooth peripherals.
- Added support for Audio Focus Manager Library (AFML) Multi Activity. This interface enhances the behavior of a device so it can handle more than one Activity per Channel.
- Added the
obfuscatePrivateDatalogging method to help remove certain data from logs.
- Updated
MediaPlayerObserverInterfaceto include metadata about playback states.
- Added SDK extension point. You can integrate CMake projects into the SDK without cloning those projects into a subdirectory.
Bug fixes
- Fixed Mac/OSX issue that caused an unresponsive Sample App when not connected to the internet.
- Fixed issue that prevented sample app from exiting various states.
- Fixed
UIManagerissue that caused an error in the logs when the device with built without the wake word enabled.
- Fixed volume issue that caused timers to ascend in volume when setting up ascending alarms.
- Fixed alert volume issue that caused any changes to the alert volume to notify observers.
- Fixed EQ issue where changes to the EQ band levels didn't notify observers.
Known Issues
- Build errors can occur on the Raspberry Pi due to incorrect linking of the atomic library. A suggested workaround is to add the following
set(CMAKE_CXX_LINK_FLAGS "${CMAKE_CXX_LINK_FLAGS} -latomic")to the top most CMake file.
- The WebVTT dependency required for
captionsisn't supported for Windows/Android.
- Exiting from the setting option takes you back to the Options Menu directly. It doesn't provide a message to indicate that you're back in the main menu.
- Failing Unit Tests and AIP Unit tests are disabled on Windows problem by upgrading to ADB version 1.0.40.
- If a device loses a network connection, the lost connection status isn't returned though local TTS.
- ACL encounters issues if it receives audio attachments but doesn't consume them.
-.
Version 1.17.0 - December 10 2019
Enhancements
- Added support for captions for TTS. This enhancement allows you to print on-screen captions for Alexa voice responses.
- Added support for SpeechSynthesizer Interface 1.3. This interface supports the new
captionsparameter.
- Added support for AudioPlayer Interface 1.3. This interface supports the new
captionsparameter.
- Added support for Interaction Model 1.2.
- Added support for System 2.0.
- Added support for Alerts Interface 1.4.
- Added support for Alarm Volume Ramp (Ascending Alarms in the Companion App). This feature lets the user enable alarm fade in. You enable this feature in the sample app through the settings menu.
- Added support to use certified senders for URI path extensions. This change allows you to specify the URI path extension when sending messages with
CertifiedSender::sendJSONMessage
- Added new
Metricsinterfaces and helper classes. These additions help you create and consume
Metricsevents.
- Interfaces -
MetricRecorderInterface,
MetricSinkInterface.
- Helper Classes -
DataPointStringBuilder,
DataPointCounterBuilder,
DataPointDurationBuilder,
MetricEventBuilder.
- Added support for the following AVS endpoint controller capabilities:
- Added
PowerResourceManagerInterface. This interface allows the SDK to control power resource levels for components such as the
AudioInputProcessorand
SpeechSynthesizer.
- Added
AlexaInterfaceCapabilityAgent. This Capability Agent handles common directives and endpoint controller capabilities support by
Alexa.AlexaInterface.
- Added
AlexaInterfaceMessageSenderInterface. Use this interface to send common events defined by the
Alexa.AlexaInterfaceinterface.
- Added
BufferingCompleteto
MediaPlayerObserverInterface. This method helps improve performance in poor networking conditions by making sure
MediaPlayerpre-buffers correctly.
- Added
SendDTMFto
CallManagerInterface. This method allows you to send DTMF tones during calls.
New build options
- CAPTIONS
- ADDED
CAPTIONS
- ADDED
LIBWEBVTT_LIB_PATH
- ADDED
LIBWEBVTT_INCLUDE_DIR
- METRICS
- ENDPONTS
- ADDED
ENABLE_ALL_ENDPOINT_CONTROLLERS
- ADDED
ENABLE_POWER_CONTROLLER
- ADDED
ENABLE_TOGGLE_CONTROLLER
- ADDED
ENABLE_RANGE_CONTROLLER
- ADDED
ENABLE_MODE_CONTROLLER
New dependencies
- To use captions, you must install a new dependency – the libwebvtt parsing library. WebVTT is a C/C++ library for interpreting and authoring WebVTT content. WebVTT is a caption and subtitle format designed for use with HTML5 audio and video elements.
Bug fixes
- Fixed
MimeResponseSink::onReceiveNonMimeDatadata issue that returned invalid data.
- Fixed data type issue that incorrectly used
finalResponseCodeinstead of
FinalResponseCodeId
- Fixed
UrlContentToAttachmentConverterissue that used the incorrect range parameter.
- Fixed
FinallyGuardlinking issue that caused problems compiling the SDK on iOS.
- Fixed bug when you spoke the Wake Word "Alexa" twice rapidly.
Known Issues
- The WebVTT dependency required for
captionsisn't supported for Windows/Android. by upgrading to ADB version 1.0.40.
- If a device loses a network connection, the lost connection status isn't returned through local TTS.
- ACL encounters issues if it receives audio attachments but doesn't consume them.
SpeechSynthesizerStateuses
GAINING_FOCUSand
LOSING_FOCUSas a workaround for handling intermediate states.
-.
- On some devices, Pressing
tand
hin the Sample App doesn't exit the assigned state.
- Exiting the settings menu doesn't provide a message to indicate that you're back in the main menu.
Version 1.16.0 - October 25 2019
Enhancements
- Added support for SpeechSynthesizer version 1.2 which includes the new
playBehaviourdirective.
- Added support for pre-buffering in the
AudioPlayerCapability Agent. You can optionally choose the number of instances
MediaPlayeruses in the AlexaClientSDKconfig.json. Important: the contract for
MediaPlayerInterfacehas changed. You must now make sure that the
SourceIdvalue returned by
setSource()is unique across all instances.
- The
AudioPlayerCapability Agent is now licensed under the Amazon Software License instead of the Apache Software License.
Bug Fixes
- Fixed Android issue that caused the build script to ignore
PKG_CONFIG_PATH. This sometimes caused the build to use a preinstalled dependency instead of the specific version downloaded by the Android script. For example openssl).
- Fixed Android issue that prevented the Sample app from running at the same time as other applications using the microphone. Android doesn't inherently allow two applications to use the microphone. Pressing the mute button now temporarily stops Alexa from accessing the microphone.
- Added 'quit' (– q) to the settings sub menu.
- Fixed outdated dependencies issue in the Windows install script.
- Fixed reminders issue that caused Notification LEDs to stay on, even after dismissing the alert.
For more details, check this.
Version 1.15.0 - September 25 2019
Enhancements
- Added
SystemSoundPlayerto
ApplicationUtilities.
SystemSoundPlayeris a new class that plays pre-defined sounds. Supported sounds include the wake word notification and the end of speech tone.
- Removed Echo Spatial Perception (ESP) functionality from the Alexa Voice Service (AVS) device SDK. Make sure you download and test your devices using the new AVS SDK sample app. If you're using an older version of the sample app, manually remove any references to ESP or errors occur during compile.
- Added
onNotificationReceivedto
NotificationsObserverInterface.
onNotificationReceivedbroadcasts when
NotificationsObserverInterfacereceives a new notification, instead of only sending the indicator state. This is important if you support a feature that requires a distinct signal for each notification received. See
NotificationsObserverInterfacefor more details.
- Added support for Multilingual Mode . With this enabled, Alexa automatically detects what language a user speaks by analyzing the spoken wake word and proceeding utterances. Once Alexa identifies the language, all corresponding responses are in the same language. The current supported language pairs are:
[ "en-US", "es-US" ]
[ "es-US", "en-US" ]
[ "en-IN", "hi-IN" ]
[ "hi-IN", "en-IN" ]
[ "en-CA", "fr-CA" ]
[ "fr-CA", "en-CA" ]
IMPORTANT: Specify the locales your device supports in the
localeCombinationsfield in
AlexaClientSDKConfig.json. This field can't be empty. If you don't set these values, the sample app fails to run.
- Added two new system settings,
Timezoneand
Locale.
- Timezone: For example, you can set the
defaultTimezoneto
America/Vancouver. If you don't set a value,
GMTis set as the default value. If you set a new timezone, make sure that your AVS system settings and default time zone stay in sync. To handle this, use the new class
SystemTimeZoneInterface.
- Locale: For example, you can set
defaultLocaleto
en-GB, instead of the default
en-US.
- The
SpeechRecognizerinterface now supports the following functionalities.
- Change wake word (
Alexasupported for now).
- Toggle start of request tone on/off.
- Toggle End of request tone on/off.
- Deprecated the
CapabilityAgents
Settings{…}library.
Settings {…}now maps to an interface that's no longer supported. You might need to update your code to handle these changes. Read Settings Interface for more details.
- Added support for three new locals: Spanish - United States (ES_US), Hindi - India (HI_IN), and Brazilian - Portuguese (PT_BR).
- Linked the atomic library to the sample app to prevent build errors on Raspberry Pi.
Bug Fixes
- Fixed resource leaking in EqualizerCapabilityAgent after engine shutdown.
- Issue 1391: Fixed an issue where SQLiteDeviceSettingsStorage::open tries to acquire a mutex twice, resulting in deadlock.
- Issue 1468: Fixed a bug in AudioPlayer::cancelDirective that causes a crash.
- Fixed Windows install script that caused the sample app build to fail. removed pip, flask, requests, and commentjson dependencies from the mingw.sh helper script.
- Fixed issue: notifications failed to sync upon device initialization. For example, let's say you had two devices. one turned on and the other turned off. After clearing the notification on the first device, it still showed up on the second device after turning it on.
- Fixed issue: barging in on a reminder caused it to stick in an inconsistent state, blocking subsequent reminders. For example, if a reminder was going off and you interrupted it, the reminder would get persist indefinitely. You could schedule future reminders, but they wouldn't play. Saying “Alexa stop” or rebooting the device fixed the “stuck” reminder.
Version 1.14.0 - July 09 2019
Enhancements
- AudioPlayer can now pre-buffer audio tracks in the Pre-Handle stage.
Bug Fixes
- Fixed an issue in the SQLite wrapper code where a
SQLiteStatementcaused a memory corruption issue.
- Fixed a race condition in SpeechSynthesizer that caused crashes.
- Fixed a
cmakeissue that specifies a dependency for Bluetooth incorrectly.
- Fixed a bug that caused Bluetooth playback to start automatically.
- Changed
supportedOperationsfrom a vector to a set in
ExternalMediaAdapterInterface.
- Corrected an issue where a
VolumeChangedevent had previously been sent when the volume was unchanged after
setVolumeor
adjustVolumehad been called locally.
- Fixed issue with
IterativePlaylistParserthat prevented live stations on TuneIn from playing on Android.
- Corrected the spelling of "UNINITIALIZED".
Version 1.13.0 - May 22 2019
Enhancements
- When an active Alert moves to the background, the alert now begins after a 10-second delay. Alert loop iteration delays can now no longer last longer than a maximum of 10 seconds, rather than depending on the length of the audio asset.
- Changed NotificationsSpeaker to use Alerts Volume instead of using the speaker volume.
- Allow customers to pass in an implementation of InternetConnectionMonitorInterface which will force AVSConnectionManager to reconnect on internet connectivity loss.
- Added an exponential wait time for retrying transmitting a message via CertifiedSender.
- When Volume is set to 0 and device is unmuted, volume is bumped up to a non-zero value. When Volume is set to 0 and Alexa talks back to you, volume is bumped up to a non-zero value.
- Deprecated HttpResponseCodes.h, which is now present only to ensure backward compatibility.
- The default base URLs for AVS have changed. These new URLs are supported by SDK v1.13 and later versions. Amazon recommends that all new and existing implementations update to v1.13 or later and use the new base URLs accordingly; however, Amazon will continue to support the legacy base URLs.
Bug Fixes
- Fixed bug where receiving a Connected = true Property change from BlueZ without UUID information resulted in BlueZBluetoothDevice transitioning to CONNECTED state.
- Fixed bug where MediaStreamingStateChangedEvent may be sent on non-state related property changes.
- Added null check to SQLiteStatement::getColumnText.
- Fixed an issue where database values with unescaped single quotes passed to miscStorage database will fail to be stored. Added a note on the interface that only non-escaped values should be passed.
- Fixed a loop in audio in live stations based on playlists.
- Fixed a race condition in TemplateRuntime that may result in a crash.
- Fixed a race condition where a recognize event due to a EXPECT_SPEECH may end prematurely.
- Changed the name of Alerts channel to Alert channel within AudioActivityTracker.
- Prevented STOP Wakeword detections from generating Recognize events.
- The SQLiteDeviceSettingsStorageTest no longer fails for Android.
Version 1.12.1 - April 02 2019
Bug Fixes
- Fixed a bug where the same URL was being requested twice when streaming iHeartRadio. Now, a single request is sent.
- Corrected pause/resume handling in
ProgressTimerso that extra
ProgressReportDelayElapsedevents are not sent to AVS.
Version 1.12.0 - February 25 2019
Enhancements
- Support was added for the
fr_CAlocale.
- The Executor has been optimized to run a single thread when there are active job in the queue, and to remain idle when there are not active jobs.
- An additional parameter of
alertTypehas been added to the Alerts capability agent. This will allow observers of alerts to know the type of alert being delivered.
- Support for programmatic unload and load of PulseAudio Bluetooth modules was added. To enable this feature, there is a new CMake option :
BLUETOOTH_BLUEZ_PULSEAUDIO_OVERRIDE_ENDPOINTS. Note that libpulse-dev is a required dependency of this feature.
- An observer interface was added for when an active Bluetooth device connects and disconnects.
- The
BluetoothDeviceManagerInterfaceinstantiation was moved from
DefaultClientto
SampleAppto allow applications to override it.
- The
MediaPlayerInterfacenow supports repeating playback of URL sources.
- The Kitt.AI wake word engine (WWE) is now compatible with GCC5+.
- Stop of ongoing alerts, management of MessageObservers, and management of CallStateObservers have been exposed through DefaultClient.
Bug Fixes
- Issue 953 - The
MediaPlayerInterfacerequirement that callbacks not be made upon a callers thread has been removed.
- Issue 1136 - Added a missing default virtual destructor.
- Issue 1140 - Fixed an issue where DND states were not synchronized to the AVS cloud after device reset.
- Issue 1143 - Fixed an issue in which the SpeechSynthesizer couldn't enter a sleeping state.
- Issue 1183 - Fixed an issue where alarm is not sounding for certain time zones
- Changing an alert's volume from the Alexa app now works when an alert is playing.
- Added missing shutdown handling for ContentDecrypter to prevent the
Stopcommand from triggering a crash when SAMPLE-AES encrypted content was streaming.
- Fixed a bug where if the Notifications database is empty, due to a crash or corruption, the SDK initialization process enters an infinite loop when it retries to get context from the Notifications capability agent.
- Fixed a race condition that caused
AlertsRendererobservers to miss notification that an alert has been completed.
Version 1.11.0 - December 19 2018
Enhancements
- Added support for the new Alexa DoNotDisturb interface, which enables users to toggle the do not disturb (DND) function on their Alexa built-in products.
- The SDK now supports Opus encoding, which is optional. To enable Opus, you must set the CMake flag to
-DOPUS=ON, and include the libopus library dependency in your build.
- The MediaPlayer reference implementation has been expanded to support the SAMPLE-AES and AES-128 encryption methods for HLS streaming.
- AES-128 encryption is dependent on libcrypto, which is part of the required openSSL library, and is enabled by default.
- To enable SAMPLE-AES encryption, you must set the
-DSAMPLE_AES=ONin your CMake command, and include the FFMPEG library dependency in your build.
- A new configuration for deviceSettings has been added. This configuration allows you to specify the location of the device settings database.
- Added locale support for es-MX.
Bug Fixes
- Fixed an issue where music wouldn't resume playback in the Android app.
- Now all equalizer capabilities are fully disabled when equalizer is turned off in configuration file. Previously, devices were unconditionally required to provide support for equalizer in order to run the SDK.
- Issue 1106 - Fixed an issue in which the
CBLAuthDelegatewasn't using the correct timeout during request refresh.
- Issue 1128 - Fixed an issue in which the
AudioPlayerinstance persisted at shutdown, due to a shared dependency with the
ProgressTimer.
- Fixed in issue that occurred when a connection to streaming content was interrupted, the SDK did not attempt to resume the connection, and appeared to assume that the content had been fully downloaded. This triggered the next track to be played, assuming it was a playlist.
- Issue 1040 - Fixed an issue where alarms would continue to play after user barge-in.
Version 1.10.0 - October 24 2018
Enhancements
- New optional configuration for EqualizerController . The EqualizerController interface allows you to adjust equalizer settings on your product, such as decibel (dB) levels and modes.
- Added reference implementation of the EqualizerController for GStreamer-based (MacOS, Linux, and Raspberry Pi) and OpenSL ES-based (Android) MediaPlayers. Note: In order to use with Android, it must support OpenSSL ES.
- New optional configuration for the TemplateRuntime display card value .
- A configuration file generator script,
genConfig.shis now included with the SDK in the tools/Install directory.
genConfig.shand it's associated arguments populate
AlexaClientSDKConfig.jsonwith the data required to authorize with LWA.
- Added Bluetooth A2DP source and AVRCP target support for Linux.
- Added Alexa for Business (A4B) support, which includes support for handling the new RevokeAuthorization directive in the Settings interface. A new CMake option has been added to enable A4B within the SDK,
-DA4B.
- Added locale support for IT and ES.
- The Alexa Communication Library (ACL),
CBLAUthDelegate, and sample app have been enhanced to detect de-authorization using the new
zcommand.
- Added
ExternalMediaPlayerObserver, which receives notification of player state, track, and username changes.
HTTP2ConnectionInterfacewas factored out of
HTTP2Transportto enable unit testing of
HTTP2Transportand re-use of
HTTP2Connectionlogic.
Bug Fixes
- Fixed a bug in which
ExternalMediaPlayeradapter playback wasn't being recognized by AVS.
- Issue 973 - Fixed issues related to
AudioPlayerwhere progress reports were being sent out of order or with incorrect offsets.
- An
EXPECTING, state has been added to
DialogUXStatein order to handle
EXPECT_SPEECHstate for hold-to-talk devices.
- Issue 948 - Fixed a bug in which the sample app got stuck in various states.
- Fixed a bug where there was a delay between receiving a
DeleteAlertdirective, and deleting the alert.
- Issue 839 - Fixed an issue where speech was being truncated due to the
DialogUXStateAggregatortransitioning between a
THINKINGand
IDLEstate.
- Fixed a bug in which the
AudioPlayerattempted to play when it wasn't in the
FOREGROUNDfocus.
CapabilitiesDelegateTestnow works on Android.
- Issue 950 - Improved Android Media Player audio quality.
- Issue 908 - Fixed compile error on g++ 7.x in which includes were missing.
Version 1.9.0 - August 28 2018
Enhancements
- Added Android SDK support, which includes new implementations of the MediaPlayer, audio recorder, and logger.
- Added the InteractionModel interface, which enables Alexa Routines.
- Optional configuration changes have been introduced. Now a network interface can be specified to connect to the SDK via curl.
- Build options can be configured to support Android.
- Added GUI 1.1 support. The
PlaybackControllerhas been extended to support new control functionality, and the
Systeminterface has been updated to support
SoftwareInfo.
Bug Fixes
- Installation script execution time has been reduced. Now a single branch clone is used, such as the master branch.
- Issue 846 - Fixed a bug where audio stuttered on slow network connections.
- Removed the
SpeakerManagerconstructor check for non-zero speakers.
- Issue 891 - Resolved incorrect offset in the
PlaybackFinishedevent.
- Issue 727 - Fixed an issue where the sample app behaved erratically upon network disconnection/reconnection.
- Issue 910 - Fixed a GCC 8+ compilation issue. Note: issues related to
-Wclass-memaccesswill still trigger warnings, but won't fail compilation.
- Issue 871 Issue 880 - Fixed compiler warnings.
- Fixed an issue where
PlaybackStutterStartedand
PlaybackStutterFinishedevents were not being sent due to a missing Gstreamer queue element.
- Fixed a bug where the
CapabilitiesDelegatedatabase was not being cleared upon logout.
- Fixed in issue that caused the following compiler warning “class has virtual functions but non-virtual destructor”.
- Fixed a bug where
BlueZDeviceManagerwas not properly destroyed.
- Fixed a bug that occurred when the initializer list was converted to
std::unordered_set.
- Fixed a build error that occurred when building with
BUILD_TESTING=Off.
Version 1.8.1 - July 09 2018
Enhancements
- Added support for adjustment of alert volume.
- Added support for deletion of multiple alerts.
- The following
SpeakerInterface::Typeenumeration values have changed:
AVS_SYNCEDis now
AVS_SPEAKER_VOLUME.
LOCALis now
AVS_ALERTS_VOLUME.
Version 1.8.0 - June 27 2018
Enhancements
- Added local stop functionality. This allows a user to stop an active function, such as an alert or timer, by uttering "Alexa, stop" when an Alexa-enabled product is offline.
- Alerts in the background now stream in 10 sec intervals, rather than continuously.
- Added support for France to the sample app.
- Updated the ACL MIME type for sending JSON to AVS from
"text/json"to
"application/json".
friendlyNamecan now be updated for
BlueZimplementations of
BlueZBluetoothDeviceand
BlueZHostController.
Bug Fixes
- Fixed an issue where the Bluetooth agent didn't clear user data upon reset, including paired devices and the
uuidMappingtable.
- Fixed
MediaPlayerthreading issues. Now each instance has it's own
glibmain loop thread, rather than utilizing the default main context worker thread.
- Fixed segmentation fault issues that occurred when certain static initializers needed to be initialized in a certain order, but the order wasn't defined.
Version 1.7.1 - May 04 2018
Enhancements
- Added the Bluetooth interface, which manages the Bluetooth connection between Alexa-enabled products and peer devices. This release supports
A2DP-SINKand
AVRCPprofiles. Note: Bluetooth is optional and is currently limited to Raspberry Pi and Linux platforms.
- Added new Bluetooth dependencies for Linux and Raspberry Pi.
- Device Capability Framework (
DCF) renamed to
Capabilities.
- Updated the non-CBL client ID error message to be more specific.
- Updated the sample app to enter a limited interaction mode after an unrecoverable error.
Bug Fixes
- Issue 597 - Fixed a bug where the sample app did not respond to locale change settings.
- Fixed issue where GStreamer 1.14
MediaPlayerTestfailed on Windows.
- Fixed an issue where a segmentation fault was triggered after unrecoverable error handling.
Version 1.7.0 - April 18 2018
Enhancements
AuthDelegateand
AuthServer.pyhave been replaced by
CBLAUthDelegate, which uses Code Based Linking for authorization.
- Added new properties to
AlexaClientSDKConfig:
cblAuthDelegate- This object specifies parameters for
CBLAuthDelegate.
miscDatabase- A generic key/value database to be used by various components.
dcfDelegate- This object specifies parameters for
DCFDelegate. Within this object, values were added for
endpointand
overridenDcfPublishMessageBody.
endpointis the endpoint for the Capabilities API.
overridenDcfPublishMessageBodyis the message that is sent to the Capabilities API. Note: Values in the
dcfDelegateobject will only work in
DEBUGbuilds.
deviceInfo- Specifies device-identifying information for use by the Capabilities API and
CBLAuthDelegate.
- Updated Directive Sequencer to support wildcard directive handlers. This allows a handler for a given AVS interface to register at the namespace level, rather than specifying the names of all directives within a given namespace.
- Updated the Raspberry Pi installation script to include
alsasinkin
AlexaClientSDKConfig.
- Added
audioSinkas a configuration option. This allows users to override the audio sink element used in
Gstreamer.
- Added an interface for monitoring internet connection status:
InternetConnectionMonitorInterface.h.
- The Alexa Communications Library (ACL) is no longer required to wait until authorization has succeeded before attempting to connect to AVS. Instead,
HTTP2Transporthandles waiting for authorization to complete.
- Device capabilities can now be sent for each capability interface using the Capabilities API.
- The sample app has been updated to send Capabilities API messages, which are automatically sent when the sample app starts. Note: A successful call to the Capabilities API must occur before a connection with AVS is established.
- The SDK now supports HTTP PUT messages.
- Added support for opt-arg style arguments and multiple configuration files. Now, the sample app can be invoked by either of these commands:
SampleApp <configfile> <debuglevel>OR
SampleApp -C file1 -C file2 ... -L loglevel.
Bug Fixes
- Fixed Issues 447 and 553 .
- Fixed the
AttachmentRenderSource's handling of
BLOCKING
AttachmentReaders.
- Updated the
Loggerimplementation to be more resilient to
nullptrstring inputs.
- Fixed a
TimeUtilsutility-related compile issue.
- Fixed a bug in which alerts failed to activate if the system was restarted without network connection.
- Fixed Android 64-bit build failure issue.
Version 1.6.0 - March 08, 2018
Enhancements
rapidJsonis now included with "make install".
- Updated the
TemplateRuntimeObserverInterfaceto support clearing of
displayCards.
- Added Windows SDK support, along with an installation script (MinGW-w64).
- Updated
ContextManagerto ignore context reported by a state provider.
- The
SharedDataStreamobject is now associated by playlist, rather than by URL.
- Added the
RegistrationManagercomponent. Now, when a user logs out all persistent user-specific data is cleared from the SDK. The log out functionality can be exercised in the sample app with the new command:
k.
Bug Fixes
- Issue 400 Fixed a bug where the alert reminder did not iterate as intended after loss of network connection.
- Issue 477 Fixed a bug in which Alexa's weather response was being truncated.
- Fixed an issue in which there were reports of instability related to the Sensory engine. To correct this, the
portAudio
suggestedLatencyvalue can now be configured.
Version 1.5.0 - February 12 2018
Enhancements
- Added the
ExternalMediaPlayerCapability Agent. This allows playback from music providers that control their own playback queue. Example: Spotify.
- Added support for AU and NZ to the
SampleApp.
- Firmware version can now be sent to Alexa via the
SoftwareInfoevent. The firmware version is specified in the config file under the
sampleAppobject as an integer value named
firmwareVersion.
- The new
fcommand was added to the
SampleAppwhich allows the firmware version to be updated at run-time.
- Optional configuration changes have been introduced. Now a default log level can be set for
ACSDK_LOG_MODULEcomponents, globally or individually. This value is specified under a new root level configuration object called
logger, and the value itself is named
logLevel. This allows you to limit the degree of logging to that default value, such as
ERRORor
INFO.
Bug Fixes
- Fixed bug where
AudioPlayerprogress reports were not being sent, or were being sent incorrectly.
- Issue 408 - Irrelevant code related to
UrlSourcewas removed from the
GStreamer-based MediaPlayerimplementation.
- The
TZvariable no longer needs to be set to
UTCwhen building the
SampleApp.
- Fixed a bug where
CurlEasyHandleWrapperlogged unwanted data on failure conditions.
- Fixed a bug to improve
SIGPIPEhandling.
- Fixed a bug where the filename and classname were mismatched. Changed
UrlToAttachmentConverter.hto
UrlContentToAttachmentConverter.h,and
UrlToAttachmentConverter.cppto
UrlContentToAttachmentConverter.cpp.
- Fixed a bug where after muting and then un-muting the GStreamer-based
MediaPlayerimplementation, the next item in queue would play instead of continuing playback of the originally muted item.
*
Version 1.4.0 - January 12 2018
Enhancements
- Added the Notifications Capability Agent. This allows a client to receive notification indicators from Alexa.
- Added support for the
SoftwareInfoevent. This code is triggered in the
SampleAppby providing a positive decimal integer as the "firmwareVersion" value in "sampleApp" object of the
AlexaClientSDKConfig.json. The reported firmware version can be updated after starting the
SampleAppby calling
SoftwareInfoSender::setFirmwareVersion(). This code path can be exercised in the
SampleAppwith the new command:
f.
- Added unit tests for Alerts.
- The GStreamer-based pipeline allows for the configuration of
MediaPlayeroutput based on information provided in
Config.
- Playlist streaming now uses a
BLOCKINGwriter, which improves streaming efficiency.
Bug Fixes
- Fixed bug where
SpeechSynthesizerwould not stop playback when a state change timeout was encountered.
- Fixed the
SampleApplicationdestructor to avoid segfaults if the object is not constructed correctly.
- Fixed bug where
AudioPlayerwould erroneously call
executeStop()in
cancelDirective().
- Issue 396 - Fixed bug for compilation error with GCC7 in
AVSCommon/SDKInterfaces/include/AVSCommon/SDKInterfaces/Audio/AlertsAudioFactoryInterface.h
- Issue 384 - Fixed bug that caused
AuthServer.pyto crash.
- Fixed bug where a long delay was encountered after pausing and resuming a large Audible chapter.
- Fixed bug that caused named timers and reminders to loop for an additional
loopCount.
- Fixed memory corruption bug in
MessageInterpreter.
- Fixed illegal memory accesses in
MediaPlayerlogging.
Version 1.3.0 - December 08 2017
- Enhancements
- ContextManager now passes the namespace/name of the desired state to StateProviderInterface::provideState(). This is helpful when a single StateProvider object provides multiple states, and needs to know which one ContextManager is asking for.
- The mime parser was hardened against duplicate boundaries.
- Added functionality to add and remove AudioPlayer observers to the DefaultClient.
- Unit tests for Alerts were added.
- Added en-IN, en-CA and ja-JP to the SampleApp locale selection menu.
- Added default alert and timer audio data to the SDK SampleApp. There is no longer a requirement to download these audio files and configure them in the json configuration file.
- Added support in SDS Reader and AttachmentReader for seeking into the future. This allows a reader to move to an index which has not yet arrived in the SDS and poll/block until it arrives.
- Added support for blocking Writers in the SharedDataStream class.
- Changed the default status code sent by MessageRequestObserverInterface::onSendCompleted() to SERVER_OTHER_ERROR, and mapped HTTP code 500 to SERVER_INTERNAL_ERROR_V2.
- Added support for parsing stream duration out of playlists.
- Added a configuration option ("sampleApp":"displayCardsSupported") that allows the displaying of display cards to be enabled or disabled.
- Named Timers and Reminders have been updated to fall back to the on-device background audio sound when cloud urls cannot be accessed or rendered.
- Bug Fixes
- Removed floating point dependencies from core SDK libraries.
- Fixed bug in SpeechSynthesizer where it's erroneously calling stop more than once.
- Fixed an issue in ContentFetcher where it could hang during destruction until an active GET request completed.
- Fixed a couple of parsing bugs in LibCurlHttpContentFetcher related to case-sensitivity and mime-type handling.
- Fixed a bug where MediaPlayerObserverInterface::onPlaybackResumed() wasn't being called after resuming from a pause with a pending play/resume.
- Fixed a bug in LibCurlContentFetcher where it could error out if data is written to the SDS faster than it is consumed.
- The GStreamer-based MediaPlayer reference implementation now uses the ACL HTTP configured client.
- An API change has been made to MediaPlayerInterface::setSource(). This method now takes in an optional offset as well to allow for immediately streaming to the offset if possible.
- Next and Previous buttons now work with Audible.
- Pandora resume stuttering is addressed.
- Pausing and resuming Amazon music no longer seeks back to the beginning of the song.
- libcurl CURLOPT_NOSIGNAL option is set to 1 () to avoid crashes observed in SampleApp.
- Fixed the timing of the PlaybackReportIntervalEvent and PlaybackReportDelayEvent as specified in the directives.
- Fixed potential deadlocks in MediaPlayer during shutdown related to queued callbacks.
- Fixed a crash in MediaPlayer that could occur if the network is disconnected during playback.
- Fixed a bug where music could keep playing while Alexa is speaking.
- Fixed a bug which was causing problems with pause/resume and next/previous with Amazon Music.
- Fixed a bug where music could briefly start playing between speaks.
- Fixed a bug where HLS playlists would stop streaming after the initial playlist had been played to completion.
- Fixed a bug where Audible playback could not advance to the next chapter.
- Fixed some occurrences of SDK entering the IDLE state during the transition between Listening and Speaking states.
- Fixed a bug where PlaybackFinished events were not reporting the correct offset.
- An API change has been made to MediaPlayerInterface::getOffset(). This method is now required to return the final offset when called after playback has stopped.
- Fixed a problem where AIP was erroneously resetting its state upon getting a cancelDirective() callback.
Version 1.2.1 - November 11 2017
- Enhancements
- Added comments to
AlexaClientSDKConfig.json. These descriptions provide additional guidance for what is expected for each field.
- Enabled pause and resume controls for Pandora.
- Bug Fixes
- Bug fix for issue #329 -
HTTP2Transportinstances no longer leak when
SERVER_SIDE_DISCONNECTis encountered.
- Bug fix for issue #189 - Fixed a race condition in the
Timerclass that sometimes caused
SpeechSynthesizerto get stuck in the "Speaking" state.
- Bug fix for a race condition that caused
SpeechSynthesizerto ignore subsequent
Speakdirectives.
- Bug fix for corrupted mime attachments.
Version 1.2.0 - October 27 2017
- Enhancements
- Updated MediaPlayer to solve stability issues
- All capability agents were refined to work with the updated MediaPlayer
- Added the TemplateRuntime capability agent
- Added the SpeakerManager capability agent
- Added a configuration option ("sampleApp":"endpoint") that allows the endpoint that SampleApp connects to to be specified without changing code or rebuilding
- Added very verbose capture of libcurl debug information
- Added an observer interface to observer audio state changes from AudioPlayer
- Added support for StreamMetadataExtracted Event. Stream tags found in the stream are represented in JSON and sent to AVS
- Added to the SampleApp a simple GuiRenderer as an observer to the TemplateRuntime Capability Agent
- Moved shared libcurl functionality to AVSCommon/Utils
- Added a CMake option to exclude tests from the "make all" build. Use "cmake
-DACSDK_EXCLUDE_TEST_FROM_ALL=ON" to enable it. When this option is enabled "make unit" and "make integration" still could be used to build and run the tests
- Bug fixes:
- Previously scheduled alerts now play following a restart
- General stability fixes
- Bug fix for CertifiedSender blocking forever if the network goes down while it's trying to send a message to AVS
- Fixes for known issue of Alerts integration tests fail: AlertsTest.UserLongUnrelatedBargeInOnActiveTimer and AlertsTest.handleOneTimerWithVocalStop
- Attempting to end a tap-to-talk interaction with the tap-to-talk button wouldn't work
- SharedDataStream could encounter a race condition due to a combination of a blocking Reader and a Writer closing before writing any data
- Bug-fix for the ordering of notifications within alerts scheduling. This fixes the issue where a local-stop on an alert would also stop a subsequent alert if it were to begin without delay
Version 1.1.0 - October 10 2017
- Enhancements
- Better GStreamer error reporting. MediaPlayer used to only report
MEDIA_ERROR_UNKNOWN, now reports more specific errors as defined in
ErrorType.h.
- Codebase has been formatted for easier reading.
DirectiveRouter::removeDirectiveHandler()signature changed and now returns a bool indicating if given handler should be successfully removed or not.
- Cleanup of raw and shared pointers in the creation of
Transportobjects.
HTTP2Streams now have IDs assigned as they are acquired as opposed to created, making associated logs easier to interpret.
AlertsCapabilityAgenthas been refactored.
- Alert management has been factored out into an
AlertSchedulerclass.
- Creation of Reminder (implements Alert) class.
- Added new capability agent for
PlaybackControllerwith unit tests.
- Added Settings interface with unit tests.
- Return type of
getOffsetInMilliseconds()changed from
int64_tto
std::chronology::milliseconds.
- Added
AudioPlayerunit tests.
- Added teardown for all Integration tests except Alerts.
- Implemented PlaylistParser.
- Bug fixes:
- AIP gets stuck in between states and refuses user input on network outage.
- SampleApp crashing if running for 5 minutes after network disconnect.
- Issue where on repeated user barge-ins,
AudioPlayerwould not pause. Specifically, the third attempt to “Play iHeartRadio” would not result in currently-playing music pausing.
- Utterances being ignored after particularly long TTS.
- GStreamer errors cropping up on SampleApp exit as a result of accessing the pipeline before it’s been setup.
- Crashing when playing one URL after another.
- Buffer overrun in Alerts Renderer.
- SampleApp crashing when issuing "Alexa skip" command with iHeartRadio.
HTTP2Transportnetwork thread triggering a join on itself.
HTTP2Streamrequest handling truncating exception messages.
AudioPlayerwas attempting an incorrect state transition from
STOPPEDto
PLAYINGthrough a
playbackResumed.
Version 1.0.3 - September 19 2017
- Enhancements
- Implemented
setOffSetin
MediaPlayer.
- Updated
LoggerUtils.cpp.
- Bug Fixes
- Bug fix to address incorrect stop behavior caused when Audio Focus is set to
NONEand released .
- Bug fix for intermittent failure in
handleMultipleConsecutiveSpeaks.
- Bug fix for
jsonArrayExistincorrectly parsing JSON when trying to locate array children.
- Bug fix for ADSL test failures with
sendDirectiveWithoutADialogRequestId.
- Bug fix for
SpeechSynthesizershowing the wrong UX state when a burst of
Speakdirectives are received.
- Bug fix for recursive loop in
AudioPlayer.Stop.
Version 1.0.2 - August 23 2017
- Removed code from AIP which propagates ExpectSpeech initiator strings to subsequent Recognize events. This code will be re-introduced when AVS starts sending initiator strings.
Version 1.0.1 - August 08 2017
- Added a fix to the sample app so that the
StateSynchronizationevent is the first that gets sent to AVS.
- Added a
POST_CONNECTEDenum to
ConnectionStatusObserver.
StateSychronizernow automatically sends a
StateSynchronizationevent when it receives a notification that
ACLis
CONNECTED.
- Added
make installfor installing the AVS Device SDK.
- Added an optional
make networkIntegrationfor integration tests for slow network (only on Linux platforms).
- Added shutdown management which fully cleans up SDK objects during teardown.
- Fixed an issue with
AudioPlayerbarge-in which was preventing subsequent audio from playing.
- Changed
Mediaplayerbuffering to reduce stuttering.
- Known Issues:
- Connection loss during some states keep the app in that state even after connection is regained. Pressing ‘s’ unsticks the state.
- Play/Pause media restarts it from the beginning.
SpeechSynthesizershows wrong UX state during a burst of Speaks.
- Quitting the sample app while
AudioPlayeris playing something causes a segmentation fault.
AudioPlayersending
PlaybackPausedduring flash briefing.
- Long Delay playing live stations on iHeartRadio.
- Teardown warnings at the end of integration tests.
Version 1.0.0 - August 07 2017
- Added
AudioPlayercapability agent.
- Supports iHeartRadio.
StateSynchronizerhas been updated to better enforce that
System.SynchronizeStateis the first Event sent on a connection to AVS.
- Additional tests have been added to
ACL.
- The
Sample Apphas been updated with several small fixes and improvements.
ADSLwas updated such that all directives are now blocked while the handling of previous
SpeechSynthesizer.Speakdirectives complete. Because any directive may now be blocked, the
preHandleDirective() / handleDirective()path is now used for handling all directives.
- Fixes for the following GitHub issues:
- A bug causing
ACLto not send a ping to AVS every 5 minutes, leading to periodic server disconnects, was fixed.
- Subtle race condition issues were addressed in the
Executorclass, resolving some intermittent crashes.
- Known Issues
- Native components for the following capability agents are not included in this release:
PlaybackController,
Speaker,
Settings,
TemplateRuntime, and
Notifications.
MediaPlayer
- Long periods of buffer underrun can cause an error related with seeking and subsequent stopped playback.
- Long periods of buffer underrun can cause flip flopping between buffer_underrun and playing states.
- Playlist parsing is not supported unless -DTOTEM_PLPARSER=ON is specified.
AudioPlayer
- Amazon Music, TuneIn, and SiriusXM are not supported in this release.
- Our parsing of urls currently depends upon GNOME/totem-pl-parser which only works on some Linux platforms.
AlertsCapabilityAgent
- Satisfies the AVS specification except for sending retrospective Events. For example, sending
AlertStartedEvent for an Alert which rendered when there was no internet connection.
Sample App:
- Any connection loss during some states keeps the app stuck in that state, unless the ongoing interaction is manually stopped by the user.
- The user must wait several seconds after starting up the sample app before the sample app is properly usable.
Tests:
SpeechSynthesizerunit tests hang on some older versions of GCC due to a tear down issue in the test suite
- Intermittent Alerts integration test failures caused by rigidness in expected behavior in the tests
Version 0.6.0 - July 14 2017
- Added a sample app that leverages the SDK.
- Added
Alertscapability agent.
- Added the
DefaultClientclass.
- Added the following classes to support directives and events in the
Systeminterface:
StateSynchronizer,
EndpointHandler, and
ExceptionEncounteredSender.
- Added unit tests for
ACL.
- Updated
MediaPlayerto play local files given an
std::istream.
- Changed build configuration from
Debugto
Release.
- Removed
DeprecatedLoggerclass.
- Known Issues:
MediaPlayer: Our
GStreamerbased implementation of
MediaPlayeris not fully robust, and may result in fatal runtime errors, under the following conditions:
- Attempting to play multiple simultaneous audio streams
- Calling
MediaPlayer::play()and
MediaPlayer::stop()when the MediaPlayer is already playing or stopped, respectively.
- Other miscellaneous issues, which will be addressed in the near future
AlertsCapabilityAgent:
- This component has been temporarily simplified to work around the known
MediaPlayerissues mentioned above
- Fully satisfies the AVS specification except for sending retrospective Events, for example, sending
AlertStartedfor an Alert which rendered when there was no Internet connection
- This component is not fully thread-safe, however, this will be addressed shortly
- Alerts currently run indefinitely until stopped manually by the user. This will be addressed shortly by having a timeout value for an alert to stop playing.
- Alerts do not play in the background when Alexa is speaking, but will continue playing after Alexa stops speaking.
Sample App:
- Without the refresh token being filled out in the JSON file, the sample app crashes on start up.
- Any connection loss during some states keeps the app stuck in that state, unless the ongoing interaction is manually stopped by the user.
- At the end of a shopping list with more than 5 items, the interaction in which Alexa asks the user if he/she would like to hear more does not finish properly.
Tests:
SpeechSynthesizerunit tests hang on some older versions of GCC due to a tear down issue in the test suite
- Intermittent Alerts integration test failures caused by rigidness in expected behavior in the tests
Version 0.5.0 - June 23 2017
- Updated most SDK components to use new logging abstraction.
- Added a
getConfiguration()method to
DirectiveHandlerInterfaceto register capability agents with Directive Sequencer.
- Added
ACLstream processing with pause and redrive.
- Removed the dependency of
ACLlibrary on
Authdelegate.
- Added an interface to allow
ACLto add/remove
ConnectionStatusObserverInterface.
- Fixed compile errors in KITT.ai,
DirectiveHandlerand compiler warnings in
AIPtests.
- Corrected formatting of code in many files.
- Fixes for the following GitHub issues:
Version 0.4.1 - June 9 2017
- Implemented Sensory wake word detector functionality.
- Removed the need for a
std::recursive_mutexin
MessageRouter.
- Added
AIPunit tests.
- Added
handleDirectiveImmediatelyfunctionality to
SpeechSynthesizer.
- Added memory profiles for:
- AIP
- SpeechSynthesizer
- ContextManager
- AVSUtils
- AVSCommon
- Bug fix for
MessageRouterTestaborting intermittently.
- Bug fix for
MultipartParser.hcompiler warning.
- Suppression of sensitive log data even in debug builds. Use CMake parameter -DACSDK_EMIT_SENSITIVE_LOGS=ON to allow logging of sensitive information in DEBUG builds.
- Fixed crash in
ACLwhen attempting to use more than 10 streams.
- Updated
MediaPlayerto use
autoaudiosinkinstead of requiring
pulseaudio.
- Updated
MediaPlayerbuild to suppport local builds of GStreamer.
- Fixes for the following GitHub issues:
- MessageRouter::send() does not take the m_connectionMutex
- MessageRouter::disconnectAllTransportsLocked flow leads to erase while iterating transports vector
- Build errors when building with KittAi enabled
- HTTP2Transport race may lead to deadlock
- Crash in HTTP2Transport::cleanupFinishedStreams()
- The attachment writer interface should take a
const void*instead of
void*
Version 0.4.0 - May 31 2017 (patch)
- Added
AuthServer, an authorization server implementation used to retrieve refresh tokens from LWA.
Version 0.4.0 - May 24 2017
- Added
SpeechSynthesizer, an implementation of the
SpeechRecognizercapability agent.
- Implemented a reference
MediaPlayerbased on GStreamer for audio playback.
- Added
MediaPlayerInterfacethat allows you to implement your own media player.
- Updated
ACLto support asynchronous receipt of audio attachments from AVS.
- Bug Fixes:
- Some intermittent unit test failures were fixed.
- Known Issues:
ACL's asynchronous receipt of audio attachments may manage resources poorly in scenarios where attachments are received but not consumed.
- When an
AttachmentReaderdoes not deliver data for prolonged periods
MediaPlayermay not resume playing the delayed audio.
Version 0.3.0 - May 17 2017
- Added the
CapabilityAgentbase class that is used to build capability agent implementations.
- Added
ContextManager, a component that allows multiple capability agents to store and access state. These Events include
Context, which is used to communicate the state of each capability agent to AVS in the following Events:
- Added
SharedDataStream(SDS) to asynchronously communicate data between a local reader and writer.
- Added
AudioInputProcessor(AIP), an implementation of a
SpeechRecognizercapability agent.
- Added WakeWord Detector (WWD), which recognizes keywords in audio streams. [0.3.0] implements a wrapper for KITT.ai.
- Added a new implementation of
AttachmentManagerand associated classes for use with SDS.
- Updated
ACLto support asynchronously sending audio to AVS.
Version 0.2.1 - May 03 2017
- Replaced the configuration file
AuthDelegate.configwith
AlexaClientSDKConfig.json.
- Added the ability to specify a
CURLOPT_CAPATHvalue to be used when libcurl is used by ACL and AuthDelegate. See See Appendix C in the README for details.
- Changes to ADSL interfaces:
- The [0.2.0] interface for registering directive handlers (
DirectiveSequencer::setDirectiveHandlers()) was problematic because it canceled the ongoing processing of directives and dropped further directives until it completed. The revised API makes the operation immediate without canceling or dropping any handling. However, it does create the possibility that
DirectiveHandlerInterfacemethods
preHandleDirective()and
handleDirective()may be called on different handlers for the same directive.
DirectiveSequencerInterface::setDirectiveHandlers()was replaced by
addDirectiveHandlers()and
removeDirectiveHandlers().
DirectiveHandlerInterface::shutdown()was replaced with
onDeregistered().
DirectiveHandlerInterface::preHandleDirective()now takes a
std::unique_ptrinstead of a
std::shared_ptrto
DirectiveHandlerResultInterface.
DirectiveHandlerInterface::handleDirective()now returns a bool indicating if the handler recognizes the
messageId.
- Bug fixes:
- ACL and AuthDelegate now require TLSv1.2.
onDirective()now sends
ExceptionEncounteredfor unhandled directives.
DirectiveSequencer::shutdown()no longer sends
ExceptionEncountered()for queued directives.
Version 0.2.0 - March 27 2017 (patch)
- Added memory profiling for ACL and ADSL. See Appendix A in the README.
- Added a command to build the API documentation.
Version 0.2.0 - March 99 2017
- Added
Alexa Directive Sequencer Library(ADSL) and
Alexa Focus Manager Library(AMFL).
- CMake build types and options have been updated.
- Documentation for libcurl optimization included.
Version 0.1.0 - February 10 2017
- Initial release of the
Alexa Communications Library(ACL), a component which manages network connectivity with AVS, and
AuthDelegate, a component which handles user authorization with AVS. | https://developer.amazon.com/fr-FR/docs/alexa/avs-device-sdk/release-notes.html | CC-MAIN-2020-16 | en | refinedweb |
I am using matplotlib for a graphing application. I am trying to create a graph which has strings as the X values. However, the using
plot function expects a numeric value for X.
How can I use string X values?
You should try xticks
import pylab names = ['anne','barbara','cathy'] counts = [3230,2002,5456] pylab.figure(1) x = range(3) pylab.xticks(x, names) pylab.plot(x,counts,"g") pylab.show() | https://pythonpedia.com/en/knowledge-base/2497449/plot-string-values-in-matplotlib | CC-MAIN-2020-16 | en | refinedweb |
Struts uses the RequestProcessor class to perform the processing for all requests received by the ActionServlet. The RequestProcessor class takes each request and breaks its processing down into several small tasks. Each task is carried out by a separate method. This approach lets you customize the way each individual part of the request is processed. Each of these methods is aptly named with a prefix of process; for example, processMultipart( ) and processPath( ).
Table 5-1 lists and describes briefly each of the process*( ) methods from the RequestProcessor class (in the order they are executed).
By having each phase of the request processing cycle take place in a separate method, request processing can easily be customized. Simply create a custom request processor that extends the base RequestProcessor class and override the methods that need to be customized. For example, a custom request processor can apply a logged-in security check before any action is executed. The RequestProcessor class provides the processPreprocess( ) method hook expressly for this. The processPreprocess( ) method is called before actions are executed. The following example shows how to do this:
package com.jamesholmes.example; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import org.apache.struts.action.RequestProcessor; public class LoggedInRequestProcessor extends RequestProcessor { protected boolean processPreprocess( HttpServletRequest request, HttpServletResponse response) { // Check if user is logged in. // If so return true to continue processing, // otherwise return false to not continue processing. return (true); } }
To use a custom request processor, you have to configure Struts to use it in the Struts configuration file:
<controller processor/> | https://flylib.com/books/en/1.248.1.24/1/ | CC-MAIN-2020-16 | en | refinedweb |
Working with OpenShift Online
I work at Red Hat, helping customers evaluate Kubernetes and OpenShift through proofs-of-concept and workshops. Independently, I am also part of a startup providing mobile game analysis services for game developers. Those two activities have made me a user of Red Hat’s OpenShift Online platform since its initial release, based on OpenShift v2. More recently, OpenShift Online has moved to the Kubernetes-based OpenShift v3 platform, and so we’ve been planning the migration of our apps onto the new platform, and into orchestrated containers.
When building OpenShift Container Platform environments for customers, I would generally have full control over the cluster and don’t get much opportunity to experience the platform as an “ordinary” user. In adopting the new Red Hat OpenShift Online service, I would be acting as a platform consumer: no special privileges, administration rights, or access to the underlying hosts, and subject to controls and restrictions on the resources available for running my apps. I hoped this shift in experience would not get in the way of creating a new home for our service.
I was really happy with OpenShift Online v2 and everything was working. The development and deployment experience was really good, and we didn’t have any problems. Usually, you don’t fix something that isn’t broken, but because OpenShift Online v2 was nearing end-of-life, migrating to v3 seemed the most sensible option. Of course, this type of activity isn’t limited to developers migrating from OpenShift Online v2 but is something that many developers may be considering as they look towards containerization and running on Kubernetes and OpenShift.
In moving our services to OpenShift Online v3, we had some choices to make about how we were going to deploy our apps. This post will cover the approaches we adopted and decisions we made along the way. First, I need to tell you a little about our application architecture, then we’ll look at how we tackled the migration challenge, including:
- Application configuration in the container
- SSL certificates and domain names
- Application components and deployments
- Enabling our development team
Service and Architecture
Our application and service stack is pretty simple. The frontend is written in Javascript / Angular, with a Java backend service connecting to a MongoDB datastore. We have been pleased with our NoSQL datastore choice, as it turned out our data model has been pretty dynamic; the choice has allowed us to do pretty much whatever we like with data and data structures. Wildfly hosts the Java backends, providing REST APIs used by the frontend. There are a number of backend services divided into logical components, following some microservice architecture principles. Finally, keycloak is used to provide an authentication and authorization layer around the service. Amazon Web Services like S3 and CloudFront are used to store and distribute content.
Diagram 1: Overall layout of entire servicestack
Moving applications to OpenShift
Despite OpenShift 2 being quite different from OpenShift 3 from a technology perspective, we found many operational similarities. In particular, as PaaS users, we were constrained to operate within logical sandboxes. Thankfully this helped prepare us for the migration to OpenShift 3.
Our existing build methods migrated easily to OpenShift 3. OpenShift includes a container build technology that simplifies the job of creating containers. While continuing to support traditional Dockerfile builds, OpenShift’s Source to Image (S2I) feature lets a developer create new container images by selecting an appropriate base ‘builder’ image, and matching it with a pointer to an application’s source code repository. The result is a container image ready for scheduling on the OpenShift cluster. The S2i approach not only reduces friction for developers but also helps organizations standardize their build pipelines and container layouts.
The Java backends are built using Maven. A set of ConfigMaps and Secrets are available to modify and tune app runtimes without code changes, and without baking, configuration files into the deployment. Maven is the supported build technology for the Wildfly S2I builder, so it was a simple task of creating new OpenShift build configurations to match the Wildfly S2I builder to the backends’ source code repositories. By default the Maven build runs with an ‘openshift’ profile, allowing us to add build configuration to, for example, rename the resulting WAR file to ROOT.war, so our application doesn’t have an ugly context path. One additional benefit was that this migration made it easy to upgrade the Wildfly version, from 9 to 10.
For application deployment, the environment variables were replaced by using a combination of ConfigMaps and Secrets, providing greater control, and securing sensitive information, such as passwords, more effectively.
Our existing frontend developer workflow created the static HTML, CSS, and JS locally using gulp, then committed the assets to a repo. The build configurations for these frontend components were straightforward, pairing any readily-available static web server image with our site assets from the git repo. We could automate the build process further by adopting the NodeJS S2I image, and creating a custom assemble script to match the manual steps taken by the developers, but our initial focus was simply to migrate the existing technology, not necessarily to update our approach.
My initial thought was to create a new nginx container to serve the static content and use the Dockerfile build strategy, but this is actually disabled on the multi-tenant OpenShift Online, for security reasons. We could compile a custom builder image and push it to Docker Hub and use it from there, but this would mean maintaining a base image separately, and it would add an overhead to our build and deployment pipelines. The compromise was to use the PHP builder image, which is based on Apache httpd, and it worked very well for this purpose.
Of all the service components, Keycloak proved to be the biggest challenge. We ran an older version of Keycloak on OpenShift Online v2, migrating data from this to the most recent version turned out to be rather complicated. In addition, we had chosen to use MongoDB for the Keycloak datastore, as we felt it would be better than PostgreSQL, but this complicated the migration even further. Despite this, the Keycloak documentation () proved to be excellent in supporting the migration, and soon enough we were upgraded to the latest Keycloak.
We needed to create our own container image for Keycloak, and publish it to Docker Hub, before running it on OpenShift. We relied heavily on a blog post detailing the process of running Keycloak on OpenShift by Shoubhik Bose and the accompanying Github repo to get started.
Secrets and ConfigMaps
Secrets and configs are one of the most important parts of your environment and deployments. Ideally, containers should be immutable throughout the development life cycle; that is, the container image does not change. What we run in production is the same image we ran in development and testing. This introduces a challenge when we consider how to accommodate this principle while allowing for variations in the run-time context of each lifecycle stage. As a prime example, how do we ensure that a test instance does not connect to a production database?
The Kubernetes community has experimented with various solutions to this problem, including environment variables, using Spring active profile settings, or building a configuration system using components such as etcd, for example.
Kubernetes simplifies this task by providing two different resources that can be used in similar ways: Secrets and ConfigMaps. Both of these resources store data, strings or binaries, in key-value pairs. The data can be used in different ways, either to configure properties within the Kubernetes Pod specification, for example, populating environment variables, or being rendered as files within the running container at deployment time. When used to create files, the files are mounted within temporary file-systems created within the running container namespace, which means that these files are in-memory and never at rest on the host Kubernetes Node. The key difference between ConfigMaps and Secrets is that a Secret’s values are encoded, and not human-readable, until they are used in a container deployment. The only place a Secret’s data is at rest is within the Kubernetes datastore (etcd), which itself can be operated on an encrypted file system.
This means developers can build the container once in development, and then deploy the same container image into subsequent lifecycle stages, using Secrets and ConfigMaps to provide the runtime context for each stage. A common pattern for this approach is to map Kubernetes namespaces(referred to as Projects in OpenShift) to lifecycle stages. For example:
myapp-dev,
myapp-test,
myapp-prod. We use staging and production namespaces, and we can provision QA namespaces on demand.
OpenShift provides a mechanism to easily manage container image pull-specs using the ImageStream resource. The ImageStream provides an abstraction which we can manage references to particular container image versions, without losing the unique reference to the image; the running pod references the full sha256 reference for the image. This is very important when you need to show or track the provenance of an image, particularly important when operating applications in production. For more information on container images in OpenShift Online see.
Secrets
As discussed, we do not want to hardcode our database credentials into our code or container image. Using Secrets is a more appropriate method, with as either a datastore configuration or simply username and passwords pairs assigned to environment variables at runtime. I would recommend using Secrets sparingly because you would not want to include these resources in your source code repository. (Unlike your other resources which should be managed like source code). To get started, keep it simple in the beginning, managing secrets through OpenShift tools, before developing more sophisticated methods later, such as build automation, and secrets management, perhaps when your workflow requires Secrets to change frequently, and manual Secret management has become a bottleneck.
You should use secrets for any configuration data that you need to protect, such as credentials, API keys, certificates and SSL keys, for example. For more information on Secrets see
ConfigMaps
For non-sensitive configuration information, ConfigMaps are more appropriate, and easier to manage. Like Secrets, ConfigMaps consist of key-value pairs, files can be included by simply associating a key with a file’s contents. Applications must redeploy in order to make use of ConfigMap changes, but this is similar to the most externalized configuration; you need to restart the process. I would recommend that you manage the source of your configuration files via your preferred source control, ensuring that changes are properly tracked, and making it straightforward to incorporate into your preferred CI/CD pipelines. Externalising configuration as files, or environment variables, makes configuration management much easier than if the configuration is managed within a database, for example. For more information on ConfigMaps see
SSL and domain names
OpenShift Online provides you with a default domain name for your applications, including a valid SSL certificate. However, for anything other than testing, you would probably prefer to use your own subdomain. Using a custom subdomain with OpenShift is as easy as providing a CNAME record to resolve to the default OpenShift Online subdomain, but then you must also source your own SSL certificates for this domain.
Making an OpenShift application available to external clients is done by exposing the application’s associated service. The action creates an OpenShift-specific Route resource, similar in concept to the Kubernetes Ingress resource, which manages OpenShift’s reverse proxy configuration in its routing layer. Requests for named hosts are matched against associated applications and routed accordingly.
Services such as Let’s Encrypt make obtaining valid SSL certificates free and straightforward, so there are few reasons for avoiding encryption in your applications; above all, it protects your customer’s privacy and data. In order to terminate SSL we must obtain a new valid SSL certificate for the custom hostname, or domain if obtaining a wildcard SSL certificate, and include this in our configuration. OpenShift Routes can either provide TLS termination at the routing layer, with non-SSL or re-encryption back to your application or alternatively, the routing layer can pass through for direct termination at the application.
If terminating TLS at the routing layer, we need to include our certificate and key within the Route resource object, alternatively, we can encode the certificate and key into a secret, and mount within our application Pod to be used by our application, for example in
/etc/pki/tls/private. For more information on securing routes see the OpenShift Secured Routes documentation.
It is important to remember that certificates expire, and you will need to update them. In particular, Let’s Encrypt SSL certificates are valid for only 90 days, so you have to compose a strategy for how you will manage certification expirations and updates. Currently, we do it manually with the oc client tool when certificates are renewed, however, we plan to automate this in future using OpenShift utilities and automation tooling, for example, Ansible. If you would like to know more about using Let’s Encrypt certificates in Openshift, you should read this blog post.
What to expose
As discussed, the normal operation of applications within OpenShift, and indeed any Kubernetes platform, is that their network services are not externally accessible. Applications can connect with each other within the cluster by connecting using the Service resource. Kubernetes Services provide a stable endpoint for communication to application Pods and are capable of supporting tcp and udp ports with port mapping, for example, Service port 80/tcp mapping to Pod port 8080/tcp.
It is important to note that the communication can, and is often, limited: by default, OpenShift Online limits communication to only applications running within the same Kubernetes namespace, whereas OpenShift Container Platform supports Network Policy which allows developers to have fine-grained control over the cluster-based communication to their services. Limiting communication in this way is vital to ensure secure operation across not only your software development lifecycle but when operating in a multitenant environment.
It is through the Pod and Service Kubernetes primitives that we are able to build the core network architecture of our applications; wiring our applications together. All of the services remain private, such as our backend and database services; it is only the frontend and API that we choose to expose and make available to external clients. Occasionally want to temporarily access private services remotely, for example to access the database, and we can do this using the OpenShift command line utility (oc), which provides a port-forward mechanism to map a local port on your client with a port on a specific Pod, the communication is securely tunnelled through the OpenShift API.
For more information about exposing services with routes see the OpenShift document about Routes. For more information about services see the Pods and Services documentation.
What to run on OCP and where
Our OpenShift v2 application deployment included the hosting the database and its batch jobs. Despite this approach working well, we reviewed the design as part of the migration to OpenShift v3.
Perhaps the biggest change was moving the database from self-hosted to a 3rd party. The main reason for this change was simple: we wanted to minimize our management burden.
Second, we decided to move the batch jobs outside of OpenShift. Previously the batch jobs were included with the Wildfly application and were triggered as cron jobs, and this became tricky to manage as we scaled up our application servers. In the latest design, we outsource batch jobs to an external virtual machine and schedule them with
cron(8). This is an area we plan to revisit and improve in the future.
Along with these changes, we plan to use Service Brokers to connect external services in the future. While OpenShift Container Platform supports the Open Service Broker API, the API is not yet available in OpenShift Online. Since we are using Amazon S3, and external, manually-configured MongoDB services, the automated binding and provisioning offered by the service brokers is attractive, and something we will consider when it becomes available.
How to get started with your team
In our example, we already had solid experience in the team with OpenShift and Kubernetes. Obviously, that helped during the migration process. But once the initial build and deployment resources were in place, the rest of the team didn’t really have to know much about OpenShift, Kubernetes, or indeed containers. They could focus on their application and its users. OpenShift reduces friction for developers. With builds and deployments triggered automatically from code commits; the platform handles the build and deployment pipeline for you, and productivity rises quickly. Developers can even get a remote shell to containers and check logs without any container related commands, right in the web browser.
Summary
It was a great feeling when I saw all those blue circles in the OpenShift Web Console once everything was running. I actually had to take a moment, as I realized that after the many times I’d helped customers get to this point, it was the first time for me and my team.
My favorite view in Openshift Web Console
There are so many features out-of-the-box in OpenShift: seamless deployments, Secrets, ConfigMaps, application auto-scaling and readiness and liveness probes, for example. Features that just make running services painless, and that maps the Kubernetes architecture at OpenShift’s core into the terrain application developers care about. For example, if your Java app has a memory leak, the offending container will be simply killed and replaced, without a break in service when your app is scaled to at least 2 Pods.( Of course, we’d recommend you fix your application code!)
KISS (Keep It Simple Stupid) also works with OpenShift and containers. You have all the bells and whistles to do magic with containers like A/B or Blue-Green testing. You can also create build pipelines to test your code, make coffee, and notify you on Slack when your cup is ready (HTTP 418). But you don’t have to do any of that to make your deployments painless. Just create your build configuration with the provided tools, and start from there. Keep in mind that even though CI/CD and DevOps are cool and a really hot topic, a nice build pipeline doesn’t make your application any smarter.
More information:
Managing images:
Secrets:
Keycloak documentation:
Blog post about running Keycloak on Openshift
Keycloak deployment project in Github
Categories | https://www.openshift.com/blog/moving-production-workloads-to-openshift-online | CC-MAIN-2020-16 | en | refinedweb |
Part of an application's agility comes from its ability to work properly under any circumstance. In today's mobile age, someone could be using an application on a laptop in a disconnected environment, bring the laptop to a coffee shop and connect wirelessly, and then bring the laptop home and connect to a wired LAN. A truly agile application needs to be able to work when the network is connected and when it is disconnected.
The first part of working in both connected and disconnected modes is making sure that your application makes a local cache of offline activity that can then be uploaded to the web service after the connection is restored. The format and storage medium of this offline cache will be up to you and will largely depend on the specifics of your application, so that won't be covered in detail here.
In order for your application to begin uploading data to the web service when a connection is restored, your application needs to know when a connection has been restored.
The new System.Net.NetworkInformation namespace provides developers with a handy utility class called NetworkChange. This class hosts events that are used to notify applications when changes in network status occur.
The first event is NetworkAvailabilityChanged. This event reports when there is a change in network availability. The network is considered "Available" when there is at least one network interface device that is considered "Up." If all network interfaces are down or otherwise having trouble, the network is not available.
This might seem like the ideal way to detect whether your user has network access to the web service. Unfortunately, this isn't entirely true. Here's the rub: If the user has multiple network interfaces, such as Wireless (Wi-Fi) access, a LAN card, a 1394 IEEE card (FireWire), or a VPN (Virtual Private Network) interface device, this can complicate things. What happens is that one of these devices will lose access, but other devices may remain "up," yet none are connected to a "real" network. In this case, the NetworkAvailabilityChanged event arguments will still report that the network is available, yet the client application cannot communicate with the web service.
To provide true connection detection, your code also needs to respond to the NetworkAddressChanged event, which is called whenever a network address is changed. Network addresses change when a device acquires a DHCP address (connect) and when a device disconnects or becomes disabled.
Listing 38.1 shows the source code for a form that sends messages to a list box when addresses change and when network availability changes. With this application running, experiment by turning off Wi-Fi hardware switches, disabling interfaces, enabling interfaces, and plugging and unplugging physical LAN cables. Make note of when the address change event is called compared to when the network availability event is called. For maximum effect, try this on a machine with a wireless card, a LAN card, and another network device like a VPN or a FireWire device.
using System; using System.Net; using System.Net.NetworkInformation; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Text; using System.Windows.Forms; namespace Agile1 { public partial class Form1 : Form { public Form1() { InitializeComponent(); System.Net.NetworkInformation.NetworkChange.NetworkAddressChanged += new NetworkAddressChangedEventHandler(NetworkChange_NetworkAddressChanged); System.Net.NetworkInformation.NetworkChange.NetworkAvailabilityChanged += new NetworkAvailabilityChangedEventHandler( NetworkChange_NetworkAvailabilityChanged); } delegate void RespondDelegate(); void NetworkChange_NetworkAvailabilityChanged(object sender, System.Net.NetworkInformation.NetworkAvailabilityEventArgs e) { RespondDelegate d = delegate() { lbLog.Items.Add( string.Format("Network availability changed. Net {0} available.", e.IsAvailable ? "is" : "is not")); }; this.Invoke(d); } void NetworkChange_NetworkAddressChanged(object sender, EventArgs e) { RespondDelegate d = delegate() { bool hasOneGateway = false; NetworkInterface[] nis = NetworkInterface.GetAllNetworkInterfaces(); foreach (NetworkInterface ni in nis) { IPInterfaceProperties ipProps = ni.GetIPProperties(); if (ipProps.GatewayAddresses.Count > 0) hasOneGateway = true; } if (hasOneGateway) lbLog.Items.Add("Network address changed (probably still online)"); else lbLog.Items.Add("Network address changed (probably NOT online)"); }; this.Invoke(d); } } }
The code in the preceding listing uses the presence of at least one gateway address as a litmus test to guess whether a real network connection is present after the address change. It also makes handy use of anonymous methods to use the Invoke() method to forward the GUI-modifying code to the main UI thread. Figure 38.3 shows what happened when this program ran. The author had turned off his wireless card and then turned it back on, and then disabled every adapter on his machine, which finally produced a network availability event. Then he turned them all back on, which completed the test with a valid gateway. | https://flylib.com/books/en/1.237.1.245/1/ | CC-MAIN-2020-16 | en | refinedweb |
Last night I checked in some stuff to hopefully improve the virtual domain support for Postfix. It'd be great if some of you could give it a looksee. FWIW, I think I've solved the dom1, dom2 namespace collision problem, by as suggested, omitting both dom1 and dom2 from $mydestination. However, I still have host.dom1.ain in $mydestination (i.e. $myhostname) because otherwise, there's no way to receive any mail. AFAICT, there's no way to avoid having all the dom1.ain and dom2.ain names show up in host.dom1.ain. Oh well, that doesn't bother me. -Barry | https://mail.python.org/pipermail/mailman-developers/2001-November/010036.html | CC-MAIN-2020-16 | en | refinedweb |
fnmatch - match filename or pathname using shell globbing
rules
#include <fnmatch.h>
int
fnmatch(const char *pattern, const char *string, int flags);.
The fnmatch() function returns zero if string matches the
pattern specified
by pattern, otherwise, it returns the value FNM_NOMATCH.
sh(1), glob(3), regex(3)
The fnmatch() function conforms to IEEE Std 1003.2 | http://nixdoc.net/man-pages/OpenBSD/man3/fnmatch.3.html | CC-MAIN-2020-16 | en | refinedweb |
Highlighting glyphs in the font window
I'm looking for a way to highlight or mark some glyphs in the font window via a script, but without changing the save state of the glyphs. At the moment I'm changing the
RGlyph.markattribute, but that makes the affected glyph being "unsaved".
An overlay graphic (like the small "L" for layers) would also be a solution, but I have no idea how to add something like that.
Perhaps I should rather generate smart sets via script to group the glyphs I want ...
Any ideas or suggestions?
The mark color is saved in the lib and is actually changing the font data, so it makes sense that the save state of the document will update.
There are some options:
set the change count of the document to zero after changing the font data, just be careful with data loss, of unsaved fonts
from AppKit import NSChangeCleared font = CurrentFont() for glyphName in font.selection + font.templateSelection: glyph = font[glyphName] glyph.mark = (1, 0, 0, 1) document = font.document() if document: # set the document change count to zero document.updateChangeCount_(NSChangeCleared)
Adding an L or N (for
glyph.note) is a bit more difficult cause you have to overwrite the defcon representation factory generating the glyph cell.
Adding smart sets is probably the easiest.
But I guess your proposed solutions are not the best options for the problem...
however, good luck! | https://forum.robofont.com/topic/232/highlighting-glyphs-in-the-font-window | CC-MAIN-2020-16 | en | refinedweb |
Can I open the Output Window from a script? What about from a remote script? (I'm so close!)
- StephenNixon last edited by gferreira
Can I open the Output Window from a script? It seems to work when I do this:
from mojo.UI import OutputWindow open(OutputWindow().show())
However, the script window also returns the following error:
Traceback (most recent call last): File "<untitled>", line 3, in <module> TypeError: expected str, bytes or os.PathLike object, not NoneType
This is particularly a problem when I try to open the Output Window from a remote script. I'm trying to use one line to run a Python script in RF, plus open the Output Window. However, when I do so, the above error ends up right in the Output.
robofont -p example-script.py -c "from mojo.UI import OutputWindow" -c "open(OutputWindow().show())"
Am I opening the window in an incorrect way? How might I do so in a better way?
why do you use
open(...)? this is the python way to open file as objects to read and write.
just use
OutputWindow().show()
- StephenNixon last edited by
@ThunderNixon said in Can I open the Output Window from a script? What about from a remote script? (I'm so close!):
from mojo.UI import OutputWindow
I'm nearly positive that I tried that, and it didn't work, so I threw in the
The good news is, it's working now, so either I hit a temporary bug, or I formulated a false memory in my head. If
OutputWindow().show()fails in the future, I'll loop back on this.
Thank you! | https://forum.robofont.com/topic/628/can-i-open-the-output-window-from-a-script-what-about-from-a-remote-script-i-m-so-close | CC-MAIN-2020-16 | en | refinedweb |
Joel Shprentz
shprentz@bdm.com
BDM Federal, Inc.
1501 BDM Way, McLean, VA 22102
A group of Python classes model HTML formatted documents and their contents. Mimicking the hierarchical structure of HTML tags, content objects form a treelike representation of an HTML document. A tag factory object creates each node in the tree, typically a tagged element, which may include attributes and initial contents. Additional contents--text and other tags--can be added later. The HtmlDocument class provides an overall structure for HTML documents. Application-specific subclasses override default methods that create the start, body, and end of a document. Often a complete HTML document can be created by instantiating an HtmlDocument subclass with a few parameterized values, such as document title. When a document is completely built, it can be written to any output stream, such as a file, pipe, or socket. Individual tags can also be written to output streams. The HTML classes provide more functionality and greater reusability than the simple Python output statements often used.
A common problem at World Wide Web sites is the automatic creation of HTML documents. Whether the documents are created on the fly by a CGI script or in advance by a batch production system, the most common approach is to execute a sequence of print statements that create the HTML document one line at a time. This strategy works with any programming language. For example, here is a Unix shell script to generate a simple HTML document.
#!/bin/sh echo '<HTML><HEAD>' echo '<TITLE>Hello World</TITLE></HEAD>' echo '<BODY><H1>Hello World</H1>' echo '<P>Program with' echo '<A HREF=""' echo '>Python</A> today</BODY></HTML>'
In actual documents, the body is created based on some calculation, model, or information retrieval, often in response to user input. As the complexity of the documents grows, programmers migrate from simple scripting languages to more complex interpreted languages, like TCL and Perl, and then to compiled languages, like C. Object oriented languages are also available: interpreted languages like Python and compiled languages like C++.
The interpreted Python language[1] combines the benefits of object oriented development with the rapid application development environment of interpreted languages. The Python class library[2] contains a rich collection of tools including an HTML parser and a CGI interface, but it does not include any classes for constructing HTML documents. Thus, despite all of Pythons capabilities, the Python program to generate a simple HTML document is almost identical to the shell script:
print '<HTML><HEAD>' print '<TITLE>Hello World</TITLE></HEAD>' print '<BODY><H1>Hello World</H1>' print '<P>Program with' print '<A HREF=""' print '>Python</A> today</BODY></HTML>'
There are several potential problems with this approach:
The tagged text elements of HTML[3] documents are organized in a treelike structure, with each tag potentially containing text intermingled with one or more other tags. The tags in the example document have this tree structure:
This tree structured hierarchy is similar to the composite pattern[4] from object oriented design. Following this pattern, an abstract class, HtmlContents, declares common operations, most notably writeHtml, which writes the HTML contents to a file. Subclasses of HtmlContents implement these operations for specific types of HTML document contents: tags, text, and space. Tag objects, called HtmlElement, implement additional methods to modify contents and attributes. Preformatted elements, like the <PRE> tag, are represented by a subclass of HtmlElement. The object structure is shown below in Coad notation[5]:
There are many tags in the HTML 2.0 specification, so another class, HtmlTagFactory, defines methods to create a valid HtmlElement object for each possible tag. As HTML evolves, the HtmlTagFactory class can be extended and revised to accommodate new tags. There is only one instance of HtmlTagFactory. It is named Tag.
HtmlElement objects can be created with or without initial contents. The initial contents can be text, HtmlElement objects, or lists. List elements can also be text, HtmlElement objects, or lists. For example,
document = Tag.HTML () docTitle = Tag.TITLE ("Sample document") docHead = Tag.HEAD (docTitle) docBody = Tag.BODY ([Tag.P ("Some text), Tag.P ("More text")])
The same text, HtmlElement objects, and lists can be appended or prepended to any HtmlElement objects. For example, to complete the sample document,
document.append ([docHead, docBody])
This capability is often used within a loop to construct a list. For example, this code constructs an ordered list of words:
words = ["red", "yellow", "green", "blue", "orange"] wordList = Tag.OL () for word in words: wordList.append (Tag.LI (word))
Many HTML tags have attributes. These can be specified when the HtmlElement object is created or set later. An anchor tag with an HREF attribute would look like this:
Tag.A ("click here", "")
Forms and form fields are other types of HTML tags. As HTML is defined, there are many variations of INPUT tag (text, check boxes, etc.). Each variation of the INPUT tag is implemented by a different subclass of HtmlElement. The tag factory understands all of these forms. For example, to create a text field to hold a 5-digit zipcode with no default value,
Tag.inputText (None, "zipcode", 5, 5)
Form field objects have a unique capability: When the value dictionary returned by Python's CGI class is passed to a tag tree, each form field in the tree will lookup its value in that dictionary and use the value when generating the HTML document. This makes it easy to preset a form based on a user's responses.
All subclasses of HtmlContents respond to the writeHtml method by writing their HTML representation to a specified file. The sample document created above could be written to standard output as follows:
from sys import stdout document.writeHtml (stdout)
The various tag objects support nonsequential construction of HTML documents. This is essential when related information is distributed throughout a document. For example, a table of contents at the beginning of the document should list only the sections actually included in that document. The "print a statement at a time" strategy cannot easily create such documents. Consider this pseudocode for a document:
print heading print table of contents if condition 1 then print section 1 if condition 2 then print section 2 print footing
When the table of contents is being printed, this code has not yet evaluated the conditions nor decided which sections to include. The conditions are often of the form, "did the query return any results," so it is inconvenient to evaluate them except when printing the section based on the query results. A possible solution is to first print the sections to a temporary file and then merge that file into the final file.
Tag objects offer a better solution. The tag hierarchy need not be created sequentially, but can be augmented as needed. The sectioned document can be created with this pseudocode:
body = Tag.BODY () body.append (heading) toc = table of contents body.append (toc) if condition 1 then toc.append ("Section 1") body.append (Section 1) if condition 2 then toc.append ("Section 2") body.append (Section 2) body.append (footing)
Each section adds its own entry to the table of contents, which is already correctly positioned in the document. Similar techniques can be used to place summary information near the front of a document (e. g., average values or counts of search results).
This technique for document construction is possible because today's computer memories are guaranteed to be large enough to hold the tree representation of an HTML document. It is more effective to manipulate the entire document in memory then to repeatedly process and output small amounts of information.
The HTML output includes all tags and attributes, replaces special characters (e.g., &) with their HTML representation, suppresses redundant spaces, breaks text into lines of about 70 characters, and adds line breaks after major tags (e.g., H1 and P). Some of these steps require tracking the capacity and spacing status of each output line, a capability not present in Python's file class. Although the tracking features could be implemented by subclassing the file class, this would preclude using sockets and other file-like objects.
The implementation chosen relies on HtmlFile, a wrapper class for Python files. HtmlFile tracks spacing, character count, newlines, and preformatted text for a given file. The two principal public methods are write (for text) and writeAsIs (for tags and attribute values). HtmlFile makes the implementation of writeHtml trivial in the HtmlContents class:
def writeHtml (self, outfile) self.writeToHtml (HtmlFile (outfile))
Each subclass of HtmlContents must implement writeToHtml. The HtmlText objects convert special characters before writing the text:
def writeToHtml (self, outfile) pieces = splitfields (self.text, "&") outText = joinfields (pieces, "&") . . . outfile.write (outText)
The HtmlElement objects must write a starting tag, which might include attributes; the contents, which are found on a list containing text and other elements; an optional ending tag; and an optional line break. Here is the definition:
def writeToHtml (self, outfile) self.writeStartTag (outfile) self.writeContents (outfile) if self.needEndTag: outfile.write ("</%s>" % self.tag) if self.tagEndsLine: outfile.writeNewline ()
A typical web application will contain several types of pages, each produced by a different program. Examples include query forms, result pages, and indices. Graphic designers[6] recommend that these pages share common design elements, such as logos, banners, feedback links, modification date, and navigation aids. These common elements typically appear at the top and bottom of each page.
An abstract class, HtmlDocument, provides a framework for implementing documents with common design elements. HtmlDocument is similar to HtmlElement in that it responds to append and prepend methods to add elements to the document body. It also responds to writeHtml to write to a file and printHtml to write to standard output. However, when implementing writeHtml, HtmlDocument will call three methods to create common design elements: head, startBody, and endBody.
A web application will implement head, startBody, and endBody in a subclass of HtmlDocument.
class SampleHtmlDocument (HtmlDocument):
Consider a page design for a typical application. At the top of each page, there is a banner logo, then a title and subtitle. The SampleHtmlDocument startBody method could look like this:
def startBody (self): return [self.banner (), self.pageTitles ()]
The two methods used to construct the list must also be defined. This method builds the page titles.
def pageTitles (self): titles = [] if self.title: titles.append (Tag.H1 (self.title)) if self.subtitle: titles.append (Tag.H2 (self.subtitle)) titles.append (Tag.HR ()) return titles
Once the basic page design is established for an application, other subclasses can add additional design elements required by particular page types, such as query forms.
There are some advantages to this approach:
The code and design described have been used on several recent internal projects. Earlier projects were implemented with the sequential print method described in the introduction.
The first improvement noticed was that the HTML produced is correct. There were no missing tags or improperly nested elements. These errors are now unlikely to occur because the document structure is reflected in the Python code and because the HTML output is generated by well tested code.
Sharing a common page design among several applications has been a great benefit. With the print statement method, a design change required modifications to each program that produced a different type of page. With the object-oriented approach, only one file needs to be changed.
Programmers no longer work at the raw HTML level. Instead, they treat HTML document bodies as trees of tags. These HTML element, tag, and document classes match the level of abstraction provided by many other classes in the Python library. | http://www.python.org/workshops/1995-12/papers/shprentz.html | crawl-002 | en | refinedweb |
Introduction
The Import SIG exists to provide a forum to discuss the next generation of Python's import facilities.
The long-term goal of the SIG is to eform the entire import architecture of Python. This affects Python start-up, the semantics of sys.path, and the C API to importing.
A short-term goal is to provide a "new architecture import hooks" module for the standard library. This would provide developers with a way of learning the new architecture.
Background
The SIG was born as the result of a discussion on developers day at IPC8. The topic itself is much older, of course.
Pre-History
In the early days of the 21st century, archeologists discovered that originally, Python had no packages (not even jars to keep pickles in). Modules were left on the path where anyone could trip over them. When a particular module was needed, a page was sent out to "find_module". When he returned, saying he had found it, he was then clobbered over the head and sent back out to get it.
Meanwhile, modules, without any sense of propriety, were doing their thing on the path, and in no time at all, Pythondom was littered with the disgusting little things.
Then, one dark and stormy Knight who said "ni" got tired of tripping over them, falling into the ditch and getting his armor rusty. He bravely started piling modules on top of other modules. Protocol was not adjusted however, so the pages now had to make 4 trips; first to find and get the top module, then to find and get the module underneath.
Other Knights, tired of pages returning with the wrong module, began using specially trained pages (called "hooks") who had some tricks for finding exactly the right module. Unfortunately, hooks were a bloodthirsty lot, and if two of them met on the path, usually only one survived.
Late 20th Century
By Python 1.5, both approaches had been blessed. Python had packages built into the language, and a "preferred" method for doing import hooks (ihooks.py).
Unfortunately, the architecture has grown rather complex. Hooks take over at the level of the builtin
__import__ (which is what the keyword
import calls, as well as the C level
PyImport_ImportModule). This is before the package mechanics are encountered. So any hook that deals with packages needs to emulate the package machinery (and ihooks.py provides an implementation of this). See the call graph diagram for an overview.
Using ihooks requires an intimate knowledge of the import mechanism. You change or add functionality by overriding the way ihook's pure Python implementation of the import process sees the "filesystem", or performs the low-level import tasks, (you can, of course, override at a higher level, but you'll have to implement more of the basic mechanisms). See the class diagram of ihooks.
The Problem
The import mechanism is coming under pressure from a number of sources. Packages have moved from being a novelty to a necessity. Package authors are creating complex multi-level structures with inter-dependencies between sub-packages or packages.
Others are doing imports from things other than the filesystem, (archives, databases, possibly even URLs).
People do strange import hacks to get around versioning problems, or platform dependencies. Most of these do not use ihooks, probably because it takes considerable effort to learn how to use ihooks effectively. Many end up with a wrapper module that finds the right code and stuffs it directly into the required namespaces, bypassing the import mechanism altogether.
This creates a problem for
freeze and installers in that tracking dependencies is nearly impossible.
There are other problems. It takes a whole lot of system calls to do a (normal) import, so Python performance suffers, particularly in a CGI-like enviroment. The "approved" ways of extending the path and installing packages and modules are rarely followed, (it's been a moving target), making installations brittle.
And then there are some related issues: such as network installs of Python; or Python in the presence of both network and local installations.
The Proposal
In early 1999, Greg Stein wrote
imputil, which turns the problem on its head. It introduced the idea of having multiple importers. An import request would be handed to each importer in turn, until one of them satisfied the request. In addition, the API for importers makes it easier for the developer to deal with the package machinery.
This solves a number of problems. It makes it easy to import from alternate sources (you don't have to pretend you're a filesystem). It lets one package author install one set of hooks without interfering with anyone else's hooks (or lack thereof). The importer can be distributed with the package, making distribution and maintenance simpler. Combined with an archive of compiled Python modules, it makes awesome start up performance possible. A class diagram of imputil is here. Imputil itself can be downloaded from Greg's web site.
It does make writing certain kinds of import hooks more difficult. "Policy" hooks that affect an entire installation are not easy, (whether this is good or bad is a valid discussion topic). Hooks that take advantage of the current import's assumption that everything is in the filesystem may end up more verbose, (eg, a hook that overrides the "find" part of today's import mechanism, but leaves the "load" part alone).
In addition, there are areas that need improvement. There is currently almost no capability to manage the collection of importers. Performance on a normal Python installation is disappointing, (the only time imputil passes control back to the normal mechanism is for loading binary extensions). | http://www.python.org/community/sigs/retired/import-sig/ | crawl-002 | en | refinedweb |
Community discussion forum
How to create this
I need help
I want to create this:
Nowe I have tried to use Flash but cannot get on with it. Any ideas on an easier application to do this?
Cheers
Won't allow to insert this as a image for some reason.
-
- 4 years agoby
Thushan Fernando
Thushan FernandoAustralia :: Melbourne, AustraliaJoined 7 years ago
I guess flash would be the easiest method to make this (and the lightest as its all vectored and the size is small)
you dont really have to use Flash Flash to make it, try smome alternatives like SWiSH which makes things alot easier to work with flash
there are lots of effects (some you see on that gif) which will be quick to implement
Hi ukmedia
this is easy to do in SVG. use SVG [1].
the installation base of svg is growing rapidly, now that opera supports native SVG [2].
Mozilla/firefox native support will also be switch on by default this year [3].
Konquror has an own implementation which is expected to move over to Safari [4].
allmost all modern mobile phones support it[5].
so there is no reason to use flash for simple things like this.
you can code it by hand or use an editor like beatwaremobil designer.
[1]
[2]
[3]
[4]
[5]
[6]
p.s.:i think flash is not a good idea here since flash does not know text.
with svg however you are able to create a multi lingual version of that banner.
the text is indexable and searchable. plus you can use the exact same version for mobiles,
since it scales.
use SVG and have fun
bernd
i was just bored, so i recreated this banner in SVG.
the ziped version (svgz) has only 691 bytes.
have fun
bernd
- 4 years agoby
Thushan Fernando
Thushan FernandoAustralia :: Melbourne, AustraliaJoined 7 years agoah yes quite true... SVG is quite cool... screw flash(haha) follow bernd
as we are at it, today i was happy to find google indexing SVG files.
so SVGs are not only indexable, but they do get indexed now.
If we could possibly see IE7 implementing SVG, that would be so cool.
i know that MS does have an own implementation, which is used in MS Visio.
does anybody see a way to convince MS to include that instead of , or additionaly to, VML ?
- 4 years agoI have used SVG in one of my .NET web apps. My shop is all IE 6 and it automatically prompts to install the Adobe SVG plugin which installs almost instantly and after that all is fine. SVG is a standard and I would say it's safe to use for a site likly to be visited by different browsers.
- I have not used SVG before on any of my sites. If you have to download a plugin for users to view my site I would rather leave this as I myself hate plugins from the web.
- HI
hollystyles:
is that project, where you use SVG, online ? if yes would you post a link,please?
ukmedia:
as stated in a previous post Mozilla, Opera, Konquror ( Safari ?) do have their own
native implementation, which means you dont have to download a plugin.( thats why i wish
MS would also implement it in IE7, i would be really upset if they dont, since
they do have some kind of svg implementation in MS Viso.)
beside that there are some Java SVG Viewers which you could use as an applet in your page, for example
so no plugin required.(look at the examples on that side with a browser without svg support)
cheers
bernd
- 4 years agoOk, wht about an animated gif then. I created this one in Fireworks MX.
bernd,
Sorry, it's on a private intranet. It's a database driven bargraph with animation, using ASP.NET and VB.NET to generate SVG xml on the fly. Unfortunately I don't have a public .NET host to showcase it right now. I could maybe do a 'save as' on the generated page and ftp that up yo my webspace hmmm...I'll get back..
- thats also ok besides that its 35 times larger than the svg, and is not scalable.
- 4 years agoBernd,
here's the link:
Firefox prompts me to install plugin, but so far singularly fails to do it ! DOH.
Opera needed me to copy a dll and .zip from the Adobe svg installation into it's plugin folder, yuck.
IE6 just put up a box saying something like 'Do you want to install Adobe blah..." said yes, few seconds later.. bosh a graph!
- >>Firefox prompts me to install plugin, but so far singularly fails to do it ! DOH
yes thats a known problem with ASV3
to use ASV in Firefox, download ASV6 beta
and then folow the instructions on this page:
>>Opera needed me to copy a dll and .zip from the Adobe svg installation into it's plugin folder, yuck
you wont need a plug in if you download Opera 8.0
- 4 years ago
Er.. I just followed instructions here:
The SVG is an XML document, embeded in an XHTML document. So it is text, embedded in object tags that do specifiy "image/svg+xml"
I did find discrepancy in the DOCTYPE element at W3C on one page:
They show this:
Code:<?xml version="1.0" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
"">
And on the next page:
it's changed to this:
Code:<?xml version="1.0" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 20000303 Stylable//EN"
"">
>The SVG is an XML document, embeded in an XHTML document. So it is text, embedded in object tags that do
>specifiy "image/svg+xml"
correct but you also have to make sure that the server is sending the right mimetype !
you can securly drop the Doctype declaration as there are no validating SVG parsers out there.
what is more important is the namespace (xmlsn=xml namespace). a minimal document would look like this:
<?xml version="1.0" standalone="no"?>
<svg xmlns="">
</svg>
- 4 years ago
>>correct but you also have to make sure that the server is sending the right mimetype !
The server belongs to my ISP, how can I check what it's serving?
Ok I have ditched the DOCTYPE, and added the namespace.
I put the version 3.0 dll and zip in firefox's plugins dir, this stopped the plugin wizard but I just got blank page.
I installed Adobe SVG 6.0 but this broke IE.
So I copied the version 6.0 dll and zip to firefox and ditched the version 3 files, then uninstalled Adobe SVG 6.0 and re-installed version 3.0.
I can browse my page fine in IE and Opera, but Firefox still just shows a blank white page.
hi hollystyles
>>The server belongs to my ISP, how can I check what it's serving?
you can check with this with this simple script
Set xml = CreateObject("msxml2.xmlhttp")
xml.open "GET", "", False
xml.send ""
msgbox(xml.getresponseheader("content-type"))
if its an apache server, you can add a .htaccess file to your root folder containing the lines
AddType image/svg+xml svg svgz
AddEncoding x-gzip svgz
concerning ASV6 well it works fine with IE6 and firefox here...
you might be interrested in the latest mozilla with native SVG support
it can be downloaded here:
just grab the file mozilla-win32-svg-GDI.zip
if you run that build for the first time, you have to enable svg first. to do so,
1. run mozilla
2. type about:config in the addressfield of the browser
3.in the appearing searchfield type svg
4.double click on the the variable appearing to set it to true
more info can be found here:
hth
bernd
- 4 years ago
Bernd,
Ok I have grabbed mozilla-win32-svg-GDI.zip
I think i need some sort of install install script or bat that puts the built files where they need to be, but I'm struggling to see where I get it and how to run it. Can you point me in the right direction ?
no, you dont need an installer, just unzip, then go to the folder that contains the unziped files.
there is a file called mozilla.exe in the /bin folder . just doubleclick that file.
- 4 years ago
Bernd,
Ok sorry I lied, I downloaded FIREFOX-win32-svg-GDI.zip
Anyway there's a firefox.exe so I double clicked that, it opened a gecko browser Did the about:config thing and set svg enabled to true. Opened firefox from my regular shortcut and did about:config and the setting showed true there as well.
I created a .htacces file in the root of my webspace (where my svg document also resides) with the two lines you specified:
AddType image/svg+xml svg svgz
AddEncoding x-gzip svgz
So now when browsing I get an embeded object with scroll bars, but still the text of my svg document and not the image.
So I guess I'm still having trouble with content type ?
- hi hollystyles
well yes seems to be a problem with your ISP. you should contact them, and ask them to add the correct mimetype for svg.
as your server is still sending text/plain . could be that its not an apache server, or they switch of that feature.
are svg files loaded localy from hardrive being displyed correctly ?
p.s.:if you use the firefox build, be carefull, if there is allready a firefox without native svg running, and you click
the firefox.exe in that bin folder, another instance of ff without svg support will be loaded.
cheers
bernd
Related discussion
Generate an Image of a Web Page
by gregcost (28 replies)
HTML table problem
by sonampatnaik (2 replies)
Developer job-seeking advice
by saikat (2 replies)
how Create Dynamic website..
by MZee (3 replies)
Viewing updates to a web page
by rajivrranjan (0 replies)
Related articles
Quick links
Recent activity
- Zainab Ahmed replied to How to receive data in web ...
- Zainab Ahmed replied to How to receive data in web ...
- Zainab Ahmed replied to How to receive data in web ...
- Uncle replied to ms access report
- chris jones replied to Im having problems updating...
- chris jones replied to Im having problems updating...
Enter your message below | http://www.developerfusion.com/forum/thread/25799/ | crawl-002 | en | refinedweb |
Disclaimer: IANAD: I AM NOT A DOCTOR. Please consult one prior to engaging in any attempt at weight loss. I provide no recommendation you do things my way and take no responsibility for your health.
Disclaimer 2: I am not affiliated with, have never given money to, and have no relationship at all with the Unnamed Diet System I have based my diet plan upon. I do not recommend for or against their system, I am just sharing the fact that I my own variation of it and have had success.
Since my birthday this year, I’ve been working to lose a little extra weight. I now weigh around what I did when I was in high school and am still losing. I did this to show solidarity with my wife who wanted to lose a bit, but I needed to do this for myself as well.
Wanting to lose weight presented a few problems for me. First, I can not stand to do what somebody else tells me to do. My wife calls that being “obstinate-defiant.” Depending on my mood I usually either say I’m just selfish or I have a built-in distrust of the crowd. In any case, it was clear that unless I could tweak my plan a bit, I wasn’t going to be be happy with it. I’ve ended up tweaking less than I thought I would, but I still can, so there.
My next problem is that I refuse to do cruel or unusual dieting. I’m not going in for any fad or diet that drastically changes things. I’m not giving up cookies or eating grapefruit or doing Adkins. Regardless of what science there is or is not backing these things up, anything that changes what I eat is going to make me grumpy. I love food and there is no shame in that. The shame comes in consistently eating more than I need. I also find the idea of using a pill or surgery repulsive (no offense to those who do such, I won’t). For me, this process is about developing self-control, which means I need to learn to do it and my wife provides enough accountability to that end.
While metabolism and other factors adjust a person’s dietary needs, failure to consume enough calories to maintain a person’s current weight will cause a reduction. (Unless, which I suppose is possible, a person’s body is somehow capable of storing fat, but incapable of using it. I don’t know if any such disease exists, but I don’t have it if it does, so it’s not my problem.) Therefore, my diet plan would have to be something as mundane as journaling what I eat.
Which brings up my next problem, counting calories is too easy and not really addressing the full magnitude of the problem. Because not only should I eat less, but I should encourage myself to eat healthier. The system ought to take other factors into account.
My final problem is that whatever it is I do must be something I can do on a computer. I sit in front of one for around 8-12 hours per day. I can set my computer to remind of things, I can share things between myself and my wife to provide accountability on my computer, and while I like writing down notes, particularly when I’m brainstorming, I really don’t want to do all the math we’re talking about in my head all the time. It’s too tedious.
Fortunately, my wife previously went on and successfully completed a plan using Unnamed Diet System for which you may have seen ads. This system met all my basic requirements. Yet, other than they way they count points, they don’t provide any value to me, at least none I would pay for. Fortunately, everything I needed was published at various places on the Internet and I built myself a Google Docs spreadsheet to do it.
I provide a link to a version of it here for anyone interested in weight loss on similar terms. Go re-read the disclaimers again now. I’ll wait… Done? Okay, I don’t recommend this plan to you, but if you find the spreadsheet useful, great. I’m providing it under a Creative Commons 3.0 license.
If you have a Google account, you can create a copy of the spreadsheet to use it or you can download it in another format to use with Excel or OpenOffice (I think, haven’t tried that).
To use it, I first scroll right until I fill in some information about myself. This sets up the basic tolerances for my diet plan based upon sex, age, current weight, height, daily activity, etc. Then scroll back left and log my consumption. Under each meal, the wide column is for a description of the item eaten and the narrow for recording the points. Once I have “0” points left for the day, I stop eating. The spreadsheet does have a weekly allowance of extra points that I can use as well to indulge in something or just allow me to consume all the points for a day without worrying about going over by a couple. I use all the points I have for a day unless I’m really not hungry. This is not a starvation diet, so I try to use up as many of my daily points as possible. I do not worry too much about using or not using the weekly points. I often consume most of them.
The formula for calculating this points is simple, but elegant in that it encourages me to get more fiber and avoid fatty foods, while consuming fewer calories than I need to maintain my weight:
Points = Calories / 50 + Fat (g) / 12 + MIN(Fiber (g) / 5, 1)
I’ve been told that Unnamed Diet System actually divides Fiber by 4, but whatever. I have embedded a couple calculators in the spreadsheet for the times when I’m too lazy to do the math in my head.
I weigh myself once a week to track my progress (on a separate spreadsheet).Every 10 pounds, I adjust the chart to the right since the spreadsheet gives one less point per day for each 10 pounds I lose. I also copy the spreadsheet (actually, Terri manages this part) each week and blank it out to use the next week.
Eventually, I should reach my goal weight (I haven’t really decided what that is). When that happens, I’ll need to adjust the spreadsheet to deal with maintenance. When that happens, I will give myself more points until my weight stabilizes. I plan to continue recording points for the foreseeable future this way.
Cheers.]]>
I just finished writing a test that discovered I’d made a rather dumb and (upon looking back) rather obvious mistake in a return value in this Perl application I’m working on. The mistake involves a certain errant combination of
return and
and must be dealt with carefully.
The particular bit of code looks something like this:
sub blah { return $foo and $bar; }
For those who don’t know Perl intimately, Perl has two “and” operators. One named “and” and the other named “&&” like C. These are not strict synonyms. They are both shortcut operators, but the “&&” and the “and” sit at very different places in the operator precedent order. In Perl “&&” has a relatively high precedent and “and” is very, very low.
Back to the problem: this return was returning true when
$foo was true and
$bar was false. After rereading this line I smacked my forehead and said, “Duh!” The problem is that “return” actually has a higher operator precedent than “and” so this is how Perl would break it out if it showed the AST with parenthesis:
sub blah { (return $foo) and ($bar); }
This means the code immediately return
$foo in all circumstances. I might as well have just written:
sub blah { return $foo; }
The solution, then, is to either use the “&&” operator:
sub blah { return $foo && $bar; }
Or use explicit parenthesis:
sub blah { return ($foo and $bar); }
Or don’t use the “return” operator (since subroutines in Perl always return the value of the last expression executed):
sub blah { $foo and $bar; }
Cheers.]]>
This past week I purchased for myself a mini laptop. I wanted something nice and portable that I could use for personal use on trips and do hobby projects on. This thing is great. I got Ubuntu installed and started playing around since it’s been a couple years since I had a Linux desktop, and even then I usually had a Mac OS X laptop I used with it. As such, when it came to copy over my music and such, I needed to convert to a new music player. I’ve settled on Banshee for the moment.
Problem: There’s no import tool out there to go from iTunes to the current version of Banshee.
Solution: However, the process turned out to be not so hard for me. Here’s what I did:
Library.xmlfile, which I copied to the new laptop
Fortunately the
Library.xml file output by iTunes is in a standard format that is pretty easy to understand. Also, Banshee keeps much of the information about your music and such in a SQLite database. So, I could very easily automatically copy over all the ratings and other information I’ve been assembling for the past several years.
Here’s the conversion script,
itunes-to-banshee.pl that I wrote for download:
Download itunes-to-banshee.pl
Update (thanks to Rolo): You will need to install a few dependencies as well. On Debian or Ubuntu, this is done by installing:
This can be done from Synaptics or by running this on the command-line:
sudo apt-get install libdatetime-perl libdatetime-format-iso8601-perl \ libclass-dbi-sqlite-perl libxml-twig-perl libmime-base64-perl
Once Banshee has finished adding your song files to its music library, close Banshee. Make a backup copy of
banshee.db to somewhere in case something goes wrong (which can be found at
~/.config/banshee-1/banshee.db). Then run:
perl itunes-to-banshee.pl Library.xml ~/.config/banshee-1/banshee.db
This may take a few minutes, depending on how many songs you have. It might show you some warnings if your
Library.xml is weird (remember I wrote this just for me). It may also tell you if it can’t find some songs in Banshee that it found in your
Library.xml. (It did for me because I’d deleted some songs from the disk and iTunes never figured it out.)
Once it finishes, start Banshee back up and it should have the play lists, ratings, play counts, and last played date set for all the songs that had such information in iTunes.
There are a couple things you might want to know about how the program works. First, it does not touch smart play lists. I don’t know and don’t particularly care how to read the smart playlist configuration from iTunes. I was able to recreate the smart play lists I had in a few minutes.
Second, the import script uses the song title and song file size to match songs from the iTunes library in the Banshee library. This is probably safe since I’ve never seen two CDs with the same songs on them end up being the same size, but it’s theoretically possible it could be a problem.
Third, if you have duplicates in your library, this script will only change one of them. I’d recommend weeding those out first.
I’m not interested in maintaining the script, but I’ll answer questions about it. If you ask really nicely and I’m in a good mood and the change you want is very small, I might be willing to make it, but that’s a lot of “ifs” to line up.
Cheers.]]>
With my work on qublog.net continuing to progress toward hosting this service on the qublog.net web site. I’ve been using the silk icon set for the icons, but recently decided to switch over to the fugue icons. In the process, I rethought how I added icons to the system, which I subsequently chose to try on some work things as well. There’s no magic here, nothing to patent (at least I hope not), but it’s worked pretty well, so I’ll share. I suppose this might not be “The Best Way”, but it’s certainly now “My Best Way.”
First, these icons are all being added without IMG-tags. This keeps my content less cluttered and allows me to very quickly switch icons if I change my mind later just be changing my stylesheet. Typically, these are added to buttons, links, and spans like so:
<span class="icon o-time">12:44 PM</span> <a class="icon v-add o-task">Create Task</a> <input type="submit" class="icon v-save o-time" name="op" value="Save"/>
The first class “icon” performs the work of making sure the icon itself is attached to the element properly. This looks like:
.icon { padding-left: 18px; background-repeat: no-repeat; background-position: 1px 1px; min-height: 18px; }
This basically makes sure that my 16 pixel icon has 1 pixel of space around it and makes sure the element is at least tall enough not to cut anything off.
Then, the icon itself is chosen by examining the other associated classes. I’ve divided the classes into nouns (with an “o-” prefix), adjectives (with an “a-” prefix), verbs (with a “v-” prefix), and adverbs (with a “r-” prefix). I define these classes within the style sheet in that order so that adjectives will override nouns, verbs override both nouns and adjectives, and adverbs will override everything. So, now my style sheet looks something like this:
.o-task { background-image: url(ticket.png) } .o-time { background-image: url(clock.png) } .a-group { background-image: url(folder.png) } .o-task.a-project { background-image: url(briefcase.png) } .v-add { background-image: url(plus.png) } .v-add.o-task { background-image: url(ticket__plus.png) } .v-add.o-task.a-project { background-image: url(briefcase__plus.png) }
By setting up the style sheet this way, a regular task reference shows up with a ticket icon. However, a project (which is a kind of task in Qublog) shows up as a briefcase. In case I need a generic add link, I can use a lone plus sign, but if I want a specific add link for tasks I can have a ticket with a plus sign. Finally, I can add a new project with a briefcase with a plus sign.
Later, if I want to modify the icons used, I can do so by just adding another class or something. It’s pretty flexible and if I make sure and include enough information on every link, span, or button that might have icon, I can make my icons more or less particular later just be adding a line or two to my style sheet. (For example, if my icon set lacked a briefcase with a plus next to it and then added one or I created one, then I could add that last rule later and rely on the ticket with a plus sign in the meantime.)
Some final varations I also use are things like having the icon only and ignoring the text itself:
.icon.only { display: inline-block; overflow: hidden; white-space: nowrap; width: 0; }
Now I can add the “only” class to my spans and links and the text becomes hidden. I combine this with a jQuery code similar to this:
jQuery(document).ready(function() { jQuery('.icon.only').each(function(){ jQuery(this).attr('title', jQuery(this).text()); }); });
This causes the text of the element itself to show up as a tooltip when you hover your mouse over the icon. Generally speaking, though, I usually try to do this on the server side when I use the “icon only” classes.
I have a few other “icon” class variants for changing the position of the icon, dealing with small 9-pixel icons, and making sure buttons and specific other things look good with the icons, but I’ll leave these as exercises for the reader.
Cheers.]]>
I’m messing around with Drupal again for a church web site. The task I’m working with at the moment is trying to make it so that the breadcrumb is based upon a menu other than than that Navigation menu (which is the normal way content breadcrumbs are set in Drupal). Long story short, we’ve chosen to make the Primary Links the site map rather than the more typical Navigation menu.
The technique that really ought to work is this:
menu_set_active_menu_name('primary-links'); $breadcrumb = menu_get_active_trail(); drupal_set_breadcrumb($breadcrumb);
That’s simple enough, but it doesn’t work. At least, it doesn’t work unless it gets called at some point very early in the request life cycle. After some digging through the Drupal API, I found that
menu_get_active_trail() is actually defined like this:
function menu_get_active_trail() { return menu_set_active_trail(); }
I vaguely remembered this from a year ago when I was last mucking with Drupal. This is a pretty common Drupal idiom. It’s basically the Drupal developer’s way of using the mutator (i.e.,
menu_set_active_trail()) simultaneously as a accessor and default setting. That’s fine. However, when you look into
menu_set_active_trail() I found something wrong. The code looks like this (minus some code that I don’t want to spew here):
function menu_set_active_trail($new_trail = NULL) { static $trail; if (isset($new_trail)) { $trail = $new_trail; } elseif (!isset($trail)) { // LOTS OF CODE HERE TO CONFIGURE THE DEFAULT // BASED UPON THE OUTPUT OF menu_get_active_menu_name() } return $trail; }
Do you see the problem? If you happen to set the value returned by
menu_get_active_menu_name() (via
menu_set_active_menu_name()) in time, you can get it to setup your breadcrumb using some fairly smart code that does the right thing for you.
However, if you don’t set it before the first call to
menu_set_active_trail(), you can never run that helpful default code ever again. There’s no way to unset
$trail because
$new_trail is ignored if it is unset. There’s no way to execute the default code without unsetting
$trail since it’s all within this single function.
Bah! The correct solution is to move the default code into a separate method so that it can be called separately later and reused.
My work-around will probably be to install the Menu Breadcrumb module. I’m trying to keep the number of modules installed on this particular site very low, so I was hoping to avoid it. Yet, installing a module is going to work out better than me reinventing the wheel with a module of my own, in this case. It’s these little frustrations that drove me away from Drupal in my personal site. I’m probably being too picky and hypocritical, but whatever.
Cheers.]]>
Recently, I started trying out git to see what the relative advantages and disadvantages of it are over Subversion, SVK, and CVS. I am not quite ready to give up the use of Subversion as the mechanism for sharing my projects with others, if only for the fact that I know more programmers familiar with it than with git (and also for the fact that my web host provides built-in support for Subversion, not git, at this time). Therefore, I’m actually using git as the front-end to access my Subversion repositories, which is quite similar to how I use SVK (and the normal model for SVK use).
The first difference I had to deal with was simply that the command-line is just a bit different. So, I had to map various ideas in Subversion/SVK/CVS into git before I could use it very successfully. Now that I’m using it comfortably, I’m certain I like it better than anything I’ve yet tried and will likely continue to use it over SVK, which was my preference up until now. On the other hand, I don’t think I’ll ever completely give up using Subversion, SVK, or CVS entirely unless and until they become obsolete tools that no one is using.
Why do I like git over the alternatives? First, it helps protect me from common mistakes. For example, one thing that annoyed me at first is that you have to add any files you change to the list to be committed before committing.
% git add lib/Net/Google/PicasaWeb.pm % git commit
That seemed a little annoying compared to how SVK lets you commit with a single “svk commit” command and select which files to commit by editting the commit message. Yet, once I got used to it, I realized that I rarely ever have to back out a mistakenly committed file. It only commits those you explicitly ask for. If you’re sure you want to commit everything, you can add whole directories or use the
-a switch to commit, but the fact that you have to explicitly make these decisions rather than just committing whatever makes it difficult to make mistakes in what you commit (unless you are in hurry).
Next, I can commit only parts of a file. This feature blew my mind. This should be a mandatory feature for any source code storage system. When I’m making a complex set of changes, but don’t make intermediate commits for whatever reason (laziness, forgot, not sure if what I’m doing in one part really serves the whole yet, etc.), I can perform an interactive patch.
% git add -p % git commit
When running add with the
-p switch, you will iterate through all the listed files (or in this case, all files changes in the repository), and be shown the diff. You can then add individual hunks from the diff to the commit or not. On the latest release of git, you can even edit the diff to pick out individual lines from the hunk to stage and leave the rest out for the moment. This is a killer feature.
Another nice feature has to do with how git manages to deal with tracking changes. In Subversion, SVK, and CVS the most important unit to work with is the code itself. However, with git, the most important unit to work with is the diff of how the code changed from one commit to the next.
What do I mean? As an example, if I take a project and branch it to work on some complicated feature while the trunk continues to move forward. Then, I merge them when I am done, Subversion, SVK, and CVS handle the merge by diffing the branch with the trunk and then putting all the changes together in a single revision. These will use the history of the trunk and branch in various ways to help resolve problems, but they essentially treat a merge as a single diff. (SVK is a partial exception if you use the “-I” option during merges.)
With git, all the changes of the merge are copied over to the top of the trunk and then you deal with conflicts one diff at a time. This way the entire history gets pulled over from the branch onto top of the trunk, as if the work had been done from the top of the trunk in the first place. You take a little extra time to “rewrite history” this way, but it makes your patches a little more logical.
This same handling of patches works all over the place. For example if you just want to take advantage of some new features placed on the trunk, you can do something called a “rebase” rather than a “merge.” This is the same process as the merge, except all the patches remain on the same branch.
% git rebase master
You can even move the entire branch to a different base point if you want to try your branch out using features still in progress on a different branch:
% git rebase --onto bar master foo
The above command would move all the changes on branch “foo” on to branch “bar” as if these changes had been written for “bar” originally.
Under SVK or Subversion, a merge would be done by pulling in the latest trunk changes over into the branch. This adds a the trunk changes into the middle of your branch as either an individual commit (or as a set of commits with “-I” in SVK).
Have you ever tried out some code changes and then realized they weren’t going to get you where you want to go? At the same time, though, there are some good ideas here that you don’t want to just throw away. In Subversion you might create a special branch to hold them, but probably not. In SVK, you might create a local branch or something to hold work in progress, but again, probably not.
In git, you can just run:
% git stash
This takes all of the changes you’ve just made and stores them quickly in a special stash. You can then see the stash and recover the latest stashed item using:
% git stash list % git stash apply
There are a few other commands with the stash as well that are handy, but these are the ones I use most.
Those are a few of the reasons why I really like git. On the other hand, there are some aspects of SVK that are nice, such as the fact that it is very easily extended using Perl (since SVK is written in Perl and written for extensibility). I’ve been thinking about looking into adding features like these into SVK as a fun exercise since I’ve been playing with git. I sure don’t have time to do any of the other of the fun projects I do on my own time, so why not?
Cheers.]]>
I think if I had my way, I would develop tools for developers most of the time. This would be a good implementation of Yegge’s “only build stuff for yourself” principle. The Android project catches my eye because it offers the possibility of being able to get rid of the Microsoft OS on my phone without having to buy a new phone. However, after looking at the site, I’m now more interested in the tools they’ve put together to aid this development.
The first is a tool called “repo”. It’s somewhat similar to a tool we use at work for managing our source repositories. It basically adds some higher level niceties to git and helping connect with code reviews. In addition, it interacts with a nice web-based UI for code reviews they’ve called “garrit”.
Garrit is kind of like a ticket tracker for submitting patches and works almost in reverse of the usual ticket tracker. Normally, you post a problem and then put together a solution. This tool lets you submit a solution and then lets the reviewers determine whether it’s a good idea or not. This is important for a project like Android where it will likely be common for someone to write a patch to get Android working for his particular phone or app and then submit it to the community for general inclusion. It supports the git model for development pretty well.
Anyway, I’m interested in Android and hope some smart folks with better C skills than me can make it work on my phone in the near future, but in the meantime, I think dev tools present some interesting ideas on their own.
Cheers.]]>
I don’t travel away from a network very often, particularly now that I have an EVDO card. However, it is inevitable that when am away from a network (on a plane, in snow storm, on the road) I want to try out a CPAN module or need one that I haven’t yet installed. When this happens and I can’t do whatever it was that I wanted to do because I didn’t have it, I get grumpy. However, thanks to CPAN::Mini, I don’t have to worry about this problem anymore.
% sudo cpan CPAN::Mini % mkdir -p ~/projects/minicpan
That installs the module itself and sets up the directory for the local repository. Then, I added a nice alias to my shell config:
alias cpansync="minicpan -l $HOME/projects/minicpan -r"
I picked the OSL mirror here since I happen to know the man responsible for admin’ing that mirror. I can then run this to get a copy of the latest and greatest on CPAN:
% cpansync
After that runs, I can run it again any time I want to refresh, which I usually do right before I travel and any other time I want a fresh copy.
Finally, I modified the list of CPAN mirrors to include:
Now, whenever I install modules, the modules are installed directly from my hard drive. This is also a bit faster than having to download the modules at install time (i.e., as long as I have already done the download previously).
This setup makes me happy, especially when on airplanes.
Cheers.]]>
I found out about the fact that I would be giving two talks at the Pittsburgh Perl Workshop this week about 2 weeks ago and have been working frantically to get my stuff together since. Well… at least, I worked frantically for the last week. The first week, was not quite frantic. It was more like somewhat diligent, but I did spend a couple nights playing Zelda that should have been spent prepping. Anyway, I got my talks done and I don’t think they were particularly excellent, but I don’t think they went badly (at least, I hope they didn’t). But, this post isn’t really about my talks so much as it is about debriefing PPW 2008. On to the festivities…
It all started on a completely average Friday morning of packing… okay, skipping ahead. I flew to Pittsburgh through Milwaukee. In Milwaukee, I was supposed to take a late flight from there, but that wasn’t supposed to me I took off from Milwaukee after midnight. However, the flight the airplane hadn’t even taken off yet from Dallas when we landed. So, I worked more on my slides which weren’t quite finished while I waited. And waited. Finally, the plane arrived and took me to Pittsburgh, where I landed around 2am.
Strange thing about the flight, though. There were only 5 actual passengers on an Embraer 170. For those of you that aren’t airplane fanatics (like me), that’s a plane that seets about 70. There were two flight attendants, the pilot and copilot, and another 4 or 5 airline personnel catching a ride to Pittsburgh. That’s all.
I was the only one that had a checked bag. Uh-oh. When I got off the plane, a baggage handler was asking each of the 5 real passengers if they checked a bag and I said, “Yes.” He said he’d meet me at baggage claim. He met me there, but with no bag. It was lost. Poop. I almost never check a bag and this time I did. Dumb. Oh well.
So, I catch a cab and go on to the hotel and finally get to sleep sometime after 3am.
My phone alarm woke me up around 7am. Yeesh. I felt like something scraped off the bottom the mattress, but I got up and showered and got some coffee. I am in a hotel about 1.5 miles from the conference, so I figured I walked. I think this was a good thing as it helped me wake up a bit and it has been a glorious weekend in Pennsylvania. I stopped off to get some antiperspirant and some Listerine breath strips to minimize in odious scents I might be emitting since I was lacking my toiletries.
I made it to Wean Hall on CMU campus and promptly picked up my conference T-shirt and took it to the bathroom to exchange it for the mildly damp shirt I had been wearing that had probably picked up odors from a shuttle bus, three airports, two airplanes, and a cab, not to mention a 35 minute walk on a warm morning with a 30 pound backpack. Okay, so with new shirt donned and odor opposing chemicals applied to my underarms and mouth, I went in to the keynote on Perl 6 in progress.
The keynote was by Patrick Michaud and the news for Perl 6 is good! Yay! For anyone who has followed the drama of Perl 6, you know that some of the history hasn’t necessarily been positive. However, it sounds like things might actually be moving on the right track now. The final release of Perl 6 still expected on Christmas, as it has been for at least a couple years now. But, given Patrick’s report, it sounds like it might actually be a Christmas before my son starts elementary school. (I wasn’t always sure of that.)
After the first talk, a company called Bug Labs gave a presentation. It was kind of an odd talk I thought since it really didn’t have much to do with Perl (existing APIs are Java), but it was more of a suggestion that Perl hackers could get involved. I’m a bad candidate for that. The product itself looked like an interesting diversion, but nothing I’d be willing to sink that much cash into. I’m not so much a hardware guy anyway.
My personal feelings aside, I have some doubts about the business plan of Bug Labs. It’s nice that they support Open Source, but the viability of the business model is a little questionable to me. The presenter kept saying they wanted to make it so easy that “your mom” could come up with a custom device. First, my mom is not likely to care about a device unless it comes prefab with as few buttons as possible. Second, my mom is not going to put down that kind of cash just so she can create some sort of customized web-cam/motion-sensor/GPS something-or-other. She has better things to do. I suspect she didn’t mean my mom literally, but that doesn’t mean my statements are any less valid for the hypothetically average mom either.
After that was a talk on extending SQLite, which I admit I paid very little attention to because I was still trying to finish my slides for my first talk on Sunday. I thought this sounded interesting and it would be nifty to have more of that in some of the code we do at work, but since we don’t use SQLite for the work I’m thinking of, the talk doesn’t really apply to that.
Ricardo Signes then gave a talk on something I’d never heard of before, Rx, but I think I’m going to check out. He’s basically put together a data validation system similar to XSD or RELAX-NG, but for general data structures in memory. It’s a really cool idea and I want to see what I might be able to gain from it, so it’s on my short list of things to try out in the near future. The fact that he’s got implementations for it in Ruby, Python, and PHP is also very interesting for interoperability.
Next was lunch where I met Dan Klein, a fellow consultant for Grant Street, in person for the first time and had a pretty mediocre chicken marsala, but an excellent salad with fetta cheese, roasted almonds, and strawberries with a raspberry vinaigrette.
Paul Grassie, another co-consultant, gave a talk on regexes, which I again confess I mostly missed to work on my own talk. What material I did catch was a good review and there were bits that were somewhat new, but I wasn’t paying close enough attention to really learn them.
By the way, Paul and Dan work for Tom Christiansen Perl Consultancy. Given what I know of them now (having seen Paul in action and spent part of a day with Dan), I’d recommend their services if you have some need of Perl training for your company.
This talk I almost completely ignored. There were some things in it that were interesting, but it was mostly for the sysadmins. I was familiar with and impressed with some of what was presented, but I did the system administrator thing already and have turned a new leaf. I’m not going back to the land of putting out fires in the world of hardware.
The next talk was on a database tool called Kioku::DB, which is an interesting idea and something I’m glad to be aware, but not something that will change the way I do my work or hobbies. Again, I’m still working on slides as I glance up now and then and this didn’t really capture my attention.
Finally, Ricardo gave his second talk of the day on Email. Having myself engaged in mortal combat against the spam monster at one of my first tech jobs and having been a systems administrator, this talk was flipping hilarious. His analysis on the insanity of email and the email specs is right on. For example, “asdf@!#$&” is a valid email address within the To: or From: line of your email, even though such an email couldn’t possibly be delivered. Yet, “asdf@!#$&.” is not. Stupid. It’s a great talk and helps to explain why his CPAN repository is filled with email code, when it doesn’t seem like it should really be quite that hard.
After that, the Grant Streeters still around headed out for dinner where there was some lively discussion of work and politics. Then, I went back to the hotel room and came up with a Lightning talk, did the final clean up on my slides, and then tried to get more sleep.
I haven’t finished the story about my bag. Early Saturday, I called Midwest Airlines to find out about my bag. As I mentioned, mine was to be the only checked bag and the handler that I talked to couldn’t find it. He made me out a claim for the bag and said it would be delivered as soon as he could get it to me.
When I called Midwest, the conversation with the woman went something like this:
Me: Hi, I’m calling to find out about my missing bag.
Woman: What’s your name?
Me: Andrew Hanenkamp. H-A-N-E-N-K-A-M-P.
Woman: Mr. Hanenkamp? Yes, your bag came in on your flight. Why didn’t you pick it up?
Me: Um, because the guy who said he was going to pull it out of the plane said he didn’t find it and gave me a baggage claim ticket.
Woman: Okay, I’m very busy right now, I will try to get them delivered to you hotel. Can I get your number?
I gave her the number and then finally was able to get back with her during lunch. This conversation went something like this:
Me: Hi, I’m returning a call made to my cell phone?
Woman: Who is this? Is this Mr. Hanenkamp?
Me: Yes, I believe you have my bags, have they been sent to my hotel yet?
Woman: Well, your bag was here last night. My supervisor wants to know why didn’t you pick it up last night. It was here already.
Me: I know. As I told you before, the guy who pulled it out of the plane told me it didn’t come.
Woman: Hmm. Okay. Umm. Did he give you a baggage claim ticket?
Me: Yes. I have it in my wallet. (Thinking, do you want me to shove it through the phone?)
Woman: Okay, well, I guess I can have the delivery service take your bag over. Which hotel are you at? Do you know the zip code?
Me: [chuckle] Um, no, I don’t generally memorize the zip codes of the hotels I’m staying at.
Anyway, so when I got back, I had my bag and was finally able to change into completely fresh clothing and brush my teeth and such. Ah.
I walked over to Wean Hall again after picking up my coffee and started out in the Rakudo Perl talk.
This talk was also given by Patrick Michaud and I went to it rather than the Moose talk because I’m already somewhat familiar with Moose and because Perl 6 and Parrot are something I’d like to volunteer my time on. Perhaps I will at some point, but I have too many other things I’m doing that I’m more immediately interested in. I don’t have any excuse for not joining for reasons due to not knowing how now, though. Patrick went over all the sorts of things someone wanting to contribute might want to do to get started.
Next I went to this talk by Tom Lane rather than attending Schwern’s talk on the y2038 bug. The talk was moderately interesting since Tom Lane is a Red Hat employee who spends most of his time contributing to PostgreSQL. It was interesting to here the differences between how Perl is managed and PostgreSQL, but I confess, I was again distracted, but this time by a bug report on one of my CPAN modules.
This was another talk by Paul Grassie and at a very introductory level. I don’t think he covered anything I don’t use on a nearly daily basis, but I was still working on bugs on my CPAN module. I did note that Paul has a very soothing voice and since I’ve only had about 8 hours of sleep total in the past 48 hours, it was making me a bit sleepy.
Next was Jonathon Rockway’s talk on MooseX extensions. There are some pretty cool extensions to Moose available for dealing with various customizations to your meta-programming code. I’d recommend looking over his slides when they are posted on his blog. There’s some interesting things in there.
We had more lunch. I got to meet Tom Welsch’s kids, since Tom brought them with him as he did a little light recruiting for Grant Street. I had more yummy salad and a decent sandwich (at least for having been catered).
I gave and got through my whole talk on Jifty. I’ve posted the slides, which is enough summary.
I watched part of Kevin Falcone’s talk on Prophet and this is another one of those things that I have got to get my hands on in the near future. Qublog, in particular, could benefit from what it does very nicely.
Again, I gave this talk and the slides are posted.
Lastly, I attended the talk Schwern gave on his new method signatures class. It’s pretty nice overall, but I’m a little hesitant to actually use it. His point about there being 240,000 extra lines of code that can be taken off of CPAN because of it is well-taken, but I’m not quite convinced to actually use it yet. On the other hand, the parser module that Matt Trout wrote, Devel::Declare, which has some interesting possible implications for the DSL work I want to do with Jifty.
Most of the lightning talks were great, mine excluded. I fail.
Ricardo gave a talk on Dist::Zilla, which I am also adding to my short list. It’s kind of a replacement for Makefile.PL and Module::Starter and Module::Release and such, but does some very smart things. If I find out that it’s extensible and mostly makes sense when I take a first hand look, I’ll likely be using this to manage my modules in the future.
Kelli Ireland gave a talk on the orange shirt that’s been traveling with various members of the Perl community to exotic destinations and offered it to anyone going somewhere interesting if they will take pictures.
There was a talk on going Sixty to Zero in Perl (a play on the Zero to Sixty tutorial given). This talk was great in that it showed how very readable Perl can be turned very unreadable if someone wishes to do so deliberately in order to maintain job security. It was a very good example of why serious Perl developers don’t understand why Perl gets kicked in the face so much when it is not difficult to write good Perl code if you actually try to do so.
Probably the most memorable lightning talk, though, was the LOLCAT history of Perl. Unfortunately, there’s no way such a talk can be translated into this summary in any useful fashion, so I will just say, that it was awesome.
After that, I skipped out because I’m sleepy and should have been asleep already, but decided to write this instead… Okay, so I think that does it for PPW 2008. This is Sterling Hanenkamp signing off. Good night and God bless.]]>
I gave my talk on REST at the Pittsburgh Perl Workshop today. I think it went alright, but I’m sure it was dreadfully boring. If I ever give the talk again, I’m going to have to liven it up a bit and speed up the pace. For anyone that attended the talk, I would be interested to have your feedback, so please feel free to post it here.
Here are the slides, as promised:
Again here are links to the sample implementation files:
Cheers.]]>
I just wanted to make a quick post to note that I will be in Pittsburgh this weekend speaking at the Pittsburgh Perl Workshop 2008. I will be giving two talks on Sunday afternoon:
Also featured at the conference will be talks by a couple of my coworkers:
I’m mostly done with one set (REST) and started and somewhere around 25% done with the preparation I need to get done for the other set, which will include a shameless plug for my other project.
This isn’t the first time I’ve given a 50 minute talk, since I taught a class, but this is a much different format than that. Since I’m attempting to re-use the excellent work of others (particularly Jesse Vincent and Audrey Tang on the Jifty talk), I hope I will do alright.
Cheers.]]>
At my current job, I have quite a few more important appointments on a regular basis that I’ve had since I was teaching courses at K-State. As such, I’ve resurrected my dedication to the calendar. I have finally gotten a setup working that seems to do the job of allowing me to keep a good calendar, keep my wife informed, and keep my phone informed.
To do this, I’m using the following:
This is working pretty well. The weakest link in this setup is The Missing Sync which works, but is an absolute steaming pile when it comes to stability. It crashes at least 50% of the time I sync. Fortunately, it usually recovers if I force quit and unplug/plug my phone. If there’s anything else out there the next time I’m looking to buy something like this, I’ll try it instead unless they can get this fixed.
One thing I’ve discovered is that the integration with Apple Mail and iCal is wonderful. I can very easily create appointments by hovering over dates mentioned in an email, click on the popup that appears, and select “Create New iCal Event…” This makes keeping track of coworker’s vacation time and the downtime schedule very easy.
Cheers.]]>
I have a Mac and I listen to music in iTunes most of time. After quite a bit of experimentation I have, I think, finally come across a way of creating a play list that allows me to tolerate iTunes. I don’t like listening to my music in order. I’d like it to play randomly. On the other hand, random typically means I get one or two songs I don’t particularly like that come up often and some songs I do like getting played almost never. This is never what I want, but this is exactly what the useless “Party Shuffle” feature of iTunes. I hate it. Even with the “don’t play a song by the same whatever” settings set, it still manages to do so too often for my to tolerate.
My solution is to put together a play list that contains a set of music I like to listen to and then I create a smart play list based on it. When configured correctly, my play list will play all the music I want to hear regularly, but not too regularly. Here’s how I do it. I create a smart play list and set it to match all of these rules:
After that, I tell it to limit the play list to Y items selected at random. For the “Top Music” play list, I use 75 items. I then make sure it’s Live Updating in case I add more music.
Finally, I play the list on shuffle. Since I’ve started playing my music this way, I find that I listen to all of the music in the list every month or so and I never hear a single song too often and it’s completely random.
What I really wish was that Party Shuffle would work the way I think it should. If I want to see the next 25 songs ahead, I want it to guarantee it won’t duplicate anything within that window. If I could then have it try to fit in songs I haven’t heard in a while, but prefer the higher rated songs, that would be great. It wouldn’t have to be perfect, but iTune’s play list options are abysmal.
Cheers.]]>
I just finished reading this blog post and was convinced for a few minutes that this guy was stalking me even though I have no idea who it is. Fortunately, the references about class and school allowed me to relax that he’s actually just stalking my long lost twin.
Thanks to Randall for passing that link on to me. It’s a little scary how accurate it describes me and, like Randall, I have to confess a little embarrassment at some of the harsher bits of the analysis because it’s very close to the mark, even down to the mention of specific details.
Cheers.]]>
Part 1 of this article was posted on onlamp.com. Due to changes in the format of O’Reilly’s online publications, they are no longer interested in Part 2, so I have published it here. Sorry for the delay but it took some time to contact the editor to learn all this.
In my previous article, I described the process of building a RESTful web API from the server-side. This article will describe how we write the client to the web services server provided in the first article.
The source code for the client and server described in this article is available by saving the following links:
There are some instructions on how to install them in comments in each file, but you’re mostly on your own. I can say that they should work (do work for me) if installed correctly and if the required libraries are in your library path. Sorry that I haven’t made these easier to use.
The RESTful server written in the previous article helps me track the books in my library. This time, we’ll look at a command-line client that takes advantage of the API. The principles applied to build this client could be used to in your web application to access web services APIs provided by third parties or to build a GUI application or anything else. A command-line just provides a good simple interface with which to demonstrate.
The command-line script is named
book and operates through the use of subcommands. For example, to list all the books in the system, you can run:
./book list
In cases where a new book is added or updated, you specify the update using a file name.
./book update 1-56563-833-6 treasury.yml
Without further ado, let’s jump into the code.
Before getting into the commands themselves, there is some setup performed by the script. First, we have a constant named
HOST. This constant is just the URL of the REST API server.
I’ve also defined to additional helper functions that will show up throughout this article. The first,
barf(), is called whenever an error occurs. Since our server returns errors as HTML, it just pulls title of the error from the HTML header and the rest of the error message from the first paragraph. The second helper function is
slurp(), which is really nothing more complicated than the function by the same name in
File::Slurp (but without an additional dependency, which I was attempting to avoid for the simple demo). All this does is suck in all the text of a named file and return it as a scalar.
The last bit of setup is that I’ve initialized one global variable named
$ua for convenience. This variable contains a freshly initialized
LWP::UserAgent object. The only library from CPAN this program depends upon is
libwww-perl. I’ve also imported some helpers from
HTTP::Request::Common, but we’ll get to that in a bit. Now, let’s talk about the interesting stuff.
The first and simplest command we can run is to simply check to see what resources are available. You can see all the books stored by listing them using this command:
./book list
Calling this command causes the script to run this bit of code:
# GET /=/model/book my $response = $ua->request(GET HOST.'/=/model/book/id'); # On success, find the link and print them out if ($response->is_success) { my @links = $response->content =~ /\bhref="([^"]+)"/gm; for my $url (@links) { my ($id) = $url =~ /([\d-]+)$/; print "$id: $url\n"; } } # On failure, barf else { barf $response; }
Our first act is to grab a list of books from
/=/model/book/id. As we saw in the previous article, this should return an HTML file containing a list of links as part of the API. If the request was successful, we pull all the links out using a regex and print them out to the end-user. If the request fails, we barf.
The key bit of code here is all within the call to the
GET subroutine, which is exported by
HTTP::Request::Common. This performs all the extra work we need to build a standard
GET request. Another alternative is to use the
get() method of
LWP::UserAgent, which would look very similar:
my $response = $ua->get(HOST.'/=/model/book/id');
Moving on, I hope I haven’t turned your stomach with how cheaply I’ve parsed the URLs and IDs out. This is not an ideal use of regex, but it does the job. If I were to do this the right way, I ought to use a decent HTML parser (I’m partial to
HTML::TreeBuilder) to pull the links out more carefully. Another solution would be to alter the server so that it returned the links as a YAML file and then parse apart the YAML data to get the information I want. I’m lazy and wanted to avoid additional module dependencies at all cost on the demo, so I do it this way.
This will return a list of IDs, which may be ISBNs or just a number, which isn’t terribly useful. A simple enhancement to the system would be to change this to use the “title” field or something else, but we’d also have to enhance the server to support that.
Once we know what resources are available by the list, we may then want to check one out to get more information about it. We can do this using the read command. For example,
./book read 1-56563-833-6
As you will recall from the server, we can read data using a GET request to
/=/model/book/id/<ID>. This is what we do when the read subcommand is executed:
my $id = shift @ARGV; # GET /=/model/book/id/[id] my $response = $ua->request(GET HOST.'/=/model/book/id/'.$id); # On success, print the file if ($response->is_success) { print $response->content; } # On failure, barf else { barf $response; }
We grab the ID given from the command line and use
LWP::UserAgent to
GET the book resource we’re interested in. On success, we then print the file out and on failure we barf.
If we wanted to do something more interesting with the resource file like present in a form or place it into a table or something, we’d use a YAML parser to decode the data and then manipulate it.
Now that we’ve looked at the resources in our (presumably empty) database, let’s look at how we added them. The first command to consider is, of course, for creating a new book record.
./book create treasury.yml
The command slurps up the file you name and uses it to
POST to
/=/model/book to create a new book record. We then scan the response to make sure we know what ID was assigned (because, if you’ll remember, not all books have ISBNs, so the server provides an alternate identifier in such cases).
my $file = shift @ARGV; # Slurp up the contents of the given filename my $book_data = slurp $file; # POST /=/model/book my $response = $ua->request(POST HOST.'/=/model/book', 'Content-Type' => 'text/yaml', Content => $book_data ); # On success, return the new ID assigned to the resource if ($response->is_success) { my $url = $response->header('Location'); my ($id) = $url =~ /([\d-]+)$/; print "$id: $url\n"; } # On failure, barf else { barf $response; }
As you can see, this is exactly what we’ve done. Since the server requires the data to be specified as
text/yaml, we’ve made sure to note that here. If the file is improperly formatted, the server will take note and return an error status, so I’m not being too picky about making sure the data is clean. However, you might want to make sure your data is sane before sending it in such cases just to avoid potential problems.
The other important detail to notice here is how we get the ID assigned to the resource back. We do it by checking the
Location header. This is because we’ve build the server to return a
201 Created status with a
Location header referring us to the new resource location. If we were to perform an immediate
GET on that location, we’d get the resource record we just saved back from the server. In fact, that’s probably what I should do. Instead, I’ve just ripped the ID off of the URL since I know what it will look like. This is a minor breach of the opacity principle of URIs, so if you’re sensitive about such things, I recommend you take the extra step and pull the ID from the returned YAML data.
Once we’ve added a resource, we might want to update to correct a typo or submit additional information to the record. We can do this by running:
./book update 1-56563-833-6 treasury-updated.yml
As with the create command, we slurp up the file argument, but instead of a
POST, we
PUT. We also use the URL included the given ID.
my $id = shift @ARGV; my $file = shift @ARGV; # Slurp up the given file name my $book_data = slurp $file; # PUT /=/model/book/id/[id] my $response = $ua->request(PUT HOST.'/=/model/book/id/'.$id, 'Content-Type' => 'text/yaml', Content => $book_data, ); # On success, just announce success if ($response->is_success) { print "Updated $id\n"; } # On failure, barf else { barf $response; }
We again make sure to let the server know we’re passing it a YAML data file. Here the returned result is pretty empty, so we ignore everything but the status code and just return a success message. If we had implemented resource renaming on the server using
PUT, we would probably need to watch the response for the new resource URI to make sure we fetch the new ID. Since we didn’t, we don’t.
Finally, when a book gets lost or sold or just thrown away, we can delete it from the library. From the command line, this looks like this:
./book delete 1-56563-833-6
Internally, the command is again very simple. This time, we just need to post a
DELETE to the appropriate URL,
/=/model/book/id/<ID> and check to see if it succeeded or not.
my $id = shift @ARGV; # DELETE /=/model/book/id/[id] my $response = $ua->request( HTTP::Request->new( DELETE => HOST.'/=/model/book/id/'.$id ) ); # On success, announce it if ($response->is_success) { print "Deleted $id\n"; } # On failure, barf else { barf $response; }
First thing to note is that here we build the
DELETE request ourselves. This is because
HTTP::Request::Common did not provide a shortcut for
DELETE when I originally wrote this client. I am happy to say that such a method exists as of 5.814 of
libwww-perl. In case you aren’t able to use the latest version for some reason, this is how you live without. Fortunately, it’s not difficult to do this on our own.
Other than that, this should look remarkably similar to the update command minus the slurpy file action.
Now that you know the basics of building a REST server and client, you’re ready to move on to bigger and better things. Here are some resources for learning more about REST, for creating and using REST interfaces without building one yourself, for enhancing your REST interfaces, and some existing REST interfaces you might want to model or take advantage of.
If you’re going to build a REST interface or work with one, it is helpful to find some details on what such things are commonly involved with. The most commonly referred to reference on the subject is the REST Wiki. If you want to know the general outline of how some guys that think a lot about REST APIs in the abstract, this is a good resource.
An even more vital resource is the actual HTTP standard. It describes what the various request types and response types are for and how user agents should expect to deal with them. Since REST is tightly bound to HTTP, sticking to the proper behavior in HTTP is very important. Therefore, I recommend becoming familiar with RFC 2616, which helps define the HTTP 1.1 protocol itself.
If you don’t want to mess so much with the code and just build on the foundation already laid by someone else, the best tool for the job that I currently know of is OpenResty. This is a stand-alone REST database system. It’s essentially a middleware platform for accessing a database over HTTP and is a pretty good system to emulate.
I’m also partial to the REST plugin provided by Jifty. Just by implementing a few models and actions with Jifty, you get a REST interface to them for free. See Jifty::Plugin::REST for the server side implementation and Net::Jifty for the client.
Both of these tools have a very similar feel to the implementation I’ve built for these articles because I like the style of Jifty’s implementation.
The most significant RESTful API add-on I know of is OAuth. This tool provides a standard mechanism for sharing protected data between disparate services via REST. For example, let’s say you’re building an application that automatically sets the profile photo for several different social networking sites. You don’t want to store these photos, you just want to grab them and update a bunch of social networking sites. You could use OAuth to allow your users to grant you permission to access their Flickr or Picasaweb accounts without asking them for their username and password, which is one of those things that serious privacy advocates go bonkers over.
There are some big players implementing this and I’d love to see more mashups using this rather than asking me for my username/password to get photos or load contacts.
There are lots and lots of RESTful web services available. However, I will highlight just a few that I have worked with personally in the past.
The first two are by Amazon. Amazon’s Web Services are all REST based, but the one’s I’ve worked with include S3 and EC2. With Amazon S3 you can store files on their services and be charged in micro-payments for just the storage you use and the amount of data transfer (which is pennies per Gigabyte). With Amazon EC2 you can run Linux-based servers that are started and stopped and manipulated using a REST-based service. These are again paid for in micro-payments by the hour (starting around 10 cents / server hour last I knew). There are Perl libraries already available for manipulating both of these without having to know the API directly too (search CPAN for Amazon).
The other I have worked a little bit with is Intuit Quickbase. Quickbase allows you to build database applications with a point-and-click interface. You can then pull and push data in and out of the system using a RESTful interface.
The last one I want to mention is Hiveminder. Hiveminder is a web site for managing your to-do list. Hiveminder provides a number of different interfaces including an IMAP server interface and a RESTful Web API. Hiveminder is built with Jifty and has a special sub-class of
Net::Jifty available for accessing the web API, Net::Hiverminder.
There’s more that could be said and more resources I’d like to refer, but I think this article is plenty long. Adding a RESTful interface to a web application is a relatively simple thing to do and is great for giving folks a clean way to access and manipulate your application’s data.
Please feel free to comment here and I will try to answer any other questions or comments.
Cheers.]]> | http://feeds.feedburner.com/Contentment | crawl-002 | en | refinedweb |
Torrent free adult video blonde babe
Lee kissingsex toys indian latin liberator cunt lot my files scenes pic couple the transexual lee clips fuck horny sex alcohol tub escort positions she password dildos on blowjob taught
car classified
x2 used assault! Hair amateur poison creampie father car guys and xxx lesbions gay john sexton gay. Pillow machines roleplay xxx ranch galleries caught beautifull clip im galleries hot county lolita cunt alyssa amateur. Comwww milano confrencing movie tgp largeist. Machines. Sexy creampie tommy public 2 guys. Man sexual torrent reality lesbians on black poses nude asian thumbnails movie women should ride lyric creampie milano asian sexton panty lee should milano anita password tub sexy
brazilian beautifull naked
guys beautifull. Heels classified pamela for drunk! Haveing tgp too natural mans wife dutch experience! Tub big places. Old 2 clip anderson yuna transsexual ivy 2 places black nonconsensual watch pics. Hardcore galleries alyssa. Xxx define natural. Online twin she ed lesbions big hair at in torrent hunk. Naked cum too cuming anita hair lolita
escort
girls mature should pillow chat of reality nude filled pussies.
Transsexual haveing throat company company dildos mom
naked
dutch hollywood pussy? Movie fantasy daughter the mans toys. Pamelas fucking fat shirt porn photo scenes long anderson daughter com and premarital mature 5 high ride for john my poison machines pamela. Password thumbnails com indian com naruto sex movies cum nagma bunny. County positions at final jolie. Lesbian ed school how roleplay star girl sample penis. Lesbians at final. Transexual pamelas bbw having ting:.
MACHINE at Lower
Pussies lyric classified ride. Sex twin ranch roleplay in ed. Deep blowjob having anderson too used pussy public taught escort fuck comwww? Haveing cock. Girls hunk used assault watch hollywood guys much father horny. Man ranch thumbnails throat torrent. Indian
fat clips sex hot
daughter. Schoolgirl deep videos password xxx fucks asian machines london lot confrencing hot. Bunny
tiger
alyssa machine dildos alcohol pornstar schoolgirl couple pamela. Roleplay daughter too dicks. Hardcore girl transsexual caught naked confrencing tommy sex pamela alcohol world 5 tub liberator bbw chinese. Hot taught
old machines amateur haveing
photo blowjob lesbians. Fantasy final girl women long. Shirt babysitter clip. Alyssa amateur heels holmes in hollywood xxx women
female
for asian largeist man my she car fannin nude ng.
Beautifull babysitter galleries
Dicks. Lyric transsexual confrencing filled heels yuna. Im fucks holmes. Car drunk naruto fat. Alyssa girls couple taught! Com amateur wife shirt babysitter
porn chinese
pics latin nude sample the daughter
couple picture amateur
poison ranch lot ranch. Nude of movie
much
xxx watch lesbian brazilian torrent girl lesbian classified cuming pic county public natural watch lyric. Sex too clips jolie. Ffm babysitter dutch naruto watch. Latin lesbions dildos. Offender sexy mom offender bedroom nagma ffm wife hunk porn premarital mans. Couple horny company lolita. Files dildos. Pussies used. Having thumbnails. Hair transsexual offender high car 2 how. Nonconsensual lesbian movies sites
holmes john pornstar
fat girls. Natural
machine fucking
Photo school having games positions 2 online videos black hollywood assault alcohol in scenes twin online. Escort bedroom taught high roleplay public sample. Olsen mature. Creampie indian lesbions confrencing schoolgirl nonconsensual sites fucks drunk videos london. Penis fannin heels fat star liberator milano? Virtual
sex hollywood video
father hardcore asian. Hair 5 pussy chinese big milano schoolgirl milano man porn nia positions of. Hot galleries fannin ivy panty xxx anita hunk kissingsex drunk places sex star fantasy company liberator hardcore pornstar hardcore nia bbw lee blowjob should filled
celtic
ffm nude shirt photo places virtual pamela in having throat man password deep indian asian hair? Male machine alcohol tgp alyssa 5 heels final guys picture anita john mom caught galleries caught panty. Direc filled! Of poison. Cunt on panty. Dicks for my dicks pussies blowjob my alcohol. Virtual pamelas clips tub games amateur
girl guys 5
naruto fucking! Cuming male big fucking
fuck cunt sex mom
comwww haveing horny xxxl old toys beautifull thumbnails
final
define girls guys women line high london 2. Penis sexton dutch.
Women pamela she
Olsen lesbians xxxl gay? Sexy comwww father asian dicks haveing babysitter premarital john sites london brazilian having. Car. Old in poison man. Naked sample pic dildos
fuck my sexy
pamelas positions hair drunk on
bbw mature
transexual women dutch how games scenes fuck fuck hollywood im sex classified picture pillow mature. Fantasy transsexual fannin toys used mans yuna movie. Chinese horny pamelas cunt taught beautifull women schoolgirl chinese drunk dicks.
Liberator sex
xxxl latin old man amateur. Daughter ride alcohol public escort password largeist 5 milano london mans. Ed roleplay alyssa mature throat tub olsen places cunt machines guys mom and poses sexton roleplay should. Deep lee hardcore videos x2 clip chat pic long lesbians sample
porn the star at
panty movies 2 for largeist. Cum com lesbian she.
lyric sexton
hot sex positions tub
sites scenes thumbnails direc pamela
fucking
experience liberator fucks fantasy. Cum in. Pics girls! School bunny experience women scenes. Pics anderson amateur galleries dildos alcohol blowjob school clips movie files lesbions taught penis. Movies cunt direc creampie cuming offender man
files porn
mans couple online alyssa twin fucking
machines asian sex
movies photo twin nia hot heels poison hair comwww positions. Black assault online caught fucking. Much on too pic the. Experience high haveing cum
used lot of sex
wife world penis clips pussy taught father nonconsensual premarital
sexton company
company babysitter amateur xxxl hunk.
Too jolie
comwww world
im sexy. Natural indian my ride. Virtual sexy cuming kissingsex sample how pornstar assault wife
pamela natural having car watch sexual creampie lolita old john xxx fat ffm reality sexual pornstar pussies classified anderson dutch pillow photo? Lesbions! Creampie county filled confrencing.
Final
escort male having pamela bbw much fuck my big
cock ride
nagma sex horny. Define latin. Bedroom im girls high torrent! At deep public hot bbw. Pornstar line throat holmes thumbnails yuna. Hot gay ed anita transexual alcohol lot star positions chat nude
com sex
should final father comwww alyssa assault should define. Natural poses shirt com of dvd heels fucking shirt penis 5 bedroom line machine panty nude virtual liberator pamelas sexual cock machine and male places deep picture lesbian pillow tommy the cock lee classified gay clips beautifull guys used.
nagma sex
Free adult video blonde babe
Naruto hair fat cum pillow milano pillow heels! Nonconsensual anderson ride ed throat hair. Panty company blowjob mans taught pornstar school yuna cunt assault thumbnails car toys too schoolgirl
fucking anita
galleries taught. Final dvd places escort pamela how transsexual classified lolita panty. The creampie line torrent pic holmes creampie and long cock caught jolie confrencing chinese pornstar babysitter torrent gay sex lolita chat clip tgp positions movies x2 hot. Sites sexton she premarital 2 offender ivy comwww car lyric daughter man girls kissingsex cunt pillow ed high
star
london having on password. Used xxx sex! Pussy county ffm dicks big company cuming nia in poses sexy offender sexton county kissingsex babysitter beautifull drunk assault able.
Latin
She
the tub assault cunt cuming haveing hair im hollywood ranch fannin
creampie in lolita pussy
should positions at hot sexy taught black girls picture much xxxl too milano pic transexual lesbions how penis hardcore man. Games asian picture videos clip deep offender naruto used daughter at dildos. Watch experience latin haveing machines movie throat nagma liberator ffm wife mans ivy dvd car naked pamela sites
star
filled fannin sample sexton latin man. Caught ed company transsexual long tgp cunt confrencing fucking of lot should high girl x2 london hair toys brazilian. Babysitter nude latin having couple lot amateur sexy naruto father torrent jolie galleries throat confrencing
sex public having places
tommy hollywood? Mom public ed lyric transexual 5 dvd
haveing hot black sex
women hunk. Experience files movies 5 world comwww assault. Dildos virtual milano indian. X2
sexual roleplay
filled milano pussies. Scenes
wife mature blowjob
direc dutch sexy watch nia nia. The
pamela watch tommy and
sex largeist bedroom guys photo hot comwww. Pussy blowjob cum holmes father ivy fuck pic hunk poison lyric natural nagma experience 5 my chinese. Girls transexual cunt nagma hunk premarital. Yuna videos password yuna used having fat clips. Videos. Anderson jolie big bbw. Used high. Premarital lee nonconsensual mature heels
porn x2 fantasy yuna
natural online shirt lot places fantasy daughter company photo. Sex man schoolgirl hot lesbian guys thumbnails cock lesbian fannin pussy im escort london online comwww yuna final creampie machine machines naked school cuming picture. Ride. Asian lesbians too pussy shirt roleplay big fucks alyssa filled 2. Games pamela clip for confrencing twin tommy assault penis anita premarital. Xxx im? Kissingsex define. Girls scenes kissingsex creampie.
Final
guys offender
lyric largeist fantasy gay babysitter cock tgp schoolgirl places old alyssa lesbian for password. Should lee bedroom for natural amateur beautifull positions pic nonconsensual lyric poison define 5 nude. Thumbnails old hollywood
and classified
olsen much tommy cum. Scenes used comwww tub amateur watch machines shirt x2 much ed
sexy
nia car bunny sample places car blowjob naruto 5. Fucks guys xxxl alcohol. How schoolgirl torrent
should sex ed school
filled. Long poses transexual lot drunk guys yuna clips
xxxl video
throat porn classified male. Shirt blowjob company ffm heels and picture jolie lesbian scenes xxxl sexton male deep holmes man im transsexual having picture mans sample. Sex classified thumbnails milano male nagma taught lesbians sexy latin. London naruto county
high sexy
star dicks
scenes nude ivy alyssa
guys machines virtual on kissingsex 2 john pamelas gay public sexy.
Pic nia sexy
bedroom public of.
Thumbnails
hunk mom mature tub john confrencing fucking lee taught should. Dvd nude ranch twin deep of! Toys horny london dildos reality
movie transexual
pics alyssa ivy pussy hot sexy asian 2 final porn nia couple games photo nonconsensual comwww hardcore com natural car asian indian twin daughter clip cunt big dicks my
latin free fucking movies
public pillow pornstar panty girls online having wife black shirt hunk comwww for women! Lee reality my latin places
of women mature thumbnails
chat. Babysitter. X2 fannin dutch.
Long dicks
bbw fantasy movie define father hollywood lolita dvd. Nagma watch xxx deep movies transexual.
Ranch drunk
hair big
classified lesbions! Women anderson too sexual clips lesbians. High the dildos fucks
clip video porn sex
galleries caught at hollywood and lesbions liberator sexton lot. Yuna files. Girls transsexual machine and line hunk ride. Schoolgirl asian in county gay direc escort poses pamelas watch having
pics bbw
sample. Password
sex photo
final alcohol school too horny. Hot sex im cuming wife heels star clip online high twin. Of direc scenes sex roleplay. Bunny
on sex free line
how heels world experience sexual kissingsex largeist long taught largeist in assault panty london pussies mom poses premarital creampie picture. Sexton pillow sexual dutch dicks in machines experience. Pic line
the bedroom having in
girl big. Dvd thumbnails old videos. Naked nonconsensual!
Cunt x2 line fantasy sex olsen deep cuming porn pillow experience beautifull. Having old pornstar gay dvd. Alyssa at schoolgirl fannin line assault. Movie online girls tommy mature world penis too nagma files password xxx in deep world largeist lesbian toys x2 and anderson virtual com experience the pussies yuna couple. Much galleries car. Hollywood fucking. Clips nonconsensual fat tub lyric babysitter brazilian toys scenes roleplay poses files im naked videos lesbians transexual ffm school dvd ivy girl latin. Black machines having lolita male pamela hot mature panty gay com! The tub school dutch much milano long old filled photo nude fucking hollywood tub old picture places company.
Sex galleries father daughter
videos cum she alcohol throat mans. Nonconsensual drunk tommy
pussy cock
define horny lesbians x2 fuck london nagma offender and amateur asian babysitter
throat deep porn
penis. Filled transexual mature. Bbw couple tommy public pamelas amateur pornstar girls confrencing shirt. Hunk positions tgp bedroom olsen creampie pamela. Sample chinese too couple panty company schoolgirl and daughter asian anita male. Hollywood dvd lot. Indian car virtual big. John high
encounter
yuna nia fucking classified naruto com sites dildos filled xxxl cock
babysitter
on naked olsen thumbnails man picture ivy offender. Bedroom father naruto tgp define naruto bbw thumbnails sample deep too wife john. Anderson final my lyric bunny in twin pic direc ed scenes dicks pillow county schoolgirl milano for. Ed should scenes tgp mom lolita anita world brazilian. Roleplay comwww blowjob milano drunk watch! Guys sex positions daughter liberator pic chat. Assault kissingsex pamelas lesbions long penis kissingsex ed largeist watch watch ride liberator 5 in largeist company sample hair. Sexton final torrent
files
dicks school. Transsexual heels clip ranch pic drunk sexy
porn pamelas
sexy pamelas pussies pussy caught hair jolie of london taught amateur? Transsexual sexy holmes alyssa lot poison black anita
used couple
she at comwww galleries at star
cum lers.
olsen latin galleries sexton
free blonde video adult
at. Fantasy lesbions fucking escort anderson panty mature porn dicks bunny cock. Male premarital roleplay mans dutch ride long father indian tub largeist porn star company
virtual sex online
pamela kissingsex videos beautifull transexual gay roleplay at should much define ffm. Line hunk fucking yuna toys jolie pussy cum. Games largeist indian poses
video nude tgp free
pics and thumbnails positions she transexual liberator ivy pamelas nude world chinese offender big lyric scenes. Women on panty nia. Im county dicks lyric the experience blowjob high haveing old machines amateur direc fannin 5 ed long hardcore escort sexy offender public dildos twin taught mans 2 pornstar. Machine haveing caught hardcore sexy how photo define jolie babysitter. Milano sample ed online indian watch sites my. Fucks creampie dicks fat drunk. Files sexy milano girl holmes used xxx xxxl classified tommy clip picture lolita. Hair fuck. Virtual for pamelas taught nonconsensual male caught high
reality
assault babysitter babysitter dutch porn natural nagma of pics anita london roleplay scenes mom deep hardcore creampie. Liberator lyric anita direc tgp daughter. Sexual car fucking movies virtual com ranch assault latin hollywood final
man sex having gay
pussies places lesbian john reality county
high
alcohol chat mature and nonconsensual penis couple creampie final for sex nia nia wife heels define
photo
nonconsensual car. Gay cum pic naked
files male
fantasy clip places sex london. Games chat much positions brazilian school. The password. Having x2. On male.
jolie pillow sexy porn deep reality high mature toys games world assault company fucks fat pussy tommy dicks
machine
gay thumbnails horny files direc john car sex indian alcohol sample largeist caught final. Bbw fuck indian
penis transsexual nonconsensual blowjob throat. Fantasy father natural. Naked twin horny blowjob classified videos. Lot of pillow man for. Heels hardcore much pamelas lee experience how games babysitter. Heels panty olsen much liberator movie blowjob classified clip poses in tgp couple anderson john line naruto girl lesbian old fuck
video adult confrencing
father thumbnails password. Positions should online london transsexual too nia pics babysitter reality experience tub nagma pic chinese throat women movie photo shirt. Used password sexy kissingsex games transexual poison asian online hunk x2 places jolie dildos girl
daughter im and on pillow torrent bedroom naked dildos. Beautifull in old guys movies. Anita ed haveing cock taught heels world my alcohol pornstar torrent toys chinese holmes she black virtual county liberator haveing long alcohol. Fat schoolgirl picture watch. Drunk dutch public nia ride natural line scenes shirt positions chinese mom pamelas schoolgirl pic. Nia nude the anderson lesbions offender escort asian define sexy naruto having.
Dutch porn
star car yuna mans 2 x2 cuming of filled big transexual hollywood sexual amateur pamela. Lesbians premarital man confrencing throat holmes. Girl
lesbian drunk
holmes and experience in penis cock ranch hardcore pamelas lyric cuming ed com london roleplay. Fannin lolita places latin dicks clips password hardcore com machine creampie caught haveing on porn xxxl ride milano. Pussies ivy much im x2 ride xxxl cunt john pamela father naked cum. Lesbian sample chat sites schoolgirl milano
mans penis
fucking dvd too girls.
Pussies lolita
picture hunk comwww asian watch long tgp cunt ffm. Alyssa car watch cuming pussy virtual! Fat girls nude wife thumbnails. Male hot lee sample horny cum hot big women toys comwww escort having ranch confrencing transexual should pamela. Lesbian amateur olsen gay mature pornstar hair roleplay 5 ranch brazilian clips couple yuna bunny
free big black
videos county county largeist black online escort poison dutch latin ed roleplay school fucks 2 lolita pussy comwww. Natural clips. The shirt xxx xxxl alyssa fucks
dvd movie adult porn
lolita dicks.
nude. Naruto roleplay direc
hunk the naruto. Father ffm hollywood fucks ranch having bbw amateur wife creampie too liberator big games school lyric anderson confrencing lyric escort movie positions girl dutch. Experience torrent dvd too milano x2 machine fannin places escort ed pussy machines
black
cock man guys kissingsex online mom movies black long high thumbnails the fannin company. Escort shirt clip pornstar couple password. Lesbions password nagma haveing lesbian transexual premarital transsexual women bunny ride lesbians line company taught clips
pillow
in should! Virtual. Filled wife chat pornstar blowjob mom lee my online male online. Direc asian. Sexy offender sex pic girls filled star. Man fuck wife penis premarital throat clip pussies. Torrent cunt yuna naked pussy line
filled free panty pics
brazilian transsexual fannin final world couple creampie. Lesbians women pillow. Tommy roleplay cock. Filled. Having
jolie sex
haveing she beautifull assault pic in sample nia school hair alcohol heels. Lesbian cum
comwww
for pamela photo fucking watch milano sites. Tommy ivy blowjob cunt haveing hardcore kissingsex bunny gay pillow. Drunk confrencing public comwww. Tub sexy taught male much
xxxl
amateur porn poses line
pornstar machine
pussies holmes com dicks watch deep shirt jolie how ies.
father filled big virtual clips dvd black public used sexy cock watch. Lesbian the fucking john pussy xxxl hair babysitter hardcore the
im for my lyric
chinese girls online. Shirt tgp porn x2 liberator machine old positions escort. Files hair! Sites confrencing pussy positions machines dvd pics define indian. Online movies kissingsex poison! Lesbians
creampie
assault. Pussy fannin milano. Caught. Ffm pornstar final deep pussies car couple assault alyssa drunk bbw 5 babysitter fat premarital pamelas tommy. Fucks sexton cock county mans nude! Wife com password naked of. Holmes used for and nia define blowjob natural taught hardcore.
Escort transsexual
scenes mans lesbians xxx latin high couple games naruto man transexual lesbian. Couple caught on creampie videos cunt. Twin
pics free nude
male fannin having chat. Much files brazilian john. Much dildos photo heels naruto alyssa galleries women sexy cum ranch creampie beautifull roleplay high company how torrent deep pornstar cum sex 5 my fuck she assault nonconsensual cum long pussies chinese black confrencing public classified much bedroom dicks. Games escort? Fucking jolie hunk lot poses reality hot natural daughter pussies experience alyssa drunk anderson in final. Sample ed girl. School asian picture? Old world latin fuck alcohol toys. Tgp sexual. For ffm lesbians. Com bunny x2 poison. London xxx hunk poses mom clip brazilian fat girl panty tgp. Too fucks dildos ed nude bbw babysitter bunny transexual x2 movie schoolgirl black girl online too should lesbian schoolgirl tub password
places liberator sample taught panty
define sex
horny fuck experience is.
girls cock fannin bunny. Assault scenes tgp mom dutch long sexual heels heels. And mom pic kissingsex couple im big brazilian the hollywood pussies london natural gay yuna chat county final mans lee drunk in im olsen girls of pussy transexual. Final clip. County 5!
Poses couple
nude nagma chinese. On torrent caught. Lot sites poison long movies roleplay places positions comwww asian hair transsexual dildos videos movies brazilian offender. Hardcore my photo fucking lee ed. Hot files xxx chat car poses dicks hollywood tommy toys beautifull ranch my machine. Machines
porn xxx
guys lolita long anita mature experience liberator drunk
photo nonconsensual
john. Drunk blowjob yuna lesbian sample comwww bunny. Alcohol couple public toys. Com naked hot clips sexton poses pic. Olsen how? Xxxl indian daughter. Fuck galleries cum fucks should offender father watch. Dutch roleplay movie yuna public gay torrent. Transsexual escort throat assault dutch dvd couple milano amateur hunk women panty used world tgp com fantasy my taught panty pamela. Lesbians too games
games
picture. Man having heels com the thumbnails! World tommy x2
experience
public escort movie lesbions high
fannin offender county
guys pamela beautifull dildos indian creampie indian scenes alyssa 2 jolie school cum games premarital hardcore nia define fantasy should classified mans. Nude taught girls lesbians lolita nia pamelas thumbnails. Nia porn machines naruto sites lesbian nagma
hardcore free reality
male dildos cuming cock county ivy largeist pics thumbnails latin. Asian pillow lesbions tub filled. Offender nude watch much haveing ranch pussies fat chinese having lee sexton hot girl. Sex schoolgirl school transexual big xxx ed. Ride. Machine jolie fantasy shirt deep alyssa. Fucking holmes lyric creampie wife naruto too. Nonconsensual direc. Kissingsex alcohol ride used olsen. Hollywood alyssa deep! Lyric kissingsex panty photo e! | http://uk.geocities.com/losingweightwhdi/maicw/free-adult-video-blonde-babe.htm | crawl-002 | en | refinedweb |
Journal tools |
Personal search form |
My account |
Bookmark |
Search:
history of gambling in us
internet radion station
silent hill 4 demo
rocket scientist of the week
regent law school virginia
solid waste engineering
beer shirt tee
cisco networking understanding
action auto pennsylvania
panasonic 2.4ghz phone
doctor online for free advice
phone mistress
used car bill of right
blank 100 charts
how to write party...
...suggest topspin
news12.com school closings
balance and scale
two week wait
my sweet shadow in flame lyric
center fredericksburg visitor... morocco arlington va
article directory internet marketing myhomeincomeportal.net
small scale wind turbines
28 week fetus photo
pre christian germanic
tiny tove teenage tricks
string equality
maksud stress
mcchord afb wash
...
... to send money western union
power ranking week 7
bearshare crack serial
start up ... stay packages
hillside strangler crime scene photo
tottenham court road london map
... duo
shit sick twisted
19 fetus old week
bruk danmark och seder
nelson bakewell news
single woman in ...
world war one soldier uniform
drop photo spitfire tank
cheap air fares around ...
... racing league
philadelphia magazine le bec fin photo shoot
let your light shine song
5 ... taurus wagon
cheap flights from uk
red week time share
sharp watch atomic timer battery
...
regular season nfl mvps
pregnancy symptoms by week
whistle song juelz
1838 michel b. menard ...no doubt dont speak lyric
india fashion week 2006
sirloin steak cooking time
strip club ...
... smile
usa trade
canon pixma ip6600d photo printer
setting up exchange 2003 active...dressed man chords
slide show screensavers
photo imaging solution
globe oregon scientific smart...installation windows media player
13 week fetus pic
week of reconciliation
how much weight should women lift
chinese ... flatts
counter strike v1.6
photo plant tobacco
import cover girls ...
...late travel
the ram inn wooton under edge
aaliyah gallery photo
construction opportunities in china
4wheel drive tires
drunk flash ...
robbers cave
6061 cell nokia phone review
saratoga
one week diet plan
queens of the stone age mp3
graeme ... alarm clock plug in
effects of drug use on fetus
sky captain trailer
akane soma wallpaper
texas star modulator
103...
... programming visual
heart shape clipart
photo frame sets
bed breakfast mountain ... sets
silk bridesmaids dresses
retial week
sudoku org uk daily
ghd ... wisconsin
consumer reviews of canon photo printers
simple interest calculate
linux ...part soul spirit
19 week old fetus
mcgrady shoe
but definite...faceplates
computer virus jokes
ar 15 9mm upper
lets go trick...
... software
britax car cover marathon seat
datona bike week pictures
man mj spider
bart san francisco map...lake map oreille pend
company show trade
5 bmw city culver series
20 week fetus pictures
army spouse support
dark space online
gun sight tool
bobcat...and girls in code lyrics
black tail deer photo
jd wood estate agent
dance hits of the...
... not shoot the deputy
nude beach in europe
club golf selecting
picture of 35 week pregnancy
network behavior anomaly detection
river crest golf
plymouth uk info
akasha spa ... 2 episode guide
captain james cook map
raven and beastboy
wonderful music
serena williams photo shoot
we sing silly songs lyrics
hill map silent virginia west
007 nintendo 64...
...
activity logger 2.0 asprotect
nyse 52 week low
his coy mistress
lali puna faking ...pain
newport news va deed transfers
aging anti link suggest
picture of 15 week fetus
computersdirect newcastle
kaiser chiefs millenium square tickets
dog baby
young author program...tales the knights tale
bleiler fhm gretchen photo shoot
chick dixie sin wagon
definition muscular...
19 Fetus Photo Week
15 Fetus Picture Week
15 Fetus Old Week
32 Fetus Photo Week
20 Fetus Photo Week
12 Fetus Photo Week
27 Fetus Photo Week
15 Week Fetus Photo
21 Week Fetus Photo
Fetus Photo 16 Week
20 Week Fetus Photo
11 Week Fetus Photo
Fetus 30 Week Photo
13 Week Fetus Photo
15 Fetus Week
15 Fetus Old Picture Week
18 Week Fetus Photo
3 Fetus Old Photo Week
14 Week Fetus Photo
24 Week Fetus Photo
Result Page:
1
2
3
4
5
for 15 Fetus Photo Week | http://www.ljseek.com/15-Fetus-Photo-Week_s4Zp1.html | crawl-002 | en | refinedweb |
tag:blogger.com,1999:blog-119740212009-05-01T06:53:29.133+05:30Rakesh Rajan's BlogRakesh Rajan Geocoding Using GeoName Data<span style="font-family:verdana;">I have be using </span><a style="font-family: verdana;" href=""><span style="font-weight: bold;">GeoNames</span></a><span style="font-family:verdana;"> data for implementing reverse geocoding.</span><br /><br /><span style="font-family:verdana;">It took me a while to get it working and thought of sharing the steps that I followed ( Postgres + Postgis ) to get the reverse geocoding work ( and it is fast! )<br /></span><span style="font-family:verdana;">1) Created and loaded the table by following </span><a style="font-family: verdana;" href="">link </a><span style="font-family:verdana;">or </span><a style="font-family: verdana;" href="">link</a><span style="font-family:verdana;">.</span><br /><br /><span style="font-family:verdana;">2) Created a geometry column</span><br /><span style="font-family:verdana;">SELECT AddGeometryColumn( 'public', 'geoname', 'latlon_point', 2163, 'POINT', 2 );</span><br /><span style="font-family:verdana;">Note that I am using </span><span style="font-weight: bold;font-family:verdana;" >2163 as SRID</span><span style="font-family:verdana;"> ( the unit is meter )</span><br /><br /><span style="font-family:verdana;">3) Populated the column</span><br /><span style="font-family:verdana;">update geoname set latlon_point =</span><br /><span style="font-family:verdana;">transform(GeomFromText('POINT(' || longitude || ' ' || latitude || ')',4326),2163)</span><br /><br /><span style="font-family:verdana;">4) Created a </span><span style="font-weight: bold;font-family:verdana;" >clustered gist index</span><br /><span style="font-family:verdana;">CREATE INDEX geoname _latlon_place_index</span><br /><span style="font-family:verdana;">ON geoname</span><br /><span style="font-family:verdana;">USING gist</span><br /><span style="font-family:verdana;">(latlon_point);</span><br /><span style="font-family:verdana;">ALTER TABLE geoname CLUSTER ON geoname _latlon_place_index;</span><br /><br /><span style="font-family:verdana;">5) To find nearest 5 records (within 5 kms) for the given lat/long ( 12.97199/77.60483)</span><br /><br /><span style="font-family:verdana;">SELECT * FROM geoname</span><br /><span style="font-family:verdana;">WHERE</div><img src="" height="1" width="1"/>Rakesh Rajan'll be there @ BCB5<a href="" title="Barcamp Bangalore 5 - Winter Edition"><img style="width: 439px; height: 142px;" src="" border="0" /></a><div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>Rakesh Rajan is Live !!!<a href="">We</a> are feature complete and passed out invites to our first set of users!<br /><br /><a href="">My Timeline</a><br />-XP<div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>Rakesh Rajan Memcached on Windows.<br /><br />I was finally able to get some luck from this <a href="">site</a>. Thanks to them, I was able to get memcached running successfully.<br /><br />I have attached the memcached ( 1.2.1 ) setup <a href="">here</a><br /><br />Sets to run the server<br />1) Unzip the folder to any directory<br />2) Within the folder, run <span style="font-style: italic;">memcached.exe -d install</span> ( One time )<br />3) For<br /> starting the server: <span style="font-style: italic;">memcached.exe -d start</span><br /> stopping the server: <span style="font-style: italic;">memcached.exe -d stop</span><br />4) To uninstall the service, run <span style="font-style: italic;">memcached.exe -d uninstall</span><br /><br />You can also run <span style="font-style: italic;">memcached.exe -h</span> to find all the properties that can be configured.<br /><br />-XP<div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>Rakesh Rajan<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="cursor: pointer; width: 530px; height: 168px;" src="" alt="" border="0" /></a><div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>Rakesh Rajan<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="cursor: pointer; width: 400px;" src="" alt="" border="0" /></a><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><br /></a><div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>Rakesh Rajan Y! SupportThe closure <a href="">Y! photo's</a> was indeed a bad news. I was totally in love with their interface and obviously unlimited storage:P The day the news came in, I decided to move my photos ( > 2 GB ) quickly ( read: bad judgment ) to flickr. The decision was pretty much based on Flickr's brand name over other options. After the image transfer, I realized that I would have been better off using either <a href="">Shutterfly</a>/SnapFish service which offers unlimited storage.<br /><br />But as expected, after doing the photo transfer to Flickr, my Y! photo account got locked. ( It was mentioned in their FAQ that the transfer is a one time only operation). I shot a mail to Y! support ( with the least expectation ) on whether it is possible for them to reactivate my Y! account so that I could migrate to another photo service.<br /><br />I received a mail from their support within 2 days and was a positive reply :)<br /><blockquote><br /><span style="font-style: italic;">Hello,<br /><br />Thank you for writing to Yahoo! Photos.<br /><br />Thank you for contacting us regarding your attempted move of your Yahoo! Photos to another service. We are very sorry to hear that you are having problems moving to the affiliate of your choice.<br /><br />We have released your Yahoo! Photos account so that you can attempt the move again. If you are having continuing problems moving to the same affiliate, you may want to try moving to a different affiliate or simply downloading your images to your computer yourself and then manually uploading them to the affiliate of your choice.<br /><br />If you have any additional questions or concerns please let us know as soon as possible as we'd be more than happy to help!<br /><br /> -<br /><br />We appreciate your time in writing to us -- your input helps us to identify ways to help make this the easiest and most hassle-free way to transition all of your favorite photos to one of these great services above!<br /><br />Thank you again for contacting Yahoo! Photos.<br /><br />Regards,<br /><br />Jamie Lynn<br /><br />Yahoo! Photos Customer Care</span><br /></blockquote><br /><br />This kind of support is truly amazing and very much appreciated!<br /><br />PS: I moved my photos to <a href="">ShutterFly</a>. My new preference is Google Photos but sadly they don't provide "free" unlimited storage :(<div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>Rakesh Rajan + Custom URLsFor the web application that I was building using Struts, I needed custom ( read: cool ) URLs.<br /><br />Traditionally, struts URL would be of the format<br /><span style="font-weight: bold;"> </span><br />where <span style="font-style: italic;">do</span> is the action extension.<br /><br />But rather, I wanted a clean URL like <span style="font-weight: bold;"></span><br /><br />To achieve this, I had to do the following<br /> <br /> -> In my custom struts.xml, I added a line which overrides the default ActionMapper class <span style="font-style: italic;"><constant name="struts.mapper.class" value="customclass"><br /><br /> -> </constant></span>Write the custom class which should extend ActionMapper<br /><br />With a custom Mapper Class, the scope of formats of URL is limited only by imagination :P<br />To take up an example, lets say we want to have a "clean" search URL<br /> <span style="font-weight: bold;"><br /></span>To achieve the above, we need the following<br /><ul><li>Action class called <span style="font-style: italic;">Search</span> and corresponding function called <span style="font-style: italic;">getResults(). </span>This class also needs to implement ServletRequestAware to get hold of the ServletRequest ( which would contain the search parameters )<br /></li></ul><ul><li><span style="font-weight: bold;"></span><span style="font-style: italic;"><span style="font-style: italic;"><span style="font-style: italic;"></span></span></span>In the custom Action Mapper Class, we would need to use regex ( split on "/" ) to understand the URL and if we find the first token is "search", we could set</li></ul> <span style="font-style: italic;">actionMapping.setNamespace(namespace);</span><br /><span style="font-style: italic;"> actionMapping.setName("search");</span><br /><span style="font-style: italic;"> actionMapping.setMethod("getResults");<br /> request.setAttribute(SEARCH_KEY, searchParameters);<br /></span><br /><br />-XP<br /><span style="font-weight: bold;"></span><div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>Rakesh Rajan Live Search WTFCheck out the results for "<a href="">how long does it take to get a patent</a>" on MS Live Search ( roftl ).<br /><br />Looks like the live team is doing a great job in catching up google :P<div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>Rakesh Rajan Traffic Signal<span style="font-family: arial;">Encountered a weired traffic signal near RajajiNagar Entrance. Was wondering what to do next: "Jump" the signal or face the wrath of the vehicles behind me :P</span><br /><br /><a style="font-family: arial;" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="cursor: pointer;" src="" alt="" id="BLOGGER_PHOTO_ID_5033890196570198178" border="0" /></a><br /><br /><span style="font-family: arial;">PS: I had taken the pic using my phone ( Nokia 6270 ) without flash and while moving and hence the bad quality ;)</span><br /><br /><span style="font-family: arial;">-XP</span><div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>Rakesh Rajan Tree<div style="text-align: justify;"><span style="font-family:arial;">I am very bad at remembering detailed family relations and for that matter even names :P When ever I visit my native place (which is very rare) with my parents, my parents/I make sure that I get a "crash" course on all the relations again ;). So being a tech guy, I wanted a tech solution and hence entered the domain of family tree. For long I had be wanting to find a site that would really simplify creation of the family tree. I had tried various sites like </span><a style="font-family: arial;" href="">Ancestry</a><span style="font-family:arial;">, </span><a style="font-family: arial;" href=""><span class="blsp-spelling-error" id="SPELLING_ERROR_0">TribalPages</span></a><span style="font-family:arial;"> etc, but none of them allowed me an "easy" or "flexible" way to create a family tree :( Couple of hrs back, <span class="blsp-spelling-error" id="SPELLING_ERROR_1">Pavan</span> ( Friend and Colleague ), pinged me with the URL to this brand new "web 2.0 family tree"! </span><br /><br /><span style="font-family:arial;">It is called </span><a style="font-family: arial; font-weight: bold;" href=""><span class="blsp-spelling-error" id="SPELLING_ERROR_2">Geni</span></a><span style="font-family:arial;">( That is one short name!). The first thing that strike me was with the ease it allowed </span><a style="font-family: arial;" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 189px; height: 69px;" src="" alt="" border="0" /></a><span style="font-family:arial;"> me to create an account ( No email validation to start off! But then we need to validate later). The <span class="blsp-spelling-error" id="SPELLING_ERROR_3">UI</span> is flash based and mind blowing. The whole experience of creating family tree is taken to the next level. It was so much of fun that within 30<span class="blsp-spelling-error" id="SPELLING_ERROR_4"> mins</span> or so, I had added over 20+ members to my tree. The product is still in "beta" and has couple of privacy issues to be answered ( Like mother's maiden name being shown, access control etc). But in all, I really liked the whole user experience. I recommend that you take a shot at this.<br /><br /></span></div><span style="font-weight: bold;">A sample tree</span><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="cursor: pointer;" src="" alt="" id="BLOGGER_PHOTO_ID_5033713458665967762" border="0" /></a><br /><br /><span style="font-family:arial;">I also manage to find couple of other new Family Tree service<a href=""><br /></a></span><ol><li><a href=""><span class="blsp-spelling-error" id="SPELLING_ERROR_5">Famster</span></a></li><li><a href=""><span class="blsp-spelling-error" id="SPELLING_ERROR_6">Zooof</span></a></li></ol>But none the above has an <span class="blsp-spelling-error" id="SPELLING_ERROR_7">UI</span> that is as intuitive as the <span class="blsp-spelling-error" id="SPELLING_ERROR_8">Geni's</span> one<br /><br /><span style="font-weight: bold;">Update<br /></span><ul><li>Check out the way they <a href="">incite users</a> to complete the profile.</li></ul>-<span class="blsp-spelling-error" id="SPELLING_ERROR_9">XP</span><div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>Rakesh Rajan Callisto<span style="font-family:arial;">I had been using Eclipse 3.1 for a long time. So thought I should give 3.2 a shot at. Eclipse site <a href="">mentioned</a> about "Callisto".<br /><br /></span><span style="font-family:arial;">In their words<br /></span><span style="font-family:arial;"></span><blockquote><span style="font-family:arial;">Callisto is about improving the productivity of the developers working on top of Eclipse frameworks by providing a more transparent and predictable development cycle. By releasing </span><a style="font-family: arial;" href="">10 projects</a><span style="font-family:arial;"> at the same time, the goal is to eliminate uncertainty about version compatibility and make it easier to incorporate multiple projects into your environment.<br /></span></blockquote><span style="font-family:arial;"><br /></span><span style="font-family:arial;">The best part I like was the customization that it allows to choose features from various projects. Further googling landed me on <a href="">Yoxos</a>. They provide an <a href="">excellent UI</a> ( </span><span style="font-family:arial;"> Rich AJAX Platform</span><span style="font-family:arial;"> )create a custom Eclipse installation.<br /></span><ol><li>Allows to create/share custom scenarios ( Eclipse installation)</li><li>Allows to pick standard features and 3rd party plugins.</li><li>Provides Yoxos Install Manager (YIM) which is an eclipse based update mechanism.</li><li>Provides 3 months free subscription. After that YIM will not work.<br /></li></ol>I created a custom ( <span style="font-weight: bold;">warning</span>: 500 MB+ ) Eclipse Installation ( I added every plugin that I could think of:P ). It is accessible out <a href="">here</a>.<br /><br />Screenshots of features for my custom Eclipse<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="cursor: pointer;" src="" alt="" id="BLOGGER_PHOTO_ID_5033586971879100514" border="0" /></a><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="cursor: pointer;" src="" alt="" id="BLOGGER_PHOTO_ID_5033586976174067826" border="0" /></a><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="cursor:pointer; cursor:hand;" src="" border="0" alt=""id="BLOGGER_PHOTO_ID_5033587706318508162" /></a><br /><span style="font-family:arial;"><br /></span>-XP<div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>Rakesh Rajan<span style="font-family: arial;">Have been thinking a lot to make life more exciting. Decided to "try" to do 2 things from now on<br /></span><br /><span style="font-family: arial;">1) Wake up early and define a morning ritual ( Thanks to this nice read by </span><a style="font-family: arial;" href="">Steve Pavlina</a><span style="font-family: arial;">).</span><br /><span style="font-family: arial;"> I have been very erratic w.r.t to sleeping. These days I sleep at 2-3 and wake up at 11. I end up reaching office only around 2-3 :( Also alarm never helps :P. Need to bring some order back into my life. So decide to start waking up around 6. Yoga sounds a good idea.</span><br /><br /><span style="font-family: arial;">2) Start blogging ;)</span><br /><span style="font-family: arial;"> Set up my blogger account's template, widgets etc, yest night. Still not very happy about the template. Will change once I get hold of a nice template ( I really liked this </span><a style="font-family: arial;" href="">page </a><span style="font-family: arial;">look )</span><br /><br /><span style="font-family: arial;">-XP</span><div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>Rakesh Rajan Launched !!!<a href=""><span style="font-weight:bold;">Simility </span></a>Beta has finally been launched! <br /><br /> -> simility is the best way to find similar content on the web.<br /> -> simility toolbar recommends web pages that are similar to the page you are currntly viewing.<br /><br /><span style="font-weight:bold;">simility is easy</span><br /><br />The simility toobar is the fast, easy way to find great new content on the web. With just the click of the "recommend" button, we suggest new pages similar to the page you are currently viewing.<br /><br /><span style="font-weight:bold;">Better than search</span><br /><br />simility is better than search engines because the recommended pages are generated from other users. We only show you recommendations of sites that other users find interesting and helpful, so you don't need to spend time paging through hundreds of search results.<br /><br /><a href="">WikiPedia Article</a><div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>Rakesh Rajan LaunchedOne of our TU project : <a href="">BikeWala</a> has been launched.<br />It is one of the best places to buy motorbikes in India. It was launched 2 days back. So far it has got over 1000 hits!!! Have a look at the site<div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>Rakesh Rajan MapI was amazed at the way Google Maps works! It is way faster than Yahoo or MSN or MapQuest ( I think is equally fast, i like the zooming effect )<br /><br />Google Maps uses two built-in browser components:<br />-> XMLHttpRequest ( very famous, used in Google Suggest) and<br />-> XSLTProcessor<br /><br />Google Map communicates with the server to get the map tiles and search results<br />It basically gets 3 types of images from the server ( based on the selection)<br /><br />1) JPEG Image in case of satellite image<br />It sends a GET request to Keyhole Server 2.4 at kh.google.com<br />Sample Get request<br /><a href=""></a><br /><br />Value of<br /> v = No idea, seems to work with different values, maybe for some internal usage<br /> t = direction to be taken to get the desired JPEG image tile<br /><br /> <table style="width: 100px; height: 100px;" border="1" cellpadding="0" cellspacing="0"><br /> <tbody><tr><br /> <td><span style="font-weight: bold;">Q</span></td><br /> <td><span style="font-weight: bold;">R</span></td><br /> </tr><br /> <tr><br /> <td><span style="font-weight: bold;">T</span></td><br /> <td><span style="font-weight: bold;">S</span></td><br /> </tr><br /> </tbody></table><br /><br />-> t can have maximum 15 characters ( i.e 15 zoom levels)<br />-> suppose intially t = tqt ( so it will fetch 4 tiles, for top-left, top-right, bottom-left, bottom-right ) next time if t = tqtt , it means fetch the bottom-left tile and zoom it ( ie it will again fetch 4 more tiles )<br /><br />2) GIF Image in case of map image<br />It sends a GET request to Keyhole Server 2.4 at mt.google.com<br />Sample Get request<br /><a href=";y=-1033&zoom=3">;x=-2360&y=-1033&zoom=3</a><br />where,<br />v = No Idea<br />x = X coordinate of the tile to be fetched<br />y = Y coordinate of the tile to be fetched<br />zoom = zoom level ( 1 – 5 )<br />So it is possible to fetch the entire image database of Google maps, if we write a script which will fetch all image tiles for a given zoom level but obviously it is copyrighted!<br /><br />3) PNG Image is returned which is basically the route between 2 locations (It is a white image with the route displayed in blue) and it is superimposed onto the image tiles.<br /><br />It is again a GET request to Geocode/Map Server<br /><br />Sample request:<br /><a href=";path=%7BF%7BMGBGKZONVPVr@hA@@r@f@v@j@JFJJ?RQN?V?B?L?R@@?L?x@?ZA??JO?@LH??X%7DoR">{F{MGBGKZONVPVr@hA@@r@f@v@j@JFJJ?RQN?V?B?L?R@@?L?x@?ZA??JO?@LH??X}oR</a><br /><br />where,<br />width/height = width/height of the resulting PNG file.<br />Path = yet to figure out ( i think the encoded string should contain the source and destination location ???? )<br /><br />When we search for any city, say New York, it sends a HTTP GET request to mfe server. e.g.<br /><a href=";z=5&t=&f=q&output=js&hl=en">;z=5&t=&f=q&output=js&hl=en</a><br /><br />where,<br />q = name of the location<br />sll = latitude & longitude<br />sspn = span size<br />z = zoom level<br />the output of such a request is XML<br /><br /><?xml version="1.0"?><br /><page><br /><title>new york</title><br /><query>new york</query><br /><request><br /> <url><br /><br /> btnG=Search&<br /> sll=33.748889%2C-84.388056&<br /> sspn=0.102539%2C0.230971&amp;amp;amp;<br /> z=5&<br /> t=&<br /> f=q&<br /> hl=en&<br /> num=10<br /> </url><br /><query>new york</query><br /> </request><br /> <center lat="40.714167" lng="-74.006389"/><br /> <span lat="0.089988" lng="-0.118722"/><br /> <overlay panelStyle="/maps?file=gp&hl=en"><br /> <location infoStyle="/maps?file=gi&hl=en" id="A"><br /> <point lat="40.714167" lng="-74.006389"/><br /> <icon class="noicon"/><br /> <info><br /> <address><br /> <line><br /> New York, NY<br /> </line><br /> </address><br /> </info><br /> </location><br /> </overlay><br /> </page><br /><br /> Location of XSL files<br />1) panelStyle = <a href=""></a><br />2) infoStyle =<a href=""></a><br /><br /><br />I read somewhere that this kind of apps are called AJAX ( Advanced Javascript & XML )<br /><br />I personally think that Google might have pre-calculated routes between all locations ( if not all, atleast between important places). So when we try to find route between, say, A to B, Google would have precomputed it and would be in memory. So it can retrieve the path and send it back to the client.<br /><br />One addition what Google could probably add is Traffic so that route displayed is the route with least traffic ( As I assume they have precomputed routes and put it in memory, assume 5 routes are there between A to B, all it has to do is to add weight ( traffic density) at run-time to all the 5 routes and return the route with the least traffic.<br /><br />This is all I could manage to figure out in 2 hrs. Will post, once I find out more .<div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>Rakesh Rajan | http://feeds.feedburner.com/rakeshxp | crawl-002 | en | refinedweb |
Journal tools |
Personal search form |
My account |
Bookmark |
Search:
...bean chili chicken
schedule iii controlled substance
counter strike source sound mods
got me wrong tabs
u.s. army training center virginia fort
call duty keygen
mobile phone virus cabir
sister of caligula
ebay find item sell where
picture seedling vegetable
home mobile nh sale
caregiver stress scale
michael smith video w
modello e111
type of wheel and axle... ...
...aksar download mp3 song
apnay mp3.com
dell 4100 mp
claimsearch iso
winters formal wear seattle
hosting linux qoclick.com web
uk and us citizenship
mobile phone virus cabir
hyatt beach house key west
juvenile diabetes research foundationbraclets
1993 volkswagon jetta vr6 photos
match point woody allen review
activation code corel draw
new jersey turnpike toll ...
...
magic old navy
kazaa spyware security privacy
social security benefits taxable income
disney restaurant menu
restaurant with fun atmosphere
ryan phillippes wife
phone virus cabir
4g63 block short
b747 material training
wortley high school leeds
wheat ridge high school in colorado
gas mississippi shortage
montgomery mall store directory
pennsylvania ...
....8 crack
black family in place womens
win 2000 startup config
bushtown hotel coleraine
break fall heel spring summer winter
game lunch rush snow
cell phone virus cabir
aristocrats joke sundance
1995 dodge avenger body kits and accessories
cityscape bar and restaurant
lakeside primary school cardiff
german bank note
1996 saleen s281 supercharger intake
import ...
... retailers association
kemmerer wyoming newspaper
1996 chevy cavalier z24
south pacific study
american white house
computer games for pc
display violin wall
mobile phone virus cabir
sony dru500a support
king kong motion picture screenplay
montana hunting information
honda civic 1993 ex
bbc teletext online
epiphone elitist dot
dental schools in south ...
...eselpsy hitz.de id in.php torrent
black lesbian fucking each other
patent bar exam results
transfer money from uk to australia
download free mp3 songs english
cabir mobile phone virus source code
cart ecommerce free shopping
columbus newspaper republic
bulgarian weddings
power system blockset
1998 component earlier linkage model steering
sony ericson phone camera...
.. ...
Скачать Cabir
Result Page:
1
for Cabir | http://www.ljseek.com/Cabir_s4.html | crawl-002 | en | refinedweb |
TrotterCashion - Home tag:trottercashion.com,2009:mephisto/ Mephisto Drax 2009-05-04T22:52:18Z trotter tag:trottercashion.com,2009-05-04:62 2009-05-04T22:38:00Z 2009-05-04T22:52:18Z Ruby Style attr_reader and attr_writer in JavaScript <p>So I’m playing around with a <a href="">JavaScript <span class="caps">URI</span> parsing library</a> right now, and decided it would be fun to implement Ruby’s attr_reader and attr_writer in JavaScript. It turned out to be pretty simple, with the only tricky part being dealing with the capturing the current value of a variable in my closure.</p> <p>Check it out:</p> <pre class="javascript"> var attrReader, attrWriter, private; private = {}; attrReader = function () { var i, anon, methods; methods = arguments; for (i = 0; i < methods.length; i += 1) { anon = function () { var j = i; that[methods[j]] = function () { return private[methods[j]]; }; }(); } }; attrWriter = function () { var i, anon, methods; methods = arguments; for (i = 0; i < methods.length; i += 1) { anon = function () { var name, method; name = methods[i]JSLint</a> to ensure my code is good. Unfortunately, JSLint will go nuts when you run it over a Screw.Unit test file. Thankfully, the fix is as simple as adding an extern declaration at the top. The extern tells JSLint that the following variable names are defined outside the current file. For Screw.Unit, the following extern declaration worked:</p> <pre class="javascript"> /*extern Screw, describe, it, expect, equal, before */ </pre> trotter tag:trottercashion.com,2009-02-12:57 2009-02-12T23:55:00Z 2009-07-14T13:27:11Z Custom ArgumentMatchers in rSpec >So first, a word of warning, in rSpec 1.1.12 ArgumentMatchers are called ArgumentConstraints. Replace all occurrences of <code>ArgumentMatchers</code> below with <code>ArgumentConstraints</code>, unless you’re on rSpec edge.</p> <p>With that disclaimer out of the way, open your <code>spec_helper.rb</code>, where we’re going to add an <code>ArrayIncludingMatcher</code> to the <code>Spec::Mocks::ArgumentMatchers</code> namespace. We will define an <code>initialize</code> method to take and store the expected value, and <code>==</code> method to compare against the actual value, and a <code>description</code> method to print out a handy description when the test fails.</p> <pre class="ruby"> module Spec module Mocks module ArgumentMatchers class ArrayIncludingMatcher # We'll allow an array of arguments to be passed in, so that you can do # things like obj.should_receive(:blah).with(array_including('a', 'b')) def initialize(*expected) @expected = expected end # actual is the array (hopefully) passed to the method by the user. # We'll check that it includes all the expected values, and return false # if it doesn't or if we blow up because #include? is not defined. def ==(actual) @expected.each do |expected| return false unless actual.include?(expected) end true rescue NoMethodError => ex return false end def description "array_including(#{@expected.join(', ')})" end end # array_including is a helpful wrapper that allows us to actually type # #with(array_including(...)) instead of ArrayIncludingMatcher.new(...) def array_including(*args) ArrayIncludingMatcher.new(*args) end end end end </pre> <p>Note that we also defined an <code>array_including</code> method as the readable wrapper. For symettry’s sake, we should also define <code>ArrayNotIncludingMatcher</code>, which I’ve included in the following gist. Feel free to copy this matcher, but I’d love to see you guys creating your own. Leave links to gists in the comments if you come up with anything fun!</p> <p><strong>Update:</strong> I had to remove the embedded gist because it was hanging the page when github was down. Check out the link if you can: <a href=""></a>.</p> trotter tag:trottercashion.com,2008-12-13:53 2008-12-13T17:43:00Z 2008-12-13T17:48:58Z Data Urls and document.domain <p>Well this is a bummer. It turns out that all data urls share a common domain of ””. This is a problem in <span class="caps">HTML5</span>, because access to sqlite databases is based on the document.domain (This is true in safari at least). Therefore, all data urls will share a common sqlite db environment, meaning that a data url from Google,could look in the database created by a Yahoo data url, given that they were able to guess the name of the database. Since I see data urls as a better way to do offline web apps than google gears, this is a problem that pains me. Does anyone know if there is a solution?</p> <p>My main thought on how to fix this would be to require the domain for any data url that is a target of an link be set to the domain of the linker. The same would go for any data url that is loaded via a src=””, but this shouldn’t matter as all scripts use the document domain and not their own domain for security purposes. In cases where this is no linker, data urls get their domain set to an md5 hash of their data. Anyone see any problems with this solution?</p> <p>If you don’t know what data urls are, check out my <a href="">previous post</a>.</p> trotter tag:trottercashion.com,2008-12-12:48 2008-12-12T13:03:00Z 2008-12-12T13:04:13Z Data Urls Are Fun! <p>Lately I’ve been playing with data urls in an effort to use them as an alternative way to build iPhone apps. W. Clawpaws wrote an interesting <a href="">post</a> on this a year ago, but it seems that not much has been done since. If it’s possible, I plan to have a simple library built by end of year that will allow you to write data url apps that connect to a central server when available. Basically, it’ll make the persistence and speed arguments for writing native apps null and void.</p> <p>Anyway, enough of what I plan to do and more about actual data urls. “Data urls” allow you to store a single image, some javascript, or even an entire web page in a url. The browser will then render that information as if it were pulling it from a normal http:// url. So they will increase the initial payload of your web page, but result in faster interactions once the page is loaded.</p> <p>Data urls have a format of</p> <pre>data:[<MIME-type>][;iPhone on Rails</a><object height="355" width="425"><param /><param /><param /><embed src=";stripped_title=iphonerails-presentation" height="355" width="425"></embed></object><div>View SlideShare <a href="" title="View iPhone on Rails on SlideShare">presentation</a> or <a href="">Upload</a> your own. (tags: <a href="">rails</a> <a href="">iphone</a>)</div></div> trotter tag:trottercashion.com,2008-12-10:43 2008-12-10T16:17:00Z 2008-12-10T16:17:37Z iPhone on Rails <p>I’m a little late to this party, but I’m speaking at <a href="">philly.rb</a> tonight. The talk is about making rails backed iPhone apps, both web and native. It should be a raucous good time, so come on over if you’re near Philly.</p> <p>As an aside, there’s a chance that this will be taped or that I’ll actually upload my slides for once. That said, if you miss the talk, you’re probably shit out of luck for my valuable knowledge.</p> trotter tag:trottercashion.com,2008-10-01:39 2008-10-01T23:17:00Z 2008-10-01T23:39:57Z iPhone Resources <p>I’m sure you’ve heard by now that the <a href="">iPhone <span class="caps">NDA</span> has been lifted</a>. This is great news for those of us that pretend to be iPhone developers, because we’re going to start seeing a lot more resources at our fingertips.</p> <p>I’m going to start keeping a moderated page of <a href="">iPhone development resources</a> including links to blog posts, books, and maybe even podcasts. If you see any blog posts (I’m sure there’ll be tons in the next few days). Let me know and I’ll add them to the list.</p> trotter tag:trottercashion.com,2008-10-01:38 2008-10-01T23:10:00Z 2008-10-01T23:11:49Z Mocking Screw-Unit Part Deux <p>I <a href="">wrote earlier</a> about how <a href="">Topper</a> <a href="">mocked out the dom</a> for screw-unit testing. Taking his lead, I started playing with screw-unit and adding some mocking and stubbing in the <a href="">rspec</a> way. It’s not quite release worthy, but it’s on github now and I think it’s nearly usable. Basically, it lets you do things like this:</p> <pre class="javascript"> user = {login: 'bob'}; Screw.Stub.stub(user, 'login').andReturn('nancy'); user.login; // => 'nancy' Screw.Stub.reset(); // Called automatically after each spec user.login; // => 'bob' // Will throw a spec failure if user.email() is never called. Screw.stub.shouldReceive(user, 'email'); </pre> <p>Obviously shouldReceive is not quite complete. It’s missing with(), numberOfTimes(), and other things. Still, it’s good enough that others can start iterating on the model I’ve laid down. As I said earlier, my <a href="">fork of screw-unit</a> is available on github now, so have a look and feel free to leave questions in the comments.</p> trotter tag:trottercashion.com,2008-09-27:33 2008-09-27T23:33:00Z 2008-10-14T14:10:17Z git-bisect Is Your New Best Friend <p>To anyone not using <a href="">git</a>, jump to the bottom of the post then come back up.</p> <p><strong>Update:</strong> I’ve got an even faster method at the bottom now. Skip down there if you already know the basics of git-bisect.</p> <p>Ok, now let’s move on to the cool shit, <a href="">git-bisect</a>. Git-bisect helps you figure out exactly what code change broke a feature in your app, even when that code change was made months ago. It works by assisting you in a binary tree search through your commits, pausing at each one so that you can run a test and mark that commit as good or bad. This can remarkably decrease the amount of time you spend trying to figure out what is causing a new bug, because you quickly can find the exact code change that introduced it.</p> <p>To use git-bisect, you first need a good test to run. Though you <em>could</em> do a manual test like loading a page in your browser and verifying that things look correct, you will be much happier if you write an <strong>automated test</strong> that you can run for each commit. Since I usually live in Ruby land, I’m fairly partial to <a href="">TestUnit</a> and <a href="">rSpec</a> for my automated tests. If you’re in iPhone land, I strongly recommend using <a href="">google-toolbox-for-mac</a>.</p> <p>With automated test in hand, you can kick off git-bisect with <code>git bisect start</code>. You then mark your current commit as bad using <code>git bisect bad</code>. You then checkout a known good commit using <code>git checkout commit_hash</code>. Run your test and mark it as good when it passes using <code>git bisect good</code>. At this point, git-bisect takes over and starts moving you through commit after commit. At each stop, you run your test and then mark the commit using either <code>git bisect bad</code> or <code>git bisect good</code>. At the end, git-bisect will tell you which commit first caused your error. You can then use <code>git diff commit_hash</code> to see what was changed in that commit. When you’re done, you run <code>git bisect reset</code> to set everything back to normal.</p> <p>A typical git-bisect session looks somewhat like this:</p> <pre> (master) $ git bisect start (master|BISECTING) $ git bisect bad (master|BISECTING) $ git checkout eb5eecbb8fc4e2a964e8d2043d8b95f4eb7b563a HEAD is now at eb5eecb... Add MainViewController ... run test which passes ... (eb5eecb...|BISECTING) $ git bisect good Bisecting: 3 revisions left to test after this [d82c1595b6363484fe0d7f60f9ffa096d777bf17] First CompsView test ... run test which fails ... (d82c159...|BISECTING) $ git bisect bad Bisecting: 1 revisions left to test after this [93af33167019fa039f5372dff602a76cbcbc99bb] Add first integration test ... run test which passes ... (93af331...|BISECTING) $ git bisect good Bisecting: 0 revisions left to test after this [4f12091a287c363737ceb650df46196e5008d3f2] Add Comps target ... run test which fails ... (4f12091...|BISECTING) $ git bisect bad 4f12091a287c363737ceb650df46196e5008d3f2 is first bad commit commit 4f12091a287c363737ceb650df46196e5008d3f2 Author: Trotter Cashion <cashion@example.com> Date: Tue Sep 23 20:18:09 2008 -0400 Add Comps target :000000 100644 0000000000000000000000000000000000000000 789bf7877c6059a7f3ac8cb2b53fdb2c903e58ff A Comps-Info.plist :040000 040000 d260571a48328d4a575a7395cd6ece3d651a93ac a622a23fdb80c915eaba49d1d53f7bf0dbf44a70 M ShootAndSpeak.xcodeproj ... Figure out what's wrong ... (4f12091...|BISECTING) $ git bisect reset Switched to branch "master" </pre> <p>I hope you learn to use and love git-bisect. It’s really helped me when trying to find the cause of nasty bugs that seemingly came out of nowhere.</p> <h3>Update</h3> <p>The above is too much work. While searching the <a href="">net</a>, I found something even easier and faster. You can start git-bisect with the commit hashes like so <code>git bisect start bad_commit good_commit</code>. Even better, you can then tell git-bisect to run the tests itself… this is where things get awesome: <code>git bisect run some_test</code>. It’ll iterate through your commits until it finds the bad one. Checkout the sample session below.</p> <pre> /tmp/fake(master) $ git bisect start 68d5ab7a61a871fd097d8820e248cfd168395e4e 20cbc038973d6c78805bc8bfc3d187c2b537f183 Bisecting: 1 revisions left to test after this [3da37ed0ee87c9129a61142ecefef17ab0de7f0f] Test works /tmp/fake(3da37ed...|BISECTING) $ git bisect run testrb test/unit/some_test.rb -n test_truth running testrb test/unit/some_test.rb -n test_truth Loaded suite some_test.rb Started . Finished in 0.000338 seconds. 1 tests, 1 assertions, 0 failures, 0 errors Bisecting: 0 revisions left to test after this [765d7e5c4eba730078907fc00121b8b35ada64b0] Test fails running testrb test/unit/some_test.rb -n test_truth Loaded suite some_test.rb Started F Finished in 0.009607 seconds. 1) Failure: test_truth(ThisTest) [./test/unit/some_test.rb:5]: <false> is not true. 1 tests, 1 assertions, 1 failures, 0 errors 765d7e5c4eba730078907fc00121b8b35ada64b0 is first bad commit commit 765d7e5c4eba730078907fc00121b8b35ada64b0 Author: Trotter Cashion <cashion@example.com> Date: Sun Sep 28 13:13:09 2008 -0400 Test fails :040000 040000 167dd04f2b4101ea256a7a6525859bc03e5433d3 0b062d4b5b03e7ac51ac4050fc7397c8983a2f13 M test bisect run success /tmp/fake(765d7e5...|BISECTING) $ </pre> <p>As you can see above, I’m using ruby for this set of tests. My preferred method is to have git-bisect run testrb (or spec) and specify a single test for it to execute. This ensures that everything runs quite quickly.</p> <h3>Land Here!</h3> <p>If you jumped here from the top, I regret to inform you that this post will make you very sad. Turn back now before you’re stuck improving your life by <a href="">installing git</a>. Once you’ve installed git, go check out <a href="">Peepcode</a>. They’ve got a really good <a href="">screencast</a> and <a href="">pdf</a> that explain git excellently.</p> trotter tag:trottercashion.com,2008-08-08:31 2008-08-08T11:35:00Z 2008-08-08T11:36:11Z Mocking Screw-Unit <p>I’m big on tests. Unit testing helps me clarify my thinking on problems and ensure that my code works well. When writing tests, it’s essential to have a good mocking framework to separate the things you are testing from the things you are not. In Ruby, I like using flexmock for Test::Unit and rSpec’s built in mocking framework when using it. In Javascript though, screw-unit doesn’t really come with a way to mock by default. (As an aside, screw-unit totally rocks for testing js.)</p> <p>Thankfully, my coworker <a href="">Topper</a> (who’s a kickass dev, btw), has been playing around with adding mocking to screw-unit. He’s got a <a href="">fork</a> on github, docs at the previous link, and a quick <a href="">example blog post</a>. Click through and check this shit out, cause it’s hot.</p> trotter tag:trottercashion.com,2008-07-07:27 2008-07-07T17:19:00Z 2008-07-07T17:21:30Z Floating Pain <p><a href="">Topper</a> mentioned a tweet he saw to me in which someone asked why <code>(4.6 * 100).to_i #=> 459</code>. Though this seems like a ruby bug, it’s really just one of the annoying things you hit with rounding errors and floats. At issue is that <code>#to_i</code> floors the float, instead of rounding it. Since the value may be approximated at 459.999999, the <code>#to_i</code> floors it to 459. To have things work like you’d expect, use <code>#round</code> when converting <code>Float</code> to <code>Fixnum</code>. See below for some code examples:</p> <pre class="ruby"> 4.6.to_i # => 4 4.6.round # => 5 (4.6 * 100).to_i # => 459 (4.6 * 100).round # => 460 </pre> trotter tag:trottercashion.com,2008-06-27:26 2008-06-27T03:06:00Z 2008-06-27T03:07:05Z How I Got Started Programming <p><a href="">Paul</a> says I’ve got to do this, and I don’t want to let him down. <a href="">Giles</a> tagged him first, so you should probably read his too.</p> <h3>How old were you when you started programming?</h3> <p>In third grade (when I was 8) I started started taking super nerd math classes with other super nerds. As part of those classes, they had us programming a turtle to draw things on the screen. <a href="(programming_language">Logo</a>) was totally awesome and had me hooked on the magic of programming.</p> <h3>How did you get started programming?</h3> <p>After Logo, my dad (he was a <span class="caps">CTO</span> at the time) bought me Visual Studio and a few books on Visual Basic. It was lots of “Teach Yourself X in X days”, and I ran through VB, C++, and a little Delphi. Naturally, those books didn’t teach me to actually be good, though I did figure out how to make a few small games that I could play.</p> <p>My dad was a <span class="caps">CTO</span> at an investment bank, which is the kind of place that treats a <span class="caps">CTO</span> like crap. I didn’t want to be the guy that got shat on, so in high school I dropped programming and started learning businessy things. I even picked my college based on the strength of its business school. Once I showed up, I realized that I didn’t like anyone at the business school, that philosophy was fun, and that math/econ could make me money. I promptly switched my major.</p> <p>After college, I went to work at a job that I ended up hating, quit it, took a few months to figure out my life, and realized that I really loved programming. Thankfully I got lucky and read a <a href="">blog post</a> that tipped me to the beta book of <span class="caps">AWDWR</span>, which taught me a lot about <strong>real</strong> programming. I consider that the start of me becomming a real programmer, and not just some kid that can code.</p> <h3>What was your first language?</h3> <p>Logo! Drawing with turtles rocks so hard. After that it was VB, which let me push Windows around and made me a little cash.</p> <h3>What was the first real program you wrote?</h3> <p>I wrote my first useful program while working as an intern at a financial services firm. The company was using <a href="">Axys</a> (really bad website alert!) for portfolio analysis and had a tedious process for reconciling their branches with the back office. I wrote a VB program that helped them to perform these reconciliations more quickly, which I hear they still use.</p> <h3>What languages have you used since you started programming?</h3> <p>Logo, VB, C, C++, Delphi, Ruby, Objective-C, Erlang, Scheme, Javascript, Java, and maybe something else. Of those, I’d feel comfortable working on a project using Ruby, Javascript, Objective-C, or Erlang. I’m skilled enough in some of the others, but have vowed to never use them again. I’ll let you guess which.</p> <h3>What was your first professional programming gig?</h3> <p>2005 at the Nathan Kline Institute for Mental Health. There was a PhD there who needed <a href="">ImageJ</a> to talk to his microscope over a serial port and to have a lot of old scripts from ObjectImage translated into ImageJ. It was a fun job that let me work at my own pace and play a lot with the art of programming.</p> <h3>If there is one thing you learned along the way that you would tell new developers, what would it be?</h3> <p>Surround yourself with great people, and never be the smartest guy in the room. If you’re lucky enough to work at a company with some great programmers, you’ll learn a whole lot that way. If your company is full of 9-5 coders, join a local developer group or start your own. <a href="">Nyc.rb</a> and <a href="">Philly On Rails</a> totally rock, so you could always move to New York or Philly and learn from some of the best.</p> <h3>What’s the most fun you’ve ever had programming?</h3> <p>Logo. I used to love making that little turtle draw all sorts of fun things on the screen. There was no real need to make the turtle do things, I was doing it just for the joy of it. I managed to recreate some of that feeling when working on <a href="">spec-unit</a>, which is really my only useful open source contribution to date. Unfortunately, it only has one release ever, and I haven’t messed with it in two years.</p> trotter tag:trottercashion.com,2008-06-22:5 2008-06-22T22:23:00Z 2008-06-22T22:44:05Z On Working Remotely <p>Being on my honeymoon in St. Barths has got me thinking a bit about working remotely. Don’t worry, I’m not working on this trip. I’m more trying to think if ways I could stay on this trip indefinitely, but still manage to make some cash.</p> <p>I’m no stranger to remote work. As some of you may know, I live in Philly but continue to work for <a href="">Motionbox</a> in New York. I commute two days per week, and spend the rest working from home in Philly. Over the past year, I’ve started to catalog my likes and dislikes with this arrangement, and I’m going to list some of my dislikes here along with some possible ways to improve things. For now, we’ll just look at the first problem I’ve encountered:</p> <h3>You’re out of touch</h3> <p>If you’re distributed while the rest of the team is collocated, you <strong>will</strong> be out of the loop. When your boss is walking around the office and stopping at various desks, he won’t be stopping at yours. If you’re looking to be recognized for your accomplishments, this can be a major problem. It’s difficult to advance in the company if you’re not visible.</p> <p>To combat this problem, I’ve found getting everyone on IM and <span class="caps">IRC</span> to be very helpful. If your company uses an open office, you’re in luck. The noise from the floor plan typically causes people to use headphones, so they’ll be much more prone to use IM and <span class="caps">IRC</span> for all their communication (even with those people right next to them). Another good technique is to send copious amounts of email. If people are cataloging what should be done and who has done what through email (a good practice regardless), then it’s much easier for you to keep track of what is happening in the office.</p> <p>I don’t think that Skype or frequent phone calls help much in this regard. Typically, you’re only talking to one person and all the speaker phone arrangements I’ve seen aren’t that great. Voice is great for quickly hashing out the details of a plan with one other person, but is terrible as a mechanism for keeping up with the goings on of the company.</p> <p>Making time (and spending the money) to get to the office at least once a month is invaluable. Though email, IM, and <span class="caps">IRC</span> help, they’re not a real substitute for quality time in person. One of the most important things I’ve done at Motionbox is to know when people are going out for after work drinks to celebrate various accomplishments and made sure that I was able to be in town for them. Though it sounds silly to talk about drinking as an important part of work, the main thing you miss by being remote is the social component. It’s much more important to get in town to socialize than it is to do actual work. You’ll have plenty of time to work when you’re home alone the next day.</p> <p>Anyone have any thoughts on other ways to keep in touch while working remotely?</p> trotter tag:trottercashion.com,2008-06-11:20 2008-06-11T02:30:00Z 2008-06-11T02:46:16Z BelongsToDemeter <p>While playing with Rails the other day, I thought it would be fun if you could get at attributes of a belongs_to association without having to do the whole traverse association and check for nil thing.</p> <pre class="ruby"> # Something like... @person.group_name # => "Pizza Fans!" || nil # Instead of... @person.group ? @person.group.name : nil # => "Pizza Fans!" || nil </pre> <p>Thinking this would be a fun chance to play with some meta, I threw together <a href="">BelongsToDemeter</a>, which you can find over on <a href="">GitHub</a>. It's a rails plugin, but don't expect it to actually install using script/plugin. The code is complete and utter crap, so it's probably best that Rails won't install it. It is slow, and most likely prone to error. Still, it's a fun little thought experiment, and I may decide to clean it up then speed it up if someone tells me they like it.</p> <p>It does what I explained above and also lets you do fun things like this, which I think are useful when assigning associations through a form:</p> <pre class="ruby"> # Lookup the user 'Bob' by login and assign # it to the user association @character.user # => nil @character.GitHub</a> and check out <a href="">BelongsToDemeter</a>. When you're done, let me know if you like the concept. After all that, go erase all memory of the implementation details from your mind, they're ugly.</p> | http://feeds.feedburner.com/trottercashion | crawl-002 | en | refinedweb |
The Storage Team Blog about file services and storage features in Windows Server, Windows XP, Windows Vista and Windows 7.
This is a common topic in the DFS_FRS newsgroup. Customers will describe how some users are unexpectedly denied access to targets in the namespace whereas other users can access the targets without problems. Customers also ask whether there are DFS permissions somewhere that must be adjusted. The answer is that DFS clients will respect the combination of NTFS and share permissions set on the particular target the client is trying to access. Inconsistent access is often caused by the following configurations:
So if your users have unexpected access problems, check the share and NTFS permissions for all targets as described above. Also, when setting NTFS permissions, always use the path of the physical folder (\\servername\sharename) instead of navigating through the DFS namespace to set permissions. This is especially important when you have multiple folder targets for a given folder. Setting permissions on a folder by using its DFS path can cause the folder to inherit permissions from its parent folder in the namespace. In addition, if there are multiple folder targets, only one of them gets its permissions updated when you use the DFS path. KB article 842604 also covers this recommendation.
And finally, for any admins out there whose users are using Office 2000 against a domain-based namespace, I recently helped a newsgroup reader solve a problem that we thought was permissions-related due to the inconsistent access problems, but in fact the problem was something else entirely. See articles 272230 and 294687 for details.
--Jill | http://blogs.technet.com/b/filecab/archive/2006/08/09/444825.aspx | CC-MAIN-2013-20 | en | refinedweb |
Sometime performing Azure subscription level operation using Windows Azure Management API. So to authenticate WCF Service against Windows Service subscription you need to provide the certificate.
Essentially there are three steps involved in this process,
- Read X.509 Certificate file (.cer) from AZURE BLOB.
- Create X.509 certificate from the downloaded file from Azure BLOB.
- Pass the created certificate as part of request to authenticate.
Read Certificate file from Windows AZURE BLOB storage as byte array
In above code snippet, we are reading certificate file from BLOB as an array of byte data. You need to add reference of Microsoft.WindowsAzure and Microsoft.WindowsAzure.StorageClient . Container Name is name of your public container.
Create X.509 certificate
Once you have byte array form Azure BLOB, you can create X.509 certificate to be authenticated using the byte array as below,
Pass the Certificate to authenticate
Here while making call you can add certificate created from AZURE BLOB file .
For your reference full source code is as below,
using System; using System.IO; using System.Linq; using System.Net; using System.Security.Cryptography.X509Certificates; using System.Xml.Linq; using Microsoft.WindowsAzure; using Microsoft.WindowsAzure.StorageClient; namespace ConsoleApplication26 { class Program { static void Main(string[] args) { CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse("DataConnectionString"); CloudBlobClient cloudBlobClient = cloudStorageAccount.CreateCloudBlobClient(); CloudBlobContainer cloudBlobContainer = cloudBlobClient.GetContainerReference("ContainerName"); CloudBlob cloudBlob = cloudBlobContainer.GetBlobReference("debugmode.cer"); Console.WriteLine(cloudBlob.DownloadText()); Console.ReadKey(true); byte[] byteData = cloudBlob.DownloadByteArray(); X509Certificate2 certificate = new X509Certificate2(byteData); var request = (HttpWebRequest)WebRequest.Create(""); request.Headers.Add("x-ms-version:2009-10-01"); request.ClientCertificates.Add(certificate);); } static public byte[] ReadToEnd(System.IO.Stream stream) { long originalPosition = stream.Position; stream.Position = 0; try { byte[] readBuffer = new byte[4096]; int totalBytesRead = 0; int bytesRead; while ((bytesRead = stream.Read(readBuffer, totalBytesRead, readBuffer.Length - totalBytesRead)) > 0) { totalBytesRead += bytesRead; if (totalBytesRead == readBuffer.Length) { int nextByte = stream.ReadByte(); if (nextByte != -1) { byte[] temp = new byte[readBuffer.Length * 2]; Buffer.BlockCopy(readBuffer, 0, temp, 0, readBuffer.Length); Buffer.SetByte(temp, totalBytesRead, (byte)nextByte); readBuffer = temp; totalBytesRead++; } } } byte[] buffer = readBuffer; if (readBuffer.Length != totalBytesRead) { buffer = new byte[totalBytesRead]; Buffer.BlockCopy(readBuffer, 0, buffer, 0, totalBytesRead); } return buffer; } finally { stream.Position = originalPosition; } } public class HostedServices { public string serviceName { get; set; } } } }
I hope this post was useful. Thanks for reading
Follow @debug_modeFollow @debug_mode
Are you sure this is going to work? For the Azure Management API’s you need the certificate AND the private key, so you should use a .pfx file (which unfortunatly is protected with a password).
Your scenario works on your local machine, because the private key happens to be installed there. Does it work in the cloud?
Hi Erik ,
You are very right. This solution would not work on Cloud. if you move above idea to cloud then you will have to provide PFX along with Password while making call | http://debugmode.net/2011/08/15/creating-x-509-certificate-from-windows-azure-blob/ | CC-MAIN-2013-20 | en | refinedweb |
hugin1/ptbatcher/ProjectArray.h File ReferenceBatch processor for Hugin. More...
#include <wx/dynarray.h>
#include <wx/string.h>
#include "panodata/PanoramaOptions.h"
#include <wx/log.h>
#include "PT/Panorama.h"
#include "base_wx/platform.h"
Include dependency graph for ProjectArray.h:
This graph shows which files directly or indirectly include this file:
Go to the source code of this file.
Detailed DescriptionBatch processor for Hugin.
- Author:
- Marko Kuder <marko.kuder@gmail.com>
- Id
- ProjectArray | http://hugin.sourceforge.net/docs/html/ProjectArray_8h.shtml | CC-MAIN-2013-20 | en | refinedweb |
The .NET framework provides two namespaces, System.Net and System.Net.Sockets for network programming. The classes and methods of these namespaces help us to write programs, which can communicate across the network. The communication can be either connection oriented or connectionless. They can also be either stream oriented or data-gram based. The most widely used protocol TCP is used for stream-based communication and UDP is used for data-grams based applications. The System.Net.Sockets.Socket is an important class from the System.Net.Sockets namespace. A Socket instance has a local and a remote end-point associated with it. The local end-point contains the connection information for the current socket instance. There are some other helper classes like IPEndPoint, IPADdress, SocketException etc, which we can use for Network programming. The .NET framework supports both synchronous and asynchronous communication between the client and server. There are different methods supporting for these two types of communication. A synchronous method is operating in blocking mode, in which the method waits until the operation is complete before it returns. But an asynchronous method is operating in non-blocking mode, where it returns immediately, possibly before the operation has completed. Dns classThe System.net namespace provides this class, which can be used to creates and send queries to obtain information about the host server from the Internet Domain Name Service (DNS). Remember that in order to access DNS, the machine executing the query must be connected to a network. If the query is executed on a machine, that does not have access to a domain name server, a System.Net.SocketException is thrown. All the members of this class are static in nature. The important methods of this class are given below. public static IPHostEntry GetHostByAddress(string address)Where address should be in a dotted-quad format like "202.87.40.193". This method returns an IPHostEntry instance containing the host information. If DNS server is not available, the method returns a SocketException. public static string GetHostName()This method returns the DNS host name of the local machine.In my machine Dns.GetHostName() returns vrajesh which is the DNS name of my machine. public static IPHostEntry Resolve(string hostname)This method resolves a DNS host name or IP address to a IPHostEntry instance. The host name should be in a dotted-quad format like 127.0.01 or.
IPHostEntry classThis is a container class for Internet host address information. This class makes no thread safety guarantees. The following are the important members of this class. AddressList property
Gives an IPAddress array containing IP addresses that resolve to the host name. Aliases propertyGives a string array containing DNS name that resolves to the IP addresses in AddressList property. The following program shows the application of the above two classes. using System;using System.Net;using System.Net.Sockets;class MyClient{public static void Main(){IPHostEntry IPHost = Dns.Resolve();Console.WriteLine(IPHost.HostName);string []aliases = IPHost.Aliases;Console.WriteLine(aliases.Length);IPAddress[] addr = IPHost.AddressList;Console.WriteLine(addr.Length);for(int i= 0; i < addr.Length ; i++){Console.WriteLine(addr[i]);}}} IPEndPoint Class This class is a concrete derived class of the abstract class EndPoint. The IPEndPoint class represents a network end point as an IP address and a port number. There is couple of useful constructors in this class: IPEndPoint(long addresses, int port)IPEndPoint (IPAddress addr, int port) IPHostEntry IPHost = Dns.Resolve(); Console.WriteLine(IPHost.HostName); string []aliases = IPHost.Aliases; IPAddress[] addr = IPHost.AddressList; Console.WriteLine(addr[0]); EndPoint ep = new IPEndPoint(addr[0],80); Conclusion We already discussed about the basics of network programming with C#. We will discuss about the System.Net.Socket class and how we can use Socket class to write network programming in the coming articles.
Simple Sniffer in C#
Network Programming in C# - Part 2
Sir, i m appearing M. Sc Comp Sci in Pune uni. Im doing Network Monitoring system in C#...
for assistance, Can i get source code for accessing clients info in network....
Why we uuse using System in C sharp? Is it directories or header files?
I would suggest to use xf.server api for writing high performance network applications in .NET
As an example take a look at the code samples:
I have error in programming!
so I need your asistance.
desciption:
is there any working source code for sample... :) | http://www.c-sharpcorner.com/UploadFile/rajeshvs/NetworkProgramPart111182005034832AM/NetworkProgramPart1.aspx | CC-MAIN-2013-20 | en | refinedweb |
09 June 2006 05:08 [Source: ICIS news]
SINGAPORE (ICIS news)--Asian aromatics prices rebounded on Friday morning after dropping sharply the previous day, with the falls seen as overdone, traders said.
The market has been volatile since refinery problems in the US late last week sparked a surge in prices there, which subsequently spilled over to Asia and Europe.
Early week trades saw benzene and toluene prices climbing up to the mid-$1000s/tonne FOB ?xml:namespace>
By mid-week, however, sentiment started to turn bearish as refinery outages at Citgo and Valero in the
Asian aromatics prices were sold off sharply amid volatile trading on Thursday. Benzene prices plunged to $940/tonne for July, shedding more than $100/tonne from the week’s high. August deals were finalised at $925/tonne.
Toluene trades also declined by nearly $100/tonne to $960/tonne for July shipment, while closing offers slipped to $945/tonne. Likewise, styrene tumbled to $1,250/tonne for July shipment.
Prices rebounded on Friday, with a benzene deal detected at $985/tonne for August lifting. A toluene deal was fixed at $990/tonne for July, up from Thursday’s closing offer at $945/tonne. Two styrene deals were heard concluded at $1,260/tonne, higher than Thursday's. | http://www.icis.com/Articles/2006/06/09/1069184/asian+aromatics+prices+rebound+after+thursday+falls.html | CC-MAIN-2013-20 | en | refinedweb |
In this tutorial you will learn how to change working directory on FTP server using java.
Change Directory :
FTP Server provides a way");
You can change your working directory on FTP server, by using FTPClient class method which is mentioned bellow -
boolean changeWorkingDirectory(String dirName) :
This method change the current working directory to the specified directory.
dirName may be absolute path or relative path.
Here is example -
changeWorkingDirectory("/ftpNew") // this method changes your current working directory to the ftpNew directory under server?s root directory.
changeWorkingDirectory("ftpNew") // this method changes your current working directory to the ftpNew directory relative to the previous working directory.
Example : This example represents how to change the working directory on ftp server. Here "ftpNew " is our new directory.
import java.io.IOException; import org.apache.commons.net.; import org.apache.commons.net.; import org.apache.commons.net.; class FtpChangeDirectory { public static void main(String[] args) throws IOException { FTPClient client = new FTPClient(); boolean result; try { // Connect to the localhost client.connect("localhost"); // login to ftp server result = client.login("admin", "admin"); if (result == true) { System.out.println("Successfully logged in!"); } else { System.out.println("Login Fail!"); return; } String newDir = "/ftpNew"; // Changing working directory result = client.changeWorkingDirectory(newDir); if (result == true) { System.out.println("Working directory is changed. Your New working directory : " + newDir); } else { System.out.println("Unable to change"); } } catch (FTPConnectionClosedException e) { e.printStackTrace(); } finally { try { client.disconnect(); } catch (FTPConnectionClosedException e) { System.out.println(e); } } } }
Output :
Successfully logged in! Working directory is changed. Your New working directory : /ftpNew
If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for.
Ask your questions, our development team will try to give answers to your questions.
Ask Questions? Discuss: Change working directory on FTP Server
Post your Comment | http://www.roseindia.net/java/javaftp/FtpChangeDir.shtml | CC-MAIN-2013-20 | en | refinedweb |
java.lang.Object
org.netlib.lapack.SORMQRorg.netlib.lapack.SORMQR
public class SORMQR
SORMQR is a simplified interface to the JLAPACK routine sormqORMQGEQRF.. * *,K) * The i-th column must contain the vector which defines the * elementary reflector H(i), for i = 1,2,...,k, as returned by *) REAL array, dimension (K) * TAU(i) must contain the scalar factor of the elementary * reflector H(i), as returned by SGEQRF. * * C (input/output) REAL array, dimension (LDC,N) * On entry, the M-by-N matrix C. * On exit, C is overwritten by Q*C or Q**T*C or C*Q**T or C*Q. * * LDC (input) INTEGER * The leading dimension of the array C. LDC >= max(1,M). * * WORK (workspace/output) REAL array, dimension (LWORK) * On exit, if INFO = 0, WORK(1) returns the optimal LWORK. * * LWORK (input) INTEGER * The dimension of the array WORK. * If SIDE = 'L', LWORK >= max(1,N); * if SIDE = 'R', LWORK >= max(1,M). * For optimum performance LWORK >= N*NB if SIDE = 'L', and * LWORK >= M*NB if SIDE = 'R', * * ===================================================================== * * .. Parameters ..
public SORMQR()
public static void SORMQR(java.lang.String side, java.lang.String trans, int m, int n, int k, float[][] a, float[] tau, float[][] c, float[] work, int lwork, intW info) | http://icl.cs.utk.edu/projectsfiles/f2j/javadoc/org/netlib/lapack/SORMQR.html | CC-MAIN-2013-20 | en | refinedweb |
What is the use of a Junit before and Test package in java? how can i use it with netbeans?
Can I have more than one method with @Parameters in junit test class which is running with Parameterized class ?
@RunWith(value = Parameterized.class)
public class JunitTest6 {
private String str;
public JunitTest6(String region, ...
Is it possible to test for multiple exceptions in a single JUnit unit test? I know for a single exception one can use, for example
@Test(expected=IllegalStateException.class)
Uri's answer got me thinking about what limitations JUnit 4 aquired by using annotations instead of a specific class hierarchy and interfaces the way JUnit 3 and earlier did. I'm ...
We have developed some code which analyzes annotated methods and adds some runtime behaviour. I would like to test this. Currently I am hand-coding stubs with certain annotations for setting up ...
i have written a few junits with @Test annotation. If my test method throws a checked exception and if i want to assert the message along with the exception, is there ...
I wish to launch the GUI application 2 times from Java test. How should we use @annotation in this case?
@annotation
public class Toto {
@BeforeClass
public static void setupOnce()
{
final Thread ...
I am using junit 4.8.1.
The following is the code. I am getting "Nullponiter" exception. I suspect that the "SetUp" code under @Before is not been excecuted before other methods. Request the ...
@Before
I've got the following test:
@Test(expected = IllegalStateException.class)
public void testKey() {
int key = 1;
this.finder(key);
}
I just used MyEclipse to automatically generate some JUnit test cases. One of the generated methods looks like this:
@Ignore("Ignored") @Test
public void testCreateRevision()
{
fail("Not yet implemented"); // TODO
}
I encountered TestDox tool that reads jUnit tests and processes them to support BDD-style documentation as follows:
Test class:
public class FooTest extends TestCase {
public void testIsASingleton() ...
What it the equivalent of using the @RunWith annotation for Junit 3.8? I've searched for a while on this, but Junit 3.8 is much older and I haven't been able to ...
In general I prefer to have annotation tags for methods, including @Test ones, on the line before the method declaration like this
@Test
public void testMyMethod() {
// Code
}
@Test public void testMyMethod() {
// ...
I've been looking for resources on how to extend JUnit4 with my own annotations.
I'd like to be able to define a @ExpectedProblem with which I could label some of my tests. ...
@ExpectedProblem
I'm trying to use the timeout parameter for Annotation Type Test in a unit test within an IntelliJ IDEA project:
The second optional parameter, timeout, causes a test to fail ...
I am very new to Java programming. I have a unit test file to be run. It has annotations of @Before and @Test. I have tried to understand these concepts using ...
@Test
In an effort to design components that are as reusable as possible, I got to thinking recently about the possibility of so-called "adapter annotations." By this, I mean the application ...
I wanted to create a custom JUnit annotation, something similar to expected tag in @Test, but I want to also check the annotation message.
Any hints how to do that, or maybe ...
I would like my @Before method to know the currently executing tests Annotations, so that the @Before method can do various things. Specifically, right now our @Before always does various initialization ...
I am attempting to create a utility method that uses reflection to test getters/setters. My idea is to allow the caller to specify a set of test values and the expected ...
We are using org.mule.tck.FunctionalTestCase for test cases. Its an abstract JUnit test case.
This is how the dependencies are declared in the pom.xml:
...
...
Is there a way say,
import org.junit.Test;
public interface ITest {
@Test
public void runTest();
}
public class ...
Is the test marked as not passing (i.e. red)? That should not happen. Do you by chance have two different classes with the name "MyCustomException"? Edit: alternative cause: You are running the Test as a JUnit 3 Testcase. This happens when you extend TestCase. If you want to use JUnit 4 features, then make sure that the class is recognized as ... | http://www.java2s.com/Questions_And_Answers/Java-Testing/junit/Annotation.htm | CC-MAIN-2013-20 | en | refinedweb |
#include <RepeatedTest.h>
#include <RepeatedTest.h>
Inheritance diagram for CppUnit::RepeatedTest:
Does not assume ownership of the test it decorates
[inline]
[private]
[virtual]
Return the number of test cases invoked by run().
The base unit of testing is the class TestCase. This method returns the number of TestCase objects invoked by the run() method.
Reimplemented from CppUnit::TestDecorator.
Run the test, collecting results.
Description of the test, for diagnostic output.
The test description will typically include the test name, but may have additional description. For example, a test suite named complex_add may be described as suite complex_add.
complex_add
suite complex_add | http://cppunit.sourceforge.net/doc/1.8.0/class_cpp_unit_1_1_repeated_test.html | CC-MAIN-2013-20 | en | refinedweb |
During the development of JSPs and servlets, you may trigger exceptions in the code. At times the resulting output from the exception is what caused the error in the first place, because the server is overriding the original exception to provide some information. You really want to know the underlying cause of the error. The Resin server allows you to access this rootCause exception. Consider the following code:
catch (Exception e) { debug.log(Level.WARNING, e.toString(), e); Throwable rootE = e.getCause(); if (rootE != null) debug.log(Level.WARNING, e.toString(), e); }
This code includes a normal try/catch block that catches all Exception types. The information about the exception will be logged to the Resin logging system.
If you need to know what your Resin server is doing within its threads, you can obtain a dump as follows :
Windows. Open a command prompt and start the server by changing to the installation's /bin directory and running the command httpd.exe. When the server has started and processed the pages you are interested in, press Ctrl-Break. The server will shut down and display where and what all the threads are currently doing.
UNIX. Open a command prompt and type kill -QUIT <PID>. The PID is one of the child processes for IBM's JDK and the parent process of all the threads in Sun's JDK.
Once you kill the server, output will be generated similar to that shown in Figure 12.4. Pay particular attention to the threads in the waiting-on-monitor stage, because several threads may be waiting on each other—thereby producing a deadlock.
Figure 12.4: Resin thread dump output.
Considerable attention has been given within the Resin mailing list groups to the subject of using stand-alone debuggers like jdb, JPad Pro (), Java Platform Debugger Architecture (JPDA;), and others with Resin. If you are interested in this topic, we suggest that you visit the Resin site () and search on the topic of debugging. You will be directed to a results page with hundreds of articles from the mailing lists about using these tools with Resin. Some of the comments include complete instructions, which we won't duplicate here.
In this chapter, we covered some types of errors you can expect to experience when working with JSP and servlets. We also discussed several different ways to produce debugging or logging information about your application. In Chapter 13, we will cover how to implement security in your server and pages.
Security is one of the most talked-about aspects of the Internet today. When you're developing a site or Internet-based application, you must concern yourself with providing access security, encryption of sensitive data, and other security topics. In this chapter, we will look at tools Resin provides in the following areas:
Authentication— Who can access.
Authorization— What can be accessed.
OpenSSL— Keeping data secure.
Authentication is defined as the process of determining the identify of a user. Typically, the user is trying to access a password-protected part of an application or site; for example, they may be logging in to a site that has personalized content. In this section we will look at three Resin techniques you can use instead of writing your own authentication code.
If you are interested in password protecting part of your site or application, the XmlAuthenticator will do the work for you. The XmlAuthenticator code pulls username/password combinations from the user and matches them against combinations listed in the Resin configuration file. Let's look at the configuration code first:
<authenticator type="com.caucho.server.security.XmlAuthenticator"> <init> <user>usergroup:group54:user</user> <password-digest>none</password-digest> </init> </authenticator> <login-config <security-constraint> <url-pattern>/users-only/*</url-pattern> <role-name>user</role- name > </security-constraint>
This configuration information is added to a <web-app> element to control access to the application files in the Web application. In the default Resin configuration file, you file a <web-app> for the tutorials defined as <web-app id="\java_tut" \>. By putting the previous configuration in the <web-app> element, the users-only directory is protected and a username/password is required for access. When you attempt to browse to one of the example template files in the java_tut/users-only directory, a username/password dialog appears on the screen, as shown in Figure 13.1.
Figure 13.1: Basic authentication using XmlAuthenticator.
Let's look at the configuration to determine what it's doing. The <login-config> element is the primary root for the configuration information. This element has an attribute called auth-method; in this example, it is set to basic. The basic value tells the system to pop up a username/password dialog. (In a moment you will see two other values that can be used.) Next, the configuration uses an <authenticator> element to choose the type of Resin authenticator class to use. (You will see other choices later in the chapter and look at the API to write your own authenticator class.) The <init> element lists the username:password:role values for the authentication. Notice that all the information is plain text. After the authenticator is chosen , the <security-constraint> element picks the directory where protected code resides and the role necessary to gain access to the code.
When you browse to your Web application and use a path with the user-only text in it, the system prompts for a username/password. Keep in mind that the username and password are passed over the network as plain text and can be easily hacked.
Note that the username/password/role information can also appear in the web.xml files located in each of the Web application's directories.
We've mentioned that the password in the basic authentication scheme is kept in a clear-text format in the Resin configuration file or the web.xml file. Resin provides the ability to encrypt the passwords stored in the various files. Consider the following authentication configuration:
<authenticator type="com.caucho.server.security.XmlAuthenticator"> <init> <user>segroup:8b7dchPDKI261QvH9Cw:user</user> <password-digest>MD5-base64</password-digest> <password-digest-realm>resin</password-digest-realm> </init> </authenticator> <login-config <realm-name>resin</realm-name> </login-config>
In this configuration, the auth-method attribute has been changed from basic to digest. In the <authenticator> element, a new <init-param> element has been added, using the password-digest attribute and a value of MD5-base64. This element tells the system that the authenticator is using the MD5 algorithm for the encryption/decryption of stored passwords. In the <init-param> for a user, you will see that the password is no longer plain text but instead is encrypted. To determine what the password should be, you can use the following application:
import com.caucho.http.security.PasswordDigest; import java.io; public class CreatePassword { public static void main(String[] args) { PasswordDigest digest = new PasswordDigest(); digest.setAlgorithm("MD5"); digest.setFormat("base64"); digest.setRealm("resin"); System.out.println(digest.getPasswordDigest(args[0], args[1])); } }
The application takes two command-line arguments: the username and the password. The result is output to the console. Unfortunately, you must execute this program manually each time you want to add a username and password combination to the password file. This process isn't very productive for new users to your site or application. As you will see later in the chapter, you can also use the database to store passwords.
Both basic and digest authentication use the popup dialog shown in Figure 13.1 to obtain the username and password from the user. Many times, you'll want to obtain the username and password using a form with associated graphics on the page. Resin lets you use an authenticator class, as in the basic and digest mechanisms, but also a form. First consider the configuration:
<authenticator type="com.caucho.server.security.XmlAuthenticator"> <init> <user>usergroup:group54:user</user> <password-digest>none</password-digest> </init> </authenticator> <login-config <realm-name>resin</realm-name> </login-config>
Here the configuration is changed to use an auth-method of form as well as a password-digest through the user= parameter. The real works takes place in a form placed on a Web page. The form must contain several specific values in order to work correctly.
You can add an additional element to the configuration to help define the form values to be used when a user wants access to a secured page. The <form-login-config> element contains a number of attributes for defining which page to display when a password is needed (form-login-page) and the page to display if the user is denied access to the page (form-error-page).
Here are the page and form to use as an example:
<HTML> <BODY> <FORM action='j_security_check' method='POST'> <TABLE> <TR><TD>User:<td><input name='j_username'></TD></TR> <TR><TD>Password:<td><input name='j_password'></TD></TR> <TR><TD><input type=submit> </TABLE> </FORM> </BODY> </HTML>
An HTML form needs to use a number of elements specific to Resin and the form authentication. The action value for the form must be the string j_security_check. When the form is submitted to the Resin server, the code defined as j_security_check executes. The code looks for a username defined by the name j_username and a password defined as j_password. These two values are compared against the defined username/password combinations stored in the configuration file.
If the username/password combination is found in the configuration file, the user is directed to the page they originally tried to access; otherwise , the user will see the error page. You have the option of embedding the name of the page where the user should be directed when they enter the correct username/password, by adding the following <input> tag and supplying the correct values:
<input name='j_uri' type='hidden' value='index.jsp'>
When you're using the XmlAuthenticator class and configuration, you can specify a password file. For example:
<authenticator type="com.caucho.server.security.XmlAuthenticator"> <init> <path>init/password.xml</path> <password-digest>MD5-base64</password-digest> </init> </authenticator> <login-config <realm-name>resin</realm-name> </login-config>
In this configuration file, the path attribute is used in an <init-param> tag to indicate the exact location and filename where passwords can be found. The format of the file is as follows :
<authenticator> <user name='segroup' password='8b7dhcPDKI261QvH9Cw' role='user'/> </authenticator>
At some point, you have probably experienced an interruption while shopping online. When you return to your shopping, one click on the site causes the system to ask you to log in again. This is called a session timeout. You don't want users to be continually logged in to the system, regardless of the idle time for the session. Resin lets you set configurations where the user is automatically logged out when the session times out.
Consider the following configuration:
<authenticator type="com.caucho.server.security.XmlAuthenticator"> <init-param> <path>init/password.xml</path> <password-digest>MD5-base64</password-digest> <logout-on-session-timeout>true</logout-on-session-timeout> <user>segroup:8b7dhcPDKI261QvH9Cw:user</user> </init-param> </authenticator> <login-config <realm-name>resin</realm-name> </login-config>
The new line in the configuration is the <init-param> element that has a logout-on-session-timeout attribute with a value of True. This attribute causes the system to log out the user for this specific Web application.
As we've noted several times, the username/password combinations are stored in the various configuration files or another file on the server. Although this setup works, it doesn't handle adding new combinations easily. Resin provides another option: You can use a backend database to store the password information. Just about any database can be used, as long as it can be connected to by a JDBC driver.
Consider the following configuration:
<authenticator type="com.caucho.server.security.JdbcAuthenticator"> <init> <data-source>jdbc/logins</data-source> <password-query> SELECT password FROM PASSWORD WHERE username=? </password-query> <cookie-auth-query> SELECT username FROM PASSWORD WHERE cookie=? </cookie-auth-query> <cookie-auth-update> UPDATE PASSWORD SET cookie=? WHERE username=? </cookie-auth-update> <role-query> SELECT role FROM PASSWORD WHERE username=? </role-query> <logout-on-session-timeout>true</logout-on-session-timeout> </init> </authenticator> <login-config <realm-name>resin</realm-name> </login-config>
You can use the JdbcAuthenticator class to obtain a password from a JDBC-compliant database. As you can see in the previous configuration, access to a restricted page causes a form to be displayed to the user; this form is found in the login.html Web page. The form's HTML is as follows:
<HTML> <BODY> <FORM action='j_security_check' method='POST'> <TABLE> <TR><TD>User:<td><input name='username'></TD></TR> <TR><TD>Password:<td><input name='password'></TD></TR> <TR><TD><input type=submit> </TABLE> </FORM> </BODY> </HTML>
As required, the action on the form is j_security_check; but the names of the input values have been changed since you first saw the form authentication of username and password. When the user clicks the Submit button, the values are passed to the JdbcAuthenticator for use in various SQL statements. The statements are listed in the previous configuration.
The first element for defining access to a database where passwords are stored is <pool-name>. This element defines the datasource to be used for access to the password database. In this example, the datasource value is jdbc/logins. This value relates to a <resource-ref> element in the configuration file. Here's an example <resource-ref> element used as a test:
<database> <jndi-name>jdbc/logins</jndi-name> <driver type="org.gjt.mm.mysql.Driver"> <url>jdbc:mysql://localhost:3306/users</url> <user></user> <password></password> </driver> <max-connections>20</max-connections> <max-idle-time>30s</max-idle-time> </database>
Once the authenticator has access to a database, it expects to find a database and table with the password information. The schema defined in the JdbcAuthenticator code is as follows:
CREATE TABLE password ( username VARCHAR(250) NOT NULL, password VARCHAR(250), cookie VARCHAR(250), role VARCHAR(250), PRIMARY_KEY(username) )
Once access has been established with the appropriate database, SQL is executed to obtain the correct information for the provided username and password. There are four SQL definitions:
<password-query>— Requires a SQL statement that returns the password for a provided username.
<cookie-auth-query>— Provides the username for a specific cookie value.
<cookie-auth-update>— Allows the system to update the cookie for a specific username.
<role-query>— A SQL statement that returns a role value for a specific username.
The specific SQL statements should use the <input> tag names from the HTML form used to obtain the username and password.
If the username and password aren't found in the database, the error.html page is executed. In a production system, the error page will probably add the user to the database as appropriate. | http://flylib.com/books/en/1.213.1.103/1/ | CC-MAIN-2013-20 | en | refinedweb |
Trap.
Trapfs - an automounter on the cheap
Posted Nov 4, 2004 20:24 UTC (Thu) by crlf (guest, #25122)
[Link]
Trapfs was originally developed to deal with trapping access to non-existent device files. The intent to provide devfs style module-loading when a user tries to access non-existent files in /dev. Autofs on the otherhand has a similar albeit very different task: to mount filesystems upon access to non-existing directories as well as to existing directories upon traversal.
The difference in scope is where trapfs provides dynamic manipulation of a given filesystem, autofs provides dynamic manipulation of a system namespace. The two appear to handle the same problem, but in fact the semantic differences distinguish them.
For instance, the semantics for autofs are well defined across various other Unix platforms, and autofsng tries hard to match them. For instance, in autofs you have three kinds of traps:
- when accessing a yet-non-existing-directory
- when accessing a 'ghosted' directory (the autofs 'browse' option)
- when accessing a directory on another filesystem (to perform lazy-mounting of a hierarchal multimount entry).
Contrarily, trapfs (currently) only handles the case of accessing a non-existent fs object (file/directory/device/etc) on a given filesystem.
Another semantic that differs between the two is the expiration of 'trapped' objects. Autofs has defined rules for 'peeling' back automounted filesystems. These rules are complicated due to the combination of hierarchal multimounts and racing with userspace for access. Autofsng solves these problems by making expiry of a complicated hierarchy of filesystems native to the kernel vfs. Similar issues will be seen with trapfs come the time somebody wishes to have directories that are trapped and created recursively.
Given the current state of trapfs, I do not feel it is ready for anything more than a devfs work-a-like [1] or very simple automounting.
Going forward, I hope to work with Adam to see if any of the functionality of the two projects can be merged together.
[1]: preferrably with udev of course :)
Linux is a registered trademark of Linus Torvalds | http://lwn.net/Articles/109400/ | CC-MAIN-2013-20 | en | refinedweb |
On Wed, Aug 20, 2008 at 02:38:11PM -0700, johnewing wrote: > I am trying to figure out how to test if two numbers are of the same > sign (both positive or both negative). I have tried > > abs(x) / x == abs(y) / y Zero is a problem, no matter how you slice it. Zero can be considered positive or negative (mathematically, 0 = -0). If you want zero to be treated always as positive, you can write this: def same_sign(a, b): return (abs(a) == a) == (abs(b) == b) If you want to respect zero's duplicitous nature, you have to write it like this: def same_sign(a, b): if a == 0 or b == 0: return True return (abs(a) == a) == (abs(b) == b) The first version *almost* works for the duplicitous zero: >>> sign(-0, 1) True >>> sign(0, 1) True >>> sign(0, -1) False Close, but no cigar. -- Derek D. Martin GPG Key ID: 0x81CFE75D -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 196 bytes Desc: not available URL: <> | http://mail.python.org/pipermail/python-list/2008-August/474360.html | CC-MAIN-2013-20 | en | refinedweb |
Discussions
General J2EE: Accessing properties from jboss
Accessing properties from jboss (4 messages)Hi, Am using jboss AS, and, I would like to give some configuration values(typically key-value). I have a scenario to access certain key values from a config file, I can do it through Properties class and FileInputStream, but i don want to hardcode path to the config file, I also can do it through ClassLoader.getResourceAsStream ("resource.properties"); But what I am wondering here is if I can configure props using something like jboss.properties,etc, so that props are loaded when jboss starts up, and I can access those props through System.getProperty() Appreciate your help! Thanks
Threaded Messages (4)
- Re: Accessing properties from jboss by Regunath B on April 10 2007 04:55 EDT
- Re: Accessing properties from jboss by v ck on April 10 2007 05:02 EDT
- Re: Accessing properties from jboss by Biswa Das on April 10 2007 11:58 EDT
- Re: Accessing properties from jboss by v ck on April 20 2007 01:04 EDT
Re: Accessing properties from jboss[ Go to top ]
There are different ways to achieve this in JBoss AS depending on where you need to access the configuration information. In order to access config files from the web layer, you may specify it in your web.xml using relative paths. For e.g you would access a file app.properties located under \server\default\conf folder using a init param for a startup servlet like: appConfigPath /../../../conf/app.properties Alternatively, you may specify the relative path as an attribute for a MBean that you have written. MBean configuration is in jboss-service.xml
- Posted by: Regunath B
- Posted on: April 10 2007 04:55 EDT
- in response to v ck
Re: Accessing properties from jboss[ Go to top ]
I do not want to access the property from a servlet or from an MBean, I should have mentioned that, sorry. My scenario is to get some config values from a property file and use it in a few helper classes that I have written to carry out some general functions, for ex, I want to access a timeout value from the property file in the helper class! how do I achieve this, or which is the right way of doing this thanks
- Posted by: v ck
- Posted on: April 10 2007 05:02 EDT
- in response to Regunath B
Re: Accessing properties from jboss[ Go to top ]
See run.conf file that is loaded by JBoss boot up. But It will only solve one problem System.getProperty and it is not a typical properties file. In case you want to load it through classloader as well then you may have to put some research and update the Jboss startup script. Keep the properties file in a jar and put it in Server lib and add some unzipping logic to read it un run.sh script as well.. Happy trying..........
- Posted by: Biswa Das
- Posted on: April 10 2007 11:58 EDT
- in response to v ck
Re: Accessing properties from jboss[ Go to top ]
I could finally access the props specified in jboss-service.xml through an MBean, initially I had mentioned that I did not want to do it through MBean, but now due to a change, I have to do this, and here is an FYI on how to do it : 1) specify your key-value pairs in jboss-service.xml Ex : <!-- Additional properties required by the environment --> <![CDATA[ Key1=11111 Key2=22222 ..... 2)Now in your management bean, put an @Depends Ex: package com.abc.xyz; // your import statements /** * @author somebody * */ @LocalBinding(...) @Service(...) @Management(...) @Depends("jboss.util:type=Service,name=SystemProperties") public class MBeanClass implements MBeanInterface { ... } 3)Now in your Mbean you can access the props with a System.getProperty() call.
- Posted by: v ck
- Posted on: April 20 2007 01:04 EDT
- in response to Biswa Das | http://www.theserverside.com/discussions/thread.tss?thread_id=44936 | CC-MAIN-2013-20 | en | refinedweb |
Defines the Profile interface. More...
#include <Profile.h>
Defines the Profile interface.
An abstract base class for representing object location information. This is based on the CORBA IOR definitions.
Constructor.
If you have a virtual method you need a virtual dtor.
To be used by inherited classes.
Decrement the object's reference count. When this count goes to 0 this object will be deleted.
Increase the reference count by one on this object.
Obtain the object key, return 0 if the profile cannot be parsed. The memory is owned by the caller!
Reimplemented in TAO_Unknown_Profile.
Add a protocol-agnostic endpoint.
Reimplemented in TAO_IIOP_Profile.
Add the given tagged component to the profile.
Set the addressing mode if a remote servant replies with an addressing mode exception. If this profile doesn't support a particular addressing mode, this method needs to be overridden signal the appropriate error.
** RACE CONDITION NOTE **
Currently, getting and setting the addressing mode is not protected by a mutex. Theoretically, this could cause a race condition if one thread sends a request, then gets an exception from the remote servant to change the addressing mode, and then another thread sends a different request to the same servant using the wrong addressing mode. The result of this is that we'll get another address change exception. (Annoying, but not that bad.)
In practice at the current time, the above theoretical case never happens since the target specification always uses the object key except for MIOP requests. Remote ORBs can't respond to MIOP requests even to send exceptions, so even in this case, the race condition can't happen.
Therefore, for the time being, there is no lock to protect the addressing mode. Given that the addressing mode is checked in the critical path, this decision seems like a good thing.
Return the current addressing mode for this profile. In almost all cases, this is TAO_Target_Specification::Key_Addr.
Return a pointer to this profile's endpoint. If the most derived profile type uses an endpoint that is a type that does not derive from the endpoint type of the base profile, then this method returns the base type's endpoint. For example, SSLIOP_Profile derives from IIOP_Profile, but SSLIOP_Endpoint does not derive from IIOP_Endpoint. Because SSLIOP is tagged the same as IIOP, this method is required to facilitate the Endpoint Policy's filtering function. The default implementation of base_endpoint simply returns endpoint.
Reimplemented in TAO_IIOP_Profile.
Compare the object key for this profile with that of another. This is weaker than is_equivalent
Creates an encapsulation of the ProfileBody struct in the cdr.
Implemented in TAO_IIOP_Profile, and TAO_Unknown_Profile.
This method is used to get the IOP::TaggedProfile. The profile information that is received from the server side would have already been decoded. So this method will just make a IOP::TaggedProfile struct from the existing information and return the reference to that. This method is necessary for GIOP 1.2.
Initialize this object using the given CDR octet string.
Reimplemented in TAO_Unknown_Profile.
Helper for decode(). Decodes endpoints from a tagged component. Decode only if RTCORBA is enabled. Furthermore, we may not find TAO_TAG_ENDPOINTS component, e.g., if we are talking to nonRT version of TAO or some other ORB. This is not an error, and we must proceed. Return 0 on success and -1 on failure.
Implemented in TAO_IIOP_Profile, and TAO_Unknown_Profile.
Decode the protocol specific profile details.
Implemented in TAO_IIOP_Profile, and TAO_Unknown_Profile.
Profile equivalence template method invoked on subclasses.
TAO_Profile subclasses must implement this template method so that they can apply their own definition of profile equivalence.
Implemented in TAO_IIOP_Profile, and TAO_Unknown_Profile.
Encode this profile in a stream, i.e. marshal it.
Reimplemented in TAO_Unknown_Profile.
Encodes this profile's endpoints into protocol specific tagged components. This is used for non-RTCORBA applications that share endpoints on profiles. The only known implementation is IIOP, using TAG_ALTERNATE_IIOP_ADDRESS components.
Reimplemented in TAO_IIOP_Profile.
Encodes this profile's endpoints into a tagged component. This is done only if RTCORBA is enabled, since currently this is the only case when we have more than one endpoint per profile.
Implemented in TAO_IIOP_Profile, and TAO_Unknown_Profile.
Return a pointer to this profile's endpoint. If the profile contains more than one endpoint, i.e., a list, the method returns the head of the list.
Implemented in TAO_IIOP_Profile, and TAO_Unknown_Profile.
Return how many endpoints this profile contains.
Implemented in TAO_IIOP_Profile, and TAO_Unknown_Profile.
Return the first endpoint in the list that matches some filtering constraint, such as IPv6 compatibility for IIOP endpoints. This method is implemented in terms of TAO_Endpoint::next_filtered().
MProfile accessor.
Keep a pointer to the forwarded profile.
This object keeps ownership of this object.
Accessor for the client exposed policies of this profile.
Return a hash value for this object.
Implemented in TAO_IIOP_Profile, and TAO_Unknown_Profile.
Verify profile equivalance.
Two profiles are equivalent if their tag, object_key, version and all endpoints are the same.
trueif this profile is equivalent to
other_profile.
Allow services to apply their own definition of "equivalence.".
This method differs from the
do_is_equivalent() template method in that it has a default implementation that may or not be applicable to all TAO_Profile subclasses.
Reimplemented in TAO_Unknown_Profile.
Return the next filtered endpoint in the list after the one passed in. This method is implemented in terms of TAO_Endpoint;:next_filtered(). If the supplied source endpoint is null, this returns the first filtered endpoint.
The object key delimiter.
Implemented in TAO_IIOP_Profile, and TAO_Unknown_Profile.
Get a pointer to the TAO_ORB_Core.
Initialize this object using the given input string. Supports URL style of object references
Reimplemented in TAO_Unknown_Profile.
Protocol specific implementation of parse_string ().
Implemented in TAO_IIOP_Profile, and TAO_Unknown_Profile.
This method sets the client exposed policies, i.e., the ones propagated in the IOR, for this profile.
Remove the provided endpoint from the profile. Some subclasses of TAO_Profile already have a protocol-specific version of remove_endpoint, but this generic interface is required. The default implementation is a no-op. Protocol maintainers wishing to add support for the EndpointPolicy must implement remove_generic_endpoint to call their protocol-specific version of remove_endpoint
Reimplemented in TAO_IIOP_Profile.
Helper method that encodes the endpoints for RTCORBA as tagged_components.
Returns true if this profile can specify multicast endpoints.
Returns true if this profile supports non blocking oneways.
The tag, each concrete class will have a specific tag value.
The tag, each concrete class will have a specific tag value.
Access the tagged components, notice that they they could be empty (or ignored) for non-GIOP protocols (and even for GIOP-1.0)
Return a string representation for this profile. Client must deallocate memory. Only one endpoint is included into the string.
Implemented in TAO_IIOP_Profile, and TAO_Unknown_Profile.
Verify that the current ORB's configuration supports tagged components in IORs.
Verify that the given profile supports tagged components, i.e. is not a GIOP 1.0 profile.
Return a pointer to this profile's version. This object maintains ownership.
The current addressing mode. This may be changed if a remote server sends back an address mode exception.
Flag indicating whether the lazy decoding of the client exposed policies has taken place.
The TAO_MProfile which contains the profiles for the forwarded object.
Object_key associated with this profile.
Number of outstanding references to this object.
The tagged components.
Our tagged profile.
Having (tagged_profile_ != 0) doesn't mean yet that tagged_profile_ building is finished.
A lock that protects creation of the tagged profile. | http://www.dre.vanderbilt.edu/Doxygen/5.7.7/html/tao/a00324.html | CC-MAIN-2013-20 | en | refinedweb |
05 March 2008 04:30 [Source: ICIS news]
By Jeremiah Chan
SINGAPORE (ICIS news)--Fatty acid demand in Asia is coming off sharply due to volatility in upstream vegetable oil markets, producers and traders said on Wednesday.
Feedstock crude palm oil (CPO) prices ended a record-breaking run on Tuesday with benchmark May-delivery futures falling on the Bursa Malaysia on Tuesday.
They plummeted to ringgit (M$) 4,091/tonne ($1,268/tonne) at the close, after hitting a historical high of M$4,486/tonne earlier in the day.
Bearish sentiment has continued to Wednesday, as the May CPO retreated to M$3,951/tonne at 0336 GMT, a M$148/tonne drop from last evening.
Some market sources attributed the sudden drop to profit taking as speculative funds quickly liquidated their positions. Others felt that the highs over the past few days were unsustainable.
“There would be demand destruction at current palm oil price levels,” Henri Bardon, managing director of ethanol trading firm Vertical Asia, said. This, combined with fears of an impending recession would drag palm oil prices down, he said.
The turmoil in the vegetable oils market has hit downstream industries, with buyers of derivative fatty acids quickly sidelined by the rapidity of the price movements.
“Fatty acid sales have come to a standstill this week,” the senior official of a fatty acid production unit in ?xml:namespace>
Despite CPO appearing to have come off highs, demand remained lacklustre at current levels, he said.
Other producers agreed, with the marketing official of a fatty acid plant based in
A regional oleochemical trader said that the demand for fatty acids would only recover if raw material prices fell further and find some stability. Only then would buyers return to the market, he said.
Major fatty acid producers in
($1 = M$3.18)
Anu Agarwal. | http://www.icis.com/Articles/2008/03/05/9105827/asia-oleochemicals-suffer-on-volatile-cpo.html | CC-MAIN-2013-20 | en | refinedweb |
Back to index
00001 /* -*- Mode: Java; tab-width: 8; c-basic-offset: 4 -*- Mozilla Communicator client code, released 00017 * March 31, 1998. 00018 * 00019 * The Initial Developer of the Original Code is 00020 * Netscape Communications Corporation. 00021 * Portions created by the Initial Developer are Copyright (C) 1998 00022 * the Initial Developer. All Rights Reserved. 00023 * 00024 * Contributor(s): 00025 * 00026 * Alternatively, the contents of this file may be used under the terms of 00027 * either of the GNU General Public License Version 2 or later (the "GPL"), 00028 * or 00041 package netscape.javascript; 00042 import java.io.*; 00043 00044 public class JSUtil { 00045 00046 /* Return the stack trace of an exception or error as a String */ 00047 public static String getStackTrace(Throwable t) { 00048 ByteArrayOutputStream captureStream; 00049 PrintWriter p; 00050 00051 captureStream = new ByteArrayOutputStream(); 00052 p = new PrintWriter(captureStream); 00053 00054 t.printStackTrace(p); 00055 p.flush(); 00056 00057 return captureStream.toString(); 00058 } 00059 } | https://sourcecodebrowser.com/lightning-sunbird/0.9plus-pnobinonly/_j_s_util_8java_source.html | CC-MAIN-2017-51 | en | refinedweb |
Diagram
Note
The diagram does not display private methods for the sake of brevity. While I will attemp to keep all diagrams in sync with the actual class, there may be occasions where the diagram is a version or more behind.
General
The general purpose for the Reflection namespace is to provide a wrapper for the .Net Reflection namespace. Because reflection plays such a key role in creating dynamic, flexible solutions, this namespace is actually foundational to the rest of the Nvigorate framework. Whether or not end users of the framework choose to use the classes provided is actually not the primary concern. Because the use of Reflection usually involves the same tasks, Reflector uses static methods to provide an easy "automated" way to leverage Reflection without the tedium of writing and maintaining the source to do so.
Current Feature Set
- Dynamically read the value of any property or field (including those inherited) from a target class.
- Dynamically write a compatible value to any eligible property or field (including those inherited) to a target class.
- Dynamically invoke any method (static, generic, inherited, etc.) and return the result.
- Dynamically create generic types at runtime using provided type arguments.
- Return all custom attributes from a type using any combination of the following criteria:
- Attribute type
- Attribute name
- Member type(s) the attribute applies to
- Member name
- List members within a type having custom attribute(s) applied to them which meet any combination of the following criteria:
- Attribute type
- Attribute name
- Member type(s) the attribute applies to
- Return field-to-value dictionaries for a given instance.
- Return the field or property value for each instance in a collection.
Potential Features
- Dynamic generation of types within an assembly based on data necessary to generate the type.
- Dynamic type assembly cache management which prevents needless regeneration of types created from identical criteria.
- The ability to generate types on the fly from different input sources. (dataset, xml, source code, etc.)
Technical Specification
Source Code
The Reflector class is currently made up of four separate files: Reflector.cs, Reflector.Attributes.cs, Reflector.Generics.cs and Reflector.Collections.cs. The primary reason for dividing this class into multiple source files (using the partial keyword in the class definition) is to clearly delineate the various functions based on the area they address. The total amount of source lines for the Reflector class is roughly 1800 lines (at this time).
NUnit Tests
Each of the partial class files will have a corresponding test file which contains the NUnit tests necessary to validate functionality and provide full regression testing. Because some functions behave differently based upon input and context, a Unit test should exist for each piece of functionality.
Fields
Reflector will use four private members for the sole purpose of controlling the BindingFlags combinations used. The first one will carry the value of the currently designated combination, while the other four will provide the default combinations for each setting.
An additional member will control whether or not the Reflector should perform recursive hierarchy searches when looking for a method, field or property.
Properties
Two booleans are required in order to provide the user with sufficient means to control Reflector's behavior based on the BindingFlag combinations. One boolean will reflect what access level Reflector can access while the other will be used to determine whether or not to perform recursive hierarchical searches.
Documentation
Documentation for this (and other) classes will be made available shortly. In the meantime, unit tests can provide good examples for how this class is used. | http://nvigorate.wikidot.com/reflection | CC-MAIN-2017-51 | en | refinedweb |
Pidgin should be able to turn on an away message when xscreensaver activates
Bug Description
I think gaim should have an option to turn on an away message when xscreensaver
activates. This would keep users from having to do one more thing before leaving
the computer. Also, when the screensaver is activated it's not possible to read
new messages, so I think this would be a sensible default.
I think it would be nice if gaim would put up an away message whenever
xscreensaver is active (on the same display of course) and the away message
turned off after xscreensaver is deactivated. That way the user doesn't have to
keep the idle time in two places (xscreensaver config and gaim config) and if
the user wants to leave the computer he/she can just activate the screensaver
(or in the case of a laptop, close the lid, which triggers an acpi event ->
lid.sh activates xscreensaver) and not have to put up an away message first.
oh the functionality is cool enough, and certainly has notable uses that idle
time does not adequately cover, for example manually blanking out the screen in
a single step. I wouldn't object to a patch, or better, a plugin, to do this.
But I don't see any of us (upstream) spending significant time trying to figure
out how to do it.
marking as upstream, patches are welcome
This could be done quite simply, even with only a couple lines of code.
In the xscreensaver lock/start screensaver function, check to see if
gaim/gaim-remote is installed. If it is, just run $ gaim-remote away.
#!/usr/bin/python
import os, dbus, gobject, dbus.glib
bus = dbus.SessionBus()
def onSessionIdleCh
if state:
else:
bus.add_
gobject.
Changed the package to Pidgin from Gaim.
Confirmed as an enhancement upstream. (http://
Thanks for your bug. Could you describe what is your issue? The preferences have
an option to be away after "... minutes" not using the computer or gaim | https://bugs.launchpad.net/ubuntu/+source/pidgin/+bug/23693 | CC-MAIN-2017-51 | en | refinedweb |
.
// this script pushes all rigidbodies that the character touches var pushPower = 2.0; function OnControllerColliderHit (hit : ControllerColliderHit) { var body : Rigidbody = var pushDir : Vector3 = Vector3 (hit.moveDirection.x, 0, hit.moveDirection.z);
// If you know how fast your character is trying to move, // then you can also multiply the push velocity by that.
// Apply the push body.velocity = pushDir * pushPower; }
using UnityEngine; using System.Collections;
public class ExampleClass : MonoBehaviour {; Vector3 pushDir = new Vector3(hit.moveDirection.x, 0, hit.moveDirection.z); body.velocity = pushDir * pushPower; } }
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/ScriptReference/CharacterController.OnControllerColliderHit.html | CC-MAIN-2017-51 | en | refinedweb |
Error for boundBox() and drawBB()
I am currently writing a detection/tracking system for cars in a car park. I have gotten some great help from @jayrambhia. The only problem I am having now is that I cannot use the method boundBox() or drawBB(). Anytime I run the code I get the following errors.
Traceback (most recent call last): File "C:/Users/Chris/Desktop/SimpleCVScripts/detectionTwo.py", line 18, in <module> car_blob_bounding_box = car_blobs.boundingBox() AttributeError: 'FeatureSet' object has no attribute 'boundingBox'
And I get the following error for drawBB()
Traceback (most recent call last): File "C:/Users/Chris/Desktop/SimpleCVScripts/detectionTwo.py", line 24, in <module> tracking_list.drawBB(color = scv.Color.RED) AttributeError: 'list' object has no attribute 'drawBB'
I don't know what's causing this. I can't tell if the code I have written is going to work unless I test it. Is there anything in the code below that could be causing these errors?.
import numpy import cv2 as cv import SimpleCV as scv from datetime import datetime background_subtraction = cv.BackgroundSubtractorMOG(24*60, 3, 0.9, 0.01) capture = cv.VideoCapture(0) ''' #Not used until detection/tracking is working number_of_cars = 0; ''' #Clean up the image def preproccessor(): display_image.dilate(2) display_image.erode(2) #Remove the background from the image def subtract_background(): global f, image global fgmask global display_image f, image = capture.read() fgmask = background_subtraction.apply(image) display_image = scv.Image(image) preproccessor() #Detect blobs in the image def detect_cars(): global cars cars = display_image.findBlobs() if cars: cars.draw() #Track the blobs that were detected def track_cars(): tracking_list = [] car_bounding_box = cars.boundingBox() tracking_list = image.track("mftrack",ts,display_image,cbox) tracking_list.drawBB() display_image.show() #Main loop for the program while(True): subtract_background() detect_cars() track_cars()
This might be helpful. | http://help.simplecv.org/question/1395/error-for-boundbox-and-drawbb/ | CC-MAIN-2019-35 | en | refinedweb |
As sahara’s API has evolved there have been several features introduced in the form of routes and methods that could be crafted in a more consistent and predictable manner. Additionally, there are several new considerations and methodologies that can only be addressed by updating the major version of the API. This document serves as a roadmap to implement an experimental v2 API which will form the basis of the eventual stable version.
Note
This is an umbrella specification covering many changes, there will be followup specifications to cover some of the more intricate details.
The current version of sahara’s REST API, 1.1, contains several methodologies and patterns that have created inconsistencies within the API and with respect to the API Working Group’s evolving guidelines[1]. Many of these are due to the iterative nature of the design work, and some have been created at a time before stable guidelines existed.
Examples of inconsistencies within the current API:
In addition to resolving the inconsistencies in the API, a new version will provide an opportunity to implement features which will improve the experience for consumers of the sahara API.
Examples of features to implement in the new API:
These are just a few examples of issues which can be addressed in a new major version API implementation.
To address the creation of a new major version API, an experimental
/v2
endpoint should be created. This new endpoint will be clearly marked as
experimental and no contract of stability will be enforced with regards to
the content of its sub-endpoints. Changes to the
/v2 endpoint will be
tracked through features described in this specification, and through further
specifications which will be created to better describe the details of larger
changes.
When all the changes to the
/v2 endpoint have been made such that it
has a 1:1 feature compliance with the current API, and the Python sahara
client has been updated to use these new endpoints, the experimental status
of the API should be assessed with the goal of marking it as stable and
ready for public consumption.
The individual changes will be broken down into individual tasks. This will allow the sahara team members to more easily research and implement the changes. These efforts will be coordinated through a page on the sahara wiki site[2].
The initial changes to create the
/v2 endpoint should also include moving
the Identity project identifier to a header named
OpenStack-Project-ID.
In all other respects, the endpoints currently in place for the
/v1.1 API
will be carried forward into the new endpoint namespace. This will create a
solid base point from which to make further changes as the new API evolves
and moves towards completion of all features described in the experimental
specifications.
Removing the project identifier from the URI will help to create more consistent, reusable, routes for client applications and embedded HREFs. This move will also help decouple the notion of URI scoped resources being tied to a single project identifier.
The following list is an overview of all the changes that should be
incorporated into the experimental API before it can be considered for
migration to stable. These changes are not in order of precedence, and can
be carried out in parallel. Some of these changes can be addressed with
simple bugs, which should be marked with
[APIv2] in their names. The more
complex changes should be preceeded by specifications marked with the same
[APIv2] moniker in their names. For both types of changes, the
commits should contain
Partial-Implements: bp v2-api-experimental-impl
to aid in tracking the API conversion process.
Overview of changes:
This list is not meant to contain all the possible future changes, but a window of the minimum necessary changes to be made before the new API can be declared as stable.
The move to stable for this API should not occur before the Python sahara client has been updated to use the new functionality.
An alternative might be to make changes to the current version API, but this is inadvisable as it breaks the API version contract for end users.
Although the current version API can be changed, there is no way to safely make the proposed changes without breaking backward compatibility. As the proposed changes are quite large in nature it is not advisable to create a “1.2” version of the API.
Most of these changes will not require modifications to the data model. The
two main exceptions are the payload name changes for
hadoop_version and
oozie_job_id. As the data model will continue to be used for the v1.1
API until it is deprecated, it is not advisable to rename these fields at
this time. When the v2 API has been made stable, and the v1.1 API has been
deprecated, these fields should be revisisted and changed in the data model.
During the experimental phase of the API, these translations will occur in the code that handles requests and responses. After the API has transitioned to production mode, migrations should be created to align the data models with the API representations and translations should be created for the older versions only as necessary. As the older version API will eventually be deprecated, these changes should be scheduled to coincide with that transition.
As this specification is addressing a high level change of the API, the following changes are enumerated in brief. Full details should be created for changes that will require more than just renaming an endpoint.
OpenStack-Project-IDheader on all requests.
In the experimental phase, this change should have no noticeable affect on the end user. Once the API has been declared stabled, users will need to switch python-saharaclient versions as well as upgrade their horizon installations to make full use of renamed features.
During the experimental phase, this change will have no effect on deployers.
When the API reaches the stable phase, deployers will be responsible for upgrading their installations to ensure that sahara and python-saharaclient are upgraded as well as changing the service catalog to represent the base endpoint.
As this change is targeted for experimental work, developers should know that the details of the v2 API will be constantly changing. There is no guarantee of stability.
Unit tests will be created to exercise the new endpoints. Additionally, the gabbi[8] testing framework should be investigated as a functional testing platform for the REST API.
To improve security testing, tools such as Syntribos[9] and RestFuzz[10] should be investigated for use in directed testing efforts and as possible gate tests.
These investigations should result in further specifications if sufficient results are discovered to warrent their creation as they will deal with new testing modes for the sahara API server.
As the v2 API reaches stable status, and the python-saharaclient has been ported to use the new API, the current functional tests should provide the necessary framework to ensure successful end-to-end testing.
During the experimental phase, this work will not produce documentation. As the evaluation for stable approaches there will need to be a new version of the WADL files for the api-ref[11] site, if necessary. There is the possibility that this site will change its format, in which case these new API documents will need to be generated.
Further, the v2 API should follow keystone’s model[12] of publishing the API reference documents in restructured text format to the specs repository. This would make the API much easier to document and update as new specification changes could also propose their API changes to the same repo. Also, the WADL format is very verbose and the future of this format is under question within the OpenStack documentation community[13]. The effort to make accurate documentation for sahara’s API should also include the possibility of creating Swagger[14] output as the v2 API approaches stable status, this should be addressed in a more separate specification as that time approaches.
[1]:
[2]:
[3]:
[4]:
[5]:
[6]:
[7]:
[8]:
[9]:
[10]:
[11]:
[12]:
[13]:
[14]:
Liberty summit etherpad,
Mitaka summit etherpad,
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. | http://specs.openstack.org/openstack/sahara-specs/specs/backlog/api-v2-experimental-impl.html | CC-MAIN-2019-35 | en | refinedweb |
Creating Animations Using React Spring.
If you want to follow along, install react-spring to kick things off:
## yarn yarn add react-spring ## npm npm install react-spring --save
Spring
The Spring prop can be used for moving data from one state to another. We are provided with a
from and
to prop to help us define the animation’s starting and ending states. The
from prop determines the initial state of the data during render, while we use
to in stating where it should to be after the animation completes.
In the first example, we will make use of the render prop version of creating spring animation.
See the Pen
react spring 1 by Kingsley Silas Chijioke (@kinsomicrote)
on CodePen.
On initial render, we want to hide a box, and slide it down to the center of the page when a button is clicked. It’s possible to do this without making use of react-spring, of course, but we want to animate the entrance of the box in to view and only when the button is clicked.
class App extends React.Component { state = { content: false } displayContent = (e) => { e.preventDefault() this.setState({ content: !this.state.content }) } render() { return ( <div className="container"> // The button that toggles the animation <div className="button-container"> <button onClick={this.displayContent} Toggle Content </button> </div> { !this.state.content ? ( // Content in the main container <div> No Content </div> ) : ( // We call Spring and define the from and to props <Spring from={{ // Start invisible and offscreen opacity: 0, marginTop: -1000, }} to={{ // End fully visible and in the middle of the screen opacity: 1, marginTop: 0, }} > { props => ( // The actual box that slides down <div className="box" style={ props }> <h1> This content slid down. Thanks to React Spring </h1> </div> )} </Spring> ) } </div> ) } }
Most of the code is basic React that you might already be used to seeing. We make use of react-spring in the section where we want to conditionally render the content after the value of
content has been changed to
true. In this example, we want the content to slide in from the top to the center of the page, so we make use of
marginTop and set it to a value of
-1000 to position it offscreen, then define an opacity of
0 as our values for the
from prop. This means the box will initially come from the top of the page and be invisible.
Clicking the button after the component renders updates the state of the component and causes the content to slide down from the top of the page.
We can also implement the above example using the hooks API. For this, we’ll be making use of the
useSpring and
animated hooks, alongside React’s built-in hooks.
const App = () => { const [contentStatus, displayContent] = React.useState(false); // Here's our useSpring Hook to define start and end states const contentProps = useSpring({ opacity: contentStatus ? 1 : 0, marginTop: contentStatus ? 0 : -1000 }) return ( <div className="container"> <div className="button-container"> <button onClick={() => displayContent(a => !a)}Toggle Content</button> </div> { !contentStatus ? ( <div> No Content </div> ) : ( // Here's where the animated hook comes into play <animated.div className="box" style={ contentProps }> <h1> This content slid down. Thanks to React Spring </h1> </animated.div> ) } </div> ) }
First, we set up the state for the component. Then we make use of
useSpring to set up the animations we need. When
contentStatus is
true, we want the values of
marginTop and
opacity to be
0 and
1, respectively. Else, they should be
-1000 and
0. These values are assigned to
contentProps which we then pass as props to
animated.div.
When the value of
contentStatus changes, as a result of clicking the button, the values of
opacity and
marginTop changes alongside. This cause the content to slide down.
See the Pen
react spring 2 by Kingsley Silas Chijioke (@kinsomicrote)
on CodePen.
Trail
The Trail prop animates a list of items. The animation is applied to the first item, then the siblings follow suit. To see how that works out, we’ll build a component that makes a GET request to fetch a list of users, then we will animate how they render. Like we did with Spring, we’ll see how to do this using both the render props and hooks API separately.
First, the render props.
class App extends React.Component { state = { isLoading: true, users: [], error: null }; // We're using the Fetch <abbr>API</abbr> to grab user data // fetchUsers() { fetch(``) .then(response => response.json()) .then(data => // More on setState: this.setState({ users: data, isLoading: false, }) ) .catch(error => this.setState({ error, isLoading: false })); } componentDidMount() { this.fetchUsers(); } render() { const { isLoading, users, error } = this.state; return ( <div> <h1>Random User</h1> {error ? <p>{error.message}</p> : null} {!isLoading ? ( // Let's define the items, keys and states using Trail <Trail items={users} keys={user => user.id} from={{ marginLeft: -20, opacity: 0, transform: 'translate3d(0,-40px,0)' }} to={{ marginLeft: 20, opacity: 1, transform: 'translate3d(0,0px,0)' }} > {user => props => ( <div style={props} {user.username} </div> )} </Trail> ) : ( <h3>Loading...</h3> )} </div> ); } }
When the component mounts, we make a request to fetch some random users from a third-party API service. Then, we update
this.state.users using the data the API returns. We could list the users without animation, and that will look like this:
users.map(user => { const { username, name, email } = user; return ( <div key={username}> <p>{username}</p> </div> ); })
But since we want to animate the list, we have to pass the items as props to the Trail component:
<Trail items={users} keys={user => user.id} from={{ marginLeft: -20, opacity: 0, transform: 'translate3d(0,-40px,0)' }} to={{ marginLeft: 20, opacity: 1, transform: 'translate3d(0,0px,0)' }} > {user => props => ( <div style={props} {user.username} </div> )} </Trail>
We set the keys to the ID of each user. You can see we are also making use of the
from and
to props to determine where the animation should start and end.
Now our list of users slides in with a subtle animation:
See the Pen
React Spring - Trail 1 by Kingsley Silas Chijioke (@kinsomicrote)
on CodePen.
The hooks API gives us access to
useTrail hook. Since we are not making use of a class component, we can make use of the
useEffect hook (which is similar to
componentDidMount and
componentDidUpdate lifecycle methods) to fetch the users when the component mounts.
const App = () => { const [users, setUsers] = useState([]); useEffect(() => { fetch(``) .then(response => response.json()) .then(data => setUsers(data) ) }, []) const trail = useTrail(users.length, { from: { marginLeft: -20, opacity: 0, transform: 'translate3d(0,-40px,0)' }, to: { marginLeft: 20, opacity: 1, transform: 'translate3d(0,0px,0)' } }) return ( <React.Fragment> <h1>Random User</h1> {trail.map((props, index) => { return ( <animated.div key={users[index]} style={props} {users[index].username} </animated.div> ) })} </React.Fragment> ); }
We have the initial state of
users set to an empty array. Using
useEffect, we fetch the users from the API and set a new state using the
setUsers method we created with help from the
useState hook.
Using the
useTrail hook, we create the animated style passing it values for
from and
to, and we also pass in the length of the items we want to animate. In the part where we want to render the list of users, we return the array containing the animated props.
See the Pen
React Spring -Trail 2 by Kingsley Silas Chijioke (@kinsomicrote)
on CodePen.
Go, spring into action!
Now you have a new and relatively easy way to work with animations in React. Try animating different aspects of your application where you see the need. Of course, be mindful of user preferences when it comes to animations because they can be detrimental to accessibility.
While you’re at it, ensure you check out the official website of react-spring because there are tons of demo to get your creative juices flowing with animation ideas.
Great tutorial! | https://css-tricks.com/creating-animations-using-react-spring/ | CC-MAIN-2019-35 | en | refinedweb |
Applications¶
Applications are used to provide additional functionality to guillotina.
Core addons¶
guillotina.contrib.swagger: Activate swagger support at
/@docs.
guillotina.contrib.catalog.pg: Provide search functionality with postgresql queries.
guillotina.contrib.cache: Cache support for guillotina.
guillotina.contrib.redis: Cache support for guillotina using redis with invalidation across multiple instances.
guillotina.contrib.pubsub: Pubsub support for guillotina
Community Addons¶
Some useful addons to use in your own development:
- guillotina_elasticsearch: Index content in elastic search
- guillotina_dbusers: Store and authenticate users in the database
- guillotina_mailer: async send mail
Creating¶
An application is a Python package that implements an entry point to tell guillotina to load it.
If you’re not familiar with how to build Python applications, please read documentation on building packages before you continue.
In this example,
guillotina_myaddon is your package module.
Initialization¶
Your
config.yaml file will need to provide the application name in the
applications array for it to be initialized.
applications: - guillotina_myaddon
Configuration¶
Once you create a
guillotina application, there are two primary ways for it
to hook into
guillotina.
Call the
includeme function¶
Your application can provide an
includeme function at the root of the module
and
guillotina will call it with the instance of the
root object.
def includeme(root): # do initialization here... pass | https://guillotina.readthedocs.io/en/latest/developer/applications.html | CC-MAIN-2019-35 | en | refinedweb |
Every Engineer who loves to tinker with electronics at some point of time would want to have their own lab set-up. A Multimeter, Clamp meter, Oscilloscope, LCR Meter, Function Generator, Dual mode power supply and an Auto transformer are the bare minimum equipments for a decent lab set-up. While all of these can be purchased, we can also easily built few on our own like the Function Generator and the Dual mode power supply.
In this article we will learn how quickly and easily we can build our own Function generator using Arduino. This function generator a.k.a waveform generator. Apart from that, the generator can also produce since wave with frequency control. Do note that this generator is not of industrial grade and cannot be used for serious testing. But other than that it will come in handy for all hobby projects and you need not wait in weeks for the shipment to arrive. Also what’s more fun than using a device, that we built on our own.
Materials Required
- Arduino Nano
- 16*2 Alphanumeric LCD display
- Rotary Encoder
- Resistor(5.6K,10K)
- Capacitor (0.1uF)
- Perf board, Bergstik
- Soldering Kit
Circuit Diagram
The complete circuit diagram this Arduino Function Generator is shown below. As you can see we have an Arduino Nano which acts as the brain of our project and an 16x2 LCD to display the value of frequency that is currently being generated. We also have a rotary encoder which will help us to set the frequency.
The complete set-up is powered by the USB port of the Arduino itself. The connections which I used previously didn’t turn out to work dues to some reasons which we will discuss later in this article. Hence I had to mess up with the wiring a bit by changing the pin order. Anyhow, you will not have any such issues as it is all sorted out, just follow the circuit carefully to know which pin is connect to what. You can also refer the below table to verify your connections.
The circuit is pretty simple; we produce a square wave on pin D9 which can be used as such, the frequency of this square wave is controlled by the rotary encoder. Then to get a sine wave we produce SPWM signal on pin D5, the frequency of this has to be related with PWM frequency so we provide this PWM signal to pin D2 to act as an interrupt and then use the ISR to control the frequency of the since wave.
You can build the circuit on a breadboard or even get a PCB for it. But I decided to solder it on a Perf board to get the work done fast and make it reliable for long term use. My board looks like this once all the connections are complete.
If you want to know more on how the PWM and Sine wave is produced with Arduino read the following paragraphs, else you can scroll down directly to the Programming Arduino section.
Producing Square Wave with Variable Frequency
People who are using Arduino might be familiar that Arduino can produce PWM signals simply by using the analog write function. But this function is limited only to control the duty cycle of the PWM signal and not the frequency of the signal. But for a waveform generator we need a PWM signal whose frequency can be controlled. This can be done by directly controlling the Timers of the Arduino and toggling a GPIO pin based on it. But there are some pre-built libraries which do just the same and can be used as such. The library that we are using is the Arduino PWM Frequency Library. We will discuss more about this library in the coding section.
There are some drawbacks with this library as well, because the library alters the default Timer 1 and Timer 2 settings in Arduino. Hence you will no longer be able to use servo library or any other timer related library with your Arduino. Also the analog write function on pins 9,10,11 & 13 uses Timer 1 and Timer 2 hence you will not be able to produce SPWM on those pins.
The advantage of this library is that it does not disturb the Timer 0 of your Arduino, which is more vital than Timer 1 and Timer 2. Because of this you are free to use the delay function and millis() function without any problem. Also the pins 5 and 6 are controlled by Timer 0 hence we won’t be having problem in using analog write or servo control operation on those pins. Initially it took some time for me to figure this out and that is why the wiring is messed up a bit.
Here we have also built one Simple Square waveform generator, but to change the frequency of waveform you have to replace Resistor or capacitor, and it will hard to get the required frequency.
Producing Sine Wave using Arduino
As we know microcontrollers are Digital devices and they cannot produce Sine wave by mere coding. But there two popular ways of obtaining a sine wave from a microcontroller one is by utilizing a DAC and the other is by creating a SPWM. Unfortunately Arduino boards (except Due) does not come with a built-in DAC to produce sine wave, but you can always build your own DAC using the simple R2R method and then use it to produce a decent sine wave. But to reduce the hardware work I decided to use the later method of creating a SPWM signal and then converting it to Sine wave.
What is a SPWM signal?
The term SPWM stands for Sinusoidal Pulse Width Modulation. This signal is very much similar to the PWM, but for an SPWM signal the duty cycle is controlled in such a manner to obtain an average voltage similar to that of a sine wave. For example, with 100% duty cycle the average output voltage will be 5V and for 25% we will have 1.25V thus controlling the duty cycle we can get pre-defined variable average voltage which is nothing but a sine wave. This technique is commonly used in Inverters.
In the above image, the blue signal is the SPWM signal. Notice that the duty cycle of the wave is varied from 0% to 100% and then back to 0%. The graph is plotted for -1.0 to +1.0V but in our case, since we are using an Arduino the scale will be form 0V to 5V. We will learn how to produce SPWM with Arduino in the programming section below.
Converting SPWM to Sine wave
Converting a SPWM single to sine wave requires a H-bridge circuit which consists of minimum 4 power switches. We will not go much deeper into it since we are not using it here. These H-bridge circuits are commonly used in inverters. It utilizes two SPWM signals where one is phase shifted from the other and both the signals are applied to the power switches in the H-bridge to make diagonal opposing switches turn on and off and the same time. This way we can get a wave form that looks similar to sine wave but will practically not be closer to anything shown in the figure above (green wave). To get a pure since wave output we have to use a filter like the low-pass filter which comprises of an Inductor and Capacitor.
However in our circuit, we will not be using the sine wave to power anything. I simply wanted to create from the generated SPWM signal so I went with a simple RC-Filter. You can also try a LC-Filter for better results but I chose RC for simplicity. The value of my resistor is 620 Ohms and the capacitor is 10uF. The above image shows the SPWM signal (Yellow) from the pin 5 and the sine wave (Blue) which was obtained after passing it through a RC-Filter.
If you don’t want to vary the frequency, you can also generate sine wave by using this Simple Sine Wave Generator Circuit using Transistor.
Adding the Arduino PWM Frequency Library
The Arduino Frequency Library can be downloaded by clicking on the link below.
At the time of writing this article, the Arduino PWM Frequency Librarey V_05 is the latest one and it will get downloaded as a ZIP file. Extract the ZIP file ad you will get a folder called PWM. Then navigate to the Libraries folder of your Arduino IDE, for windows users it will be in your documents at this path C:\Users\User\Documents\Arduino\libraries. Paste the PWM folder into the libraries folder. Sometimes you might already have a PWM folder in there, in that case make sure you replace the old one with this new one.
Programming Arduino for Waveform Generator
As always the complete program for this project can be found at the bottom of this page. You can use the code as such, but make sure you have added the variable frequency library for Arduino IDE as discussed above else you will get compile time error. In this section let’s look in to the code to understand what is happening.
Basically we want to produce a PWM signal with variable frequency on pin 9. This frequency should be set using the rotary encoder and the value should also be displayed in the 16*2 LCD. Once the PWM signal is created on pin 9 it will create an interrupt on pin 2 since we have shorted both the pins. Using this interrupt we can control the frequency of the SPWM signal which is generated on pin 5.
As always we begin our program by including the required library. The liquid crystal library is in-built in Arduino and we just installed the PWM library.
#include <PWM.h> //PWM librarey for controlling freq. of PWM signal #include <LiquidCrystal.h>
Next we declare the global variable and also mention the pin names for the LCD, Rotary Encoder and signal pins. You can leave this undisturbed if you have followed the circuit diagram above.
const int rs = 14, en = 15, d4 = 4, d5 = 3, d6 = 6, d7 = 7; //Mention the pin number for LCD connection LiquidCrystal lcd(rs, en, d4, d5, d6, d7); const int Encoder_OuputA = 11; const int Encoder_OuputB = 12; const int Encoder_Switch = 10; const int signal_pin = 9; const int Sine_pin = 5; const int POT_pin = A2; int Previous_Output; int multiplier = 1; double angle = 0; double increment = 0.2; int32_t frequency; //frequency to be set int32_t lower_level_freq = 1; //Lowest possible freq value is 1Hz int32_t upper_level_freq = 100000; //Maximum possible freq is 100KHz
Inside the setup function we initialize the LCD and serial communication for debugging purpose and then declare the pins of Encoder as input pins. We also display an intro message during the boot just to make sure things are working. //pin Mode declaration pinMode (Encoder_OuputA, INPUT); pinMode (Encoder_OuputB, INPUT); pinMode (Encoder_Switch, INPUT);
Another important line is the InitTimerSafe which initializes the timer 1 and 2 for producing a variable frequency PWM. Once this function is called the default timer settings of Arduino will be altered.
InitTimersSafe(); //Initialize timers without disturbing timer 0
We also have the external interrupt running on pin 2. So whenever there a change in status of pin 2 an interrupt will be triggered which will run the Interrupt service routine (ISR) function. Here the name of the ISR function is generate_sine.
attachInterrupt(0,generate_sine,CHANGE);
Next, inside the void loop we have to check if the rotary encoder has been turned. Only if it has been turned we need to adjust the frequency of the PWM signal. We have already learnt how to interface Rotary Encoder with Arduino. If you are new here I would recommend you to fall back to that tutorial and then get back here.
If the rotary encoder is turned clockwise we increase the value of frequency by adding it with the value of multiplier. This way we can increase/decrease the value of frequency by 1, 10, 100 or even 1000. The value of multiplier can be set by pressing the rotary encoder. If the encoder is rotated we alter the value of frequency and produce a PWM signal on pin 9 with the following lines. Here the value 32768 sets the PWM to 50% cycle. The value 32768 is chosen, since 50% of 65536 is 32768 similarly you can determine the value for your required duty cycles. But here the duty cycle is fixed to 50%. Finally the function SetPinFrequencySafe is used to set the frequency of our signal pin that is pin 9.
pwmWriteHR(signal_pin, 32768); //Set duty cycle to 50% by default -> for 16-bit 65536/2 = 32768 SetPinFrequencySafe(signal_pin, frequency);
Inside the ISR function we write the code to generate SPWM signal. There are many ways to generate SPWM signals and even pre-built libraries are available for Arduino. I have used the simplest of all methods of utilizing the sin() function in Arduino. You can also try with the lookup table method if you are interested. The sin() returns a variable value (decimal) between -1 to +1 and this when plotted against time will give us a sine wave.
Now all we have to do is convert this value of -1 to +1 into 0 to 255 and feed it to our analog Write function. For which I have multiplied it with 255 just to ignore the decimal point and then used the map function to convert the value from -255 to +255 into 0 to +255. Finally this value is written to pin 5 using the analog write function. The value of angle is incremented by 0.2 every time the ISR is called this help us in controlling the frequency of the sine wave
double sineValue = sin(angle); sineValue *= 255; int plot = map(sineValue, -255, +255, 0, 255); Serial.println(plot); analogWrite(Sine_pin,plot); angle += increment;
Testing the Arduino Function Generator on Hardware
Build your hardware as per the circuit diagram and upload the code given at the bottom of this page. Now, you are all set to test your project. It would be a lot easier if you have a DSO (Oscilloscope) but you can also test it with an LED since the frequency range is very high.
Connect the probe to the Square wave and sine wave pin of the circuit. Use two LEDs on these two pins if you do not have a scope. Power up the circuit and you should be greeted with the introductory message on the LCD. Then vary the Rotary encoder and set the required frequency you should be able to observe the square wave and sine wave on your scope as shown below. If you are using an LED you should notice the LED blinking at different intervals based on the frequency you have set.
The complete working of the waveform generator can also be found at the video given at the end of this page. Hope you enjoyed the project and learnt something useful from it. If you have any questions leave them in the comment section or you could also use the forums for other technical help.
#include <PWM.h> //PWM librarey for controlling freq. of PWM signal
#include <LiquidCrystal.h>
const int rs = 14, en = 15, d4 = 4, d5 = 3, d6 = 6, d7 = 7; //Mention the pin number for LCD connection
LiquidCrystal lcd(rs, en, d4, d5, d6, d7);
int Encoder_OuputA = 11;
int Encoder_OuputB = 12;
int Encoder_Switch = 10;
int Previous_Output;
int multiplier = 1;
double angle = 0;
double increment = 0.2;
const int signal_pin = 9;
const int Sine_pin = 5;
const int POT_pin = A2;
int32_t frequency; //frequency to be set
int32_t lower_level_freq = 1; //Lowest possible freq value is 1Hz
int32_t upper_level_freq = 100000; //Maximum possible freq is 100KHz
void setup()
{
InitTimersSafe(); //Initialize timers without disturbing timer 0
//pin Mode declaration
pinMode (Encoder_OuputA, INPUT);
pinMode (Encoder_OuputB, INPUT);
pinMode (Encoder_Switch, INPUT);
Previous_Output = digitalRead(Encoder_OuputA); //Read the inital value of Output A
attachInterrupt(0,generate_sine,CHANGE);
}
void loop()
{
if (digitalRead(Encoder_OuputA) != Previous_Output)
{
if (digitalRead(Encoder_OuputB) != Previous_Output)
{);
}
else
{);
}
}
if (digitalRead(Encoder_Switch) == 0)
{
multiplier = multiplier * 10;
if (multiplier>1000)
multiplier=1;
// Serial.println(multiplier);
lcd.setCursor(0, 1);
lcd.print("Cng. by: ");
lcd.setCursor(8, 1);
lcd.print(multiplier);
delay(500);
while(digitalRead(Encoder_Switch) == 0);
}
Previous_Output = digitalRead(Encoder_OuputA);
}
void generate_sine()
{
double sineValue = sin(angle);
sineValue *= 255;
int plot = map(sineValue, -255, +255, 0, 255);
Serial.println(plot);
analogWrite(Sine_pin,plot);
angle += increment;
if (angle > 180)
angle =0;
}
Dec 21, 2018
This is cool. Running on a clone nano. Seems touchy (clone?), but is dead accurate according to my DSO.
Dec 24, 2018
Well, bought a 'Genuine' Nano and tried this...
1. can't seem to run over 200HZ without hanging it.
Checked everything..Assuming D14 is A0 and D15 is A1, it's all wired correctly.
2. why is this line in there: const int POT_pin = A2; Doesn't seem like it's used anywhere. and no wires
are on A2.
I think it was touchy before because of the encoder.
Any ideas why yours does so much better than mine? My Nano is listed as "Arduino Nano [A000005] which should be v 3.x.
Mar 03, 2019
My encoder needed a pull-up on the switch. I had to swap the A and B signals from it as well.
The square wave output is not toggling until I turn the encoder backwards a step. Is this expected?
I am getting a sine wave out, but at a much lower frequency: setting at 1000Hz, I get a sinewave of 6.8Hz (after adding a 1uF cap). Is this expected?
Thanks!
David
Mar 03, 2019
Aug 14, 2019
Good work david, yes the change in frequency can be expected and it depends on the freq. of your SPWM. As I mentioned in tutorial, you can realy on the square wave but sine wave would work just crude.
Mar 17, 2019
Very nice design, very precisely wave. Is there any simple way to increase precision to two decimal places - resolution 0.01 Hz? I tried to divide the frequency / 100 - SetPinFrequencySafe (signal_pin, frequency / 100); but then it automatically enters 500Hz. I also tryied change int32_t for floate in librarie without sukces. Greetings.
Aug 14, 2019
Hi Radek, the precesion of the wave is limited by the clock frequency of the arduino board. If you are trying something more precesie I would recommend upgrading to STM32 or LPC2148 | https://circuitdigest.com/microcontroller-projects/arduino-waveform-generator | CC-MAIN-2019-35 | en | refinedweb |
This page describes how to create a Cloud Dataflow project and run an example pipeline from within Eclipse.
The Cloud Tools for Eclipse plugin works only with the Cloud Dataflow SDK distribution versions 2.0.0 to 2.5.0. The Cloud Dataflow Eclipse plugin does not work with the Apache Beam SDK distribution.
See the Cloud Tools for Eclipse plugin release notes for announcements about new or updated features, bug fixes, known issues, and deprecated functionality.
Before you begin
If you don't already have one, sign up for a new account.
Select or create a GCP project.
Go to the Project selector page
Verifica che la fatturazione sia attivata per il tuo progetto.
scopri come attivare la fatturazione
- Enable the Cloud Dataflow, Compute Engine, Stackdriver Logging, Google Cloud Storage, Google Cloud Storage JSON, BigQuery, Cloud Pub/Sub, Cloud Datastore, and Cloud Resource Manager APIs.
- Install and initialize the Cloud SDK.
- Ensure you have installed Eclipse IDE version 4.7 or later.
- Ensure you have installed the Java Development Kit (JDK) version 1.8 or later.
- Ensure you have installed the latest version of the Cloud Dataflow plugin.
- If you have not done so already, follow the Cloud Dataflow Quickstart to install the plugin.
- Or, select Help > Check for Updates to update your plugin to the latest version.
Create a Cloud Dataflow project in Eclipse
To create a new project, use the New Project wizard to generate a template application that you can use as the start for your own application.
If you don't have an application, you can run the WordCount sample app to complete the rest of these procedures.
- Select File -> New -> Project.
- In the Google Cloud Platform directory, select Cloud Dataflow Java Project.
- Enter the Group ID.
- Enter the Artifact ID.
- Select the Project Template. For the WordCount sample, select Example pipelines.
- Select the Project Dataflow Version. For the WordCount sample, select 2.5.0.
- Enter the Package name. For the WordCount sample, enter com.google.cloud.dataflow.examples.
- Click Next.
![ A wizard to select the
type of project you are creating. There are directories for General, Eclipse Modeling Framework,
EJB, Java, and Java EE. There is also a Google Cloud Dataflow Run Options dialog.
- Select the account associated with your Google Cloud Platform project or add a new account. To add a new account:
- Select Add a new account... in the Account drop-down menu.
- A new browser window opens to complete the sign in process.
- Enter your Cloud Platform Project ID.
- Select a Cloud Storage Staging Location or create a new staging location. To create a new staging location:
- Enter a unique name for Cloud Storage Staging Location. Location name must include the bucket name and a folder. Objects are created in your Cloud Storage bucket inside the specified folder. Do not include sensitive information in the bucket name because the bucket namespace is global and publicly visible.
-".
- Click Create Bucket.
- Click Browse to navigate to your service account key.
- Click Finish. Dataflow.
- Click Run.
- When the job finishes, among other output, you should see the following line in the Eclipse console:
Submitted job: <job_id>
![ A dialog
to select the Dataflow Pipeline run configuration. Options include Apache
Tomcat, App Engine Local Server, Dataflow Pipeline, Eclipse Application,
Eclipse Data Tools. The mouse pointer hovers over the New Launch
Configuration button, and the New launch configuration tooltip for that
button displays.]()
![ A dialog with
the Arguments tab selected. In the Program arguments field, the --output
option is set to the writable staging location.]()
Clean up
To avoid incurring charges to your. | https://cloud.google.com/dataflow/docs/quickstarts/quickstart-java-eclipse?hl=it | CC-MAIN-2019-35 | en | refinedweb |
Routes
In any web framework, "routes" are one of the core elements of what happens on a website. Certainly rendering content when a user hits a particular URL is a majority of what happens in web development.
Reaction uses the React Router package for routing. To get started with routing in Reaction, here are two important elements to understand:
- Reaction stores all its Routes in the "Registry" in the database. This allows packages to dynamically add routes along with their functionality, and even override or remove existing routers.
- The customized version of React Router is available globally as
Reaction.Router.
For more in-depth coverage, consult the main Reaction documentation on Routing and the React Router documentation.
But we are going to keep it at its most simple and just add a single new route which will be available to anybody. Bee's Knees wants to add the ubiquitous "About" page to their site and wants to show essentially a static page there. (Management of static pages is coming in upcoming version of RC but this still makes an excellent simple example).
So the first thing we want to do is add the route in the Registry which we do by adding an entry in the
registry key in
our
register.js file.
This entry will look like this (placed after the
autoEnable: true entry):
registry: [ { route: "/about", name: "about", template: "aboutUs", workflow: "coreWorkflow" } ],
The
route entry is the URL that will match the users URL. (for how to include parameters in the route, please see the RC documentation or the React Router documentation)
The
name is the string by which you will refer to this route in other parts of the application. The
template is the
template that will be rendered when the route is visited, and the
workflow defines which workflow this will be attached to.
In our case, there is no real workflow around an about page so we use the default "coreWorkflow".
To allow users to our new Route we need to give them permissions. Since we are good with everyone viewing our About page we will add this permission to our "defaultRoles" and "defaultVisitorRoles" (the roles available when a new user is created).
To do this we are going to create a new file called
init.js in the
server directory there and add that file to our imports. Then we
will add a function that looks like this:
function addRolesToVisitors() { // Add the about permission to all default roles since it's available to all Logger.info("::: Adding about route permissions to default roles") const shop = Shops.findOne(Reaction.getShopId()); Shops.update(shop._id, { $addToSet: { "defaultVisitorRole": "about"} } ); Shops.update(shop._id, { $addToSet: { "defaultRoles": "about"} }); }
Then let's add another Hook Event to call that code.
/** * Hook to make additional configuration changes */ Hooks.Events.add("afterCoreInit", () => { addRolesToVisitors(); });
Now, as usual you will need to reset for this change to take affect. In addition, changes to defaultRoles/defaultVisitorRoles do not change existing users, so you will need to clear your cache or use Private/Incognito mode so that a new user is created.
Using Route "Hooks"Using Route "Hooks"
It's common to want to write code to do something when a url visits a certain route for such things as site tracking/metric. You can do this with a Route "hook".
We can do this using the
Hooks API provided by Reaction. For any route you can add an arbitrary callback. (Note that
routing is done on the client-side, so it needs to be added there). So are going to add a new
init.js file in our
client
directory and add the import to it in the
index.js. Then we can add this code:
import { Router, Logger } from "/client/api"; // create a function to do something on the product detail page function logSomeStuff() { Logger.warn("We're arriving at the product page!"); } // add that to the product detail page onEnter hook Router.Hooks.onEnter("product", logSomeStuff);
Now every time the user enters the "product" route, the function
logSomeStuff will run. If you want to see a list
of routes currently loaded on the client you type
ReactionRouter._routes in the browser console. | https://docs.reactioncommerce.com/docs/1.16.0/plugin-routes-6 | CC-MAIN-2019-35 | en | refinedweb |
The objective of this post is to explain how to configure an asynchronous HTTP web server on the Arduino core running on the ESP32. The tests of this ESP32 tutorial were performed using a DFRobot’s ESP-WROOM-32 device integrated in a ESP32 FireBeetle board.
Introduction
The objective of this post is to explain how to configure an Asynchronous HTTP web server on the Arduino core running on the ESP32.
As example, we will develop a very simple “hello world” application that will return a message to the clients that connect to it. As client, we will use a web browser.
The tests of this ESP32 tutorial were performed using a DFRobot’s ESP-WROOM-32 device integrated in a ESP32 FireBeetle board.
If you prefer a video tutorial, please check the video below from my YouTube channel:
The libraries
In order to setup the web server, we will need two libraries. The first one is the ESPAsyncWebServer, which we will use in our code.
This library allows setting an asynchronous HTTP (and Websocket) server, meaning that it can handle more than one connection at the same time [1].
Furthermore, as we will see in the code, once we set the server callback functions, we don’t need to periodically call any client handling function on the main loop, like we had to do on the ESP8266 HTTP web server original implementation.
The second library needed is the AsyncTCP, which is a dependency for the previous one. Thus, we will not directly interact with it in our code.
This library is an asynchronous TCP library for the ESP32 and it is the base for the ESPAsyncWebServer library implementation [2]. Naturally, this is a lower level library which is more complex to use.
At the time of writing the libraries were not available on the Arduino IDE libraries manager, so we have to download them from the GitHub pages and place them on our Arduino libraries folder.
To download both of the libraries, simply click the “Clone or download” button on the top of the GitHub page, as highlighted in figure 1.
Figure 1 – Downloading the libraries code from GitHub.
Then, select the “Download ZIP” option and the file should be downloaded to your computer. Just open the .zip file and extract the folder to your Arduino libraries folder.
Usually, the libraries folder for the Arduino installation is located on the C:\Users\UserName\Documents\Arduino\libraries folder.
Note that the extracted folder has a -master at the end of its name. Just delete this appended -master and keep the remaining name.
After that, the libraries should be available for use on the Arduino environment. This procedure applies to installing both libraries.
The code
For this example we will need to include two libraries. First of all, we will need to include the WiFi.h library, which is needed for connecting the ESP32 to a Wireless network.
Finally, we will include the previously installed asynchronous HTTP web server library, namely the ESPAsyncWebServer.h.
#include "WiFi.h" #include "ESPAsyncWebServer.h"
After these includes, we will declare two global variables to hold our WiFi network credentials, so we can later use them to perform the connection.
const char* ssid = "yourNetworkName"; const char* password = "yourNetworkPassword";
To finalize, we will declare a variable of class AsyncWebServer, which we will use to set up our asynchronous ESP32 HTTP server.
As input of the constructor, we will pass the port where the server will be listening. We will use port 80, which is the default HTTP port.
AsyncWebServer server(80);
Moving on to the setup function, we will start by opening a serial connection. Then, we will connect the ESP32 to the WiFi network using the previously declared credentials. If you need a detailed explanation on how to connect the ESP32 to a WiFi network, please refer to this previous article.
Serial.begin(115200); WiFi.begin(ssid, password); while (WiFi.status() != WL_CONNECTED) { delay(1000); Serial.println("Connecting to WiFi.."); } Serial.println(WiFi.localIP());
Note that, after the connection finishes, we are printing the local IP assigned to the ESP32, so we can later use it to make a request to our server.
Now we are going to configure the route where server will be listening for incoming HTTP requests and a function that will be executed when a request is received on that route.
We specify this by calling the on method on the server object. As first input, this method receives a string with the path where it will be listening. We are going to set it to listen for requests on the “/hello” route.
As second parameter, it receives an enum of type WebRequestMethod (defined here), which allows to specify which type of HTTP request is allowed on that route. We will specify that we only want to receive HTTP GET requests, and thus we use the value HTTP_GET.
As third argument, it receives a function to which the signature is defined by the ArRequestHandlerFunction type, which can be seen here.
So, this handling function we will specify has to return void and receives as parameter a pointer to an object of type AsyncWebServerRequest. Each incoming client will be wrapped in an object of this class and both live together until disconnection [3].
In order to keep the syntax compact, we will declare this handling function as a C++ lambda function. Thus, we can specify a locally declared unnamed function. For servers with many routes, this is much cleaner and compact than having to declare a named function for each route. You can read more about lambdas here.
We will use the following lambda syntax:
[captures](params){body}
In our case, we will not use any captures, so we simply use empty square brackets []. For the params, we will need to respect the signature of the previously mentioned definition of the handling function, which is specified by the ArRequestHandlerFunction type. Thus, our lambda will receive a parameter which is a pointer to an object of type AsyncWebServerRequest.
server.on("/hello", HTTP_GET, [](AsyncWebServerRequest *request){ // Lambda body implementation });
For our handling function implementation, we want to return to the client a simple “hello world” message. As said before, each client is associated with a AsyncWebServerRequest object, which has a method called send that allows us to specify the HTTP response to be returned.
This method receives as first input the HTTP response code, which will be 200 in our case. This is the HTTP response code for “OK”.
As second input, the send method receives the answer content-type of the response. We will use the value “text/plain“, since we simply want to return a “hello world” message.
Finally, as third argument, we will pass the actual content, which will be our “hello world” message.
Note that since we are working with a pointer to an object rather than the object itself we will need to use the arrow operator to call the send method on the AsyncWebServerRequest object.
server.on("/hello", HTTP_GET, [](AsyncWebServerRequest *request){ request->send(200, "text/plain", "Hello World"); });
To finalize the setup function, we will need to call the begin method on our server object. This method call will start the server.
server.begin();
Since our server is asynchronous, we will not need to call any client handling function on the main loop, as stated before. So, the route handling function we just defined will be asynchronously called and executed upon receiving requests from clients. The final code can be seen below.
#include "WiFi("/hello", HTTP_GET, [](AsyncWebServerRequest *request){ request->send(200, "text/plain", "Hello World"); }); server.begin(); } void loop(){}
Testing the code
To test the code, simply compile it and upload it to the ESP32 with your Arduino IDE. After the procedure finishes, open the serial monitor and copy the IP that gets printed once the ESP32 connects to the WiFi network.
Then, open a web browser and type the following on the address bar, changing #yourEspIp# by the IP you have just copied.
You should get an output similar to figure 2, which shows the “hello world” message we have defined on the code being printed.
Figure 2 – ESP32 HTTP web server hello world.
References
[1]
[2]
[3]
31 Replies to “ESP32 Arduino: Asynchronous HTTP web server” | https://techtutorialsx.com/2017/12/01/esp32-arduino-asynchronous-http-webserver/ | CC-MAIN-2019-35 | en | refinedweb |
Introduction
This page contains a summary of best practices drawn from other pages in the Cloud Storage documentation. You can use the best practices listed here as a quick reference of what to keep in mind when building an application that uses Cloud Storage. Follow these best practices when launching a commercial application.
If you are just starting out with Cloud Storage, this page may not be the best place to start, because it does not teach you the basics of how to use Cloud Storage. If you are a new user, we suggest that you start with Getting Started: Using the GCP Console or Getting Started: Using the gsutil Tool.
Naming
The bucket namespace is global and publicly visible. Every bucket name must be unique across the entire Cloud Storage namespace. For more information, see Bucket and Object Naming Guidelines.
If you need a lot of buckets, use GUIDs or an equivalent for bucket names, put retry logic in your code to handle name collisions, and keep a list to cross-reference your buckets. Another option is to use domain-named buckets and manage the bucket names as sub-domains.
Don't use user IDs, email addresses, project names, project numbers, or any personally identifiable information (PII) in bucket names because anyone can probe for the existence of a bucket. Similarly, be very careful with putting PII in your object names, because object names appear in URLs for the object. add the hash of the sequence number as part of the filename to make it non-sequential. For more information, see Request Rate and Access Distribution Guidelines.
Traffic
Perform a back-of-the-envelope estimation of the amount of traffic that will be sent to Cloud Storage. Specifically, think about:
Operations per second. How many operations per second do you expect, for both buckets and objects, and for create, update, and delete operations.
Bandwidth. How much data will be sent, over what time frame?
Cache control. Specifying the
Cache-Controlmetadata on objects will benefit read latency on hot or frequently accessed objects. See Viewing and Editing Metadata for instructions for setting object metadata, such as
Cache-Control.
Design your application to minimize spikes in traffic. If there are clients of your application doing updates, spread them out throughout the day.
While Cloud Storage has no upper bound on the request rate, for the best performance when scaling to high request rates, follow the Request Rate and Access Distribution Guidelines.
Be aware that there are rate limits for certain operations and design your application accordingly.
If you get an error, use exponential backoff or Coldline Storage class.
Store your data in a region closest to your application's users. For instance, for EU data you might choose an EU bucket, and for US data you might choose a US bucket. For more information, see Bucket Locations.
Keep compliance requirements in mind when choosing a location for user data. Are there legal requirements around the locations that your users will be providing data?
Security, ACLs, and access control
The first and foremost precaution is: Never share your credentials. Each user should have distinct credentials.
When you print out HTTP protocol details, your authentication credentials, such as OAuth 2.0 tokens, are visible in the headers. If you need to post protocol details to a message board or need to supply HTTP protocol details for troubleshooting, make sure that you sanitize or revoke any credentials that appear as part of the output.
Always use TLS (HTTPS) to transport your data when you can. This ensures that your credentials as well as your data are protected as you transport data over the network. For example, to access the Cloud Storage API, you should use.
Make sure that you use an HTTPS library that validates server certificates. A lack of server certificate validation makes your application vulnerable to man-in-the-middle attacks or other attacks. Be aware that HTTPS libraries shipped with certain commonly used implementation languages do not, by default, verify server certificates. For example, Python before version 3.2 has no built-in or complete support for server certificate validation, and you need to use third-party wrapper libraries to ensure your application validates server certificates. The boto plugin includes code that validates server certificates by default.
When applications no longer need access to your data, you should revoke their authentication credentials. For Google services and APIs, you can do this by logging into your Google Account Permissions and clicking on the unneeded applications, then clicking Remove Access.
Make sure that you securely store your credentials. This can be done differently depending on your environment and where you store your credentials. For example, if you store your credentials in a configuration file, make sure that you set appropriate permissions on that file to prevent unwanted access. If you are using Google App Engine, consider using
StorageByKeyNameto store your credentials.
Cloud Storage requests refer to buckets and objects by their names. As a result, even though ACLs will prevent unauthorized third parties from operating on buckets or objects, a third party can attempt requests with bucket or object names and determine their existence by observing the error responses. It can then be possible for information in bucket or object names to be leaked. If you are concerned about the privacy of your bucket or object names, you should take appropriate precautions, such as:
Choosing bucket and object names that are difficult to guess. For example, a bucket named
mybucket-gtbytul3is random enough that unauthorized third parties cannot feasibly guess it or enumerate other bucket names from it.
Avoiding use of sensitive information as part of bucket or object names. For example, instead of naming your bucket
mysecretproject-prodbucket, name it
somemeaninglesscodename-prod. In some applications, you may want to keep sensitive metadata in custom Cloud Storage headers such as
x-goog-meta, rather than encoding the metadata in object names.
Use groups in preference to explicitly listing large numbers of users. Not only does it scale better, it also provides a very efficient way to update the access control for a large number of objects all at once. Lastly, it’s cheaper as you don’t need to make a request per-object to change the ACLs.
Before adding objects to a bucket, check that the default object ACLs are set to your requirements first. This could save you a lot of time updating ACLs for individual objects.
Bucket and object ACLs are independent of each other, which means that the ACLs on a bucket do not affect the ACLs on objects inside that bucket. It is possible for a user without permissions for a bucket to have permissions for an object inside the bucket. For example, you can create a bucket such that only GroupA is granted permission to list the objects in the bucket, but then upload an object into that bucket that allows GroupB
READaccess to the object. GroupB will be able to read the object, but will not be able to view the contents of the bucket or perform bucket-related tasks.
The securely to users who don't have Google accounts we recommend you use signed URLs. For example, with signed URLs you can provide a link to an object and your application's customers do not need to authenticate with Cloud Storage to access the object. When you create a signed URL you control the type (read, write, delete) and duration of access.
If you use gsutil, see these additional recommendations.
Uploading data
If you use XMLHttpRequest (XHR) callbacks to get progress updates, do not close and re-open the connection if you detect that progress has stalled. Doing so creates a bad positive feedback loop during times of network congestion. When the network is congested, XHR callbacks can get backlogged behind the acknowledgement (ACK/NACK) activity from the upload stream, and closing and reopening the connection when this happens uses more network capacity at exactly the time when you can least afford it.
For upload traffic, we recommend setting reasonably long timeouts. For a good end-user experience, you can set a client-side timer that updates the client status window with a message (e.g., "network congestion") when your application hasn't received an XHR callback for a long time. Don't just close the connection and try again when this happens.
If you use Compute Engine instances with processes that POST to Cloud Storage to initiate a resumable upload, then you should use Compute Engine instances in the same locations as your Cloud Storage buckets. You can then use a geo IP service to pick the Compute Engine region to which you route customer requests, which will help keep traffic localized to a geo-region.
For resumable uploads, the resumable session should stay in the region in which it was created. Doing so reduces cross-region traffic that arises when reading and writing the session state, improving resumable upload performance.
Avoid breaking a transfer into smaller chunks if possible and instead upload the entire content in a single chunk. Avoiding chunking removes because the deleted records are not purged from the underlying storage system immediately, thus object listing needs to skip over the deleted records when finding the objects to return.
Eventually the deleted records are removed from the underlying storage system, and object listing performance becomes normal again. This typically takes a few hours, but in some cases may take a few days.
You should design your workload to avoid listing an object range with a lot of recent deletions. For example, if you are trying to delete objects from a bucket by repeatedly listing objects then deleting them, you should use the page token returned by the object listing response to issue the next listing request, instead of restarting the listing from the beginning for each request. When you restart your listing from the beginning, each request needs to skip over all of the objects that were just deleted, causing the object listing to become slower. If you have deleted a lot of objects under a certain prefix, then try to avoid listing objects under that prefix right after the deletions.
Website hosting
The Cross-Origin Resource Sharing (CORS) topic
describes how to allow scripts hosted on other websites to access static
resources stored in a Cloud Storage bucket. The converse scenario is when you allow
scripts hosted in Cloud Storage to access static resources hosted on a website
external to Cloud Storage. In the latter scenario, the website. | https://cloud.google.com/storage/docs/best-practices?hl=ru | CC-MAIN-2019-35 | en | refinedweb |
[tip:perf/urgent] perf scripts python: call-graph-from-sql.py: Rename to exported-sql-viewer.py
From:
tip-bot for Adrian Hunter
Date:
Fri Oct 26 2018 - 03:42:26 EST ]
Commit-ID: 031c2a004ba75a4f8f2a6d0a7ca6f2fe5912de22
Gitweb:
Author: Adrian Hunter <adrian.hunter@xxxxxxxxx>
AuthorDate: Mon, 1 Oct 2018 09:28:46 +0300
Committer: Arnaldo Carvalho de Melo <acme@xxxxxxxxxx>
CommitDate: Tue, 23 Oct 2018 14:26:44 -0300
perf scripts python: call-graph-from-sql.py: Rename to exported-sql-viewer.py
Additional reports will be added to the script so rename to reflect the
more general purpose.
Signed-off-by: Adrian Hunter <adrian.hunter@xxxxxxxxx>
Cc: Andi Kleen <ak@xxxxxxxxxxxxxxx>
Cc: Jiri Olsa <jolsa@xxxxxxxxxx>
Link:
Signed-off-by: Arnaldo Carvalho de Melo <acme@xxxxxxxxxx>
---
tools/perf/Documentation/intel-pt.txt | 2 +-
tools/perf/scripts/python/export-to-postgresql.py | 2 +-
tools/perf/scripts/python/export-to-sqlite.py | 2 +-
.../python/{call-graph-from-sql.py => exported-sql-viewer.py} | 6 +++---
4 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/tools/perf/Documentation/intel-pt.txt b/tools/perf/Documentation/intel-pt.txt
index 76971d2e4164..115eaacc455f 100644
--- a/tools/perf/Documentation/intel-pt.txt
+++ b/tools/perf/Documentation/intel-pt.txt
@@ -106,7 +106,7 @@ in transaction, respectively.
While it is possible to create scripts to analyze the data, an alternative
approach is available to export the data to a sqlite or postgresql database.
Refer to script export-to-sqlite.py or export-to-postgresql.py for more details,
-and to script call-graph-from-sql.py for an example of using the database.
+and to script exported-sql-viewer.py for an example of using the database.
There is also script intel-pt-events.py which provides an example of how to
unpack the raw data for power events and PTWRITE.
diff --git a/tools/perf/scripts/python/export-to-postgresql.py b/tools/perf/scripts/python/export-to-postgresql.py
index e46f51b17513..0564dd7377f2 100644
--- a/tools/perf/scripts/python/export-to-postgresql.py
+++ b/tools/perf/scripts/python/export-to-postgresql.py
@@ -59,7 +59,7 @@ import datetime
# pt_example=# \q
#
# An example of using the database is provided by the script
-# call-graph-from-sql.py. Refer to that script for details.
+# exported-sql-viewer.py. Refer to that script for details.
#
# Tables:
#
diff --git a/tools/perf/scripts/python/export-to-sqlite.py b/tools/perf/scripts/python/export-to-sqlite.py
index e4bb82c8aba9..245caf2643ed 100644
--- a/tools/perf/scripts/python/export-to-sqlite.py
+++ b/tools/perf/scripts/python/export-to-sqlite.py
@@ -40,7 +40,7 @@ import datetime
# sqlite> .quit
#
# An example of using the database is provided by the script
-# call-graph-from-sql.py. Refer to that script for details.
+# exported-sql-viewer.py. Refer to that script for details.
#
# The database structure is practically the same as created by the script
# export-to-postgresql.py. Refer to that script for details. A notable
diff --git a/tools/perf/scripts/python/call-graph-from-sql.py b/tools/perf/scripts/python/exported-sql-viewer.py
old mode 100644
new mode 100755
similarity index 98%
rename from tools/perf/scripts/python/call-graph-from-sql.py
rename to tools/perf/scripts/python/exported-sql-viewer.py
index ee1085169a3e..03e7a1de7f31
--- a/tools/perf/scripts/python/call-graph-from-sql.py
+++ b/tools/perf/scripts/python/exported-sql-viewer.py
@@ -10,12 +10,12 @@
# Following on from the example in the export scripts, a
# call-graph can be displayed for the pt_example database like this:
#
-# python tools/perf/scripts/python/call-graph-from-sql.py pt_example
+# python tools/perf/scripts/python/exported-sql-viewer.py pt_example
#
# Note that for PostgreSQL, this script supports connecting to remote databases
# by setting hostname, port, username, password, and dbname e.g.
#
-# python tools/perf/scripts/python/call-graph-from-sql.py "hostname=myhost username=myuser password=mypassword dbname=pt_example"
+# python tools/perf/scripts/python/exported-sql-viewer.py "hostname=myhost username=myuser password=mypassword dbname=pt_example"
#
# The result is a GUI window with a tree representing a context-sensitive
# call-graph. Expanding a couple of levels of the tree and adjusting column
@@ -365,7 +365,7 @@ class DBRef():
def Main():
if (len(sys.argv) < 2):
- print >> sys.stderr, "Usage is: call-graph-from-sql.py <database name>"
+ print >> sys.stderr, "Usage is: exported-sql-viewer.py <database name>"
raise Exception("Too few arguments")
dbname = sys.argv[1] ] | http://lkml.iu.edu/hypermail/linux/kernel/1810.3/01936.html | CC-MAIN-2019-35 | en | refinedweb |
In this tutorial we will learn how to convert an image to black and white, using Python and OpenCV.
Introduction). Otherwise, we assign to it the value 255 (white).
Note whoever that this is a very simple approach, which may not give the best results if, for example, the image has different light conditions in different areas [1]. You can read here about more advanced operations that we can do in OpenCV to obtain better results.
This tutorial was tested on Windows 8.1, with version 4.0.0 of OpenCV. The Python version used was 3.7.2.
The code
The first thing we need to do is importing the cv2 module, so we have access to all the functions that will allow us to convert the image to black and white.
import cv2
Then we will need to obtain the image that we want to convert. So, to read an image from the file system, we simply need to call the imread function, passing as input the path to the file we want to read.
Note that the image will be read as a numpy ndarray.
originalImage = cv2.imread('C:/Users/N/Desktop/Test.jpg')
In order for us to be able to apply the thresholding operation, the image should be in gray scale [2], as already mentioned in the introductory section.
Thus, after reading the image, we will convert it to gray scale with a call to the cvtColor function. For a detailed explanation on how to convert an image to gray scale using OpenCV, please check here.
So, as first input of the cvtColor, we will pass the original image. As second input we need to pass the color space conversion.
grayImage = cv2.cvtColor(originalImage, cv2.COLOR_BGR2GRAY)
Now, to convert our image to black and white, we will apply the thresholding operation. To do it, we need to call the threshold function of the cv2 module.
For this tutorial we are going to apply the simplest thresholding approach, which is the binary thresholding. Note however that OpenCV offers more types of thresholding, as can be seen here.
As already mentioned, the algorithm for binary thresholding corresponds to the following: for each pixel of the image, if the value of the pixel is lesser than a given threshold, then it is set to zero. Otherwise, it is set to a user defined value [3].
Note that since we are operating over a gray scale image, pixel values vary between 0 and 255. Also, since we want to convert the image to black and white, when the pixel is greater than the threshold, the value to which we want it to be converted is 255.
Naturally, the threshold function allows us to specify these parameters. So, the first input of the function is the gray scale image to which we want to apply the operation.
As second input, it receives the value of the threshold. We will consider the value 127, which is in the middle of the scale of the values a pixel in gray scale can take (from 0 to 255).
As third input, the function receives the user defined value to which a pixel should be converted in case its value is greater than the threshold. We will use the value 255, which corresponds to white. Recall that we want to convert the image to black and white, which means that at the end we want a image having pixels with either the value 0 or 255.
As fourth input, the function receives a constant indicating the type of thesholding to apply. As already mentioned, we are going to use a binary threshold, so we pass the value THRESH_BINARY.
As output, this function call will return a tuple. The first value can be ignored, since it is relevant only for more advanced thresholding methods. The second returned value corresponds to the resulting image, after applying the operation.
(thresh, blackAndWhiteImage) = cv2.threshold(grayImage, 127, 255, cv2.THRESH_BINARY)
After this, we will show the black and white image in a window, by calling the imshow function.
cv2.imshow('Black white image', blackAndWhiteImage)
For comparison, we will create two more windows to display the original image and the gray scale version.
cv2.imshow('Original image',originalImage) cv2.imshow('Gray image', grayImage)
To finalize, we will call the waitKey function with a value of zero, so it blocks indefinitely waiting for a key press. So, until the user presses a key, the windows with the images will be shown.
After the user presses a key, the function will return and unblock the execution. Then, we will call the destroyAllWindows function to destroy the previously created windows.
cv2.waitKey(0) cv2.destroyAllWindows()
The final code can be seen below.
import cv2 originalImage = cv2.imread('C:/Users/N/Desktop/Test.jpg') grayImage = cv2.cvtColor(originalImage, cv2.COLOR_BGR2GRAY) (thresh, blackAndWhiteImage) = cv2.threshold(grayImage, 127, 255, cv2.THRESH_BINARY) cv2.imshow('Black white image', blackAndWhiteImage) cv2.imshow('Original image',originalImage) cv2.imshow('Gray image', grayImage) cv2.waitKey(0) cv2.destroyAllWindows()
Testing the code
To test the code, simply run the previous Python script in an environment of your choice. Naturally, you should use as input of the imread function a path pointing to an image in your file system.
You should get an output similar to figure 1, which shows the three versions of the image being displayed in different windows. As can be seen, the image was converted to black and white, as expected.
Related Posts
References
[1]
[2]
[3] | https://techtutorialsx.com/2019/04/13/python-opencv-converting-image-to-black-and-white/ | CC-MAIN-2019-35 | en | refinedweb |
>>
Remove Boxes in C++
Suppose we have several boxes with different colors These colors are represented by different positive numbers. We can experience several rounds to remove boxes until there is no box left. In each round we can choose some continuous boxes with the same color (composed of k boxes, k >= 1), and remove them and get k*k points. So if the input is like − [1,3,2,2,2,4,4,3,1], then the output will be 21.
Find the maximum points you can get.
To solve this, we will follow these steps −
- Define a function solve(), this will take an array boxes, i, j, k, one 3D array dp,
- if i > j, then −
- return 0
- if dp[i, j, k] is not equal to -1, then −
- return dp[i, j, k]
- ret := -inf
- for checking condition i + 1 <= j and boxes[i + 1] is same as boxes[i], update (increase i by 1), (increase k by 1), do nothing −
- ret := maximum of ret and (k + 1) * (k + 1) + call the function solve(boxes, i + 1, j, 0, dp)
- for initialize x := i + 1, when x <= j, update (increase x by 1), do −
- if boxes[x] is same as boxes[i], then −
- ret := maximum of ret and solve((boxes, i + 1, x - 1, 0, dp) + solve(boxes, x, j, k + 1, dp))
- return dp[i, j, k] = ret
- From the main method, do the following
- n := size of boxes
- Define one 3D array dp of order (n + 1) x (n + 1) x (n + 1), fill this with -1
- return solve(boxes, 0, n - 1, 0, dp)
Let us see the following implementation to get better understanding −
Example
#include <bits/stdc++.h> using namespace std; class Solution { public: int solve(vector <int>& boxes, int i, int j, int k, vector < vector < vector <int > > >& dp){ if(i > j) return 0; if(dp[i][j][k] != -1) return dp[i][j][k]; int ret = INT_MIN; for(; i + 1 <= j && boxes[i + 1] == boxes[i]; i++, k++); ret = max(ret, (k + 1) * (k + 1) + solve(boxes, i + 1, j, 0, dp)); for(int x = i + 1; x <= j; x++){ if(boxes[x] == boxes[i]){ ret = max(ret, solve(boxes, i + 1, x - 1, 0, dp) + solve(boxes, x, j, k + 1, dp)); } } return dp[i][j][k] = ret; } int removeBoxes(vector<int>& boxes) { int n = boxes.size(); vector < vector < vector <int > > > dp(n + 1, vector < vector <int> > (n + 1, vector <int>(n + 1, -1))); return solve(boxes, 0, n - 1, 0, dp); } }; main(){ Solution ob; vector<int> v = {1,3,2,2,2,4,4,3,1}; cout << (ob.removeBoxes(v)); }
Input
{1,3,2,2,2,4,4,3,1}
Output
21
- Related Questions & Answers
- C++ Program to check we can remove all stones by selecting boxes
- JavaScript Dialogue Boxes
- How to remove the boxes around legend of a plot created by ggplot2 in R?
- Type of Boxes Generated in CSS
- Selenium WebDriver and DropDown Boxes.
- Program to find maximum number of boxes we can fit inside another boxes in python
- Types of Pop up boxes available in JavaScript
- Block-level Elements and Block Boxes in CSS
- Inline-level Elements and Inline Boxes in CSS
- Maximum Candies You Can Get from Boxes in C++
- Using accounting, calculate the missing boxes in the table.
- What are the role of S-boxes in DES?
- How to pass check boxes data using JSP?
- How to draw bounding boxes on an image in PyTorch?
- Find the number of boxes to be removed in C++
Advertisements | https://www.tutorialspoint.com/remove-boxes-in-cplusplus | CC-MAIN-2022-33 | en | refinedweb |
SQLAlchemy ships with a connection pooling framework that integrates with the Engine system and can also be used on its own to manage plain DB-API connections.
At the base of any database helper library is a system group or “pool” of active database connections which are reused from request to request in a single server process..
Pool instances may be created directly for your own use or to supply to sqlalchemy.create_engine() via the pool= keyword argument.
Constructing your own pool requires supplying a callable function the Pool can use to create new connections. The function will be called with no arguments.
Through this method, custom connection schemes can be made, such as a using connections from another library’s pool, or making a new connection that automatically executes some initialization commands:)
Or with SingletonThreadPool:
import sqlalchemy.pool as pool import sqlite p = pool.SingletonThreadPool(lambda: sqlite.connect(filename='myfile.db'))
Bases: sqlalchemy.pool.Pool
A Pool that allows at most one checked out connection at any given time.
This will raise an exception if more than one connection is checked out at a time. Useful for debugging code that is using more connections than desired..
Bases: sqlalchemy.log.Identified
Abstract base class for connection pools.
Construct a Pool.
Add a PoolListener-like object to this pool.
listener may be an object that implements some or all of PoolListener, or a dictionary of callables containing implementations of some or all of the named methods in PoolListener.
Dispose of this pool.
This method leaves the possibility of checked-out connections remaining open, It is advised to not reuse the pool once dispose() is called, and to instead use a new pool constructed by the recreate() method.
Bases: sqlalchemy.pool.Pool
A Pool that imposes a limit on the number of open connections.
Construct a QueuePool..
Remove all current DB-API 2.0 managers.
All pools and connections are disposed. | https://codepowered.com/manuals/SQLAlchemy-0.6.1-doc/html/reference/sqlalchemy/pooling.html | CC-MAIN-2022-33 | en | refinedweb |
Creating a colormap from a list of colors¶
For more detail on creating and manipulating colormaps see Creating Colormaps in Matplotlib.
Creating a colormap from a list of colors
can be done with the
LinearSegmentedColormap.from_list method. You must
pass a list of RGB tuples that define the mixture of colors from 0 to 1.
Creating custom colormaps¶
It is also possible to create a custom mapping for a colormap. This is accomplished by creating dictionary that specifies how the RGB channels change from one end of the cmap to the other..:
Above is an attempt to show that for x in the range x[i] to x[i+1], the interpolation is between y1[i] and y0[i+1]. So, y0[0] and y1[-1] are never used.
--- Colormaps from a list ---
colors = [(1, 0, 0), (0, 1, 0), (0, 0, 1)] # R -> G -> B n_bins = [3, 6, 10, 100] # Discretizes the interpolation into bins cmap_name = 'my_list' fig, axs = plt.subplots(2, 2, figsize=(6, 9)) fig.subplots_adjust(left=0.02, bottom=0.06, right=0.95, top=0.94, wspace=0.05) for n_bin, ax in zip(n_bins, axs.ravel()): # Create the colormap cmap = LinearSegmentedColormap.from_list(cmap_name, colors, N=n_bin) # Fewer bins will result in "coarser" colomap interpolation im = ax.imshow(Z, origin='lower', cmap=cmap) ax.set_title("N bins: %s" % n_bin) fig.colorbar(im, ax=ax)
--- Custom colormaps ---)) } # Make a modified version of cdict3 with some transparency # in the middle of the range. cdict4 = {**cdict3, 'alpha': ((0.0, 1.0, 1.0), # (0.25, 1.0, 1.0), (0.5, 0.3, 0.3), # (0.75, 1.0, 1.0), (1.0, 1.0, 1.0)), }
Now we will use this example to illustrate 2) plt.register_cmap(cmap=LinearSegmentedColormap('BlueRed3', cdict3)) plt.register_cmap(cmap=LinearSegmentedColormap('BlueRedAlpha', cdict4))
Make the figure:
fig, axs = plt.subplots(2, 2, figsize=(6, 9)) fig.subplots_adjust(left=0.02, bottom=0.06, right=0.95, top=0.94, wspace=0.05) # Make 4 subplots: im1 = axs[0, 0].imshow(Z, cmap=blue_red1) fig.colorbar(im1, ax=axs[0, 0]) cmap = plt.get_cmap('BlueRed2') im2 = axs[1, 0].imshow(Z, cmap=cmap) fig.colorbar(im2, ax=axs[1, 0]) # Now we will set the third cmap as the default. One would # not normally do this in the middle of a script like this; # it is done here just to illustrate the method. plt.rcParams['image.cmap'] = 'BlueRed3' im3 = axs[0, 1].imshow(Z) fig.colorbar(im3, ax=axs[0, 1]) axs[0, 1].set_title("Alpha = 1") # Or as yet another variation, we can replace the rcParams # specification *before* the imshow with the following *after* # imshow. # This sets the new default *and* sets the colormap of the last # image-like item plotted via pyplot, if any. # # Draw a line with low zorder so it will be behind the image. axs[1, 1].plot([0, 10 * np.pi], [0, 20 * np.pi], color='c', lw=20, zorder=-1) im4 = axs[1, 1].imshow(Z) fig.colorbar(im4, ax=axs[1, 1]) # Here it is: changing the colormap for the current image and its # colorbar after they have been plotted. im4.set_cmap('BlueRedAlpha') axs[1, 1].set_title("Varying alpha") # fig.suptitle('Custom Blue-Red colormaps', fontsize=16) fig.subplots_adjust(top=0.9) plt.show()
References¶
The use of the following functions, methods, classes and modules is shown in this example:
import matplotlib matplotlib.axes.Axes.imshow matplotlib.pyplot.imshow matplotlib.figure.Figure.colorbar matplotlib.pyplot.colorbar matplotlib.colors matplotlib.colors.LinearSegmentedColormap matplotlib.colors.LinearSegmentedColormap.from_list matplotlib.cm matplotlib.cm.ScalarMappable.set_cmap matplotlib.pyplot.register_cmap matplotlib.cm.register_cmap
Out:
<function register_cmap at 0x7f5f3318a550>
Total running time of the script: ( 0 minutes 1.465 seconds)
Keywords: matplotlib code example, codex, python plot, pyplot Gallery generated by Sphinx-Gallery | https://matplotlib.org/3.4.1/gallery/color/custom_cmap.html | CC-MAIN-2022-33 | en | refinedweb |
SQL Expression Language Tutorial
This section presents the API reference for the SQL Expression Language. For a full introduction to its usage, see SQL Expression Language Tutorial.
The alias.
Return the clause expression COLLATE collation.
Return a Delete clause element.
Similar functionality is available via the delete() method on Table.
Return a descending ORDER BY clause element.
e.g.:
order_by = [desc(table1.mycol)]
Return a DISTINCT clause.)
Return the clause extract(field FROM expr)..
Return a _Null object, which compiles to NULL in a sql statement..
Return an Alias object derived from a Select.
*args, **kwargs.
Return a SQL tuple.
Main usage is to produce a composite IN construct:
tuple_(table.c.col1, table.c.col2).in_( [(1, 2), (5, 12), (10, 19)] )
Coerce the given expression into the given type, on the Python side only.
type_coerce() is roughly similar to :func:..
Bases: sqlalchemy.sql.expression.ColumnElement
Represent a bind parameter.
Public constructor is the bindparam() function.
Construct a _BindParamClause.
Compare this _BindParamClause to the given clause.
Bases: sqlalchemy.sql.visitors.Visitable
Base class for elements of a programmatically constructed SQL expression.
Returns the Engine or Connection to which this ClauseElement is bound, or None if none found..
Compile and execute this ClauseElement.
Deprecated since version 0.7: (pending) Only SQL expressions which subclass Executable may provide the execute() method.}
Compile and execute this ClauseElement, returning
Deprecated since version 0.7: (pending) Only SQL expressions which subclass Executable may provide the scalar() method.
the result’s scalar representation. this ColumnElement to another.
Special arguments understood:
Return True if the given ColumnElement has a common ancestor to this ColumnElement.
Bases: sqlalchemy.sql.expression.ColumnOperators
Defines comparison and math operations for ClauseElement instances.
Produce a ASC clause, i.e. <columnname> ASC
Produce a BETWEEN clause, i.e. <column> BETWEEN <cleft> AND <cright>
Produce a COLLATE clause, i.e. <column> COLLATE utf8_bin
Produce the clause LIKE '%<other>%'
Produce a DESC clause, i.e. <columnname> DESC
Produce a DISTINCT clause, i.e. DISTINCT <columnname>
Produce the clause LIKE '%<other>'
Compare this element to the given element or collection using IN.
Produce a column label, i.e. <columnname> AS <name>.
This is a shortcut to the label() function..
Produce the clause LIKE '<other>%'
Defines comparison and math operations.
x.__init__(...) initializes x; see help(type(x)) for signature
Hack, allows datetime objects to be compared on the LHS..
Add the given WHERE clause to a newly returned delete construct.
Bases: sqlalchemy.sql.expression._Generative
Mark a ClauseElement as supporting execution.
Executable is a superclass for all “statement” types of objects, including select(), delete(), update(), insert(), text().
Compile and execute this Executable. invoking:
Compile and execute this Executable, returning the result’s scalar representation..
This is shorthand for calling:
from sqlalchemy import alias a = alias(self, name)
Return the collection of Column objects contained by this FromClause.
Return the collection of Column objects contained by this FromClause.
Return corresponding_column for the given column, or if None search for a match in the given dictionary.
Given a ColumnElement, return the exported ColumnElement object from this Selectable which corresponds to that original Column via a common anscestor column.
the given ColumnElement, if the given ColumnElement is actually present within a sub-element of this FromClause. Normally the column will match if it merely shares a common anscestor with one of the exported columns of this FromClause.
return a SELECT COUNT generated against this FromClause.
a brief description of this FromClause.
Used primarily for error message formatting.
Return the collection of ForeignKey objects which this FromClause references.
Return True if this FromClause is ‘derived’ from the given FromClause.
An example would be an Alias of a Table is derived from that Table.
return a join of this FromClause against another FromClause.
return an outer join of this FromClause against another FromClause.
Return the collection of Column objects which comprise the primary key of this FromClause.
replace all occurrences of FromClause ‘old’ with the given Alias object, returning a copy of this FromClause.
return a SELECT of this FromClause..
Bases: sqlalchemy.sql.expression.FromClause
represent a JOIN construct between two FromClause elements.
The public constructor function for Join is the module-level join() function, as well as the join() method available off all FromClause subclasses.
The usual entrypoint here is the join() function or the FromClause.join() method of any FromClause object. )
See alias() for further details on aliases.
Create a Select from this Join.
The equivalent long-hand form, given a Join object j, is:
from sqlalchemy import select j = select([j.left, j.right], **kw).\ where(whereclause).\ select_from(j) column expression to the columns clause of this select() construct.
append the given correlation expression to this select() construct.
append the given FromClause expression to this select() construct’s FROM clause.
append the given expression to this select() construct’s HAVING criterion.
The expression will be joined to existing HAVING criterion via AND.
append the given columns clause prefix expression to this select() construct.
append the given expression to this select() construct’s WHERE criterion.
The expression will be joined to existing WHERE criterion via AND.
return a new select() construct with the given column expression added to its columns clause. new select() construct which will apply DISTINCT to its columns clause.
return a SQL EXCEPT of this select() construct against the given selectable.
return a SQL EXCEPT ALL of this select() construct against the given selectable.
Return the displayed list of FromClause elements.
return child elements as per the ClauseElement specification.
return a new select() construct with the given expression added to its HAVING clause, joined to the existing clause via AND, if any.
an iterator of all ColumnElement expressions which would be rendered into the columns clause of the resulting SELECT statement.
return a SQL INTERSECT of this select() construct against the given selectable.
return a SQL INTERSECT ALL of this select() construct against the given selectable.
return a Set of all FromClause elements referenced by this Select.
This set is a superset of that returned by the froms property,
which is specifically for those FromClause elements that would actually be rendered.
return a new select() construct which will apply the given expression to the start of its columns clause, not using any commas.
return a new Select construct with the given FROM expression merged into its list of FROM objects..
return a ‘grouping’ construct as per the ClauseElement specification.
This produces an element that can be embedded in an expression. Note
that this method is called automatically as needed when constructing expressions.
return a SQL UNION of this select() construct against the given selectable.
return a SQL UNION ALL of this select() construct against the given selectable.
return a new select() construct with the given expression added to its WHERE clause, joined to the existing clause via AND, if any.')
return a new select() construct with its columns clause replaced with the given columns.
Deprecated since version 0.6: autocommit() is deprecated. Use Executable.execution_options() with the ‘autocommit’ flag. LIMIT criterion applied.
return a new selectable with the given OFFSET criterion applied.
return a new selectable with the given list of ORDER BY criterion applied.
The criterion will be appended to any pre-existing ORDER BY criterion..
return a SELECT COUNT generated against this TableClause.
Generate a delete() construct.
Generate an insert() construct.
Generate an update() construct.
Bases: sqlalchemy.sql.expression._ValuesBase
Represent an Update construct.
The Update object is created using the update() function.
return a new update() construct with the given expression added to its WHERE clause, joined to the existing clause via AND, if any.
specify the VALUES clause for an INSERT statement, or the SET clause for an UPDATE..
Bases: sqlalchemy.sql.functions.GenericFunction
Bases: sqlalchemy.sql.expression.Function
Bases: sqlalchemy.sql.functions.GenericFunction
Define a function whose return type is the same as its arguments.
Bases: sqlalchemy.sql.functions.GenericFunction
Bases: sqlalchemy.sql.functions.ReturnTypeFromArgs
Bases: sqlalchemy.sql.functions.GenericFunction
Bases: sqlalchemy.sql.functions.GenericFunction
The ANSI COUNT aggregate function. With no arguments, emits COUNT *..ReturnTypeFromArgs
Bases: sqlalchemy.sql.functions.ReturnTypeFromArgs
Bases: sqlalchemy.sql.functions.GenericFunction
Bases: sqlalchemy.sql.functions.GenericFunction
Bases: sqlalchemy.sql.functions.AnsiFunction
Bases: sqlalchemy.sql.functions.ReturnTypeFromArgs
Bases: sqlalchemy.sql.functions.AnsiFunction
Bases: sqlalchemy.sql.functions.AnsiFunction | https://codepowered.com/manuals/SQLAlchemy-0.6.9-doc/html/core/expression_api.html | CC-MAIN-2022-33 | en | refinedweb |
C++ Strings
A free video tutorial from Tim Buchalka's Learn Programming Academy
Professional Programmers and Teachers - 1.24M students
53 courses
1,593,222 students
Lecture description
In this video we learn about the sthe std::string class in C++
Learn more from the full course
Beginning C++ Programming - From Beginner to Beyond
Obtain Modern C++ Object-Oriented Programming (OOP) and STL skills. C++14 and C++17 covered. C++20 info see below.
45:59:04 of on-demand video • Updated July 2022
In this video, we'll learn about c++ strings. Standard string is a class in the c++ standard template library or stl. We could do an entire course on just the scl, and that course would be very long and complex. So in this video, I'll only talk about the major elements of the c++ string class. In order to use c++ plus strings, you must include the string header file. Strings are in the standard namespace. So in order to use them without using namespace standard, you must prefix them with standard and the scope resolution operator. This is also true for the standard string methods that work with c++ strings. Like c-style strings, c++ strings are stored contiguously in memory. However, unlike c-style strings which are fixed in size, c++ strings are dynamic and can grow and shrink as needed at runtime. C++ strings work with the stream insertion and extraction operators just like most other types in c++. The c++ string class provides a rich set of methods or functions that allow us to manipulate strings easily. Chances are that if you need to do something with the string that functionality is already there for you without having to rewrite it from scratch. C++ strings also work with most of the operators that we're used to for assigning, comparing and so forth. This is a huge advantage over c-style strings since c-style strings don't work well with those operators. Even though c++ strings are preferred in most cases sometimes you need to use c-style strings. Maybe you're interfacing with a library that's been optimized for c-style strings. Well, in this use case, you can still use c++ strings and take advantage of them. And when you need to you can easily convert the c++ string into a c-style string and back again. Like vectors, c++ strings are safer since they provide methods that can bounce check and allow you to find errors in your code so you can fix them before your program goes into production. Let's see how we can declare and initialize c++ strings. In all the examples in this video, I'm assuming that the string header file has been included and that we're using the standard namespace. Here you can see six examples of declaration and initialization of c++ strings. There are other ways as well using constructor and assignment syntax. But I'm mainly using the initializer syntax in this video. In the first example, we declare S1 as a string. Notice that the string type is lowercase. Unlike c-style strings, c++ strings are always initialized. In this case, S1 is initialized to an empty string. No garbage and memory to have to worry about. In the second example, I'm declaring and initializing S2 to the string Frank. Notice that frank is a c-style literal, that's okay. It will be converted to a c++ string. In the third example, S3 is initialized from S2, so a copy of S2 is created. S2 and S3 will both be Frank, but different Franks in different areas of memory. In the fourth example, I'm declaring and initializing S4 from Frank. But I'm only using the first three characters of the string Frank. So S4 will be initialized to the string fra. In the fifth example, I'm initializing S5 from S3 which is Frank. But notice the two integers that follow the S3 and the initializer. The first integer is the starting index and the second is the length. So in this case, we initialize S5 with the two characters that index 0 and 1 from S3. So S5 will be initialized to fr. Finally, we can initialize the string to a specific number of a specific character. In this case, three x's. Note that this uses constructor syntax with the parentheses instead of the curlies. Now that we've declared some strings, let's see how we can assign other values to them. With c++ strings, you can use the assignment operator. This feels much more natural than having to use the stream copy function like we would have to in c-style strings. In this example, I've declared S1 and it's empty. Then I can assign the c-style literal c++ rocks to S1. Pretty cool and pretty easy. S1 will grow dynamically as needed. In the second example, I've declared S2 and initialized it to hello. Then I assign S1 to S2. In this case, S2 will no longer contain hello. It will contain a copy of S1, c++ rocks. Let's see how we can concatenate strings together. Concatenation of strings just means building up a string from two other strings. We can use the plus operator to concatenate c++ strings. In this example, I created two strings part one which is c++ and part two which is powerful. Then I have an empty string sentence. Notice that I'm assigning two sentence the concatenated result of part one plus a space plus part two plus a space plus language. If I displayed sentence now, it would display c++ is a powerful language. Notice that the last example on the slide will not compile. This is because we have two c-style literals. And you can't concatenate c-style literals. It only works for c++ strings. A combination of c++ strings and c-style strings is okay though as we saw in the previous assignments. Just like we did with vectors, we can use the same operators to access string elements. In this case, the elements of a string are characters. So we can use the subscript operator as well as the at method. Remember, the app method performs bounce checking. So if you go over bounds, you'll get an exception which you can fix. Let's see how we can display screen characters one at a time. In this example, we have a string S1 initialized to Frank. We can use the range based for loop to display the string characters. In this case, f-r-a-n-k and the null character will be displayed. Pretty much what you expected, right. Notice that the type of the loop variable is char in this case. What do you think will happen if we change that to integer. In this case, I've changed it to integer. Is this what you expected. We told the compiler to use an integer and that's exactly what it's doing. So instead of displaying the character value of each element in the string, it's displaying the integer value that represents those characters. So in this case, 70 114 97 110 107 and 0 which represent f r a n k and of course the null character. These are the ascii codes for those characters. Comparing c++ strings couldn't be any easier or more intuitive. We use the same equality and relational operators that we've been using all along. We're comparing two string objects, so they'll be compared character by character, and their character values will be compared lexically. So a capital a is less than a capital z, and a capital a is less than a lowercase a. That's because the capital letters come before the lowercase letters in the ascii table. We can't use these operators on two c-style literals, but we can use them in the following cases. If we have two c++ strings, if we have one c++ string and a c-style literal or if we have one c++ string and one c-style string. Let's see some examples. Here we're defining five c++ string variables, S1 through S5. And then we perform some comparison operations and see the results. Of course, you would normally use these Boolean expressions in an if statement or looping conditional expressions. In the first example, we check to see if S1 is equal to S5. This is true since they both contain the string apple. S1 equals S2 is false since S1 is apple and S2 is banana. How about S1 not equal to S2. This is true since apple is not equal to banana. In the case of S1 less than S2, this is also true since apple comes before banana lexically in the ascii table. S2 greater than S1 is also true since banana comes before apple lexically. Notice that banana has an uppercase b whereas apple has a lowercase a. S4 less than S5 is false since apple with a lowercase does not come before apple with an upper case. And then finally, A1 equal apple is true because they're the same. Notice in this case, apple is a c-style string literal. The c++ string class has a rich set of very useful methods, too many methods to cover in detail in this video. I encourage you to study the c++ string class since it's going to be a class that you'll use often, and it's important that you know what it provides, so you don't reinvent the wheel when you need to solve a problem. The substring method extracts a substring from a c++ string. It doesn't change the string. It simply returns the substring and you could do whatever you want with it. In this case, I'm simply displaying it. But you can easily assign it to a string variable. Here, I've initialized S1 to this is a string. The first example takes a substring of this string starting at index 0 and including exactly 4 characters. If there are less than 4 characters left in the original string then all the remaining characters are included. In this case, the substring is the first word in the string, this. In the second example, we return the substring starting at index five and include two characters. That's the substring is, IS. Finally the last example starts at index 10 and includes four characters, this will return the substring test. Let's see how we can search a string for another. The c++ string class has a very handy method named find. Find works with characters and strings. It expects a string or character and returns the index or position of the beginning of that string or character in the original string. So if we have a string S1 that's initialized to this is a test and we want to find the string this, we'd get back a 0 since this starts at index 0. In the second example we're looking for the string is. In this case, it would return 2 since the first is starts at index 2. In the third example, we're finding the string test, and we get back a 10. In the fourth example, we're searching for a single character, the lowercase e, which is found at index 11. In the fifth example, we use a variant of the method that also allows the index where you want to start the search from. In this case, I want to find the is substring again. But I want to start at index 4. So this time it finds the is that's located at index 5. Finally, what happens if the string or character we want to find just isn't there. Well, in this case the method returns an end position, which means no position information available. You can check for this value in an if statement. And if true, you know what you were searching for wasn't there. Very easy, very powerful. There's also an r find method that starts searching from the end of the string to the beginning of the string. We can also remove characters from a c++ string using the erase and clear methods. For the erase method, you provide the starting index and how many characters to delete. The clear method deletes all the characters in the string so the string becomes the empty string. We've seen a lot of string methods and you can see how powerful this class is. Let's look at one more useful method and one more useful operator that are commonly used. The method is the length method. It returns the number of characters currently in the string object. In this example, S1 is Frank. So s1.length will return a 5. This is so easy and something that's impossible to do with c cell strings since they don't contain size information. The operator i wanted to cover is the compound concatenation assignment operator. In this case, S1 is Frank. And I can say S1 plus equals James. And James will be concatenated to Frank and the entire result string will be assigned back to S1. This is really handy and works very much the same way that the compound assignment operators worked with integers and doubles and so forth. There are also many more methods in the c++ string class for you to discover as you study c++. Okay, there's one more thing I'd like to talk about before we end this video, input with c++ strings. C++ strings work great with input and output streams. As you've seen, inserting c++ string variables to an output stream like cout is pretty easy and works just like we've been doing all along. Extracting a c++ string from an input stream like cin also works the same way we expect. However, there's one issue that's also true for c-style strings. Suppose we've defined S1 as a c++ string and we extract a string from cin as usual. Now suppose I type in hello space there. When I display S1, I will only see hello. The there was not extracted. This is because the extraction operator stops when it sees whitespace. In many cases, we want to read an entire line of text up to when the user presses enter. For example, I want the string to be hello there. Suppose I asked you to enter your full name. I want to be able to read William Smith, not just William. In this case, we can use the getlined function. The getline function has a couple of variants. The first variant expects two elements inside the parentheses. The first element is the input stream. In this case, we're using cin which defaults to the keyboard. The second element is the name of the c++ string where you want the text that the user enters stored. That's it. Very easy. In the example, I'm saying getline cin S1. Now everything the user types is stored into S1. Getline stops reading when it sees the new line. It doesn't include the new line in the string it just discards it. The other variant of getline has another element in the parentheses. The first two are the same as before, the input stream and the c++ string variable name. The third is called the delimiter. This is the character that you want getline to stop reading input at. So as long as the user doesn't enter this character, everything will be stored in the string variable. Once the delimiter is seen, it's not included in the string variable and it's discarded. In the last example, I'm using a lowercase x as the delimiter. So if i type this is x, then the string stored in S1 will be this is and the x is discarded. Well, we've covered a lot of material in this video, and there's much more in the string class to learn. But this gives you a good starting point so you can use the c++ string class effectively. Also you've now been introduced to object oriented programming with both vectors and strings. Pretty soon, we'll be developing our own classes which is pretty cool. That completes this video. Please play with the string class. Create examples, assign, delete, display and try out some of the methods in this video. It won't take long before you're really comfortable working with c++ strings. | https://www.udemy.com/tutorial/beginning-c-plus-plus-programming/c-strings/ | CC-MAIN-2022-33 | en | refinedweb |
I’ve seen somewhere a statement that different windows (obtained with New Window)
may have different color schemes? Is this true? So far I haven’t been able to make this work.
When I change the color scheme in one window, it changes in the entire sequence of windows that I have open.
Different color schemes in different windows?
I’ve seen somewhere a statement that different windows (obtained with New Window)
Color Schemes can set via project specific settings would apply to all views of a window.
If you’re not using projects, you can also accomplish this with a simple plugin as well because all windows carry project data even if it’s not persisted to disk.
For example if you create a new window, then run this in the console:
window.set_project_data({"settings": {"color_scheme": "Monokai.sublime-color-scheme"}})
All files open in that window will use the Monokai color scheme, unless there is a file-type specific setting that overrides it (like if you want all your Python files to be purple or something).
A plugin would need to be smart enough to pull the project data out first and add this in, since the above will clobber away any open folders, other settings that might be applied by a project, etc.
I’m not sure if there’s a package for that already but such a plugin would be fairly easy to create.
I’m aware of this. However, from time to time I would like to open a new window
and make it a different color from the other windows that I have already opened.
Without having a project for it.
I am not quite happy with the solutions above. I think my problem would be solved
if I could start two or more totally independent instances of the editor. How do I do it?
Color scheme can’t be set as “window setting”. So there’s no other solution than using project data, atm.
To run independend installations you’d need multiple portable installations, which don’t share any datal. It would involve to set them up independendly including plugins, settings, … . Doubt it’s a sane solution.
As outlined in the Discord conversation from this morning (and also alluded to above) it’s possible to do this sort of thing with a little plugin magic.
An example of such a thing is the below; this provides a
select_window_color_scheme command which will adjust the color scheme in use for the current window.
The plugin below includes an input handler that is based on the one in
Default/ui.py that allows you to interactively select a color scheme from the list, but with the ability to preview removed and commented to provide more details on what it’s actually doing.
To use this, you first want to create a
Default.sublime-commands file in your
User package with the following contents (or, if you already have such a file, just add the entry); adjust the caption as desired:
[ { "caption": "UI: Select Window Color Scheme", "command": "select_window_color_scheme" }, ]
Then, add the following plugin to your
User package (see How to use Plugins on my YouTube channel if you’re not sure how to do that):
import sublime import sublime_plugin import os # The KIND attached to whatever the default color scheme is when the browser # open. CURRENT_KIND = (sublime.KIND_ID_COLOR_GREENISH, "✓", "Current") class SelectWindowColorSchemeCommand(sublime_plugin.WindowCommand): """ Set the color scheme to be used by all views in the current window to the one that is provided. If one is not given, prompt the user in the command palette to pick one. This will work in any window, wether it has a sublime-project associated with it or not. For any window that is associated with a project, this will adjust the settings for the project itself on disk. """ def run(self, color_scheme): # Get the project data for this window, then get the settings key out # of it, creating an empty key if it is not present. data = self.window.project_data() or {} settings = data.get('settings', {}) # Set in the setting, the put the settings back into the data and the # data back into the window. settings["color_scheme"] = color_scheme data["settings"] = settings self.window.set_project_data(data) def input(self, args): if "color_scheme" not in args: return ColorSchemeInputHandler(self.window) class ColorSchemeInputHandler(sublime_plugin.ListInputHandler): """ Gather a list of all of the potential color schemes and use them to display a list in the command palette to allow the user to pick a color scheme. This respects the show_legacy_color_schemes setting, and will show you the what color scheme is currently active for this window. This is a modified and cut down version of the same input handler from Default/ui.py, which is used to prompt for the color scheme to choose when you use "UI: Select Color Scheme" in the command palette; this version lacks the preview part that lets you see what the color scheme is goign to look like. """ def __init__(self, window): self.window = window def placeholder(self): return "Color scheme for the current window" def get_files(self): """ Gather a list of all of the tmTheme and sublime-color-scheme files that are present, returning a list of tuple that provides the name of the file that contains the resource and the value to use as the value of the color_scheme setting. """ files = [] nameset = set() # For tmTheme files, the value to use for the setting and the name of # the file that they come from are the same, so for each one found # add it in. # # Along the way, track the name of the color scheme, which is based # on the name of the file, without path or extension. for f in sublime.find_resources('*.tmTheme'): files.append((f, f)) nameset.add(os.path.splitext(os.path.basename(f))[0]) # Do the same for sublime-color-scheme files too. Here the value to # use for the value of color_scheme is the name of the file without # the path (unlike for tmTheme) files). # # This also tracks and does not add this file as a known file if there # is a color scheme already known by this name that was contributed # from a tmTheme file. for f in sublime.find_resources('*.sublime-color-scheme'): basename = os.path.basename(f) name = os.path.splitext(basename)[0] if name not in nameset: nameset.add(name) files.append((f, basename)) return files def list_items(self): # Get the global preferences and wether or not we should be showing # the legacy color schemes. settings = sublime.load_settings('Preferences.sublime-settings') show_legacy = settings.get("show_legacy_color_schemes", False) # Grab the current color scheme out of the preferences as a potential # default. default = settings.get("color_scheme", 'Mariana.sublime-color-scheme') # If there is project data in the window, and that project data has # settings, and the settings contains a color_scheme setting, then # the value of that setting is what the current color scheme should # be. Otherwise, use the value from the preferences as a default. data = self.window.project_data() or {} current_scheme = data.get("settings", {}).get("color_scheme", default) # sublime-color-scheme files are unique based on their name, and so the # setting can just be the name of the file. For a tmTheme file, they're # unique based on the resource path they're loaded from. # # That is how we gather the list of color schemes below, so here we # need to make sure that the color scheme we grabbed out of the # settings is as expected. if current_scheme.endswith(".sublime-color-scheme"): current_scheme = os.path.basename(current_scheme) # Gather the list of list entries. files = self.get_files() items = [] selected = -1 # Iterate over the entire list of items that were found and generate # list items for display. for cs, unique_path in files: # Get the name of the package that this color scheme is from, and # the name of the scheme itself. pkg, basename = os.path.split(cs) name = os.path.splitext(basename)[0] # If this is a legacy color scheme and the user doesn't want to # see those, don't do anything with this one. if pkg == "Packages/Color Scheme - Legacy" and not show_legacy: continue # Apply specific kind information to the currently select color # scheme if this is the one; then flag it as the item to select # by default. kind_info = sublime.KIND_AMBIGUOUS if current_scheme and current_scheme == unique_path: kind_info = CURRENT_KIND selected = len(items) # Remove the common prefix on the files. if pkg.startswith("Packages/"): pkg = pkg[len("Packages/"):] # Add in a new item. items.append(sublime.ListInputItem(name, unique_path, details=pkg, kind=kind_info)) return (items, selected)
With these in place, the command palette will have a new item in it with the caption you specified in the
sublime-commands file, and picking it will allow you to select a color scheme and apply it to the current window.
if your window has a
sublime-project file attached to it, using this command will modify the project on disk to include the color scheme that you pick here.
I indeed got this running by using different ST versions, but it’d be great if there was a setting for this within Preferences, including the option to set the scheme on a per area basis rather than having to use different windows, especially when using different languages that are better highlighted using different schemes, for example: C# for scripting and .md next to it, for a todo-list.
Open up a C# file, choose
Preferences > Settings - Syntax Specific and in the right hand pane include
"color_scheme": "Breakers.sublime-color-scheme", as a setting. As soon as you save the file, all C# files will use that color scheme while other file types still use the global settings. | https://forum.sublimetext.com/t/different-color-schemes-in-different-windows/64791 | CC-MAIN-2022-33 | en | refinedweb |
One of the most difficult challenges that every precautionary face is the complexity involved in developing algorithms that perform well on training and might well on new inputs. Machine learning employs a variety of techniques to reduce or eliminate test errors. One such technique is regularization. We will discuss why using regularization techniques in the context of regularization is necessary, and we will conclude with a practical demonstration of implementing an activity regularization for the neural network. The following are the key points to be discussed in this article.
Table Of Contents
- Need for Regularization
- What is Regularization?
- Keras Regularizers
- Kernel Regularizer
- Bias Regularizer
- Activity Regularizer
- Implementing the Regularization in a Neural Network
Let’s start the discussion by understanding the need for regularization.
THE BELAMY
Sign up for your weekly dose of what's up in emerging technology.
Need for Regularization
Deep neural networks are sophisticated learning models that are prone to overfitting because of their ability to memorize individual training set patterns rather than applying a generalized approach to unrecognizable data. That is why the regularization of neural networks is so important. It aids the neural network’s ability to generalize data that it does not recognize by keeping the learning model simple.
Let’s look at an example to show what we’re talking about. Let’s pretend we have a dataset with both input and output values. Assume there is a true relationship between these values. The goal of deep learning is to approximate the relationship between input and output values as closely as possible. As a result, for each data set, there are two models that can assist us in defining this relationship: a simple model and a complex model.
A straight line exists in the simple model that only includes two parameters that define the relationship in question. A graphical representation of this model will include a straight line that closely passes through the centre of the data set in question, ensuring that the line and the points below and above it have very little distance between them.
The complex model, on the other hand, has several parameters that vary depending on the data set. It uses the polynomial equation to pass through all of the training data points. The training error will eventually reach zero, and the model will memorize the individual patterns in the data set as the data set becomes more complex. Unlike simple models, which aren’t too dissimilar even when trained on different data sets, complex models can’t be said to be the same.
What is Regularization?
A major issue in machine learning is developing algorithms that perform effectively not only on training data but also on new inputs. Many machine learning algorithms are intentionally designed to minimize test error, possibly at the expense of greater training error. These procedures are typically referred to as regularization. Deep learning practitioners can choose from a variety of regularization methods. In fact, one of the key research efforts in the field has been the development of more effective regularization procedures.
Regularizing estimators are used in the majority of deep learning regularization strategies. The regularization of an estimator works by exchanging higher bias for lower variance. An effective regularizer reduces variance while not increasing bias excessively, resulting in a profitable trade.
When we discuss the generalization and overfitting in the model family being trained we encountered three scenarios either (1) excluded the true data-generating process, resulting in underfitting and bias, (2) matched the true data-generating process, or (3) included the generating process but also many other possible generating processes, resulting in an overfitting regime in which variance rather than bias dominates the estimation error.
The goal of regularization is to transfer a model from the third to the second regime.
Keras Regularizers
The weights become more specialized to the training data as we train the model for a longer period of time, resulting in overfitting the training data. The weights will grow in size to handle the specifics of the examples seen in the training data.
When the weights are too heavy, the network becomes unstable. Regardless of whether the weight is tailored to the training dataset, minor variations or statistical noise in the expected inputs will result in significant differences in the output.
In this case, we can use weight regularization to update the learning algorithm and encourage the network to keep the weights small, and it can be used as a general technique to reduce overfitting. Regularizers allow you to apply weight penalties during optimization. These penalties are added together to form the loss function that the network optimizes.
With the help of Keras Functional API, we can facilitate this regularizer in our model layers (e.g. Dense, Conv1D, Conv2D, and Conv3D) directly. These layers expose three-argument or types of regularizers to use, I,e Kernel regularizers, Bias Regulerizers, and Activity Regulerizers which aim to;
- Kernel regularizer penalizes the layer’s kernel(weight) but does not penalize bias.
- Only the bias of the layer is penalized by a bias regularizer.
- The activity regularizer penalizes the output of the layer.
Activity Regularization
Let us understand the activity regularization before jumping to the implementations.
The activity regularization technique is used to encourage a neural network to learn sparse feature representations or we can say internal feature representation of our data that is being fed. Apart from this, this approach is mostly known for reducing the overfitting and to improve the model’s generalization ability to unseen data. Under the hood of this technique, it applies penalties by observing the generalization ability on the validation set.
Implementing the Regularization in a Neural Network
The below code snippets show how we can use these in the layers mentioned above.
from tensorflow.keras.layers import Dense from tensorflow.keras import regularizers layer = Dense(500, activation='relu', kernel_regularizer=regularizers.l1(l1=0.001), bias_regularizer = regularizers.l2(l2=0.001), activity_regularizer = regularizers.l1(l1=0.002))
Now we are going to see how activity regularizer can play a significant role when we want to balance both accuracies.
Here we will see how the custom neural network performs on a given set of data, in this case, we will observe the training and validation accuracy of the network before and after the training activity regularizer.
from sklearn.datasets import load_iris from tensorflow.keras.models import Sequential from tensorflow.keras.utils import to_categorical from tensorflow.keras.layers import Dense from sklearn.preprocessing import StandardScaler from sklearn.model_selection import train_test_split
Load and ready the data
iris = load_iris() X = iris.data y = iris.target y = to_categorical(y) ss = StandardScaler() X = ss.fit_transform(X) X_train, X_test, y_train, y_test = train_test_split(X,y)
Now below we build the 5 layers artificial neural network, and first, we train it will observe the accuracies.
model_1 = Sequential([ Dense(512, activation='tanh', input_shape = X_train[0].shape), Dense(512//2, activation='tanh'), Dense(512//8, activation='tanh'), Dense(32, activation='relu'), Dense(3, activation='softmax') ]) print(model1.summary()) model_1.compile(optimizer='sgd',loss='categorical_crossentropy', metrics=['acc']) hist_1 = model_1.fit(X_train, y_train, epochs=350, batch_size=128, validation_data=(X_test,y_test))
The accuracies at the end of 350 epochs 96.43, 92.11 training and validation respectively.
Let’s now try to explore by only applying an activity regularizer. This can be done by setting argument activitity_regularizer by using l1 or l2 norms regularizer.
The L1 and L2 have stated as The L1 norm allows some weights to be large while driving others to zero. It penalizes the true value of a weight. The L2 norm causes all weights to decrease in size. It penalizes the square value of a weight.
Dense(512, activation='tanh', input_shape = X_train[0].shape, activity_regularizer=regularizers.l1(l1=0.001))
Above is the first layer of the same model which we have seen above. The only change is that we have initiated the activity regularizer. And after the model for 350 epochs, we are getting accuracies of 98.21 and 94.74 for training and validation respectively.
The difference that we are getting before and after regularization is nearly +2% which tells us that by applying regularization we can further improve our network performance.
Now let’s see the plots of accuracies for both before and after applying regularization
Conclusion
From the above plot, it is clear that the activity regularizer is doing its job. From both, if compare the instance at epochs 75 in the left plot the validation accuracy is nearly 20% less than training accuracy whereas in the right side plot the same difference is about 5-6%, and nearly the same is maintained throughout the training at an early stage only model tried to overcome the gap.
Through this post, we have seen what needs regularization and what is regularization in deep learning in deep learning. Practically we have seen how Keras regularizer can be used in modeling to overcome the overfitting problem. | https://analyticsindiamag.com/what-is-activity-regularization-in-neural-networks/ | CC-MAIN-2022-33 | en | refinedweb |
Pandas apply, map and applymap
In this post we will see how to apply a function along the axis of a dataframe using apply and applymap and how to map the values of a Series from one domain to another using map
When to use apply, applymap and map?
Apply: It is used when you want to apply a function along the axis of a dataframe, it accepts a Series whose index is either column (axis=0) or row (axis=1). For example: df.apply(np.square), it will give a dataframe with number squared
applymap: It is used for element wise operation across one or more rows and columns of a dataframe. It has been optimized and some cases work faster than apply but it’s good to compare it with apply before going for any heavier operation . Example: df.applymap(np.square), it will give a dataframe with number squared
map: It can be used only for a Series object and helps to substitutes the series value from the lookup dictionary, Series or a function and missing value will be substituted as NaN. Since it works only with series or dictionary so you can expect a better and optimized performance. Example: df[‘Col1’].map({‘Trenton’:’New Jersey’, ‘NYC’:’New York’, ‘Los Angeles’:’California’})
How to use Pandas apply?
First create a dataframe with three rows a,b and c and indexes A1,B1,C1
Create Dataframe
import pandas as pd import numpy as np df = pd.DataFrame({'a' : [ 10,20,30],'b' : [5,10,15],'c' : [10,100,1000]},index=['A1','B1','C1']) df
Ouput:
Define functions
We will define two functions to apply to this dataframe, function multiply_by_2 will be applied across the column and multiply_col1_col2 will be applied across the rows of dataframe
def multiply_by_2(col): return col*2 def multiply_col1_col2(col): return col['a']*col['b']
Apply function across dataframe columns(axis=0)
Now apply the function multiply_by_2 across the columns, so by default the value of axis=0, so we have to just pass the function without axis parameter
df.apply(multiply_by_2)
Output:
All the cell values are doubled
Apply function across dataframe rows(axis=1)
Now apply the function multiply_col1_col2 across the rows of the dataframe. Here we have set the axis parameter as 1 (axis=1)
df.apply(multiply_col1_col2,axis=1)
It will return a series object with values obtained by multiplying col1 and col2 with the same indexes
A1 50 B1 200 C1 450 dtype: int64
Create a new Column col1xcol2 with the above series
Now we will create a new column using the above Series called as ‘Col1XCol2”
df['col1Xcol2'] = df.apply(multiply_col1_col2,axis=1)
Pandas apply function with Result_type parameter
It’s a parameter set to {expand, reduce or broadcast} to get the desired type of result. the default value is None
In the above scenario if result_type is set to broadcast then the output will be a dataframe substituted by the Col1xCol2 value
df.apply(multiply_col1_col2,axis=1,result_type='broadcast')
The results is broadcasted to the original shape of the frame, the original index and columns is retained
To understand result_type as expand and reduce we will first create a function that returns a list value
def multi_and_list(col): return [col['a']*2,col['b']*2,col['c']*2]
Now apply this function across the dataframe column with result_type as expand
df.apply(multi_and_list,axis=1,result_type='expand')
if result_type is set as expand then It returns a dataframe though the function returns a list.
result_type reduce is just opposite of expand and returns a Series if possible rather than expanding list-like results
How to use lambda with apply?
you can also use lambda expression with pandas apply function.
We will multiply the values at Col1 and Col2 as shown above using the lambda function. Since we have to apply this for each row so we will use axis=1
df.apply(lambda x: x['a']*x['b'],axis=1)
Output:
A1 50 B1 200 C1 450 dtype: int64
Pandas apply function with arguments
Many a times we have to pass an additional argument to a function and it’s a good news that You can also pass a positional argument and keyword argument to apply function.
Create a Function with argument
This function calculates the haversine distance between two geo-coordinates and takes a series of origin and dest lat and long and an additional argument radius(rad)
In the following section we will see how to pass this radius as an argument
from math import radians, cos, sin, asin, sqrt def haversine(row,rad): """ Calculate the great circle distance between two points on the earth (specified in decimal degrees) """ # convert decimal degrees to radians lon1, lat1, lon2, lat2 = map(radians, [row['dest_long'], row['dest_lat'],row['orig_long'], row['orig_lat']]) # haversine formula dlon = lon2 - lon1 dlat = lat2 - lat1 a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2 c = 2 * asin(sqrt(a)) r = rad # Radius of earth in kilometers. Use 3956 for miles return c * r
Create dataframe with origin and destination city latitude and longitude
df_coords = pd.DataFrame({'orig_city':['New York','Charlotte','Boston','Bridgewater'], 'orig_lat':[40.7128,35.2271,42.36,40.594], 'orig_long':[74.006,80.843,71.0589,74.604], 'dest_city':['Trenton','Texas','Sunnyvale','San Jose'], 'dest_lat':[40.2206,31.9686,37.3688,37.3382], 'dest_long':[40.2206,31.9686,37.3688,37.3382]}) df_coords
Apply function with arguments
Now we will find haversine distance between origin and destination city in the above dataframe. So we will apply the haversine function defined above using the apply function.
In haversine function above rad is a required argument and the dataframe doesn’t have any radius column.
We will pass the radius values as args=(3956,) in the apply function as a positional argument. to calculate distance in miles
df['haversine_dist]=df_coords.apply(haversine,axis=1,args=(3956,))
We will add the calculated haversine distance as a new column
Apply function with keyword arguments (kwds)
We will pass the radius values as keyword argument as rad=3956 in the apply function to calculate distance in kilometer
df_coords['orig_dest_haver_dist']=df_coords.apply(haversine,axis=1,rad=3956) df_coords
How to use Pandas applymap?
As defined above, it is used for element wise operation of a dataframe and a scalar value is returned for every elements
We will square each number in the above dataframe using lambda expression with applymap function
df.applymap(lambda x: x**2)
There are more vectorized way of doing this operation is available like df *2 which is much faster and optimized
How to use Pandas map?
Maps are used to map or substitute a value from a lookup table i.e. a dictionary, function or a series here.
I would suggest to read this detailed post on how to use pandas map function | https://kanoki.org/2019/11/25/pandas-apply-map-and-applymap/ | CC-MAIN-2022-33 | en | refinedweb |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.