text stringlengths 20 1.01M | url stringlengths 14 1.25k | dump stringlengths 9 15 ⌀ | lang stringclasses 4
values | source stringclasses 4
values |
|---|---|---|---|---|
I've inherited a moderate size network that I'm trying to bring some sanity to. Basically, its 8 public class Cs and a slew of private ranges all on one vlan (vlan1, of course). Most of the network is located throughout dark sites.
I need to start separating some of the network. I've changed the ports from the main cisco switch (3560) to the cisco router (3825) and the other remote switches to trunking with dot1q encapsulation. I'd like to start moving a few select subnets to different vlans.
To get some of the different services provided on our address space (and to separate customers) on to different vlans, do I need to create a subinterface on the router for each vlan and, if so, how do I get the switch port to work on a specific vlan? Keep in mind, these are dark sites and geting console access is difficult if not impossible at the moment. I was planning on creating a subinterface on the router for each vlan then setting the ports with services I want to move to a different vlan to allow only that vlan. Example of vlan3:
3825:
interface GigabitEthernet0/1.3
description Vlan-3
encapsulation dot1Q 3
ip address 192.168.0.81 255.255.255.240
the connection between the switch and router:
interface GigabitEthernet0/48
description Core-router
switchport trunk encapsulation dot1q
switchport mode trunk
show interfaces gi0/48 switchport
Name: Gi0/48
Unknown unicast blocked: disabled
Unknown multicast blocked: disabled
Appliance trust: none
So, if the boxen hanging off of gi0/18 on the 3560 are on an unmanaged layer2 switch and all within the 192.168.0.82-95 range and are using 192.168.0.81 as their gateway, what is left to do, especially to gi0/18, to get this working on vlan3? Are there any recommendations for a better setup without taking everything offline?
Pardon, in your cut and pasted configs, you appear to be describing Gi0/48 - your uplink to your router, but in your question refer specifically to hosts connected to Gi0/18. I'm going to assume you're describing two different ports here. Further, I'm assuming from details in your config statements and question, that vlan 3 is being used for the 192.168.0.80/28 traffic. I'm going to assume that the vlan has already been declared on your 3560. (Check sh vlan)
First of all, your port Gi0/18 should be configured for access mode on vlan 3. Likely, something like this:
interface GigabitEthernet 0/18
switchport access vlan 3
switchport mode access
As far as for other recommendations. Will all/most of your traffic from your IP subnets be to and from the internet. Basically, If you have enough traffic between subnets, it may suit you to have the 3560 act as your internal router and then dedicate your 3825 to be your border router. The problem is that if your router is baring the entire load for all routing, then a packet from one subnet will arrive at your switch, then be forwarded via the dot1q to your trunk on some vlan X, the router then makes a routing decision and sends the same packet back along the dot1q trunk on some new vlan Y now destined for the destination machine. Btw, I'm simply describing the situation of internal traffic to your customers/organization that crosses your different subnets.
Instead, you can configure the 3560 at the, assuming normal conventions, first address of each vlan/subnet. E.g. 192.168.0.81 and enable ip routing. The next step is you create a new subnet specifically for between the router and switch. For convenience, i'd use something completely different, for example, 192.0.2.0/24 is reserved for documentation examples. Configure the router at 192.0.2.1 and the switch at 192.0.2.2. Have the switch use 192.0.2.1 as the default route. Configure the router to reach 192.168.0.0/16 via the switch at 192.0.2.2. If your network is small enough, static routes should be sufficient. No need for OSPF or anything.
Of course, this would be a rather dramatic change; but it has potential for being a large improvement. It all depends on the nature of your traffic.
For reference, cisco lists the Cisco Catalyst 3560G-48TS and Catalyst 3560G-48PS having a 38.7 Mpps forwarding rate and the Cisco 3825 as having 0.35Mpps forwarding rate. Mpps, just in case you don't know, is millions of packets per second.
It's not bandwidth, but it's how many 64 byte packet routing decisions the device can make a second. The length of the packet doesn't affect how long it takes to making a routing decision. So the peak performance in bits/bytes will be somewhere in a range. In terms of bandwidth, it means that 350kpps is 180Mbps w/ 64byte packets and 4.2Gbps w/ 1500 byte packets. Mind you, that's in bits per second, so think of it as 18 Megabytes or 420 Megabytes per second in regular file-size terms.
In theory, this means that your 3560G can route somewhere between 19.8Gbps and 464Gbps or rougly 2GBps and 45GBps.
Actually, looking at those numbers, you most definitely should consider the plan I described above. Dedicate your 3825 to handling, presumably, NAT'd external traffic and let your 3560 handle the rest.
I'm sorry this is so long; I'm bored at work waiting for tapes to finish. | http://serverfault.com/questions/216562/vlans-and-subinterfaces | crawl-003 | en | refinedweb |
pocketcmd.h File Reference
PocketBus command abstraction layer. More...
#include "pocketbus.h"
#include <cfg/compiler.h>
Go to the source code of this file.
Detailed Description
PocketBus command abstraction layer.
Definition in file pocketcmd.h.
Function Documentation
Init pocketBus command layer.
ctx is pocketBus command layer context. bus_ctx is pocketBus context. addr is slave address (see pocketcmd_setAddr for details.) search is the lookup function used to search command ID callbacks.
Definition at line 197 of file pocketcmd.c.
pocketBus Command poll function.
Call it to read and process pocketBus commands.
Definition at line 80 of file pocketcmd.c.
pocketBus Command recv function.
Call it to read and process pocketBus commands.
Definition at line 100 of file pocketcmd.c.
Send command cmd to/from slave adding len arguments in buf.
Address used is contained in ctx->addr . If we are master and the message has a reply, you must set wait_reply to true.
- Returns:
- true if all is ok, false if we are already waiting a replay from another slave.
Definition at line 154 of file pocketcmd.c.
Set slave address addr for pocketBus command layer.
If we are a slave this is *our* address. If we are the master this is the slave address to send messages to.
Definition at line 107 of file pocketcmd.h. | http://doc.bertos.org/2.7/pocketcmd_8h.html | crawl-003 | en | refinedweb |
Tell us what you think of the site.
Below is a function called by a button action.
Ths code should :
- get the name given by the user in an edittext field.
- parse scene to see if the name exists.
- if yes, append selected objects to the existing group
- if not, add a new group with the new name and add selected objects to it.
I can create new groups the first time I launch the script. But when I try to add objects to a group that I created with the script, Motion Builder crashes.
I cannot find out what is wrong with this code ? :(
def CreateGroupfromSelection(control,event):
global nameGroup
# nameGroupTxtLyt = FBEdit where user can name the group
nameGroup = nameGroupTxtLyt.Text
if nameGroup != "":
modelList = FBModelList()
FBGetSelectedModels(modelList, None, True) # gets selected objects #
sel = modelList.GetCount()
if sel > 0:
# Accessing the scene to view all the groups
lScene = FBSystem().Scene
lGroups = lScene.Groups
# Append all the groups in the scene in a list
for group in lGroups:
if group.Name == nameGroup:
# Connect selected objects to group
for model in modelList:
group.ConnectSrc(model)
elif group.Name != nameGroup:
# Rename group according to username
group = FBGroup (nameGroupTxtLyt.Text)
# Connect selected objects to group
for model in modelList:
group.ConnectSrc(model)
** ** | http://area.autodesk.com/forum/autodesk-motionbuilder/python/group-creation---crash/ | crawl-003 | en | refinedweb |
java.util.Collection; 21 import java.util.Collections; 22 import org.apache.commons.pipeline.validation.ConsumedTypes; 23 import org.apache.commons.pipeline.validation.ProducesConsumed; 24 25 /** 26 * This is a simple stage in the pipeline which will add each processed object 27 * to the specified collection. 28 * 29 * For the purposes of validation, this stage is considered to be able to consume 30 * objects of any class although the process() method may throw a ClassCastException 31 * if a processed object cannot be added to the collection. 32 */ 33 @ConsumedTypes(Object.class) 34 @ProducesConsumed 35 public class AddToCollectionStage<T> extends BaseStage { 36 37 /** 38 * Holds value of property collection. 39 */ 40 private Collection<T> collection; 41 42 /** 43 * Creates a new instance of AddToCollectionStage. This constructor 44 * will synchronized the collection by default. 45 */ 46 public AddToCollectionStage(Collection<T> collection) { 47 this(collection, true); 48 } 49 50 /** 51 * Creates a new instance of AddToCollectionStage. 52 * @param collection The collection in which to add objects to 53 * @param synchronized A flag value that determines whether or not accesses 54 * to the underlying collection are synchronized. 55 */ 56 public AddToCollectionStage(Collection<T> collection, boolean synchronize) { 57 if (collection == null){ 58 throw new IllegalArgumentException("Argument 'collection' can not be null."); 59 } 60 61 this.collection = synchronize ? Collections.synchronizedCollection(collection) : collection; 62 } 63 64 /** 65 * Adds the object to the underlying collection. 66 * 67 * @throws ClassCastException if the object is not of a suitable type to be added 68 * to the collection. 69 */ 70 public void process(Object obj) throws org.apache.commons.pipeline.StageException { 71 this.collection.add((T) obj); 72 this.emit(obj); 73 } 74 75 /** 76 * Returns the collection to which elements have been added during 77 * processing. 78 */ 79 public Collection<T> getCollection() { 80 return this.collection; 81 } 82 } | http://commons.apache.org/sandbox/pipeline/xref/org/apache/commons/pipeline/stage/AddToCollectionStage.html#0 | crawl-003 | en | refinedweb |
Simple quadratic term used to regularise functions. More...
#include <clsfy_logit_loss_function.h>
Simple quadratic term used to regularise functions.
For vector v' = (b w') (ie b=y[0], w=(y[1]...y[n])), computes f(v) = alpha*|w|^2 (ie ignores first element, which is bias of linear classifier)
Definition at line 56 of file clsfy_logit_loss_function.h.
Definition at line 122 of file clsfy_logit_loss_function.cxx.
The main function: Compute f(v).
Reimplemented from vnl_cost_function.
Definition at line 128 of file clsfy_logit_loss_function.cxx.
Calculate the gradient of f at parameter vector v.
Reimplemented from vnl_cost_function.
Definition at line 136 of file clsfy_logit_loss_function.cxx.
Scaling factor.
Definition at line 60 of file clsfy_logit_loss_function.h. | http://public.kitware.com/vxl/doc/release/contrib/mul/clsfy/html/classclsfy__quad__regulariser.html | crawl-003 | en | refinedweb |
#include <IpSymTMatrix.hpp>
Inheritance diagram for Ipopt::SymTMatrixSpace:
The sparsity structure is stored here in the matrix space.
Definition at line 159 of file IpSymTMatrix.hpp.
Constructor, given the number of rows and columns (both as dim), as well as the number of nonzeros and the position of the nonzero elements.
Note that the counting of the nonzeros starts a 1, i.e., iRows[i]==1 and jCols[i]==1 refers to the first element in the first row. This is in accordance with the HSL data structure. Off-diagonal elements are stored only once.
Destructor.
Overloaded MakeNew method for the sYMMatrixSpace base class.
Implements Ipopt::SymMatrixSpace.
Definition at line 181 of file IpSymTMatrix.hpp.
References MakeNewSymTMatrix().
Method for creating a new matrix of this specific type.
Definition at line 187 of file IpSymTMatrix.hpp.
References SymTMatrix.
Referenced by MakeNewSymMatrix().
Number of non-zeros in the sparse matrix.
Definition at line 195 of file IpSymTMatrix.hpp.
Referenced by Ipopt::SymTMatrix::Nonzeros().
Row index of each non-zero element.
Definition at line 201 of file IpSymTMatrix.hpp.
Referenced by Ipopt::SymTMatrix::Irows().
Column index of each non-zero element.
Definition at line 207 of file IpSymTMatrix.hpp.
Referenced by Ipopt::SymTMatrix::Jcols().
Allocate internal storage for the SymTMatrix values.
Deallocate internal storage for the SymTMatrix values.
Definition at line 227 of file IpSymTMatrix.hpp.
Referenced by MakeNewSymTMatrix().
Definition at line 223 of file IpSymTMatrix.hpp.
Referenced by Nonzeros().
Definition at line 224 of file IpSymTMatrix.hpp.
Definition at line 225 of file IpSymTMatrix.hpp. | http://www.coin-or.org/Doxygen/CoinAll/class_ipopt_1_1_sym_t_matrix_space.html | crawl-003 | en | refinedweb |
#include <IpSymLinearSolver.hpp>
Inheritance diagram for Ipopt::SymLinearSolver:
In the full space version of Ipopt a large linear system has to be solved for the augmented system. This case is meant to be the base class for all derived linear solvers for symmetric matrices (of type SymMatrix).
A linear solver can be used repeatedly for matrices with identical structure of nonzero elements. The nonzero structure of those matrices must not be changed between calls.
The called might ask the solver to only solve the linear system if the system is nonsingular, and if the number of negative eigenvalues matches a given number.
Definition at line 50 of file IpSymLinearSolver.hpp.
Definition at line 55 of file IpSymLinearSolver.hpp.
Definition at line 58 of file IpSymLinearSolver.hpp.
overloaded from AlgorithmStrategyObject
Implements Ipopt::AlgorithmStrategyObject.
Implemented in Ipopt::TSymLinearSolver.
Solve operation for multiple right hand sides.
Solves the linear system A * Sol = Rhs with multiple right hand sides. If necessary, A is factorized. Correct solutions are only guaranteed if the return values is SYMSOLVER_SUCCESS. The solver will return SYMSOLVER_SINGULAR if the linear system is singular, and it will return SYMSOLVER_WRONG_INERTIA if check_NegEVals is true and the number of negative eigenvalues in the matrix does not match numberOfNegEVals.
check_NegEVals cannot be chosen true, if ProvidesInertia() returns false.
Implemented in Ipopt::TSymLinearSolver.
Solve operation for a single right hand side.
Solves the linear system A * Sol = Rhs. See MultiSolve for more details.
Definition at line 89 of file IpSymLinearSolver.hpp.
References MultiSolve().
Number of negative eigenvalues detected during last factorization.
Returns the number of negative eigenvalues of the most recent factorized matrix. This must not be called if the linear solver does not compute this quantities (see ProvidesInertia).
Implemented in Ipopt::TSymLinearSolver.
Request to increase quality of solution for next solve.
Ask linear solver to increase quality of solution for the next solve (e.g. increase pivot tolerance). Returns false, if this is not possible (e.g. maximal pivot tolerance already used.)
Implemented in Ipopt::TSymLinearSolver.
Query whether inertia is computed by linear solver.
Returns true, if linear solver provides inertia.
Implemented in Ipopt::TSymLinearSolver. | http://www.coin-or.org/Doxygen/CoinAll/class_ipopt_1_1_sym_linear_solver.html | crawl-003 | en | refinedweb |
#include <IpSumSymMatrix.hpp>
Inheritance diagram for Ipopt::SumSymMatrixSpace:
Definition at line 99 of file IpSumSymMatrix.hpp.
Constructor, given the dimension of the matrix and the number of terms in the sum.
Definition at line 106 of file IpSumSymMatrix.hpp.
Destructor.
Definition at line 113 of file IpSumSymMatrix.hpp.
Number of terms in the sum.
Definition at line 120 142 of file IpSumSymMatrix.hpp.
Definition at line 144 of file IpSumSymMatrix.hpp. | http://www.coin-or.org/Doxygen/CoinAll/class_ipopt_1_1_sum_sym_matrix_space.html | crawl-003 | en | refinedweb |
#include <IpSumSymMatrix.hpp>
Inheritance diagram for Ipopt::SumSymMatrix:
For each term in the we store the matrix and a factor.
Definition at line 24 of file IpSumSymMatrix.hpp.
Constructor, initializing with dimensions of the matrix and the number of terms in the sum.
Destructor.
Default Constructor.
Copy Constructor.
Method for setting term iterm for the sum.
Note that counting of terms starts at 0.
Method for getting term iterm for the sum.
Note that counting of terms starts at 0.
Return the number of terms.umSymMatrix.hpp.
std::vector storing the matrices for each term.
Definition at line 92 of file IpSumSymMatrix.hpp.
Copy of the owner_space as a SumSymMatrixSpace.
Reimplemented from Ipopt::SymMatrix.
Definition at line 95 of file IpSumSymMatrix.hpp. | http://www.coin-or.org/Doxygen/CoinAll/class_ipopt_1_1_sum_sym_matrix.html | crawl-003 | en | refinedweb |
loglet 1.0
client library Loglet, the Web-based logging system
Loglet is a tiny tool for keeping tabs on long-running processes. Send log messages to Loglet using a simple POST request and then view them in your browser or subscribe to an Atom feed.
This Python package provides a small client library of Loglet. You can creates a new loglet simply by this and send messages by using standard logging interface. For example:
import logging from loglet import LogletHandler logger = logging.getLogger(__name__) loglet = LogletHandler(mode='threading') logger.addHandler(loglet) logger.setLevel(logging.DEBUG) logger.info('hello') logger.error('something horrible has happened')
If you have a loglet already, you can specify logid explicitly:
loglet = LogletHandler('2LNbYgNEAaezJduj')
There are 4 types of sync/async modes:
- 'sync' (default)
- Simply sends all logs synchronously. It can affect serious inefficiency to your application.
- 'threading'
- Sends all logs asynchronously by using standard threading module. Threads are rich and heavy to use for just input/output.
- 'multiprocessing'
- Sends all logs asynchronously by using standard multiprocessing module. It requires to use Python 2.6 or higher. It forks for every message internally.
- 'gevent'
- Sends all logs asynchronously by greenlet (coroutine). It requires to install gevent. Most efficient way though additional dependency is required.
- Author: Adrian Sampson
- Categories
- Development Status :: 5 - Production/Stable
- Intended Audience :: Developers
- License :: OSI Approved :: MIT License
- Operating System :: OS Independent
- Programming Language :: Python :: 2.5
- Programming Language :: Python :: 2.6
- Programming Language :: Python :: 2.7
- Programming Language :: Python :: 2 :: Only
- Topic :: System :: Logging
- Package Index Owner: Adrian
- DOAP record: loglet-1.0.xml | http://pypi.python.org/pypi/loglet/1.0 | crawl-003 | en | refinedweb |
Fill in contour bounded regions in slices of 3D image. More...
#include "vil3d_fill_boundary.h"
#include <vil3d/vil3d_image_view.h>
#include <vcl_vector.h>
#include <vcl_stack.h>
#include <vil3d/vil3d_convert.h>
#include <vil3d/algo/vil3d_threshold.h>
Go to the source code of this file.
Fill in contour bounded regions in slices of 3D image.
Definition in file vil3d_fill_boundary.cxx.
Fill interior of current boundary.
Definition at line 126 of file vil3d_fill_boundary.cxx.
Follow the current boundary in the current slice.
labeling boundary pixels and background pixels that border the boundary.
Definition at line 58 of file vil3d_fill_boundary.cxx.
Reset background pixels to 0.
Definition at line 184 of file vil3d_fill_boundary.cxx.
Compute a mask where the regions in each slice of a 3D image bounded by contours are set to "on".
Definition at line 14 of file vil3d_fill_boundary.cxx. | http://public.kitware.com/vxl/doc/release/contrib/mul/vil3d/html/vil3d__fill__boundary_8cxx.html | crawl-003 | en | refinedweb |
Scale + translate ImageMetric. More...
#include <vnl/vnl_double_3x3.h>
#include <vgl/vgl_fwd.h>
#include <mvl/ImageMetric.h>
#include <vcl_iosfwd.h>
Go to the source code of this file.
Scale + translate ImageMetric.
An ImageMetric that simply scales and translates. Most often used to condition points by transforming the image centre to the origin, and scaling so that the diagonal has length 2.
Modifications 22 Jun 2003 - Peter Vanroose - added vgl_homg_point_2d interface
Definition in file SimilarityMetric.h. | http://public.kitware.com/vxl/doc/release/contrib/oxl/mvl/html/SimilarityMetric_8h.html | crawl-003 | en | refinedweb |
#include <CoinShallowPackedVector.hpp>
Inheritance diagram for CoinShallowPackedVector:
This class is for sparse vectors where the indices and elements are stored elsewhere. This class only maintains pointers to the indices and elements. Since this class does not own the index and element data it provides read only access to to the data. An CoinSparsePackedVector must be used when the sparse vector's data will be altered.
This class stores pointers to the vectors. It does not actually contain the vectors.
Here is a sample usage:
const int ne = 4; int inx[ne] = { 1, 4, 0, 2 }; double el[ne] = { 10., 40., 1., 50. }; // Create vector and set its value CoinShallowPackedVector r(ne,inx,el); // access each index and element assert( r.indices ()[0]== 1 ); assert( r.elements()[0]==10. ); assert( r.indices ()[1]== 4 ); assert( r.elements()[1]==40. ); assert( r.indices ()[2]== 0 ); assert( r.elements()[2]== 1. ); assert( r.indices ()[3]== 2 ); assert( r.elements()[3]==50. ); // access as a full storage vector assert( r[ 0]==1. ); assert( r[ 1]==10.); assert( r[ 2]==50.); assert( r[ 3]==0. ); assert( r[ 4]==40.); // Tests for equality and equivalence CoinShallowPackedVector r1; r1=r; assert( r==r1 ); r.sort(CoinIncrElementOrdered()); assert( r!=r1 ); // Add packed vectors. // Similarly for subtraction, multiplication, // and division. CoinPackedVector add = r + r1; assert( add[0] == 1.+ 1. ); assert( add[1] == 10.+10. ); assert( add[2] == 50.+50. ); assert( add[3] == 0.+ 0. ); assert( add[4] == 40.+40. ); assert( r.sum() == 10.+40.+1.+50. );
Definition at line 71 of file CoinShallowPackedVector.hpp.
Default constructor.
Explicit Constructor.
Set vector size, indices, and elements. Size is the length of both the indices and elements vectors. The indices and elements vectors are not copied into this class instance. The ShallowPackedVector only maintains the pointers to the indices and elements vectors.
The last argument specifies whether the creator of the object knows in advance that there are no duplicate indices.
Copy constructor from the base class.
Copy constructor.
Destructor.
Definition at line 119 of file CoinShallowPackedVector.hpp.
Get length of indices and elements vectors.
Implements CoinPackedVectorBase.
Definition at line 79 of file CoinShallowPackedVector.hpp.
References nElements_.
Get indices of elements.
Implements CoinPackedVectorBase.
Definition at line 81 of file CoinShallowPackedVector.hpp.
Get element values.
Implements CoinPackedVectorBase.
Definition at line 83 of file CoinShallowPackedVector.hpp.
Reset the vector (as if were just created an empty vector).
Assignment operator.
Assignment operator from a CoinPackedVectorBase.
Reimplemented from CoinPackedVectorBase.
just like the explicit constructor
Print vector information.
A function that tests the methods in the CoinShallowPackedVector class.
The only reason for it not to be a member method is that this way it doesn't have to be compiled into the library. And that's a gain, because the library should be compiled with optimization on, but this method should be compiled with debugging.
Vector indices.
Definition at line 128 of file CoinShallowPackedVector.hpp.
Referenced by getIndices().
Vector elements.
Definition at line 130 of file CoinShallowPackedVector.hpp.
Referenced by getElements().
Size of indices and elements vectors.
Definition at line 132 of file CoinShallowPackedVector.hpp.
Referenced by getNumElements(). | http://www.coin-or.org/Doxygen/Smi/class_coin_shallow_packed_vector.html | crawl-003 | en | refinedweb |
CodeGuru Forums
>
.NET Programming
> Windows Presentation Foundation (WPF) & XAML forum
PDA
Click to See Complete Forum and Search -->
:
Windows Presentation Foundation (WPF) & XAML forum
Pages :
1
[
2
]
Update TextBox with data binding
WPF beginners questions
XAML resources
NotifyPropertyChanged VS DependencyObjects
Issue With Dispatch Timer
TreeView, ContextMenu, how do I get the SelectedItem's data Node/object?
Unable to select any item in WPF Tab control
Image manipulation in WPF
Validation 'before' setting value or issuing command
WPF Textbox loses original values, even if the user doesn't udpate
WPF Custom Control ContentControl for a label with other items
Need help on putting a function into a button
list
help with playing song from memory stream
Hiearchy Issues
Need help improving performance of InitialiseComponent/LoadComponent
Toggle Image source using only XAML
wpf .net 3.5 spell check
WFP Express edition
Textbox and columns
XAML auto resize a tab' content
TextBox RaiseEvent KeyDownEvent does not work
WPF 2 Pane Functionality
Exception at Application start up
How to completely reset MediaElement?
Binding in Style template
TreeView- Dynamically Add Items
WPF Object binding
WPF binding with viewmodel property
Winform advice?!
Focus on a Disabled Control
EventTrigger Issue
ZIndex or Z-order
Mutually Exclusive comboboxes that binds to same data source - MVVM implementation
dynamically add in C# Validation.Error = "ValidationErrorHandler" to Grid panel or an
Show datagrid rowdetail template on click of DataGridCheckBoxColumn
user control - binding order problem ?
using play sound action in blend
Firefox 4 style interface in WPF?
Looking for an experienced Mozilla Addon developer.
DOS command in VB.net and WPF
WPF Controls and Transparency
Horizntal GridSplitter
How does Google Chrome get tabs up into the Window Title Bar?
MVC.NET 4.00 Presentation, easier than Webforms!
How do you databind to a child Collection?
Setting image source in only in runtime (Without define it in the XAML)
WPF Combo Box doesn't rollup until the event is processed.
Binding to DataRowView with a DataTrigger inside of a DataGrid
custom query in domain services
Overriding the default keys used to navigate in RadioButtons [WPF] [VB]
Dynamic Dialog using from XAML in .net desktop application
Blurred (See through) background for full screen window
expression blend and visual studio integration - can they do this?
Expression Studio learning
clr-namespace and loadFromResource
Add wpf to existing code
Poor performace in Storyboard.Begin() with KeyFrames animations
Bind StringCollection to a ListBox
Drag drop WPF tab items
Checkbox in WPF DataGrid header only?
Framework for developing a dashboard which has data from different applications
How to configure Apache server to run PHP engine as CGI on Windows systems?
WPF media player with Sereo Sound
WPF chart and DataGrid
wpf canvas resize
Need help with data binding
Make a window look like this
Binding from a XML file to UserControl through DependencyProperty
DrawFocusRectangle in WPF?
WPF application not responding to text/csv file update when plotting graph
Help with image enlargement bringing to front
Help with Style on ListBoxItem
Flashing DataGrid Cell - How to do it properly
Login Click
WPF (with C#) TextBox Cursor Position Problem
How to Erase, Repaint or Delete a 3D object in wpf ? (Timer-Based Animation)
How do I add click event to custom control?
Commandbindings fail after file dialog cancel
Local Database references
Moving GridSplitters in unison
Implementing a Wizard in WPF C#
How to use WPF to implement multi-pen drawing?
Drag and Drop listbox items and persist dropped data
Converter Parameters
Binding ListView to SortedList
beginner question
Adding pictures to lilstbox!
Navigation in WPF MVVM design
How to add a canvas control?
WPF DataGrid alternating row color display issue.
WPF MediaElement Query
Timer display in Datagrid
WPF application UI working on any size monitor and screen resolution
Firefox and WPF
binding of datagridcomboboxcolumn problem
WebBrowser control. need to prevent it from closing in a wpf app
how to color the gridlines of a datagrid column
Windows Forms User Controls with WPF
How to create a "Search" textbox, with instant result show in listbox? Int value
calender, only month and year selection
Video chat in WPF app.
WPF/MVVM Load UserControl
Binding collection of control of same type
Resizing WPF window with Aero glass
How add/update user specific configuration settings in WPF
A text box that filters a datagrid (search)
Clipboard opening failed - C# / WPF
Label Content Not Displaying and Differnet To Value
How to determine if form/window is WPF or WinForm?
Help with MenuItem template.
How to bind Flash right click context menu with button in WPF
pause or sleep function in wpf not working
window forms
"Invalid XAML", No idea why.
Infragistic Grid problem
Allow user customizing styles of WPF controls and retaining them
Media element causing process not to die
Binding Fill property to value of node with particular id in XML
Apply the standard MVC architecture to develop applications with WFP
Show/Hide hidden formatting symbols like paragraph markers in RichTextBox control
how to retrieve clicked content in datagrid
GDI+ to WPF .. Impossible??
Combo Box Selected Index Change in an other tab
mouse event
Contextual Ribbon
WPF Datagrid mouseover header image overlapping
WPF Datagrid Column Collection Changed Event
displaying a canvas at two positions simultaneously
codeguru.com | http://forums.codeguru.com/archive/index.php/f-99-p-2.html | crawl-003 | en | refinedweb |
).
Static.
Increment level...
{ fEvolution = enable; }
{ return fEvolution; }
{ return (Int_t)(fBufCompCur - fBufComp); }
{return fBitsPIDs.TestBitNumber(bitnumber);} | http://root.cern.ch/root/html522/TMessage.html | crawl-003 | en | refinedweb |
getsubopt - parse suboption arguments from a string
Synopsis
Description
Return Value
Errors
Examples
Parsing Suboptions
Application Usage
Rationale
Future Directions
See Also
#include <stdlib.h>
int getsubopt(char **optionp, char * const *keylistp, char **valuep);. Since), getsubopt() shall update *optionp to point to the null character at the end of the string. Otherwise, it shall isolate the suboption argument by replacing the comma separator with a null character, and shall update *optionp to point to the start of the next suboption argument. If the suboption argument has an associated value (equivalently, contains an equal sign), getsubopt() shall update *valuep to point to the values first character. Otherwise, it shall set *valuep to a null pointer. The calling application may use this information to determine whether the presence or absence of a value for the suboption is an error.
Additionally, when getsubopt() fails to match the suboption argument with a token in the keylistp array, the calling application should decide if this is an error, or if the unrecognized option should be processed in another way.
The getsubopt() function shall return the index of the matched token string, or -1 if no token strings were matched.
No errors are defined.
The following sections are informative.
#include <stdio.h> #include <stdlib.h>
int do_all; const char *type; int read_size; int write_size; int read_only;
enum { RO_OPTION = 0, RW_OPTION, READ_SIZE_OPTION, WRITE_SIZE_OPTION };
const char *mount_opts[] = { [RO_OPTION] = "ro", [RW_OPTION] = "rw", [READ_SIZE_OPTION] = "rsize", [WRITE_SIZE_OPTION] = "wsize", NULL };
int main(int argc, char *argv[]) { char *subopts, *value; int opt;
while ((opt = getopt(argc, argv, "at:o:")) != -1) switch(opt) { case a: do_all = 1; break; case t: type = optarg; break; case o: subopts = optarg; while (*subopts != \0) switch(getsubopt(&subopts, mount_opts, &value)) { case RO_OPTION: read_only = 1; break; case RW_OPTION: read_only = 0; break; case READ_SIZE_OPTION: if (value == NULL) abort(); read_size = atoi(value); break; case WRITE_SIZE_OPTION: if (value == NULL) abort(); write_size = atoi(value); break; default: /* Unknown suboption. */ printf("Unknown suboption %s\n", value); break; } break; default: abort(); }
/* Do the real work. */
return 0; }() , . | http://www.squarebox.co.uk/cgi-squarebox/manServer/usr/share/man/man3p/getsubopt.3p | crawl-003 | en | refinedweb |
Swarnava,
Are you using VB6 or VB.net?
There is a bug:
The window doesn’t move. It resizes, but does not move.
example move:
window_1.DragMove();
Hey, I’m in search for a solution that when going maximized not covering taskbar.. tried your example but still no luck :/ found some solutions but placing taskbar in top of screen always gets covered.
Hi Joel,
Please refer to my reply in the following thread for an example of how you could handle the SourceInitialized event of the window and call some interop code to hopefully make it work as expected:
Works like a dream, many thanks!!
Got a hint on Windows resize feature when mouse touches screen edges too? Some way to tap in to it or is it manual work all the way?
Thanks in advance!
That is a great solution. one thing missing is the toggle between restore and minimize when clicking on task bar. I am still unable to find any good solution to that. any idea?
Hi, it works also for me.
But how can I add the Window Title to the xaml File?
Thanks Christoph
Ok, in OnApplyTemplate you can add the folowing:
Label WindowTitle = GetTemplateChild(“WindowTitle”) as Label;
WindowTitle.Content = Title;
And you have to add a label names WindowTitle to your xaml
When I tried to create this project using .NET 4.5, I was getting a type “x:Type” not found. Are you missing a reference.
Adding System.Xaml fixed this issue.
Very nice. There is another toolkit, LinsUIWPF, which is very useful for developers who will like their application is customizable. If you are interested in it, the link is here,.
“CustomWindow” does not exist in the namespace “clr-namespace:Mm.Wpf.Controls”
i m getting this error Pls Help me
Hi Ashwanikumar singh,
Please take a look at the sample markup again. You need to specify the name of the assembly where the CustomWindow class is defined:
xmlns:control=”clr-namespace:Mm.Wpf.Controls;assembly=Mm.Wpf.Controls”
Thanks for the reply Sir ….
Dear sir i am using .net 4.5 I tried all the way but i am still getting 5 errors.. Sir, I am beginner in WPF please help me I am sending my code in notepad I have followed all the steps you mentioned on the site Of “How to create Custom window ” I am not able to solve my errors please Help me out
Thanks for the tutorial Magnus.
I’m getting the same error as Ashwanikumar singh posted.
The Wpf.Controls assembly is showing in the references.
Still I get “CustomWindow” does not exist in the namespace “clr-namespace:Wpf.Controls;assembly=Wpf.Controls”
(I chose to name mine differently)
Thanks for this tutorial!
@CJohnson (dragging issue):
If moving the window doesn’t work, add the following code to the OnApplyTemplate method of the CustomWindow class:
Rectangle moveRect = GetTemplateChild(“moveRectangle”) as Rectangle;
if(moveRect != null)
moveRect.PreviewMouseDown += moveRectangle_PreviewMouseDown;
where is the source code?
Thought I’d just update on my own problem. My issue was related to changing the name of the assembly and I missed a name change.
Hi, I tried the same thing, i could successfully build and run the application. But I’ve got some problem with the visual. The visual is proper in visualstudio designer, but when i run it in my system, it is totally different. Please suggest what I should do
Hi, Everything is working fine, except for black borders. Does anyone have a solution as to how to get rid of them?
Hey I face the same problem just like Ashwani and Chris. Any luck on that?
Thank you so much for this tutorial. I am facing a problem with this, please help me.
This tutorial work fine bot the Form (main window) come with Black form border. For that when i move my cursor at the cross button, then only button show otherwise main form border style property is totally black.
Please help me to resolve.
Thanks in advance.
Nice tutorial :)
If you have black rectangle problems as I do, you should change the following in the “Window Style”:
I think that as everything is transparent, Windows puts black as default background.
Hi i am getting the Black screen, Help me
I was able to get this to work in VS2013, Net 4.5. I did however, as some suggested, create the project as a WPF Custom Control Library.
The “black title bar” issue I fixed by adding Background=”White” to the Border style of the window template:
… Border BorderThickness=”{TemplateBinding BorderThickness}” BorderBrush=”{TemplateBinding BorderBrush}” Background=”White”
To prevent the window from being resized below a given value, just set the MinWidth and MinHeight properties in the constructor.
Hi Claire, Are u able to see ur screen the way it is described in this tutorial?
Could u please share ur code by email me at sanyamra5@gmail.com
@sanyam:
Done.
Claire, sanyam. Could you please send source code to chertykto@gmail.com
Thanks
@alex:
I put a copy of the solution on my OneDrive account:
Anybody should be able to download it from there.
This being said, people can also take advantage of the WindowChrome class which allows you to customize a window without having to re-implement the standard functionalities:
This class is available only in .NET 4.5+ though (for prior versions you may still be able to download it as a separate DLL).
Thank you very much
Thanks a lot, works like a charm!!
On .NET 4.5 there’s no need to write code to move, neither resize. There’s a WindowChrome extension that’s do all de job. Use properties “CaptionHeight” (Height from top that’s accepts moving) and “BorderThickness” to resize operations.
For everyone who was having the issue with the black border, Claire’s solution works fine, however if you want the border to change along with the background color of the window, the Background property of the Border control should be set to
Background=”{TemplateBinding Background}”
instead of
Background=”White”
this way when you change the Background of your window the border will change along with it.
can we set button background as icon or image instead of color?
Followed to the T. Still shows a normal window when run
Only issue I have is after adding the Title it works fine After it applies the template but I want to be able to change the title during run time or set it from a bind but setting it in the OnApplyTemplate routine won’t allow for this.
@Claire’s comment saved my day.
In .NET 4.5 we get WindowChrome class which easily allows window customization.
Hi Magnus,
Hi Michael,
Thanks for the post. I followed your post but I am having some troubles with modifying the control style with implicit styles.
I have two custom controls that represent a custom base window (like above) and a base page. I have added default styles with default templates to generic.xaml and the default styles get applied correctly. In order to make the controls customizeable, I used template bindings to existing properties like Background, and to a few custom dependency properties. To make sure the template bindings work, I made implicit styles and added them to the merged dictionaries in my app.xaml file. However, the values in the implicit styles are never applied to the controls defined by the default templates. The confusing thing is if I look at the live visual tree, I can see that the implicit style is being applied to the page or the window, however, the children controls defined by the default template are not updating their template-bound values based on these implicit styles. The only way I have been able to get the implicit styles to work is by doing the following in the constructors:
var style = this.TryFindResource(typeof(BasePageView)) as Style;
if (style != null)
this.Style = style;
However this seems very hacky and completely unnecessary based on my understanding of implicit styling and template bindings. Do you have any idea what I might be missing?
I first got the idea for the above from. But the behavior doesn’t seem limited to windows. | https://blog.magnusmontin.net/2013/03/16/how-to-create-a-custom-window-in-wpf/comment-page-2/ | CC-MAIN-2020-05 | en | refinedweb |
Error handling with Python
Errors happen. Writing scripts that expect and handle errors can save you a lot of time and frustration. When a tool returns an error message, ArcPy generates a system error, or exception. In Python, you can provide a variety of structures and methods that can handle exceptions. Of course, a script can fail for many other reasons that are not specifically related to a geoprocessing tool; these too need to be caught and dealt with in an appropriate manner. The following sections offer a few techniques that introduce the basics of Python exception handling.
When a tool writes an error message, ArcPy generates a system error, or an exception. Python allows you to write a routine that is automatically run whenever a system error is generated. Within this error-handling routine, you can then executed. Using a simple except statement is the most basic form of error handling.
In the following code, Buffer fails because the required Distance parameter has not been used. Instead of failing without explanation, the except statement is used to trap the error, then fetch and print the error message generated by Buffer. Note that the except block is only executed if Buffer returns an error.
import arcpy try: # Execute the Buffer tool # arcpy.Buffer_analysis("c:/transport/roads.shp", "c:/transport/roads_buffer.shp") except Exception as e: print e.message # If using this code within a script tool, AddError can be used to return messages # back to a script tool. If not, AddError will have no effect. arcpy.AddError(e.message)
The try statement has an optional finally clause that can be used for tasks that should be always be executed, whether an exception has occurred or not. In the following example, the 3D Analyst extension is checked back in under a finally clause, ensuring that the extension is always checked back in.
class LicenseError(Exception): pass import arcpy from arcpy import env try: if arcpy.CheckExtension("3D") == "Available": arcpy.CheckOutExtension("3D") else: # Raise a custom exception # raise LicenseError env.workspace = "D:/GrosMorne" arcpy.HillShade_3d("WesternBrook", "westbrook_hill", 300) arcpy.Aspect_3d("WesternBrook", "westbrook_aspect") except LicenseError: print "3D Analyst license is unavailable" except: print arcpy.GetMessages(2) finally: # Check in the 3D Analyst extension # arcpy.CheckInExtension("3D")
raise statement
The previous example dealt with handling arcpy.env.overwriteOutput = 1 fc = arcpy.GetParameterAsText(0) try: # Check that the input has features # result = arcpy.GetCount_management(fc) if int(result.getOutput(0)) > 0: arcpy.FeatureToPolygon_management(fc, os.path.dirname(fc) + os.sep + "out_poly.shp") else: # Raise custom exception # raise NoFeatures(result) except NoFeatures: # The input has no features # print fc + " has no features." except: # By default any other errors will be caught here # print arcpy.GetMessages(2)
ExecuteError class
When a geoprocessing tool fails, it throws an ExecuteError exception class. What this means is that you can divide errors into two groups, geoprocessing errors (those that throw the ExecuteError exception) and everything else. You can then handle the errors differently, as demonstrated in the code below:
import arcpy try: result = arcpy.GetCount_management("C:/invalid.shp") # Return geoprocessing specific errors # except arcpy.ExecuteError: arcpy.AddError(arcpy.GetMessages(2)) # Return any other type of error except: arcpy.AddError("Non-tool error occurred") you # print pymsg + "\n" print msgs
If the above code was used and a geoprocessing tool error occurred, such an invalid input, this would raise ExecuteError, and the first except statement would be used. This statement would print out the error messages using the GetMessages function. If the same code was used, but a different type of error occurred, the second except statement would be used. Instead of printing geoprocessing messages, it would get a traceback object and print out out only the traceback information. | http://help.arcgis.com/en/arcgisdesktop/10.0/help/002z/002z0000000q000000.htm | CC-MAIN-2020-05 | en | refinedweb |
Build a geofencing app
How-To Details
One of the most useful features of mobile devices is it’s location-awareness. It helps you navigate, automatically switch to the timezone you’re in, allow for location targeted push notifications, or give insights into whatever activities take place at a certain location.
You could say there are two types of location services:
- Active, i.e. where you perform a task based on your location (for instance turn-by-turn navigation, checking into a place on social media, find the nearest espresso bar, etc.)
- Passive, i.e. where your mobile device performs an action based on your location (for instance location based mobile advertising, notify warehouse workers a truck is about to arrive at the dock, alert people of entering a dangerous area, open your garage door when you approach your home, etc.)
One solution to be able to passively act on your current location is by using geofences. A geofence is nothing more than an area on a virtual map. And when a mobile device enters or leaves such predefined areas, this can be detected and an application can act on that: send a notification, update a backend system, trigger another hardware device to do something; basically anything that can be done manually can now be done automatically.
In this tutorial, you will first create an SAP HANA MDC database which will hold location data as geofences. You then expose this data via an OData service. Then you will build an app using this OData service, add logic to display the stored geofences, as well as perform action when your device enters these geofences.
Log<<
Once the database has been created, click on the SAP HANA Cockpit link in the
sapgeo Overview page.
At the logon screen, enter
SYSTEM and the password you provided in Step 1:
After you have clicked the Log On button, you will see the following warning:
Click OK, and you will now see the SAP HANA Database Administration overview page:
Go back to the
sapgeo Overview page and click the SAP HANA Web-based Development Workbench link. If being asked your login credentials, provide the same SYSTEM user credentials you used to accessing the SAP HANA Cockpit.
Once logged in, you should see the workbench’ landing page:
To allow creation and administration of the database and service, you need to assign the correct roles to the user.
For the simplicity of this tutorial, you will use the
sapgeoSYSTEM user to create and maintain the database. In a real-world environment, however, you would never use the SYSTEM user but use a dedicated user.
.
Click on the Security tile of the SAP HANA Web-based Development Workbench landing page.
In the left pane, navigate to the SYSTEM user:
In the Granted Roles tab, click the Add button. Add the following roles:
sap.hana.xs.ide.roles::Developer
sap.hana.xs.debugger::Debugger
sap.hana.xs.admin.roles::HTTPDestViewer
sap.hana.xs.admin.roles::HTTPDestAdministrator
sap.hana.xs.admin.roles::TrustStoreViewer
sap.hana.xs.admin.roles::TrustStoreAdministrator
Click OK once done. The roles are now assigned:
Click on the Catalog tile of the SAP HANA Web-based Development Workbench landing page. After you have provided the SYSTEM user credentials, the catalog workbench opens:
Click the Open SQL Console button in the top toolbar, and in the editor, enter the following command:
CREATE SCHEMA "SAPGEO";
Click the Run button (or press F8) to execute the command. The console in the bottom pane should indicate a successful execution and the newly added schema should be listed in the left pane:
Remove the SQL statement from the SQL Console, and replace it with the following:
CREATE COLUMN TABLE "SAPGEO"."GeoLocation" ( "ID" VARCHAR(36) NOT NULL , "Title" NVARCHAR(32), "Description" NVARCHAR(256), "Latitude" DOUBLE, "Longitude" DOUBLE, "Radius" DOUBLE, PRIMARY KEY ("ID") );
Click the Run button (or press F8) to execute the command. The console in the bottom pane should indicate a successful execution and you should see the newly added table under the
SAPGEO schema.
The just created table is used to store the geofence data you will use in the mobile app.
NOTE 1: Ultimately, geofences can be any two-dimensional polygon. However, since iOS by default only supports circular regions and it takes quite some extra coding to support polygon regions as well as a more complicated database structure to store this polygon data (see also Note 2 below), this tutorial will use the simple circular region.
NOTE 2: Instead of using separate
latitudeand
longitudecolumns, SAP HANA supports geospatial columns of type
ST_POINTand
ST_POLYGON. These columns, however, are stored in a binary format, and are currently not supported by OData version 2 which you will be using for this tutorial (there is limited supported in OData version 4). For simplicity of the service and not doing any conversions, a simple table with separate
latitude,
longitudeand
radiuscolumns is used instead to store the geofence properties, but this could be easily adapted for use with a
ST_POINTor
ST_POLYGONcolumn.
For this tutorial, it is assumed the geofence data is already stored in the database. In a real-world scenario, geofence data can be provided in many ways. For convenience of not having to run around with your mobile device to manually submit geofences to the database, but also to simplify the mobile app coding, you will just run a few SQL INSERT statements with pre-set data.
The easiest way to retrieve longitude and latitude data from a map would be to use Google Maps.
Click on an area on the map (preferably a location near or at your current location) and copy the coordinates displayed at the bottom of the screen:
In the SQL Console, add the following statement:
INSERT INTO "SAPGEO"."GeoLocation" VALUES('<some ID>', 'SAP SE WDF01', 'The Mothership', 49.293406, 8.641362, 200);
Using this example, this will create a geofence region at SAP’s main building, with a radius of 200 meters.
Select some more points nearby, and create an additional 3 to 4 records with these coordinates.
For best results, make sure the circular regions do not overlap and do have some significant radius (at least 10 meters or more).
Click on the Editor tile of the SAP HANA Web-based Development Workbench landing page. After you have provided the SYSTEM user credentials, the editor workbench opens:
Right-click the Content node, and from the context menu, select New > Package. In the dialog, enter the following details:
Click Create when finished. Select the new
sapgeo package, and from the toolbar click the Menu button and select File > Create Application. In the dialog that appears, specify the following:
Click Create when done. The
sapgeo package now expands and should contain the files
.xsaccess,
.xsapp and
index.html.
To allow execution of the OData service you will create later on, you first set the privilege to do so. Right-click the
sapgeo package, and from the context menu, select New > File. Specify a file name
.xsprivileges.
Add the following JSON code to the newly created
.xsprivileges file:
{ "privileges" : [ { "name" : "Execute", "description" : "Execute" } ] }
Click the Save button once done. The console in the bottom pane should indicate a successful save and activation:
Next, you will create the XS OData service. Right-click the
sapgeo package, and from the context menu, select New > File. Specify a file name
SAPGeoService.xsodata:
Add the following code to the newly created
SAPGeoService.xsodata file:
service { "SAPGEO"."GeoLocation" as "GeoLocation"; }
This exposes the
GeoLocation table in schema
SAPGEO as entity set
GeoLocation. Click the Save button once done. The console in the bottom pane should indicate a successful save and activation:
Select the
SAPGeoService.xsodata, and click the Run button from the top toolbar. You should now see the OData service response:
Take a note of the URL, because you will need it later:<your account>trial.hanatrial.ondemand.com/sapgeo/SAPGeoService.xsodata
For fun, try to add
/GeoLocation at the end of the URL, and you should see the entities you created in Step 6
Log on to your SAP Cloud Platform mobile service for development and operations cockpit, and navigate to Mobile Applications > Native/Hybrid. Click the New button, and in the dialog, add the following information:
Click Save when finished. You should now see the application definition details:
The Connectivity feature is listed as Incomplete, because you haven’t yet specified which OData service the application will use. Click the Connectivity item, and in the following screen, click the Create Destination button. In the dialog that appears, enter the following data:
Click Next. In the next page, specify the following data:
Click Next. In the next page, specify the following data:
Click Next. In the next page, specify the following data:
For the simplicity of this tutorial, you will use the
sapgeoSYSTEM user to access the database. In a real-world environment, however, you would never use the SYSTEM user but use a dedicated user to access the database.
.
Click Next. In the next page, no changes are required:
Click Finish to complete the wizard. The dialog will close, and the connection is created:
Click the Ping button next to the destination to check whether the OData service is accessible from the destination.:
Make sure
<Organization Identifier>.<Product Name>matches the value of
Application IDyou entered in Step 8
..tutorials.demoapp.SAPGeo the previous wizard step./sapgeo/SAPGeoService.xsodata/$metadata
If you have followed the tutorial to the letter, you may now get a message the SDK Assistant could not load the metadata. This happens because the application definition created in Step 8 by default is configured with SAML authentication. If you see this warning, simply download the contents of the<your account>trial.hanatrial.ondemand.com/sapgeo/SAPGeoService.xsodata/$metadatalocally, and upload it to the SDK Assistant.
.
After the SDK Assistant has finished, Xcode will launch and open the just generated
SAPGeo project.
In Xcode, assign the appropriate development account to the project’s Team property in the General > Signing panel, and then build and run the app in the simulator.
Once the app has started, dismiss the push notifications message by clicking Allow.
The on-boarding landing page is now displayed:
Click the blue Start button, and in the SAML login screen, provide your SAP Cloud Platform credentials:
After you click Log on, the Touch ID screen is displayed:
If you are running in the simulator, click Not Now, and the Passcode screen is displayed:
Enter an 8-digit numerical passcode, click Next, and confirm the passcode. Click Done when finished, and the single entity collection is now shown:
Click on the
GeoLocation list item, and you should see the list of entities you created with the SQL INSERT statements in Step 6.
In this step, you will add a new View Controller which will display a map with the stored geofences. In all fairness, for the geofences to work you don’t need to see the geofences in a map at all. For the purpose of the tutorial, having a visual clue of the geofence locations, it should make things a bit more clear.
Open
Main.storyboard, and from the Object library, drag a View Controller right next to the Collections scene. With the new view controller selected, set its title to
Map View Controller in the Attributes inspector:
Next, drag a Map Kit View from the Object Library onto the Map View Controller. Resize the map so its borders align with the view’s dimensions:
With the map control still selected, click the little “triangle TIE-fighter” button in the lower right of the storyboard and from the context menu, select Reset to Suggested Constraints. This ensures that regardless of the screen dimensions your app will run, the map viewport will have the same dimensions as its parent view controller.
In the Project navigator pane on the left side of Xcode, right-click the
ViewControllers group and from the context menu, select New File…. In the dialog, select Cocoa Touch Class:
Click Next to continue. In the next page, enter the following details:
Click Next to continue. In the next page, make sure the new class is added to the
ViewControllers group, and click Create to finalize the wizard. The new class will now open:
Go back to the Storyboard, select the Map View Controller, and in the Identity inspector, assign the newly created class to the view controller:
You have created the map view as well as a custom implementing class, but there’s no navigation path to that view. Since the view is merely intended as feedback, it makes sense to navigate to the map via a toolbar action.
First, you need to enable the toolbar. Select the Navigation Controller connected to the Collections scene, and from the Attributes inspector, tick the checkbox next to Shows Toolbar.
Next, drag a Bar Button Item onto the toolbar of the Collections scene. In the Attributes inspector, set the button’s title to
Show Map:
Finally, Ctrl-drag from the toolbar button to the Map View Controller scene. From the action list, choose the Show Detail action segue.
Select the segue, and in the Attribute inspector, provide the Identifier
showMap:
If you now build and run the app and click the Show Map button in the toolbar, a map zoomed to display the country you’re currently in is displayed:
In the next steps, you will implement logic to visually show the geofence data stored in the SAP HANA MDC.
With the Map View Controller selected in the Storyboard, click the Show Assistant Editor button. The custom
MapViewController.swift file you created earlier is now opened.
To create an outlet for the Map control, Ctrl-drag from the Map control to the
MapViewController.swift file, below the class definition. Name the new outlet
mapView:
Click Connect once done. Your code should now give an couple of errors. This is because it cannot resolve the
MKMapKit class for the
mapView outlet:
Add the following import statements:
import MapKit import CoreLocation import SAPCommon
The
MapKit import solves the error message, and
CoreLocation is needed to determine your location, as well as handling the geofences later on in the tutorial. The
SAPCommon import is used to implement the SDK’s
Logger functionality.
Add the following private stored properties just above the
viewDidLoad() method:
private var locationManager = CLLocationManager() private let logger = Logger.shared(named: "MapViewControllerLogger")
Inside the
viewDidLoad() method, replace the comment with the following code:
mapView.delegate = self locationManager.delegate = self locationManager.requestAlwaysAuthorization()
Here you set the view controller as the delegate for both the
mapView and
locationManager instances. You also set the required location permissions to Always. This is needed because you want the app to monitor geofences also when the app is not running. To allow the user to grant this authorization, open
Info.plist and add the following two entries:
Now you only need to display your current location on the map, and correct the two errors that are shown in the editor. These errors are because you have set the view controller as a delegate, but you haven’t yet implemented the required delegate methods.
At the bottom of the
MapViewController.swift file, add the following extensions:
// MARK: - Map View Delegate extension MapViewController: MKMapViewDelegate { func mapView(_ mapView: MKMapView, viewFor annotation: MKAnnotation) -> MKAnnotationView? { // TODO: Implement later! return nil } func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer { // TODO: Implement later! return MKOverlayRenderer(overlay: overlay) } } // MARK: - Location Manager Delegate extension MapViewController: CLLocationManagerDelegate { // Allow access to your current location if the "always" authorization is set (see r:25) func locationManager(_ manager: CLLocationManager, didChangeAuthorization status: CLAuthorizationStatus) { mapView.showsUserLocation = status == .authorizedAlways } func locationManager(_ manager: CLLocationManager, monitoringDidFailFor region: CLRegion?, withError error: Error) { logger.error("Monitoring did fail for region: \(region!.identifier)") } func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) { logger.error("Location Manager did fail with error: \(error)") } }
The delegate methods for the
mapView instance will be implemented later, and will eventually display a pin and overlay for the geofences onto the map. The delegate method for the
locationManager instance enables the
mapView instance to display your current location. If you now build and run the app and navigate to the map, you should first grant access for the app to always use your location:
If you run the app from the simulator, click the Simulate Location button and select a location nearest to you:
On the map, scroll to the selected location, and you should now see a blue dot with your simulated location:
In this step, you will display the geofences stored in the SAP HANA MDC onto the map.
The OData service returns instances of
GeoLocation. While this is perfectly fine, it is convenient to translate these into objects which are easier to handle for both the map as well as the location manager. For both offline storage as well as using the object as an annotation on the map, the class should implement both
NSCoding as well as
MKAnnotation.
Right-click the
Model group in the Project navigator, and select New File…. Add a new Swift File and name it
SAPGeoLocation. An empty file is created:
Replace the content of the file with the following code:
import MapKit import CoreLocation class SAPGeoLocation: NSObject, NSCoding, MKAnnotation { let identifier: String? let title: String? let subtitle: String? let coordinate: CLLocationCoordinate2D let radius: Double init(geoLocationType: GeoLocationType) { self.identifier = geoLocationType.id self.title = geoLocationType.title self.subtitle = geoLocationType.description self.coordinate = CLLocationCoordinate2D(latitude: geoLocationType.latitude!, longitude: geoLocationType.longitude!) self.radius = geoLocationType.radius! } required init?(coder aDecoder: NSCoder) { identifier = aDecoder.decodeObject(forKey: SAPGeoLocationKey.identifier) as? String title = aDecoder.decodeObject(forKey: SAPGeoLocationKey.title) as? String subtitle = aDecoder.decodeObject(forKey: SAPGeoLocationKey.subtitle) as? String let latitude = aDecoder.decodeDouble(forKey: SAPGeoLocationKey.latitude) let longitude = aDecoder.decodeDouble(forKey: SAPGeoLocationKey.longitude) coordinate = CLLocationCoordinate2D(latitude: latitude, longitude: longitude) radius = aDecoder.decodeDouble(forKey: SAPGeoLocationKey.radius) } func encode(with aCoder: NSCoder) { aCoder.encode(identifier, forKey: SAPGeoLocationKey.identifier) aCoder.encode(title, forKey: SAPGeoLocationKey.title) aCoder.encode(subtitle, forKey: SAPGeoLocationKey.subtitle) aCoder.encode(coordinate.latitude, forKey: SAPGeoLocationKey.latitude) aCoder.encode(coordinate.longitude, forKey: SAPGeoLocationKey.longitude) aCoder.encode(radius, forKey: SAPGeoLocationKey.radius) } } struct SAPGeoLocationKey { static let identifier = "identifier" static let title = "title" static let subtitle = "subtitle" static let latitude = "latitude" static let longitude = "longitude" static let radius = "radius" }
The constructor takes the OData service’s
GeoLocationType instance as input, and creates a
SAPGeoLocation which implements both
NSCoding and
MKAnnotation. The
required init? and
encode methods implement the
NSCoding’s required decode and encode functionality, respectively. The structure is for convenience and contains the property names as strings.
Switch back to the
MapViewController.swift file, and add the following private method to the
MapViewController class:
/** Converts array of `GeoLocationType` objects to array of `SAPGeoLocation` objects, for convenience. - Parameters: - locations: Array of `GeoLocationType` entities - Returns: Array of `SAPGeoLocation` objects */ private func getArrayOfSAPGeoLocationsFromEntities(locations: [GeoLocationType]) -> [SAPGeoLocation] { var sapGeoLocations: [SAPGeoLocation] = [] for location in locations { let sapGeoLocation = SAPGeoLocation(geoLocationType: location) sapGeoLocations.append(sapGeoLocation) } return sapGeoLocations }
This method takes an array of
GeoLocationType objects returned from the OData service, and returns an array of
SAPGeoLocation objects.
Add the following private method to the
MapViewController class:
/** Renders all geolocations on the map - Parameters: - locations: Array of `SAPGeoLocation` entities */ private func renderLocationsOnMap(locations: [SAPGeoLocation]) { for location in locations { mapView.addAnnotation(location) mapView.add(MKCircle(center: location.coordinate, radius: location.radius)) // Uncomment line below later in the tutorial // registerGeofence(location: location) } }
This method takes the array of
SAPGeoLocation objects, adds an annotation to the map, and in addition, adds a circle at the given coordinates and radius. For both these calls, the respective delegate methods of the
mapView instance are called, but they are not yet implemented. Find the
MKMapViewDelegate extension and replace both delegate methods with the following two methods:
func mapView(_ mapView: MKMapView, viewFor annotation: MKAnnotation) -> MKAnnotationView? { if let annotation = annotation as? SAPGeoLocation { let identifier = "pin" var view: MKPinAnnotationView if let dequeuedView = mapView.dequeueReusableAnnotationView(withIdentifier: identifier) as? MKPinAnnotationView { dequeuedView.annotation = annotation view = dequeuedView } else { view = MKPinAnnotationView(annotation: annotation, reuseIdentifier: identifier) view.canShowCallout = true view.calloutOffset = CGPoint(x: -5, y: 5) view.rightCalloutAccessoryView = UIButton(type: .detailDisclosure) as UIView } view.pinTintColor = UIColor.preferredFioriColor(forStyle: .tintColorDark) return view } return nil } func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer { if overlay is MKCircle { let circleRenderer = MKCircleRenderer(overlay: overlay) circleRenderer.lineWidth = 1.0 circleRenderer.strokeColor = UIColor.preferredFioriColor(forStyle: .tintColorDark) circleRenderer.fillColor = UIColor.preferredFioriColor(forStyle: .tintColorLight).withAlphaComponent(0.4) return circleRenderer } return MKOverlayRenderer(overlay: overlay) }
The first delegate method is called when the map instance’s
addAnnotation method is called. It adds an
MKPinAnnotationView instance with a pin in one of the standard SAP Fiori colors. The second delegate method checks whether the object being added is of type
MKCircle, and adds it as an overlay on the map, again with one of the standard SAP Fiori colors.
The one thing missing is to actually load the stored
GeoLocationType objects, and call the methods to plot them on the map.
Just below the stored property
logger, add the following stored property referencing the applications
AppDelegate:
private let appDelegate = UIApplication.shared.delegate as! AppDelegate
You will use the
appDelegate instance to get a reference to the OData service. Add the following method:
/** Loads all geolocations from OData service on SAP Cloud Platform */ func loadLocations() { appDelegate.sapGeoService.fetchGeoLocation() { (geolocations, error) in guard let geolocations = geolocations else { return } let locations = self.getArrayOfSAPGeoLocationsFromEntities(locations: geolocations) // Uncomment line below later in the tutorial // self.storeLocationsToUserDefaults(locations: locations) self.renderLocationsOnMap(locations: locations) } }
This method loads the actual
GeoLocationType entities from the OData service, converts the resulting array to an array of
SAPGeoLocation objects, which is then provided to the previously created
renderLocationsOnMap(locations:) method. Call this
loadLocations() function at the end of the
viewDidLoad() method so it resembles this:
override func viewDidLoad() { super.viewDidLoad() mapView.delegate = self locationManager.delegate = self locationManager.requestAlwaysAuthorization() loadLocations() }
If you now run the app, you should see one or more pins marking your stored geofences. If you click on it, it shows the call-out with the geofence title and subtitle, as well as a detail disclosure indicator as specified in the
mapView(_ mapView: MKMapView, viewFor annotation: MKAnnotation) delegate method:
Zoom in (with the simulator, use Alt-Click for two-finger pinch) and you should now see the circular representation of the geofences too:
Depending on how far apart you have specified your geofences, you might need to zoom in significantly to distinguish the various geofences you have defined in the database. In this step, you will add a toolbar button which will zoom in to the selected geofence.
Open the Storyboard and drag a Bar Button Item onto the Map View Controller’s toolbar. In the Attribute inspector, set the title to
Zoom to geofence:
Open the Assistant editor, and Ctrl-drag the newly added toolbar button to the
MapViewController class, just below the
mapView outlet.
Specify the following parameters:
Implement the newly added action so it resembles the following:
@IBAction func zoomToLocation(_ sender: Any) { if mapView.selectedAnnotations.count > 0 { let selected = mapView.selectedAnnotations[0] let region = MKCoordinateRegionMakeWithDistance(selected.coordinate, 250, 250) mapView.setRegion(region, animated: true) } }
This method checks if a geofence is selected on the map, and then sets the map viewport to center at the geofence coordinates, and span the north-to-south distance as well as the east-to-west distance to approximately 250 meters.
Run the app, select a pin on the map, and click the Zoom to geofence button. You will now zoom in on the map with the selected geofence in the center:
Until now, you can only display the stored geofences. In this step, you will store the geofences and register them for monitoring. After that, you will enhance the app to react on entering a geofence.
So first, you need to store the geofences for offline usage. You have a couple of possibilities here:
Offline OData could work just perfectly here, but to convert from Online OData to Offline OData takes a couple of extra steps which would increase the complexity of this tutorial.
Storing it in the SDK’s
SecureKeyValueStore is also a possibility, but since it only accepts single
NSCoding objects, adding an array of objects for a single key is not possible without extra coding.
So for least complexity, in this tutorial the geofences are stored in the
UserDefaults database, which makes them persistent even when the app is not running.
Add the following method to the
MapViewController class:
/** Add locations to `UserDefaults` for offline access - Parameters: - locations: Array of `SAPGeoLocation` entities */ private func storeLocationsToUserDefaults(locations: [SAPGeoLocation]) { var listSAPGeoLocations: [Data] = [] for item in locations { let sapGeoLocation = NSKeyedArchiver.archivedData(withRootObject: item) listSAPGeoLocations.append(sapGeoLocation) } UserDefaults.standard.set(listSAPGeoLocations, forKey: "geofences") }
This method stores the whole array of
SAPGeoLocation objects into a single
UserDefaults key.
Locate the
loadLocations() function, and uncomment the commented out line, so it now calls the newly added
storeLocationsToUserDefaults() method.
Add the following two methods to the
MapViewController class:
/** Registers a region to location manager and start monitoring for crossing the geofence - Parameters: - location: The `SAPGeoLocation` object which will be registered as a geofence */ private func registerGeofence(location: SAPGeoLocation) { let region = getRegionForLocation(location: location) locationManager.startMonitoring(for: region) } /** Returns a circular geofence region - Parameters: - location: The `SAPGeoLocation` object which will be used to define the geofence - Returns: Instance of `CLCircularRegion` */ private func getRegionForLocation(location: SAPGeoLocation) -> CLCircularRegion { let region = CLCircularRegion(center: location.coordinate, radius: location.radius, identifier: location.identifier!) region.notifyOnEntry = true region.notifyOnExit = false return region }
Method
getRegionForLocation(location:) takes an
SAPGeoLocation instance as input, and creates a
CLCircularRegion instance off it. A
CLCircularRegion is a circular region defining the actual geofence at the specified location. The geofence is set up so the location manager gets notified only when you enter the geofence.
You could extend the
GeoLocationdatabase table to have extra columns for notifications upon entry and exit, making this a dynamic instead of a fixed setting.
.
Method
registerGeofence(location:) then instructs the location manager to start monitoring the supplied
SAPGeoLocation geofence.
Finally, locate the
renderLocationsOnMap(locations:) function, and uncomment the commented out line, so it now calls the newly added
registerGeofence(location:) method.
As stated before, you want to detect geofence events even when the app is inactive, not running or offline. The way the location manager works is, if your device detects a geofence event, it will launch the app in the background. Acting on geofence events is then best done in the app’s
AppDelegate class.
Open
AppDelegate.swift and import
CoreLocation:
import CoreLocation
Then, add the following stored property:
let locationManager = CLLocationManager()
Locate method
applicationDidFinishLaunching(_:) and below the line
UINavigationBar.applyFioriStyle() add the following:
locationManager.delegate = self locationManager.requestAlwaysAuthorization()
The editor should now indicate an error, since you haven’t yet implemented the required
CLLocationManager delegate methods.
At the bottom of the
AppDelegate.swift file, add the following extension:
extension AppDelegate: CLLocationManagerDelegate { func locationManager(_ manager: CLLocationManager, didEnterRegion region: CLRegion) { if region is CLCircularRegion { handleEvent(forRegion: region, didEnter: true) } } func locationManager(_ manager: CLLocationManager, didExitRegion region: CLRegion) { if region is CLCircularRegion { handleEvent(forRegion: region, didEnter: false) } } }
The editor should now complain about the missing
handleEvent(forRegion:didEnter:) method.
Add the following two methods:
/** Processes the geofence event received from one of the `CLLocationManagerDelegate` delegate methods `locationManager(_:didEnterRegion:)` or `locationManager(_:didExitRegion:)` If the app is running in the foreground, it will show an alert. If the app is running in the background, it will show a local notification - Parameters: - region: The `CLRegion` instance which has been detected - didEnter: `true` if the geofence has been entered, `false` if the geofence has been exited */ func handleEvent(forRegion region: CLRegion!, didEnter: Bool) { let geoLocation = self.getGeoLocation(fromRegionIdentifier: region.identifier) if geoLocation != nil { let message = geoLocation?.title ?? "Unknown title" logger.debug("\(didEnter ? "Entered" : "Exited") geofence: \(message)") if UIApplication.shared.applicationState == .active { let view = window?.rootViewController let alert = UIAlertController(title: "Geofence crossed", message: message, preferredStyle: .alert) let action = UIAlertAction(title: "OK", style: .cancel, handler: nil) alert.addAction(action) view?.present(alert, animated: true, completion: nil) } else { let content = UNMutableNotificationContent() content.title = "Geofence crossed" content.body = message content.sound = UNNotificationSound.default() let notificationTrigger = UNTimeIntervalNotificationTrigger(timeInterval: 1, repeats: false) let request = UNNotificationRequest(identifier: "notification1", content: content, trigger: notificationTrigger) UNUserNotificationCenter.current().add(request, withCompletionHandler: nil) } } } /** Retrieves an instance of `SAPGeoLocation` from the array stored in `UserDefaults` based on the `identifier` provided - Parameters: - identifier: The id of the geofence - Returns: Instance of `SAPGeoLocation` or `nil` if the geofence could not be found */ func getGeoLocation(fromRegionIdentifier identifier: String) -> SAPGeoLocation? { let storedLocations = UserDefaults.standard.array(forKey: "geofences") as? [NSData] let sapGeoLocations = storedLocations?.map { NSKeyedUnarchiver.unarchiveObject(with: $0 as Data) as? SAPGeoLocation } let index = sapGeoLocations?.index { $0?.identifier == identifier } return index != nil ? sapGeoLocations?[index!] : nil }
Method
getGeoLocation(fromRegionIdentifier:) retrieves an instance of
SAPGeoLocation which has been stored in
UserDetails. Method
handleEvent(forRegion:didEnter:) takes the
CLRegion geofence received from the
CLLocationManagerDelegate delegate, and displays a notification.
Your app is now ready to test the stored geofences. You could now deploy the app on a physical device and drive around town, but that would be both quite cumbersome as well as impossible to detect any failures or analyze logged messages. You could, however, test geofences using a GPX file.
At the root of your project, add a new Group and name it
Test. Right-click the
Test group and from the context menu, select New File…. From the dialog, choose
GPX File:
Click Next. In the next page, name the file
TestLocations and make sure it sits in the
Test group:
Click Create when done. A new
TestLocations.gpx file is added to your project:
Add at least two waypoints which will cross one or more geofences:
If you do an online search for “GPX generator”, you will find some tools which allow you to simply click on a map and generate a GPX file with a series of waypoints
.
Now, build and run the app in the simulator. If the app runs, click the Locations button in the Debug pane and select
TestLocations:
Navigate to the map. You should now see your simulated location move over the map, based on the waypoints you have defined in the
TestLocations.gpx file. Even more, if you cross a geofence, it will fire a geofence event, and displays an alert:
Also, if you dismiss the app to the background, you will receive a notification:
You may find the simulator acts quite inaccurate at times when testing geofence events. Build and deploy the app on a physical device and enjoy a greater accuracy!
The tutorial ends here. However, you could enhance the app even further.
For instance, you now only receive a notification when crossing a geofence. You could simply create a second database table which stores records for the geofence events with timestamps and user details, and instead of displaying an alert or notification, add a record in that table. Imagine being a truck driver crossing multiple geofences around warehouses. The logistics department would then be notified which driver is in the vicinity of which warehouse.
You could also use Offline OData for storing geofence data, which may give you different kinds of possibilities.
Or use the event to trigger a separate REST service on SAP Cloud Platform which sends a signal to an IoT device, for instance a connected gate or garage door… The geofencing possibilities are endless!
- Step 1: Create the SAP HANA MDC database
- Step 2: Log on to the SAP HANA Cockpit
- Step 3: Log on to the SAP HANA Web-based Development Workbench
- Step 4: Assign administration and development roles to SYSTEM user
- Step 5: Create database schema and tables
- Step 6: Add location data to the database
- Step 7: Create XS OData service
- Step 8: Create mobile application definition
- Step 9: Create Xcode project with SDK Assistant
- Step 10: Test the generated Xcode project
- Step 11: Add a map view
- Step 12: Add a custom controller class to the map view
- Step 13: Enable navigation to the map view
- Step 14: Display your current location on the map
- Step 15: Display the stored geofences on the map
- Step 16: Add map zoom button
- Step 17: Store geofences for offline usage
- Step 18: Register geofences for monitoring
- Step 19: Detect geofence events
- Step 20: Testing your geofences
- Step 21: Where to go from here
- Back to Top | https://developers.sap.com/tutorials/fiori-ios-scpms-geolocation.html | CC-MAIN-2020-05 | en | refinedweb |
Hi all,
I’ve been messing around with the Censor Dispenser exercise for a while, in particular part 3 where the goal is to write a function that censors items occurring in a list. So far, I’ve come up with this:
proprietary_terms = ["she", "personality matrix", "sense of self", "self-preservation", "learning algorithm","herself", "her", ] def censored(email, to_censor): for word in to_censor: email = email.replace(word, 'XXXX') return email print(censored(email_two,proprietary_terms))
Which replaces the words in the list with ‘XXXX’. It’s not as refined as the solution code:
def censor_two(input_text, censored_list): for word in censored_list: censored_word = "" for x in range(0,len(word)): if word[x] == " ": censored_word = censored_word + " " else: censored_word = censored_word + "X" input_text = input_text.replace(word, censored_word) return input_text print(censor_two(email_two, proprietary_terms))
Now, when printing both, the output is slightly different (my code just replaces words in a string whereas the solution replaces each letter in a word occurring in the input list with an ‘X’).
There are three problems here that the solution also doesn’t address:
- ‘herself’ will be replaced by ‘XXXself’ (because “her” is before “herself” in the input list)
- ‘She’ will not be replaced because the string is capitalised.
- Words containing a string from the input list will be censored as well, e.g. researcXXXs.
I’ve tried resolving these but am all out of ideas.
For 1. I took the lazy approach and moved ‘herself’ before ‘her’ in the list). I guess this has to do with list iteration.
For 2. I want to check the email against capitalised words in the list. I tried to resolve this by adding .title() here and there:
def censored(email, censor): for word in censor: if word.title() in email: email = email.replace(word.title(),'XXXX') return email
The problem is that although “She” and “Her” are now censored, lowercase strings ‘she’ isn’t.
For 3 , I though adding a space before the item in the list would prevent words such as ‘researcher’ from being censored:
def censored(email, censor): for word in censor: if " "+word in email: email = email.replace(word, 'XXXX') return email
This doesn’t make a difference, and I don’t really know why it doesn’t.
Now, I added an elif statement to the code I made for problem 2, which somehow resolved problem 3:
def censored(email, censor): for word in censor: if word.title() in email: email = email.replace(word.title(),'XXXX') elif word in email: email = email.replace(word,'XXXX') return email
Now I guess this got resolved because I capitalised the string ‘her’ to ‘Her’ which doesn’t match in ‘researchers’. I just don’t really know what part of the code makes it do this. Swapping the if and elif statements does the same as the code I initially wrote.
Any thoughts on how to get a clear code that takes care of the three issues mentioned?
Thanks,
Twan
EDIT: would converting the entire string to a list, then run all the code to censor items from that list, and then convert the list back to a string work? | https://discuss.codecademy.com/t/improved-censor-dispenser/440283/2 | CC-MAIN-2020-05 | en | refinedweb |
Java provide benefits of avoiding thread pooling using inter-thread communication. The wait(), notify(), and notifyAll() methods of Object class are used for this purpose. These method are implemented as final methods in Object, so that all classes have them. All the three method can be called only from within a synchronized context.
wait()and
sleep()
Pooling is usually implemented by loop i.e to check some condition repeatedly. Once condition is true appropriate action is taken. This waste CPU time.
Deadlock is a situation of complete Lock, when no thread can complete its execution because lack of resources. In the above picture, Thread 1 is holding a resource R1, and need another resource R2 to finish execution, but R2 is locked by Thread 2, which needs R3, which in turn is locked by Thread 3. Hence none of them can finish and are stuck in a deadlock.
class Pen{} class Paper{} public class Write { public static void main(String[] args) { final Pen pn =new Pen(); final Paper pr =new Paper(); Thread t1 = new Thread() { public void run() { synchronized(pn) { System.out.println("Thread1 is holding Pen"); try{ Thread.sleep(1000); } catch(InterruptedException e){ // do something } synchronized(pr) { System.out.println("Requesting for Paper"); } } } }; Thread t2 = new Thread() { public void run() { synchronized(pr) { System.out.println("Thread2 is holding Paper"); try { Thread.sleep(1000); } catch(InterruptedException e){ // do something } synchronized(pn) { System.out.println("requesting for Pen"); } } } }; t1.start(); t2.start(); } }
Thread1 is holding Pen Thread2 is holding Paper
For more details, visit the following: Deadlocks | https://www.studytonight.com/java/interthread-communication.php | CC-MAIN-2020-05 | en | refinedweb |
All Blog Posts Tagged 'E-commerce' - Data Science Central 2020-01-18T04:34:58Z Ten strategies to implement AI on the Cloud and Edge tag: 2020-01-16T20:30:00.000Z ajit jaok…< ultimately, this will a free book on Data Science Central. I will take a use-case based approach i.e. each section would start with a use case.</p> <p>Firstly, </p> .</p> <p> </p> <p>In this post, we outline ways in which the cloud and the edge can work together to deploy AI on edge devices. The post is based on my teaching at the University of Oxford - Cloud and Edge implementations course</p> <p> </p> <p>Before we proceed, let us clarify some terminology that we will use in this post:</p> <ul> <li><strong>IoT (Internet of Things)</strong> – refers to smart sensors which have some sensing or actuating</li> </ul> <p></p> <ul> <li><strong>Edge computing</strong>.</li> </ul> <h2> </h2> <h2><span style="text-decoration: underline;">Flow of Data for AI on Edge devices</span></h2> <p>The overall flow of data for AI on edge devices is as follows</p> <p> </p> <ul> <li>Data leaves the Edge </li> <li>Data comes to rest in Cloud</li> <li>Not all data may be sent from the edge to the cloud i.e. in many cases, data may be aggregated at edge devices and only a summary maybe sent to the cloud</li> <li>Typically, machine learning and deep learning models are trained in the cloud</li> <li>The model is deployed to the edge</li> <li>Inference could be at the edge (device or embedded), cloud or in the stream</li> <li>Typically, models are deployed in containers through a webservice, on specific hardware, using serverless technologies, using CI/CD, using Kubernetes</li> <li>Finally, the whole system could be modelled as a Digital Twin</li> </ul> <p> </p> <h2><span style="text-decoration: underline;">Deployment strategies to implement AI on the Cloud and the Edge</span></h2> <p>Below are the possible deployment strategies for AI on Edge devices</p> <p> </p> <ol> <li>Edge processing – computer vision</li> <li>Edge processing – web service APIs</li> <li>Edge processing – Non-computer vision (ex: sensor-based data)</li> <li>Big Data Streaming strategies – Spark/ MLFlow, Kubeflow, Tensorflow extended etc</li> <li>Containers (docker and kubernetes)</li> <li>Serverless (Edge) – complements containers?</li> <li>CI/CD for Edge devices</li> <li>AI in hardware – inference, AI Chips, FPGAs, Vision Processing Units – Intel Movidius</li> <li>Streaming: Apache Kafka, Splunk</li> <li>Digital Twin</li> </ol> <p>Please let me know if I have missed any</p> <h2><span style="text-decoration: underline;">Conclusion</span></h2> <p>As I mentioned above, I will elaborate each of these strategies in subsequent posts. In each case, we will focus on the implementation of AI / ML for that strategy. Rather, I will discuss how ML / AI can be implemented with it</p> <p> </p> <p>Finally, some may think – this is all an overkill. Indeed, many applications do not need such a comprehensive approach. However, as larger/enterprise IoT applications get deployed, especially using the cloud, we will see these ideas being deployed.</p> <p> </p> <p>I list below a section from 2019 highlights from a niche analyst firm I follow </p> <p>Their ‘Most important IoT technology evolution’ for 2019 is Containers/Kubernetes</p> <p>I very much agree with the below. I have used the same in my teaching. While containers are not on the radar of many IoT developers, in my view, they should be for scalable applications. <br/> <br/> <em>IT architectures are fundamentally changing. Modern (cloud-based) applications build on containers, thereby bringing a whole new set of flexibility and performance to deployments. This is also becoming true for any centralized or edge IoT deployment.<br/> <br/> It is fair to say that by now, Google’s open-source platform Kubernetes has largely won the race of container orchestration platforms, and Docker is the most popular container runtime environment (desipite the companies financial issues)<br/> <br/> 2019 saw several heavyweights in the IT and OT industry refine their container strategy:<br/> <br/>.<br/><br/> Siemens. Industrial giant Siemens in October 2019 bought Pixeom, a software-defined edge platform, with the goal to embrace container technology for edge applications in factories. The Pixeom technology is built on the Docker runtime environment.<br/> <br/>.</em></p> <p><em> </em></p> <p><span style="text-decoration: underline;"><strong><em>About the image</em></strong></span></p> <p>I created the image based on Azure but the concepts apply to other cloud platforms also</p> <p>To keep it uncluttered, the image does not include all the strategies listed above</p> <p><span style="text-decoration: underline;"><strong>About me:</strong></span> Pls see <a href="">Ajit Jaokar - Linkedin</a> </p> <p></p> <p><a href="" target="_blank" rel="noopener"><img src="" class="align-full"/></a></p> Thursday News, January 16 tag: 2020-01-16T20:00:00.000Z Vincent Granville …</a></li> </ul> R data.table for Ohio Voter Registration/History</a></li> <li><a href="">Importance of Hyper-parameters in Model development</a></li> <li><a href="">Docker in 10 minutes</a></li> <li><a href="">5 Most Preferred Programming Languages for AI Engineers in 2020</a></li> <li><a href="">Typo Fixes in ML Coding in a Weekend</a></li> </ul> <p><strong>Articles</strong></p> <ul> <li><a href="">Just How Much Does the Future Depend on AI?</a></li> <li><a href="">2020 Challenge: Unlearn to Change Your Frame</a></li> <li><a href="" target="_blank" rel="noopener">Ten strategies to implement AI on the Cloud and Edge</a></li> <li><a href="">Top Data Science Use Cases in HR</a></li> <li><a href="">Machine Learning in Banking – Opportunities, Risks, Use Cases</a></li> <li><a href="">RPA + Machine Learning = Intelligent Automation</a></li> </ul> <p><strong>Upcoming DSC Webinar</strong></p> <ul> <li><a href="" target="_blank" rel="noopener">Weaponizing Data in the Fight Against White Collar Crimes</a> - Jan 21</li> </ul> <p>Enjoy the reading!</p> Top Data Science Use Cases in HR tag: 2020-01-16T19"/>< results.</span></p> <p class="justifyfull" dir="ltr"><span.</span></p> <p class="justifyfull" dir="ltr"><span>To familiarize you with the benefits that data science brings to the sphere of HR, we tried to compile a list of the most important data science use cases. Hopefully, these examples will disclose some secrets of modern recruiting and human resources management.</span></p> <ul> <li dir="ltr"><h2 dir="ltr"><span style="font-size: 14pt;"><strong>Talent analytics maturity model</strong></span></h2> </li> </ul> <p class="justifyfull" dir="ltr"><span>Concerning all the fuss around data use and analytics in HR it would be good to know how it has started. Regarding the</span><span>talent analytics, data science has proved its efficiency in the course of the years. First of all, an employer should find his place at the talent analytics learning curve. Analytics have been implemented</span><span>on various levels of business organization practically in all companies, no matter how big they are. Building the talent analytics maturity model is a complex, long-term and voluntary process. It is usually performed step by step. </span></p> <p class="justifyfull" dir="ltr"><span>Here are the major levels of analytics implementation in HR:</span></p> <h3 dir="ltr"><strong>Operational reporting</strong></h3> <p class="justifyfull" dir="ltr"><span>This level involves developing dashboards and reports presenting the measurement of efficiency and compliance. </span></p> <h3 dir="ltr"><strong>Advanced reporting </strong></h3> <p class="justifyfull" dir="ltr"><span>Knowing where you can go deeper into details. Further filtering, analysis and processing the data allows building a multi-dimensional dashboard presenting data for each separate employee. </span></p> <h3 dir="ltr"><strong>Advanced analytics </strong></h3> <p class="justifyfull" dir="ltr"><span</span><span>. This answer may be transformed into actionable decisions for your company.</span></p> <h3 dir="ltr"><strong>Predictive analytics </strong></h3> <p class="justifyfull" dir="ltr"><span>Using statistical data gained at levels 1, 2, and 3 you can create and develop predictive models. Reaching this level you are firmly stating that HR analytics plays one of the key roles in strategic decision-making. </span></p> <p class="justifyfull" dir="ltr"><span</span><span>. </span></p> <ul> <li dir="ltr"><h2 dir="ltr"><span style="font-size: 14pt;"><strong>Recruitment </strong></span></h2> </li> </ul> <p class="justifyfull" dir="ltr"><span>Uncovering the insights is always beneficial. That is why predictive analytics is making its way into the recruitment process. This is a technology capable of learning on the historical data and making predictions about the top-performing hires, selection process, cognitive skills assessment. </span></p> <p class="justifyfull" dir="ltr"><span>Data science in recruitment can help in improving talent acquisition process, employee assessment, recruitment etc.</span></p> <p class="justifyfull" dir="ltr"><span>Let us consider several vivid use cases as a proof of HR analytics vast potential and influence.</span></p> <ul> <li dir="ltr"><h3 dir="ltr"><strong>Use case #1 Predicting top-performing hires</strong></h3> </li> </ul> <p class="justifyfull" dir="ltr">The HR manager faces the problem of choosing the best employee among a considerable number of candidates. A key point to take into account is the ability of a candidate to perform a specific task. Here Google serves as a fascinating example.. </p> <p class="justifyfull" dir="ltr"> <a href="" target="_blank" rel="noopener"><img src="" class="align-center"/></a></p> <ul> <li dir="ltr"><h3 dir="ltr"><strong>Use case # 2 Workforce forecasting </strong></h3> </li> </ul> <p class="justifyfull" dir="ltr"><span. </span><span> </span></p> <ul> <li dir="ltr"><h3 dir="ltr"><strong>Use case #3 Cognitive Based Talent Acquisition</strong></h3> </li> </ul> <p class="justifyfull" dir="ltr"><span </span><span>t</span><span>o list the hiring profiles containing desirable traits</span><span>to which the candidates’ results are compared. </span></p> <p dir="ltr"><span style="font-size: 14pt;"><strong>Retention</strong></span></p> <p dir="ltr"><span.</span></p> <ul> <li dir="ltr"><h3 dir="ltr"><strong>Flight risk assessment</strong></h3> </li> </ul> <p class="justifyfull" dir="ltr"><span>It seems so natural</span><span> to </span><span>use previously received data and their analysis to forecast some future trends, events or behavior. Monitoring KPIs (Key Performance Indicators) help to define whether the actions of staff members, teams, departments, and individuals were successful and enables to foresee possible risks in the future. </span></p> <p class="justifyfull" dir="ltr"><span>Almost every aspect of HR may be automated, accelerated and streamlined starting with the job advertising to performance analysis.</span></p> <p class="justifyfull" dir="ltr"><span>The most vivid representation of the practical use of predictive analytics</span><span. </span></p> <ul> <li dir="ltr"><h2 dir="ltr"><span style="font-size: 14pt;"><strong>Performance Management </strong></span></h2> </li> </ul> <p class="justifyfull" dir="ltr"><span. </span></p> <ul> <li dir="ltr"><h3 dir="ltr"><strong>Sales team productivity management</strong></h3> </li> </ul> <p class="justifyfull" dir="ltr"><span.</span></p> <p class="justifyfull" dir="ltr"><span><a href="" target="_blank" rel="noopener"><img src="" class="align-center"/></a></span></p> <ul> <li dir="ltr"><h3 dir="ltr"><strong>Succession planning</strong></h3> </li> </ul> <p class="justifyfull" dir="ltr"><span.</span></p> <ul> <li dir="ltr"><h3 dir="ltr"><strong>Pay for performance</strong></h3> </li> </ul> <p class="justifyfull" dir="ltr"><span. </span></p> <ul> <li dir="ltr"><h2 dir="ltr"><span><strong>Engagement chat</strong> </span></h2> </li> </ul> <p class="justifyfull" dir="ltr"><span>An engaged employee proves to be a reliable and productive employee. This dependency is well familiar to HR managers. Thus, a good HR would recommend you foster the engagement of your team to increase performance.</span></p> <p class="justifyfull" dir="ltr"><span>Start creating a productive office environment by engaging your team members to communicate. Thousands of apps and tools are solely developed to facilitate communication between team members. Messaging applications allows sending text messages, sharing visual items and files, scheduling calls and live discussions, communicating in groups or individually. </span></p> <h2 dir="ltr"><strong>Conclusion </strong></h2> <p class="justifyfull" dir="ltr"><span>Does it mean that HR needs to master statistics, machine learning, and programming? Not only HR! A recent study shows</span><span>. </span></p> <p class="justifyfull" dir="ltr"><span. </span></p> <p class="justifyfull" dir="ltr"><span. </span></p> Importance of Hyper-parameters in Model development tag: 2020-01-16T09:00:00.000Z Janardhanan PS …</p> . They are often specified by practitioners experienced in machine learning development. They are often tuned independently for a given predictive modeling problem.<br/> <br/> Building an ML model is a long process that requires domain knowledge, experience and intuition. In ML, hyper-parameter optimization or tuning is the problem of choosing a set of optimal hyper-parameters for a learning algorithm. We may not know the best combination of values for hyper-parameters in advance for a given problem. We may use rules of thumb, copy values used on other problems, or search for the best value by trial and error. When a machine learning algorithm is tuned for specific problems by changing the higher level APIs for optimization, we need to tune the hyper-parameters also to discover the parameters that results in a model with higher accuracy in prediction. Hyper-parameter tuning is often referred to as searching the parameter space for optimum values. With Deep Learning models, the search space is usually very large, and a single model might take days to train. The common Hyper-parameters are:</p> <ul> <li>Epochs - A full training pass over the entire dataset such that each example has been seen once.</li> <li>Learning rate - A scalar used to train a model via gradient descent. During each iteration, the gradient descent algorithm multiplies learning rate by the gradient. Resulting product is called the gradient step.</li> <li>Momentum in Stochastic Gradient Descent - The coefficient of friction controlling the rate at which the descent happens, when it goes towards the bottom.</li> <li>Regularization method - Regularization is used to prevent overfitting by the model. Different kinds of regularization include L1 regularization (Lasso) and L2 regularization (Ridge)</li> <li>Regularization Rate - The penalty on a model's complexity. The scalar value ƛ specifies the importance of the regularization function relative to the loss function. Raising the value of ƛ reduces over-fitting at the cost of model accuracy.</li> <li>Early Stopping - Regularization by early stopping callback function tests a training condition for every epoch and if a set number of epochs elapses without showing any improvement, then it automatically stops the training.</li> <li>The patience parameter is the number of epochs to check for improvement.</li> <li>K in k-means clustering - Number of clusters to be discovered</li> <li>C and Sigma - For Support Vector Machines</li> <li>Number of hidden layers - For Neural Networks</li> <li>Number of units per layer - For Neural Networks</li> <li>max_depth - Maximum depth of a tree in Random Forest method</li> <li>n_estimators- Number of trees in the Random Forest. More number of trees gives better performance</li> </ul> <p>Model optimization using hyper-parameter tuning is a search problem to identify the ideal combination of these parameters. The commonly used methods for optimization using hyper-parameters are; Grid search, Random search and Beyesian optimization. In Grid search, a list of all possible values for each hyper-parameter in a specified range is constructed and all possible combinations of these values are tried sequentially. In grid search, the number of experiments to be carried out increases drastically with the number of hyper-parameters. Rather than training on all possible configurations, in Random search method the network is trained only on a subset of the configurations. Choice of the configurations to be trained is randomly picked up and only the best configuration is trained in each iteration. In Beyesian optimization, we are using ML techniques to figure out the hyper-parameters. It predicts regions of the hyper-parameter space that might give better results. Gaussian process is the technique used and it finds out the optimal hyper parameters from the results of the previously conducted experiments with various types of parameter configurations.</p> Does Big Data Impact Business Mobile App Development? tag: 2020-01-16T05:43:50.000Z Veronica Hanks .…</p> . <a href="" target="_blank" rel="noopener">Big Data helps mobile developers</a> get insights from the information generated through apps by users every day.</p> <p><br/>Check below how big data impacts a business’s mobile app development. <br/> <br/><strong>1. Create customer-driven mobile apps</strong></p> <p><br/>The most preferred mobile app is the one which is free from bugs, is fast, easy to use and meets the needs of the users. Businesses should carefully analyze the customer’s experience using big data so that they can create better and usable apps. The data will provide information about what real customers want when they use the app.</p> <p>The main aim of using this data is to get ideas to create apps with greater user experience. By analyzing the data, mobile app developers will know the behavior of customers and how they interact with the app. Mobile app developers can use this to enhance the existing app or to create a better version.</p> <p></p> <p><strong>2. Big Data fuels user experience analytics</strong></p> <p><br/>An extensive analysis of customer experience is required for app development. Using Big Data developers can collect full details about the behavior of the user which can be focused while assimilating user experience in app development. <a href="" target="_blank" rel="noopener">Hiring the right mobile app developers</a> can think of new ideas for making new apps on the basis of how the users want it by analyzing through big data.</p> <p>For example - Developers can analyze the top-rated apps in fashion if they want to create a similar app and could analyze apps like H&M, Zara, and understand what the users really want to do with their apps. They can add some innovative features to make their app more usable. <br/> <br/><strong>3. A new age of marketing</strong></p> <p><br/>The new age of marketing has changed the way businesses market their products or services. Business Intelligence and Big Data have changed the way developers build apps. Well known marketers such as <a href="" target="_blank" rel="noopener">SalesForce Marketing</a> and <a href="" target="_blank" rel="noopener">CheetahMail</a> are also using big data to build a better app for customer experience. <br/>Companies who are targeting professional-level users should utilize big data analytics of mobile apps.</p> <p><strong>4. Big data as a crucial aspect of the future app</strong></p> <p><br/>The market of mobile apps is expected to reach high volumes due to a large number of users who have shifted to the use of mobile phones and tablets. Therefore it is better to develop better usable mobile apps. Mobile apps are easier to use because of their simple and easily navigable display. <br/>Analysis of big data is the most effective way to obtain information making it a big investment for the business.</p> <p><strong>5. Mobile Advertising</strong></p> <p></p> <p>Big Data also suggests how and where to target audience and this approach is very helpful as it is carried out with the help of proper analysis and also leads to an increase in traffic. Big Data uses demographic data, social behavior, and customers’ purchasing patterns to modify the strategies according to the user’s interest.</p> <p><strong>6. Big Data helps bridge international boundaries</strong></p> <p></p> <p>If you are thinking of expanding your business globally, it is important to know how different users respond. These businesses need to analyze customer trends, ie. their behavior, interests, etc. If the business plans to develop a mobile app it should have the proper interface, support, and functionality that will help the app to get more traffic.</p> <p><strong>7. Purchasing options for in-app</strong></p> <p></p> <p>The in-app purchases let you drive sales. The business should have an idea about the purchase format and what is best suited for your app. Businesses can search for similar apps and can take ideas from them. This will help you to focus on the experience of the customer and how to deal with them to increase sales.</p> <p><strong>8. Target marketing locally</strong></p> <p></p> <p>It is important to do the target area marketing for regular interaction with users. This helps promote your business locally <a href="" target="_blank" rel="noopener">with the help of social media strategies</a> and SEO. Through this, your customers will get to know about the events and offers which the app is providing.</p> <p>With a rapid increase in the usage of mobile phones in the market, there is also a requirement of smart mobile apps to enhance the customer experience. Businesses are using business intelligence and Big Data to analyze the user’s behavior, their interest, and demographics and accordingly develop more engaging apps.</p> Blockchain for Fintech: now and tomorrow tag: 2020-01-15T19:07:12.000Z Valery Geldash …</span></p> means that you don’t need to take the counterpart’s</span> <span style="font-weight: 400;">word, but you can be sure in systems’ mathematical reliability. </span></p> <p></p> <p><span style="font-weight: 400;">Most payments can proceed during a couple of days due to a number of payment processors. While Blockchain transactions are held within a seconds and require minimal fees or no fees at all. </span></p> <p></p> <p><span style="font-weight: 400;">In addition, core issues falling within the FinTech industry are incomplete system protection causing the risk of fraud and inaccurate information; inability to access financial services of some segment of society in certain areas. Blockchain solves all of them due to its decentralized nature by providing transparency, data security by using cryptography methods, shared publicity and automation.</span></p> <p></p> <p><span style="font-weight: 400;">Q: Why does the FinTech sector need Blockchain? </span></p> <p></p> <p><span style="font-weight: 400;">A: Apparently, Blockchain's unique features fit FinTech needs perfectly.</span></p> <p></p> <p><span style="font-weight: 400;">As I already mentioned, Blockchain absolutely solves the issue of trust in transactions. It is the revolutionary solution as so far all operational solutions are centralized. Blockchain enables decentralized solutions and avoid human error.</span></p> <p></p> <p><span style="font-weight: 400;">Also, it can be noted that Blockchain-based transactions of any type and difficulty can be monitored in real-time, the technology enables peer-to-peer transactions due to cryptocurrencies providing full control over assets and minimizing fees due to its transparency and consistency, as well as reduce time for clearance and settlement processes as Blockchain records are seen in real-time, build trust between businesses and consumers.</span></p> <p></p> <p><span style="font-weight: 400;">Q: What are the top use cases for Blockchain in FinTech?</span></p> <p><span style="font-weight: 400;">A: Banking and Cross-Border Micro-Payments Particularly</span></p> <p><span style="font-weight: 400;">The whole financial sector is built on operations that require the highest level of security, reliability and short transaction time. The remittance sector is one of the most perspective fields for Blockchain deployment. Due to opportunities provided by Blockchain, people and businesses will have full control over their assets without any third-parties and not reasonable fees. </span></p> <p></p> <p><span style="font-weight: 400;">Cryptocurrency</span></p> <p><span style="font-weight: 400;">Cryptocurrencies as the first Blockchain use case ever. Being based on and provided by Blockchain cryptocurrency is considered as a future of money. Nowadays it’s getting more and more casual to pay with cryptocurrencies in big international deals and even while shopping. </span></p> <p><span style="font-weight: 400;">Therefore, if you want to know <a href="" target="_blank" rel="noopener">how to build a bitcoin wallet</a> for your own purposes and business, we have an article about it.</span></p> <p></p> <p><span style="font-weight: 400;">Stock Trading </span></p> <p><span style="font-weight: 400;">The Blockchain technology can remove numerous controversies and stock manipulation as everything in the stock market will be built with smart contracts, decreasing stock trading cost and time. </span></p> <p></p> <p><span style="font-weight: 400;">Voting and Registers, including Governmental.</span></p> <p><span style="font-weight: 400;">This sectors highly requires data reliability and veracity, permanence and impossibility of forgery more than any and Blockchain gladly provides it. </span></p> <p></p> Multi Gigabyte R data.table for Ohio Voter Registration/History tag: 2020-01-15T14:29:56.000Z steve miller <p><a href="" rel="noopener" target="_blank"><img class="align-full" src=""></img><…</p> <p><a href="" target="_blank" rel="noopener"><img src="" class="align-full"/>< came across an<span> </span><a href="" rel="noopener" target="_blank">article in the New York Times</a><span> </span>on voter purging in Ohio. Much of the research surrounding the article was driven by data on the<span> </span><a href="" rel="noopener" target="_blank">voting history of Ohio residents</a><span> </span>readily available to the public.</p> <p>When I returned home, I downloaded the four large csv files and began to investigate. The data consisted of over 7.7M voter records with in excess of 100 attributes. The "denormalized" structure included roughly 50 person-location variables such as address and ward, and a close to 50 variable "repeating group" indicating voter participation in specific election events and characterized by a concatenated type-date attribute with an accompanying voted or not attribute.</p> <p>My self-directed task for this blog was to load the denormalized data as is, then create auxiliary "melted" data.tables that could readily be queried i.e. transform from wide to long. The query type of interest revolved on counts/frequencies of the dimensions election type, date, and participation. The text will hopefully elucidate both the power and ease of programming with R's data.table and tidyverse packages.</p> <p>The technology used is Wintel 10 along with JupyterLab 1.2.4 and R 3.6.2. The R data.table, tidyverse, magrittr, fst, feather, and knitr packages are featured.</p> <p>See the entire blog <a href="" target="_blank" rel="noopener">here.</a></p> <p><a href="" target="_blank" rel="noopener"></a></p> Machine Learning in Banking – Opportunities, Risks, Use Cases tag: 2020-01-15T13:00:00.000Z Roman Chuprina <p></p> <p><a href="" rel="noopener" target="_blank"><img class="align-center" src=""></img><> <p></p> <p><a href="" target="_blank" rel="noopener"><img src="" class="align-center"/><> <h2><span id="Artificial_Intelligence_in_Banking_Statistics">Artificial Intelligence in Banking Statistics</span></h2> <ul> <li>According to a forecast by the research company Autonomous Next, banks around the world will be able to reduce costs by 22%<span> </span>by 2030 through using artificial intelligence technologies. Savings could reach $1 trillion.</li> <li>Financial companies employ 60% of all professionals <a href="" target="_blank" rel="nofollow external noopener noreferrer"></a>who have the skills to create AI systems.</li> <li>It is expected that face recognition technology will be used in the banking sector to prevent credit card fraud. Face recognition technology will increase its annual revenue growth rate by over<span> 20% in 2020.</span></li> </ul> <h2><span id="How_Artificial_Intelligence_is_Used_in_Banking">How Artificial Intelligence is Used in Banking</span></h2> <p>The data that banks receive from their customers, investors, partners, and contractors is dynamic and can be used for different purposes, depending on which parameters are used to analyze them. Basically, the scope of AI for banking can be grouped into four large groups.</p> <h3><span id="Improving_Customer_Experience">Improving Customer Experience</span></h3> <p>When banks and other financial organizations got the opportunity to learn everything about a user and his behavior on a network, they simultaneously gained the opportunity to improve the user experience as much as possible.</p> <p><span id="Chatbots">Chatbots</span></p> <p.</p> <p><span id="Personalized_Offers">Personalized Offers</span></p> <p.</p> <p><span id="Customer_Retention">Customer Retention</span></p> <p.</p> <h2><span id="Machine_Learning_for_Safe_Bank_Transactions">Machine Learning for Safe Bank Transactions</span></h2> <p>The main advantage of machine learning for the financial sector in the context of fraud prevention is that systems are constantly learning. In other words, the same fraudulent idea will not work twice. This works great for credit card fraud detection<span> </span>in the banking industry.</p> <h3><span id="How_Artificial_Intelligence_Makes_Banking_Safe">How Artificial Intelligence Makes Banking Safe</span></h3> <p>Most financial transactions are made when the user pays for purchases on the Internet or at brick-and-mortar businesses. This means that most fraudulent transactions also occur under the pretext of buying something. AI in banking provides an opportunity to prevent this from happening. For example:</p> <ul> <li>Cameras with face recognition can determine whether a credit card is in the hands of the rightful owner when buying at a physical point of sale.</li> <li>Tracking suspicious IP addresses from which a financial transaction occurs may help prevent fraud with discount coupons as well as identify fraudulent intentions. For example, if someone buys a product in order to return a fake one in its place.</li> </ul> <h3><span id="Market_Research_and_Prediction">Market Research and Prediction</span></h3> <p.<br/> </p> <h3><span id="Cost_Reduction">Cost Reduction</span></h3> <p.</p> <h2><span id="Machine_Learning_Use-Cases_in_American_Banks">Machine Learning Use-Cases in American Banks</span></h2> <p>Here are some examples of how machine learning works at leading American banks.</p> <h3><span id="JP_Morgan_Chase">JP Morgan Chase</span></h3> <p>This leading bank in the United States has developed a smart contract system<span> </span.</p> <h3><span id="Bank_of_America">Bank of America</span></h3> <p.</p> <h3><span id="Wells_Fargo">Wells Fargo</span></h3> <p.</p> <h3><span id="Citibank">Citibank</span></h3> <p>Citibank has developed a powerful fraud prevention system that tracks abnormalities in user behavior. In particular, the system is polished to detect fraudulent credit card transactions when shopping on the Internet.</p> <h3><span id="US_Bank">US Bank</span></h3> <p> <h2><span id="Are_There_Any_Risks_in_Adopting_Machine_Learning_for_Banking">Are There Any Risks in Adopting Machine Learning for Banking?</span></h2> <p>Of course, Artificial Intelligence technology can revolutionize the banking sector. However, there are certain risks — but they are mostly associated with the novelty of technologies and the lack of full understanding among users about how they really work.</p> <h3><span id="Job_Cuts">Job Cuts</span></h3> <p.</p> <h3><span id="Less_Trust_Due_to_Less_Human_Contact">Less Trust Due to Less Human Contact</span></h3> <p.</p> <h3><span id="Ethical_Risks">Ethical Risks</span></h3> <p.<br/> </p> <h3><span id="False-Positive_Results_Risks">False-Positive Results Risks</span></h3> <p.<br/> </p> <h2><span id="How_to_Choose_the_Best_Partner_to_Develop_Machine_Learning_Solutions_for_Your_Financial_Service">How to Choose the Best Partner to Develop Machine Learning Solutions for Your Financial Service</span></h2> <p.</p> <p> </p> <p>In addition, when choosing a potential AI vendor, make sure the company already has experience in developing solutions specifically for the financial sector. Why? Because the security requirements are higher than in any other field, perhaps only with the exception of healthcare. Here is our article on Top of 6 AI Companies with more detailed advice on choosing the right vendor.<br/> </p> <h2><span id="Conclusion">Conclusion</span></h2> <p. </p> <p><em>Originally posted <a href="" target="_blank" rel="noopener">here</a></em></p> Docker in 10 minutes tag: 2020-01-15T03:00:00.000Z Igor Bobriakov <p><span style="font-weight: 400;"><a href="" rel="noopener" target="_blank"><img class="align-center" src=""></img><…</span></p> <p><span style="font-weight: 400;"><a href="" target="_blank" rel="noopener"><img src="" class="align-center"/><?</span></p> <h1><span style="font-size: 12pt;"><strong>What is Docker</strong></span></h1> <p><span style="font-weight: 400;".</span></p> <p><span style="font-weight: 400;".</span></p> <p><span style="font-weight: 400;">Docker allows you to deploy code everywhere from local machine or data center to cloud infrastructure.</span></p> <h2><span style="font-size: 12pt;">Docker architecture</span></h2> <p><span style="font-weight: 400;">Docker is a client-server app.</span> <b>Docker daemon</b> <span style="font-weight: 400;">serves your app (create, deploy, shut down, etc..) and</span> <b>docker client</b> <span style="font-weight: 400;">interacts with it to manage its activity. Client and server can exist in one system and also docker client can connect to the remote daemon.</span><span style="font-weight: 400;"><br/></span> <span style="font-weight: 400;">You need to know 3 main terms:</span></p> <ol> <li style="font-weight: 400;"><span style="font-weight: 400;">Image</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Registry</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Container</span></li> </ol> <p><b>Image</b> <span style="font-weight: 400;">is a read-only pattern that is used to build a container. The Image can contain an Apache Nginx or Kafka with Ubuntu, etc.. You can add, update and share your images. The Image is a construction component.</span></p> <p><span style="font-weight: 400;">Each image consists of layers. Docker uses UnoinFS</span> <span style="font-weight: 400;">to combine these layers into one image. UnionFS implements a</span> <span style="font-weight: 400;".</span></p> <p><span style="font-weight: 400;".</span></p> <p><span style="font-weight: 400;".</span></p> <p><span style="font-weight: 400;">At the heart of each image is a basic image. For example, ubuntu, the base image of Ubuntu, or Debian, the base image of the Debian distribution. You can also use images as a basis for creating new images.</span></p> <p><b>Registries</b> <span style="font-weight: 400;">are storage for images. It allows you to save own or use any public ones. There are public and private registries. The official Docker registry is called Docker Hub. Registries are a distribution component.</span></p> <p><b>Containers</b> <span style="font-weight: 400;".</span></p> <p><span style="font-weight: 400;"><a href="" target="_blank" rel="noopener"><img src="" class="align-center"/></a></span></p> <p style="text-align: center;"><span style="font-weight: 400;"><b>Structure of containerized applications with Docker</b></span></p> <p><span style="font-weight: 400;".</span></p> <p><span style="font-weight: 400;"><a href="" target="_blank" rel="noopener"><img src="" class="align-center"/></a></span></p> <p style="text-align: center;"><span style="font-weight: 400;"><b>Structure of containerized applications with Virtual Machine</b></span></p> <h1><span style="font-size: 12pt;">How does it work?</span></h1> <p><span style="font-weight: 400;"</span> <span style="font-weight: 400;".</span></p> <p><span style="font-weight: 400;">When docker daemon starts the container, it creates a read/write level on top of the image (using the union file system, as mentioned earlier), in which the application can be launched.</span></p> <p><span style="font-weight: 400;">Docker uses several namespaces to isolate containers such as PID, NET and UTC namespaces. Some of all listed before features uses a Linux kernel, So you need a special virtual machine if you use Windows.</span></p> <h1><span style="font-size: 12pt;"><strong>Some of the common use-cases</strong></span></h1> <h2><span style="font-size: 12pt;">Simplifying Configuration</span></h2> <p><span style="font-weight: 400;".</span></p> <h2><span style="font-size: 12pt;">Developer Productivity</span></h2> <p><span style="font-weight: 400;".</span></p> <h2><span style="font-size: 12pt;">App Isolation</span></h2> <p><span style="font-weight: 400;".</span></p> <h2><span style="font-size: 12pt;">Server Consolidation</span></h2> <p><span style="font-weight: 400;">The application isolation abilities of Docker allows consolidating multiple servers to save on cost without the memory footprint of multiple OSes. It is also able to share unused memory across the instances. Docker provides far denser server consolidation as compared to VMs.</span></p> <h2><span style="font-size: 12pt;"><strong>Rapid Deployment</strong></span></h2> <p><span style="font-weight: 400;".</span></p> <h2><span style="font-size: 12pt;">Load balancing</span></h2> <p><span style="font-weight: 400;".</span></p> <h1><span style="font-size: 12pt;"><strong>Who uses Docker?</strong></span></h1> <p><span style="font-weight: 400; font-size: 12pt;">PayPal</span></p> <p><span style="font-weight: 400;">PayPal migrated 700+ applications to Docker Enterprise, running over 200,000 containers. This company also achieved a 50% productivity increase in building, testing and deploying applications. </span></p> <h2><span style="font-weight: 400; font-size: 12pt;">Visa</span></h2> <p><span style="font-weight: 400;">After just six months in production, Visa achieved a 10x increase in scalability for two customer-facing payment processing applications. Visa processes $5.8 trillion in transactions, while maintaining the company’s robust availability and security capabilities.</span></p> <h2><span style="font-weight: 400; font-size: 12pt;">Cornell University</span></h2> <p><span style="font-weight: 400;">Cornell University has achieved 13x faster application deployment by leveraging reusable architecture patterns and simplified build and deployment processes with Docker Enterprise. </span></p> <h2><span style="font-weight: 400; font-size: 12pt;">Other</span></h2> <p><span style="font-weight: 400;">BCG Gamma, Desigual, Jabil, Citizen Bank, GE Appliances, BBC News, Lyft, Spotify, Yelp, ADP, eBay, Expedia, Groupon, ING, New Relic, The New York Times, Oxford University Press.</span></p> <p></p> Key Graph Based Shortest Path Algorithms With Illustrations - Part 1: Dijkstra's And Bellman-Ford Algorithms tag: 2020-01-15T00:00:00.000Z Murali Kashaboina …</p> algorithms. Algorithms such as Dijkstra’s, Bellman Ford, A*, Floyd-Warshall and Johnson’s algorithms are commonly encountered. While these algorithms are discussed in many text books and informative resources online, I felt that not many provided visual examples that would otherwise illustrate the processing steps to sufficient granularity enabling easy understanding of the working details. As such, I had to use simple enough graphs to visualize the algorithmic flow for my own understanding and I wanted to share my examples along with the explanations through this article. Since there are many algorithms to illustrate, I decided to divide the article into several parts. In part 1, I have illustrated Dijkstra’s and Bellman-Ford algorithms. Before diving into algorithms, I also wanted to highlight salient points on the graph data structure.</p> <p><strong>Quick Primer On Graph Data Structure<br/></strong></p> <p>A graph is a data structure comprising of a finite non-empty set of vertices wherein some pairs of vertices are connected. In real life, such vertices represent real world objects wherein some pairs of such objects are related and such relationship is represented by a link connection. The link between a pair of vertices is referred to as an edge. Edges have directionality. In case of unidirectional edge, an arrow points from the tail vertex (source) to the head vertex (target) and hence link goes one way. As such, an edge between vertices v1 and v2 is an ordered pair (v1, v2) where v1 is the tail vertex and v2 is the head vertex. In case of a bidirectional edge, arrows point in both the directions and hence link goes both ways. As such, an edge between vertices v1 and v2 is unordered pair wherein both (v1, v2) and (v2, v1) represent the same edge. A graph which contains all unidirectional edges is called as a directed graph. A graph which contains all bidirectional edges is called as undirected graph. A graph in which some edges are unidirectional and some are bidirectional is called as mixed graph. The number of edges incident to a vertex is called as the degree of the vertex. The out-degree of a vertex is the number of directed edges incident to the vertex where the vertex is the tail and the in-degree of a vertex is the number of directed edges incident to the vertex where the vertex is the head. In addition, edges have weights. Edge weight represents the capacity or cost or distance of that edge. As such, edge weight can be positive or negative number. A path from vertex v1 to vertex vn is a sequence of vertices v1, v2, v3...vn in a graph such that the pairs (v1, v2), (v2, v3)…(vn-1, vn) are connected via edges in the graph. As such, two vertices are connected if a path exists between them in the graph. A path is said to be simple if all the vertices are distinct with the exception of the first and the last vertices. A path is said to be circular or cyclic if the first and the last vertex are same. A directed graph without any circular paths is called as Directed Acyclic Graph (DAG). The number of edges in a path represents the path’s length and the sum of the edge weights in the path represents the capacity or cost or distance of that path. If the path weight is negative in a cyclic path, then that path is referred to as negative cycle. </p> <p>A graph is said to be complete if each of its vertices is connected to all other vertices. If there are N vertices in a complete graph, then there will be N(N-1)/2 edges in the graph. Complete graphs are also commonly referred to as universal graphs.</p> <p><strong>Dijkstra’s Algorithm</strong></p> <p>Dijkstra’s algorithm is a greedy algorithm used to find the shortest path between a source vertex and other vertices in a graph containing weighted edges. An algorithm is said to be greedy if it leverages local optimal solution at every step in its execution with the expectation that such local optimal solution will ultimately lead to global optimal solution. As such, Dijkstra’s algorithm works on a greedy expectation that a sub-path between vertices A and B within a global shortest path between vertices A and C, is also a shortest path between A and B. The limitation in Dijkstra’s algorithm is that it may not work if there are negative edge weights and definitely will not work if there are negative cycles in the graph. The algorithm measures the shortest path from the source vertex to all other vertices by visiting a source vertex, measuring the path lengths from the source to all its neighboring vertices and then visiting one of the neighbors with the shortest path. Algorithm repeats these steps iteratively until it completes visiting all vertices in the graph.</p> <p></p> <p><a href="" target="_blank" rel="noopener"><img src="" class="align-center" width="750"/></a></p> <p style="text-align: center;"><em>Figure 1: Illustrations of Dijkstra's Algorithm</em></p> <p>Figure 1 illustrates the logical steps in the execution of Dijkstra’s algorithm. The algorithm starts with maintaining two sets; 1] a set with all unvisited vertices and 2] a set with all visited vertices which is initially empty. In addition, the algorithm maintains a tally of tentative shortest paths for each vertex in the graph measured thus far from the source vertex and keeps a reference to a predecessor vertex on the path to each vertex from the source vertex. As shown in figure 1, at step 1, the algorithm sets the distances from the source vertex, A in the example, to rest of the vertices as infinity and the distance from the source to itself as 0. At step 2, the algorithm measures the tentative distances to each of the unvisited neighbors of source vertex, B, C and E in the example. A distance of a vertex from the source is considered as tentative until algorithm confirms it to be the shortest distance. Tentative distance of vertex V, given the tentative distance of vertex U from the source, is measured using the formula d(U) + d(U,V) where d(U) is the tentative distance of vertex U from the source vertex and d(U,V) is the edge weight between vertices U and V. If newly measured tentative distance, i.e., d(U) + d(U,V), is less than the previously assigned tentative distance of vertex V, d(V), then the algorithm updates tentative distance of vertex V with the new value, i.e., d(V) = d(U) + d(U,V). This process of updating vertex distance if the newly measured distance is less than the previously assigned distance is commonly referred to as relaxation. If a vertex’s distance gets updated with newly measured distance lesser than its previous measured distance, then the vertex is considered as relaxed. In the example, algorithm relaxes vertices B, C and E and at the same time sets vertex A as their predecessor vertex. The algorithm marks the current source vertex A as visited, pops it out of the unvisited set and places it in the visited set. The algorithm then determines the neighbor vertex with shortest distance as the next vertex to be visited, vertex C in the example, and iterates to the next step. At step 3, the algorithm measures the tentative distances of unvisited neighbors of vertex C, i.e., B, D and E, relative to the original source vertex A. The algorithm relaxes vertex D with new tentative distance and sets its predecessor path vertex as C. The algorithm does not update the tentative distances of vertices B and E since their distances measured via vertex C are greater than their previously assigned tentative distances. As in step 2, the algorithm marks the current vertex C as visited, pops it out of the unvisited set and places it in the visited set. The algorithm then determines the unvisited vertex with the shortest distance as the next vertex to be visited, vertex B in the example, and iterates to the next step. Algorithm will repeat such steps until all the vertices have been marked as visited or there are no more connected vertices to evaluate. The shortest path from source vertex to any other vertex can then be determined by looking up the predecessor vertices from the evaluated table. For example, to determine the shortest path from vertex A to vertex G, table can be looked up to find the predecessor vertex of G which is D. The predecessor vertex of D is B and the predecessor vertex of B is A. As such, the shortest path from vertex A to vertex G is <em>{A,B,D,G}</em> with a shortest distance of 11.</p> <p>In a complete graph comprising of N vertices, where each vertex is connected to all other vertices, the number of vertices to be visited by the algorithm will be N. Also the number of vertices potentially relaxed each time a vertex is visited is also N. As such, the worst case time complexity of Dijkstra’s algorithm is in the order of NxN = N<sup>2</sup>.</p> <p><strong>Bellman-Ford Algorithm</strong></p> <p>Bellman-Ford algorithm is used to find the shortest paths from a source vertex to all other vertices in a weighted graph. Unlike Dijkstra’s algorithm, Bellman-Ford algorithm can work when there are negative edge weights. The core of the algorithm is centered on iteratively relaxing the path distances of vertices from the source vertex. If there are N vertices, the maximum number of edges from the source vertex to the N<sup>th</sup> vertex could possibly be (N-1) edges. As such, the algorithm iterates utmost (N-1) times to relax the vertex distances. In every iteration, the algorithm starts at the source vertex, walks the outgoing edges to the connected neighbors and evaluates the tentative distance of each of the neighbors and updates the tentative distance if it is less than the previous value. The algorithm then moves to next vertex and repeats the process of walking the outgoing edges and accordingly relaxing the tentative distances for each of the connected neighbors. As such, in every iteration, the algorithm visits all the vertices and walks all the edges thereby relaxing the vertex distances wherever possible. The algorithm repeats the iterations of relaxing the vertices utmost (N-1) times or until no vertices can be updated anymore.</p> <p style="text-align: center;"><a href="" target="_blank" rel="noopener"><img src="" class="align-center" width="750"/></a><em>Figure 2: Illustrations of Bellman-Ford Algorithm</em></p> <p style="text-align: left;">Figure 2 illustrates the logical steps in the execution of Bellman-Ford algorithm. In essence, the algorithm maintains a table containing the evaluated shortest distances to each of the vertices from the source vertex along with the predecessor vertex, which would get updated potentially at every iteration. In the example graph, there are 4 vertices and hence the algorithm will execute utmost 4-1 = 3 iterations. At the initiation step, the algorithm sets the distances from the source vertex, A in the example, to all other vertices as infinity and the distance from source vertex to itself as 0 in the table. At this step, the predecessor vertex row is empty. The algorithm then begins iteration 1. The algorithm starts at the source vertex and evaluates the distances to the neighbors connected by outgoing edges. In the example, vertex A has only one outgoing edge to vertex C with a weight of -2. As such the measured distance of vertex C from vertex A would be d(A) + d(A,C) which is 0 – 2 = -2 which is less than C’s currently assigned value of infinity. Hence the algorithm relaxes the vertex C in the table and sets vertex A as the predecessor vertex of C. Then algorithm moves to next vertex, B in the example. Since the tentative distance of vertex B is still infinity, none of its neighbors connected by outgoing edges can be relaxed. As such, the algorithm moves to next vertex C. Vertex C has one outgoing edge connecting to vertex D. Since the current tentative distance of C is -2 and the outgoing edge weight to vertex D is 2, the algorithm evaluates the tentative distance of vertex D as -2 + 2 = 0 and since it is less than vertex D’s current distance, i.e., infinity, algorithm relaxes vertex D and sets C as D’s predecessor vertex. Algorithm then moves to vertex D. Vertex D has one outgoing edge to vertex B with a weight of -1. As such, the algorithm evaluates the tentative distance of vertex B as 0 – 1 = -1 and since it is less than B’s current distance of infinity, algorithm relaxes vertex B and sets D as B’s predecessor vertex. With this, the algorithm completes iteration 1. Algorithm then begins iteration 2. As in iteration 1, the algorithm starts at the source vertex A. Since vertex C is the only neighbor connected by outgoing edge, the algorithm evaluates the distance of C from A. The distance is unchanged at -2 and hence algorithm moves to next vertex B. The current tentative distance of vertex B as measured in iteration 1 is -1. Vertex B has two outgoing edges; one connecting vertex A and the other connecting vertex C. Algorithm evaluates the tentative distances of vertices A and C relative to vertex B. Since the outgoing edge weight to vertex A is 4, the algorithm evaluates the tentative distance of vertex A relative to vertex B as -1 + 4 = 3. Since this is greater than the vertex A’s distance of 0, algorithm does not relax vertex A. Similarly, the outgoing edge weight to vertex C is 3 and the algorithm evaluates the tentative distance of vertex C relative to vertex B as -1 + 3 = 2. Since this is greater than C’s current distance of -2, algorithm does not update vertex C in the table. The algorithm then moves to vertex C. Since vertex C has one outgoing edge to vertex D, the algorithm evaluates the tentative distance of vertex D relative to vertex C. Since the current tentative distance of C is -2 and the outgoing edge weight to vertex D is 2, the algorithm evaluates the tentative distance of vertex D as -2 + 2 = 0 and since it is same a vertex D’s current distance, the algorithm does not update vertex D in the table. Algorithm then moves to vertex D which has one outgoing edge to vertex B. Algorithm evaluates the tentative distance of vertex B as 0 – 1 = -1 and since it is same as B’s current distance, the algorithm does not update vertex B in the table. With this, the algorithm completes iteration 2. After completing iteration 2, the algorithm determines that no vertex was relaxed in iteration 2 and distances remained unchanged. As such, the algorithm stops the execution even though iteration 3 is pending. The shortest path from source vertex to any other vertex can then be determined by looking up the predecessor vertices from the evaluated table. For example, to determine the shortest path from vertex A to vertex B, table can be looked up to find the predecessor vertex of B which is D. The predecessor vertex of D is C and the predecessor vertex of C is A. As such, the shortest path from vertex A to vertex B is <em>{A,C,D,B}</em> with a shortest distance of -1.</p> <p style="text-align: left;">In essence, Bellman-Ford algorithm relaxes utmost E number of vertices in every iteration, where E is the number of edges in the graph. Since the algorithm executes utmost (N-1) times where N is the number of vertices in the graph, the total number of relaxations would be E x (N-1). As such, the time complexity of the algorithm is in the order of (E x N). In a complete graph comprising of N vertices, where each vertex is connected to all other vertices, the total number of edges would be N(N-1)/2. Therefore the total number of relaxations would be N(N-1)/2 x (N-1). As such, the worst case time complexity of Bellman-Ford algorithm is in the order of N<sup>3</sup>.</p> <p style="text-align: left;"><strong>More Algorithms To Cover<br/></strong></p> <p style="text-align: left;">In the upcoming continuation parts of this article, I will cover several other graph based shortest path algorithms with concrete illustrations. I hope such illustrations help in getting a good grasp of the intuitions behind such algorithms.</p> 5 Most Preferred Programming Languages for AI Engineers in 2020 tag: 2020-01-14T09:30:00.000Z Yoey Thamas ></p> ><a href="" target="_blank" rel="noopener"><img src="" class="align-left" width="550"/></a></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p></p> <p>The very nature of the technology industry requires you to stay updated in the latest tech skills, latest advances taking place in the IT industry. <strong>AI engineers</strong> are in demand, job opportunities are plentiful, but where are the skilled AI experts?</p> <p>AI comes up with amazing breakthroughs in the tech industry today. But to get there, they still require skilled AI professionals to take up the responsibility.</p> <p>As you embark your journey in the AI career these are the topmost programming languages you need to learn: -</p> <p><strong>Python</strong></p> <p>Python is every programmer’s favorite, but have you wondered why it holds a special place for AI and machine learning?</p> <p>Well, it comes with huge numbers of inbuilt libraries. For certain languages, the individual needs to first learn the language. However, it is not the same case with Python.</p> <p>Winning the hearts of many, this is what you need to know now: -</p> <ul> <li>For every AI project, you can choose the libraries that will best suit your project (SciPy, NumPy, and Pybrain, etc.</li> <li>It has a great open source community</li> <li>Since Python involves a lot of algorithms, it will help in easing testing and writing simple codes</li> <li>Python is platform agnostic</li> </ul> <p><strong>R</strong></p> <p>R programming is a popular programming language for statistical analysis. Previously, this language was the most preferred programming language for data scientists. However, R has made quite an impact on AI over the past few years.</p> <ul> <li>It is the perfect fit for AI modeling</li> <li>For visualization purposes, R is the winner</li> <li>R has a huge community as well as supporters</li> </ul> <p>Some of the great companies such as Google are already making use of R for data modeling, data visualization, and data analysis.</p> <p><strong>Java</strong></p> <p>AI algorithms are real. It should be a thrill for all <strong>AI experts</strong> to be inspired by Java as it is said to be named as one of the best languages for projects in AI.</p> <p>For instance, if Java AI programming is taken, it is generally used for creating machine learning models, genetic programming, multi-robot systems, etc.</p> <p>Java being object-oriented and possessing a high scalability feature is a great match for large-scale projects. And since AI when connected with algorithms, it gives a higher chance for Java to be able to code different kinds of algorithms.</p> <p><strong>Prolog</strong></p> <p>They say you can give your chatbots the best life using Prolog. This is a logic programming language specifically used to perform natural language processing. Ever heard of Eliza? This was one of the chatbots that were written, therefore this is how Prolog came into existence.</p> <p>The best features that this language provides automatic back-tracking, pattern matching, and tree-based data structuring.</p> <p><strong>Lisp</strong></p> <p>If you’re looking to expand your horizons in the AI field, then perhaps AI engineers must start exploring about Lisp.</p> <p>Invented by John McCarthy (the father of AI in 1958), Lisp is considered as one of the oldest languages that are used for development in AI.</p> <p>A unique feature taking ownership of processing symbolic information effectively.</p> <ul> <li>Excellent prototyping capabilities</li> <li>Automatic garbage collection</li> <li>Dynamic creation of newer objects</li> </ul> <p>The development cycle provides an interactive evaluation of expressions along with the recompilation of functions even when the program is still running. With the advancement in the past years, many of the features of Lisp were migrated into other languages, thus affecting the uniqueness of Lisp.</p> <p>Looking forward to the year, skilled AI professionals must learn these programming languages.</p> Just How Much Does the Future Depend on AI? tag: 2020-01-13T20:20:02.000Z William Vorhies <p><strong><em>Summary:</em></strong><em> Looking at the 12 hottest world-changing segments in the VC-funded world shows that AI will play a key role. Here’s a little more detail.</em></p> <p> </p> <p><a href="" rel="noopener" target="_blank"><img class="align-right" src="" width="350"></img></a> From the inside of the data science profession looking out it’s easy to imagine that almost everything that is or will be important somehow depends on AI. Maybe…</p> <p><strong><em>Summary:</em></strong><em> Looking at the 12 hottest world-changing segments in the VC-funded world shows that AI will play a key role. Here’s a little more detail.</em></p> <p> </p> <p><a href="" target="_blank" rel="noopener"><img src="" width="350" class="align-right"/></a>From the inside of the data science profession looking out it’s easy to imagine that almost everything that is or will be important somehow depends on AI. Maybe that’s true, but how do we tell?</p> <p>First of all we’d have to make a list of all the tech trends that are destined to be game changers over the mid-term, say the next 5 to 15 years. Then we could examine each one for AI content and get a better idea about just how important AI is.</p> <p>But how to find the most important tech trends, much less get agreement on one list versus another. Fortunately there is at least one reasonable way and that’s the time honored tradition of following the money.</p> <p>In this case we mean VC investment. Let’s stipulate that at least as a group that VCs do their homework and make reasonably smart decisions. Then we’d look for fast growth areas where VCs are putting a disproportionate amount of their investment.</p> <p>Fortunately someone has already done this work for us. The good folks at CBInsights conducted exactly this analysis and published their findings in a recent report called “<a href=""><em><u>Game Changers 2020</u></em></a>”. </p> <p>They weren’t particularly concerned about the distinction of AI or not. Their focus was on trends and companies that could change the world. In the process they also told us about the leading companies in each of their target areas so we can easily discern whether AI is a critical component. </p> <p>Here’s a brief rundown of the dozen tech trends they identified as potential world-changers which on average are receiving 2.6 times as much investment as less exciting areas. <a href="" target="_blank" rel="noopener"><img src="" width="600" class="align-center"/></a></p> <p><span style="font-size: 12pt;"><strong>Those with AI Critical Components</strong></span></p> <ol> <li><strong>Speed of Light Chips:</strong> Photonic based hardware with unprecedented processing power.</li> </ol> <p>AI isn’t necessarily the enabler in this category so much as the customer. Using photons instead of electrons for processing allows higher bandwidth, increases in speed of 1000X, and retains MPP parallel processing. Photonic chips are useful across the whole range of computer processing but will bring greatest benefit to the increased speed and reduced cost of AI workloads.</p> <p> </p> <ol start="2"> <li><strong>Quantum Cryptography:</strong> Protecting sensitive data against the threat of quantum decryption.</li> </ol> <p>Creating and breaking cyphers is the meat and potatoes of AI. IBM already has over 100 clients working on its cloud-accessed quantum computer so implementation is a matter of when, not if. New high level languages are needed to perform AI in quantum environments but at its core the AI is the same as non-quantum, just with the weirdness added in. </p> <p> </p> <ol start="3"> <li><strong>AI Transparency:</strong> Building trust in AI by analyzing as algorithm’s decision making process.</li> </ol> <p>Some players in this area are creating new approaches to analyzing black box models while others are creating auditable platforms to meet regulatory requirements in finance and healthcare.</p> <p> </p> <ol start="4"> <li><strong>AI-Based Protein Prediction:</strong> Predicting the structure of proteins to enhance disease diagnosis and treatment.</li> </ol> <p>The problem of predicting or even understanding how proteins will fold has been around for a long time and the subject of many AI startups. Leaders in this group have combined advances in AI with robotic labs and other advancements to move forward with <a href=""><em><u>computation synthetic biology</u></em></a>.</p> <p> </p> <ol start="5"> <li><strong>Sustainable shippers:</strong> Reducing costs and mitigating the environmental impact of heavy-lift logistics.</li> </ol> <p>Autonomous vehicles on land, sea, and air are in this category as are optimization platforms for creating the most efficient supply chain transport strategies.</p> <p> </p> <p><span style="font-size: 12pt;"><strong>Game Changers with No Apparent AI Component</strong></span></p> <ol start="6"> <li>CRISPR 2.0 – developing safer and more precise approaches to gene editing.</li> <li>Electro-charged Therapeutics – treating ailments with electrical impulses instead of chemical drugs.</li> <li>Microbiome masters - targeting the human microbiome to treat both chronic and rare diseases.</li> <li>Mind-altering medicines - startups developing psychedelic compounds to treat mental illness.</li> <li>DNA data marketplaces - enabling the secure exchange of genetic data to reward consumers and enrich medical research.</li> <li>Carbon capturers – startups removing and recycling CO2emissions from the atmosphere.</li> <li>Next-gen nuclear energy - new solutions to zero-emission nuclear energy production.</li> </ol> <p><strong> </strong></p> <p><span style="font-size: 12pt;"><strong>Will AI Drive the Future?</strong></span></p> <p>And the answer is surprisingly 42 (Douglas Adams of Hitchhikers Guide to the Galaxy would be proud). Actually what we mean is 5 of 12 world changers or 42% rely significantly on some component of AI. </p> <p>Leaving out all the high value applications of today’s AI we still have a leading role in shaping the future – at least according to CBInsights’ read of the most important startups. And if you didn’t want to be a data scientist you could always try out for one of those mind-altering psychedelic medicine companies. Me, I’ll continue to get my mellow from the math.</p> > RPA + Machine Learning = Intelligent Automation tag: 2020-01-13T19:30:00.000Z Maggie …</span></p> Automation can be achieved by integrating machine learning and artificial intelligence with Robotic Process Automation to achieve automation of repetitive tasks with an additional layer of human-like perception and prediction.</span></p> <p></p> <p><b>The Difference Between RPA and Artificial Intelligence</b></p> <p></p> <p><span style="font-weight: 400;">By design, RPA is not meant to replicate human-like intelligence. It is generally designed simply to mimic rudimentary human activities. In other words it does not mimic human</span> <b>behavior</b><span style="font-weight: 400;">, it mimics human</span> <b>actions</b><span style="font-weight: 400;">. Behavior implies making intelligent choices among a spectrum of possible options, whereas action is simply movement or process execution. RPA processes are most often driven by pre-defined business rules that can be narrowly defined thus RPA has limited abilities to deal with ambiguous or complex environments. </span> <a href=""><span style="font-weight: 400;">Artificial Intelligence, on the other hand, is the simulation of human intelligence by machines which requires a broader spectrum of possible inputs and outcomes</span></a><span style="font-weight: 400;">.</span> <span style="font-weight: 400;">AI is both a mechanism for intelligent decision making and a simulation of human behaviors. Meanwhile, Machine Learning is a necessary stepping stone to Artificial Intelligence, contributing deductive analytics and predictive decisions that increasingly approximate the outcomes that can be expected from humans.</span></p> <p></p> <p><span style="font-weight: 400;">The IEEE Standards Association published its</span> <a href=""><span style="font-weight: 400;">IEEE Guide for Terms and Concepts in Intelligent Process Automation</span></a> <span style="font-weight: 400;">in June 2017. In it, Robotic Process Automation is defined as a “preconfigured software instance that uses business rules and predefined activity choreography to complete the autonomous execution of a combination of processes, activities, transactions, and tasks in one or more unrelated software systems to deliver a result or service with human exception management.” In other words, RPA is simply a system that can perform a defined set of tasks repeatedly and without fail because it has been specifically programmed for that job. But it cannot apply the function of learning to improve itself or adapt its skills to a different set of circumstances, that’s where Machine Learning and Artificial Intelligence are increasingly contributing to building more intelligent systems.</span></p> <p></p> <p><b>Process-driven vs. Data-driven</b></p> <p></p> <p><span style="font-weight: 400;">Intelligent Automation is a term that can be applied to the more sophisticated end of the automation-aided workflow continuum consisting of Robotic Desktop Automation, Robotic Process Automation, Machine Learning, and Artificial Intelligence. Depending on the type of business, companies will often employ one or more types of automation to achieve improved efficiency and effectiveness. As you move along the spectrum from process-driven automation to more adaptable data-driven automation, there are additional costs in the form of training data, technical development, infrastructure, and specialized expertise. But the potential benefits in terms of additional insights and financial impact can be greatly magnified.</span></p> <p></p> <p><span style="font-weight: 400;">To remain competitive and efficient, businesses now must contemplate adding Machine Learning and Artificial Intelligence to traditional RP in order to achieve Intelligent Automation.</span></p> <p></p> <p><b>Data-Driven + Process Driven = Intelligent Automation</b></p> <p><b><a href="" target="_blank" rel="noopener"><img src="" width="500" class="align-full"/></a></b></p> <p></p> <p><b>Intelligent Automation Relies on Data Integrity</b></p> <p></p> <p><span style="font-weight: 400;">In the Intelligent Automation framework, training data is a central component on which all else depends. In industries such as autonomous driving and healthcare, where decisions made by AI/ML can have serious repercussions, the accuracy of training data that informs these types of decisions is critical. As the accuracy of modern AI and Machine Learning models utilizing neural networks and deep learning progresses toward 100%, these engines are working more autonomously than ever to make decisions without human intervention. Small variations or inaccuracies in training data can have dramatic and unintended effects. Data integrity and accuracy thus become increasingly more important as humans come to rely on the decisions made by intelligent machines for complicated tasks.</span></p> <p></p> <p><b>Accurate ML Models Require Accurate Training Data </b></p> <p></p> <p><span style="font-weight: 400;">Data integrity involves starting with representative source data then accurately labeling this data prior to the training, testing, and deployment of machine learning models. An iterative workflow of data preparation, feature engineering, modeling, and validation is the standard Data Science playbook. </span></p> <p></p> <p><span style="font-weight: 400;">Any Data Scientist will explain that the availability of accurately labeled training data is perhaps the most important ingredient in their recipes. Examples of “dirty” data include missing, biased, and outlier data or simply data sets that are not representative of the future data to be processed in production. Feature engineering is also an important step in the machine learning process i.e. selecting the data features that are likely to be the most critical in informing the predictive accuracy of a given model. In a neural network, where parameters are stacked one on top of the other, correct identification of key features in each iteration is critical to the success of the model building exercise. Poor training data can cause incorrect features to be selected or weighted, thus leading to models that cannot be generalized to a broader population of production data.</span></p> <p></p> <p><span style="font-weight: 400;">For instance, for a model that detects specific organs in MRI images, choosing representative training images from a particular MRI machine then accurately isolating the relevant boundaries of specific regions of interest for each organ will lead to better detection results than simply using photos of those organs from public sources. Another example can be seen in accounts payable systems using optical character recognition (OCR) to programmatically extract relevant information from invoices. Key fields in each invoice such as “Address”, “Name” and “Total” must be accurately distinguished from the body of different types of invoices in order to create an effective and accurate model. If these items are labeled incompletely or incorrectly, the accuracy of the resulting model will suffer.</span></p> <p></p> <p><b>The Issue of Bias</b></p> <p></p> <p><span style="font-weight: 400;">Current AI and machine learning models differ from human intelligence in part because they depend entirely on their initial training data and usually do not have an automatic and recursive mechanism to absorb and process new data for course correction i.e. continuous retraining. This means that poorly balanced data introduced during training may have the potential to cause unexpected bias over time and can produce unexpected (and sometimes offensive) results. When a significant amount of bias is introduced into a system, it becomes difficult to rely on decisions made by these systems.</span></p> <p></p> <p><b>Good Data Annotation Leads to High-Quality Intelligent RPA</b></p> <p></p> <p><span style="font-weight: 400;">Accurate training data is the foundation of most successful data science projects. BasicAI provides high-quality data annotation services to businesses in many different industries, and this is a central theme heard in the majority of our client conversations. With accurate data annotation, machine learning models and AI models can make increasingly accurate decisions, and when combined with the fundamental processes of RPA businesses can achieve truly Intelligent Automation.</span></p> <p><em><span style="font-weight: 400;">More <a href="" target="_blank" rel="noopener">here</a>.</span></em></p> <p></p> <p><span style="font-weight: 400;"><a href="mailto:sales@basic.ai" target="_blank" rel="noopener"></a></span></p> <p></p> 2020 Challenge: Unlearn to Change Your Frame tag: 2020-01-13T15:00:00.000Z Bill Schmarzo …</p> and challenge everyone to open their minds to the possibility of new ideas and new learning. That does not mean you should blindly believe, but instead, should invest the time to study, unlearn and learn new approaches and concepts.</p> <p><em>“You can’t climb a ladder if you’re not willing to let go of the rung below you.”</em></p> <p>As the new Chief Innovation Officer at Hitachi Vantara (yes, I have a new, more relevant, very exciting role), leveraging ideation and innovation to derive and drive new sources of customer, product and operational value is more important than ever. So, Hitachi Vantara employees and customers, be prepared to change your frames; to challenge conventional thinking with respect to how we blend new concepts – AI / ML, Big Data, IOT – with tried and true ideas – Economics, Design Thinking – to create new sources of value. And let’s start that unlearning/learning process with this list for 2020:</p> <ol> <li>Listening to learn versus listening to respond.</li> <li>Empowering management versus dictating management.</li> <li>Value in use versus value in exchange.</li> <li>Predicting versus reporting.</li> <li>Autonomous versus automate.</li> <li>Learning versus rules.</li> </ol> <p>But first, a bit more about the “Art of Unlearning.”</p> <h2><strong>What is Unlearning?</strong></h2> <p>Unlearning is the ability to discard something learned (bad habit or outdated information) from one's memory and everyday use. Unlearning is especially hard if you have spent a lifetime perfecting something. </p> <p><em>It takes years – sometimes a lifetime – to perfect certain skills. But once we get comfortable with those skills, we become reluctant to change. We are reluctant to unlearn what we’ve taken so long to master. It is hard to un-wire those synoptic nerve endings and deep memories than it was to wire them in the first place. It’s not just a case of thinking faster, smaller or cheaper; it necessitates thinking differently (see Figure 1).</em></p> <p><a href="" target="_blank" rel="noopener"><img src="" class="align-full"/></a></p> <p><strong>Figure</strong> <strong><span>1</span></strong><strong>:<span> </span></strong> <a href=""><strong>Importance of Thinking Differently…Hint: Don’t Pave the Cow Path</strong></a></p> <p>How do you go about unlearning so that you can learn anew?</p> <ul> <li>Be open to the ideas and positions of others who may have different opinions or perspectives than you. Remember: <em>all ideas are worthy of consideration</em></li> <li>Seek out diverse and conflicting narratives. Read articles or listen to podcasts on positions that are different than your own.</li> <li>Embrace critical thinking, which is the objective analysis and evaluation of an issue in order to form a viable and justifiable position.</li> <li>Actively challenge your beliefs; consider the frame you are using to make your decisions and contemplate what might happen if you were willing to discard that frame.</li> </ul> <p>In the book “The Runaway Species", Anthony Brandt and David Eagleman propose a creativity framework comprised of three basic techniques: bending, breaking and blending. Unlearning opens the opportunity to “bend, break or blend” what you already know to create something new and more powerful.<span> </span></p> <h2><strong>1) Listening to Learn versus Listening to Respond</strong></h2> <p>One human weakness is that we are very quick to jump to solutions; we ask some superficial questions in order to be better positioned to respond, versus asking detailed questions to really learn and empathize. Design Thinking is one way to help offset that weakness.</p> <p>Design thinking is a human-centric approach that creates a deep understanding of users in order to generate ideas, build prototypes, share what you’ve made, embrace the art of failure and put your innovative solution out into the world. Design Thinking may be both the most powerful, yet abused concept I know. And that’s entirely based upon the intent of the listener. The starting point for the Design Thinking process is to build a sense of empathy for the customer (see Figure 2).</p> <p><a href="" target="_blank" rel="noopener"><img src="" class="align-full"/></a></p> <p><strong>Figure</strong> <strong><span>2</span></strong><strong>: “Design Thinking: Future-proof Yourself from AI"</strong></p> <p>This Empathize process2><strong>2) Empowering Management versus Dictating Management</strong></h2> <p>Classic management is a “Tell and Do” employee relationship; a command-and-control structure that excels in fixed types of situations and engagements. But add a bit of uncertainty into that model, and the command-and-control structure quickly falls apart. We watched the movie “Black Hawk Down” and the 1993 Battle of Mogadishu in horror as the directions for the US ground troops to navigate Mogadishu had to be relayed to and answered by the superiors as the troops were under deadly enemy fire.</p> <p>“The Orion spy plane could see what was happening but couldn’t speak directly to Lieutenant Colonel Danny McKnight. So, it relayed information to the commander at Joint Operations Command (JOC). Next, the JOC commander called the command helicopter. Finally, the command helicopter radioed McKnight. By the time McKnight received directions to turn, he’d already passed the road.” (Source: The History Reader)</p> <p>General Stanley McChrystal details in “Team of Teams” a similar challenge using a traditional command-and-control management style in combating insurgents in Iraq. General McChrystal totally reframed his enemy engagement approach to rely upon smaller, cohesive teams that could more quickly respond to a changing warfare environment.</p> <p>“Whiteboards versus Maps” highlights the difference in empowering versus dictating management styles. A rigidly-defined map – like a command-and-control management style – can quickly become a liability, unable to respond to changing customer and market conditions. Whiteboards, on the other hand, represent a “way of thinking”, where challenges can be quickly explored, refined, and morphed before being implemented (see Figure 3).</p> <p><a href="" target="_blank" rel="noopener"><img src="" class="align-full"/></a></p> <p><strong>Figure</strong> <strong><span>3</span></strong><strong>: “Scaling Innovation: Whiteboards versus Maps</strong><strong>”</strong></p> <p>As AI increases the importance of augmenting the intelligence of front-line employees, we need to focus on improving the weakest link in the system for the system to reach its full potential.</p> <h2><strong>3) Value in Use (Economics) versus Value in Exchange (Accounting)</strong></h2> <p>Sebastian Thrun, in the Artificial Intelligence podcast with Lex Fridman, “Flying Cars, Autonomous Vehicles, and Education” talked about how DARPA transformed the DARPA Autonomous Car Grand Challenges by paying reward money ($2M) based not on the traditional measure of hours put against writing a paper on the topic, but upon the results/outcomes of the DARPA Autonomous Car Challenges. Instead of awarding DARPA funds based upon how many hours a firm spent in writing about something (a traditional way of paying the Belt Way Bandits) for the Autonomous Vehicle Challenges, DARPA decided to pay the winning car. The results were eye-opening especially in terms of the diversity of ideas that were pursued to win the prize money.</p> <p>Most organizations make business and operational decisions based upon accounting GAAP rules, a retrospective methodology for determining valuation (value in exchange). Economics, on the other hand, brings a forward perspective on determining valuation. Organizations that use an economics frame to measure and manage their business operations focus on the value or wealth that an asset can create (value in use). If one wants to exploit data and analytics to enable “doing more with less”, then one must embrace an economics mentality (see Figure 4).</p> <p><a href="" target="_blank" rel="noopener"><img src="" class="align-full"/></a></p> <p><strong>Figure</strong> <strong><span>4</span></strong><strong>: “Using the Economics Value Curve to Drive Digital Transformation</strong><strong>”</strong></p> <p>If you are still confused, here is a simple test: Why is the car of an Uber driver more valuable than my own? Because the Uber driver is using their car to generate more value (money) than the value that I generate from my car. Note: this fact will eventually have HUGE impacts on the profit structure of the automotive industry.</p> <h2><strong>4) Predicting versus Reporting</strong></h2> <p>“Anything you can report, you can predict.”</p> <p>Okay, on the surface, that seems like an outrageous statement. But what we have found in using our “Thinking Like A Data Scientist” methodology, is that comment is perfect in helping business stakeholders to cross the Analytics Chasm.</p> <p>Crossing the Analytics Chasm requires an understanding of economics and how the organization can leverage digital economics to identify and capture the new sources of customer and market value creation. Crossing the Analytics Chasm requires (see Figure 5):</p> <ul> <li>Transitioning from an organizational mentality of using data and analytics to monitor the business to predicting what’s likely to happen and prescribing recommended actions.</li> <li>Maturing beyond aggregating data in order to control the costs of storage and data management to a mentality of hording every bit of detailed historical data, complemented with a wealth of external data sources about every customer, employee, product and asset.</li> <li>Expanding data access from a restrictive data access model to enabling access to all data consumer that might derive and drive business and operational value from the data.</li> <li>Transitioning from batch data processing to an operational model that can process and analyze the data in real-time in order to capture business value in the act of happening.</li> </ul> <p><strong><a href="" target="_blank" rel="noopener"><img src="" class="align-full"/></a></strong></p> <p><strong>Figure</strong> <strong><span>5</span></strong><strong>: Crossing the Traditional Analytics Chasm</strong></p> <h2><strong>5) Autonomous versus Automate</strong></h2> <p>But want to go even further? How about changing your frame from Automate to Autonomous?<span> </span> Making that change requires that organizations Cross the AI Chasm. Crossing the AI Chasm will be more an organizational and cultural challenge than a technology challenge. Crossing the AI Chasm not only requires gaining organizational buy-in, but more importantly, it necessitates creating a culture of continuous learning at the front-lines of customer and/or operational engagement.</p> <p>Crossing the AI Chasm requires:</p> <ul> <li>Creating a culture of continuous learning.</li> <li>Capturing and augmenting front-line operational intelligence.</li> <li>Mastering the unique economics of data and analytics.</li> <li>Building assets that appreciate, not depreciate, through usage.</li> <li>Training everyone to “Thinking Like A Data Scientist.”</li> </ul> <p>Crossing the AI Chasm doesn’t require the proverbial leap of faith. It just requires senior management to loosen the reins a bit and let the learnings fueled by AI flourish at the front-lines of customer and operational engagement.</p> <p><a href="" target="_blank" rel="noopener"><img src="" class="align-full"/></a></p> <p><strong>Figure</strong> <strong><span>6</span></strong><strong>: Crossing the AI Chasm</strong></p> <p>Autonomous enables devices to experience, experiment, learn and improve their operational effectiveness through usage…without human intervention. Now that’s cool!</p> <h2><strong>6) Learning versus Rules</strong></h2> <p>Most traditional analytics are rule based; the analytics would make decisions guided by a documented set of criteria. However, AI (Deep Learning) analytics makes decisions based upon the learning gleaned from the operational data. Deep Learning <strong>learns</strong> on massive data sets (millions of records) to determine characteristics, patterns and relationships to make decisions such as cats versus dogs, tanks versus trucks, or healthy cells versus cancerous cells (see Figure 7).</p> <p><a href="" target="_blank" rel="noopener"><img src="" class="align-full"/></a></p> <p><strong>Figure</strong> <strong><span>7</span></strong><strong>: “Neural Networks: Is Meta-learning the New Black</strong><strong>?”</strong></p> <p>And this learning magnifies itself when there is collaboration across a collection of similar physical assets – vehicles, trains, airplanes, compressors, turbines, motors, elevators, cranes, etc. – so that the experience and intelligence can be accumulated in a cloud to uncover and codify new learning that can then be back propagated to the individual assets (see Figure 8).</p> <p>The goal of AI analytics is to leverage Deep Learning, Machine Learning and/or Reinforcement Learning to create a “Rational AI Agent” to learn a successful strategy from continuous engagement with the environment. With the optimal strategy, the agent can actively adapt to the changing environment to maximize rewards (current and future) while minimizing costs.</p> <h2><strong>2020 Challenges Summary</strong></h2> <p>Unlearning maybe the most valuable human characteristic (and something that we currently have over the machines). And 2020 will challenge our ability to unlearn old habits and beliefs so that we can learn anew. That unlearning will impact organizations in at least 6 ways:</p> <ol> <li>Listening to learn versus listening to respond.</li> <li>Empowering management versus dictating management.</li> <li>Value in use versus value in exchange.</li> <li>Predicting versus reporting.</li> <li>Autonomous versus automate.</li> <li>Learning versus rules.</li> </ol> <p>Damn, 2020 is going to be fun. <em>Are you ready to let go of that rung below you so that you can learn anew?</em></p> Weekly Digest, January 13 tag: 2020-01-12T23><a href="" target="_blank" rel="noopener">Marketing Analytics & Data Science MADS WEST March 31–April 2</a></li> </ul> <p>Join your peers in San Francisco for MADS West. Over the course of three days, discover the infinite possibilities that emerge when data scientists, insights, and analytics executives break down silos and work together to drive bottom line impact. <a href="" target="_blank" rel="noopener">Save 20% on your ticket with VIP Code MADS20DSC</a>.</p> <div><p><span><strong>Featured Resources and Technical Contributions </strong></span></p> <ul> <li><a href="">Scylla vs Cassandra: Performance Comparison</a></li> <li><a href="">Neural Quantum States</a> +</li> <li><a href="">Beginners Guide To Statistical Cluster Analysis</a></li> <li><a href="">Google Brain’s TensorFlow</a></li> <li><a href="">Deep Learning : Introduction to Long Short Term Memory</a></li> <li><a href="">Question: Suitability of Augmented Analytics</a></li> </ul> <p><span><strong>Featured Articles</strong></span></p> <ul> <li><a href="">Six AI Strategies – But Only One Winner</a></li> <li><a href="">Will AI Force Humans to Become More Human?</a></li> <li><a href="">Business Intelligence vs Business Analytics</a></li> <li><a href="">Oh, the Places You’ll Go: Top AI Predictions for 2020</a></li> <li><a href="">Avoid Common Pitfalls in Launching AI Projects</a></li> <li><a href="">Overcoming Barriers in ML adoption in corporate world<><a href="" target="_blank" rel="noopener">Weaponizing Data in the Fight Against White Collar Crimes</a></li> <li><a href="" target="_blank" rel="noopener">Online Data Science Programs from Drexel University</a></li> <li><a href="" target="_blank" rel="noopener">Strata Data & AI Conference: One Pass. One Price.</a></li> <li><a href="" target="_blank" rel="noopener">State of AI Bias: Webinar</a></li> <li><span><a href="" target="_blank" rel="noopener">Dashboard Best Practices</a> </span></li> <li><span><a href="" target="_blank" rel="noopener">Explore the Shift from ETL to Data Wrangling<> Oh, the Places You’ll Go: Top AI Predictions for 2020 tag: 2020-01-11T00:42:25.000Z Ji Li …</p> 2020:</p> <p> </p> <p><strong>AI Adopted More as an Assistant Than a Replacement</strong></p> <p> </p> <p.</p> <p> </p> <p>AI and machine learning can analyze thousands of data points in seconds to yield insights that humans never could achieve alone. These insights will be used to make human decision-making easier and alleviate workers’ most mundane, time-consuming tasks so that they can concentrate on higher-order problems that don’t fit neatly into algorithms. Look for AI-based technologies to be applied strategically in the coming year to help employees become more efficient and valuable in their respective roles.</p> <p> </p> <p><strong>Transfer Learning Becomes More Prevalent</strong></p> <p> </p> <p>Transfer learning, in which machine learning algorithms improve based on exposure to other algorithms, will become a more widely used technique in 2020. To date, it has been leveraged primarily with image processing, but we will see transfer learning applied to areas like text mining continue to improve.</p> <p> </p> <p>The benefit of transfer learning is that a wider range of industries will be able to utilize AI to create highly specific applications based on small data. As less large data is required, organizations can create state-of-the-art solutions that are faster, more accurate, and better tailored to their specific needs.</p> <p> </p> <p><strong>The Cloud of the Black Box Continues to Lift</strong></p> <p> </p> <p.</p> <p> </p> <p.</p> <p> </p> <p><strong>Demand Will Rise for AI as a Service</strong></p> <p> </p> <p>Traditionally, machine learning models have not been straightforward to deploy for data scientists and engineers. This will change in the coming year as AI is delivered more like a service. AI models will be executed in cheaper, easier ways in the cloud.</p> <p> </p> <p>This is a significant development on multiple fronts. By shifting to serverless.</p> <p> </p> <p>These are just some ideas of where AI could go in the near future. AI and machine learning are advancing at a rapid pace, and companies are both eager and nervous to pull the trigger on new solutions. But the current momentum behind AI will continue to drive innovation, and organizations will evolve as they reap the benefits of machine learning systems.</p> Scylla vs Cassandra: Performance Comparison tag: 2020-01-10T09:12:41.000Z Igor Bobriakov <p><span style="font-weight: 400;"><a href="" rel="noopener" target="_blank"><img class="align-center" src=""></img><></p> <p><span style="font-weight: 400;"><a href="" target="_blank" rel="noopener"><img src="" class="align-center"/><> <a href=""><span style="font-weight: 400;">Scylla</span></a> <span style="font-weight: 400;">and</span> <a href=""><span style="font-weight: 400;">Cassandra</span></a><span style="font-weight: 400;">:</span></p> <ul> <li style="font-weight: 400;"><span style="font-weight: 400;">Cassandra is a distributed, scalable and secure database built on the principles of the NoSQL storage with no single point of failure assurances.</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Scylla is a drop-in Cassandra NoSQL highly available and performance database that allows implementing ultra-low latency and high throughput data processes.</span></li> </ul> <p><span style="font-weight: 400;">These databases use the same structure, which allows for easier migration from one database to another. The main difference between them is that Scylla is written in C++ when Cassandra is in Java. </span></p> <p><span style="font-weight: 400;">So Scylla has the following performance advantages:</span></p> <ul> <li style="font-weight: 400;"><span style="font-weight: 400;">reduces CPU resources consumption by avoiding program loading into the JVM,</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">has more flexible and complex memory management (attributes of C++ designed programs),</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">provides a custom network configuration that minimizes resource usage with direct requests from userspace without usage system kernel.</span></li> </ul> <p><span style="font-weight: 400;".</span></p> <p><span style="font-size: 12pt;"><strong>Benchmarking options</strong></span></p> <p><span style="font-weight: 400;">The performance benchmarking process for two databases follows the next principles:</span></p> <ol> <li><span style="font-weight: 400;">Using several database versions (revisions) for more efficient performance comparisons:</span></li> </ol> <ul> <li style="font-weight: 400;"><span style="font-weight: 400;">Scylla version – 2.1.2 (Cassandra version 3.0.8),</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Cassandra versions – 3.0.16 and 3.11.2.</span></li> </ul> <ol start="2"> <li><span style="font-weight: 400;">Using the same hardware options with benchmarks tests:</span></li> </ol> <ul> <li style="font-weight: 400;"><span style="font-weight: 400;">Scylla and Cassandra with 4GB of RAM,</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Linux OS (Ubuntu 16.10).</span></li> </ul> <ol start="3"> <li><span style="font-weight: 400;">Determining benchmarking processes:</span></li> </ol> <ul> <li style="font-weight: 400;"><span style="font-weight: 400;">writing data tests,</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">writing and reading data tests. </span></li> </ul> <ol start="4"> <li><span style="font-weight: 400;">Benchmark parameters and metrics options:</span></li> </ol> <ul> <li style="font-weight: 400;"><span style="font-weight: 400;">rate parameters: operations, partitions, rows;</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">latency options: mean, median, percentiles and max values;</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">partitions: total value of data partitions;</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">operation time: total time for all data operations.</span></li> </ul> <ol start="5"> <li><span style="font-weight: 400;">Using the same dataset for the correct and valid benchmark test results.</span></li> </ol> <p><span style="font-size: 12pt;"><strong>Benchmarking processes</strong></span></p> <p><span style="font-weight: 400;">Benchmarking processes represent an iterative procedure: cyclic execution process for pre-defined algorithms – usually the simplest ones. In our case, these algorithms include the writing and reading data procedures for each database: Scylla and Cassandra. The general benchmark processes structure is shown below.</span><br/><br/></p> <p><span style="font-weight: 400;"><a href="" target="_blank" rel="noopener"><img src="" class="align-center"/></a></span></p> <p><span style="font-weight: 400;">Also, it should be noted that all benchmarks data operations will be performed with the same data. The results will be presented in the tables and diagrams form for better analysis and comparison process.</span></p> <p><span style="font-weight: 400;">Let's leave out the technical details of the benchmarking process and just present it's results.</span></p> <p><span style="font-size: 12pt;"><strong>Write test</strong></span></p> <p><span style="font-weight: 400;">Write test includes the data writing into the database and measuring a number of parameters that describe its performance. These parameters are grouped according to the described data processes and have numerical values:</span></p> <ul> <li style="font-weight: 400;"><b>rate</b> <span style="font-weight: 400;">parameters – describe data rates for operations, partitions, and rows;</span></li> <li style="font-weight: 400;"><b>latency</b> <span style="font-weight: 400;">parameters – define the reaction time for database accessing process and presented in the statistical form: mean, median, quintiles;</span></li> <li style="font-weight: 400;"><b>total</b> <span style="font-weight: 400;">parameters – show the resulting value by the number of operations and its total time.</span></li> </ul> <p><span style="font-weight: 400;"><a href="" target="_blank" rel="noopener"><img src="" class="align-center"/></a></span></p> <p><span style="font-weight: 400;">We’ll build diagrams based on parameters that generally describe the database's performance. Diagrams are shown as a bar plot for each group of parameters.<. </span></p> <p><span style="font-weight: 400;">The remaining diagrams describe operation rate parameters and their total time measuring. As we can see, the performance of the Scylla database by these parameters is more than 6 times greater than that of Cassandra databases. </span></p> <p><span style="font-weight: 400;">In general, we can conclude that the data writing performance of the Scylla database is an order of magnitude larger. For a more complete analysis of database performance, let's perform a similar test for writing/reading data procedures.</span></p> <h2><span style="font-size: 12pt;"><b>Write/read test</b></span></h2> <p><span style="font-weight: 400;"-full"/></a></span></p> <p><span style="font-weight: 400;".</span></p> <h2><span style="font-size: 12pt;"><b>Conclusion</b></span></h2> <p><span style="font-weight: 400;".</span></p> Avoid Common Pitfalls in Launching AI Projects tag: 2020-01-09T22:30:00.000Z Betsy Romeri …</span></p> <a href="">planning investments in AI</a> over the next year. That’s not surprising when you consider that, in some industries, these investments are expected to <a href="">boost revenue by over 30%</a> over the next four years.</span></p> <p><span style="font-size: 12pt;"> We are already seeing many AI solutions in the marketplace, from facial recognition and chatbots to machine-learning models applied to supply chain management, and even solutions embedded within Internet of Things (IoT) applications. According to Gartner, however, only 13% of AI projects have gone into production. Why is that? Below are some reasons I have observed when speaking to our customers at Analytics2Go: </span></p> <p><span style="font-size: 12pt;"> </span></p> <p><span style="font-size: 14pt;"><strong>Lack of Strategy</strong></span></p> <p><span style="font-size: 12pt;"> Selecting the best strategy and having a clear vision of where to start an AI project can dictate its outcome. Like any new initiative in a company, AI initiatives should be guided by business goals, although not all business goals can be addressed with AI. Companies need an AI plan to avoid wasting time and money by mapping out what projects to do first, second, third, and so on. Understandably, companies are often confused about where to begin because of all the breathless hype and pressure to do something quickly, as in these demands: “<em>Hire data scientists! Buy this shiny new software! Our platform will run your business!</em>”</span></p> <p><span style="font-size: 12pt;"> Your business will have different priorities and goals than even your closest competitors. To differentiate, it doesn’t make sense to simply buy what everyone else is buying and run the same solutions. So, define what your goals are and the outcomes that your organization wants to achieve with AI. This part of the process is arguably the most important and should involve a stakeholder from each level of your organization to ensure agreement on goals and adoption by your organization. If you want to improve your company’s supply chain accuracy, for instance, you need to involve your S&OP leader, your inventory planner, and your supply/demand planner, etc. They have their particular focus and perspective and understand the business challenge from their lens. Your own people are your best source of information on how to achieve your overall business goals. </span></p> <p><span style="font-size: 12pt;"> To begin the AI strategy process, the most effective plan is to start by decomposing your current business processes to identify which recurring decisions would benefit from being automated. What are your recurring decisions that could benefit from automation and save your workforce’s time for more important responsibilities? Even if the recurring decisions typically rely on a human’s judgment as the final step, automation can still reduce the time required for your managers to make a more accurate, data-driven decision. These are your “quick wins” and are possible with automation and purpose-built AI solutions embedded into your existing workflows. </span></p> <p><span style="font-size: 12pt;"> </span></p> <p><span style="font-size: 14pt;"><strong>Oversimplification of the Problem</strong> </span></p> <p><span style="font-size: 12pt;"> Companies tend to <em>underestimate</em> the complexity of the problem they are trying to solve, especially in the current “rush to AI or die” environment. They can fail to identify the hidden complexities often found in business processes that span multiple functional areas and stakeholders. For instance, if a company is focused on creating a machine-learning algorithm to improve the forecast accuracy of sales, it is critical to understand all of the stakeholders’ objectives in the value chain. For a CPG company, for instance, that would require evaluating the entire value chain, including your retail sellers (large and small), internal planners, suppliers, and others. Each functional area is focused on improving its own objectives, which may not align cross-functionally, so it’s important to achieve alignment among these different stakeholders and strive for synchronization toward a common objective. Some companies create workshops on this objective alone. </span></p> <p><span style="font-size: 12pt;"> </span></p> <p><span style="font-size: 14pt;"><strong>Data Challenges</strong></span></p> <p><span style="font-size: 12pt;"> For an AI project to be successful, it is important to identify the data your company has available to solve the problem, but it’s equally important to get creative and bring in datasets that provide you with contextual data intelligence. For example, geopolitical events, location, weather, and competitor data can take your supply chain demand-forecasting model to the next level by shedding light on the context surrounding each point in your value chain.</span></p> <p><span style="font-size: 12pt;"> Perhaps your company could use imaging data of your rail routes or benefit from real-time social-sentiment-analysis data to understand product-demand indicators. The possibilities are growing every day with new datasets, so think outside the box. Every company operates in a different environment and has a different set of variables affecting daily, monthly, or quarterly decisions. I have observed a tendency to underestimate how much time/work it will take to optimize the data intelligence for successful AI initiatives. Importantly, a continual evaluation of available data should be put in place since the world is always changing and new data are available every day. </span></p> <p><span style="font-size: 12pt;"> </span></p> <p><span style="font-size: 14pt;"><strong>Overly Aggressive Project Plans</strong></span></p> <p><span style="font-size: 12pt;"> Companies are often so focused on achieving results in a short timeframe that they lose sight of what is really possible. Any AI project will require a proof of concept (POC) phase that serves to prove that the model and data are capable of delivering value. Most POCs can be done in 4 – 8 weeks, but the next steps are the most difficult and often erroneously assumed to flow effortlessly once the model has been proven to work.</span></p> <p><span style="font-size: 12pt;"> We have observed that with a successful POC, acceptance of the AI solution grows within the company. Getting support from C-level to end users is critical to the success of any AI initiative in any company, regardless of size. Taking an AI solution from POC to deployment and enterprise-wide operationalization involves technical integration that is typically accounted for before the beginning of the project. However, getting AI solutions to operate throughout a company and have end users able and willing to adopt them can often be the failure point.</span></p> <p><span style="font-size: 12pt;"> Likewise, user experience and training can often be the differentiators for the success or failure of AI initiatives. A step-wise scaling of AI solutions within a company lowers the risk of its falling into that 87% that never deploy and operationalize their AI initiatives. Adoption takes time and should be planned along with the AI strategy. Stakeholders want to be part of the solution and the rollout. If they own it, they will use it.</span></p> <p><span style="font-size: 12pt;"> To prove value in a POC and in later phases of an AI project, “gates” should be put in place where the project is evaluated on the basis of measurable metrics to determine if it continues or if changes are needed. More often than not, the success metrics serve to help the business stakeholders and the data scientists to pause and make necessary changes to assure success. </span></p> <p><span style="font-size: 12pt;"> We often call describing something from beginning to end in a brief way as “the 30,000-foot view.” For AI projects, this view could be used to outline the overall challenges faced and the benefits received from AI as related to the long-term goals of a business. For example, from a composite customer: “We partnered with an AI-as-a-Service company to guide us in our AI strategy. We were in need of more accurate and frequent demand-prediction forecasting of our global inventory to capture more revenue and reduce our exposure to ‘out of stock’ scenarios. Our partnership with A2Go led to multiple solutions that provided automation and sophisticated AI solutions that brought in a large amount of external data that we would not have had access to without our AI-as-a-Service partner. These capabilities have allowed us to reach our company goals with completely automated, real-time AI solutions that recalibrate and learn on their own. Now we can spend the recaptured time on our customers.”</span></p> <p><span style="font-size: 12pt;"> Based on my observations above, to get to this 30,000-foot view, a business can break it down into many sprints separated by “gates”—sprints that make sense for that business. Eventually, the sprints add up to the proverbial “30,000-foot view.” Consultants refer to this sprint approach as the Scrum Methodology.</span></p> <p><span style="font-size: 12pt;"> </span></p> <p><span style="font-size: 12pt;"><em>Pranay Agarwal is <a href="" target="_blank" rel="noopener">Analytics2Go’s</a> Vice President of Sales and Customer Success. He has over 20 years of experience, from leading B2B technology sales and management consulting projects for Fortune 500 companies to leadership roles at SAP Ariba (in its procurement and supply chain management practice) and IBM (heading its Emptoris Sales for Latin America). He has been a management consultant at both PwC-PRTM Technology Group and Deloitte Consulting Service.</em> </span></p> Thursday News, January 9 tag: 2020-01-09T20:30:00.000Z Vincent Granville <p>Here is our selection of featured articles and technical resources posted since Monday.</p> <p><strong>Resources</strong></p> <ul> <li><a href="">Neural Quantum States</a></li> <li><a href="">Google Brain’s TensorFlow…</a></li> </ul> <p>Here is our selection of featured articles and technical resources posted since Monday.</p> <p><strong>Resources</strong></p> <ul> <li><a href="">Neural Quantum States</a></li> <li><a href="">Google Brain’s TensorFlow</a></li> <li><a href="">Deep Learning : Introduction to Long Short Term Memory</a></li> <li><a href="">Question: Suitability of Augmented Analytics</a></li> </ul> <p><strong>Articles</strong></p> <ul> <li><a href="">Six AI Strategies – But Only One Winner</a></li> <li><a href="">Will AI Force Humans to Become More Human?</a></li> <li><a href="">Business Intelligence vs Business Analytics</a></li> <li><a href="">Overcoming Barriers in ML adoption in corporate world</a></li> </ul> <p><strong>Announcements</strong></p> <ul> <li><a href="" target="_blank" rel="noopener">Gartner Data & Analytics Summit</a></li> <li><a href="" target="_blank" rel="noopener">Weaponizing Data in the Fight Against White Collar Crimes</a></li> </ul> <p>Enjoy the reading!</p> <p></p> <p></p> <p></p> Business Intelligence vs Business Analytics tag: 2020-01-09T16:51:39.000Z Stephanie Gl…</span>< current state of operations, while BA focuses on future trends.</span></p> <h2>Deciphering Trends</h2> <p>Business Intelligence<strong> </strong><span>uncovers past and present patterns and trends, using that information </span><span>to make <strong>better decisions for day-to-day </strong></span><span><strong>business operations.</strong> </span><span>On the other hand, </span>Business Analytics (which <a href="" target="_blank" rel="noopener">Harvard Business School</a> calls "...<span>a newer, trendier term than business intelligence")</span> uses <a href="" target="_blank" rel="noopener">data mining</a>, <a href="" target="_blank" rel="noopener">statistical analysis</a>, and <a href="" target="_blank" rel="noopener">predictive modeling</a> to <strong>predict future patterns.</strong></p> <h2>Types of Questions Answered</h2> <p><strong>Business Intelligence</strong> <strong>answers questions like:</strong></p> <ul> <li>"What has happened in the past?"</li> <li>"What is happening right now?"</li> <li>"How can we stay focused on our current target?"</li> <li>"What do our current customers look like?"</li> </ul> <p>It can tell you <strong>what is working</strong>, and <strong>what isn't working</strong>. BI is the person at the meeting table, informing you of the current state of operations. They are the person armed with a collection of facts and figures in the guise of <a href="" target="_blank" rel="noopener">descriptive statistics</a>: <a href="" target="_blank" rel="noopener">bar graphs</a>, <a href="" target="_blank" rel="noopener">pie charts</a> and the like.</p> <p><em>Use business intelligence if:</em></p> <ul> <li><span>Your goal is to make fast, data-driven decisions about the present situation,</span></li> <li>You want to produce informative reports about your current status.</li> <li>You are a small company, lacking staff with a data science background.</li> <li>You want insight into employee performance or organizational performance.</li> <li>You want to improve workflow or reduce operating costs.</li> </ul> <p><br/> <strong>Business Analytics answers very different questions:</strong></p> <ul> <li><strong>"</strong>What is the probability that this will happen in the future?"</li> <li>"What are our future customers doing?"</li> </ul> <p>Your BA doesn't care much about the current state of operations; they are more concerned with <strong>what's going to happen in the future.</strong> Your BA guy might have a chart or two showing where your bottom line is likely headed, but they are more concerned with predictive analytics: the outcomes from data mining, machine learning and similar tools.</p> <p><em>Use business analytics if:</em></p> <ul> <li>You have access to big data, and want a competitive edge.</li> <li>You want to predict where your company is headed.</li> <li><span> You're looking to change or improve current operations with a view to the future.</span></li> </ul> <h2>Technical Know How</h2> <p>Your <strong>business intelligence specialist</strong> has in depth knowledge about current business operations. Their technical know-how includes <span>business activity monitoring software,</span> <span>reporting software, and spreadsheets. Positions include project managers, consultants, and business intelligence analysts. Where an overlap with BA typically happens is that a BI sometimes uses tools more often associated with BA, including predictive and statistical tools.</span></p> <p>The more scientific arm is the <strong>business analyst.</strong> Business analysts use a variety of tools, including <a href="" target="_blank" rel="noopener">correlational analysis</a><span>, <a href="" target="_blank" rel="noopener">regression analysis</a>, and text mining. The area</span> includes a wide range of technical positions, including database administrator, database developer, data scientist, and data engineer.</p> <h2>References</h2> <p><a href="" target="_blank" rel="noopener">What’s the Difference Between Business Intelligence and Business Analytics?</a></p> <p><a href="" target="_blank" rel="noopener">Business Intelligence vs. Business Analytics</a></p> <p><a href="" target="_blank" rel="noopener">Business Intelligence or Business Analytics</a></p> Overcoming Barriers in Machine Learning adoption in corporate world tag: 2020-01-09T04:00:00.000Z Janardhanan PS …</p> programming. At its heart ML runs on data. The algorithms used in ML systems enable machines learn independently from data and make meaningful predictions useful for making decisions.<br/> <br/> By reading the above paragraph, some of you may think that ML is an opportunity for getting rid of the dependency on expensive software developers and MIS tools. If you think so, you are partially correct. By adopting ML systems, you are going to remove large number of traditional programmers and create dependency on small number of expensive machine learning experts aka data scientists. As a senior manager you may boast that you know software development process and you are capable of leading and guiding a team of developers to implement solutions visualized by you. When it comes to ML based solutions, you may be shocked to know that none of these beliefs help you. When a senior management person decides to adopt Machine Learning for data driven decision making, he may encounter a plethora of challenges which he may not be able address with exposure to traditional software development methods. This article is an attempt to provide some insights into the real problems in adopting ML based techniques for decision making in the corporate world. This will be useful in understanding differences between traditional and ML software. It helps you to overcome the barriers in Machine Learning adoption in the corporate world.<br/> <br/> The barriers are of two different categories; the first one is to understand the philosophy behind ML systems and the second one is to learn the process for development of ML based solutions. The Machine Learning paradigm and the development process is totally different from that of traditional software development. The taste of all dishes made from a recipe by traditional software cooks will be identical. But, the taste of dishes made by different ML cooks from the same recipe will be different. The taste of the output of ML cooks depends on his level of experience, imagination, creativity and domain expertise.<br/> <br/> In traditional software systems, algorithm is represented in the code and in ML based systems intelligence is represented in the model. Traditional software design starts with schema definition of data. AI development starts with accumulation of huge volume of past data. High effort in designing and transforming algorithms to code in traditional method. ML automatically captures intelligence from past data in the form of models. In ML software, the accuracy of models depends on the quality and quantity of data used in training phase of the model. ML software can work with unstructured data existing in the form of text in natural languages. Expertise needed in model development is in selection of APIs and functions for loss estimation, optimization and activation. The results generated depends on ML code and the hyper-parameters used by the developers during the training phase.<br/> <br/> If you are planning to launch an IT solution, then you start working on vendor identification. When you are planning to launch an ML based solution you need to start accumulating data and manually generated results from it. In ML based software development, you worry about the Machine Learning framework for development and deployment of the ML system. Examples are TensorFlow, PyTorch, CNTK, SageMaker etc<br/> <br/> When you start migrating to the ML world, you will be surprised to interact with a group of people talking a totally different language which you have never heard about. They are data scientists with very good background in data engineering, programming, statistics, machine learning, deep learning, and hyper-parameter tuning. They will be describing the solution using technical jargon which you have never heard about. With all these problems in hand, should you migrate to ML based solutions ? The world is moving into the machine learning paradigm and if ML is not part of your MIS tool or solution, it will have less value when evaluated by the next generation management experts.<br/> <br/> How do you overcome these barriers in adopting ML solutions in your solutions. You need to forget your knowledge of traditional software development process and understand the new ML paradigm of development and the jargon used in the ML world. You need to watch many introductory videos on ML to understand the basics of the new paradigm. Once you understand the new ML paradigm of computing, you will be shocked to find that all your skills in designing and developing traditional software systems is of no use. And finally the ML system starts giving instructions to you and influences your decision making capability. I hope as an Artificial Intelligence (AI) enthusiast, you are ready to obey the instructions given by intelligent machines and happy to live peacefully in the new era of AI based systems.<br/> <br/> WRITTEN BY<br/> Janardhanan PS<br/> <br/> Machine Learning Evangelist at SunTec Business Solutions Pvt Ltd., Trivandrum, India.<br/> Domain: Machine Learning Software Development. LinkedIn:<a href=""></a></p> Six AI Strategies – But Only One Winner tag: 2020-01-07T17:30:00.000Z William<img class="align-right" src="" width="300"></img></a> For the last three years we’ve been close observers of exactly…<="" target="_blank" rel="noopener"><img src="" width="300" class="align-right"/></a>For the last three years we’ve been close observers of exactly what makes a successful AI/ML strategy. In addition to our own observations we’ve been listening closely to VCs and how they describe their internal process for deciding who to fund. It’s remarkable how rapidly and fundamentally this conversation has changed.</p> <p>We’ll start with a brief reprise of the various strategies we’ve described over that period and finish with a startling conclusion. <strong>If you want to be a big and successful AI-first company there really is only one proven strategy.</strong> And yes, there are still opportunities here for new entrants, though we strongly suspect that those future stars already exist and are doing their darnedest to grow as fast as possible.</p> <p>Here’s a brief synopsis of the AI/ML strategies we’ve observed pretty much in the order we first saw them emerge.</p> <p> </p> <p><span style="font-size: 12pt;"><strong>Applied AI – Optimizing the Current Business Model</strong></span></p> <p>We list this strategy only because this is where the vast majority of enterprises are today. <a href=""><em><u>Carving out specific projects</u></em></a> where they’ll utilize some elements of AI/ML to tweak the existing business model. This is the common approach of grafting new tech onto old outmoded business models.</p> <p>This isn’t unique to large established companies or necessarily even bad. However, there are plenty of startups that have simply grafted AI/ML onto their existing products. This isn’t AI-first and it’s the source of the new term ‘AI-Washing’. This correctly implies that there’s not enough AI/ML here to create a breakthrough, just enough to justify putting it in the advertising.</p> <p> </p> <p><span style="font-size: 12pt;"><strong>Horizontal Strategy</strong></span></p> <p>The core concept is to <a href=""><em><u>make an AI product or platform that can be used by many industries</u></em></a> to solve problems more efficiently than we could before AI. In the beginning many companies thought they could create cross-industry AI utilities. And if you’re one of the surviving advanced analytic platforms or data prep suites you may have been right.</p> <p>These opportunities were rapidly swallowed up by the monoliths like Google, Amazon, IBM and Microsoft. By their own research and strategic M&A, they rapidly dominated the opportunities such as advanced analytics and generalized image, video, speech, and text AI tools.</p> <p>None of these started out to be an AI-first company. They grew up alongside the developments in AI/ML and rapidly adopted it. </p> <p>There is no particular requirement here for deep industry or process expertise. It’s a widely held principle among VCs that startups should keep a maximum distance from these competitors in order to be at all defensible.</p> <p>And thanks to the open source ethos of IA, there’ really no defensible IP in a ‘proprietary ML or DL algo. Plus, they don’t own the customer’s core problem or train on data unique to that problem. These are general purpose tools that must be adapted by industry or consultants to become targeted solutions.</p> <p>Horizontal strategy is not where new success lies today.</p> <p><strong> </strong></p> <p><span style="font-size: 12pt;"><strong>Purpose Built Analytic Modules</strong></span></p> <p>There is a small exception to the horizontal strategy which consists of <a href=""><em><u>narrowly defined, technically difficult problems</u></em></a> which several industries share in common. Fraud detection and other rare anomaly detection problems such as cybersecurity intrusion detection are the poster children for this group.</p> <p>These are highly tuned special purpose modules that are practically plug-and-play in the industries and applications for which they’re targeted. Frequently they have adapted their UIs so that non-data scientist analysts or even LOB managers can use their sophisticated DS techniques without having to directly operate or configure them.</p> <p><strong> </strong></p> <p><span style="font-size: 12pt;"><strong>Vertical and Data Dominant Strategies</strong></span></p> <p>The vertical and data dominance strategies have rapidly converged and still offer opportunities for commercial success. They require <a href=""><em><u>deep industry and process expertise</u></em></a> where the focus is on a single industry and generally requires defensible ownership of the core training data.</p> <p>Apps in this category always strive to be enterprise in breadth expanding beyond their key AI/ML unique positioning to create a full vertical solution to a specific industry problem.</p> <p><a href=""><em><u>BlueRiver in agriculture, Axon in police vest cam video, and StitchFix in fashion</u></em></a> are all good examples of vertical/data dominant strategies successfully executed. How successful can companies in this strategic group be? Well BlueRiver was acquired by Deere. Axon (of Taser fame) is public and may have avoided a flameout by expanding into police video. Stitch Fix went public in late 2017 and trades today around $24 where it has traded for most of its post-IPO life.</p> <p><strong> </strong></p> <p><span style="font-size: 12pt;"><strong>Systems of Intelligence (SOI)</strong></span></p> <p><a href=""><em><u>The Systems of Intelligence (SOI) strategy</u></em></a> emerged from an article in early 2017 by VC, Jerry Chen of Greylock Partners. Mr. Chen observed that core operational data was locked away in operational systems that are Systems of Record. At the time, attempts to get at SOR data and blend it with other external sources was difficult and required custom solutions mostly from the new-at-the-time world of data lakes.</p> <p>Chen imagined a business world in which users would call on Systems of Intelligence inserted between SORs and friendly UIs that would allow all users to access sophisticated DS based analytics and modeling, thereby creating value.</p> <p>SOI strategy does not necessarily require defensible data (which could only be data appended from external sources) and sought to be as general and universal as horizontal strategies meaning that they weren’t tailored to specific industries or customers.</p> <p>For example, one might develop an SOI that would sit on top of a CRM SOR system to give valuable analytics around the customer journey. It’s not evident whether any companies utilizing this strategy still exist. In general if your SOI was good you rapidly became an M&A target for the underlying deep-pocket SOR (SalesForce, PeopleSoft, SAP, Oracle, and the like) or were a target for acqui-hire.</p> <p><strong> </strong></p> <p><span style="font-size: 12pt;"><strong>And the Winner is – Platform Strategy</strong></span></p> <p>The research firm CBInsights recently published a report on the “19 Business Moats That Helped Shape the World’s Most Massive Companies”. Six of these companies, Amazon, Google, Open Table, Uber, Apple, and Facebook are AI-first companies and all succeeded and created material barriers to competitors by adopting the Platform Strategy. <a href="" target="_blank" rel="noopener"><img src="" width="300" class="align-right"/></a>Consider:</p> <ul> <li>13 of the top 30 global brands are now platform companies and growing strong.</li> <li>Platform companies trade at 4 to 11 X revenues, compared to tech companies at 3-7X, and services companies at 1-3X. And note that’s a multiple of revenues not profit! (Barry Libert, Professor Digital Transformation, DeGroote School of Business, McMasters Univ., Toronto)</li> <li>Leading platform companies like Uber, Airbnb, and Instagram eclipsed the market cap of their traditional competitors in just 6 or 7 years compared to the decades those traditional companies took to achieve that.</li> </ul> <p><strong> </strong></p> <p><span style="font-size: 12pt;"><strong>What Exactly is a Platform Strategy?</strong></span></p> <p><a href=""><u>Platform Strategy</u></a> is technically described as a two-sided market, or two-sided network. CBInsights uses the term “Network Effect Moats”.</p> <p>The centerpiece is an intermediary economic platform with two distinct user groups, typically buyers and sellers, which adds value to the transactions by exploiting Metcalfe’s law, showing that the value of the network increases with the number of users.</p> <p>There are several key characteristics here:</p> <ol> <li>Economies of scale allow the platforms to provide increasing levels of benefit to both parties. These might be economic in terms of sales volume or discounts. But they are equally likely to be intangible.</li> <li>Information and interactions are the source of value. The platform can customize the user experience to both users’ benefit further increasing usage. This is where AI/ML becomes critical.</li> <li>The resources being organized aren’t owned by the platform company and even the management of the network is mostly provided by the participants (e.g. providing profiles, learned preferences, pricing and product/services tailored by providers).</li> </ol> <p><strong> </strong></p> <p><span style="font-size: 12pt;"><strong>Different Flavors of Platform Strategies</strong></span></p> <p>CBInsights has added a second level of detail showing that platforms can arise in different ways.</p> <p>Amazon for example grew its platform based on <strong>marketplace network effects</strong>. Their platform aggregates supply and demand for a given product, drawing in more competing suppliers to join the marketplace, where customers find a more efficient experience and less expensive source.</p> <p>Google however is an example of growing a platform based on <strong>data network effects</strong>. In a data platform there is a central repository of knowledge with more users drawn in as it becomes more useful. Clearly this describes Google’s well known continuous improvement of search results.</p> <p>Open Table is an interesting example as they didn’t set off to be AI-first or with the intent to build a platform. Rather the original Open Table product was a reservation system meant to simplify restaurant backend operations, which were notoriously inefficient. What they achieved almost by accident was to put a networked server (with the software) in a very large number of restaurants.</p> <p>The networked server gave Open Table access to customer behavior data where AI/ML was used to enhance response. Their proprietary networked server also kept competitors away due to high switching cost and the disruption of operations that would occur.</p> <p>The lesson seems to be that it’s still possible to create a platform strategy using <strong>a back door approach</strong>, introducing automation and AI/ML where it hasn’t been used before.</p> <p>Uber’s case may seem obvious but differs from our others as they set out to <strong>own and match supply and demand</strong>. They identified an underserved market with significant consumer pain points (taxi riders) and used their two sided market to draw more and more drivers in to meet demand with innovations like surge pricing.</p> <p>Facebook’s case is also different. It’s opening product which offered little more than access to other user’s profiles was not particularly sticky and didn’t promote increased usage. But with the addition of features like photos and the ability to tag photos with the names of other users they built <strong>a feature-based platform</strong> that is a proven network generator. Since users who are tagged by others in photos they themselves didn’t upload and then are notified that so-and-so has tagged you, who could resist looking.</p> <p>Finally Apple which may seem the least like a platform company. But on a foundation of OS supplemented by great products they have made their <strong>OS</strong> <strong>ecosystem</strong> even more effective with features like the App Store, iTunes, and iCloud. Users are locked in by the OS and the switching cost to other OS’s are simply considered too difficult or qualitatively inferior.</p> <p>So to our way of thinking, case closed. While there may be some room for new entrants in the Vertical or Purpose Built Analytic Module strategy those opportunities will probably result in only modest wins. The message for us is clear. If you want to big and successful, think platform.</p> <p> </p> <p><strong>Other articles on AI Strategies</strong></p> <p><a name="_Toc17964202"></a><a href=""><em><u>It’s Official – Our DNN Models are Now Commodity Software</u></em></a></p> <p><a href=""><em><u>AI/ML Lessons for Creating a Platform Strategy – Part 2</u></em></a></p> <p><a href=""><em><u>AI/ML Lessons for Creating a Platform Strategy – Part 1</u></em></a></p> <p><a href=""><em><u>A Radical AI Strategy - Platformication</u></em></a></p> <p><a href=""><em><u>Now that We’ve Got AI What do We do with It?</u></em></a></p> <p><a href=""><em><u>Capturing the Value of ML/AI – the Challenge of Offensive versus Defensive Data Strategies</u></em></a></p> <p><a href=""><em><u>The Case for Just Getting Your Feet Wet with AI</u></em></a></p> <p><a href=""><em><u>The Fourth Way to Practice Data Science – Purpose Built Analytic Modules</u></em></a></p> <p><a href=""><em><u>From Strategy to Implementation – Planning an AI-First Company</u></em></a></p> <p><a href=""><em><u>Comparing the Four Major AI Strategies</u></em></a></p> <p><a href=""><em><u>Comparing AI Strategies – Systems of Intelligence</u></em></a></p> <p><a href=""><em><u>Comparing AI Strategies – Vertical versus Horizontal.</u></em></a></p> <p><a href=""><em><u>What Makes a Successful AI Company</u></em></a> <span><em><u>– Data Dominance</u></em></span></p> <p><a href=""><em><u>AI Strategies – Incremental and Fundamental Improvements</u></em><> Will AI Force Humans to Become More Human? tag: 2020-01-06T14:00:00.000Z Bill Schmarzo …</p> Day).<span> </span> Instead of AI replacing humans, will AI actually make humans more human, and the very human characteristics such as empathy, compassion and collaboration actually become the future high-value skills that are cherished by leading organizations.</p> <p>Let’s explore these wild-assed questions a bit further, but as always, we need to start with some definitions.</p> <h1><strong>AI, AI Rational Agents, and the AI Utility Function, Oh My!</strong></h1> <p>Artificial intelligence (AI) is defined as the simulation of <strong><em>human intelligence</em></strong>. AI relies upon the creation of “<strong><em>AI Rational Agents</em></strong>” that interact with the environment to learn, where learning or intelligence is guided by the definition of the rewards associated with actions. AI leverages Deep Learning, Machine Learning and/or Reinforcement Learning to guide the “AI Rational Agent” to <strong><em>learn</em></strong> from the continuous engagement with its environment to create the intelligence necessary to maximize current and future rewards (see Figure 1).</p> <p><a href="" target="_blank" rel="noopener"><img src="" class="align-full"/></a></p> <p><strong>Figure</strong> <strong><span>1</span></strong><strong>: AI Rational Agent</strong></p> <p).</p> <p><a href="" target="_blank" rel="noopener"><img src="" class="align-full"/></a></p> <p>Figure 2: "Why Unity Determination is Critical to Defining AI Success"</p> <p.”</p> <h1><strong>Re-Defining Intelligence</strong></h1> <p><em>Intelligence is defined as the ability to acquire and apply knowledge and skills.</em></p> <p.</p> <p.<span> </span> Anyone with children knows the horror of this “intelligence box” dilemma as our children panic to study, tutor and prepare for ACT and SAT tests that play an over-sized role in deciding their future.<span> </span></p> <p>This archaic definition of “intelligence” is actually having the exact opposite impact in that it reduces students (our children) to becoming rote learning machines that actually drives out the creativity and innovation skills that differentiates us from machines.<span> </span></p> <p>We already have experienced machines taking over some of the original components of intelligence. I mean, how many of you use long division, or manually calculate square root, or multiple numbers with more than 2 digits in your head? Traditional measures of intelligence are already under assault by machines.</p> <p>And AI is going to make further inroads into what we have traditionally defined as intelligence.<span> </span> Human intelligence will no longer be defined by one’s ability to reduce inventory costs or improve operational uptime or detect cancer or prevent unplanned maintenance or flag at-risk patients and students. Those are all tasks at which AI models will excel. No human competitive advantage there anymore.</p> <p><strong>We must focus on nurturing the creativity and innovation skills that distinctly make us humans and differentiate us from analytical.</strong> We need a new definition of intelligence that nurtures those uniquely human creativity and innovation capabilities (said by the new Chief Innovation Officer at Hitachi Vantara, wink, wink).</p> <h1><strong>What Is Innovation or Creativity?</strong></h1> <p><em>Creativity is the application of imagination plus exploration with a strong tolerance to learn through failure.</em></p> <p.</p> <p – <em>humans need to become more human</em>. Which is why I think Design Thinking is such a critical skill in a world where AI is going to eliminate rote-skill jobs (flipping burgers, operating a machine press, detecting cancer, replacing broken parts, driving cars).</p> <p>Design thinking is a human-centric approach that creates a deep understanding of and empathy for users in order to generate ideas, build prototypes, share what you’ve made, embrace learning through failure, and put your innovative solution out into the world (see Figure 3).</p> <p><a href="" target="_blank" rel="noopener"><img src="" class="align-full"/></a></p> <p><strong>Figure</strong> <strong><span>3</span></strong><strong>: “Design Thinking Humanizes Data Science" </strong></p> <p>The Empathize stage of Design Thinking in particular is critical as it sets the frame around which we can apply creative and innovate thinking and exploration to come up with different, relevant and meaningful real-world solutions.</p> <p>The Empathize stage1><strong>Summary</strong></h1> <p>So, let’s get back to those original questions, with my answers (you can grade me and send me my score so that I can see what colleges I am qualified to attend):</p> <p><strong><em>Will Artificial Intelligence (AI) create an environment where design thinking skills are more valuable than data science skills?</em></strong></p> <p.</p> <p><strong><em>Will AI alter how we define human intelligence?</em></strong></p> <p.</p> <p><strong><em>Will AI actually force humans to become more human?</em></strong></p> <p. <span> </span>And the understanding, articulation and formulation of the human “ethics equation” will become even more important as AI forces humans to actually become more human.</p> <p><a href="" target="_blank" rel="noopener"><img src="" class="align-full"/></a></p> <p><strong>Figure</strong> <strong><span>4</span></strong><b>: “AI Ethic Challenge: Understanding Passive versus Proactive Ethics" </b></p> <p><em>Will AI actually force humans to become more human</em>…interesting that the technology that potentially threatens so many human jobs, might actually be the technology that forces humans to become more human.</p> Weekly Digest, January 6 tag: 2020-01-05T20>Upcoming DSC Webinar</strong></p> <ul> <li><span><a href="" target="_blank" rel="noopener">Weaponizing Data in the Fight Against White Collar Crimes</a> - <em><strong>1/21 - 2:30 PM GMT</strong></em> - In this latest DSC webinar analyst and advisor Alasdair Anderson will discuss how escalating global white collar crime activity is driving adoption of advanced analytics at a pace never before seen in mature enterprises. <a href="" target="_blank" rel="noopener">Register today</a>.</span></li> </ul> <div><p><span><strong>Featured Resources and Technical Contributions </strong></span></p> <ul> <li><a href="">Naive Bayes Classifier using Kernel Density Estimation (with example)</a> +</li> <li><a href="">Data science cookbook style code reference in Python </a>for beginners</li> <li><a href="">A beginner’s guide to BigQuery Sandbox and exploring public datasets</a></li> <li><a href="">Connections between Neural Networks and Pure Mathematics</a></li> <li><a href="">3 essential elements for mastering machine learning for 2020</a></li> <li><a href="">5 Online Python Machine Learning Courses for 2020</a></li> <li><a href="">Data Science Job Titles to Look Out for in 2020</a></li> <li><a href="">Question: Where is BERT update (NLP)?</a></li> <li><a href="">Web UI Development: Why Vue.js is the Idea Choice</a></li> </ul> <p><span><strong>Featured Articles</strong></span></p> <ul> <li><a href="" target="_self">AI Ethics Challenge: Understanding Passive versus Proactive Ethics</a></li> <li><a href="">Traveling to Other Planets with Google Maps</a></li> <li><a href="">Schmarzo’s Favorite 10 Infographic Blogs for 2019</a></li> <li><a href="">Can Reinforcement Learning Break Through in 2020?</a></li> <li><a href="">Emerging Technologies: Automated DigitalOps Processes</a></li> <li><a href="">Data Science insights after a rookie year in the industry</a></li> <li><a href="">Setting the Cutoff Criterion for Probabilistic Models</a></li> <li><a href="">It's time for Time-series Databases</a></li> <li><a href="">What is BERT and how does it Work?<> <div><strong>From our Sponsors</strong></div> <ul> <li><a href="" target="_blank" rel="noopener">Weaponizing Data in the Fight Against White Collar Crimes</a></li> <li><a href="" rel="noopener" target="_blank">20 Critical Data Labeling Questions for ML</a></li> <li><a href="" rel="noopener" target="_blank">Discover 2020 Trends in Data and BI</a></li> <li><a href="" rel="noopener" target="_blank">How to Succeed with AI in 2020</a></li> <li><a href="" rel="noopener" target="_blank">Enter the Growing Field of Business Analytics</a></li> <li><span><a href="" rel="noopener" target="_blank">University of Denver's MS in Data Science Online</a> </span></li> <li><span><a href="" rel="noopener" target="_blank">2020 Data Science Trends Report<> Connections between Neural Networks and Pure Mathematics tag: 2020-01-05T14:30:00.000Z Marco Tavora <h4 class="graf graf--h4"></h4> <p><img class="graf-image" src="*GJj62r8BX02Sx0I26O3DUA.jpeg"></img><,…</p> <h4 class="graf graf--h4"></h4> <p><img class="graf-image" src="*GJj62r8BX02Sx0I26O3DUA.jpeg"/><, and visual object recognition. However, the reasons why deep learning works so spectacularly well are not yet fully understood.</p> <h3 class="graf graf--h3">Hints from Mathematics</h3> <p class="graf graf--p"><a href="" class="markup--anchor markup--p-anchor" rel="noopener" target="_blank">Paul Dirac</a>, one of the fathers of quantum mechanics and arguably the greatest English physicist since <a href="" class="markup--anchor markup--p-anchor" title="Sir Isaac Newton" rel="noopener" target="_blank">Sir Isaac Newton</a>, once remarked that progress in physics using the “<a href="" class="markup--anchor markup--p-anchor" rel="noopener" target="_blank">method of mathematical reason</a>” would</p> <blockquote class="graf graf--pullquote graf--startsWithDoubleQuote">“…enable[s] one to infer results about experiments that have not been performed. There is no logical reason why the […].”</blockquote> <blockquote class="graf graf--pullquote">— Paul Dirac, 1939</blockquote> <p><img class="graf-image" src="*c5iSSTlLr-MtAAJhXGCqeQ@2x.png"/><br/> Portrait of Paul Dirac is at the peak of his powers (Wikimedia Commons).<br/></p> <p class="graf graf--p">There are many examples in history where purely abstract mathematical concepts eventually led to powerful applications way beyond the context in which they were developed. This article is about one of those examples.</p> <p class="graf graf--p">Though I’ve been working with machine learning for a few years now, I’m a <a href="" class="markup--anchor markup--p-anchor" rel="noopener" target="_blank">theoretical physicist</a> by training, and I have a soft spot for pure mathematics. Lately, I have been particularly interested in the connections between deep learning, pure mathematics, and physics.</p> <p class="graf graf--p">This article provides examples of powerful techniques from a branch of mathematics called <a href="" class="markup--anchor markup--p-anchor" rel="noopener" target="_blank">mathematical analysis</a>. My goal is to use rigorous mathematical results to try to “justify”, at least in some respects, why deep learning methods work so surprisingly well.</p> <p><img class="graf-image" src="*2cLp2uOGaeHY1o0-yNtp1Q.jpeg"/><br/> Abstract representation of a neural network (<a href="" class="markup--anchor markup--figure-anchor" rel="noopener" target="_blank">source</a>).<br/></p> <h3 class="graf graf--h3">A Beautiful Theorem</h3> <p class="graf graf--p">In this section, I will argue that one of the reasons why artificial neural networks are so powerful is intimately related to the mathematical form of the output of its neurons.</p> <p><img class="graf-image" src="*n0rmclGG85wM1apczxgfSA@2x.png"/><br/> A manuscript by Albert Einstein (<a href="" class="markup--anchor markup--figure-anchor" rel="noopener" target="_blank">source</a>).<br/></p> <p class="graf graf--p">I will justify this bold claim using a celebrated theorem originally proved by two Russian mathematicians in the late 50s, the so-called <a href="" class="markup--anchor markup--p-anchor" rel="noopener" target="_blank">Kolmogorov-Arnold representation theorem</a>.</p> <p><img class="graf-image" src="*G7xbK-DUhYhKl8mCYLXmnA@2x.png"/><br/> The mathematicians Andrei Kolmogorov (left) and Vladimir Arnold (right).<br/></p> <h4 class="graf graf--h4">Hilbert’s 13th problem</h4> <p class="graf graf--p">In 1900, <a href="" class="markup--anchor markup--p-anchor" rel="noopener" target="_blank">David Hilbert</a>, one of the most influential mathematicians of the 20th century, presented a famous <a href="" class="markup--anchor markup--p-anchor" title="Hilbert's problems" rel="noopener" target="_blank">collection of problems</a> that effectively set the course of the 20th-century mathematics research.</p> <p class="graf graf--p">The Kolmogorov–Arnold representation theorem is related to one of the celebrated <a href="" class="markup--anchor markup--p-anchor" rel="noopener" target="_blank">Hilbert problems</a>, all of which hugely influenced 20th-century mathematics.</p> <h4 class="graf graf--h4">Closing in on the connection with neural networks</h4> <p class="graf graf--p">A generalization of one of these problems, the <a href="" class="markup--anchor markup--p-anchor" rel="noopener" target="_blank">13th</a> problem specifically, considers the possibility that a function of <em class="markup--em markup--p-em">n</em> variables can be expressed as a combination of sums and compositions of just two functions of a single variable which are denoted by Φ and <em class="markup--em markup--p-em">ϕ</em>.</p> <p class="graf graf--p">More concretely:</p> <p><img class="graf-image" src="*66wotEfmDQY1yYjTXPNjBg@2x.png"/><br/> Kolmogorov-Arnold representation theorem<br/></p> <p class="graf graf--p">Here, <em class="markup--em markup--p-em">η</em> and the λs are real numbers. It should be noted that these two univariate functions are Φ and <em class="markup--em markup--p-em">ϕ</em> can have a highly complicated (fractal) structure.</p> <p class="graf graf--p">Three articles, by Kolmogorov (1957), Arnold (1958) and <a href="" class="markup--anchor markup--p-anchor" rel="noopener" target="_blank">Sprecher</a> (1965) provided a proof that there must exist such representation. This result is rather unexpected since according to it, the bewildering complexity of multivariate functions can be “translated” into trivial operations of univariate functions, such as additions and function compositions.</p> <h3 class="graf graf--h3">Now what?</h3> <p class="graf graf--p">If you got this far (and I would be thrilled if you did), you are probably wondering: how could an esoteric theorem from the 50s and 60s be even remotely related to cutting-edge algorithms such as artificial neural networks?</p> <h3 class="graf graf--h3">A Quick Reminder of Neural Networks Activations</h3> <p class="graf graf--p":</p> <p><img class="graf-image" src="*FyOC7pyTxClj_pWJBV9cOg@2x.png"/><br/> Computation performed by the k-th hidden unit in the second hidden layer.<br/></p> <p class="graf graf--p">Where the <em class="markup--em markup--p-em">w</em>s are the weights, and the <em class="markup--em markup--p-em">b</em>s are the biases. The similarity with the multivariate function <em class="markup--em markup--p-em">f</em> shown a few paragraphs above is evident!</p> <p class="graf graf--p">Let us quickly write down a function in Python only for forward-propagation which outputs the calculations performed by the neurons. The code for the function below has the following steps:</p> <ul class="postList"> <li class="graf graf--li"><strong class="markup--strong markup--li-strong">First line</strong>: the first activation function <em class="markup--em markup--li-em">ϕ</em> acts on the first linear step given by:</li> </ul> <pre class="graf graf--pre">x0.dot(w1) + b1</pre> <p class="graf graf--p">where <code class="markup--code markup--p-code">x0</code> is the input vector.</p> <ul class="postList"> <li class="graf graf--li"><strong class="markup--strong markup--li-strong">Second line: t</strong>he second activation function acts on the second linear step</li> </ul> <pre class="graf graf--pre">y1.dot(w2) + b2</pre> <ul class="postList"> <li class="graf graf--li"><strong class="markup--strong markup--li-strong">Third line:</strong> a <a href="" class="markup--anchor markup--li-anchor" rel="noopener" target="_blank">softmax function</a> is used in the final layer of the neural network, acting on the third linear step</li> </ul> <pre class="graf graf--pre">y2.dot(w3) + b3</pre> <p class="graf graf--p">The full function is:</p> <pre class="graf graf--pre">def forward_propagation(w1, b1, w2, b2, w3, b3, x0):<br/><br/> y1 = phi(x0.dot(w1) + b1)<br/> y2 = phi(y1.dot(w2) + b2)<br/> y3 = softmax(y2.dot(w3) + b3)<br/> <br/> return y1, y2, y3</pre> <p class="graf graf--p">To compare this with our expression above we write:</p> <pre class="graf graf--pre">y2 = phi(phi(x0.dot(w1) + b1).dot(w2) + b2)</pre> <p class="graf graf--p">The correspondence can be made more clear:</p> <p><img class="graf-image" src="*yhYSISnd-KnzOk30rvlHvw@2x.png"/></p> <h3 class="graf graf--h3">A Connection Between Two Worlds</h3> <p class="graf graf--p".</p> <p class="graf graf--p">As pointed out by <a href="" class="markup--anchor markup--p-anchor" rel="noopener" target="_blank">Giuseppe Carleo</a>, the generalization power of forming functions of functions of functions <em class="markup--em markup--p-em">ad</em> <em class="markup--em markup--p-em">nauseam</em> was, in a way, “discovered independently also by nature” since neural networks, which work as shown above doing precisely that, are a simplified way to describe how our brains work.</p> <p class="graf graf--p">Thanks a lot for reading! Constructive criticism and feedback are always welcome!</p> <p class="graf graf--p">There is a lot more to come, stay tuned!</p> <p class="graf graf--p">Originally posted <a href="" target="_blank" rel="noopener">here</a>.</p> Neural Quantum States tag: 2020-01-05T13:30:00.000Z Marco Tavora <h3 class="graf graf--h3"></h3> <p><img class="graf-image" src="*MldhiO2QJs6Uj9vCVGL06Q.jpeg"></img> <br></br> Picture by By <a class="markup--anchor markup--figure-anchor" href=""…</p> <h3 class="graf graf--h3"></h3> <p><img class="graf-image" src="*MldhiO2QJs6Uj9vCVGL06Q.jpeg"/><br/> Picture by By <a href="" class="markup--anchor markup--figure-anchor" <strong class="markup--strong markup--p-strong">exactly</strong> encode such states (they can handle only relatively small systems, with less than ~45 particles).</p> <p class="graf graf--p">As we shall see, recent applications of machine learning techniques (artificial neural networks in particular) have been shown to provide highly efficient representations of such complex states, making their overwhelming complexity computationally tractable.</p> <p class="graf graf--p">In this article, I will discuss how to apply (a type of) artificial neural network to represent quantum states of many particles. The article will be divided into three parts:</p> <ul class="postList"> <li class="graf graf--li">A bird’s-eye view of fundamental quantum mechanical concepts.</li> <li class="graf graf--li">A brief description of machine learning concepts with a particular focus on a type of artificial neural network known as Restricted Boltzmann Machine (RBM)</li> <li class="graf graf--li">An explanation of how one can use RBMs to represent many-particle quantum states.</li> </ul> <h3 class="graf graf--h3">A Preamble</h3> <p class="graf graf--p">There is a fascinating story recounted by one of Albert Einstein's scientific collaborators, the Polish physicist Leopold Infeld, in his autobiography.</p> <p><img class="graf-image" src="*ZvBvH_8wMUQGpKC_QTlHrQ.jpeg"/><br/> Einstein and Infeld in Einstein’s home (<a href="" class="markup--anchor markup--figure-anchor" rel="noopener" target="_blank">source</a>).</p> <p class="graf graf--p">According to Infeld, after the two physicists spent several months performing long and grueling calculations, Einstein would make the following remark:</p> <blockquote class="graf graf--pullquote graf--startsWithDoubleQuote">“God [Nature] does not care about our mathematical difficulties. He integrates empirically.”</blockquote> <blockquote class="graf graf--pullquote">— Einstein (1942).</blockquote> <p class="graf graf--p">What Einstein meant was that, while humans must resort to complex calculations and symbolic reasoning to solve complicated physics problems, <strong class="markup--strong markup--p-strong">Nature does not need to.</strong></p> <p class="graf graf--p"><strong class="markup--strong markup--p-strong">Quick Note</strong>: Einstein used the term “integrate” here because many physical theories are formulated using equations called “differential equations” and to find solutions of such equations one must apply the process of “integration”.</p> <h3 class="graf graf--h3">The Many-Body Problem</h3> <p class="graf graf--p">As noted in the introduction, a notoriously difficult problem in theoretical physics is the many-body problem. This problem has been investigated for a very long time in both classical systems (physical systems based on Newton's three laws of motion and its refinements) and quantum systems (systems based obeying quantum mechanical laws).</p> <p class="graf graf--p">The first (classical) many-body problem to be extensively studied was the 3-body problem involving the Earth, the Moon, and the Sun.</p> <p><img class="graf-image" src="*duLTQQ3jxM7oiANqNxNn6g.gif"/><br/> A simple orbit of a 3-body system with equal masses.</p> <p class="graf graf--p">One of the first scientists to attack this many-body problem was none other than Issac Newton in his masterpiece, the Principia Mathematica:</p> <blockquote class="graf graf--pullquote graf--startsWithDoubleQuote">“Each time a planet revolves it traces a fresh orbit […] and each orbit is dependent.”</blockquote> <blockquote class="graf graf--pullquote">— Isaac Newton (1687)</blockquote> <p><img class="graf-image" src="*f56hS17pgEiSgNRALOpP0g@2x.png"/><br/> Newton’s Principia Mathematica<em class="markup--em markup--figure-em">, arguably the most important scientific book in history.</em></p> <p class="graf graf--p">Since essentially <strong class="markup--strong markup--p-strong">all</strong> relevant physical systems are composed by a collection of interacting particles, the many-body problem is extremely important.</p> <h4 class="graf graf--h4">A Poor Man’s Definition</h4> <p class="graf graf--p">One can define the problem as “the study of the effects of interactions between bodies on the behavior of a many-body system ”.</p> <p><img class="graf-image" src="*W_hrv-qfQM9csK52JQ3Eyg.jpeg"/><br/> Collisions of gold ions generate a quark-gluon plasma, a typical many-body system.</p> <p class="graf graf--p">The meaning of “many” in this context can be anywhere from three to infinity. In a recent paper, my colleagues and I showed that the signatures of quantum many-body behavior can be found already for <em class="markup--em markup--p-em">N</em>=5 spin excitations (figure below).</p> <p><img class="graf-image" src="*WZPTQVn4cFFxcgRW3XMHNQ.png"/><br/> The density of states of a type of spin system (XX model). As the number of spin excitations increases from 2 to 5, a Gaussian distribution (typical of many-body systems with 2-body couplings) is approached.</p> <p>In the present article, I will focus on the quantum many-body problem which has been my main topic of research since 2013.</p> <h3 class="graf graf--h3">Quantum Many-Body Systems</h3> <p class="graf graf--p">The complexity of quantum many-body systems was identified by physicists already in the 1930s. Around that time, the great physicist Paul Dirac envisioned two major problems in quantum mechanics.</p> <p><img class="graf-image" src="*9RRGHDmLk8jXNpQqy2MR3g@2x.jpeg"/><br/> The English physicist Paul Dirac.</p> <p class="graf graf--p">The first, according to him, was “in connection with the exact fitting in of the theory with relativity ideas”. The second was that “the exact application of these [quantum] laws leads to equations much too complicated to be soluble”. The second problem was precisely the quantum many-body problem.</p> <p class="graf graf--p">Luckily, the quantum states of many physical systems can be described using much less information than the maximum capacity of their Hilbert spaces. This fact is exploited by several numerical techniques including the well-known Quantum Monte Carlo (QMC) method.</p> <h4 class="graf graf--h4">Quantum Wave Functions</h4> <p class="graf graf--p">Simply put, a quantum wave function describes mathematically the state of a quantum system. The first quantum system to receive an exact mathematical treatment was the hydrogen atom.</p> <p><img class="graf-image" src="*uxib63UwtT1wgyfd9U4kww@2x.png"/><br/> The probability of finding the electron in a hydrogen atom (represented by the brightness).</p> <p class="graf graf--p">In general, a quantum state is represented by a complex probability amplitude Ψ(<em class="markup--em markup--p-em">S</em>), where the argument <em class="markup--em markup--p-em">S</em> contains all the information about the system’s state. For example, in a spin-1/2 chain:</p> <p><img class="graf-image" src="*EWV0hbIpddTCyilkpo_aEA@2x.png"/><br/> <img class="graf-image" src="*fIHAwALPFG8FEKnkKq9RqQ@2x.png"/><br/> A 1D spin chain: each particle has a value for σ in the z-axis.</p> <p class="graf graf--p">From Ψ(<em class="markup--em markup--p-em">S</em>), probabilities associated with measurements made on the system can be derived. For example, the square modulus of Ψ(<em class="markup--em markup--p-em">S</em>), a positive real number, gives the probability distribution associated with Ψ(<em class="markup--em markup--p-em">S</em>):</p> <p><img class="graf-image" src="*uEaZVlqf7rEcZM6gt4_3GA@2x.png"/></p> <h4 class="graf graf--h4">The Hamiltonian Operator</h4> <p class="graf graf--p">The properties of a quantum system are encapsulated by the system’s Hamiltonian operator <em class="markup--em markup--p-em">H</em>. The latter is the sum of two terms:</p> <ul class="postList"> <li class="graf graf--li">The kinetic energy of all particles in the system and it is associated with their motion</li> <li class="graf graf--li">The potential energy of all particles in the system, associated with the position of the particles with respect to other particles.</li> </ul> <p class="graf graf--p">The allowed energy levels of a quantum system (its energy spectrum) can be obtained by solving the so-called Schrodinger equation, a partial differential equation that describes the behavior of quantum mechanical systems.</p> <p><img class="graf-image" src="*sBKxQEXUE9HjfB4xFG3YXA.jpeg"/><br/> The Austrian physicist Erwin Schrodinger, one of the fathers of quantum mechanics .</p> <p class="graf graf--p">The time-independent version of the Schrödinger equation is given by the following eigenvalue system:</p> <p><img class="graf-image" src="*crnAFGkj9bWGgo-98B1glw@2x.png"/></p> <p class="graf graf--p">The eigenvalues and the corresponding eigenstates are</p> <p><img class="graf-image" src="*lqLxW3NV3hty7SzZZXZsvA@2x.png"/></p> <p class="graf graf--p">The lowest energy corresponds to the so-called “ground state” of the system.</p> <h4 class="graf graf--h4">A Simple Example</h4> <p class="graf graf-.</p> <p><img class="graf-image" src="*Ms9s0ZIjxQIDi19_iLYSJA.gif"/><br/> A mass-spring harmonic oscillator </p> <p class="graf graf--p">The animation below compares the classical and quantum conceptions of a simple harmonic oscillator.</p> <p><img class="graf-image" src="*d9st-CLhnnNBK1WQANsqBQ.gif"/><br/> Wave function describing a quantum harmonic oscillator (Wiki).</p> <p class="graf graf--p">While a simple oscillating mass in a well-defined trajectory represents the classical system (blocks A and B in the figure above), the corresponding quantum system is represented by a complex wave function. In each block (from C onwards) there are two curves: the blue one is the real part of Ψ, and the red one is the imaginary part.</p> <h4 class="graf graf--h4">Bird’s-eye View of Quantum Spin Systems</h4> <p class="graf graf--p">In quantum mechanics, spin can be roughly understood as an “intrinsic form of angular momentum” that is carried by particles and nuclei..</p> <p><img class="graf-image" src="*BDIE_kA0TOMVjgIxMid0gQ.jpeg"/><br/> Example of a many-body system: a spin impurity propagating through a chain of atoms </p> <p class="graf graf--p">Quantum spin systems are closely associated with the phenomena of magnetism. Magnets are made of atoms, which are often small magnets. When these atomic magnets become parallelly oriented they give origin to the macroscopic effect we are familiar with.</p> <p><img class="graf-image" src="*DBplohXNM_weoglTQ5xXrg@2x.png"/><br/> Magnetic materials often display spin waves, propagating disturbances in the magnetic order.</p> <p class="graf graf--p">I will now provide a quick summary of the basic components of machine learning algorithms in a way that will be helpful for the reader to understand their connections with quantum systems.</p> <h3 class="graf graf--h3">Machine Learning = Machine + Learning</h3> <p class="graf graf--p">Machine learning approaches have two basic components (<a href="" class="markup--anchor markup--p-anchor" rel="noopener" target="_blank">Carleo, 2017</a>):</p> <ul class="postList"> <li class="graf graf--li">The <strong class="markup--strong markup--li-strong">machine</strong>, which could be e.g. an artificial neural network Ψ with parameters</li> </ul> <p><img class="graf-image" src="*quEUmxtVTi9vZpnStzVEMw@2x.png"/></p> <ul class="postList"> <li class="graf graf--li">The <strong class="markup--strong markup--li-strong">learning</strong> of the parameters <em class="markup--em markup--li-em">W</em>, performed using e.g. stochastic optimization algorithms.</li> </ul> <p><img class="graf-image" src="*_BEFleZzE_ZGiQo4aXwirQ@2x.png"/><br/> The two components of machine learning.</p> <h4 class="graf graf--h4">Neural networks</h4> <p class="graf graf--p">Artificial neural networks are usually non-linear multi-dimensional nested functions. Their internal workings are only heuristically understood and investigating their structure does not generate insights regarding the function being it approximates.</p> <p><img class="graf-image" src="*3fA77_mLNiJTSgZFhYnU0Q@2x.png"/><br/> Simple artificial neural network with two hidden layers.</p> <p class="graf graf--p">Due to the absence of a clear-cut connection between the network parameters and the mathematical function which is being approximated, ANNs are often referred to as “black boxes”.</p> <p class="graf graf--p">What are Restricted Boltzmann Machines?</p> <p class="graf graf--p">Restricted Boltzmann Machines are generative stochastic neural networks. They have many applications including:</p> <ul class="postList"> <li class="graf graf--li">Collaborative filtering</li> <li class="graf graf--li">Dimensionality reduction</li> <li class="graf graf--li">Classification</li> <li class="graf graf--li">Regression</li> <li class="graf graf--li">Feature learning</li> <li class="graf graf--li">Topic modeling</li> </ul> <p class="graf graf--p">RBMs belong to a class of models known as Energy-based Models. They are different from other (more popular) neural networks which estimate a <strong class="markup--strong markup--p-strong">value</strong> based on inputs while RBMs estimate <strong class="markup--strong markup--p-strong">probability densities</strong> of the inputs (they estimate many points instead of a single value).</p> <p class="graf graf--p">RBMs have the following properties:</p> <ul class="postList"> <li class="graf graf--li">They are shallow networks, with only two layers (the input/visible layer and a hidden layer)</li> <li class="graf graf--li">Their hidden units <strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">h</em></strong> and visible (input) units <strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">v</em></strong> are usually binary-valued</li> <li class="graf graf--li">There is a weight matrix <strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">W</em></strong> associated with the connections between hidden and visible units</li> <li class="graf graf--li">There are two bias terms, one for input units denoted by <strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">a</em></strong> and one for hidden units denoted by <strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">b</em></strong></li> <li class="graf graf--li">Each configuration has an associated energy functional <em class="markup--em markup--li-em">E</em>(<strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">v</em></strong>,<strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">h</em></strong>) which is minimized during training</li> <li class="graf graf--li">They have no output layer</li> <li class="graf graf--li">There are no intra-layer connections (this is the “restriction”). For a given set of visible unit activations, the hidden unit activations are mutually independent (the converse also holds). This property facilitates the analysis tremendously.</li> </ul> <p class="graf graf--p">The energy functional to be minimized is given by:</p> <p><img class="graf-image" src="*C9imxn6Eee_sgmfXqafKbw@2x.png"/><br/> Eq.1: Energy functional minimized by RBMs.</p> <p class="graf graf--p">The joint probability distribution of both visible and hidden units reads:</p> <p><img class="graf-image" src="*iR4Zz43r00ZBXUOl-P6l1Q@2x.png"/><br/> Eq.2: Total probability distribution.</p> <p class="graf graf--p">where the normalization constant <em class="markup--em markup--p-em">Z</em> is called the partition function. Tracing out the hidden units, we obtain the marginal probability of a visible (input) vector:</p> <p><img class="graf-image" src="*ADOF25GetX3ZNkOHc3Hq6w@2x.png"/><br/> Eq.3: Input units marginal probability distribution,</p> <p class="graf graf--p">Since, as noted before, hidden (visible) unit activations are mutually independent given the visible (hidden) unit activations one can write:</p> <p><img class="graf-image" src="*zL0kR667G-RYvJx27WDQ2g@2x.png"/><br/> Eq.4: Conditional probabilities becomes products due to mutual independence.</p> <p class="graf graf--p">and also:</p> <p><img class="graf-image" src="*Y1sMasLECgn9bEh5iHMjaQ@2x.png"/><br/> Eq. 5: Same as Eq.4.</p> <p class="graf graf--p">Finally, the activation probabilities read:</p> <p><img class="graf-image" src="*QnSECgL9M_Ft8Nayd55cHg@2x.png"/><br/> Eq.6: Activation probabilities.</p> <p class="graf graf--p">where <em class="markup--em markup--p-em">σ</em> is the sigmoid function.</p> <p class="graf graf--p">The training steps are the following:</p> <ul class="postList"> <li class="graf graf--li">We begin by setting the visible units states to a training vector.</li> <li class="graf graf--li">The states of the hidden units are then calculated using the expression on the left of Equation 6.</li> <li class="graf graf--li">After the states are chosen for the hidden units, one performs the so-called “reconstruction”, setting each visible unit to 1 according to the expression on the right of Equation 6.</li> <li class="graf graf--li">The weight changes by (the primed variables are the reconstructions):</li> </ul> <p><img class="graf-image" src="*DpAyiXyFbxQ-9iGZjoz55g@2x.png"/></p> <h3 class="graf graf--h3">How RBMs process inputs, a simple example</h3> <p class="graf graf--p">The following analysis is heavily based on this excellent tutorial. The three figures below show how a RBM processes inputs.</p> <p><img class="graf-image" src="*8qoWSg_GInRTytpcDFr4rA@2x.png"/><br/> A simple RBM processing inputs.</p> <ul class="postList"> <li class="graf graf--li">At node 1 of the hidden layer, the input <strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">x</em></strong> is multiplied by the weight <strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">w</em></strong>, a bias <strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">b</em></strong> is added, and the result is fed into the activation giving origin to an output <strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">a</em></strong> (see the leftmost diagram).</li> <li class="graf graf--li">In the central diagram, all inputs are combined at the hidden node 1 and each input <strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">x</em></strong> is multiplied by its corresponding <strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">w</em></strong>. The products are then summed, a bias <strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">b</em></strong> is added, and the end result is passed into an activation function producing the full output <strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">a</em></strong> from the hidden node 1</li> <li class="graf graf--li">In the third diagram, inputs <strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">x</em></strong> are passed to all nodes in the hidden layer. At each hidden node, <strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">x</em></strong> is multiplied by its corresponding weight <strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">w</em></strong>. Individual hidden nodes receive products of all inputs <strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">x</em></strong> with their individual weights <strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">w</em></strong>. The bias <strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">b</em></strong> is then added to each sum, and the results are passed through activation functions generating outputs for all hidden nodes.</li> </ul> <h4 class="graf graf--h4">How RBMs learn to reconstruct data</h4> <p class="graf graf--p">RBMs perform an unsupervised process called “reconstruction”. They learn to reconstruct the data performing a long succession of passes (forward and backward ones) between its two layers. In the backward pass, as shown in the diagram below, the activation functions of the nodes in the hidden layer become the new inputs.</p> <p><img class="graf-image" src="*X-DyRFjbHb1cgzqNCLAfyQ@2x.png"/></p> <p class="graf graf--p">The product of these inputs and the respective weights are summed and the new biases <strong class="markup--strong markup--p-strong"><em class="markup--em markup--p-em">b</em></strong> from the visible layer are added at each input node. The new output from such operations is called “reconstruction” because it is an approximation of the original input.</p> <p class="graf graf--p">Naturally, the reconstructions and the original inputs are very different at first (since the values of <strong class="markup--strong markup--p-strong"><em class="markup--em markup--p-em">w</em></strong> are randomly initialized). However, as the error is repeatedly backpropagated against the <strong class="markup--strong markup--p-strong"><em class="markup--em markup--p-em">w</em></strong>s, it is gradually minimized.</p> <p class="graf graf--p">We see therefore that:</p> <ul class="postList"> <li class="graf graf--li">The RBM uses, on the forward pass, inputs to make predictions about the activations of the nodes and estimate the probability distribution of the output <strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">a</em></strong> conditional on the weighted inputs <strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">x</em></strong></li> <li class="graf graf--li">On the backward pass, the RBM tries to estimate the probability distribution of the inputs <strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">x</em></strong> conditional on the activations <strong class="markup--strong markup--li-strong"><em class="markup--em markup--li-em">a</em></strong></li> </ul> <p class="graf graf--p">Joining both conditional distributions, the joint probability distribution of <strong class="markup--strong markup--p-strong"><em class="markup--em markup--p-em">x</em></strong> and <strong class="markup--strong markup--p-strong"><em class="markup--em markup--p-em">a</em></strong> is obtained i.e. the RBM learns how to approximate the original data (the structure of the input).</p> <h3 class="graf graf--h3">How to connect machine learning and quantum systems?</h3> <p class="graf graf--p">In a recent article published in Science magazine, it was proposed that one can treat the quantum wave function Ψ(<em class="markup--em markup--p-em">S</em>) of a quantum many-body system as a black-box and then approximate it using an RBM. The RBM is trained to represent Ψ(<em class="markup--em markup--p-em">S</em>) via the optimization of its parameters.</p> <p><img class="graf-image" src="*5neVk2ALovSKWW-4HYKfOg@2x.png"/><br/> RBM used by Carleo and Troyer (2017) that encodes a spin many-body quantum state.</p> <p class="graf graf--p">The question is how to reformulate the (time-independent) Schrodinger equation, which is an eigenvalue problem, as a machine learning problem.</p> <h4 class="graf graf--h4">Variational Methods</h4> <p class="graf graf--p">As it turns out, the answer has been known for quite some time, and it is based on the so-called variation method , an alternative formulation of the wave equation that can be used to obtain the energies of a quantum system. Using this method we can write the optimization problem as follows:</p> <p><img class="graf-image" src="*TRiMeFyh11LeC8Y6FZ2wDA@2x.png"/></p> <p class="graf graf--p">where <em class="markup--em markup--p-em">E</em>[Ψ] is a functional that depends on the eigenstates and Hamiltonian. Solving this optimization problem we obtain both the ground state energy and its corresponding ground state.</p> <h4 class="graf graf--h4">Quantum States and Restricted Boltzmann Machines</h4> <p class="graf graf--p">In Carleo and Troyer (2017), RBMs are used to represent a quantum state Ψ(<em class="markup--em markup--p-em">S</em>). They generalize RBMs to allow for complex network parameters.</p> <p class="graf graf--p">It is easy to show that the energy functional can be written as</p> <p><img class="graf-image" src="*vvWnVLLyIKaNO1_b_gZfvw@2x.png"/></p> <p class="graf graf--p">where the argument of the expectation value after the last equal sign is the local energy. The neural network is then trained using the method of Stochastic Reconfiguation (SR). The corresponding optimization iteration reads:</p> <p><img class="graf-image" src="*Mjh3JHtxkfUGZiIb_6YbqA@2x.png"/><br/> The gradient descent update protocol.</p> <p class="graf graf--p">where <em class="markup--em markup--p-em">η</em> is the learning rate and <strong class="markup--strong markup--p-strong"><em class="markup--em markup--p-em">S</em></strong> is the stochastic reconfiguration matrix which depends on the eigenstates and its logarithmic derivatives. </p> <p class="graf graf--p">Carleo and Troyer (2017)were interested specifically in quantum systems of spin 1/2 and they write the quantum state as follows:</p> <p><img class="graf-image" src="*gmMHZjlqVZST8sPnr1QbCg@2x.png"/></p> <p class="graf graf--p">In this expression the <em class="markup--em markup--p-em">W</em> argument of Ψ is the set of parameters:</p> <p><img class="graf-image" src="*mp2njbK9ga1OqbrM_9pDzg@2x.png"/></p> <p class="graf graf--p">where the components on <strong class="markup--strong markup--p-strong"><em class="markup--em markup--p-em">a</em></strong> and <strong class="markup--strong markup--p-strong"><em class="markup--em markup--p-em">b</em></strong> are real but <strong class="markup--strong markup--p-strong"><em class="markup--em markup--p-em">W</em></strong> can be complex. The absence of intralayer interactions, typical of the RBMs architecture allows hidden variables to be summer over (or traced out), considerably simplifying the expression above to:</p> <p><img class="graf-image" src="*DN4zmK_p1z1UR991kMXTng@2x.png"/></p> <p class="graf graf--p">To train the quantum wave functions one follows a similar procedure as described above for RBMs.</p> <h4 class="graf graf--h4">Impressive Accuracy</h4> <p class="graf graf--p">The figure below shows the negligible relative error of the NQS ground-state energy estimation. Each plot corresponds to a test case which is a system with known exact solutions. The horizontal axis is the hidden-unit density i.e. the ratio between the number of hidden and visible units. Notice that even with relatively few hidden units, the accuracy of the model is already extremely impressive (one part per million error!)</p> <p><img class="graf-image" src="*nW7g93Ha7_YPZSX4kvo_qg@2x.jpeg"/><br/> The error of the model ground-state energy relative to the exact value in three tests cases.</p> <h3 class="graf graf--h3">Conclusion</h3> <p class="graf graf--p">In this brief article, we saw that Restricted Boltzmann Machines (RBMs), a simple type of artificial neural network, can be used to compute with extremely high accuracy the ground-state energy of quantum systems of many particles. </p> <p class="graf graf--p">Thanks for reading!</p> <p class="graf graf--p">As always, constructive criticism and feedback are welcome!</p> <p class="graf graf--p"><em>This article was originally published on <a href="" target="_blank" rel="noopener">here</a>.</em></p> Data Science insights after a rookie year in the industry tag: 2020-01-05T12:03:14.000Z Tomasz Szmidt ></p> ><span style="font-weight: 400;">The Kaggle times, when tremendous community effort was unearthing secrets and patterns of almost every data set, are over once you turn pro. You will find your data fragmented, skewed and screwed, simply missing, abundant but noisy - just to list a few plausible scenarios. Your newbie energy won’t leave you discouraged, yet patching the gaps may consume too much time and resources than available. Although it’s often stated that companies sit on stockpiles of data it doesn’t mean it’s accessible for data science research. You can easily find yourself constrained by licenses, corporate agreements, confidentiality matters and technical issues, such as parsing or streaming. If that happens navigate yourself through conversations with experts. Their thorough understanding, of the field you were appointed to investigate, will guide you through the confusion and facilitate data science research. There is one more vital reason to stay in touch with the experts’ panel. It’s been said many times - data science projects fail due to miscommunication with clients. Either you get their expectations wrong or they imagine your solution to be different than it is. This is by all means true. </span></p> <p><span style="font-weight: 400;">My takeaway here is to bridge gaps in data with the expertise of people close to the problem and foster cooperation with a data engineering team. After all, they deliver data fuel to your model rocket.</span></p> <h3><span style="font-weight: 400;">Do not underestimate the power of Maths</span></h3> <p><span style="font-weight: 400;">It was so much fun to import this or that from scikit learn and fit my models, back in the days. What I quickly experienced at work is the cost of computation, especially if you work on big data, meaning you’re out of RAM right after loading a data set. The currency of that cost is either real money spent on cloud or execution time utilizing in-house infrastructure. It may also happen, like in my case, that your environment would require you to switch from Python to PySpark. Regardless of the industry, business objectives are always the same. If you have to deliver your solution in production it has to be fast and cheap. Otherwise, you will circulate around the infinite RND loop. That’s why I turned to statistics and probability, investigating how can I blend pure maths into my algorithms. As I was closely working with experts I was getting the vital context of the industry our team was assigned to. Splitting complex problems into really narrow cases, separated by well-defined thresholds, made even standard deviation applicable. Although it may not sound data scientific at all, relatively simple math can deliver lean solutions that work lighting fast on excessively large volumes of data.</span></p> <h3><span style="font-weight: 400;">GIT matters</span></h3> <p><span style="font-weight: 400;">I wasn’t any different than numerous junior data scientists entering the job market with a belief, that Jupyter Notebook is the fundamental tool of our work. I simply couldn’t be more wrong. Just as the name suggests, ‘notebook’ stands for keeping notes, full stop. Jupyter won’t facilitate teamwork, won’t enable code version control and won’t lead you to production. My conclusion regarding Jupyter Notebook is that although it’s great for quick exploration and verification of your ideas it cripples the overall performance of the data science team. Now, what has become of fundamental importance is to keep your code repository thriving. Daily commits, work on branches, it will all benefit transparency of your project, facilitate testing and production, taking tasks over from fellow data scientists. Before I started as a data scientists I was on a 3-month front-end web app internship. One year later it’s really striking how much typical app development has in common with developing data science projects.</span></p> <p></p> <p><span style="font-weight: 400;">Having articulated my thoughts above, let me conclude with one productivity hack that was actually unthinkable at my data science discovery phase. Disconnect from Jupyter Notebook and say hello to Python IDE of your choice. You won’t lose the Jupyter experience as both, Visual Studio Code and PyCharm, support notebooks. What you gain, however, is instant ability to turn your code into proper .py files. This is what you commit at the end of the day and schedule for testing in the development environment. Tracking changes and development of your algorithms is a robust component of quality assurance and indicator of your performance. This is how you keep things organized. Ultimately, running a data science project is so much alike app development. At least this is what I’ve observed in my rookie year as a data scientist.</span></p> Traveling to Other Planets with Google Maps tag: 2020-01-04T18:30:00.000Z Capri Granville …</p> appear as static as if seen from Earth and Earth was not rotating. So all space travel movies are based on unrealistic assumptions. </p> <p></p> <p><a href="" target="_blank" rel="noopener"><img src="" class="align-center"/></a></p> <p></p> <p>Anyway, you can check the details <a href="" target="_blank" rel="noopener">here</a>. Or start you space exploration <a href="" target="_blank" rel="noopener">here</a> with your browser. This app was released on 2017. Above is how it looks like on my browser (Chrome), as I was visiting Ceres. It uses a lot of memory on your laptop, compared to other websites, even those with streaming videos.</p> <p></p> Setting the Cutoff Criterion for Probabilistic Models tag: 2020-01-04T12:00:00.000Z Frank Raulf …</em></p> probabilistic model of having an accident given a blood alcohol Level of 0.5 <span style="font-size: 10pt;">%</span><span style="font-size: 8pt;">o</span> is 40% does not necessarily mean that you should predict this case as no accident.</em></p> <p>Examining the probability distribution, you might notice a concentration into the value of zero. This is not wrong at all costs, but you can easily validate whether it is better to adjust your cutoff criterion by setting it down - or up. ROC helps as well.</p> <p>If you have doubts regarding the shape of the probability-distribution (of the results), you can reshape it:</p> <p>Supposed you found that the cutoff should be at 40% instead of 50%. So, you know three things:</p> <ol> <li> p = 0 should remain 0</li> <li> p = 1 should remain 1</li> <li> p = 40% should be 50%</li> </ol> <p>A root-function fulfills the first two requirements. The rest is simple mathematics.</p> <p>0.5 = 0.4^x </p> <p>log(0.5) / log(0.4) = x</p> <p>log(0.5) / log(<span style="display: inline !important; float: none; background-color: #ffffff; color: #000000; cursor: text; font-family: Arial,'Helvetica Neue',Helvetica,sans-serif; font-size: 13px; font-style: normal; font-variant: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: left; text-decoration: none; text-indent: 0px; text-transform: none; -webkit-text-stroke-width: 0px; white-space: normal; word-spacing: 0px;">cutoff</span>) = x</p> <p>With this root-function you can adjust all probability-results. At least slightly, this exponent can remove the slope of the probability distribution into zero. The lower the probability the stronger the effect. </p> <p>You will see that it performs better in many cases - instead of just setting down the cutoff criterion.</p> 5 Best Python Machine Learning Courses Online for 2020 tag: 2020-01-04T06:30:00.000Z Digital Defynd <p><img alt="Best Python Machine Learning course tutorial class certification training online" class="aligncenter size-full wp-image-12209" height="493" src="" width="876"></img><…</p> <p><img class="aligncenter size-full wp-image-12209" src="" alt="Best Python Machine Learning course tutorial class certification training online" width="876" height="493"/><. </p> <p><strong>Key USPs -</strong></p> <ul> <li>Learn about classification, clustering and regression problems.</li> <li>Tons of examples and demonstrations based on real-world scenarios.</li> <li>Guidance is provided for the necessary configuration of tools required to follow along with the lectures.</li> <li>Build your portfolio by working on various projects.</li> <li>Complete all the graded assessments to earn the completion certificate as well as the IBM digital badge.</li> </ul> <p><strong>Duration: 5 to 6 weeks of study, 3 to 6 hours per week</strong></p> <p><strong>Rating: 4.7 out of 5</strong></p> <blockquote>Review : The course was highly informative and very well presented. It was very easier to follow. Many complicated concepts were clearly explained. It improved my confidence with respect to programming skills. -RC</blockquote> <p>Learn more <a href="" target="_blank" rel="noopener">here</a></p> <h3><span style="font-size: 18pt;">2. Applied Machine Learning in Python (Coursera) </span></h3> <p. </p> <p><strong>Key USPs-</strong></p> <ul> <li>Understand the difference between statistical and ML models.</li> <li>Learn about scikit toolkit by following along with the tutorial.</li> <li>Identify the characteristics of datasets and decide which technique to apply.</li> <li>Write efficient code in Python to analyze challenges and engineer appropriate features.</li> <li>The program can be audited for free and the verified certification can be added for an additional fee.</li> </ul> <p><strong>Duration: 24 hours, 8 hours per week</strong></p> <p><strong>Rating: 4.6 out of 5</strong></p> <blockquote>Review : Very well structured course, and very interesting too! Has made me want to pursue a career in machine learning. I originally just wanted to learn to program, without true goal, now I have one thanks!! -FL</blockquote> <p>Learn more <a href="" target="_blank" rel="noopener">here</a></p> <h3><span style="font-size: 18pt;">3. Python for Data Science and Machine Learning Bootcamp (Udemy)</span></h3> <p.</p> <p><strong>Key USPs-</strong></p> <ul> <li>Get familiar with tools and software such as Pandas, SciKit-Learn, Seaborn.</li> <li>Cover topics like natural language processing, neural networks, regression, plotting and more.</li> <li>Every video is accompanied by code notes to help you understand the topic in depth.</li> <li>149 Lectures + 10 Articles + 4 Downloadable resources + Full lifetime access</li> <li>Enroll in the course at an affordable rate.</li> </ul> <p><strong>Duration: 22.5 hours</strong></p> <p><strong>Rating: 4.5 out of 5</strong></p> <blockquote</blockquote> <p>Lear more <a href="" target="_blank" rel="noopener">here</a></p> <h3><span style="font-size: 18pt;">4. Machine Learning with Python (DataCamp)</span></h3> <p <a href="" target="_blank" rel="noopener">Python Certification</a> as well. </p> <p><strong>Key USPs-</strong></p> <ul> <li>Interactive classes make learning a fun experience.</li> <li>Keep up with the latest techniques to create solutions using the relevant tools.</li> <li>Gain best practices and advice from the instructor and incorporate them into your development process.</li> <li>The track consists of 5 courses in total with an increasing level of difficulty.</li> <li>Access the lessons for free during the trial period to check if it suits your learning style.</li> </ul> <p><strong>Duration: 20 hours</strong></p> <p><strong>Rating: 4.5 out of 5</strong></p> <p>Learn more <a href="" target="_blank" rel="noopener">here</a> </p> <h3><span style="font-size: 18pt;">5. Intro to Machine Learning (Udacity)</span></h3> <p.</p> <p><strong>Key USPs-</strong></p> <ul> <li>All the modules are followed with practical projects that give you the opportunity to integrate and apply the covered theory.</li> <li>Get supervision from the one to one mentor assigned to you throughout the program.</li> <li>Access to sessions to help you with interview preparation and beyond.</li> <li>The study schedule is tailored to fit your daily routine.</li> <li>Join the student community to interact with your peer and exchange ideas.</li> </ul> <p><strong>Duration: 3 months, 10 hours per week</strong></p> <p><strong>Rating: 4.5 out of 5</strong></p> <p>Learn more <a href="" target="_blank" rel="noopener">here</a> </p> <p>So these were the 5 Best Python for Machine Learning Tutorial, Class, Course, Training & Certification available online for 2020. Hope you found what you were looking for. Wish you a Happy Learning!</p> | https://www.datasciencecentral.com/profiles/blog/feed?tag=E-commerce&xn_auth=no | CC-MAIN-2020-05 | en | refinedweb |
uiomove,
uiomovei
— move data described by a struct uio
#include
<sys/systm.h>
int
uiomove(void
*buf, size_t n,
struct uio *uio);
int
uiomovei(void
*buf, int n,
struct uio *uio);;/* associated process or NULL */ };
A struct uio typically describes data in motion. Several of the fields described below reflect that expectation.
struct iovec { void *iov_base; /* Base address. */ size_t iov_len; /* Length. */ };
uiomoveitself does not use this field if the area is in kernel-space, but other functions that take a struct uio may depend on this information..
The
uiomovei function is similar to
uiomove, but uses a signed integer as the byte
count. It is a temporary legacy interface and should not be used in new
code.
uiomove and
uiomovei return 0 on success or EFAULT if a bad
address is encountered. | https://man.openbsd.org/OpenBSD-5.9/uiomovei.9 | CC-MAIN-2020-05 | en | refinedweb |
import "go.chromium.org/luci/cipd/client/cipd/deployer"
Package deployer holds functionality for deploying CIPD packages.
deployer.go doc.go gofslock.go paranoia.go
type DeployedPackage struct { Deployed bool // true if the package is deployed (perhaps partially) Pin common.Pin // the currently installed pin Subdir string // the site subdirectory where the package is installed Manifest *pkg.Manifest // instance's manifest, if available InstallMode pkg.InstallMode // validated install mode, if available // ToRedeploy is a list of files that needs to be reextracted from the // original package and relinked into the site root. ToRedeploy []string // ToRelink is a list of files that needs to be relinked into the site root. // // They are already present in the .cipd/* guts, so there's no need to fetch // the original package to get them. ToRelink []string // contains filtered or unexported fields }
DeployedPackage represents a state of the deployed (or partially deployed) package, as returned by CheckDeployed.
ToRedeploy is populated only when CheckDeployed is called in a paranoid mode, and the package needs repairs.
type Deployer interface { // DeployInstance installs an instance of a package into the given subdir of // the root. // // It unpacks the package into <base>/.cipd/pkgs/*, and rearranges // symlinks to point to unpacked files. It tries to make it as "atomic" as // possible. Returns information about the deployed instance. // // Due to a historical bug, if inst contains any files which are intended to // be deployed to `.cipd/*`, they will not be extracted and you'll see // warnings logged. DeployInstance(ctx context.Context, subdir string, inst pkg.Instance, maxThreads int) (common.Pin, error) // CheckDeployed checks whether a given package is deployed at the given // subdir. // // Returns an error if it can't check the package state for some reason. // Otherwise returns the state of the package. In particular, if the package // is not deployed, returns DeployedPackage{Deployed: false}. // // Depending on the paranoia mode will also verify that package's files are // correctly installed into the site root and will return a list of files // that needs to be redeployed (as part of DeployedPackage). // // If manifest is set to WithManifest, will also fetch and return the instance // manifest and install mode. This is optional, since not all callers need it, // and it is pretty heavy operation. Any paranoid mode implies WithManifest // too. CheckDeployed(ctx context.Context, subdir, packageName string, paranoia ParanoidMode, manifest pkg.ManifestMode) (*DeployedPackage, error) // FindDeployed returns a list of packages deployed to a site root. // // It just does a shallow examination of the metadata directory, without // paranoid checks that all installed packages are free from corruption. FindDeployed(ctx context.Context) (out common.PinSliceBySubdir, err error) // RemoveDeployed deletes a package from a subdir given its name. RemoveDeployed(ctx context.Context, subdir, packageName string) error // RepairDeployed attempts to restore broken deployed instance. // // Use CheckDeployed first to figure out what parts of the package need // repairs. // // 'pin' indicates an instances that is supposed to be installed in the given // subdir. If there's no such package there or its version is different from // the one specified in the pin, returns an error. RepairDeployed(ctx context.Context, subdir string, pin common.Pin, maxThreads int, params RepairParams) error // TempFile returns os.File located in <base>/.cipd/tmp/*. // // The file is open for reading and writing. TempFile(ctx context.Context, prefix string) (*os.File, error) // CleanupTrash attempts to remove stale files. // // This is a best effort operation. Errors are logged (either at Debug or // Warning level, depending on severity of the trash state). CleanupTrash(ctx context.Context) }
Deployer knows how to unzip and place packages into site root directory.
New return default Deployer implementation.
ParanoidMode specifies how paranoid CIPD client should be.
const ( // NotParanoid indicates that CIPD client should trust its metadata // directory: if a package is marked as installed there, it should be // considered correctly installed in the site root too. NotParanoid ParanoidMode = "NotParanoid" // CheckPresence indicates that CIPD client should verify all files // that are supposed to be installed into the site root are indeed present // there. // // Note that it will not check file's content or file mode. Only its presence. CheckPresence ParanoidMode = ParanoidMode = "CheckIntegrity" )
func (p ParanoidMode) Validate() error
Validate returns an error if the mode is unrecognized.
type RepairParams struct { // Instance holds the original package data. // // Must be present if ToRedeploy is not empty. Otherwise not used. Instance pkg.Instance // ToRedeploy is a list of files that needs to be extracted from the instance // and relinked into the site root. ToRedeploy []string // ToRelink is a list of files that just needs to be relinked into the site // root. ToRelink []string }
RepairParams is passed to RepairDeployed.
Package deployer imports 22 packages (graph) and is imported by 9 packages. Updated 2020-01-18. Refresh now. Tools for package owners. | https://godoc.org/go.chromium.org/luci/cipd/client/cipd/deployer | CC-MAIN-2020-05 | en | refinedweb |
Admin User7,988 Points
Why isn't this working, again?
I've put it in correctly here...
from flask import Flask from flask import render_template app = Flask(__name__) @app.route('/') def index(): return render_template('index.html')
{% extends "layout.html" %} <!doctype html> <html> <head><title>{% block title %}Homepage{% endblock %}</title></head> <body> {% block content %} <h1>Smells Like Bakin'!</h1> <p>Welcome to my bakery web site!</p> {% endblock %} </body> </html>
{% extends "layout.html" %} <!doctype html> <html> <head><title>{% block title %}Smells Like Bakin'{% endblock %}</title></head> <body> {% block content %}{% endblock %} </body> </html>
1 Answer
Steven Parker179,649 Points
There's an extraneous line at the top of "layout.html":
{% extends "layout.html" %}
This was not asked for by the challenge instructions, and it wouldn't make much sense for a module to be an extension of itself!
Remove that line and you'll pass task 4. | https://teamtreehouse.com/community/why-isnt-this-working-again | CC-MAIN-2020-05 | en | refinedweb |
The highest common factor is also known as GCD (Greatest common divisor). GCD is the largest possible integer which can be divided by the given numbers without a remainder.
Note: GCD is also known as HCF(Highest Common Factor).
LCM, lowest common multiple is the least possible integer which can be divided by the given numbers without a remainder.
In the example given below, we will take two numbers and find their GCD and LCM.
Logic:
For GCD:
We will take a number, check if it is perfectly divisible by both numbers. We store the value in a variable, and then, print the variable.
For LCM:
We use a formula here,
LCM = Num1*Num2/GCD
Algorithm:
- Take two number’s as input.
- Check if the given numbers are divisible by any number less than the number itself using for loop.
- If yes, then store it (in gcd) and continue ahead.
- After termination of the loop, the last updated value in gcd will be GCD.
- To find LCM of the numbers apply the formula for lcm.
- Now, Print the GCD and LCM
Code:
#include<iostream> using namespace std; int main() { int fnum,snum,gcd,lcm; cout<<"Enter first number"; cin>>fnum; cout<<"\nEnter second number"; cin>>snum; //find factors of both numbers for(int i=1;i<=fnum && i<=snum;i++) { if(fnum%i==0 && snum%i==0) gcd=i; } //find lcm of both numbers lcm = fnum*snum/gcd; cout<<"\n GCD of given numbers is:"<<gcd; cout<<"\n LCM of given numbers is:"<<lcm; return 0; }
Output:
Enter first number 10 Enter second number 5 GCD of given numbers is:5 LCM of given numbers is:10
Report Error/ Suggestion | https://www.studymite.com/cpp/examples/gcd-hcf-lcm-program-in-cpp?utm_source=related_posts&utm_medium=related_posts | CC-MAIN-2020-05 | en | refinedweb |
Opened 5 years ago
Closed 5 years ago
Last modified 5 years ago
#2820 closed Bug (Fixed)
WIN(all) handle bug
Description
Strange bug for almost all functions related to the search window.
If the window handle, for example 0x00110758, then the same window be found from 0x00000758 (LoWord)
It generates such bugs as:
WinExists('192.168.0.1 - WINDOW NOT EXISTS') - This CODE return TRUE and finds window handle 0x000000C0, but the original handle 0x003100C0
WinGetHandle ("192.txt - THIS WINDOW NOT REALY EXISTS") - this return not valid handle
#include <WinAPI.au3> $h = WinGetHandle('[CLASS:SciTEWindow]') ConsoleWrite('Handle 1: ' & $h & @CRLF) $hLo = _WinAPI_LoWord($h) ConsoleWrite('Handle 2: ' & WinGetHandle($hLo) & @CRLF)
Here function WinGetHandle return wrong result to.
Bug is not observed on x64 systems.
Autoit 3.3.10.0 - 3.3.x.x
Attachments (0)
Change History (11)
comment:1 Changed 5 years ago by Jon
comment:2 Changed 5 years ago by Jpm
I don't know why you want to use only the low part of an handle
an handle is 32-bit under Autoit-32 and 64-bit under AutoIt-64
WinGetHandle is supposed to work with title/text.
Weird the low part is pointing to the same Windows just use the return vale as a whole
comment:3 Changed 5 years ago by jchd18
comment:4 Changed 5 years ago by anonymous
WinGetHandle is supposed to work with title/text.
OK. But why
WinExists("192.168.0.1 - WINDOW NOT EXISTS")
returns 1? I don't have window with title "192.168.0.1 - WINDOW NOT EXISTS" and can't find window with handle 0x000000C0 (tried to use WinList, _WinAPI_EnumWindows, _WinAPI_EnumChildWindows).
Or "192.168.0.1 - WINDOW NOT EXISTS" is not a string?
comment:5 follow-up: ↓ 6 Changed 5 years ago by BrewManNH
It doesnt't return 1 for me, it returns 0x0000000000000000 with an @error set to 1. I used this code, modified from the help file example, which should be just as valid.
#include <MsgBoxConstants.au3> Example() Func Example() ; Run Notepad ;~ Run("notepad.exe") ; Wait 10 seconds for the Notepad window to appear. ;~ WinWait("[CLASS:Notepad]", "", 10) ; Retrieve the handle of the Notepad window using the classname of Notepad. Local $hWnd = WinGetHandle("[CLASS:Notepad]") If @error Then ConsoleWrite('@@ Debug(' & @ScriptLineNumber & ') : $Error code: ' & @error & @CRLF) ;### Debug Console ConsoleWrite(VarGetType($hWnd) & @CRLF) MsgBox($MB_SYSTEMMODAL, "", "An error occurred when trying to retrieve the window handle of Notepad.") Exit EndIf ConsoleWrite('@@ Debug(' & @ScriptLineNumber & ') : $Error code: ' & @error & @CRLF) ;### Debug Console ; Display the handle of the Notepad window. MsgBox($MB_SYSTEMMODAL, "", $hWnd) ; Close the Notepad window using the handle returned by WinGetHandle. WinClose($hWnd) EndFunc ;==>Example
BTW, using WinExists instead of WinGetHandle returns 0 with an error of 0.
comment:6 in reply to: ↑ 5 Changed 5 years ago by anonymous
It doesnt't return 1 for me, it returns 0x0000000000000000 with an @error set to 1. I used this code, modified from the help file example, which should be just as valid.
...
BTW, using WinExists instead of WinGetHandle returns 0 with an error of 0.
It looks like this bug is only in x86
comment:7 Changed 5 years ago by anonymous
Win7 x86
v 3.3.8.x
MsgBox(0, "", WinExists("192.txt")) ; 0 MsgBox(0, "", WinExists(0x000000C0)) ; 0 MsgBox(0, "", WinExists(HWnd(0x000000C0))) ; 1
v 3.3.10.x +
MsgBox(0, "", WinExists("192.txt")) ; 1 MsgBox(0, "", WinExists(0x000000C0)) ; 1 MsgBox(0, "", WinExists(HWnd(0x000000C0))) ; 1
Which version works correctly?
comment:8 Changed 5 years ago by Jon
Ok, something is wrong there. Let me run it through the debugger.
comment:9 Changed 5 years ago by Jon
It looks like it was a change trancexx made in Dec 2011 that just missed the 3.3.8.0 release and is only now showing up.
The change was what to do when a window match didn't occur. Basically if a window match fails it tries to interpret passed strings as window handles and then tries again. Oddly "192.txt" messes up when it is forced to convert from a string into a handle and ends up evaluating as 192 or C0. And that happens to be a valid handle.
This will possibly cause false matches with any string window titles that start with numbers. I'll have to remove the change or improve the conversion so that these errors don't occur.
comment:10 Changed 5 years ago by Jon
- Milestone set to 3.3.13.14
- Owner set to Jon
- Resolution set to Fixed
- Status changed from new to closed
comment:11 Changed 5 years ago by anonymous
How about
MsgBox(0, "", HWnd("192.txt")) ; 0x000000C0 MsgBox(0, "", HWnd("log.txt")) ; 0x00000000
Guidelines for posting comments:
- You cannot re-open a ticket but you may still leave a comment if you have additional information to add.
- In-depth discussions should take place on the forum.
For more information see the full version of the ticket guidelines here.
Seems to be some weird quirk of the Windows API. Outside of autoit I can get a handle to a window
0x00960608 and yet IsWindow(0x608) seems to point to the same window. I can even get window text from the same windows using both handles. | https://www.autoitscript.com/trac/autoit/ticket/2820 | CC-MAIN-2020-05 | en | refinedweb |
This file defines basic types and constants for utf.h to be platform-independent. umachine.h and utf.h are included into utypes.h to provide all the general definitions for ICU. All of these definitions used to be in utypes.h before the UTF-handling macros made this unmaintainable.
Definition in file umachine.h.
#include "unicode/ptypes.h"
#include <stddef.h>
#include "unicode/urename.h"
Go to the source code of this file. | http://icu.sourcearchive.com/documentation/4.4~rc1-1/umachine_8h.html | CC-MAIN-2017-39 | en | refinedweb |
If anybody's telling you that BizTalk Server 2004 (BTS) is the best Microsoft product to come along in quite a spell, believe it. If they tell you that it's the closest thing to ERP that Microsoft has yet accomplished, believe it. If they tell you it's easy to use and sets the bar higher for distributed application configuration, you can safely laugh in their faces.
There's a lot of irony in this, because the third BTS is by far the best, is deeply integrated into Visual Studio .NET (where it ought to be), and of course takes the very graphically friendly Orchestration feature to a new level of versatility. The whole point of these features is ease-of-use. But there's almost no developers' documentation yet available, so you're pretty much on your own in figuring out the best way to do things; and what little documentation exists isn't nearly detailed enough. The error handling also leaves much to be desired: tracking down your mistakes can be "pull-your-hair-out" frustrating.
Below, we'll cover several bases: we'll track down the places where your errors are noted; we'll look at some of the easy-to-make mistakes; and consider some good practices to make interface development easier.
Where the errors are
When you've built an interface in BTS 2004 and start testing, you could find yourself submitting test messages to the server and having your enabled solution swallowing them whole. Nothing seems to go wrong, but your message doesn't end up where it was supposed to. Here's where to look:
Health and Activity Tracking
This powerful utility (Start | All Programs | Microsoft BizTalk Server 2004 | Health and Activity Tracking) is your first line of inquiry when a message vanishes. Select Operations | Messages to track down your missing message (Figure A). There are several options for refining a query to list messages, but leaving the defaults in place is usually fine. Click on Run Query, and you'll get a list of messages. Look for your missing message, which should have a "suspended" status. You can get more detail by selecting your message in the list, then right-clicking—choose either Orchestration Debugger or Message Flow.
Event Viewer
If you've sent yourself a test message that is intended for a receiving schema that will define it, or translate it, or reformat it, and the incoming message is a little off in some way, the schema's going to show a parsing error or application-internal error (this is common in message translation such as EDI or HL7). You'll get no outward sign of it, just a vanished message.
Go to Start | All Programs | BizTalk Server 2004 | BizTalk Server Administration to bring up the BTS Administration Console. Then open the Event Viewer (below Console Root) and click on Application. You'll get an error log that will tell you what happened in the parsing of the message (Figure B). Click on the error to get the specific parsing error message. Warning: a parsing error will suspend processing of your message upon occurrence. Therefore, your message could contain many formatting mistakes or incorrect fields and you must discover them one at a time via this technique (i.e., find an error, fix the data, run it again, find an error, fix the date ... and so on).
Not for your to-do list
Here are some things not to do when developing and testing a messaging interface. There are many more, but these are a few that BizTalk won't give you any hints about.
Deploying the same schema twice
If you're developing more than one interface at once, it doesn't matter that you're using a different solution to store each interface's project components; if you deploy the same schema within different assemblies, you're going to run afoul with the fact that the namespace for both instances is the same. (Example: Suppose you're deploying two HL7 ADT interfaces, each for a different ADT document. They'll both probably use the same message header parsing schema, since they'll have those segments and elements in common.) Try to deploy two header projects as assemblies, containing the same schema, and neither will work! Solution: change the namespace—or, to your surprise, you may find that one deployed assembly will do the job for both solutions.
Using the wrong pipeline
When setting up Send and Receive ports for message transport, you'll have various pipelines available to you, configurable within the ports (See Figure C). These pipelines are the circulatory system for your messages, and using the wrong one can cause BTS to fail to process the incoming message (which pipeline to use for particular steps in your particular interface or process is too detailed a question to address here). In short, if a message didn't show up where it was supposed to, and you aren't yet experienced in using BTS 2004, experiment with changing the pipeline. If this gives you a good result, you'll rapidly learn to match the correct pipeline to the correct step in your processes.
Going from one message format to another in an orchestration
This kind of exercise is fraught with peril, but is exactly the kind of capability you really need for true distributed application messaging. There are many messaging transactions that BTS 2004 does painlessly—inbound formatted document to XML document, XML document to an adapter, and then into a SQL Server database, etc.—but jumping from an interim XML document to, say, a set of objects in a .DLL defining a database record, with an associated method for inserting a new record into a table—that's tricky stuff, and BizTalk will be very fussy about accommodating you.
You need to create an orchestration to do something that complex, for a start. You'll do a Construct Message for the receiving object(s), which you must make XML-serializable (if you want to map from an XML document). Moreover, if the method you're using to add the elements to a database table is in a different .DLL, that one will have to be XML-serializable, too, even if it contains no objects with properties and even if it compiles cleanly in other contexts. Do your element-to-element mapping in a Transform expression (use xPath to pull data out of your XML document), and use an Expression to execute the Add.
Make it easier on yourself
There are several tricks you use, in development, to help make this whole process easier. Here are a few:
Set up multiple Sends with file capture
When you're putting together messaging in BizTalk Explorer (apart from any orchestrations), you can attach multiple Send ports (See Figure D) to a specific Party ("Party" is how a pipeline identifies an external messaging source).
One of these Send ports will be the next step in your business process, but you can create one or more for your own use. This not only gives you a running breakpoint of sorts, but allows you to examine the contents of a message at different points along its journey. To create such a Send port, right-click on Send Ports in BizTalk Explorer and select Add Send Port (let it be a Static One-Way Port). Set it up with Transport Type File and a directory/file address. A copy of your message (or acknowledgment, or whatever it is you're processing) will be deposited in that file/directory (you can also use this process within an orchestration, if it's helpful, though the set-up of the File transport will be done through a wizard when you create the port).
Isolate the orchestration from pipeline activity
If possible, do your receiving, qualifying, and acknowledgment of messages in BizTalk Explorer, and do mapping and database work in orchestrations. Why? First, your orchestrations will be all the more complex if you do everything there. Second, the graphic display of those portions of the process is really unnecessary, since the receiving, qualifying, and acknowledgment steps are part of any messaging transaction with an outside party. Third, debugging is simpler: the techniques above will help you debug the pitch-and-catch with your messaging partner (call this the "network" interface), as well as permitting you to enable multiple partners in a collective process with less confusion (while confining the business logic in the orchestration to exactly that, the business logic).
One of the powerful aspects of BTS 2004 is the pipelines; let them do as much work as possible and keep the front-end messaging work distinct from the crunch of mapping and database interface. (Note that this concept should not apply to communication with other points in your internal application system. Include them in your orchestrations or your solution design will be incomprehensible to other developers.) It may not be what BTS 2004's creators intended, but in the long run, it's tidier and easier to debug.. | http://www.techrepublic.com/article/debugging-your-biztalk-server-2004-interface-solutions/5497814/ | CC-MAIN-2017-39 | en | refinedweb |
Generic Types in Visual Basic.
An analogy is a screwdriver set with removable heads. You inspect the screw you need to turn and select the correct head for that screw (slotted, crossed, starred). Once you insert the correct head in the screwdriver handle, you perform the exact same function with the screwdriver, namely turning the screw.
When you define a generic type, you parameterize it with one or more data types. This allows the using code to tailor the data types to its requirements. Your code can declare several different programming elements from the generic element, each one acting on a different set of data types. But the declared elements all perform the identical logic, no matter what data types they are using.
For example, you might want to create and use a queue class that operates on a specific data type such as String. You can declare such a class from System.Collections.Generic.Queue, as the following example shows.
You can now use stringQ to work exclusively with String values. Because stringQ is specific for String instead of being generalized for Object values, you do not have late binding or type conversion. This saves execution time and reduces run-time errors.
For more information on using a generic type, see How to: Use a Generic Class.
Example of a Generic Class
The following example shows a skeleton definition of a generic class.
In the preceding skeleton, t is a type parameter, that is, a placeholder for a data type that you supply when you declare the class. Elsewhere in your code, you can declare various versions of classHolder by supplying various data types for t. The following example shows two such declarations.
The preceding statements declare constructed classes, in which a specific type replaces the type parameter. This replacement is propagated throughout the code within the constructed class. The following example shows what the processNewItem procedure looks like in integerClass.
For a more complete example, see How to: Define a Class That Can Provide Identical Functionality on Different Data Types.
Eligible Programming Elements
You can define and use generic classes, structures, interfaces, procedures, and delegates. Note that the .NET Framework defines several generic classes, structures, and interfaces that represent commonly used generic elements. The System.Collections.Generic namespace provides dictionaries, lists, queues, and stacks. Before defining your own generic element, see if it is already available in System.Collections.Generic.
Procedures are not types, but you can define and use generic procedures. See Generic Procedures in Visual Basic.
Advantages of Generic Types
A generic type serves as a basis for declaring several different programming elements, each of which operates on a specific data type. The alternatives to a generic type are:
A single type operating on the Object data type.
A set of type-specific versions of the type, each version individually coded and operating on one specific data type such as String, Integer, or a user-defined type such as customer.
A generic type has the following advantages over these alternatives:
Type Safety. Generic types enforce compile-time type checking. Types based on Object accept any data type, and you must write code to check whether an input data type is acceptable. With generic types, the compiler can catch type mismatches before run time.
Performance. Generic types do not have to box and unbox data, because each one is specialized for one data type. Operations based on Object must box input data types to convert them to Object and unbox data destined for output. Boxing and unboxing reduce performance.
Types based on Object are also late-bound, which means that accessing their members requires extra code at run time. This also reduces performance.
Code Consolidation. The code in a generic type has to be defined only once. A set of type-specific versions of a type must replicate the same code in each version, with the only difference being the specific data type for that version. With generic types, the type-specific versions are all generated from the original generic type.
Code Reuse. Code that does not depend on a particular data type can be reused with various data types if it is generic. You can often reuse it even with a data type that you did not originally predict.
IDE Support. When you use a constructed type declared from a generic type, the integrated development environment (IDE) can give you more support while you are developing your code. For example, IntelliSense™ can show you the type-specific options for an argument to a constructor or method.
Generic Algorithms. Abstract algorithms that are type-independent are good candidates for generic types. For example, a generic procedure that sorts items using the IComparable interface can be used with any data type that implements IComparable.
Constraints
Although the code in a generic type definition should be as type-independent as possible, you might need to require a certain capability of any data type supplied to your generic type. For example, if you want to compare two items for the purpose of sorting or collating, their data type must implement the IComparable interface. You can enforce this requirement by adding a constraint to the type parameter.
Example of a Constraint
The following example shows a skeleton definition of a class with a constraint that requires the type argument to implement IComparable.
If subsequent code attempts to construct a class from itemManager supplying a type that does not implement IComparable, the compiler signals an error.
Types of Constraints
Your constraint can specify the following requirements in any combination:
The type argument must implement one or more interfaces
The type argument must be of the type of, or inherit from, at most one class
The type argument must expose a parameterless constructor accessible to the code that creates objects from it
The type argument must be a reference type, or it must be a value type
If you need to impose more than one requirement, you use a comma-separated constraint list inside braces ({ }). To require an accessible constructor, you include the New (Visual Basic) keyword in the list. To require a reference type, you include the Class (Visual Basic) keyword; to require a value type, you include the Structure (Visual Basic) keyword.
For more information on constraints, see Type List.
Example of Multiple Constraints
The following example shows a skeleton definition of a generic class with a constraint list on the type parameter. In the code that creates an instance of this class, the type argument must implement both the IComparable and IDisposable interfaces, be a reference type, and expose an accessible parameterless constructor.
Important Terms
Generic types introduce and use the following terms:
Generic Type. A definition of a class, structure, interface, procedure, or delegate for which you supply at least one data type when you declare it.
Type Parameter. In a generic type definition, a placeholder for a data type you supply when you declare the type.
Type Argument. A specific data type that replaces a type parameter when you declare a constructed type from a generic type.
Constraint. A condition on a type parameter that restricts the type argument you can supply for it. A constraint can require that the type argument must implement a particular interface, be or inherit from a particular class, have an accessible parameterless constructor, or be a reference type or a value type. You can combine these constraints, but you can specify at most one class.
Constructed Type. A class, structure, interface, procedure, or delegate declared from a generic type by supplying type arguments for its type parameters. | https://msdn.microsoft.com/en-us/library/w256ka79(v=vs.85) | CC-MAIN-2017-39 | en | refinedweb |
ACO-Pants 0.4.0
A Python3 implementation of the ACO Meta-Heuristic
A Python3 implementation of the Ant Colony Optimization Meta-Heuristic
Overview
Pants provides you with the ability to quickly determine how to visit a collection of interconnected nodes such that the work done is minimized. Nodes can be any arbitrary collection of data while the edges represent the amount of “work” required to travel between two nodes. Thus, Pants is a tool for solving traveling salesman problems.
The world is built from a list of its usefulness in finding shorter solutions. The ant that traveled the least distance is considered to be the local best solution. If the local solution has a shorter distance than the best from any previous iteration, it then becomes the global best solution. The elite ant(s) then deposit their pheromone along the path of the global best solution to strengthen it further, and the process repeats.
You can read more about Ant Colony Optimization on Wikipedia.
Installation
Installation via pip
$ pip3 install ACO-Pants
Useage
Using Pants is simple. The example here uses Euclidean distance between 2D nodes with (x, y) coordinates, but there are no real requirements for node data of any sort.
Import Pants (along with any other packages you’ll need).
import pants import math import random
Create your data points; these become the nodes. Here we create some random 2D points. The only requirement for a node is that it is distinguishable from all of the other nodes.
nodes = [] for _ in range(20): x = random.uniform(-10, 10) y = random.uniform(-10, 10) nodes.append((x, y))
Define your length function. This function must accept two nodes and return the amonut of “work” between them. In this case, Euclidean distance works well.
def euclidean(a, b): return math.sqrt(pow(a[1] - b[1], 2) + pow(a[0] - b[0], 2))
Create the World from the nodes and the length function.
world = pants.World(nodes, euclidean)
Create a Solver for the World.
solver = pants.Solver(world)
Solve the World with the Solver. Two methods are provided for finding solutions: solve() and solutions(). The former returns the best solution found, whereas the latter returns each solution found if it is the best thus far.
solution = solver.solve() # or solutions = solver.solutions()
Inspect the solution(s).
print(solution.distance) print(solution.tour) # Nodes visited in order print(solution.path) # Edges taken in order # or best = float("inf") for solution in solutions: assert solution.distance < best best = solution.distance
Run the Demo
Included is a 33 “city” demo that can be run from the command line. Currently it accepts a single integer command line parameter to override the default iteration limit of 100.
Solver settings: limit=100 rho=0.8, Q=1 alpha=1, beta=3 elite=0.5 Time Elapsed Distance -------------------------------------------------- 0:00:00.017490 0.7981182992833705 0:00:00.034784 0.738147755518648 0:00:00.069041 0.694362159048816 0:00:00.276027 0.6818083968312925 0:00:00.379039 0.6669398280432167 0:00:00.465924 0.6463548571712562 0:00:00.585685 0.6416519698864324 0:00:01.563389 0.6349308484274142 -------------------------------------------------- Best solution: 0 = (34.02115, -84.267249) 9 = (34.048194, -84.262126) 6 = (34.044915, -84.255772) 22 = (34.061518, -84.243566) 23 = (34.062461, -84.240155) 18 = (34.060461, -84.237402) 17 = (34.060164, -84.242514) 12 = (34.04951, -84.226327) 11 = (34.048679, -84.224917) 8 = (34.046006, -84.225258) 7 = (34.045483, -84.221723) 13 = (34.051529, -84.218865) 14 = (34.055487, -84.217882) 16 = (34.059412, -84.216757) 25 = (34.066471, -84.217717) 24 = (34.064489, -84.22506) 20 = (34.063814, -84.225499) 10 = (34.048312, -84.208885) 15 = (34.056326, -84.20058) 5 = (34.024302, -84.16382) 32 = (34.118162, -84.163304) 31 = (34.116852, -84.163971) 30 = (34.109645, -84.177031) 29 = (34.10584, -84.21667) 28 = (34.071628, -84.265784) 27 = (34.068647, -84.283569) 26 = (34.068455, -84.283782) 19 = (34.061281, -84.334798) 21 = (34.061468, -84.33483) 2 = (34.022585, -84.36215) 3 = (34.022718, -84.361903) 4 = (34.023101, -84.36298) 1 = (34.021342, -84.363437) Solution length: 0.6349308484274142 Found at 0:00:01.563389 out of 0:00:01.698616 seconds. $
Known Bugs
None of which I am currently aware. Please let me know if you find otherwise.
Troubleshooting
Credits
- Robert Grant rhgrant10@gmail.com
License
GPL
- Author: Robert Grant
- License: LICENSE.txt
- Package Index Owner: rhgrant10
- DOAP record: ACO-Pants-0.4.0.xml | https://pypi.python.org/pypi/ACO-Pants/0.4.0 | CC-MAIN-2017-39 | en | refinedweb |
Hello, I'm working on an assignment due next week and I'm having quite a bit of trouble.
The program is to ask the user for input of a double until they desire to stop. That part, I imagine will be in the Test file. The master file is to store the inputs in an array list, count how many instances of doubles there were, and then output each variable, add them all the together, find the average, the largest, and the smallest value. I know this is probably really easy, but I haven't been in Java for 2 years so I'm struggling and I need to know what I'm doing wrong if, that is I'm doing anything right at all lol.
Example of desired output:
Please enter a double value: 3.5
Another?: Y
Please enter a double value: 4.3
Another?: Y
Please enter a double value: 7.3
Another?: N
Value 1: 3.50000
Value 2: 4.30000
Value 3: 7.30000
Sum: 15.100000
Average: 5.033333
Largest: 7.300000
Smallest: 3.500000
List cleared.
---My code so far---
public class ArrayList { private double sum, average, large, small; public ArrayList() { final int MAX_ARRAY = 15; //hold up to 15 values double[] list = new double[ MAX_ARRAY ]; } public Data() { for(int count = 1; count < list.length; count++) } public void process(list) { double sum, average, large, small; for(int count = 1; count < list.length; count++) sum += list[ count ]; for(int count = 1; count < list.length; count++) average = list[ count ] / count; for(int count = 0; count < list.lenght; count++) if(large < list[ count ];) { large = list[ count ]; large = count; } for(int count = 0; count < list.length; count++) if(small > list[ count ];) { small = list[ count ]; small = count; } } public display() { return list; return sum; return average; return large; return small; } public clearData() { list[] = 0; count = 0; } }
Any help is appreciated, thanks.
Edited by peter_budo: Keep It Clear - Do wrap your programming code blocks within [code] ... [/code] tags | https://www.daniweb.com/programming/software-development/threads/308976/array-lists-please-help | CC-MAIN-2017-39 | en | refinedweb |
<<
oising blender animations with opencv and python
Nikos Priniotakis posted a teaser of a denoising script for blender animations a few months ago, that shows really impressive improvements on a noisy cycles animation (see his original tweet here) I sent some twitter messages back and forth with him and he sent me the links to the opencv denoise function he used for the demo. So I finaly found the time to wirte a short python script that uses pyopencv to denoise all the pictures in a folder and copies it to another folder.
The script I used to denoise my animation is here
import cv2 import os import numpy as np from matplotlib import pyplot as plt files = os.listdir("metabubbles/") for f in files: if f.endswith('.png') and f.startswith('0'): print f img = cv2.imread("metabubbles/%s" %f); dst = cv2.fastNlMeansDenoisingColored(img) cv2.imwrite('res/%s' %f, dst);
The denoising process is no magical pixiedust that can be sprinkled on your noisy cycles-renders to fix everything but when used correcly it can improve preview renders a lot, but if the script is used on an image sequence that is too noisy it introduced a whole lot of new artifacts. I used the script on an amiation I rendered last year. Here is how the original video compares to the denoised version.
read more ...
AN experiment - curling curves
I created a bunch of curves curling around a bezier path using the Animation Nodes Addon for Blender
you can download the blend file here
read more ...
AN experiment - Cube Snake
I transformed a grid of cubes into a wiggly line snake using the vector animation node from the animation nodes addon in blender
you can download the blend file here
read more ...
| http://www.local-guru.net/blog/tag/cycles | CC-MAIN-2017-39 | en | refinedweb |
I'm trying to write a python code that counts the frequency of each word in a text file. The code should display one line per unique word. The code I wrote is displaying duplicate words.
import string
text = open('mary.txt','r')
textr = text.read()
for punc in string.punctuation:
textr = textr.replace(punc, "")
wordlist = textr.split()
for word in wordlist:
count = wordlist.count(word)
print word,':',count
are : 1
around : 1
as : 1
at : 2
at : 2
away : 1
back : 1
be : 2
be : 2
because : 1
below : 1
between : 1
both : 1
but : 1
by : 2
by : 2
at : 2
be : 2
by : 2
The issue with your code is that you're creating a list of all the words and then looping over them. You want to create some sort of data structure that only stores unique words. A
dict is a good way to do this, but it turns out there's a specialized collection in Python called a
Counter that's built for exactly this purpose.
Give this a try (untested):
from collections import Counter import string text = open('mary.txt','r') textr = text.read() for punc in string.punctuation: textr = textr.replace(punc, "") counts = Counter(textr.split()) for word, count in counts.items(): print word,':',count | https://codedump.io/share/0sJOSCvpBCSK/1/python---display-one-line-for-each-unique-word | CC-MAIN-2017-39 | en | refinedweb |
I've read through several curly braces and braces differences in stackoverflow, such as What is the formal difference in Scala between braces and parentheses, and when should they be used?, but I didn't find the answer for my following question
object Test {
def main(args: Array[String]) {
val m = Map("foo" -> 3, "bar" -> 4)
val m2 = m.map(x => {
val y = x._2 + 1
"(" + y.toString + ")"
})
// The following DOES NOT work
// m.map(x =>
// val y = x._2 + 1
// "(" + y.toString + ")"
// )
println(m2)
// The following works
// If you explain {} as a block, and inside the block is a function
// m.map will take a function, how does this function take 2 lines?
val m3 = m.map { x =>
val y = x._2 + 2 // this line
"(" + y.toString + ")" // and this line they both belong to the same function
}
println(m3)
}
}
The answer is very simple, when you use something like:
...map(x => x + 1)
You can only have one expression. So, something like:
scala> List(1,2).map(x => val y = x + 1; y) <console>:1: error: illegal start of simple expression List(1,2).map(x => val y = x + 1; y) ...
Simply doesn't work. Now, let's contrast this with:
scala> List(1,2).map{x => val y = x + 1; y} // or scala> List(1,2).map(x => { val y = x + 1; y }) res4: List[Int] = List(2, 3)
And going even a little further:
scala> 1 + 3 + 4 res8: Int = 8 scala> {val y = 1 + 3; y} + 4 res9: Int = 8
Btw, the last
y never left the scope in the
{},
scala> y <console>:18: error: not found: value y | https://codedump.io/share/tD0zDxRHIS0j/1/strange-behaviour-in-curly-braces-vs-braces-in-scala | CC-MAIN-2017-39 | en | refinedweb |
ADO.Net
This Book gives a good idea on Ado.net and Creating a Custom .NET Data Provider
ADO.NET is a large set of .NET classes that enable us to retrieve and manipulate data, and update data
sources, in very many different ways. As an integral part of the .NET framework, it shares many of its features:
features such as multi-language support, garbage collection, just-in-time compilation, object-oriented design,
and dynamic caching, and is far more than an upgrade of previous versions of ADO.
This book provides a thorough investigation of the ADO.NET classes (those included in the System.Data,
System.Data.Common, System.Data.OleDb, System.Data.SqlClient, and System.Data.Odbc
namespaces). We adopt a practical, solutions-oriented approach, looking at how to effectively utilize the
various components of ADO.NET within data-centric application development.
Download
Thanks so much for this good article.
Regards,
Vishnu | http://www.dotnetspider.com/resources/41330-ADO-Net.aspx | CC-MAIN-2017-39 | en | refinedweb |
Posted 11 Jul 2015
Link to this post
Posted 11 Jul 2015
in reply to
Xiaoming
Link to this post
Posted 16 Jul 2015
Link to this post
RadDataForm form =
new
RadDataForm(context);
form.setEntity(
Person());
form.setCommitMode(CommitMode.MANUAL);
layoutRoot.addView(form);
Posted 29 Sep 2015
Link to this post
Hi Victor,
RadDataForm dataForm = new RadDataForm(this);
dataForm.Entity = new Product();
It is throwing an error stating "Error5Cannot implicitly convert type 'TelerikListView.Product' to 'Com.Telerik.Widget.Dataform.Engine.IEntity'. An explicit conversion exists (are you missing a cast?)
After that i tried this
dataForm.Entity = (Com.Telerik.Widget.Dataform.Engine.IEntity)new Product();
it is throwing an exception
"Cannot cast from source type to destination type."
please rectify this.
And also i downloaded UI for Xamarin and i added an samples solution which was given by telerik to my solution explorer and i executed it is throwing an error
Error2The type or namespace name 'EntityProperty' could not be found (are you missing a using directive or an assembly reference?)C:\Program Files (x86)\Telerik\UI for Xamarin Q2 2015\Examples\Android\Fragments\DataForm\CustomEditor.cs1746Samples
Posted 01 Oct 2015
Link to this post | http://www.telerik.com/forums/how-to-use-raddataform | CC-MAIN-2017-39 | en | refinedweb |
- was not declared in this scope
- no such a file or directory
- undefined reference to
- error while loading shared library, or cannot open shared object file
The thing is actually very simple: the compiler is not told to find the right file at the right place. The discussion below is based on Linux using GCC for C and C++.
Problem 1: Usually, this problem has nothing to do with your system settings or compiler configuration. It's a problem within your source code. On the top level, the function is not defined. The reason can vary.
was not declared in this scope
One reason is that the file declaring (or defining, if the declaration is done together with definition) the functions is not specified, e.g., the header file is not included. This can be easily fixed by including such a file in your source code.
Another reason is that you used the function beyond its class or namespace.
Problem 2:
No such a file or directory for a header file
If a header file is not found, the preprocessor or the preprocessor part of the compiler, will raise the error
No such a file or directory. Depends on where the header files are supposed to be at, there are different solutions.
Most C/C++ compiles treat the quote form of
#includeand angle-bracket form of
#includedifferently. Simply running
$ echo | gcc -E -Wp,-v -will tell you the difference:
#include "..." search starts here: #include <...> search starts here: /usr/lib/gcc/x86_64-linux-gnu/5/include /usr/local/include /usr/lib/gcc/x86_64-linux-gnu/5/include-fixed /usr/include/x86_64-linux-gnu /usr/include
Those directories listed under the section
#include <...> search starts here:are referred to as standard system directories [1]. They are from the convention of Linux systems' file hierarchy.
If you want to expand the search for angle-bracket form
#include<...>, use the
-Ioption. The preprocessor will search header files in directories followed by
-Ioption before searching system directories. Hence, a header file in a
-I-option directory will override its counterpart in system directories and be used to compile your code.
For quote form
#include"...", the preprocessor will first search in the current directory that the source file is at and then all the directories specified after the
-iquoteoption.
The example below will show you how
-Iand
-iquoteappend search paths:
l$ echo | gcc -iquote/home/forrest/Downloads -I/home/forrest/Dropbox -E -Wp,-v - #include "..." search starts here: /home/forrest/Downloads #include <...> search starts here: /home/forrest/Dropbox /usr/lib/gcc/x86_64-linux-gnu/5/include /usr/local/include /usr/lib/gcc/x86_64-linux-gnu/5/include-fixed /usr/include/x86_64-linux-gnu /usr/include
So, if you have the following macro in your code called
mycode.c:
#include < xyz.h> #include "abc/xyz.h"and your GCC command is like this
gcc mycode.c -I/opq/ -iquote/uvwthen the GCC's preprocessor (again, it's part of the GCC) will look for header files in the following full paths (on Ubuntu Linux 16.04) in addition to the search in system directories:
/opq/xyz.h /uvw/abc/xyz.hAccording to GCC document[1], it searches for header files in quote form
#include"..."before searching for angle-bracket form
#include"...".
Further specifications can be achieved using two other options
-isystemand
-idirafter. They are all well documented in GCC document[2].
Problem 3: You probably see an error like this
Undefined reference to
face.cpp:(.text._ZN2cv3Mat7releaseEv[_ZN2cv3Mat7releaseEv]+0x4b): undefined reference to `cv::Mat::deallocate()' collect2: error: ld returned 1 exit statusThis is a link-time error when the linker
ld, which is called automatically when you use GCC, cannot find the binary library that contains at least one function called by your code.
To fix, first tell the compiler (actually it's linker part) the path containing library files using the
-Loption [2] and then library file names using the
-loption [4]. If we denote the values after
-land
-Loptions as X and Y respectively, the compiler will search for every file called
libY.sounder every directory X. That's why on (almost) all Linux systems, a shared library file begins with
liband ends with the appendix
.so, such as
libopencv_core.so.
For example, the command
$ g++ face.cpp -L/opencv/lib -lopencv_core -lopencv_videoiowill ask the linker to find the following binary library files:
/opencv/lib/libopencv_core.so /opencv/lib/libopencv_videoio.so
Note that you may not be able to use some shell directives, such as
~after the
-Loption.
In most cases, you do not need to use the
-Land
-loptions because the compiler (again, its linker part) automatically searches in a set of system directories. When you have to, some tools can help you, such as pkg-config. I will write another blog post.
Problem 4:
error while loading shared libraries or
cannot open shared object file
Now, your program has been successfully compiled. When running it, you probably see an error like this:
./a.out: error while loading shared libraries: libopencv_core.so.3.3: cannot open shared object file: No such file or directoryThis problem is from the loader. Unlike the link-time error above, this problem is a run-time error. The shared library filenames are usually hardcoded into your binary program. To fix it, simply tell the loader where to find the specified shared library files. There are multiple ways.
Solution 1: Most Linux systems maintain an environment variable
LD_LIBRARY_PATHand the loader will search for binary library files requested by a program in all directories listed in
LD_LIBRARY_PATH, on top of a set of standard system directories, such as
/libor
/usr/lib. To set
LD_LIBRARY_PATHis like how you set any other environment variables, e.g., export on the Shell or edit and source
~/.bashrc.
Solution 2: Most Linux systems also maintains a shared library cache by a program called
ld-config. It remembers where to find the default location of each shared library file. Simply run
$ ldconfig -pit will tell you the mapping from shared library files to absolute paths in the system, e.g.,
2591 libs found in cache `/etc/ld.so.cache' libzzipwrap-0.so.13 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libzzipwrap-0.so.13 libzzipmmapped-0.so.13 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libzzipmmapped-0.so.13 libzzipfseeko-0.so.13 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libzzipfseeko-0.so.13
To change the mapping, simply edit
/etc/ld.so.confand then run
$ sudo ldconfigto rebuild the cache.
Solution 3: The two changes above are applied system-wide. A more flexible way is to specify the location of the shared libraries when linking your code using the
-Wl,-rpathoption, e.g.,
g++ face.cpp -idirafter ~/Downloads/opencv_TBB_install/include -L/home/forrest/Downloads/opencv_TBB_install/lib -lopencv_core -lopencv_objdetect -lopencv_highgui -lopencv_imgproc -lopencv_videoio -Wl,-rpath=/home/forrest/Downloads/opencv_TBB_install/lib
The disadvantage of Solution 3 is that if you change the location of the shared library, then you will run into error again.
References:
[1] Filesystem Hierarchy,
[2] GCC options for directories
[3] Shared Library How-To,
[4] GCC options for linking, | http://forrestbao.blogspot.com/2017/08/solution-to-most-not-found-or-undefined.html | CC-MAIN-2017-39 | en | refinedweb |
Matplotlib: shaded regions¶
Use the fill function to make shaded regions of any color tint. Here is an example.
In [1]:
from pylab import * x = arange(10) y = x # Plot junk and then a filled region plot(x, y) # Make a blue box that is somewhat see-through # and has a red border. # WARNING: alpha doesn't work in postscript output.... fill([3,4,4,3], [2,2,4,4], 'b', alpha=0.2, edgecolor='r')
Out[1]:
[<matplotlib.patches.Polygon at 0x7f7a19aac890>]
Section author: jesrl
Attachments | http://scipy-cookbook.readthedocs.io/items/Matplotlib_ShadedRegions.html | CC-MAIN-2017-39 | en | refinedweb |
I was exploring Twitter APIs and thought if I can easily add Twitter widgets to Fiori apps. This especially made sense for Marketing and Campaign Fiori apps. I tried it in jsbin and found it to be very simple and wanted to share that experience.
Scenario: Consider you are running a marketing campaign and wanted to keep a tab on the twitter stream for a specific search-term.
Step1. Generate Code.
First thing to do is to create a widget in Twitter site here. Here you need to provide your search term and “Create Widget”.
This will create the code for your widget. For the above Search Query it looked like this.
As you can see first line is a HTML tag and second and third line here contain javascript code.
Step 2. Adding the generated Code to our Fiori app.
XML views can be easily enhanced with HTML without any need to encapsulate the code.
Add the first line above where ever you want to display the widget.
<mvc:View xmlns: <Page id="page1" title="Products by Category" enableScrolling="false"> <content> <html:a#drilling Tweets</html:a> </content> </Page> </mvc:View>
Note that I have added ‘html’ namespace to the ‘a’ tag in the original code.
Add the javascript code (2nd and 3rd line in this case, without script tag) in the generated code to the “onInit” method of the controller.
That is it. You are done.
I have not yet enhanced any Fiori app with this, but got it done in a JS Bin. You can check it here.
JS Bin – Collaborative JavaScript Debugging
Hope it was useful! | https://blogs.sap.com/2015/12/06/enhance-your-fori-app-with-a-twitter-widget/ | CC-MAIN-2017-39 | en | refinedweb |
:.
In toolbox.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Dear Josh,
This article is great, helps me a lot. I've developed a project which also needs to resolve the issues mentioned in your article: to scroll a TreeViewItem into the viewable area of the TreeView control when its associated ViewModel object selects it. In my project,I tried to introduce your Attached Behavior. but unfortunately, my debugging did not come out, it was always run into "type not found" error. Suspicion of possible other causes an error. I decided to make a simple project, and see if I can properly introduce your method, but there was still the same error.
Here is what I did a simple test project:
1 : Create a wpf project, add a tree view control in the main window, add a reference of UIAutomationProvider Assembly in the project.
2 : Add TreeViewItemBehavior.cs class file in my project, change the namespace of this class as AttachedBehaviorTest.
3 :According to your method, add xmlns: These two labels. When I wrote these, each element is normal prompted.
After I've done above, the same error has occurred again!
Dear Josh, would you kind enough to take some of your precious time and have a look at what the problem is.
Thank you very much.
Your sincerely
He Lingyong, Beijing, China
treeView.SelectedNodeChanged += (o, e) => treeView.SelectedNode.EnsureVisible();
NickB wrote:Now I feel queasy exposing the model as a public member, since I'm not very keen on a view getting the slightest chance of using it.
NickB wrote:Am I right in thinking I need to look closer at the messenger object?
private void OnPoseLoaded(object sender, RoutedEventArgs e)
{
DockPanel panel = sender as DockPanel;
if (panel != null)
panel.BringIntoView();
}
<br />
void searchTextBox_KeyDown(object sender, KeyEventArgs e)<br />
{<br />
if (e.Key == Key.Enter)<br />
_familyTree.SearchCommand.Execute(null);<br />
} <br />
local:MyBehaviors.ExecuteCommandOnEnterKey="{Binding Path=SearchCommand}"
<Style x:
<Setter Property="vm:TreeViewItemBehavior.IsBroughtIntoViewWhenSelected" Value="True" />
...
<TreeViewItem Name="tviOpHist" Header="Operational Data"
vm:TreeViewItemBehavior.IsBroughtIntoViewWhenSelected="True"
...
<br />
Public NotInheritable Class TreeViewItemBehavior<br />
Inherits DependencyObject<br />
<br />
Private Shared propChangeCallback As PropertyChangedCallback = _<br />
New PropertyChangedCallback(AddressOf OnIsBroughtIntoViewWhenSelectedChanged)<br />
<br />
Public Shared ReadOnly IsBroughtIntoViewWhenSelectedProperty As DependencyProperty = _<br />
DependencyProperty.RegisterAttached("IsBroughtIntoViewWhenSelected", GetType(String), GetType(TreeViewItem), _<br />
New FrameworkPropertyMetadata("False", propChangeCallback))<br />
<br />
Shared Sub New()<br />
Debug.Print("TreeViewItemBehavior Shared Sub New called")<br />
Debug.Print("propChangeCallback is {0}", propChangeCallback.ToString)<br />
End Sub<br />
<br />
<br />
Public Shared Function GetIsBroughtIntoViewWhenSelected(ByVal tvi As TreeViewItem) As String<br />
Return CStr(tvi.GetValue(IsBroughtIntoViewWhenSelectedProperty))<br />
End Function<br />
<br />
<br />
Public Shared Sub SetIsBroughtIntoViewWhenSelected(ByVal tvi As TreeViewItem, ByVal value As String)<br />
tvi.SetValue(IsBroughtIntoViewWhenSelectedProperty, value)<br />
End Sub<br />
<br />
<br />
Public Shared Property IsBroughtIntoViewWhenSelected(ByVal tvi As TreeViewItem) As String<br />
Get<br />
Return GetIsBroughtIntoViewWhenSelected(tvi)<br />
End Get<br />
Set(ByVal value As String)<br />
SetIsBroughtIntoViewWhenSelected(tvi, value)<br />
End Set<br />
End Property<br />
<br />
<br />
Public Shared Sub OnIsBroughtIntoViewWhenSelectedChanged(ByVal depObj As DependencyObject, _<br />
ByVal e As DependencyPropertyChangedEventArgs)<br />
<br />
Dim tvi As TreeViewItem<br />
tvi = TryCast(depObj, TreeViewItem)<br />
If tvi Is Nothing Then<br />
Exit Sub<br />
Else<br />
'Dim newValue As Boolean = CBool(e.NewValue)<br />
Dim newValue As String = e.NewValue<br />
Debug.Print("value of depProp is {0}", TreeViewItemBehavior.IsBroughtIntoViewWhenSelected(tvi))<br />
<br />
If newValue = "True" Then<br />
AddHandler tvi.Selected, AddressOf OnTreeViewItemSelected<br />
Else<br />
RemoveHandler tvi.Selected, AddressOf OnTreeViewItemSelected<br />
End If<br />
End If<br />
End Sub<br />
<br />
<br />
Public Shared Sub OnTreeViewItemSelected(ByVal sender As Object, ByVal e As RoutedEventArgs)<br />
<br />
If Object.ReferenceEquals(sender, e.OriginalSource) Then<br />
Dim tvi As TreeViewItem = TryCast(sender, TreeViewItem)<br />
If tvi IsNot Nothing Then<br />
Debug.Print("value of depProp is {0}", TreeViewItemBehavior.IsBroughtIntoViewWhenSelected(tvi))<br />
tvi.BringIntoView()<br />
End If<br />
End If<br />
End Sub<br />
<br />
End Class<br />
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/28959/Introduction-to-Attached-Behaviors-in-WPF?fid=1525923&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Quick&spc=Relaxed | CC-MAIN-2017-39 | en | refinedweb |
Use cond_signal(3C) to unblock one thread that is blocked on the condition variable pointed to by cv . If no threads are blocked on the condition variable, cond_signal() has no effect.
#include <thread.h> int cond_signal(cond_t *cv);
cond_signal() returns 0 if successful. When the following condition is detected, cond_signal() fails and returns the corresponding value.
EFAULTDescription:
cv points to an illegal address. | http://docs.oracle.com/cd/E19253-01/816-5137/sthreads-25587/index.html | CC-MAIN-2014-52 | en | refinedweb |
How to: Declare Handles in Native Types
You cannot declare a handle type in a native type. vcclr.h provides the type-safe wrapper template gcroot to refer to a CLR object from the C++ heap. This template lets you embed a virtual handle in a native type and treat it as if it were the underlying type. In most cases, you can use the gcroot object as the embedded type without any casting. However, with for each, in, you have to use static_cast to retrieve the underlying managed reference.
The gcroot template is implemented using the facilities of the value class System::Runtime::InteropServices::GCHandle, which provides "handles" into the garbage-collected heap. Note that the handles themselves are not garbage collected and are freed when no longer in use by the destructor in the gcroot class (this destructor cannot be called manually). If you instantiate a gcroot object on the native heap, you must call delete on that resource.
The runtime will maintain an association between the handle and the CLR object, which it references. When the CLR object moves with the garbage-collected heap, the handle will return the new address of the object. A variable does not have to be pinned before it is assigned to a gcroot template.
This sample shows how to create a gcroot object on the native stack.
hello
This sample shows how to create a gcroot object on the native heap.
// mcpp_gcroot_2.cpp // compile with: /clr // compile with: /clr #include <vcclr.h> using namespace System; struct CppClass { gcroot<String ^> * str; CppClass() : str(new gcroot<String ^>) {} ~CppClass() { delete str; } }; int main() { CppClass c; *c.str = gcnew String("hello"); Console::WriteLine( *c.str ); }
hello
This sample shows how to use gcroot to hold references to value types (not reference types) in a native type by using gcroot on the boxed type.
// mcpp_gcroot_3.cpp // compile with: /clr #include < vcclr.h > using namespace System; public value struct V { String^ str; }; class Native { public: gcroot< V^ > v_handle; }; int main() { Native native; V v; native.v_handle = v; native.v_handle->str = "Hello"; Console::WriteLine("String in V: {0}", native.v_handle->str); }
String in V: Hello | http://msdn.microsoft.com/en-US/library/481fa11f(v=vs.110).aspx | CC-MAIN-2014-52 | en | refinedweb |
The Internet has brought software to the people. For the first time in
history, ordinary people all over the world are using software to connect to
each other. This trend will surely continue as Internet connectivity enters the
realms of television, radio, telephone, personal digital assistant (PDA)
technology, and the automobile. In addition, people's lives are becoming the
primary focus of software-either directly through human interaction via Web-user
interfaces or indirectly through business-to-business (B2B) communication
targeted at serving human needs. The increasing connectivity of the populace
through software combined with software's more specialized focus on people is
revolutionizing software design.
The software of the past focused on modeling the operation of things, which
gave rise to the object-oriented movement. Although today people could be viewed
as just another collection of objects in an object-oriented world, this approach
would be impractical and likely fail. There is simply no plausible way to model
the dynamic interactions and forces within our society using object-oriented
design. Social interaction involves issues such as the use of freedom,
multicultural preferences, mobility, unpredictability, and geographical
location, just to name a few. Simply put, society cannot be adequately
represented using the abstraction of an object model. The real world of people
is radically different from the world of things, as philosopher Karl Wojtyla
(better known as Pope John Paul II) pointed out years ago:
The world in which we live is composed of many objects . . . As an object, a
man is "somebody"-and this sets him apart from every other entity in
the visible world, which as an object is always only "something."
Implicit in this simple, elementary distinction is the great gulf which
separates the world of persons from the world of things.1
Because people are singularly exceptional types of objects and their
activities are becoming the central focus of software development, a new
programming vision is emerging that targets people's complex dynamic
interactivity in a more specialized way.
The .NET platform is an early embodiment of this new programming vision that
is oriented toward people. It is necessary to understand the elements of this
programming vision to fully leverage the capabilities of .NET. In the days when
people were migrating from the C language's procedural-oriented programming to
the object-oriented programming of the C++ language, it was easy to make the
mistake of trying to adopt the new tool without understanding the paradigm shift
that it was designed to address. Some mistakenly viewed C++ as just a better
version of C instead of as a radically new way to write software. Similarly,
today's .NET could mistakenly be viewed as just a better way to build a Web site
instead of as an enabling technology for the next generation of the Internet. To
prevent this kind of misunderstanding, the rest of this chapter will examine the
people-oriented programming paradigm for building the next generations of the
Internet and how the paradigm can be implemented with the .NET platform.
What is the Internet evolving into, and how will .NET help? The success of
the Internet is tied to the fact that people are social beings. We quickly
embrace innovations that facilitate communication, as the success of the
printing press, radio, and television demonstrate. The primary need being
addressed by the Internet is people's desire to be involved in a community-an
online community that is global in scope. The global online community is
naturally subdivided into a vast multitude of smaller communities that target
specific groups in more personalized ways. The next generation of the Internet,
which will evolve using technologies such as .NET, will more completely and
seamlessly connect people through economic, social, and cultural interactions.
Building an online community that adequately represents society is a
complicated undertaking. Although dramatic improvements in speed and wireless
connectivity will be necessary to allow a truly ubiquitous Internet, the
greatest challenge engineers face is overcoming inadequate software
methodologies. More sophisticated and powerful software tools and techniques are
needed to meet the daunting engineering tasks that exist. Once the appropriate
methodologies and tools become available, a global online community can emerge
out of the somewhat independent initiatives of millions of people. Without the
right tools, standards, and methodologies, progress will be very slow, and we
will make many wrong turns.
What is the appropriate software methodology needed to transform the World
Wide Web into a more globally connected community? Part I of Inside Microsoft
Windows NT Internet Development2 introduced a people-oriented programming
paradigm to address this question. The people-oriented paradigm focuses on
connecting people in more immediate ways through the Internet and on embedding
software within the operations of society so that an online community can
emerge. Unlike the other paradigm shifts that have already transformed the
software industry, such as procedural- and object-oriented programming,
people-oriented programming does not focus on the creation of a new programming
language, such as Java. Rather, people-oriented programming focuses on
leveraging the rich services provided by modern operating systems such as
Windows server.
The Internet revolution represents a dramatic shift in the way society
conducts itself, and so it necessarily represents a dramatic shift in the
purpose of software-a paradigm shift-from technologies focused on individual
computing tasks to technologies focused on social interaction, cultural
expression, and information exchange. Essentially, software designed for the
Internet will be responsible for building a global community. It will focus on
improving the ordinary circumstances of living, enabling people to more
effectively accomplish day-to-day activities. Thus you could aptly call this new
paradigm people-oriented programming . . . While both ActiveX and Java make
valuable contributions to Internet development, they are insufficient by
themselves to meet the needs of the Internet era. To rapidly build reliable,
scalable, distributed software solutions, we need to take ActiveX and Java and
embed them in Internet-enabling systems. This is what the Windows NT server
platform technologies provide. Windows NT and Microsoft Windows Distributed
interNet Applications Architecture (Windows DNA) provide the tools for
implementing people-oriented programming.3
This guiding principle, which had its initial implementation in Windows DNA,
has achieved more comprehensive expression in Microsoft's next-generation .NET
platform. The .NET platform subsumes the rich system services of Windows DNA and
extends them to allow for the creation of people-oriented Web services that can
be used in an elaborate and personalized way over the Internet. Although Windows
DNA focused on satisfying foundational requirements such as Internet
connectivity, transactions, asynchronous programming, fault tolerance, security,
and scalability, the .NET platform addresses the need for people-oriented Web
services. These Web services will allow people to integrate software more
seamlessly into their lives. For example, people will be able to view other
people's appointment calendars in a standardized way or integrate their business
processes between their customers and suppliers. The focus on people has become
increasingly important in the software industry during the last few years, and
it predates the .NET platform. On March 29, 1999, Microsoft announced that its
company had been reinvented. Microsoft intended to encompass a new, broader
vision of the empowerment of people through great software-at any time, in any
place, and on any device. Bill Gates, Microsoft's Chairman and Chief Software
Architect, explained the company's more people-oriented perspective:
Our original vision of "a computer on every desk and in every home"
is still extremely relevant..4
The .NET platform will facilitate the creation of software that is much more
people oriented because it directly enables the implementation of the three
concepts embodied within the people-oriented programming paradigm:
Universalization is a model of development which leverages the capabilities
of sophisticated universal runtimes that implement universally accepted Internet
standards.
The universalization model relies on the runtime to provide system services
that are widely applicable to addressing complex software engineering tasks.
Although programming languages such as C or C++ have powerful capabilities and
also have runtime libraries, the runtimes proposed for people-oriented
programming are many orders of magnitude richer in their capabilities. Instead
of concentrating on programming techniques such as encapsulation, polymorphism,
or inheritance to write reusable code, people-oriented programming concentrates
on reusing the services of ubiquitous runtimes so that code creation can be
minimized and the coding effort can be increasingly directed toward developing
people-oriented Web services for building an online community.
The focus of programming shifts from the inherent capabilities of a
programming language to the inherent capabilities of a runtime. Windows Server
is an example of a runtime that implements universal Internet standards and
provides very rich features within its component object model (COM+) services
layer. The PocketPC is also an example of a runtime that implements universal
Internet standards but has scaled down services more suitable for the devices on
which it runs. The .NET runtime more eloquently expresses this model, as we will
explain later in the chapter.
Collaboration is a model of synergy in which people-oriented Web services
cooperate to provide enhanced services.
The collaboration model facilitates much more sophisticated software
integration across organizational boundaries. A people-oriented Web service is
any application exposing a programmable interface over the Internet, as opposed
to a graphical user interface, with the purpose of enabling developers to build
an online community. Examples of these Web services in the retail industry
include catalogues of products and accessories, billing and payment processing
services, and shipping and delivery services. By programmatically binding these
Web services together, software developers all over the globe can collaborate to
create marketplaces in which the needs of millions of businesses and consumers
are identified and matched.
The software engineering challenges of accomplishing this feat are
formidable. Directories are needed to identify the available Web services and to
describe how developers can integrate other services with them. People need to
form a consensus on Web service description contracts for similar types of
services to avoid excessive complexity in integrating multiple service
providers. Testing and troubleshooting will require easy ways for programmers to
collaborate with multiple Web service providers during the development and
operational phases.
Translation is a model of interoperability that addresses conversion of
functionality between heterogeneous platforms and between diverse service
description contracts.
The translation model provides an approach to creating a virtual uniformity
in an environment that is aggressively diverse. Building an online community
requires a seamless way for thousands of Web services to talk to each other.
This is not an easy task because there are millions of software developers
independently building these types of services using diverse technologies. The
Internet is a network of heterogeneous infrastructures that run many different
operating systems and use incompatible component protocols like distributed
component object model (DCOM), Common Object Request Broker Architecture (CORBA),
and remote method invocation (RMI).
It is not feasible to migrate all these systems into a common technology.
Instead, people-oriented programming concentrates on translation techniques that
allow disparate systems to communicate over ubiquitous Internet standards such
as the Hypertext Transfer Protocol (HTTP) and the eXtensible Markup Language
(XML). People-oriented Web services can be built with any programming language
and on any platform. Internally the service may use proprietary protocols to
achieve maximum scalability and an external interface can translate back and
forth between the proprietary interface and the ubiquitous Internet standard.
The translation model has another equally important objective. It is likely that
multiple Web service providers will provide the same type of service but use
different description contracts that indicate how to programmatically integrate
with that type of service. It will be a considerably complex task for the
software developer to try to accommodate all these different service description
contracts. The model of translation requires tools and techniques to facilitate
a way to map all the description contracts into a common format that the
consuming system understands.
The first version of the .NET platform goes a long way to help software
engineers build people-oriented systems that facilitate universalization,
collaboration, and translation. Although other competing technologies such as
Sun ONE could be used instead of .NET to implement these principles-and other
new technologies are likely to appear in the future-the purpose of the following
analysis is to give a sampling of how .NET facilitates the development of
people-oriented software. The rest of this book provides a more comprehensive
analysis.
The Internet of the future will be far more ubiquitous and powerful than the
Internet of today, with people interacting through natural interfaces. The
Internet will be a high-bandwidth, global network transmitting data, voice, and
video and connecting billions of computers, telephones, radios, televisions,
PDAs, and automobiles all over the globe.
Wireless connectivity will be fast and affordable, potentially displacing
landlines as the most predominant means of Internet access. Many new Internet
devices will emerge that will hook up home items like refrigerators, doors,
windows, air conditioning units, and security systems. Disposable Internet
devices will be commonplace, and people will be able to wear them, mail them,
and easily replace them. Data will be universally accessible and will usually
not be tied to an Internet device. Although security will be enhanced with
biometrics, security and privacy will always be difficult issues. Financial
transactions using Internet cards will be widespread, and the line between
buying online and buying in person will become blurred because the primary
distinction between the two will be delivery or pickup. There will be newer,
more natural ways of communicating over the Internet. Handwriting recognition,
voice recognition, and visual recognition through cameras will allow people to
forget that a sophisticated technical infrastructure is enabling them to
communicate easily over the Web. Hopefully, software developers will also be
able to forget these complexities through the services of universal runtimes and
universally accepted Internet standards.
The .NET platform furthers this end through the universal runtime called the
Common Language Runtime. This runtime operates on top of the operating system to
manage the execution of code and provides services to simplify the development
process. Source code that is targeted for the runtime is called managed code;
the compiler translates it into a Microsoft intermediate language (MSIL) that is
independent of the central processing unit (CPU). As the code is being executed,
a just-in-time (JIT) compiler converts this MSIL to the CPU-specific code
required by the device on which it is running. In theory, this means that
software developers can write code for the runtime without having to target each
CPU architecture on which it may run. This will become an issue of increasing
importance as new Internet devices are built on inexpensive commodity CPUs.
Although the .NET runtime will be available for each variation of the Windows
operating system, it is also possible that .NET may be ported to other operating
systems such as Linux. To enable easier interoperability with alternative
universal runtimes on different platforms, the .NET runtime implements
intermachine communication services using universal Internet standards such as
HTTP, HTTPS, XML, and the Simple Object Access Protocol (SOAP).
The .NET runtime exposes a unified programming model for many services that are
available today through disparate application program interfaces. There are
general-purpose libraries, such as the WIN32 application programming interface
(API), the Microsoft Foundation Classes (MFC), the Active Template Library (ATL),
and WinInet; more specialized libraries, such as DirectX, the Microsoft
Telephony application programming interface (TAPI), CrypoAPI; and a whole set of
COM interfaces for component services such as transaction processing, queued
components, or object pooling that must be learned to develop sophisticated
software. The software developer has to absorb many APIs from many sources-some
duplicating the same functionality and some targeting different programming
languages-to effectively leverage architectures like Windows DNA. The .NET
runtime consolidates most of these APIs under a simpler unified model that
abstracts many of the details, especially the interoperable COM underpinnings.
In the future, the software developer will be able to focus mostly on the .NET
runtime regardless of which programming language is being used. For example, the
message queue component in .NET allows easy incorporation of message-based
communication into applications to do tasks such as sending and receiving
messages, exploring existing queues, or creating and deleting queues. The
runtime implementation of HTTP 1.1 frees the developer from complex tasks such
as pipelining, chunking, encryption, proxy use, and certificates and
authentication mechanisms such as Basic, Kerberos, or Windows NT
Challenge/Response (NTLM).
Many powerful new features that simplify the development process have been
incorporated into the .NET runtime. These include cross-language integration
through a common type system (CTS), simplified versioning and deployment through
the use of assemblies, self-describing components through extensible metadata,
easier lifetime management through automatic garbage collection, a simplified
model for component interaction, and improved debugging and profiling services.
The .NET runtime introduces a new entity called an AppDomain that can greatly
facilitate scalability design, a major issue when connecting billions of
devices. Normally there is a tug of war between scalability and fault tolerance
in the design of high-performance systems. New components are partitioned into
different process spaces within an application so that undiscovered bugs do not
bring down the entire system through something like a memory access violation.
However, cross-process communication can significantly reduce the scalability of
an application because of the additional code execution required and the
serialization of processors that occurs from memory allocations on the heap
during marshalling. In the .NET architecture, managed code is protected from
causing many typical faults in the runtime. Any negative consequences that may
result can also be confined to the offending AppDomain. This allows the software
architect to partition code execution into multiple AppDomains within the same
process space to avoid expensive cross-process communication. The end result is
a much more scalable system that also achieves fault tolerance.
Security is another critical factor determining the evolution of the Internet
into a sophisticated online community. People will not connect sensitive
business operations to the Web unless they are confident that their transactions
will be secure. The .NET runtime provides a code access capability to help
address some of these concerns. Mobile code is a big danger because it can come
from many sources such as in e-mail attachments or in documents that are
downloaded from the Internet. Exploiting known vulnerabilities such as buffer
overflows in Internet software applications is another common method of attack.
Code access security helps protect computer systems from these kinds of attacks
because it allows code to be trusted in varying degrees depending on where the
code originates and on its intended purpose. This mechanism does not prevent all
mobile code from executing but rather limits what the code is capable of doing.
The degree to which this occurs may depend on whether the code has been
digitally signed by a trusted source. Code access security can also reduce the
risk that other legitimate software can be misused by malicious code using
buffer overflows or other exploits. This is accomplished by specifying the set
of operations the legitimate code is allowed to perform as well as the set of
operations it should never be allowed to perform.
The .NET runtime is an evolving platform that will continue to be enhanced as
the needs of the emerging online community expand. One can expect the
incorporation of additional features such as natural interfaces in later
releases of the universal runtime. These new features will be exposed through
the same unified programming model.
Building a globally connected online community can be a lot easier in theory
than in practice. The level of collaboration required is very difficult to
achieve, especially given the competing forces active within the different
industries that will be involved. However, the existence of the Internet today
in its present form demonstrates that it is possible to get a consensus when the
opportunities generated through collaboration outweigh the advantages of
protecting proprietary gains. What types of desirable applications will be
possible in future generations of the Internet?
This is a huge topic, so we will focus only on an e-commerce example. Assume
that every product and service has a globally unique identifier. Everything you
buy at a shopping mall or supermarket has this identifier encoded on it and it
can be scanned through readily available scanners in your home, car, or PDA, all
of which are connected to the Internet. Also assume that there are
people-oriented Web services that understand your identity and provide you with
personal storage. Anytime you buy anything, the item is scanned and your
personal inventory is updated. Anytime you consume an item, you scan it and your
personal inventory is depleted. After you set the personal preferences for your
inventory, automated software agents will be checking periodically on your data
to ensure that your house supplies are always replenished. If you are running
low on beer, the automated software agent will order more from your local store,
which you can pick up next time you drop in. Alternatively the automated agent
could search the current prices of this item using its unique identifier and
order additional supplies from a less expensive store, which will deliver it to
you. Each month your automated agent will produce a report of items it has
purchased for you, providing comparative analyses of alternative buying patterns
that may be to your advantage. For example, your agent may tell you that if you
buy brand X instead of brand Y, you will save a certain amount, or if you stock
a two-month supply of product X, you will save money through bulk purchasing.
Every month the automated agent will present you with a financial summary of
your account and ask for your approval to automatically pay your bills.
Every supplier of products could also have automated agents working on its
behalf. By analyzing previous buying patterns, suppliers could predict more
accurately the desired inventory levels required for their goods. These could be
used to automatically order additional supplies from manufacturers who in turn
could rely on automated agents to stock raw materials appropriately. Anytime
there are unforeseen events that break traditional buying patterns, adjustments
could be quickly made in production, and excess supply could be offered to the
automated agents at a discount. The end result is a finely tuned system that
minimizes waste and effectively matches supply to demand.
How can .NET help software developers build these kinds of systems? Building
these systems is only possible if there are ways to programmatically collaborate
the services of multiple companies over the Web. Vast arrays of Web services
need to be developed that expose business functionality in well-defined ways.
The .NET platform greatly simplifies the creation of these Web services using
universally accepted standards. The runtime provides all the necessary plumbing,
and development tools-such as Visual Studio.NET-provide wizards that create
skeleton Web service applications, which can be enhanced with specific
functionality. The .NET platform exposes its functionality through a number of
namespaces, a few of which we mention here.
The System.Web.Services namespace consists of the classes that enable the user
to build and use Web services. Using this functionality can be very easy. For
example, to make a method of a public class running inside ASP.NET accessible
over the Internet, the user simply adds the WebMethod attribute to its
definition. The System.Web.Services.Protocols namespace consists of the classes
that define the protocols used to transmit data across the wire during the
communication between Web service clients and the Web service itself. It exposes
methods such as HttpClientRequest and HttpServerResponse and provides the
implementation for communicating with a SOAP Web service over HTTP.
The System.Web.Services.Description namespace consists of the classes that
enable you to publicly describe a Web service via a service description
language. Service description contracts are automatically generated when a Web
service is created in Visual Studio. A consumer of the Web service uses this
contract to learn how to communicate with it-that is, the methods it can call,
the input parameters, and the exact format of the potential responses returned.
The Web services description language (WSDL) has become the de facto XML
Internet standard for describing Web services in this fashion. The
System.Web.Services.Discovery namespace consists of the classes that allow
consumers to locate available Web services. Web service discovery is the process
of learning about the existence of available Web services and interrogating them
for their description contracts so that users can properly interact with them.
The Universal Discovery, Description, and Integration (UDDI) Project ()
was created to provide a framework for Web service integration through a
distributed directory of Web services. This directory allows the user to locate
available Web services within an industry or a particular company. Programmatic
registration and discovery of Web services using an assortment of predefined
SOAP messages is supported by the .NET platform. The UDDI and WSDL standards
emerged out of collaboration between IBM, Microsoft, and Ariba, and more than 30
other software companies also endorse them.
Many Web services will be built through binding and extending other Web
services. Some foundationally people-oriented Web services will likely emerge as
commodities. A data store Web service could allow people to safely store their
information in a universally accessible location. The methods to access this
data will have to take into account the speed of the user's Internet connection
and allow for offline updates that can be synchronized later. Although the data
may be stored as XML, the user should be able to use Microsoft Office
applications to view and modify it. An identity Web service will be needed to
allow for a single log-on to multiple Internet services. The Microsoft Passport
service provides this kind of functionality today. More sophisticated features
such as biometrics may be added later. Users will need notification and
messaging Web services that push information to people or their software agents,
such as stock price updates or news headlines. Online calendar services will
also be needed to enable people or agents to collaboratively schedule
appointments and meetings. The calendar service should enable users to set
permissions that authorize or prohibit access. For example, a user might give
access to important clients and deny access for uninvited sales meeting
requests. The .NET platform and associated Web services are evolving systems
that will always be updating and improving. A dynamic delivery service will
likely emerge that will allow users to automatically receive improvements when
they occur.
Binding Web services together will present interesting challenges to the
software development community. The software development process will need to
evolve to encompass the particularities of collaborative development. How do
developers test and debug applications that incorporate multiple live systems
that they do not control? How is it possible to instrument them with diagnostic
information and manage them effectively? How is a denial-of-service scenario
that is caused by unexpected circumstances, such as a consumer repeatedly
calling a Web service in a tight loop, prevented? How are transactions spanned
across multiple Web services so that changes are automatically rolled back if an
error occurs? Although the .NET platform is a great first step to facilitating
collaborative development, numerous issues still need to be addressed.
As of August 2001, the .NET platform was still in the beta version. Although the
Internet has connected millions of computers all over the globe, less than a
tiny fraction of one percent is running the .NET software. The level of
communication between computers on the Web is minimal and is confined mostly to
presentation of information and point-and-click buying and selling. Even though
standards such as XML, SOAP, WSDL, and UDDI have emerged to facilitate
collaborative B2B communication, not many system implementations are currently
available.
Today the dominant binding glue of the Internet is still HTML, which has been
applied to nonpresentation tasks such as e-commerce. HTML has been a tremendous
success because it is easy to implement and acts as a common translation
language for the many different types of systems that exist in the wildly
heterogeneous Internet environment. Although practically every computer
connected to the Internet understands the Internet protocols such as
Transmission Control Protocol/Internet Protocol (TCP/IP) and HTTP, these
protocols limit the level of communication and collaboration that can occur
between them.
One approach to solving this problem would be to get everyone to adopt a common
technology or new programming language such as C# that could provide better
services. However, this is not realistic given the competing forces active
within the software industry, nor is it desirable because it would stifle
innovation in the future. A translation approach would allow greater flexibility
and freedom of expression.
If the Web is to evolve into a more globally connected online community, new
mechanisms of translation are needed that enable richer and more sophisticated
software interaction between people. It is not enough to have universally
accepted protocols; users also need platforms and tools that simplify their
implementation. The .NET platform will help address the translation needs of the
online community in two major ways: system interoperability and service contract
transformation.
Although Internet communication is still quite rudimentary, many sophisticated
business processes have been computerized. There is already a tremendous amount
of software engineering in place that can be used to construct an online
community. Most of it is locked within corporations because of the use of
proprietary and disparate software protocols. The .NET platform can help
companies unlock these rich resources and expose them on the Web. Software
developers can build intermediary Web services as translation layers to existing
systems and also port some legacy code to .NET. The CTS within the .NET platform
facilitates easier code porting through its handling of incompatible generic
data types used by different languages, cross-language integration capabilities,
and standardized ways of dealing with events, dynamic behaviors, persistence,
properties, and exceptions. The .NET implementation of Web services is XML based
and can be accessed by any language, component model, or operating system
because it is not tied to a particular component technology or object-calling
convention. This means that many different companies can independently build Web
services that will be able to interoperate without first having to agree on
system-level implementation details. If users already have a CORBA- or COM-based
system, they can build a .NET Web translation service that wraps its
functionality and exposes it on the Web. The .NET architecture uses the simple
and extensible SOAP protocol for exchanging information within the heterogeneity
of the Internet. SOAP is an XML-based protocol that does not define any
application or implementation semantics and can be used in a large variety of
systems from asynchronous messaging to remote procedure calls.
Web forms within the .NET platform also simplify the generation of HTML, which
serves as a translation mechanism for presentation information. Using Web forms,
a user can create Web pages by dragging and dropping rich user interface
controls onto a designer and then add code to programmatically bind these
components to business layers using any programming language. The .NET platform
will translate the desired presentation into pure HTML, which can be understood
by browsers on any device and operating system.
One of the biggest challenges facing companies who would like to use
collaboration to conduct business on the Web will be getting a consensus about
industry-specific Web service description contracts. There will be many
contracts that serve the same purpose but differ slightly in their format. For
example, there could be hundreds of purchase order contract types; without the
appropriate translation techniques, programmers could face the nightmarish task
of trying to cater to every purchase order contract variation. To simplify these
intricacies, users need an easy way to translate disparate Web service contracts
into a common format that is expected by the consuming Web service. Fortunately
the extensible style language transformation (XSLT) specification addresses this
problem in a standardized way. XSLT is a language that transforms XML document
types into other XML document types. A transformation expressed in XSLT
describes rules for transforming a source tree into a result tree and is
achieved by associating patterns with templates. The .NET platform implements
the XML document object model (DOM) through classes supported in the System.Xml
namespace, which also unifies the XML DOM with the data access services provided
by ADO.NET. The .NET System.Xml.Xsl namespace implements the World Wide Web
Consortium (W3C) XSLT specification. The XslTransform class can load an XSL
style sheet using an XmlReader and transform the input data using an
XmlNavigator.
The capability of the .NET platform to provide easy translation mechanisms for
system interoperability and service contract transformation is a huge step
forward in the evolution of the Web into a sophisticated online community. We
can expect that these capabilities will also emerge in other technologies and
platforms so that all Internet systems will be more easily integrated in more
powerful ways.
The Internet has changed the rules of software development and spurred on the
emergence of the people-oriented software paradigm, which aims to transform the
Internet into an online community that adequately represents society.
Universalization, collaboration, and translation are the three principles
proposed in the people-oriented paradigm. The transformation of the Web into a
sophisticated online community has begun, and many developers are already
building software that engenders the people-oriented characteristic outlined in
this chapter. However, today developers are engineering all the plumbing work by
hand and are implementing Internet standards on a case-by-case basis. The .NET
framework will put in place the tools needed to make software universalization,
collaboration, and translation much easier to deliver, thus more effectively
transforming the Web into a true, worldwide online. | http://www.codeproject.com/Articles/1529/Applied-NET-Developing-People-Oriented-Software-Us?fid=2857&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Expanded&spc=None | CC-MAIN-2014-52 | en | refinedweb |
public class PushRegistry extends java.lang.Object
PushRegistrymaintains a list of inbound connections. An application can register the inbound connections with an entry in the application descriptor or dynamically by calling the
registerConnectionmethod. The connection strings are URIs that are used with
Connector.opento open the appropriate server connection.
While an application is running and has Connections open, it is
responsible for all I/O operations associated with the inbound
connection using the appropriate Generic Connection Framework
API.
When the application is not running or the application does not have the URI
open, the application management software(AMS)
listens for inbound notification requests. When a notification arrives for a
registered application, the AMS will start the application, if necessary,
via the normal invocation of
MIDlet.startApp
method.
Implementations MUST guarantee that each inbound connection successfully registered (statically or dynamically) is logically unique. The logical uniqueness is determined by using the comparison ladder scheme as defined in [RFC3986]. Implementations MUST perform at least the simple string comparison, and SHOULD perform one or more of the latter steps of the comparison ladder scheme.
To avoid collisions on inbound generic connections, the application
descriptor MUST include information about static connections
that are needed by the
application suite.
See
Provisioning, and declaration
referencing an application class that is not listed in
the
MIDlet-<n> attributes of the same
application descriptor.
If the application suite can function meaningfully even if a Push registration
can not be fulfilled, it MUST register the Push connections using
the dynamic registration with
PushRegistry.registerConnection method.
A conflict-free installation reserves each requested connection for
the exclusive use of the applications in the suite. While the suite is
installed, any attempt by other applications to open one of the
reserved connections will fail with an
IOException.
A call from an application to
Connector.open()
on a connection reserved for the application will always
succeed, assuming the application does not already have the connection open.
If two application suites have a static push connection in common, they cannot be installed together. Typically one would have to be deleted one before the other one becomes able to be successfully installed.
In some cases the application may not function properly if it cannot listen to a certain protocol or port for incoming traffic. The static push registration has been designed for these cases. The application that must have access to certain protocol or port announces this need in the application descriptor or in the JAR manifest. The implementation checks at installation time that the requested protocol or port is available and registers the application to listen to the incoming traffic. If the request cannot be fulfilled because protocol or port is already reserved, the installation of the application fails.
Static Push registrations are done in the application descriptor or in the
JAR manifest with
MIDlet-Push-<n> attribute.
Each push registration entry contains the following information :
where:where:MIDlet-Push-<n>: <ConnectionURI>, <applicationClassName>, <AllowedSender>
MIDlet-Push-<n>= the Push registration attribute name. Multiple push registrations can be provided in an
applicationsuite. The numeric value for
<n>starts from 1 and MUST use consecutive ordinal numbers for additional entries. The first missing entry terminates the list. Any additional entries are ignored.
ConnectionURI= the connection string used in
Connector.open()
MIDletClassName= the application that is responsible for the connection. The named application MUST be registered in the application descriptor or the JAR manifest with a
MIDlet-<n>record. (This information is needed when displaying messages to the user about the application when Push connections are detected) If the named application appears more than once in the suite, the first matching entry is used.
AllowedSender= a designated filter that restricts which senders are valid for launching the requested application. The syntax and semantics of the
AllowedSenderfield depend on the addressing format used for the protocol. However, every syntax for this field MUST support using the wildcard characters "*" and "?". The semantics of those wildcard are:
When the value of the
AllowedSender field is just the wildcard character
"*", connections will be accepted from any originating source. For Push attributes
using the
datagram and
socket URIs (if supported by the
platform),
AllowedSender field contains a numeric IP address in the
same format for IPv4 and IPv6 as used in the respective URIs (IPv6 address
including the square brackets as in the URI). It is possible to use the wildcards
also in these IP addresses, e.g. "129.70.40.*" would allow subnet resolution. The
wildcards can also be used to match address delimiters, e.g. "72.5.1*" will match
"72.5.124.161". Note that the port number is not part of the filter for
datagram and
socket connections. In every protocol,
the
AllowedSender field MUST match with the appropriate address field
of the incoming event. The address field to use and the exact syntax and semantics
of the address depend on the protocol. However, the address and the
AllowedSender filter MUST be compared by exact string matching where
the strings are compared character by character and the characters need to match
exactly except as allowed by the two wildcard characters: asterisk(*) and question
mark(?).
This specification defines the syntax for
datagram and
socket inbound connections. When other specifications
define push semantics for additional connection types, they
must define the expected syntax for the filter field, as well as
the expected format for the connection URI string.
The following is a sample application descriptor entry that would reserve a stream socket at port 79 and a datagram connection at port 50000. (Port numbers are maintained by IANA and cover well-known, user-registered and dynamic port numbers) [See IANA Port Number Registry]
There are cases when defining a well known port registered with IANA is not necessary. Simple applications may just wish to exchange data using a private protocol between an application and server application.
To accommodate application.
For instance, if a
UDPDatagramConnection is opened and a port
number was not specified, then the application is requesting a dynamic port to be
allocated from the ports that are currently available. By calling
PushRegistry.registerConnection() method the application informs the
AMS that it is the target for inbound communication, even after the application
has been destroyed. (See application life cycle for definition of
Destroyed
state). Once the application has registered the connection with
PushRegistry.registerConnection method, the connection is listed
in the
PushRegistry.listConnections(false) return value. If the
application has an open connection to the registered connection, the AMS starts
listening to the inbound connection once the connection has been closed with
Connector.close method. If the application is deleted from the device,
then its dynamic communication connections are unregistered automatically.
Responsibility for registered Push connections is shared between the AMS and
the application enforcing the Auto Invocation And Push Registry Security and presenting notifications (if any, and if the device has graphical capabilities enabling it to do so) to the user while invoking the application suite.
The AMS is responsible for the shutdown of any running applications (if necessary) prior to the invocation of the push application.
After the AMS has started the Push application, the application end the connection with
Connector.close() method. If the connection is closed, then the
AMS resumes listening for Push notifications. This avoids the loss of data
that might occur if neither the application nor the AMS was listening.
When a registered Push application is not running the AMS listens for incoming connections and launches the application as necessary. If a Push application exits and there are incoming connections, either new or unhandled, for the application, then the application MUST be started to handle the available input.
A Push application SHOULD behave in a predictable manner when handling asynchronous data via the Push mechanism. An application MAY inform the user that data has been processed. (If the application has capabilities to do so.)
When the AMS is started, it checks the list of registered connections and
begins listening for inbound communication. When a notification arrives the
AMS starts the registered application. The application application when it requests to read the data. For stream oriented transports the connection may be lost if the connection is not accepted before the server end of the connection request timeouts.
When an application is started in response to a registered Push connection
notification, it is platform dependent what happens to the current running
application. The application life cycle defines the expected behaviors that
an interrupted application could see from a call to
destroyApp().
The requirements for buffering of messages are specific
to each protocol used for Push and are defined separately
for each protocol. There is no general requirement related
to buffering that would apply to all protocols. If the
implementation buffers messages, these messages MUST
be provided to the application when the application is started and it
opens the related
Connection that it has registered for Push.
When
datagram connections are supported with Push, the
implementation MUST guarantee that when an application
registered for
datagram Push is started in response to an
incoming datagram, at least the datagram that caused the startup of the
application is buffered by the implementation and will be
available to the application when the application
opens the
UDPDatagramConnection after startup.
When socket connections are supported with Push, the
implementation MUST guarantee that when an application
registered for
socket Push is started in response to
an incoming
socket connection, this connection can
be accepted by the application. For example, a platform might
support server socket connections in an application,
but might not support inbound socket connections for push
launch capability.
A
ConnectionNotFoundException is thrown from
the
registerConnection and from the
registerAlarm methods, when the platform
does not support that optional capability.
The
registerAlarm method MUST be supported.
Usage scenario 1:
The suite includes an application with a well known port for communication.
During the
startApp processing
a thread is launched to handle the incoming data.
Using a separate thread is the recommended practice
for avoiding conflicts between blocking I/O operations
and user interaction events. The
thread continues to receive messages until the application is destroyed.
In this sample, the descriptor includes a static push connection registration. It also includes an indication that this application requires permission to use a datagram connection for inbound push messages. (See Auto Invocation And Push Registry Security for details about application permissions.) Note: this sample is appropriate for bursts of datagrams. It is written to loop on the connection, processing received messages.Remark: This example is not appropriate for JME Embedded Profile. Has to be replaced.
Usage scenario 2: The suite includes an application that dynamically allocates port the first time it is started.
In this sample, the application descriptor.
clone, equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public static void registerConnection(java.lang.String connection, java.lang.String app, java.lang.String filter) throws java.lang.ClassNotFoundException, java.io.IOException
While the application has opened the connection with
Connector.open
, the AMS will NOT be listening for the input. The application is
responsible for the connection. If the application has not opened a
connection to the registered URI, the AMS MUST listen for input
regardless of whether the application is running or not.
The arguments for the dynamic connection registration are the same as the Push Registration Attribute used for static registrations.
connection- the URI for the connection
app- class name of the
appto be launched when new external data is available. The named
appMUST be registered in the application descriptor or the JAR manifest with a
MIDlet-<n>record. This parameter has the same semantics as the
MIDletClassNamein the Push registration attribute defined in the class description.
filter- a connection URI string indicating which senders are allowed to cause the
appto be launched
java.lang.IllegalArgumentException- if the connection string is
nullor is not valid, or if the filter string is
nullor not valid
ConnectionNotFoundException- if the runtime system does not support push delivery for the requested connection protocol
java.io.IOException- if the connection is already registered or if there are insufficient resources to handle the registration request
java.lang.ClassNotFoundException- if
appis
nullor if the
appclass name can not be found in the current application suite or if this class is not included in any of the
MIDlet-<n>records in the application descriptor or the JAR manifest
java.lang.SecurityException- if the
appdoes not have permission to register a connection
unregisterConnection(java.lang.String)
public static boolean unregisterConnection(java.lang.String connection)
connection- the URI for the connection
trueif the unregistration was successful,
falseif the connection has not been registered or if the connection argument was
null
java.lang.SecurityException- if the connection was registered by another application suite
registerConnection(java.lang.String, java.lang.String, java.lang.String)
public static java.lang.String[] listConnections(boolean available)
Return a list of registered connections for the current application.
The URI of every registered connection is returned from
listConnections(false).
The URI of every connection that has available input is returned from
listConnections(true). URI's of connections opened with
Connector.open(URI) are not returned. After the
Connection
is closed new input may become available and the URI will again be included
in the return of
listConnections(true). URIs of connections
that timeout or otherwise no longer have available input are not returned
from
listConnections(true). Due to race conditions, a call to
listConnections(true) may return URIs that will fail to open
with
Connector.open because timeouts or other connection errors.
When the application opens the URI, the application takes over listening for input
and the AMS stops listening. The
listConnections(true)
method will only see URIs with available input during the time that the
application does NOT have the connection open.
available- if
true, only return the list of connections ready for the handoff to the application, otherwise return the complete list of registered connections for the current application
public static java.lang.String getMIDlet(java.lang.String connection)
connection- a registered generic connection URI string
nullif the connection was not registered by the current application or if the connection argument was
null
registerConnection(java.lang.String, java.lang.String, java.lang.String)
public static java.lang.String getFilter(java.lang.String connection)
connection- a registered generic connection URI string
null, if the connection was not registered by the current application or if the connection argument was
null
registerConnection(java.lang.String, java.lang.String, java.lang.String)
public static long registerAlarm(java.lang.String app, long time) throws java.lang.ClassNotFoundException, javax.microedition.io.ConnectionNotFoundException
PushRegistrysupports one outstanding wake up time per application in the current suite. An application is expected to use
java.util.TimerTaskfor notification of time based events while the application is running.
If a wakeup time was registered and is still pending, the wakeup time will be returned, otherwise zero is returned. If the wakeup has passed then the wakeup is not longer pending and zero is returned.
app- class name of the
appwithin the current running application suite to be launched, when the alarm time has been reached. The named application MUST be registered in the application descriptor or the JAR manifest with a
MIDlet-<n>record. This parameter has the same semantics as the
MIDletClassNamein the Push registration attribute defined above in the class description.
time- time at which the
appis to be executed in the format returned by
Date.getTime(). If the time is zero, or is in the past, or the app is already running at the
timethen the app MUST not be launched.
appwas scheduled to occur, in the format returned by
Date.getTime()
ConnectionNotFoundException- if the runtime system does not support alarm based application launch
java.lang.ClassNotFoundException- if the class name cannot be found in the current application suite or if this class is not included in any of the
MIDlet-<n>records in the application descriptor or the JAR manifest or if the
appargument is
null
java.lang.SecurityException- if the application does not have permission to register an alarm
Date.getTime(),
Timer,
TimerTask
Copyright (c) 2014, Oracle and/or its affiliates. All Rights Reserved. Use of this specification is subject to license terms. | http://docs.oracle.com/javame/config/cldc/opt-pkgs/api/meep/api/javax/microedition/io/PushRegistry.html | CC-MAIN-2014-52 | en | refinedweb |
See also bug #1590399
I still have nasty problems with Python objects magically changing / loosing attributes shortly after creation, while using win32ui and heavy threading. Mainly on machines with many Cores/CPUs.
Patch below: I found 2 strongly smelling locations in win32ui, which are probably responsible: where objects are DECREF'ed and possibly INCREF'ed soon after - while "hoping" that they do not disappear. But I guess this is not valid (today). A new Python object somewhere may use this memory meanwhile and then the object may be stolen back and going hybrid ...
(Unfortunately I don't have the MS full version compilers to compile current MFC based stuff. (Or is this possible somehow with VC Express versions?) I could make quick tests of win32ui.pyd (py2.3 and py2.6)
diff -ur --strip _orig/win32assoc.cpp ./win32assoc.cpp
--- _orig/win32assoc.cpp 2009-03-04 11:52:00 +0000
+++ ./win32assoc.cpp 2011-11-21 13:10:42 +0000
@@ -228,11 +228,11 @@
// So set the instance to NULL _before_ we decref it!
PyObject *old = pAssoc->virtualInst;
pAssoc->virtualInst = NULL;
- XDODECREF(old);
if (ob!=Py_None) {
pAssoc->virtualInst = ob;
DOINCREF(ob);
}
+ XDODECREF(old);
RETURN_NONE;
}
diff -ur --strip _orig/win32cmd.cpp ./win32cmd.cpp
--- _orig/win32cmd.cpp 2009-01-08 22:33:00 +0000
+++ ./win32cmd.cpp 2011-11-21 13:10:26 +0000
@@ -208,17 +208,20 @@
RETURN_ERR("The parameter must be a callable object or None");
void *oldMethod = NULL;
- // note I maybe decref, then maybe incref. I assume object wont be destroyed
- // (ie, ref go to zero) between the 2 calls!)
+ // we need incref's early in order to avoid a ref erronously going to zero during DEDECREF
+ if (method!=Py_None) {
+ Py_INCREF(method);
+ Py_INCREF(hookedObject);
+ }
if (pList->Lookup(message, oldMethod)) {
pList->RemoveKey(message);
// oldMethod is returned - don't drop its reference.
DODECREF(hookedObject);
}
if (method!=Py_None) {
- Py_INCREF(method);
+ // already done above: Py_INCREF(method);
pList->SetAt(message,method);
- Py_INCREF(hookedObject);
+ // already done above: Py_INCREF(hookedObject);
}
if (oldMethod)
return (PyObject *)oldMethod;
Mark Hammond
2012-01-02
Mark Hammond
2012-01-02
Thanks! I think that first patch could also be done by checking the new object isn't the same as the old, but I did it your way :) The second one I made simpler - just always incref to add a temp ref and always decref at the end - I've attached that patch below - please let me know if you think it will not work correctly for some reason. Checked in as rev 4173:95b4f896b100
diff --git a/Pythonwin/win32cmd.cpp b/Pythonwin/win32cmd.cpp
--- a/Pythonwin/win32cmd.cpp
+++ b/Pythonwin/win32cmd.cpp
@@ -208,8 +208,10 @@
RETURN_ERR("The parameter must be a callable object or None");
void *oldMethod = NULL;
- // note I maybe decref, then maybe incref. I assume object wont be destroyed
- // (ie, ref go to zero) between the 2 calls!)
+ // note I maybe decref, then maybe incref. To ensure the object will
+ // not be destroyed (ie, ref go to zero) between the 2 calls), I
+ // add a temporary reference first.
+ DOINCREF(hookedObject);
if (pList->Lookup(message, oldMethod)) {
pList->RemoveKey(message);
// oldMethod is returned - don't drop its reference.
@@ -220,6 +222,7 @@
pList->SetAt(message,method);
Py_INCREF(hookedObject);
}
+ DODECREF(hookedObject); // remove temp reference added above.
if (oldMethod)
return (PyObject *)oldMethod;
else | http://sourceforge.net/p/pywin32/patches/115/ | CC-MAIN-2014-52 | en | refinedweb |
Opened 3 years ago
Closed 3 years ago
Last modified 3 years ago
#9250 closed defect (wontfix)
fresh install rolls back TimingAndEstimationPlugin_Db_Version in system table in sqlite
Description
to reproduce:
- create a new environment with sqlite database
- follow setup instructions
- trac-admin /some/env upgrade output looks good
- run trac-admin /some/env upgrade a second time
- upgrade assumes a full install is needed for lack of TimingAndEstimationPlugin_Db_Version
- upgrade fails creating the billing table a second time
narrowed down cause:
- dbhelper::db_table_exists() called by CustomReportManager::upgrade() somehow causes the system table entry for TimingAndEstimationPlugin_Db_Version to be rolled back.
- since it's a fresh install there is no custom_report table
- dbhelper::db_table_exists() correctly returns false in the response to the exception for select from the non-existent table - but has the side effect of rolling back
- absence of the db version entry causes upgrade checks to assume full install is required again
FWIW:
We will live for now - since we'll be using sqlite for the foreseeable future, we just made dbhelper::db_table_exists() return early as follows:
def db_table_exists(env, table): return get_scalar(env, ("select count(*) from sqlite_master where type = 'table' and name = '%s'" % (table))) > 0;
Obviously not a viable solution, but could be made conditional if
- dbhelper can query database backend
- nested transactions as used by dbhelper fail only for sqlite
As such it would serve as a work around for many environments
Sorry if notation is substandard - spent half as much time looking at python code today as I have in entire my career prior - when I read 'dive into python' years ago ;)
Thanks for a plugin worth debugging!
Attachments (0)
Change History (4)
comment:1 follow-up: ↓ 2 Changed 3 years ago by bobbysmith007
comment:2 in reply to: ↑ 1 Changed 3 years ago by anonymous
- Priority changed from normal to lowest
Replying to bobbysmith007:
This should only be rolling back to the save point but... I found a quote on the internet claiming "SAVEPOINT first appeared in SQLITE V 3.6.8" so this is very likely the ultimate cause of the issues.
Right on! Just tried a vanilla install with sqlite 3.7.4 and problem did not manifest.
I will look at special casing this function for sqlite perhaps, or investigate if this function (table_exists) appears in trac now.
Much appreciated - but your reference to version info had me look at sqlite release history and find that
- 3.4.2 (used) is over four years old now
- 3.6.8 (required) was released over two and half years ago
- 3.7.4 (tested) fine is going on a year old
I'm not really qualified to guess how common such old installations are, but 4+ years makes me wish I hadn't made an assertion about a potential fix to 'many' installations.
If I were the only developer (or one of two/three) on a project and this 'issue' came up, I would probably
- recommend that the reporter move on with life every other year or so
- note a (quite reasonable) 3.6.8+ requirement for sqlite backed environments in the docs
- resolve: wontfix
- get back to what I was doing before I got pestered
That said, I'm not the developer ... just making sure I don't waste yet more of your time by sending you down a rabbit hole for ticket system public relations' sake ...
Thanks much again for your time, and yet again for the plugin!
Patrick
comment:3 Changed 3 years ago by bobbysmith007
- Resolution set to wontfix
- Status changed from new to closed
You rock! I updated the docs noting the increased sqlite3 version requirement.
Thanks for your verifying the fix and for your understanding, I am heading back to what I was doing :)
Cheers,
Russ
Thanks very much for the detailed bug report!
I am unclear on how this could occur, but I will definitely look into it further.
This should only be rolling back to the save point but... I found a quote on the internet claiming "SAVEPOINT first appeared in SQLITE V 3.6.8" so this is very likely the ultimate cause of the issues.
I will look at special casing this function for sqlite perhaps, or investigate if this function (table_exists) appears in trac now.
I'm glad you were able to get it working for you on your version of sqlite
Cheers,
Russ | http://trac-hacks.org/ticket/9250 | CC-MAIN-2014-52 | en | refinedweb |
This article talks about 21 important FAQ from the perspective of WPF and Silverlight. a lot more, feel free to download it
and enjoy:.
First let’s try to understand how Microsoft display technologies have evolved:
Note: Hardware acceleration is a process in which we use hardware to perform some functions rather than performing those functions using software running in the CPU.
Grid
FlowDocument
Ellipse
WPF is a collection of classes that simplifies building dynamic user interfaces. Those classes include a new set of controls, some of which mimic
old UI elements (such as Label, TextBox, Button), and some that are new (such as Grid, FlowDocument, and Ellipse).
Label
TextBox
Button
Hardware acceleration is a process in which we use hardware to perform some functions rather than performing those functions using.
Windows Presentation Framework is the new presentation API. WPF is a two and three dimensional graphics engine. It has the following capabilities:.
No, XAML is not meant only for WPF. XAML is an XML-based language and it has several variants..
There are ten important namespaces / classes in WPF. which verifies whether code is running on the correct thread. In the coming sections, we will look in detail how WPF threading works.
DispatcherObject
When WPF was designed, a property based architecture was considered. In other words, rather than using methods, functions, and events, object behavior will interact using properties.
For now, we will only restrict ourselves to this definition. In the coming sections, we have dedicated questions for this.
The Visual class is a drawing object which abstracts drawing instructions, how drawing should be done like clipping, opacity, and other functionalities.
The Visual class also acts like a bridge between the unmanaged MilCore.dll and the WPF managed classes. When a class is derived from Visual,
it can be displayed on windows. If you want to create your own customized user interface, then you can program using Visual objects.
Visual
UIElement handles three important aspects: layout, input, and events.
UIElement
FrameWorkElement uses the foundation set by UIElement. It adds key properties like HorizontalAlignment, VerticalAlignment, margins, etc.
FrameWorkElement
HorizontalAlignment
VerticalAlignment
This class helps us create basic shapes such as rectangle, polygon, ellipse, line, and path.
This class has controls like TextBox, Button, ListBox, etc. It adds some extra properties like font, foreground and background colors.
ListBox.
TreeView and flexible way possible. element
maps to the WpfApplication1.Window1 class, Button elements in the XAML file map to a System.Windows.Control.Button class, and the
Grid XAML elements map to System.Windows.Control.Grid.
Window
WpfApplication1.Window1
System.Windows.Control.Button.
void main()
We can now connect the behind code method and functions to the events in.
Click
MyButton_Click
Dependency properties belong to one class but can be used in another. Consider the code snippet below:
<Rectangle Height="72" Width="131" Canvas.
Height and Width are regular properties of the Rectangle. But Canvas.Top and Canvas.Left
are dependency properties as they belong to the Canvas class. It is used by the Rectangle to specify its position within Canvas.
Height
Width
Rectangle
Canvas.Top
Canvas.Left
Canvas folder. In short, at runtime, you actually do not see the XAML file. But if you want to do runtime parsing
of.
Above is a code snippet which shows a XAML file and the code completely detached from the XAML presentation. In order to associate a class with a XAML file,
you need to specify the x:Class attribute. Any event specified on the XAML object can be connected by defining a method with sender and event values.
You can see from the above code snippet that we have linked MyClickEvent to an event in the behind code.
x:Class
MyClickEvent
Note: You can find a simple sample in the WindowsSimpleXAML folder. Feel free to experiment with the code… experimenting will teach you more
than reading something theoretical.
To access XAML objects in behind code, you just need to define them with the same name as given in the XAML document. For instance, in the below code snippet, we have named
the object as objtext and the object is defined with the same name in the behind code.
objtext
Note: You can get the source code from the WindowsAccessXAML folder.
Silverlight is a web browser plug-in by which we can enable animations, graphics, and audio/video. You can compare Silverlight with Flash. We can view animations
with Flash and it’s installed as a plug-in in the browser.
Yes, animations made in Silverlight can run in other platforms other than Windows. In whatever platform you want to run, you just need the Silverlight plug-in. to the WPF framework. Microsoft then extended WPF and made WPF/e
which helped to render the UI in the browser. WPF/e was the code name for Silverlight. Later Microsoft launched Silverlight officially.
So the XAML just defines the XML structure to represent UI elements. Both the frameworks, i.e., WPF and Silverlight, then read the UI elements and render the UI elements
in the respective platform..
This sample we are making using VS 2008 Web Express edition and .NET 3.5. It’s a six step procedure to run our first Silverlight application. So let’s go through it step by step..
TextBlock
<%@Register Assembly="System.Web.Silverlight"
Namespace="System.Web.UI.SilverlightControls" TagPrefix="asp" %>
We also need to refer to the ScriptManager from the Silverlight name space. The ScriptManager control is a functionality from AJAX.
The main purpose of this control is to manage the download and referencing of JavaScript libraries.
ScriptManager
</MyFirstSilverLightApplication.xap"
MinimumVersion="2.0.31005.0" Width="100%" Height="100%" />
</div>
</form>
</body>
</html>. | http://www.codeproject.com/Articles/34433/21-Important-FAQ-questions-for-WPF-and-SilverLight?fid=1537837&df=90&mpp=10&sort=Position&tid=3195478 | CC-MAIN-2014-52 | en | refinedweb |
All Titles
Sir ,
Ur website is very use full to me to improve myself in JDBC
Thank You
jdbc mysql - JDBC
=DriverManager.getConnection("jdbc:mysql://localhost:3306/ram","root","root...jdbc mysql import java.sql.*;
public class AllTableName{
public static void main(String[] args) {
System.out.println("Listing all table
jdbc - JDBC
("jdbc:mysql://localhost:3306/ram","root","root");
System.out.println("Connect... static void main(String[] args)
{
System.out.println("Listing all table.... These arguments all have names such as fooPattern. Within a pattern String, "%" means match
MySql ClassNotFoundException - JDBC
MySql ClassNotFoundException Dear sir,
i am working in Linux platform with MySQL database , actually i finished all installation in MySQL... install in linux any software making connection between java and MySQL. Or how can i | http://www.roseindia.net/tutorialhelp/allcomments/2959 | CC-MAIN-2014-52 | en | refinedweb |
project suggest a network security project based on core java with explanation i want to get java projects in core java
project
project how to make blinking eyes using arc, applet what is object serialization ?
with an example
Hi Friend,
Please visit the following link:
Thanks
core java - Java Beginners
core java hallo sir,
in java ,int range is -128 to 127. what about '0' indicate what Hi,
In java byte range is -128 to 127, not of int Hello sir/madam,
Can you please tell me why multiple inheritance from java is removed.. with any example..
Thank you...://
Thanks Hi
core java - Java Beginners
core java can we write a program for adding two numbers without...-in-java/
it's about calculating two numbers
Java Program Code for calculating two numbers catch(Exception e)
{
System.out.println(e);
}
what is use of this?? Hi Friend,
The catch block is used as an exception... the following link:
core java - Java Beginners
core java
How to reverse the words in a given sentence??jagdysbms@yahoo.co.in Hi friend,
import java.io.*;
public class...);
}
}
-------------------------------------------------------
Read for more information.
Thanks
project - Java Beginners
project in Java Need example project in Java
java project - Java Beginners
java project how can i download the source code for the project college admission system
Java Project - Java Beginners
Java Project Hey, thanks for everything your doing to make us better. Am trying to work on a project in java am in my first year doing software Engineering and the project is m end of year project. Am really believing
Java Project - Java Beginners
Java Project Hello Sir I want Mini Student Admission System project in java
with Source Code and Back end Database is MS Access.
plz Give That Sir its Very Very Urgent
Stages in Core Java - Java Beginners
Stages in Core Java I like to know the growth stages of Core JAVA:
i came to know the Stages of core java as below:
1. core java
2... the market for Core Java? (or any others, plse specify
Java Project - Java Beginners
Java Project Dear Sir,
Right now i am working in java image processing project.
In that i have to split in one image like Texture, color...,
Please visit the following links:
JAVA PROJECT - Java Beginners
JAVA PROJECT Dear Sir,
I going to do project in java " IT in HR, recruitment, Leave management, Appriasal, Payroll"
Kindly guide me about... etc. I am doing project first time.
waiting for your reply,
Prashant
Java Core Code - Java Beginners
Java Core Code My question is that how can i calculate and display the true downloading speed from my download manager(I made it in java using core methods no struts or any adv technology used) using java methods detail - Java Beginners
project detail Dear roseindia,
i wnat to the idea for doing project in the java HAVE START A J2ME PROJECT WITH NO CLEAR IDEA .. ANY ONE... project titled, as ?M-News Application? is basically a mobile news application Project. In our project we are going to develop a Mobile Application which
small java project - Java Beginners
small java project i've just started using java at work and i need to get my self up to speed with it, can you give me a small java for beginners project to work on.
your concern will be highly appreciated
JAVA Project - Java Beginners
JAVA Project Hi Friends,
Can somebody please tell me a website from where I can download java live projects along with the source code Sir,
how can i download the source for for the proect college admission system
Core Java Exceptions - Java Beginners
Core Java Exceptions HI........
This is sridhar ..
Exceptions r checked exception and unchecked exception ........?
Checked exceptions r....
Thanks
Project - Java Beginners
Project Give me a sample project idea done using code Java
Java Project - Java Beginners
question in my java project.
But this program is shoing error on the 50th line...Java Project Create a class Computer that stores information about... of java too much hepl plz
Project Concerning - Java Beginners
Project Concerning Dear Sir,
Can you write me URL of any website from where I study the Servlet, JSP project.
Just googled it n im sure u will get many more results. Otherwise, RoseIndia is a better option
Core Java tutorial for beginners
Core Java tutorials for beginners makes its simpler for novices..., the Java
guide section is divided into two sections: Core Java and Advanced Java. Core
Java is where the basic parts of Java are explained while Advanced Java
j2me project - Java Beginners
SALES DATA FROM THE SERVER.
THIS MIDLET IS FOR THE PROJECT MEDICAL CENTRAL Questions - Java Beginners
Java Project Questions Hello sir ,I want Course Form in JAVA SWING that includes following contents-
1)Course Id -Numeric
2)Course NAME-TEXT
3)Course Duration-NUMBERS
4)Course Fees-NUMBERS
I want to to add,Update,
project
project i need a project in java backend sql
J2ME project - Java Beginners
core java
core java how to display characters stored in array in core java
abt java project - Java Beginners
abt java project if we want to create a pannel on the button how to write on the code in side the button plz help
core java
core java basic java interview question
project
project sir
i want a java major project in railway reservation
plz help me and give a project source code with entire validation
thank you | http://www.roseindia.net/tutorialhelp/comment/98950 | CC-MAIN-2014-52 | en | refinedweb |
Feb 12, 2009 09:40 AM|robinspaul|LINK
Feb 12, 2009 02:02 PM|ldechent|LINK
As a starting point:
You can put a placeholder on your aspx page.
In your code you create the checkboxlist and then you have the code add the appropriate items using the .Items.Add()
I think this question is abstract enough that you might prefer to be given a complete working example and then we can discuss what the various things in it do. If you are interested could you help me by making up data for me to put into a table:
Proposed Example
maybe you give three industries, the for each industry you give two to three examples of (what word did you use) and then for each of those you give two to three subcategories, and then after that some answer
hmmm..
maybe I could just call them
Is the following design acceptable:
First checkboxlist appears and the person makes a choice.
On their choice the second checkboxlist appears and the person makes a choice.
On their choice the third checkboxlist appears and when they make the choice the color appears.
I think the above simulates what you want to do sufficiently to satisfy your question but I want to run the design by you first to confirm that it is OK.
Feb 12, 2009 03:59 PM|paindaasp|LINK
Here’s a quick sample:
I have two tables Manufacturers and Models. Manufacturers has one column, the Manufacturer’s name. The Models table has two columns, the Model name, and the Manufacturer’s name. This example loads the first CheckBoxList with the Manufacturers. You can select any Manufacturer and then click the button to create additional CheckBoxLists of available models, one list per Manufacturer selected.
I use two SQLDataSources, one for the Manufacturer and one for the Models. I load a HiddenField with the selected manufacturer name, which is used as the SelectParameter. I also add a Label above each new CheckBoxList, which Displays the Manufacturer’s name.
The c# code executes when the button is clicked:
protected void GetModels(System.Object sender, System.EventArgs e) { Int16 I; for (I = 0; I <= cblManufacturer.Items.Count - 1; I++) { if (cblManufacturer.Items[I].Selected) { hfManu.Value = cblManufacturer.Items[I].Value; Label lblManu = new Label(); lblManu.ID = "lbl_" + cblManufacturer.Items[I].Value; lblManu.Text = "Models for Manufacturer: " + cblManufacturer.Items[I].Value; PlaceHolder1.Controls.Add(lblManu); CheckBoxList cblModels = new CheckBoxList(); cblModels.ID = "cbl_" + cblManufacturer.Items[I].Value; cblModels.DataSource = SqlDataSource2; cblModels.DataTextField = "Model"; cblModels.DataBind(); PlaceHolder1.Controls.Add(cblModels); } } }
The aspx code is:
<form id="form1" runat="server"> <div> <asp:CheckBoxList </asp:CheckBoxList> <asp:Button <br /> <br /> <asp:PlaceHolder</asp:PlaceHolder> <br /> <asp:HiddenField </div> </form> <asp:SqlDataSource </asp:SqlDataSource> <asp:SqlDataSource <SelectParameters> <asp:ControlParameter </SelectParameters> </asp:SqlDataSource>
Hope this helps...
Feb 12, 2009 11:48 PM|robinspaul|LINK
Thanks a lot for both the answers. My problem is similar to what contributor has mentioned:-
The fist checkboxlist will show all the industries and based on the selected ones I need to create new checkboxlists for all the selected one and again for all the selected items from the second groups i have create new checkboxlist again.
I could display the second level using the loop. But, I CAN'T READ the second sections to generate the third sections. Could you please help me?
Feb 13, 2009 12:58 PM|paindaasp|LINK
What you have to do is to get into the controls of your PlaceHolder.
Here's some code to do this (sorry it's vb, I'm weak in c#)... This is very similar to creating the initial dynamic CheckBoxList, except you are using the PlaceHolder to check the CheckBoxLists.
For Each c As Control In PlaceHolder1.Controls If c.GetType.Name = "CheckBoxList" Then Dim cbx1 As CheckBoxList = CType(c, CheckBoxList) For Each li As ListItem In cbx1.Items If li.Selected = True Then hfModel.Value = li.Value Dim lblYear As New Label lblYear.ID = "lbl_" & li.Value lblYear.Text = "Years for Models: " & li.Value PlaceHolder2.Controls.Add(lblYear) Dim cblYears As) End If
Feb 13, 2009 01:20 PM|ldechent|LINK
I'm still working on it (didn't want you to think I had abandoned the project--of course, given the amount of time that has passed the chances are increasing that someone else will stop by, see the question and drop a good link).
If it turns out I make it work using checkboxes instead of a checkboxlist, is that acceptable or is it not?
I'm working now to find a way to put a click event on the control. I did that in VB and I thought I would quickly find a similar C# solution...
written minutes later -- I'm looking at
and that might be what I needed.
Feb 13, 2009 01:26 PM|robinspaul|LINK
Thanks to all again. I will try with the new code.
Feb 13, 2009 02:44 PM|paindaasp|LINK
Here's the c# version of the above vb code:
foreach (Control c in PlaceHolder1.Controls) { if (c.GetType().Name == "CheckBoxList") { CheckBoxList cbx1 = (CheckBoxList)c; foreach (ListItem li in cbx1.Items) { if (li.Selected == true) { hfModel.Value = li.Value; Label lblYear = new Label(); lblYear.ID = "lbl_" + li.Value; lblYear.Text = "Years for Models: " + li.Value; PlaceHolder2.Controls.Add(lblYear); CheckBoxList cblYears =); } } } }
Feb 17, 2009 01:48 AM|ldechent|LINK
Robinspaul:
Are you still looking for help on this? I realize quite a bit of time has passed since the question was asked.
I now have a working demo that is not completely bulletproof.
If you start in the first column, and check all the appropriate boxes, and then move over to the second column and check the appropriate boxes, and then move over to the third column and then check all the appropriate boxes, it works perfectly.
However, if you go back and uncheck a box in the first column, all the checkboxes in the second column disappear corresponding to the unchecked firstcolumn checkbox dissappear but the checkboxes for the box that is still marked remain but lose their checkmark. Meanwhile the checkboxes in the third column all remain there without any change.
I can go in and write more code to address these deficiencies, but I wanted to hear from you first.
I'm asuming that you don't want a child checkbox to have a memory that it was checked should its parent be unchecked.
I'm going to go ahead and post my code behind for your curiosity. I created a class to hold the three following things: a checkbox's name, whether it was relevant (if it should show up because its parent is checked), and if it was checked.
using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using System.Data.SqlClient; // This was added to make SqlConnection function using System.Configuration; // This was added to make ConfigurationManager function using System.Data; // This was added to make CommandType function using System.Text; // This was added to make StringBuilder function using System.Collections; // This enabled ArrayList public partial class DCBL4 : System.Web.UI.Page { List<CheckBoxThing> grouplist = new List(); List sublist = new List(); protected void Page_Init(object sender, EventArgs e) { string Industry; SqlConnection conn = new SqlConnection(ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString); SqlCommand cmd = conn.CreateCommand(); cmd.CommandText = "SELECT DISTINCT Industry FROM DCBL"; cmd.CommandType = CommandType.Text; conn.Open(); SqlDataReader reader = cmd.ExecuteReader(); while (reader.Read()) { Industry = reader["Industry"].ToString().Trim(); IndustryCheckBoxList.Items.Add(Industry); } reader.Close(); cmd.CommandText = "SELECT DISTINCT theGroup FROM DCBL"; reader = cmd.ExecuteReader(); while (reader.Read()) { CheckBoxThing gc = new CheckBoxThing(reader["theGroup"].ToString().Trim(), false, false); grouplist.Add(gc); } reader.Close(); //if there are more than 9 items we need id in two places below to force the ordering to be correct //otherwise it goes 1,10,11,12,2,3,4,5,6, etc. cmd.CommandText = "SELECT DISTINCT theSub, id FROM DCBL ORDER BY id"; reader = cmd.ExecuteReader(); while (reader.Read()) { CheckBoxThing sc = new CheckBoxThing(reader["theSub"].ToString().Trim(), false, false); sublist.Add(sc); } reader.Close(); conn.Close(); } protected void Page_Load(object sender, EventArgs e) { } protected void IndustryCheckBoxList_SelectedIndexChanged(object sender, EventArgs e) { foreach (ListItem li in IndustryCheckBoxList.Items) { if (li.Selected == true) { string comparisoncheck = ""; foreach (CheckBoxThing cbt in grouplist) { //feedback.InnerHtml += "z " + cbt.Name + " - li.Text is " + li.Text + " z<br />"; SqlConnection conn = new SqlConnection(ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString); conn.Open(); SqlCommand cmd = conn.CreateCommand(); cmd.CommandText = "SELECT id FROM DCBL WHERE Industry=@Industry AND theGroup=@theGroup"; cmd.Parameters.AddWithValue("@theGroup", cbt.Name); cmd.Parameters.AddWithValue("@Industry", li.Text); cmd.CommandType = CommandType.Text; comparisoncheck = Convert.ToString(cmd.ExecuteScalar()); conn.Close(); if (comparisoncheck != "") //indicating we have a match { cbt.Relevant = true; } } } } DisplayGroupChecks(); } protected void DisplayGroupChecks() { GroupCheckBoxList.Items.Clear(); foreach (CheckBoxThing cbt in grouplist) { if (cbt.Relevant == true) { GroupCheckBoxList.Items.Add(cbt.Name); } } } protected void GroupCheckBoxList_SelectedIndexChanged(object sender, EventArgs e) { foreach (ListItem li in GroupCheckBoxList.Items) { if (li.Selected == true) { foreach (CheckBoxThing c in grouplist) { if (c.Name == li.Text) { c.CheckD = true; //we don't know which one is new so we "CheckD" all of them } } string comparisoncheck = ""; foreach (CheckBoxThing cbt in sublist) { SqlConnection conn = new SqlConnection(ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString); conn.Open(); SqlCommand cmd = conn.CreateCommand(); cmd.CommandText = "SELECT id FROM DCBL WHERE theGroup=@theGroup AND theSub=@theSub"; cmd.Parameters.AddWithValue("@theGroup", li.Text); cmd.Parameters.AddWithValue("@theSub", cbt.Name); cmd.CommandType = CommandType.Text; comparisoncheck = Convert.ToString(cmd.ExecuteScalar()); conn.Close(); if (comparisoncheck != "") //indicating we have a match { cbt.Relevant = true; } } } } DisplaySubChecks(); } protected void DisplaySubChecks() { SubCheckBoxList.Items.Clear(); foreach (CheckBoxThing cbt in sublist) { if (cbt.Relevant == true) { SubCheckBoxList.Items.Add(cbt.Name); } } } protected void SubCheckBoxList_SelectedIndexChanged(object sender, EventArgs e) { foreach (ListItem li in SubCheckBoxList.Items) { if (li.Selected == true) { foreach (CheckBoxThing c in sublist) { if (c.Name == li.Text) { c.CheckD = true; } } } } DisplayResults(); } protected void DisplayResults() { feedback.InnerHtml = ""; foreach (CheckBoxThing cbt in sublist) { if (cbt.CheckD == true) { SqlConnection conn = new SqlConnection(ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString); conn.Open(); SqlCommand cmd = conn.CreateCommand(); cmd.CommandText = "SELECT Color FROM DCBL WHERE theSub=@theSub"; cmd.Parameters.AddWithValue("@theSub", cbt.Name); cmd.CommandType = CommandType.Text; feedback.InnerHtml += Convert.ToString(cmd.ExecuteScalar()) + "<br />"; conn.Close(); } } } }
and the class that goes with this....
using System; using System.Collections.Generic; using System.Linq; using System.Web; /// <summary> /// Summary description for CheckBoxThing /// </summary> public class CheckBoxThing { string _name; bool _relevant; bool _checkD; //checked is a keyword so we use checkD public CheckBoxThing(string name, bool relevant, bool checkD) { _name = name; _relevant = relevant; _checkD = checkD; } public string Name { get { return _name; } set { _name = value; } } public bool Relevant { get { return _relevant; } set { _relevant = value; } } public bool CheckD { get { return _checkD; } set { _checkD = value; } } }
Feb 17, 2009 08:32 AM|robinspaul|LINK
Thanks again contributor,
Yes, I got it working.
I am trying to understand your code as well.
Thank you very much for helping me.
None
0 Points
Mar 13, 2010 10:49 AM|Patibandla.Suresh|LINK
hi i do have the same kind of a problem. i would like to populate a checkbox list with 20 items , of which 12 items should appear in one column and the remaining 8 items should appear in the other . Any help would be appreciated.
Thanks and Regards,
Suresh Patibandla.
10 replies
Last post Mar 13, 2010 10:49 AM by Patibandla.Suresh | http://forums.asp.net/t/1383931.aspx | CC-MAIN-2014-52 | en | refinedweb |
WPF Hands-On Lab: Getting Started with the Composite Application Library
Purpose
In this lab, you will learn the basic concepts of the Composite Application Guidance and apply them to create a Composite Application Library solution that you can use as the starting point for building a composite Windows Presentation Foundation (WPF) application. After completing this lab, you will be able to do the following:
- You will understand the basic concepts of the Composite Application Guidance for WPF and Silverlight.
- You will create a new solution based on the Composite Application Library.
- You will create and load a module.
- You will create a view and show it in the Shell window.
Preparation
This topic requires you to have the following Composite Application Library and Unity Application Block assemblies:
- Microsoft.Practices.Composite.dll
- Microsoft.Practices.Composite.Presentation.dll
- Microsoft.Practices.Composite.UnityExtensions.dll
- Microsoft.Practices.ObjectBuilder2.dll
- Microsoft.Practices.ServiceLocation.dll
- Microsoft.Practices.Unity.dll
The Composite Application Library ships as source code, which means you must compile it to get the Composite Application Library assemblies (Microsoft.Practices.Composite.dll, Microsoft.Practices.Composite.Presentation. dll, and Microsoft.Practices.Composite.UnityExtensions.dll).
To compile the solution
- In Windows Explorer, double-click the following batch file to open the Composite Application Library solution in Visual Studio:
Desktop & Silverlight - Open Composite Application Library.bat
- Build the solution. The Composite Application Library assemblies will be placed in the folder CAL\Desktop\Composite.UnityExtensions\bin\Debug.
Procedures
This lab includes the following tasks:
- Task 1: Understanding the Composite Application Guidance for WPF and Silverlight
- Task 2: Creating a Solution Using the Composite Application Library
- Task 3: Adding a Module
- Task 4: Adding a View
The next sections describe each of these tasks.
Task 1: Understanding the Composite Application Guidance for WPF and Silverlight
The Composite Application Guidance for WPF and Silverlight is a set of guidance for developing complex WPF and Silverlight applications. The complexity this refers to occurs when there are multiple independently evolving pieces in an application that need to work together. The Composite Application Guidance is designed to help you manage this complexity. It includes a reference implementation, reusable library code (named Composite Application Library), and pattern guidance.
Background: Composite Applications
A composite application is composed of a number of discrete and independent components. To the user, the application appears as a seamless program that offers many capabilities. These components are integrated together in a host environment to form a coherent solution. Figure 1 shows an example of a composite application.
Figure 1
Background: Containers
Applications built using the Composite Application Library are composites that potentially consist of many loosely coupled components that need a way to interact and communicate with one another to deliver the required business functionality.
To tie together these various modules, applications built using the Composite Application Library rely on a dependency injection container. The container offers a collection of services. A service is an object that provides functionality to other components in a loosely coupled way through an interface and is often a singleton. The container creates instances of components that have service dependencies. During the component's creation, the container injects any dependencies that the component has requested into it. If those dependencies have not yet been created, the container creates and injects them first.
There are several advantages of using a container:
- A container removes the need for a component to have to locate its dependencies or manage their lifetimes.
- A container allows swapping the implementation of the dependencies without affecting the component.
- A container facilitates testability by allowing dependencies to be mocked.
- A container increases maintainability by allowing new services to be easily added to the system.
For more information about containers, see the Container design concept.
Task 2: Creating a Solution Using the Composite Application Library
In this task, you will create a solution using the Composite Application Library. You will be able to use this solution as a starting point for your composite Windows Presentation Foundation (WPF) application. The solution includes recommended practices and techniques and is the basis for the procedures in the Composite Application Guidance. To create a solution with the Composite Application Library, the following tasks must be performed:
- Create a solution with a Shell project. In this task, you create the initial Visual Studio solution and add a WPF Application project that is the basis of solutions built using Composite Application Library. This project is known as the Shell project.
- Set up the Shell window. In this task, you set up a window, the Shell window, to host different user interface (UI) components in a decoupled way.
- Set up the application's bootstrapper. In this task, you set up code that initializes the application.
The following procedure describes how to create a solution with a Shell project. A Shell project is the basis of a typical application built using the Composite Application Library—it is a WPF Application project that contains the application's startup code, known as the bootstrapper, and a main window where views are typically displayed (the Shell window).
To create a solution with a Shell project
- In Visual Studio, create a new WPF application. To do this, point to New on the File menu, and then click Project. In the Project types list, select Windows inside the Visual C# node. In the Templates box, click WPF Application. Finally, set the project's name to HelloWorld.Desktop, specify a valid location, and then click OK.
Visual Studio will create the HelloWorld project, as illustrated in Figure 2. This project will be the Shell project of your application.
Figure 2HelloWorld project
- Using Windows Explorer, create a folder named Library.Desktop inside your solution's folder, and then copy the following assemblies into it:
- Microsoft.Practices.Composite.dll. This assembly contains the implementation of the Composite Application Library core components such as modularity, logging services, communication services, and definitions for several core interfaces. This assembly does not contain UI-specific elements.
- Microsoft.Practices.Composite.Presentation.dll. This assembly contains the implementation of Composite Application Library components that target WPF applications, including commands, regions, and events.
- Microsoft.Practices.Composite.UnityExtensions.dll. This assembly contains base and utility classes you can reuse in applications built with the Composite Application Library that consume the Unity Application Block. For example, it contains a bootstrapper base class, the UnityBootstrapper class, that creates and configures a Unity container with default Composite Application Library services when the application starts.
- Microsoft.Practices.Unity.dll and Microsoft.Practices.ObjectBuilder2.dll. These assemblies enable you to use the Unity Application Block in your application. By default, applications built using the Composite Application Guidance for WPF use the Unity Application Block. However, developers who prefer to use different container implementations can build adapters for them using the provided extensibility points in the Composite Application Library.
- Microsoft.Practices.ServiceLocation.dll. This assembly contains the Common Service Locator interface used by the Composite Application Guidance to provide an abstraction over Inversion of Control containers and service locators; therefore, you can change the container implementation with ease.
- In the HelloWorld project, add references to the assemblies listed in the preceding step. To do this, right-click the HelloWorld project in Solution Explorer, click Add Reference, click Browse in the Add Reference dialog box, browse to and select the assemblies you want to add, and then click OK.
The Shell window is the top-level window of an application based on the Composite Application Library. This window is a place to host different user interface (UI) components that exposes a way for itself to be dynamically populated by others, and it may also contain common UI elements, such as menus and toolbars. The Shell window sets the overall appearance of the application.
The following procedure explains how to set up the Shell window.
To set up the Shell window
- In Solution Explorer, rename the file Window1.xaml to Shell.xaml.
- Open the code-behind file Shell.xaml.cs and rename the Window1 class to Shell using the Visual Studio refactoring tools. To do this, right-click Window1 in the class signature, point to Refactor, and then click Rename, as illustrated in Figure 3. In the Rename dialog box, type Shell as the new name, and then click OK. If the Preview Changes — Rename dialog box appears, click Apply.
Figure 3Window1 renaming using Visual Studio refactoring tools
- In XAML view, open the Shell.xaml file, and then set the following attribute values to the Window root element:
- x:Class = "HelloWorld.Desktop.Shell" (this matches the code behind class's name)
- Title = "Hello World"
Your code should look like the following.
Regions
Conceptually, a region is a mechanism that developers can use to expose to the application's WPF container controls (those that permit child elements) as components that encapsulate a particular visual way of displaying views (typically, views are user controls). Regions can be accessed in a decoupled way by their names; they support dynamically adding or removing views at run time.
By showing controls through regions, you can consistently display and hide the views, independently of the visual style in which they display. This allows the appearance, behavior, and layout of your application to evolve independently of the views hosted within it.
The Composite Application Library supports the following controls to be exposed as regions:
- System.Windows.Controls.ContentControl and derived controls
- System.Windows.Controls.ItemsControl and derived controls
- Controls derived from the class System.Windows.Controls.Primitives.Selector, such as the System.Windows.Controls.TabControl control
The following procedure describes how to add an ItemsControl control to the Shell window and associate a region to it. In a subsequent task, you will dynamically add a view to this region.
To add a region to the Shell window
- In the Shell.xaml file, add the following namespace definition to the root Window element. You need this namespace to use an attached property for regions that is defined in the Composite Application Library.
- Replace the Grid control in the Shell window with an ItemsControl control named MainRegion, as shown in the following code.
Figure 4 shows the Shell window in the Design view.
Figure 4Shell window with an ItemsControl control
- In the ItemsControl control definition, set the attached property cal:RegionManager.RegionName to "MainRegion", as shown in the following code. This attached property indicates that a region named MainRegion is associated to the control.
Bootstrapper
The bootstrapper is responsible for the initialization of an application built using the Composite Application Library. When you have application, a startup Uniform Resource Identifier (URI) is specified in the App.xaml file; the URI launches the main window. In an application built using the Composite Application Library, it is the bootstrapper's responsibility to launch the main window. This is because the Shell window relies on services, such as the Region Manager, that need to be registered before the Shell window can be displayed. Additionally, the Shell window may rely on other services that are injected into its constructor.
The Composite Application Library includes a default abstract UnityBootstrapper class that handles this initialization using the Unity container. Typically, you use this base class to create a derived Bootstrapper class for your application that uses a Unity container. Many of the methods on the UnityBootstrapper class are virtual methods. You should override these methods as appropriate in your own custom bootstrapper implementation. If you are using a container other than Unity, you should write your own container-specific bootstrapper.
The following procedure explains how to set up the application's bootstrapper.
To set up the application's bootstrapper
- Add a new class file named Bootstrapper.cs to the HelloWorld project.
- Add the following using statements at the top of the file. You will use them to refer to elements referenced in the UnityBootstrapper class.
- Update the Bootstrapper class's signature to inherit from the UnityBootstrapper class.
- Override the CreateShell method in the Bootstrapper class. In this method, create an instance of the Shell window, display it to the user, and return it, as shown in the following code.
- Override the GetModuleCatalog method. In this template method, you typically create an instance of a module catalog, populate it with modules, and return it. The module catalog interface is Microsoft.Practices.Composite.Modularity.IModuleCatalog, and it contains metadata for all the modules in the application. Because the application contains no modules at this point, the implementation of the GetModuleCatalog method should simply return an instance of the module catalog, with no modules loaded. You can paste the following code in your Bootstrapper class to implement the method.
In the preceding code, an instance of the Microsoft.Practices.Composite.Modularity.ModuleCatalog class is returned. This implementation of the module catalog is used to define the application's modules. Module loading and module catalogs are described in more detail in "Task 3: Adding a Module" later in this topic.
- Open the file App.xaml.cs and initialize the Bootstrapper in the handler for the Startup event of the application, as shown in the following code. By doing this, the bootstrapper code will we executed when the application starts.
- Open the file App.xaml and remove the attribute StartupUri. Because you are manually instantiating the Shell window in your bootstrapper, this attribute is not required. The code in the App.xaml file should look like the following.
- Build and run the application. You should see an empty Hello World window, as shown in Figure 5.
Figure 5Hello World window
Task 3: Adding a Module
In this task, you will create a module and add it to your solution.
Background: Modularity Design Concept
Modularity is designing a system that is divided into a set of functional units (named modules) that can be composed into a larger application. A module represents a set of related concerns. It can include components such as views, business logic, and pieces of infrastructure, such as services for logging or authenticating users. Modules are independent of one another but can communicate with each other in a loosely coupled fashion.
A composite application exhibits modularity. For example, consider an online banking program. The user can access a variety of functions, such as transferring money between accounts, paying bills, and updating personal information from a single UI. However, behind the scenes, each of these functions is a discrete module. These modules communicate with each other and with any back-end systems, such as database servers. Shell services integrate the input from the different modules and handle the communication between the modules and the user. The user sees an integrated view that looks like a single application.
Benefits of Modularity
Modularity provides the following benefits to development teams:
- It promotes separation of concerns through allowing a high degree of separation between the application infrastructure and the business logic.
- It allows different teams to independently develop each of the individual business logic components and infrastructure components.
- It allows parts of the application to separately evolve.
- It promotes code re-use and flexibility because it allows business logic components and the application infrastructure to be incorporated into multiple solutions.
- It provides an excellent architecture for the front-end integration of line-of-business systems or service-oriented systems into a task-oriented user experience.
Modules
A module in the Composite Application Guidance for WPF and Silverlight is a logical unit in your application. Modules assist with implementing a modular help with testing and deployment.
Adding a module to your solution involves the following tasks:
- Creating a module. In this task, you create a module project with a module initializer class.
- Configuring how the module is loaded. In this task, you configure your application to load the module.
The following procedure describes how to create a module.
To create a module
- Add a new class library project to your solution. To do this, right-click the HelloWorld.Desktop solution node in Solution Explorer, point to Add, and then click New Project. In the Project types list, select Windows in the Visual C# node. In the Templates box, click Class Library. Finally, set the project's name to HelloWorldModule, and then click OK. Figure 6 illustrates how your solution should look like.
Figure 6Solution with a module named HelloWorldModule
- Add references in your module to the following Windows Presentation Foundation assemblies. To do this, right-click the HelloWorldModule project in Solution Explorer, and then click Add Reference. In the Add Reference dialog box, click the .NET tab, click the following assemblies, and then click OK:
- PresentationCore.dll
- PresentationFramework.dll
- WindowsBase.dll
- Add references in your module to the following Composite Application Library assemblies. To do this, right-click the HelloWorldModule project in Solution Explorer, and then click Add Reference. In the Add Reference dialog box, click the Browse tab, click the following assemblies, and then click OK:
- Microsoft.Practices.Composite.dll
- Microsoft.Practices.Composite.Presentation.dll
A module initializer class is a class that implements the Microsoft.Practices.Composite.Modularity.IModule interface. This interface contains a single Initialize method that is called during the module's initialization process. The following code illustrates the IModule interface definition.
In the Initialize method of your module initializer class, you implement logic to initialize the module. For example, you can register views and services or add views to regions. In the subsequent steps, you will create a module initializer class for the HelloWorld module. You will add code to it in the next task.
- Rename the Class1.cs file to HelloWorldModule.cs. To do this, right-click the file in Solution Explorer, click Rename, type the new name, and then press ENTER. In the dialog box that asks if you want to perform a rename of all references to your class, click Yes.
- Open the file HelloWorldModule.cs and add the following using statement at the top of the file. You will use it to refer to modularity elements provided by the Composite Application Library.
- Change the class signature to implement the IModule interface, as shown in the following code.
- In the HelloWorldModule class, add an empty definition of the Initialize method, as shown in the following code.
- Add a Views folder to the HelloWorldModule project. In this folder, you will store your view implementations. To do this, right-click the HelloWorldModule project in Solution Explorer, point to Add, and then click New Folder. Change the folder name to Views.
This step is recommended to organize your projects; this is useful when a module contains several artifacts. The following are other common folders that you can add to your module:
- Services. In this folder, you store service implementations and service interfaces.
- Controllers. In this folder, you store controllers.
Figure 7 shows the solution with the HelloWorldModule module.
Figure 7Solution with the HelloWorldModule
- Build the solution.
At this point, you have a solution based on the Composite Application Library with a module. However, the module is not being loaded into the application. The following section describes module loading and how you can load modules with the Composite Application Library.
Module Loading Process
Modules go through a three-step process during application start up:
- Modules are discovered by the module catalog. The module catalog contains a collection of metadata about those modules. This metadata can be consumed by the module manager service.
- The module manager service coordinates the modules initialization. It manages the retrieval and the subsequent initialization of the modules. It loads modules—retrieving them if necessary—and validates them.
- Finally, the module manager instantiates the module and calls the module's Initialize method.
Populating the Module Catalog
The Composite Application Library provides several ways to populate the module catalog. You can also provide your own custom implementation. In Windows Presentation Foundation (WPF), the following are types of populating the module catalog supported, without modification, by the Composite Application Library:
- Populating the module catalog from code. This is the most straightforward method of populating the module catalog. In this approach, the module information is added to the module catalog in code. If you directly reference the modules, you can use the module type to add the module. However, directly referencing modules results in a less decoupled design. If you do not directly reference modules from the shell, you must provide the fully qualified type name and the location of the assembly.
Another advantage of this approach is that you can easily add conditional logic to determine which modules should be loaded.
- Populating the catalog from a XAML file. Another approach is to declaratively specify the kind of module catalog to create and which modules to add to it; you do this by creating a ModuleCatalog.xaml file. Typically, the XAML file is added as a resource to your shell; from a technical perspective, this approach is very similar to defining the module catalog from code.
- Populating the catalog from a Configuration file. In WPF, you can specify the module information in the App.config file. The advantage of this approach is that because this configuration file is not compiled with the application, new modules can be easily added without recompiling the shell.
- Populating the catalog from a Directory. In WPF, you can also declare the modules in a directory and use the DirectoryModuleCatalog to look for modules in that designated folder.
This approach requires you to configure modules using attributes in code; this is the easiest way to add and remove modules from your application.
The following procedure explains how to populate the catalog from code to load the HelloWorldModule module into the HelloWorld.Desktop application.
To populate the module catalog with the HelloWorld module from code
- In your Shell project, add a reference to the module project. To do this in Solution Explorer, right-click the HelloWorld.Desktop project, and then click Add Reference. In the Add Reference dialog box, click the Projects tab, click the HelloWorldModule project, and then click OK.
- Open the Bootstrapper.cs file and explore the GetModuleCatalog method. The method implementation is shown in the following code.
This method returns an instance of the ModuleCatalog class. This type of module catalog service is used to define the application's modules from code—it implements the methods included in the IModuleCatalog interface and adds an AddModule method for developers to manually register modules that should be loaded in the application. The signature of this method is shown in the following code.
The AddModule method returns the same module catalog instance and takes the following parameters:
- The module initializer class's type of module to load. This type must implement the IModule interface.
- The Initialization mode. This parameter indicates how the module will be initialized. The possible values are InitializationMode.WhenAvailable and InitializationMode.OnDemand.
- An array containing the names of the modules that the module depends on, if any. These modules will be loaded before your module to ensure your module dependencies are available when it is loaded.
- Update the GetModuleCatalog method to register the HelloWorldModule module with the module catalog instance before returning it. To do this, you can replace the GetModuleCatalog implementation with the following code.
- Build and run the solution. To verify that the HelloWorldModule module gets initialized, add a breakpoint to the Initialize method of the HelloWorldModule class. The breakpoint should be hit when the application starts.
Task 4: Adding a View
In this task, you will create and add a view to the HelloWorldModule module. Views are objects that contain visual content. Views are often user controls, but they do not have to be user controls. Adding a view to your module involves the following tasks:
- Creating the view. In this task, you implement the view by creating the visual content and writing code to manage the UI elements in the view.
- Showing the view in a region. In this task, you obtain a reference to a region and add the view to it.
The following procedure describes how to create a view.
To create a view
- Add a new WPF user control to your module. To do this, right-click the Views folder in Solution Explorer, point to Add, and then click New Item. In the Add New Item dialog box, select the User Control (WPF) template, set the name to HelloWorldView.xaml, and then click Add.
- Add a "Hello World" text block to the view. To do this, you can replace your code in the file HelloWorldView.xaml with the following code.
<UserControl x: <Grid> <TextBlock Text="Hello World" Foreground="Green" HorizontalAlignment="Center" VerticalAlignment="Center" FontFamily="Calibri" FontSize="24" FontWeight="Bold"></TextBlock> </Grid> </UserControl>
- Save the file.
Region Manager
The region manager service is responsible for maintaining a collection of regions and creating new regions for controls. This service implements the Microsoft.Practices.Composite.Regions.IRegionManager interface. Typically, you interact directly with this service to locate regions in a decoupled way through their name and add views those regions. By default, the UnityBootstrapper base class registers an instance of this service in the application container. This means that you can obtain a reference to the region manager service in the HelloWorld application by using dependency injection.
The following procedure explains how to obtain an instance of the region manager and add the HelloWorldView view to the shell's main region.
To show the view in the shell
- Open the HelloWorldModule.cs file.
- Add the following using statement to the top of the file. You will use it to refer to the region elements in the Composite Application Library.
- Create a private read-only instance variable to hold a reference to the region manager. To do this, paste the following code inside the class body.
- Modify the HelloWorldModule class's constructor to obtain a region manager instance through constructor dependency injection and store it in the regionManager instance variable. To do this, the constructor has to take a parameter of type Microsoft.Practices.Composite.Regions.IRegionManager. You can paste the following code inside the class body to implement the constructor.
- In the Initialize method, invoke the RegisterViewWithRegion method on the RegionManager instance. This method registers a region name with its associated view type in the region view registry; the registry is responsible for registering and retrieving of these mappings.
The RegisterViewWithRegion method has two overloads. When you want to register a view directly, you use the first overload that requires two parameters, the region name and the type of the view. This is shown in the following code.
The UI composition approach used in the preceding code is known as View Discovery. When using this approach, you specify the views and the region where the views will be loaded. When a region is created, it asks for its associated views and automatically loads them.
- Build and run the application. You should see the Hello World window with a "Hello World" message, as illustrated in Figure 8.
Figure 8Hello World message
More Information
For a complete list of How-to topics included with the Composite Application Guidance, see Development Activities.
Home page on MSDN | Community site | http://msdn.microsoft.com/en-us/library/dd458867(d=printer).aspx | CC-MAIN-2014-52 | en | refinedweb |
UUID-like identifier generator.
More...
#include <adobe/zuid.hpp>
List of all members.
(Note that these are not member functions.)
The ZUID class implements a non-standard UUID (Universally Unique ID). The ZUID is generated with an algorithm based on one available from the Open Software Foundation, but with the following differences:
These changes where made to improve performance and avoid privacy issues of having a hardware specific address imbedded in documents. These changes increase the probability of generating colliding IDs but the probability is low enough to suffice non-mission critical needs.
The UUID code in this file has been significantly altered (as described above) and should not be used where a true UUID is needed. The MD5 code has only been altered for coding standards. The algorithm should still function as originally intended.
Definition at line 89 of file zuid.hpp.
[explicit]
Set this zuid to be the UUID. The UUID isn't changed
Parses strings of the style "d46f246c-c61b-3f98-83f8-21368e363c36" and constructs the zuid with it
Create a dependent zuid_t. Given an identical string and zuid_t it will always generate the same new zuid_t. This is useful if you have an object that has a unique name and you want to be able to get an ID for it given the ID of the parent object. The zuid_t is generated by running name_space and name (as UNICODE or ASCII) through MD5.
00000000-0000-0000-0000-000000000000
[related]
UUID-compliant storage for the ZUID
[static]
Always set to the null zuid 00000000-0000-0000-0000-000000000000
Definition at line 114 of file zuid.hpp.
Use of this website signifies your agreement to the Terms of Use and Online Privacy Policy.
Search powered by Google | http://stlab.adobe.com/classadobe_1_1zuid__t.html | CC-MAIN-2014-52 | en | refinedweb |
28 March 2011 09:18 [Source: ICIS news]
SINGAPORE (ICIS)--Shell is expected to restart operations at its 800,000 tonne/year mixed-feed cracker and 750,000 tonne/year MEG plant in Pulau Bukom, Singapore, by 29 March following a 10-day outage caused by operational issues, market sources said on Monday.
The cracker was shut on 18 March due to a compressor issue and had been expected to remain shut for 10 days, traders said.
The cracker had recently completed a turnaround and was restarted over the weekend of 12-13 March.
Shell subsequently declared force majeure (FM) on monoethylene glycol (MEG) and ethylene (C2) following the shutdown, while prompt shipments of propylene (C3) were also be affected, sources said.
The cracker complex produces 800,000 tonnes/year of C2, 450,000 tonnes/year of C3 and 230,000 tonnes/year of benzene, according to Shell’s website.
Shell’s 750,000 tonne/year MEG plant on neighbouring ?xml:namespace>
“For the duration of the Force Majeure we have declared, our supply chain teams are working hard to resume normal operations and supply as soon as possible,” a company spokesperson said.
When asked to confirm if the cracker will resume operations on 29 March, the spokesperson said: “We do not comment on operational | http://www.icis.com/Articles/2011/03/28/9447495/shell-to-restart-singapore-cracker-29-march-market.html | CC-MAIN-2014-52 | en | refinedweb |
16 July 2012 16:20 [Source: ICIS news]
LONDON (ICIS)--?xml:namespace>
Polish synthetic rubber producer Synthos, meanwhile, said it saw little chance that its bid for ZAP could succeed given that the the ZAT/ZAP merger plan had the backing of the controlling shareholder in the firms, Poland's treasury ministry.
Acron said that following the close of its zlotych (Zl) 1.96bn ($573.1m, €467.8m) offer for 66% of ZAT at the end of Monday, it would consider whether there is a business case for utilising the number of shares offered for purchase to acquire a minority stake in ZAT.
A source at Acron said the Russian mineral fertilizer producer was far from convinced that European competition regulators would approve the merger because “ZAT and ZAP together might be too much of a giant for the Polish market”.
Acron was also looking at what it believed may be serious procedural discrepancies in how ZAT had formulated the merger strategy, he said.
Although ZAT, at this stage, is not yet obliged to file a motion for merger approval from the competition regulators, it is already preparing to submit such a motion to the Polish Office of Competition and Consumer Protection (UOKiK), which it believed would be passed on to the European authorities, ZAT said.
Analysts at investment bank WOOD & Company said the competition rulings on the merger, which would create
If the merger goes through, the only two major fertilizer producers left in
Approximately one-third of fertilizers used in
($1 = €0.82)
($1 = Zl 3.42, | http://www.icis.com/Articles/2012/07/16/9578533/acron-holds-out-for-minority-stake-despite-zatzap-merger-push.html | CC-MAIN-2014-52 | en | refinedweb |
CHINGU
github.com/ippa/chingu/tree/master
DOCUMENTATION: rdoc.info/projects/ippa/chingu
Ruby 1.9.2 is recommended. Should also work with 1.8.7+. Chingu development is mostly conducted using Win7 / Ruby 1.9.2.
DESCRIPTION
OpenGL accelerated 2D game framework for Ruby. Builds on the awesome Gosu (Ruby/C++) which provides all the core functionality. It adds simple yet powerful game states, pretty input handling, deployment safe asset-handling, a basic re-usable game object and automation of common task.
INSTALL
gem install chingu
QUICK START (TRY OUT THE EXAMPLES)
Chingu comes with 25+ examples demonstrating various parts of Chingu. Please browse the examples-directory in the Chingu root directory. The examples start out very simple. Watch out for instructions in the windows titlebar. Could be how to move the onscreen player or how to move the example forward. Usually it's arrowkeys and space. There's also more complex examples, like a clone of Conways game of life (en.wikipedia.org/wiki/Conway%27s_Game_of_Life) game_of_life.rb and example21_sidescroller_with_edit.rb where You can switch between playing and editing the level itself.
PROJECTS USING CHINGU
Links to some of the projects using/depending on Chingu:
github.com/Spooner/wrath (pixely 2-player action adventure in development)
ippa.se/games/unexpected_outcome.zip (LD#19 game compo entry by ippa)
bitbucket.org/philomory/ld19/ (LD#19 game compo entry by philomory)
github.com/Spooner/fidgit (GUI-lib )
github.com/Spooner/sidney (Sleep Is Death remake in ruby)
github.com/ippa/the_light_at_the_end_of_the_tunnel (LD#16 game compo entry)
github.com/ippa/gnorf (LD#18 game compo entry. Decent minigame with online highscores.)
github.com/ippa/holiday_droid (Work in progess platformer)
github.com/ippa/pixel_pang (Work in progress remake of the classic Pang)
github.com/ippa/whygone (An odd little platformer-puzzle-game for _why day)
github.com/erisdiscord/gosu-tmx (a TMX map loader)
github.com/rkachowski/tmxtilemap (Another TMX-class)
github.com/erisdiscord/gosu-ring (Secret of Mana-style ring menu for chingu/gosu)
github.com/deps/Berzerk (remake of the classic game. comes with robots.)
github.com/rkachowski/tigjamuk10 (“sillyness from tigjamuk - CB2 bistro in Cambridge, January 2010”)
github.com/zukunftsalick/ruby-raid (Remake of Ataris river raid, unsure of status)
github.com/edward/spacewar (a small game, unsure of status)
github.com/jstorimer/zig-zag (2D scrolling game, unsure of status)
… miss your Chingu project? Msg me on github and I'll add it to the list!
THE STORY
The last years I've dabbled around a lot with game development. I've developed games in both Rubygame and Gosu. I've looked at gamebox. Rubygame is a very capable framework with a lot of functionality (collision detection, very good event system etc). Gosu is way more minimalistic but also faster with OpenGL -acceleration. Gosu isn't likely to get much more complex since it does what it should do very well and fast.
After 10+ game prototypes and some finished smaller games I started to see patterns each time I started a new game. Making classes with x/y/image/other-parameters that I called update/draw on in the main loop. This became the basic Chingu::GameObject which encapsulates Gosus “Image.draw_rot” and enables automatic updating/drawing through “game_objects”.
There was always a huge big chunk of checking keyboard-events in the main loop. Borrowing ideas from Rubygame this has now become @player.keyboard(:left => :move_left, :space => :fire … etc.
CORE OVERVIEW
Chingu consists of the following core classes / concepts:
Chingu::Window
The main window, use it at you use Gosu::Window now. Calculates the framerate, takes care of states, handles chingu-formated input, updates and draws BasicGameObject / GameObjects automatically. Available throughout your source as $window (Yes, that's the only global Chingu has). You can also set various global settings. For example, self.factor=3, will make all fortcomming GameObjects scale 3 times.
Chingu::GameObject
Use this for all your in game objects. The player, the enemies, the bullets, the powerups, the loot laying around. It's very reusable and doesn't contain any game-logic (that's up to you!). Only stuff to put it on screen a certain way. If you do GameObject.create() instead of new() Chingu will keep save the object in the “game_object”-list for automatic updates/draws. GameObjects also have the nicer Chingu input-mapping: @player.input = { :left => :move_left, :right => :move_right, :space => :fire} Has either Chingu::Window or a Chingu::GameState as “parent”.
Chingu::BasicGameObject
For those who think GameObject is a too little fat, there's BasicGameObject (GameObject inherits from BasicGameObject). BasicGameObject is just an empty frame (no x,y,image accessors or draw-logic) for you to build on. It can be extended with Chingus trait-system though. The new() vs create() behavior of GameObject comes from BasicGameObject. BasicGameObject#parent points to either $window or a game state and is automatically set on creation time.
Chingu::GameStateManager
Keeps track of the game states. Implements a stack-based system with push_game_state and pop_game_state.
Chingu::GameState
A “standalone game loop” that can be activated and deactivated to control game flow. A game state is very much like a main gosu window. You define update() and draw() in a gamestate. It comes with 2 extras that main window doesn't have. #setup (called when activated) and #finalize (called when deactivated)
If using game states, the flow of draw/update/button_up/button_down is: Chingu::Window –> Chingu::GameStateManager –> Chingu::GameState. For example, inside game state Menu you call push_game_state(Level). When Level exists, it will go back to Menu.
Traits
Traits are extensions (or plugins if you so will) to BasicGameObjects included on the class-level. The aim is so encapsulate common behavior into modules for easy inclusion in your game classes. Making a trait is easy, just an ordinary module with the methods setup_trait(), update_trait() and/or draw_trait(). It currently has to be namespaced to Chingu::Traits for “traits” to work inside GameObject-classes.
OTHER CLASSES / HELPERS
Chingu::Text
Makes use of Image#from_text more rubyish and powerful. In it's core, another Chingu::GameObject + image genning with Image#from_text.
Chingu::Animation
Load and interact with tile-based animations. loop, bounce and access invidual frame(s) easily. An “@image = @animation.next” in your Player#update is usually enough to get you started!
Chingu::Parallax
A class for easy parallaxscrolling. Add layers with different damping, move the camera to generate a new snapshot. See example3.rb for more. NOTE: Doing Parallax.create when using a trait viewport will give bad results. If you need parallax together with viewport do Parallax.new and then manually doing parallax.update/draw.
Chingu::HighScoreList
A class to keep track of high scores, limit the list, automatic sorting on score, save/load to disc. See example13.rb for more.
Chingu::OnlineHighScoreList
A class to keep/sync online highscores to gamercv.com/. A lot more fun competing with others for positions then a local list.
Various Helpers
Both $window and game states gets some new graphical helpers, currently only 3, but quite useful:
fill() # Fills whole window with color 'color'. fill_rect() # Fills a given Rect 'rect' with Color 'color' fill_gradient() # Fills window or a given rect with a gradient between two colors. draw_circle() # Draws a circle draw_rect() # Draws a rect
If you base your models on GameObject (or BasicGameObject) you get:
Enemy.all # Returns an Array of all Enemy-instances Enemy.size # Returns the amount of Enemy-instances Enemy.destroy_all # Destroys all Enemy-instances Enemy.destroy_if(&block) # Destroy all objects for which &block returns true
BASICS / EXAMPLES
Chingu::Window
With Gosu the main window inherits from Gosu::Window. In Chingu we use Chingu::Window. It's a basic Gosu::Window with extra cheese on top of it. keyboard handling, automatic update/draw calls to all gameobjects, fps counting etc.
You're probably familiar with this very common Gosu pattern:
ROOT_PATH = File.dirname(File.expand_path(__FILE__)) class Game < Gosu::Window def initialize @player = Player.new end def update if Button::KbLeft @player.left elsif Button::KbRight @player.right end @player.update end def draw @player.draw end end class Player attr_accessor :x,:y,:image def initialize() @x = [:x] @y = [:y] @image = Image.new(File.join(ROOT_PATH, "media", "player.png")) end def move_left @x -= 1 end def move_right @x += 1 end def draw @image.draw(@x,@y,100) end end Game.new.show # Start the Game update/draw loop!
Chingu doesn't change the fundamental concept/flow of Gosu, but it will make the above code shorter:
# # We use Chingu::Window instead of Gosu::Window # class Game < Chingu::Window def initialize super # This is always needed if you override Window#initialize # # Player will automatically be updated and drawn since it's a Chingu::GameObject # You'll need your own Chingu::Window#update and Chingu::Window#draw after a while, but just put #super there and Chingu can do its thing. # @player = Player.create @player.input = {:left => :move_left, :right => :move_right} end end # # If we create classes from Chingu::GameObject we get stuff for free. # The accessors image,x,y,zorder,angle,factor_x,factor_y,center_x,center_y,mode,alpha. # We also get a default #draw which draws the image to screen with the parameters listed above. # You might recognize those from #draw_rot - # And in it's core, that's what Chingu::GameObject is, an encapsulation of draw_rot with some extras. # For example, we get automatic calls to draw/update with Chingu::GameObject, which usually is what you want. # You could stop this by doing: @player = Player.new(:draw => false, :update => false) # class Player < Chingu::GameObject def initialize(options) super(options.merge(:image => Image["player.png"]) end def move_left @x -= 1 end def move_right @x += 1 end end Game.new.show # Start the Game update/draw loop!
Roughly 50 lines became 26 more powerful lines. (you can do @player.angle = 100 for example)
If you've worked with Gosu for a while you're probably tired of passing around the window-parameter. Chingu solves this (as has many other developers) with a global variable $window. Yes, globals are bad, but in this case it kinda makes sense. It's used under the hood in various places.
The basic flow of Chingu::Window once show() is called is this (this is called one game iteration or game loop):
- Chingu::Window#draw() is called -- draw() is called on game objects belonging to Chingu::Window -- draw() is called on all game objects belonging to current game state - Chingu::Window#update() is called -- Input for Chingu::Window is processed -- Input for all game objects belonging to Chingu::Window is processed -- update() is called on all game objects belonging to Chingu::Window -- Input for current game state is processed -- Input for game objects belonging to current game state is processed -- update() is called on all game objects belonging to current game state
… the above is repeatet until game exists.
Chingu::GameObject
This is our basic “game unit”-class, meaning most in game objects (players, enemies, bullets etc) should be inherited from Chingu::GameObject. The basic ideas behind it are:
Encapsulate only the very common basics that Most in game objects need
Keep naming close to Gosu, but add smart convenient methods / shortcuts and a more rubyish feeling
No game logic allowed in GameObject, since that's not likely to be useful for others.
It's based around Image#draw_rot. So basically all the arguments that you pass to draw_rot can be passed to GameObject#new when creating a new object. An example using almost all arguments would be:
# # You probably recognize the arguments from # @player = Player.new(:image => Image["player.png"], :x=>100, :y=>100, :zorder=>100, :angle=>45, :factor_x=>10, :factor_y=>10, :center_x=>0, :center_y=>0) # # A shortcut for the above line would be # @player = Player.new(:image => "player.png", :x=>100, :y=>100, :zorder=>100, :angle=>45, :factor=>10, :center=>0) # # I've tried doing sensible defaults: # x/y = [middle of the screen] for super quick display where it should be easy in sight) # angle = 0 (no angle by default) # center_x/center_y = 0.5 (basically the center of the image will be drawn at x/y) # factor_x/factor_y = 1 (no zoom by default) # @player = Player.new # # By default Chingu::Window calls update & draw on all GameObjects in it's own update/draw. # If this is not what you want, use :draw and :update # @player = Player.new(:draw => false, :update => false)
Input
One of the core things I wanted a more natural way of inputhandling. You can define input -> actions on Chingu::Window, Chingu::GameState and Chingu::GameObject. Like this:
# # When left arrow is pressed, call @player.turn_left ... and so on. # @player.input = { :left => :turn_left, :right => :turn_right, :left => :halt_left, :right => :halt_right } # # In Gosu the equivalent would be: # def (id) @player.turn_left if id == Button::KbLeft @player.turn_right if id == Button::KbRight end def (id) @player.halt_left if id == Button::KbLeft @player.halt_right if id == Button::KbRight end
Another more complex example:
# # So what happens here? # # Pressing P would create an game state out of class Pause, cache it and activate it. # Pressing ESC would call Play#close # Holding down LEFT would call Play#move_left on every game iteration # Holding down RIGHT would call Play#move_right on every game iteration # Releasing SPACE would call Play#fire # class Play < Chingu::GameState def initialize self.input = { :p => Pause, :escape => :close, :holding_left => :move_left, :holding_right => :move_right, :released_space => :fire } end end class Pause < Chingu::GameState # pause logic here end
In Gosu the above code would include code in button_up(), button_down() and a check for button_down?() in update().
Every symbol can be prefixed by either “released_” or “holding_” while no prefix at all defaults to pressed once.
So, why not :up_space or :release_space instead of :released_space? :up_space doesn't sound like english, :release_space sounds more like a command then an event.
Or :hold_left or :down_left instead of :holding_left? :holding_left sounds like something that's happening over a period of time, not a single trigger, which corresponds well to how it works.
And with the default :space => :something you would imagine that :something is called once. You press :space once, :something is executed once.
GameState / GameStateManager
Chingu incorporates a basic push/pop game state system (as discussed here:).
Game states is a way of organizing your intros, menus, levels.
Game states aren't complicated. In Chingu a GameState is a class that behaves mostly like your default Gosu::Window (or in our case Chingu::Window) game loop.
# A simple GameState-example class Intro < Chingu::GameState def initialize() # called as usual when class is created, load resources and simular here end def update # game logic here end def draw # screen manipulation here end # Called Each time when we enter the game state, use this to reset the gamestate to a "virgin state" def setup @player.angle = 0 # point player upwards end # Called when we leave the game state def finalize push_game_state(Menu) # switch to game state "Menu" end end
Looks familiar yet? You can activate the above game state in 2 ways
class Game < Chingu::Window def initialize # # 1) Create a new Intro-object and activate it (pushing to the top). # This version makes more sense if you want to pass parameters to the gamestate, for example: # push_game_state(Level.new(:level_nr => 10)) # push_game_state(Intro.new) # # 2) This leaves the actual object-creation to the game state manager. # Intro#initialize() is called, then Intro#setup() # push_game_state(Intro) end end
Another example:
class Game < Chingu::Window def initialize # # We start by pushing Menu to the game state stack, making it active as it's the only state on stack. # # :setup => :false will skip setup() from being called (standard when switching to a new state) # push_game_state(Menu, :setup => false) # # We push another game state to the stack, Play. We now have 2 states, which active being first / active. # # :finalize => false will skip finalize() from being called on game state # that's being pushed down the stack, in this case Menu.finalize(). # push_game_state(Play, :finalize => false) # # Next, we remove Play state from the stack, going back to the Menu-state. But also: # .. skipping the standard call to Menu#setup (the new game state) # .. skipping the standard call to Play#finalize (the current game state) # # :setup => false can for example be useful when pop'ing a Pause game state. (see example4.rb) # pop_game_state(:setup => false, :finalize => :false) # # Replace the current game state with a new one. # # :setup and :finalize options are available here as well but: # .. setup and finalize are always skipped for Menu (the state under Play and Credits) # .. the finalize option only affects the popped game state # .. the setup option only affects the game state you're switching to # switch_game_state(Credits) end end
A GameState in Chingu is just a class with the following instance methods:
initialize() - as you might expect, called when GameState is created.
setup() - called each time the game state becomes active.
button_down(id) - called when a button is down.
button_up(id) - called when a button is released.
update() - just as in your normal game loop, put your game logic here.
draw() - just as in your normal game loop, put your screen manipulation here.
finalize() - called when a game state de-activated (for example by pushing a new one on top with push_game_state)
Chingu::Window automatically creates a @game_state_manager and makes it accessible in our game loop. By default the game loop calls update() / draw() on the the current game state.
Chingu also has a couple of helpers-methods for handling the game states: In a main loop or in a game state:
push_game_state(state) - adds a new gamestate on top of the stack, which then becomes the active one
pop_game_state - removes active gamestate and activates the previous one
switch_game_state(state) - replaces current game state with a new one
current_game_state - returns the current game state
previous_game_state - returns the previous game state (useful for pausing and dialog boxes, see example4.rb)
pop_until_game_state(state) - pop game states until given state is found
clear_game_states - removes all game states from stack
To switch to a certain gamestate with a keypress use Chingus input handler:
class Intro < Chingu::GameState def setup self.input = { :space => lambda{push_gamestate(Menu.new)} } end end
Or Chingus shortcut:
class Intro < Chingu::GameState def setup self.input = { :space => Menu } end end
Chingus inputhandler will detect that Menu is a GameState-class, create a new instance and activate it with push_game_state().
GOTCHA: Currently you can't switch to a new game state from Within GameState#initialize() or GameState#setup()
Premade game states
Chingu comes with some pre-made game states. A simple but usefull one is GameStates::Pause. Once pushed it will draw the previous game state but not update it – effectively pausing it. Some others are:
GameStates::EnterName
A gamestate where a gamer can select letters from a A-Z list, contructing his alias. When he's done he selects “GO!” and a developer-specified callback will be called with the name/alias as argument.
push_game_state GameStates::EnterName.new(:callback => method(:add)) def add(name) puts "User entered name #{name}" end
Combine GameStates::EnterName with class OnlineHighScoreList, a free acount @ and you have a premade stack to provide your 48h gamecompo entry with online high scores that adds an extra dimension to your game. See example16 for a full working example of this.
GameStates::Edit
The biggest and most usable is GameStates::Edit which enables fast 'n easy level-building with game objects. Start example19 and press 'E' to get a full example.
Edit commands / shortcuts:
F1: Help screen 1-5: create object 1..5 shown in toolbar at mousecursor CTRL+A: select all objects (not in-code-created ones though) CTRL+S: Save E: Save and Quit Q: Quit (without saving) ESC: Deselect all objects Right Mouse Button Click: Copy object bellow cursor for fast duplication Arrow-keys (with selected objects): Move objects 1 pixel at the time Arrow-keys (with no selected objects): Scroll within a viewport Bellow keys operates on all currently selected game objects ----------------------------------------------------------------------------------- DEL: delete selected objects BACKSPACE: reset angle and scale to default values Page Up: Increase zorder Page Down: Decrease zorder R: scale up F: scale down T: tilt left G: tilt right Y: inc zorder H: dec zorder U: less transparency J: more transparency Mouse Wheel (with no selected objects): Scroll viewport up/down Mouse Wheel: Scale up/down SHIFT + Mouse Wheel: Tilt left/right CTRL + Mouse Wheel: Zorder up/down ALT + Mouse Wheel: Transparency less/more
Move mouse cursor close to the window border to scroll a viewport if your game state has one.
If you're editing game state BigBossLevel the editor will save to big_boss_level.yml by default. All the game objects in that file are then easily loaded with the load_game_objects command.
Both Edit.new and load_game_objects take parameters as
:file => "enemies.yml" # Save edited game objects to file enemies.yml :debug => true # Will print various debugmsgs to console, usefull if something behaves oddly :except => Player # Don't edit or load objects based on class Player
WorkFlow
(This text is under development)
The setup-method
If a setup() is available in a instance of Chingu::GameObject, Chingu::Window and Chingu::GameState it will automatically be called. This is the perfect spot to include various setup/init-tasks like setting colors or loading animations (if you're not using the animation-trait). You could also override initialize() for this purpose but it's been proven prone to errors again and again. Compare the 2 snippets below:
# Easy to mess up, forgetting options or super def initialize( = {}) super @color = Color::WHITE end # Less code, easier to get right and works in GameObject, Window and GameState # Feel free to call setup() anytime, there's no magic about ut except it's autocalled once on object creation time. def setup @color = Color::WHITE end
Traits
Traits (sometimes called behaviors in other frameworks) is a way of adding logic to any class inheriting from BasicGameObject / GameObject. Chingus trait-implementation is just ordinary ruby modules with 3 special methods:
- setup_trait - update_trait - draw_trait
Each of those 3 methods must call “super” to continue the trait-chain.
Inside a certian trait-module you can also have a module called ClassMethods, methods inside that module will be added, yes you guessed it, as class methods. If initialize_trait is defined inside ClassMethods it will be called class-evaluation time (basicly on the trait :some_trait line).
A simple trait could be:
module Chingu module Trait module Inspect # # methods namespaced to ClassMethods get's extended as ... class methods! # module ClassMethods def initialize_trait() # possible initialize stuff here end def inspect "There's {self.size} active instances of class {self.to_s}" end end # # Since it's namespaced outside ClassMethods it becomes a normal instance-method # def inspect "Hello I'm an #{self.class.to_s}" end # # setup_trait is called when a object is created from a class that included the trait # you most likely want to put all the traits settings and option parsing here # def setup_trait() @long_inspect = true end end end end class Enemy < GameObject trait :inspect # includes Chingu::Trait::Inspect and extends Chingu::Trait::Inspect::ClassMethods end 10.times { Enemy.create } Enemy.inspect # => "There's 10 active instances of class Enemy" Enemy.all.first.inspect # => "Hello I'm a Enemy"
Example of using traits :velocity and :timer. We also use GameObject#setup which will automtically be called ad the end of GameObject#initialize. It's often a little bit cleaner to use setup() then to override initialize().
class Ogre < Chingu::GameObject traits :velocity, :timer def setup @red = Gosu::Color.new(0xFFFF0000) @white = Gosu::Color.new(0xFFFFFFFF) # # some basic physics provided by the velocity-trait # These 2 parameters will affect @x and @y every game-iteration # So if your ogre is standing on the ground, make sure you cancel out the effect of @acceleration_y # self.velocity_x = 1 # move constantly to the right self.acceleration_y = 0.4 # gravity is basicly a downwards acceleration end def hit_by(object) # # during() and then() is provided by the timer-trait # flash red for 300 millisec when hit, then go back to normal # during(100) { self.color = @red; self.mode = :additive }.then { self.color = @white; self.mode = :default } end end
The flow for a game object then becomes:
-- creating a GameObject class X ( with a "trait :bounding_box, :scale => 0.80" ) 1) trait gets merged into X, instance and class methods are added 2) GameObject.initialize_trait(:scale => 0.80) (initialize_trait is a class-method!) -- creating an instance of X 1) GameObject#initialize(options) 2) GameObject#setup_trait(options) 3) GameObject#setup(options) -- each game iteration 3) GameObject#draw_trait 4) GameObject#draw 5) GameObject#update_trait 6) GameObject#update
There's a couple of traits included as default in Chingu:
Trait “sprite”
This trait fuels GameObject. A GameObject is a BasicGameObject + the sprite-trait. Adds accessors :x, :y, :angle, :factor_x, :factor_y, :center_x, :center_y, :zorder, :mode, :visible, :color. See documentation for GameObject for how it works.
Trait “timer”
Adds timer functionality to your game object
during(300) { self.color = Color.new(0xFFFFFFFF) } # forces @color to white every update for 300 ms after(400) { self.destroy } # destroy object after 400 ms between(1000,2000) { self.angle += 10 } # starting after 1 second, modify angleevery update during 1 second every(2000) { Sound["bleep.wav"].play } # play bleep.wav every 2 seconds
Trait “velocity”
Adds accessors velocity_x, velocity_y, acceleration_x, acceleration_y, max_velocity to game object. They modify x, y as you would expect. *speed / angle will come*
Trait “bounding_box”
Adds accessor 'bounding_box', which returns an instance of class Rect based on current image size,x,y,factor_x,factor_y,center_x,center_y You can also scale the calculated rect with trait-options:
# This would return a rect slightly smaller then the image. # Make player think he's better @ dodging bullets then he really is ;) trait :bounding_box, :scale => 0.80 # Make the bounding box bigger then the image # :debug => true shows the actual box in red on the screen trait :bounding_box, :scale => 1.5, :debug => true
Inside your object you will also get a cache_bounding_box(). After that the bounding_box will be quicker but it will not longer adapt to size-changes.
Trait “bounding_circle”
Adds accessor 'radius', which returns a Fixnum based on current image size,factor_x and factor_y You can also scale the calculated radius with trait-options:
# This would return a radius slightly bigger then what initialize was calculated trait :bounding_circle, :scale => 1.10 # :debug => true shows the actual circle in red on the screen trait :bounding_circle, :debug => true
Inside your object you will also get a cache_bounding_circle(). After that radius() will be quicker but it will not longer adapt to size-changes.
Trait “animation”
Automatically load animations depending on the class-name. Useful when having a lot of simple classes thats mainpurpose is displaying an animation. Assuming the below code is included in a class FireBall.
# # If a fire_ball_10x10.png/bmp exists, it will be loaded as a tileanimation.
# 10x10 would indicate the width and height of each tile so Chingu knows hows to cut it up into single frames.
# The animation will then be available in animations[:default] as an Animation-instance. # # If more then 1 animation exist, they'll will be loaded at the same time, for example: # fire_ball_10x10_fly.png # Will be available in animations[:fly] as an Animation-instance # fire_ball_10x10_explode.png # Will be available in animations[:explode] as an Animation-instance #
# The below example will set the 200ms delay between each frame on all animations loaded.
# trait :animation, :delay => 200
Trait “effect”
Adds accessors rotation_rate, fade_rate and scale_rate to game object. They modify angle, alpha and factor_x/factor_y each update. Since this is pretty easy to do yourself this trait might be up for deprecation.
Trait “viewport”
A game state trait. Adds accessor viewport. Set viewport.x and viewport.y to. Basically what viewport.x = 10 will do is draw all game objects 10 pixels to the left of their ordinary position. Since the viewport has moved 10 pixels to the right, the game objects will be seen “moving” 10 pixels to the left. This is great for scrolling games. You also have:
viewport.game_area = [0,0,1000,400] # Set scrolling limits, the effective game world if you so will viewport.center_around(object) # Center viweport around an object which responds to x() and y() viewport.lag = 0.95 # Set a lag-factor to use in combination with x_target / y_target viewport.x_target = 100 # This will move viewport towards X-coordinate 100, the speed is determined by the lag-parameter.
NOTE: Doing Parallax.create when using a trait viewport will give bad results. If you need parallax together with viewport do Parallax.new and then manually doing parallax.update/draw.
Trait “collision_detection”
Adds class and instance methods for basic collision detection.
# Class method example # This will collide all Enemy-instances with all Bullet-instances using the attribute #radius from each object. Enemy.each_bounding_circle_collision(Bullet) do |enemy, bullet| end # You can also use the instance methods. This will use the Rect bounding_box from @player and each EnemyRocket-object. @player.each_bounding_box_collision(EnemyRocket) do |player, enemyrocket| player.die! end # # each_collision automatically tries to access #radius and #bounding_box to see what a certain game object provides # It knows how to collide radius/radius, bounding_box/bounding_box and radius/bounding_box ! # Since You're not explicity telling what collision type to use it might be slighty slower. # [Player, PlayerBullet].each_collision(Enemy, EnemyBullet) do |friend, foe| # do something end # # You can also give each_collision() an array of objects. # Ball.each_collsion(@array_of_ground_items) do |ball, ground| # do something end
Trait “asynchronous”
Allows your code to specify a GameObject's behavior asynchronously, including tweening, movement and even method calls. Tasks are added to a queue to be processed in order; the task at the front of the queue is updated each tick and removed when it has finished.
# Simple one-trick example # This will cause an object to move from its current location to 64,64. @guy.async.tween :x => 64, :y => 64 # Block syntax example # This will cause a line of text to fade out and vanish. Chingu::Text.trait :asynchronous = Chingu::Text.new 'Goodbye, World!' .async do |q| q.wait 500 q.tween 2000, :alpha => 0, :scale => 2 q.call :destroy end
Currently available tasks are wait(timeout, &condition), tween(timeout, properties), call(method, *arguments) and exec { … }.
For a more complete example of how to use this trait, see examples/example_async.rb.
(IN DEVELOPMENT) Trait “retrofy”
Providing easier handling of the “retrofy” effect (non-blurry zoom) Aims to help out when using zoom-factor to create a retrofeeling with big pixels. Provides screen_x and screen_y which takes the zoom into account Also provides new code for draw() which uses screen_x / screen_y instead of x / y
Assets / Paths
You might wonder why this is necessary in the straight Gosu example:
ROOT_PATH = File.dirname(File.expand_path(__FILE__)) @image = Image.new(File.join(ROOT_PATH, "media", "player.png"))
It enables you to start your game from any directory and it will still find your assets (pictures, samples, fonts etc..) correctly. For a local development version this might not be important, you're likely to start the game from the games root-dir. But as soon as you try to deploy (for example to windows with OCRA - github.com/larsch/ocra/tree/master) you'll run into trouble of you don't do it like that.
Chingu solves this problem behind the scenes for the most common assets. The 2 lines above can be replaced with:
Image["player.png"]
You also have:
Sound["shot.wav"] Song["intromusic.ogg"] Font["arial"] Font["verdana", 16] # 16 is the size of the font
The default settings are like this:
Image["image.png"] -- searches directories ".", "images", "gfx" and "media" Sample["sample.wav"] -- searches directories ".", "sounds", "sfx" and "media" Song["song.ogg"] -- searches directories ".", "songs", "sounds", "sfx" and "media" Font["verdana"] -- searches directories ".", "fonts", "media"
Add your own searchpaths like this:
Gosu::Image.autoload_dirs << File.join(ROOT, "gfx") Gosu::Sound.autoload_dirs << File.join(ROOT, "samples")
This will add pathtoyourgamegfx and pathtoyourgamesamples to Image and Sound.
Thanks to Jacious of rubygame-fame (rubygame.org/) for his named resource code powering this.
Text
Text is a class to give the use of Gosu::Font more rubyish feel and fit it better into Chingu.
# Pure Gosu
@font = Gosu::Font.new($window, "verdana", 30) @font.draw("A Text", 200, 50, 55, 2.0)
# Chingu
@text = Chingu::Text.create("A Text", :x => 200, :y => 50, :zorder => 55, :factor_x => 2.0) @text.draw
@text.draw is usually not needed as Text is a GameObject and therefore automatically updated/drawn (it #create is used instead of #new) It's not only that the second example is readable by ppl now even familiar with Gosu, @text comes with a number of changeable properties, x,y,zorder,angle,factor_x,color,mode etc. Set a new x or angle or color and it will instantly update on screen.
DEPRECATIONS
Chingu (as all libraries) will sometimes break an old API. Naturally we try to not do this, but sometimes it's nessesary to take the library forward. If your old game stops working with a new chingu it could depend on some of the following:
Listing game objects:
class Enemy < GameObject; end; class Troll < Enemy; end; class Witch < Enemy; end;
Chingu 0.7
Enemy.all # Will list objects created with Enemy.create, Troll.create, Witch.create
Chingu ~0.8+
Enemy.all # list only list objects created with Enemy.create
We gained a lot of speed breaking that API.
MISC / FAQ
How do I access my main-window easily?
Chingu keeps a global variable, $window, which contains the Chingu::Window instance. Since Chingu::Window is just Gosu::Window + some cheese you can do your $window.button_down?, $window.draw_line() etc from anywhere. See for a full set of methods.
How did you decide on naming of methods / classes?
There's 1 zillion ways of naming stuff. As a general guideline I've tried to follow Gosus naming. If Gosu didn't have a good name for a certain thing/method I've checked Ruby itself and then Rails since alot of Ruby-devs are familiar with Rails. GameObject.all is naming straight from rails for example. Most stuff in GameObject follow the naming from Gosus Image#draw_rot.
As far as possible, use correct rubyfied english game_objects, not gameobjects. class HighScore, not Highscore.
WHY?
Plain Gosu is very minimalistic, perfect to build some higher level logic on!
Deployment and asset handling should be simple
Managing game states/scenes (intros, menus, levels etc) should be simple
There are a lot patterns in game development
OPINIONS
Less code is usually better
Hash arguments FTW. And it becomes even better in 1.9.
Don't separate too much from Gosus core-naming
CREDITS:
Spooner (input-work, tests and various other patches)
Jacob Huzak (sprite-trait, tests etc)
Jacius of Rubygame (For doing cool stuff that's well documented as re-usable). So far rect.rb and named_resource.rb is straight outta Rubygame.
Banister (of texplay fame) for general feedeback and help with ruby-internals and building the trait-system
Jduff for input / commits
Jlnr,Philomory,Shawn24,JamesKilton for constructive feedback/discussions
Ariel Pillet for codesuggestions and cleanups
Deps for making the first real full game with Chingu (and making it better in the process)
Thanks to github.com/tarcieri for require_all code, good stuff
.. Did I forget anyone here? Msg me on github.
REQUIREMENTS:
Gosu, preferable the latest version
Ruby 1.9.1+ or 1.8.7+
gem 'texplay' for some bonus Image-pixel operations, not needed otherwise
TODO - this list is Discontinued and no longer updated!
add :padding and :align => :topleft etc to class Text
(skip) rename Chingu::Window so 'include Chingu' and 'include Gosu' wont make Window collide
(done) BasicObject vs GameObject vs ScreenObject => Became BasicGameObject and GameObject
(50%) some kind of componentsystem for GameObject (which should be cleaned up)
(done) scale <–> growth parameter. See trait “effect”
(done) Enemy.all … instead of game_objects_of_type(Enemy) ? could this be cool / understandable?
(done) Don't call .update(time) with timeparameter, make time available thru other means when needed.
(10% done) debug screen / game state.. check out shawn24's elite irb sollution :)
(done) Complete the input-definitions with all possible inputs (keyboard, gamepad, mouse)!
(done) Complete input-stuff with released-states etc
(done) More gfx effects, for example: fade in/out to a specific color (black makes sense between levels).
(posted request on forums) Summon good proven community gosu snippets into Chingu
(done) Generate docs @ ippa.github.com- rdoc.info/projects/ippa/chingu !
(done) A good scene-manager to manage welcome screens, levels and game flow- GameStateManager / GameState !
(20% done) make a playable simple game in examples\ that really depends on game states
(done) Make a gem- first gem made on github
(done) Automate gemgenning rake-task even more
(done) More examples when effects are more complete
class ChipmunkObject
(done) class Actor/MovingActor with maybe a bit more logic then the basic GameObject.
(60% done) Spell check all docs, sloppy spelling turns ppl off. tnx jduff ;).
Tests
(done) Streamline fps / tick code
(done) Encapsulate Font.new / draw_rot with a “class Text < GameObject”
(10% done) Make it possible for ppl to use the parts of Chingu they like
(done) At least make GameStateManager really easy to use with pure Gosu / Document it!
(50% done) Get better at styling rdocs
(done) all “gamestate” -> “game state”
(skipping) intergrate MovieMaker - solve this with traits instead.
A more robust game state <-> game_object system to connect them together.
FIX example4: :p => Pause.new would Change the “inside_game_state” to Pause and make @player belong to Pause.
Old History, now deprecated:
0.6 / 2009-11-21
More traits, better input, fixes This file is deprecated – see github commit-history instead!
0.5.7 / 2009-10-15
See github commithistory.
0.5 / 2009-10-7
Big refactor of GameObject. Now has BasicGameObject as base. A first basic “trait”-system where GameObject “has_traits :visual, :velocity” etc. Tons of enhancements and fixes. Speed optimization. More examples.
0.4.5 / 2009-08-27
Tons of small fixes across the board. Started on GFX Helpers (fill, fill_rect, fill_gradient so far). A basic particle system (see example7.rb)
0.4.0 / 2009-08-19
Alot of game state love. Now also works stand alone with pure gosu.
0.3.0 / 2009-08-14
Too much to list. remade inputsystem. gamestates are better. window.rb is cleaner. lots of small bugfixes. Bigger readme.
0.2.0 / 2009-08-10
tons of new stuff and fixes. complete keymap. gamestate system. moreexamples/docs. better game_object.
0.0.1 / 2009-08-05
first release
"If you program and want any longevity to your work, make a game. All else recycles, but people rewrite architectures to keep games alive.", _why | http://www.rubydoc.info/github/ippa/chingu/file/README.rdoc | CC-MAIN-2014-52 | en | refinedweb |
Searching Active Directory with .NET (Visual Studio 2005)
- Posted: Nov 02, 2005 at 6:56 PM
- 97,425 Views
- 7 Comments
Something went wrong getting user information from Channel 9
Something went wrong getting user information from MSDN
Something went wrong getting the Visual Studio Achievements
Right click “Save as…”
Federal Developer Evangelist, Robert Shelton, takes you through a 12 minute walkthrough/demonstration of how to search Active Directory for users, groups, and other AD Objects.
This demonstration is using the DirectoryServices namespace of the .NET framework. The demonstration is using Visual Studio 2005, but the code will also work as written for Visual Studio 2003.
You can find the code at my blog:
My other AD Screencasts:
- Adding user to AD with .NET
- Adding groups and users to groups with .NET- AD Searchfilter (Querying) Syntax:
- List of SearchScope options:
~ Robert Shel
I couldn't find the VB-Code in the net, so I just ported it myself:
' If you want to search in a specific path, here's the right spot.
' Just insert the path into "As New DirectoryEntry("LDAP://OU=Accounting,DC=World,DC=com")"
Dim Entry As New DirectoryEntry
Dim Searcher As New DirectorySearcher(Entry)
Dim AdObj As SearchResult
Searcher.SearchScope = SearchScope.Subtree
Searcher.Filter() = "(ObjectClass=user)"
For Each AdObj In Searcher.FindAll
Label1.Text = Label1.Text & "CN=" & AdObj.Properties("CN").Item(0) & " | Path=" & AdObj.Path & "<br>"
I coded it with ASP.net for a webapplication.
But the App does exactely the same as the first example.
I hope you can use it. | http://channel9.msdn.com/Blogs/RobertShelton/Searching-Active-Directory-with-NET-Visual-Studio-2005?format=progressive | CC-MAIN-2014-52 | en | refinedweb |
> <sys:reference
Yes, you need to add a reference to your GBeanInfo with this name, and
then either add a constructor argument or a setter on the GBean class.
David pointed to configs/system-database/src/plan/plan.xml, which has this:
<gbean name="NonTransactionalThreadPooledTimer"
class="org.apache.geronimo.timer.jdbc.JDBCStoreThreadPooledNonTransactionalTimer">
...
<reference name="ManagedConnectionFactoryWrapper"><name>SystemDatasource
</name></reference>
Now if you look in the class
JDBCStoreThreadPooledNonTransactionalTimer, the GBean info contains:
infoFactory.addReference("ManagedConnectionFactoryWrapper",
ConnectionFactorySource.class,
NameFactory.JCA_MANAGED_CONNECTION_FACTORY);
Then
infoFactory.setConstructor(new String[]{"ManagedConnectionFactoryWrapper", ...
And if you look in the constructor
public JDBCStoreThreadPooledNonTransactionalTimer(ConnectionFactorySource
managedConnectionFactoryWrapper, ...
So this is how the reference works -- you add it to your GBeanInfo
with a name and class, then either just add a setter method, or list
it by name as a constructor argument in the GBean info and add the
actual constructor arg. Either the setter type or constructor arg
type should match the class argument to the GBeanInfo addReference.
The name of the reference in the plan should match the name of the
reference in the GBeanInfo. The connection factory class is
ConnectionFactorySource, and I expect the destination class to use
would be AdminObjectSource.
> <sys:name>WHAT IS THIS?</sys:name>
This should match the connectiondefinition-instance/name for the
connection factory, and I think the
adminobject-instance/message-destination-name for the admin object.
Thanks,
Aaron
On 6/11/06, Neal Sanche <neal@nsdev.org> wrote:
> Thanks David,
>
> I've looked for the examples you mention, and probably because it's
> late, but I'm not getting it. Let's see if I can follow it at least a
> little. I can't use JNDI, that much is clear. The only thing in the
> GBean's JNDI is the JMXConnector object.
>
> So, you're saying that I can use constructor dependency injection
> instead. So, in my <sys:gbean>, I would have to add a couple of
> <sys:reference> tags, the question is what do I use for the 'name'
> attribute and the <sys:name> child element of those references?
>
> Then, after I've done that, and set up the references in my GBean Info,
> I can set up a constructor that takes those parameters. I've been
> looking at the JDBCStoreThreadPooledNonTransactionalTimer class, which
> shows exactly how to do that. That makes sense.
>
> I guess all I need is direction on what my <sys:gbean> should look like?
>
> <sys:gbean
> <sys:attribute5000</sys:attribute>
> <sys:reference
> <sys:name>WHAT IS THIS?</sys:name>
> </sys:reference>
> </sys:gbean>
>
> If you could tell me how to figure out the two names, I'll try it and
> see if I get references to my queue and connection factories.
>
> Thanks!
>
> -Neal
>
> David Jencks wrote:
> > Hi Neal,
> >
> > This isn't going to work using jndi. We only supply the spec-required
> > local jndi java:comp context to j2ee components, and there is no
> > global jndi context. Therefore a gbean with a thread is not going to
> > have a usable jndi context.
> >
> > What you can do instead is use gbean references and constructor
> > dependency injection. This is a bit strange, because the gbeans
> > involved are not queues or connection factories but holders for them,
> > and to get the queue or connection factory you have to call a
> > $getResource method on them.
> >
> > For instance, for the connection factory you could include in your
> > gbean into and constuctor args a reference to a
> > ManagedConnectionFactoryWrapper, and get the connection factory by
> >
> > ConnectionFactory connectionFactory = (ConnectionFactory)
> > mcfWrapper.$getResource();
> >
> > in your constructor: similarly
> > Queue queue = (Queue) adminObjectWrapper.$getResource();
> >
> > If you look in the system-database plan and the timer module you can
> > see an example of this, the timer gbean has a reference to a
> > MCFWrapper gbean and gets the datasource from it. Activemq journal
> > has a similar reference to get a jdbc datasource for long-term
> > persistence.
> >
> > Hope this helps,
> > david jencks
> >
> >
> > On Jun 10, 2006, at 10:20 PM, Neal Sanche wrote:
> >
> >> Hi All,
> >>
> >> I am doing some research for a Geronimo 1.1 tutorial I'm putting
> >> together. I'm writing a little reminder application that uses an MDB
> >> that is periodically sent a JMS message to wake it up to perform a
> >> little database lookup and send out a bunch of reminder email
> >> messages. That's the theory anyway.
> >>
> >> I have managed to get my MDB to deploy after creating a JMS Resource
> >> Adaptor through the Admin console, and then setting up the dependency
> >> and MessageDriven section in my open-ejb.xml deployment plan, that
> >> looks like the following:
> >>
> >> <?xml version="1.0" encoding="UTF-8"?>
> >> <openejb-jar >> xmlns: >> xmlns: >> xmlns: >> xmlns:
> >> <sys:environment>
> >> <sys:moduleId>
> >> <sys:groupId>default</sys:groupId>
> >> <sys:artifactId>ReminderBackend</sys:artifactId>
> >> <sys:version>1.0</sys:version>
> >> <sys:type>car</sys:type>
> >> </sys:moduleId>
> >> <sys:dependencies>
> >> <sys:dependency>
> >> <sys:groupId>console.jms</sys:groupId>
> >> <sys:artifactId>ReminderMessageAdaptor</sys:artifactId>
> >> <sys:version>1.0</sys:version>
> >> <sys:type>rar</sys:type>
> >> </sys:dependency>
> >> </sys:dependencies>
> >> </sys:environment>
> >> <enterprise-beans>
> >> <message-driven>
> >> <ejb-name>TimerTick</ejb-name>
> >> <nam:resource-adapter>
> >> <nam:resource-link>ReminderMessageAdaptor</nam:resource-link>
> >> </nam:resource-adapter>
> >> <activation-config>
> >> <activation-config-property>
> >>
> >> <activation-config-property-name>destination</activation-config-property-name>
> >>
> >>
> >> <activation-config-property-value>TimerTickQueue</activation-config-property-value>
> >>
> >> </activation-config-property>
> >> <activation-config-property>
> >>
> >> <activation-config-property-name>destinationType</activation-config-property-name>
> >>
> >>
> >> <activation-config-property-value>javax.jms.Queue</activation-config-property-value>
> >>
> >> </activation-config-property>
> >> </activation-config>
> >> </message-driven>
> >> </enterprise-beans>
> >>
> >> </openejb-jar>
> >>
> >>
> >> Now, that's all good. It deploys without any warnings. Now, I want to
> >> send a message to the queue. So, I thought to myself, 'Self, I'll
> >> write a GBean to start a thread to send a TextMessage to the queue
> >> periodically.' So I wrote one, and added the following to my
> >> deployment descriptor:
> >>
> >> <sys:gbean
> >> <sys:attribute5000</sys:attribute>
> >> </sys:gbean>
> >>
> >> I am getting called back for the start() and stop() lifecycle
> >> methods, and so I wrote my thread to be interruptable, and that's all
> >> working. My thread wakes up and calls the following method (ignore
> >> the resource issues here, I haven't written my finally block yet):
> >>
> >>
> >> private void SendTickMessage() {
> >> System.err.println("Tick");
> >> try {
> >> Queue queue = TimerTickUtil.getQueue();
> >> System.err.println(queue.getQueueName());
> >> QueueConnection conn = TimerTickUtil.getQueueConnection();
> >> Session session =
> >> conn.createSession(false,Session.AUTO_ACKNOWLEDGE);
> >> TextMessage msg = session.createTextMessage("TICK");
> >> MessageProducer producer = session.createProducer(queue);
> >> producer.send(msg);
> >> } catch (Exception ex) {
> >> ex.printStackTrace();
> >> try {
> >> InitialContext ctx = new InitialContext();
> >> NamingEnumeration<NameClassPair> en =
> >> ctx.list("");
> >> while(en.hasMore()) {
> >> NameClassPair pair = en.next();
> >> System.err.println(pair.getName() + " -> "+
> >> pair.getClassName());
> >> }
> >> } catch (Exception ex2) {
> >> ex2.printStackTrace();
> >> }
> >> }
> >> }
> >>
> >> Now, the problem is that TimerTickUtil.getQueue() is trying to do a
> >> JNDI lookup for the queue, but I have absolutely no idea what form
> >> that JNDI name will take? I ran JConsole to get some idea of the JNDI
> >> namespace, but that was no good, and hence the silly code in the
> >> exception handler that tries to list the entire InitialContext list.
> >> But that's not giving me much to go on either.
> >>
> >> Can anyone tell me what the JNDI name of my queue will be from the
> >> above information? Is there a sample MDB app in the source tree that
> >> shows how this is done?
> >>
> >> I'll keep hammering at it and see if I can figure it out.
> >>
> >> -Neal
>
> | http://mail-archives.apache.org/mod_mbox/geronimo-dev/200606.mbox/%3C74e15baa0606110702j45c46b7brc287aa062c8b11b9@mail.gmail.com%3E | CC-MAIN-2014-52 | en | refinedweb |
25 May 2012 10:58 [Source: ICIS news]
SINGAPORE (ICIS)--China Resources is running its new 300,000 tonne/year polyethylene terephthalate (PET) bottle chip line at ?xml:namespace>
The company has achieved on-spec production at the line, which was started up on 18 May, the source said.
China Resources plans to start up another 300,000 tonne/year PET line in
The company started a 300,000 tonne/year PET unit at the same site on 19 April, which was running at around 90% capacity, the source added.
The company’s 400,000 tonne/year PET line in
After all its new plants start up China Resources will become the biggest PET bottle chip producer in the country with a 1.3m tonne/year nameplate | http://www.icis.com/Articles/2012/05/25/9563763/china-resources-runs-new-pet-bottle-chip-unit-at-70.html | CC-MAIN-2014-52 | en | refinedweb |
The Java Specialists' Newsletter
Issue 0682003-04-21
Category:
Performance
Java version:
GitHub
Subscribe Free
RSS Feed
Welcome to the 68th edition of The Java(tm) Specialists' Newsletter, sent to 6400 Java
Specialists in 95 countries.
Since our last newsletter, we have had two famous Java authors
join the ranks of subscribers. It gives me great pleasure to
welcome Mark Grand and Bill Venners to our list of
subscribers.
Mark is famous for his three volumes of Java Design Patterns
books. You will notice that I quote Mark in the brochure
of my Design Patterns course. Bill is famous for his book
Inside The Java Virtual Machine.
Bill also does a lot of work training with Bruce Eckel.
Our last newsletter on BASIC Java
produced gasps of disbelief. Some readers
told me that they now wanted to unsubscribe, which of course I
supported 100%. Others enjoyed it with me. It was meant in
humour, as the warnings at the beginning of the newsletter clearly
indicated.
NEW:
Please see our new "Extreme Java" course, combining
concurrency, a little bit of performance and Java 8.
Extreme Java - Concurrency & Performance for Java 8.
The first code that I look for when I am asked to find out why
some code is slow is concatenation of Strings. When we concatenate
Strings with += a whole lot of objects are constructed.
Before we can look at an example, we need to define a Timer class
that we will use for measuring performance:
/**
*;
}
}
In the test case, we have three tasks that we want to measure.
The first is a simple += String append, which turns out to be
extremely slow. The second creates a StringBuffer and calls
the append method of StringBuffer. The third method creates
the StringBuffer with the correct size and then appends to
that. After I have presented the code, I will explain what
happens and why.());
}
});
}
}
This program does use quite a bit of memory, so you should set
the maximum old generation heapspace to be quite large, for example
256mb. You can do that with the -Xmx256m flag.
When we run this program, we get the following output:
String += 10000 additions
Length = 38890
Took 2203ms
StringBuffer 300 * 10000 additions initial size wrong
Length = 19888890
Took 2254ms
StringBuffer 300 * 10000 additions initial size right
Length = 19888890
Took 1562ms
You can observe that using StringBuffer directly is
about 300 times faster than using +=.
Another observation that we can make is that if we set
the initial size to be correct, it only takes 1562ms
instead of 2254ms. This is because of the way that
java.lang.StringBuffer works. When you create a new
StringBuffer, it creates a char[] of size 16. When
you append, and there is no space left in the char[]
then it is doubled in size. This means that if you
size it first, you will reduce the number of char[]s
that are constructed.
The time that the += String append takes is dependent
on the compiler that you use to compile the code. I
discovered this accidentally during my Java course last
week, and much to my embarrassment, I did not know why
this was. If you compile it from within Eclipse, you get
the result above, and if you compile it with Sun's
javac, you get the output below. I think
that Eclipse uses jikes to compile the code, but I am not
sure. Perhaps it even has an internal compiler?
javac
String += 10000 additions
Length = 38890
Took 7912ms
StringBuffer 300 * 10000 additions initial size wrong
Length = 19888890
Took 2634ms
StringBuffer 300 * 10000 additions initial size right
Length = 19888890
Took 1822ms
This took some head-scratching, resulting in my fingers
being full of wood splinters. I started by writing a
class that did the basic String append with +=.
public class BasicStringAppend {
public BasicStringAppend() {
String s = "";
for(int i = 0; i < 100; i++) {
s += i;
}
}
}
When in doubt about what the compiler does, disassemble
the classes. Even when I disassembled them, it took a
while before I figured out what the difference was and
why it was important. The part where they differ is in
italics. You can disassemble a class with the
tool javap that is in the bin directory of
your java installation. Use the -c parameter:
javap
javap -c BasicStringAppend
Compiled with Eclipse:
Compiled from BasicStringAppend.java
public class BasicStringAppend extends java.lang.Object {
public BasicStringAppend();
}
Method BasicStringAppend()
0 aload_0
1 invokespecial #9 <Method java.lang.Object()>
4 ldc #11 <String "">
6 astore_1
7 iconst_0
8 istore_2
9 goto 34
12 new #13 <Class java.lang.StringBuffer>
15 dup
16 aload_1
17 invokestatic #19 <Method java.lang.String valueOf(java.lang.Object)>
20 invokespecial #22 <Method java.lang.StringBuffer(java.lang.String)>
23 iload_2
24 invokevirtual #26 <Method java.lang.StringBuffer append(int)>
27 invokevirtual #30 <Method java.lang.String toString()>
30 astore_1
31 iinc 2 1
34 iload_2
35 bipush 100
37 if_icmplt 12
40 return
Compiled with Sun's javac:
Compiled from BasicStringAppend.java
public class BasicStringAppend extends java.lang.Object {
public BasicStringAppend();
}
Method BasicStringAppend()
0 aload_0
1 invokespecial #1 <Method java.lang.Object()>
4 ldc #2 <String "">
6 astore_1
7 iconst_0
8 istore_2
9 goto 34
12 new #3 <Class java.lang.StringBuffer>
15 dup
16 invokespecial #4 <Method java.lang.StringBuffer()>
19 aload_1
20 invokevirtual #5 <Method java.lang.StringBuffer append(java.lang.String)>
23 iload_2
24 invokevirtual #6 <Method java.lang.StringBuffer append(int)>
27 invokevirtual #7 <Method java.lang.String toString()>
30 astore_1
31 iinc 2 1
34 iload_2
35 bipush 100
37 if_icmplt 12
40 return
Instead of explaining what every line does (which I hope should not
be necessary on a Java Specialists' Newsletter) I present
the equivalent Java code for both IBM's Eclipse and Sun. The differences,
which equate to the disassembled difference, is again in italics:
public class IbmBasicStringAppend {
public IbmBasicStringAppend() {
String s = "";
for(int i = 0; i < 100; i++) {
s = new StringBuffer(String.valueOf(s)).append(i).toString();
}
}
}
public class SunBasicStringAppend {
public SunBasicStringAppend() {
String s = "";
for(int i = 0; i < 100; i++) {
s = new StringBuffer().append(s).append(i).toString();
}
}
}
It does not actually matter which compiler is better, either is terrible.
The answer is to avoid += with Strings wherever possible.
You should never reuse a StringBuffer object. Construct it, fill it,
convert it to a String, and then throw it away.
Why is this? StringBuffer contains a char[]
which holds the characters to be used for the String. When you call
toString() on the StringBuffer, does it make a copy of
the char[]? No, it assumes that you will
throw the StringBuffer away and constructs a String with a pointer to
the same char[] that is contained inside
StringBuffer! If you do change the StringBuffer after creating
a String, it makes a copy of the char[] and
uses that internally. Do yourself a favour and read the source code
of StringBuffer - it is enlightning.
char[]
toString()
But it gets worse than this. In JDK 1.4.1, Sun changed the way that
setLength() works. Before 1.4.1, it was safe to do the following:
... // StringBuffer sb defined somewhere else
sb.append(...);
sb.append(...);
sb.append(...);
String s = sb.toString();
sb.setLength(0);
The code of setLength pre-1.4.1 used to contain the following
snippet of code:
if (count < newLength) {
// *snip*
} else {
count = newLength;
if (shared) {
if (newLength > 0) {
copy();
} else {
// If newLength is zero, assume the StringBuffer is being
// stripped for reuse; Make new buffer of default size
value = new char[16];
shared = false;
}
}
}
It was replaced in the 1.4.1 version with:
if (count < newLength) {
// *snip*
} else {
count = newLength;
if (shared) copy();
}
Therefore, if you reuse a StringBuffer in JDK 1.4.1, and any one of the
Strings created with that StringBuffer is big,
all future Strings will have the same size char[]. This is not very
kind of Sun, since it causes bugs in many libraries. However, my argument
is that you should not have reused
StringBuffers anyway, since you will have less overhead simply creating
a new one than setting the size to zero again.
This memory leak was pointed out to me by Andrew Shearman during one
of my courses, thank you very much! For more information, you can
visit Sun's
website.
When you read those posts, it becomes apparent that JDOM reuses StringBuffers
extensively. It was probably a bit mean to change StringBuffer's setLength()
method, although I think that it is not a bug. It is simply highlighting
bugs in many libraries.
For those of you that use JDOM, I hope that JDOM will be fixed soon to cater
for this change in the JDK. For the rest of us, let us remember to throw away
used StringBuffers.
So long...
Heinz
Performance Articles
Related Java Course | http://www.javaspecialists.co.za/archive/Issue068.html | CC-MAIN-2014-52 | en | refinedweb |
I have never been happy with the code on my custom CursorAdapter until today I decided to review it and fix a little problem that was bothering me for a long time (interestingly enough, none of the users of my app ever reported such a problem).
Here’s a small description of my question:
My custom CursorAdapter overrides
newView() and
bindView() instead of
getView() as most examples I see. I use the ViewHolder pattern between these 2 methods. But my main issue was with the custom layout I’m using for each list item, it contains a
ToggleButton.
The problem was that the button state was not kept when a list item view scrolled out of view and then scrollbed back into view. This problem existed because the
cursor was never aware that the database data changed when the
ToggleButton was pressed and it was always pulling the same data. I tried to requery the cursor when clicking the
ToggleButton and that solved the problem, but it was very slow.
I have solved this issue and I’m posting the whole class here for review. I’ve commented the code thoroughly for this specific question to better explain my coding decisions.
Does this code look good to you? Would you improve/optimize or change it somehow?
P.S: I know the CursorLoader is an obvious improvement but I don’t have time to deal with such big code rewrites for the time being. It’s something I have in the roadmap though.
Here’s the code:
public class NotesListAdapter extends CursorAdapter implements OnClickListener { private static class ViewHolder { ImageView icon; TextView title; TextView description; ToggleButton visibility; } private static class NoteData { long id; int iconId; String title; String description; int position; } private LayoutInflater mInflater; private NotificationHelper mNotificationHelper; private AgendaNotesAdapter mAgendaAdapter; /* * This is used to store the state of the toggle buttons for each item in the list */ private List<Boolean> mToggleState; private int mColumnRowId; private int mColumnTitle; private int mColumnDescription; private int mColumnIconName; private int mColumnVisibility; public NotesListAdapter(Context context, Cursor cursor, NotificationHelper helper, AgendaNotesAdapter adapter) { super(context, cursor); mInflater = LayoutInflater.from(context); /* * Helper class to post notifications to the status bar and database adapter class to update * the database data when the user presses the toggle button in any of items in the list */ mNotificationHelper = helper; mAgendaAdapter = adapter; /* * There's no need to keep getting the column indexes every time in bindView() (as I see in * a few examples) so I do it once and save the indexes in instance variables */ findColumnIndexes(cursor); /* * Populate the toggle button states for each item in the list with the corresponding value * from each record in the database, but isn't this a slow operation? */ for(mToggleState = new ArrayList<Boolean>(); !cursor.isAfterLast(); cursor.moveToNext()) { mToggleState.add(cursor.getInt(mColumnVisibility) != 0); } } @Override public View newView(Context context, Cursor cursor, ViewGroup parent) { View view = mInflater.inflate(R.layout.list_item_note, null); /* * The ViewHolder pattern is here only used to prevent calling findViewById() all the time * in bindView(), we only need to find all the views once */ ViewHolder viewHolder = new ViewHolder(); viewHolder.icon = (ImageView)view.findViewById(R.id.imageview_icon); viewHolder.title = (TextView)view.findViewById(R.id.textview_title); viewHolder.description = (TextView)view.findViewById(R.id.textview_description); viewHolder.visibility = (ToggleButton)view.findViewById(R.id.togglebutton_visibility); /* * I also use newView() to set the toggle button click listener for each item in the list */ viewHolder.visibility.setOnClickListener(this); view.setTag(viewHolder); return view; } @Override public void bindView(View view, Context context, Cursor cursor) { Resources resources = context.getResources(); int iconId = resources.getIdentifier(cursor.getString(mColumnIconName), "drawable", context.getPackageName()); String title = cursor.getString(mColumnTitle); String description = cursor.getString(mColumnDescription); /* * This is similar to the ViewHolder pattern and it's need to access the note data when the * onClick() method is fired */ NoteData noteData = new NoteData(); /* * This data is needed to post a notification when the onClick() method is fired */ noteData.id = cursor.getLong(mColumnRowId); noteData.iconId = iconId; noteData.title = title; noteData.description = description; /* * This data is needed to update mToggleState[POS] when the onClick() method is fired */ noteData.position = cursor.getPosition(); /* * Get our ViewHolder with all the view IDs found in newView() */ ViewHolder viewHolder = (ViewHolder)view.getTag(); /* * The Html.fromHtml is needed but the code relevant to that was stripped */ viewHolder.icon.setImageResource(iconId); viewHolder.title.setText(Html.fromHtml(title)); viewHolder.description.setText(Html.fromHtml(description)); /* * Set the toggle button state for this list item from the value in mToggleState[POS] * instead of getting it from the database with 'cursor.getInt(mColumnVisibility) != 0' * otherwise the state will be incorrect if it was changed between the item view scrolling * out of view and scrolling back into view */ viewHolder.visibility.setChecked(mToggleState.get(noteData.position)); /* * Again, save the note data to be accessed when onClick() gets fired */ viewHolder.visibility.setTag(noteData); } @Override public void onClick(View view) { /* * Get the new state directly from the toggle button state */ boolean visibility = ((ToggleButton)view).isChecked(); /* * Get all our note data needed to post (or remove) a notification */ NoteData noteData = (NoteData)view.getTag(); /* * The toggle button state changed, update mToggleState[POS] to reflect that new change */ mToggleState.set(noteData.position, visibility); /* * Post the notification or remove it from the status bar depending on toggle button state */ if(visibility) { mNotificationHelper.postNotification( noteData.id, noteData.iconId, noteData.title, noteData.description); } else { mNotificationHelper.cancelNotification(noteData.id); } /* * Update the database note item with the new toggle button state, without the need to * requery the cursor (which is slow, I've tested it) to reflect the new toggle button state * in the list because the value was saved in mToggleState[POS] a few lines above */ mAgendaAdapter.updateNote(noteData.id, null, null, null, null, visibility); } private void findColumnIndexes(Cursor cursor) { mColumnRowId = cursor.getColumnIndex(AgendaNotesAdapter.KEY_ROW_ID); mColumnTitle = cursor.getColumnIndex(AgendaNotesAdapter.KEY_TITLE); mColumnDescription = cursor.getColumnIndex(AgendaNotesAdapter.KEY_DESCRIPTION); mColumnIconName = cursor.getColumnIndex(AgendaNotesAdapter.KEY_ICON_NAME); mColumnVisibility = cursor.getColumnIndex(AgendaNotesAdapter.KEY_VISIBILITY); } }
Your solution is optimal an I will add it to my weapons 🙂 Maybe, I’ll try to bring a little optimization for the calls to database.
Indeed, because of conditions of the task, there are only three solutions:
- Update only one row, requery cursor and redraw all items. (Straight-forward, brute force).
- Update the row, cache the results and use cache for drawing items.
- Cache the results, use cache for drawing items. And when you leave this activity/fragment then commit the results to database.
For 3rd solution you can use SparseArray for looking for the changes.
private SparseArray<NoteData> mArrayViewHolders; public void onClick(View view) { //here your logic with NoteData. //start of my improve if (mArrayViewHolders.get(selectedPosition) == null) { // put the change into array mArrayViewHolders.put(selectedPosition, noteData); } else { // rollback the change mArrayViewHolders.delete(selectedPosition); } //end of my improve //we don't commit the changes to database. }
Once again: from the start this array is empty. When you toggle the button first time (there is a change), you add NoteData to array. When you toggle the button second time (there is a rollback), you remove NoteData from array. And so on.
When you’re finishing, just request the array and push the changes into database.
Answer:
What you are seeing is the View re use of Android. I don’t think that you are doing something wrong by querying the cursor again. Just dont use the cursor.requery() function.
Instead, set the toggle to false at first always and then ask the cursor and switch it on if you have to.
Maybe you were doing that and I misunderstood something, however I don’t think that you should have slow results doing it.
Pseudo-code:
getView(){ setToggleFalse(); boolean value = Boolean.valueOf(cursor.getString('my column')); if (value){ setToggleTrue(); } }
Answer:
I would wait before going to CursorLoader. As it seems CursorLoader derivatives do not worl with CursorLoader.
Tags: androidandroid, database, list, listview, perl, view | https://exceptionshub.com/database-is-this-custom-cursoradapter-for-a-listview-properly-coded-for-android.html | CC-MAIN-2021-17 | en | refinedweb |
Scheduling tasks to Minimize Lateness
Sign up for FREE 1 month of Kindle and read all our books for free.
Get FREE domain for 1st year and build your brand new site
Reading time: 25 minutes | Coding time: 5 minutes
The Problem
As they say, "Greed... is good. Greed is right. Greed works". We will look into one such problem where greedy proves its worth. According to the problem, We have a single resource and a set of n requests to use the resource for an interval of time. Each request i has a deadline di and time required to finish the request ti. Each request i must be assigned an interval of time ti which must not overlap with other accepted requests. Also, one must note that since we are scheduling the requests on one resource, we could find the starting(si) and finishing time(fi) of each by the relation fi = si + ti. We are willing to schedule maximum requests before their respective deadlines or at least in a way to decrease the time lag in finish time and deadline of the chosen request (i.e., lateness).
We claim a request i to be late if fi > di. If a request finishes before its deadline, we consider its lateness(li) to be zero and fi - di otherwise.
We are interested to find a schedule such that the maximum lateness (max(li*) where i ∈ {1...n}) is minimized.
Consider below example, we are given 3 requests(i=3) along with the time required to complete each one(ti) and their respective deadlines(di). We could schedule the requests in many ways but we choose the shown solution since it fulfills our motive to complete all the requests within the assigned deadlines.
In this case, since every request managed to finish before its deadline, the lateness of every request is 0 and hence the maximum lateness is also 0, which is minimum.
In above example, we could intuitively see the solution but how to proceed when n is sufficiently large? Here comes the need to define an Algorithm that will give the expected solution in almost every situation.
Greedy Strategies
A Greedy Algorithm works in stages and at every stage, it makes the best local choice depending on the past ones and assures to give a globally optimal solution.
Needless to say, there would be many greedy approaches that will give optimal solution in certain situations but our task is to find one that will work in every situation. Hence, we will now see if we could get an efficient greedy algorithm for our problem.
Shortest Processing Time First
With a little thought, one could claim a strategy which sorts the Processing time in ascending order will give us optimal solution since this way, we would be able to finish the smaller requests quickly. What do you think?
Yes, it won't work for us! It doesn't consider the deadlines of requests and this is where it looses the sprint.
You can clearly see that according to this strategy, we will choose t1 instead of t2 due to its lesser processing time but t2 will miss its deadline this way contributing towards a positive lateness which could have been 0 if we scheduled t2 before t1 (refer to the image above).
So, this strategy fails.
Minimum Slack Time First
Coming to the next strategy, we would like to consider the deadlines while scheduling. A solution could be considering the difference between the deadline and its processing time to choose a request(i.e., slack time) and adding requests with increasing slack time. This do works in the above scenario though. But it too fails. Check the example shown below:
Here as the slack time of t2 is smaller than t1 (0<1), we scheduled it first but as we could note, it leads to lateness of 3 in t1 and 0 in t2.Hence, calling the maximum latency as 3 in our solution. But if we consider the optimal solution, we can clearly see that this leads to lateness of 1 in t2 and 0 in t1 which means the maximum lateness here is 1. Therefore, this greedy algorithm fails.
Earliest Deadline First
Let's see another strategy which is based on choosing requests with increasing deadline. Surprisingly, it gives an optimal solution to our problem and we couldn't find a contradicting case here like above.
Quick Challenge - Check if this works fine in previous two examples.
Proving Optimality
We can't claim our greedy algorithm to be efficient due to lack of contradicting cases. So, we will rather prove that this gives an optimal solution to our problem.
We will consider that there exist an optimal solution O and the A is the solution returned by the greedy algorithm.
Since proving optimality of an algorithm is no easy task, We will gradually transform O into a schedule that is identical to A making sure that it holds optimality in every step. This procedure is generally called as an "Exchange Argument".
To Prove: "The schedule A produced by our greedy strategy has optimal maximum lateness".
Proof: We will first prove that "All schedules with no inversions and no idle time have the same maximum lateness".
where we say, there is an Inversion if there exist two job i, j such that di > dj and i is scheduled before j and Idle time is the time lag when the resource is not working yet there are requests which need to be scheduled.
If there are two different schedules which have neither inversions nor idle time, then they might differ by the order in which the requests are scheduled in them. Since we have scheduled the requests in increasing order of deadline, we can be assured that all the jobs with deadline less than di are scheduled before i and those with greater deadline after i.
Therefore, the only possibility of having different order in schedules is due to the requests with same deadline. Consider the requests with deadline d, the last request with deadline d will have the maximum lateness and our maximum lateness depends on this request irrespective of the order of jobs.
It supports our claim that "All schedules with no inversions and no idle time have the same maximum lateness".
Next up, we will prove that "There is an optimal schedule that has no inversions and no idle time"
Note that for every Optimal schedule O, we can always shift our requests in order to eliminate the idle time as there is no constraint on the starting time but on the finish time.
Also, let's assume that O has an inversion, i.e., there are two requests i, j with di < dj and j is scheduled before i. We could swap ith and jth request and hence can decrease the number of inversion. But we are left to prove that the new schedule after eliminating the inversion maintains optimality.
Let lk be the lateness of requests before swapping (where k ∈ {i, j}) and l'k be the lateness of requests after swapping.
Note that d2<d1 but 1 is scheduled before 2. Hence there is an inversion. After swapping, we should observe that:
l'1 = f'1 - d1
= f2 - d1 (since f'1 = f2)
<= f2 - d2 (d2 < d1)
<= l2
And we just saw that maximum lateness doesn't increase after swapping a pair with adjacent inversion.
Now, we have sufficient information to prove "The schedule A produced by the greedy algorithm has optimal maxmum lateness L"
As we discussed above, we know that there exist an Optimal schedule O that has no inversion. Recall that by choosing our greedy strategy (Earliest Deadline First) we will never get any inversions in our schedule. Moreover, we have proved that all the schedules with no inversions have the same maximum lateness.
Hence, the schedule obtained by the greedy algorithm is optimal.
The Pseudocode for the algorithm could be written as:
1. Sort the requests by their deadline 2. min_lateness = 0 3. start_time = 0 4. for i = 0 -> n 5. min_lateness = max(min_lateness, (t[i] + start_time) - d[i]) 6. start_time += t[i] 7. return min_lateness;
Implementation
#include <bits/stdc++.h> using namespace std; class Request { public: int deadline, p_time; bool operator < (const Request & x) const { return deadline < x.deadline; } }; int main() { int n, i, finish_time, start_time, min_lateness, temp; cout << "Enter the number of requests: "; cin >> n; // no. of requests Request r[n]; cout << "Enter the deadline and processing time of each request: "; for (i = 0; i < n; i++) // deadline and processing time of each job cin >> r[i].deadline >> r[i].p_time; sort(r, r + n); // sort jobs in increasing order of deadline start_time = 0; min_lateness = 0; for (i = 0; i < n; i++) { min_lateness = max((r[i].p_time + start_time) - r[i].deadline, min_lateness); start_time += r[i].p_time; } cout << "Maximum lateness of schedule: " << min_lateness; return 0; }
Complexity
Time complexity:
Θ(N log N)due to sorting (We could reduce the Time Complexity to
Θ(N)if the requests are already sorted
Space complexity:
Θ(1)
With this article at OpenGenus, you must have complete idea of scheduling tasks to minimize delay. Enjoy. | https://iq.opengenus.org/scheduling-to-minimize-lateness/ | CC-MAIN-2021-17 | en | refinedweb |
Use profiles to add properties to components, ports, and connectors. Import an existing profile, apply stereotypes, and add property values. To create a profile, see Define Profiles and Stereotypes.
The Profile Editor is independent from the model that opens it, so you must explicitly
import a new profile into a model. The profile must first be saved with an
.xml extension. On the Modeling tab, in the
Profiles section, select Import, then from the
drop-down, select Import
. Select the profile to import. An architecture model can
use multiple profiles at once.
Alternatively, open the Profile Editor. On the Modeling tab, in the
Profiles section, select Import, then from the
drop-down, select Edit
. You can import a profile into any open dictionaries or
models.
Note
For a System Composer™ component that is linked to a Simulink® behavior model, the profile must be imported into the Simulink model before applying a stereotype from it to the component. Since the Property Inspector on the Simulink side does not display stereotypes, this workflow is not finalized.
To manage profiles after they have been imported, in the Profiles
section, select Import, then from the drop-down, select
Manage
.
Once the profile is available in the model, open the Property Inspector. On the Modeling tab, in the Design section, select Property Inspector. Select a model element.
In the Stereotype field, use the drop-down to select the stereotype. Only the stereotypes that apply to the current element type (for example, a port) are available for selection. If no stereotype exists, you can use the <new / edit> option to open the Profile Editor and create one.
When you apply a stereotype to an element, a new set of properties appears in the Property Inspector under the name of the stereotype. To edit the properties, expand this set.
You can set multiple stereotypes for each element.
You can also apply component, port, connector, and interface stereotypes to all
applicable elements at the same architecture level. On the Modeling
tab, in the Profiles section, select Apply
Stereotypes. In the Apply Stereotypes dialog box, from Apply
stereotype(s) to, select
Top-level architecture,
All elements,
Components,
Ports,
Connectors, or
Interfaces.
Note
The
Interfaces option is only available if interfaces are
defined in the Interface Editor. For more information, see Define Interfaces.
You can also apply stereotypes by selecting a single model element. From the
Scope list, select
Selection,
This layer, or
Entire model.
You can also apply stereotypes to interfaces. When interfaces are locally defined and
you select one or more interfaces in the Interface Editor, the options for
Scope are
Selection and
Local
interfaces.
When interfaces are stored and shared across a data dictionary and you select one or
more interfaces in the Interface Editor, the options for Scope are
Selection and either
dictionary.sldd or the
name of the dictionary currently in use.
Note
For the stereotypes to display for interfaces in a dictionary, in the Apply Stereotypes dialog box, the profile must be imported into the dictionary.
You can also create a new component with an applied stereotype using the quick-insert menu. Select the stereotype as a fully qualified name. A component with that stereotype is created.
If a stereotype is no longer required for an element, remove it using the Property Inspector. Click Select next to the stereotype and choose Remove.
You can extend a stereotype by creating a new stereotype based on the existing one, allowing you to control properties in a structural manner. For example, all components in a project may have a part number, but only electrical components have a power rating, and only electronic components — a subset of electrical components — have manufacturer information. You can use an abstract stereotype to serve solely as a base for other stereotypes and not as a stereotype for any architecture model elements.
For example, create a new stereotype called
ElectronicComponent in
the Profile Editor. Select its base stereotype as
FunctionalArchitecture.ElectricalComponent. Define properties you are
adding to those of the base stereotype. Check Show inherited properties
at the bottom of the property list to show the properties of the base stereotype. You can
edit only the properties of the selected stereotype, not the base stereotype.
When you apply the new stereotype, it carries its defined properties in addition to those of its base stereotype.
editor |
systemcomposer.profile.Profile |
systemcomposer.profile.Property |
systemcomposer.profile.Stereotype | https://www.mathworks.com/help/systemcomposer/ug/manage-stereotypes-and-profiles.html | CC-MAIN-2021-17 | en | refinedweb |
It is possible to pass some values from the command line to your takes action accordingly −
#include <iostream> using namespace std; int main( int argc, char *argv[] ) { if( argc == 2 ) { cout << "The argument supplied is "<< argv[1] << endl; } else if( argc > 2 ) { cout << "Too many arguments supplied." <<endl; } else { cout << "One argument expected." << endl; } }
$./a.out testing The argument supplied is testing
$./a.out testing1 testing2 Too many arguments supplied.
$./a.out One argument expected | https://www.tutorialspoint.com/how-to-parse-command-line-arguments-in-cplusplus | CC-MAIN-2021-17 | en | refinedweb |
Wiring up a custom authentication method with OWIN in Web API Part 1: preparation
November 16, 2015 2 Comments
Introduction
There are a number of authentication types for a web project: basic -i.e. with username and password -, or token-based or claims-based authentication and various others. You can also have some custom authentication type that your project requires.
In this short series we’ll go through how to wire up a custom authentication method in a Web API project. We’ll see the classes that are required for such an infrastructure. We’ll also discuss how to wire up these elements so that the custom authentication mechanism is executed as part of the chain of Katana components. custom authentication type
In this demo we’ll simulate that the authentication details are stored in the HTTP header “x-company-auth”. The details need to include two elements: a user id and a PIN consisting of 6 digits that can only be used once. You may have seen authentication schemes where you get a number of pre-generated PIN numbers on a list that you have to provide for each transaction. I used to have a bank account which had a similar security mode. I had to provide a PIN from a list upon logging in every time I wanted to view my account. Our scenario is an approximation of that system.
Don’t get bogged down by this scenario however. The main goal of this series is to show you a possible solution for how to wire up any custom authentication mechanism with OWIN.
Starting point
I’ll be doing this demo in Visual Studio 2012 using a fairly empty Web API 2 project template. You should be able to follow the steps in VS 2013 as well. If you start an MVC5 project in VS 2013 then you’ll automatically get the OWIN-related libraries included in the project. For this demo I’ll use VS2012.
Fire up Visual Studio 2012 and start a new Web API project called “CustomAuthenticationOwin”. I downloaded the MVC5 project templates for VS 2012 from this link if you need to do the same. It gave me the following template type:
The project is very empty to start with:
Open WebApiConfig.cs and remove the “api” bit from the route setup:
config.MapHttpAttributeRoutes(); config.Routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "{controller}/{id}", defaults: new { id = RouteParameter.Optional } );
Adding OWIN
This.
Start the project. If everything has gone well then you should see “OWIN is working” printed in the debug window, meaning that the Startup class was successfully registered.
Adding a controller
Add the following Customer class to the Models folder:
public class Customer { public string Name { get; set; } public string Address { get; set; } public string Telephone { get; set; } }
Right-click the Controllers folder and add a new empty Web API controller:
Call it CustomersController and add the following Get method to return a list); } }
Run the web app. Extend the localhost URL with /customers and press Enter in the browser. You should see the three customers on the screen. How the customer objects are presented depends on the browser type but in Chrome it looks like this:
This is enough for starters. We’ll continue by building the components around the OWIN middleware in the next post.
You can view the list of posts on Security and Cryptography here.
Pingback: Wiring up a custom authentication method with OWIN in Web API Part 2: the headers | Dinesh Ram Kali.
Hello,
I have read with much interest this series.
Something that I am wondering is whether ‘x-company-auth’ is used by the owin middleware pipeline to trigger your custom auth mechanism, or whether the header is used purely in your own logic to validate the user credentials.
To give a bit more context, I am trying to implement authentication based on a set of custom headers, on which I have no control (I will be passed a USERNAME and USERTOKEN headers).
How will the pipeline know to call my own custom classes ? Can I force it to always call my classes ?
Regards,
Antoine | https://dotnetcodr.com/2015/11/16/wiring-up-a-custom-authentication-method-with-owin-in-web-api-part-1-preparation/ | CC-MAIN-2021-17 | en | refinedweb |
A set of utilities for manipulating (Geo)JSON and (Geo)TIFF data.
Project description
Features
PyGeoUtils is a part of HyRiver software stack that is designed to aid in watershed analysis through web services. This package provides utilities for manipulating (Geo)JSON and (Geo)TIFF responses from web services. These utilities are:
- json2geodf: For converting (Geo)JSON objects to GroPandas dataframe.
- arcgis2geojson: For converting ESRIGeoJSON objects to standard GeoJSON format.
- gtiff2xarray: For converting (Geo)TIFF objects to xarray datasets.
- gtiff2file: For saving (Geo)TIFF objects to a raster file.
- xarray_geomask: For masking a xarray.Dataset or xarray.DataArray using a polygon.
All these functions handle all necessary CRS transformations.
You can find some example notebooks here.oUtils using pip after installing libgdal on your system (for example, in Ubuntu run sudo apt install libgdal-dev):
$ pip install pygeoutils
Alternatively, PyGeoUtils can be installed from the conda-forge repository using Conda:
$ conda install -c conda-forge pygeoutils
Quick start
To demonstrate capabilities of PyGeoUtils let’s use PyGeoOGC to access National Wetlands Inventory from WMS, and FEMA National Flood Hazard via WFS, then convert the output to xarray.Dataset and GeoDataFrame, respectively.
import pygeoutils as geoutils from pygeoogc import WFS, WMS from shapely.geometry import Polygon geometry = Polygon( [ [-118.72, 34.118], [-118.31, 34.118], [-118.31, 34.518], [-118.72, 34.518], [-118.72, 34.118], ] ) url_wms = ( "" ) wms = WMS( url_wms, layers="0", outformat="image/tiff", crs="epsg:3857", ) r_dict = wms.getmap_bybox( geometry.bounds, 1e3, box_crs="epsg:4326", ) wetlands = geoutils.gtiff2xarray(r_dict, geometry, "epsg:4326") url_wfs = "" wfs = WFS( url_wfs, layer="public_NFHL:Base_Flood_Elevations", outformat="esrigeojson", crs="epsg:4269", ) r = wfs.getfeature_bybox(geometry.bounds, box_crs="epsg:4326") flood = geoutils.json2geodf(r.json(), "epsg:4269", "epsg:4326")
We can also save WMS outpus as raster file using gtiff2file:
geoutils.gtiff2file(r_dict, geometry, "epsg:4326", "raster")
Contributing
Contributions are very welcomed. Please read CONTRIBUTING.rst file for instructions.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/pygeoutils/ | CC-MAIN-2021-17 | en | refinedweb |
Hi,
Is there a possibility to select nearest neighbor voxels in an MRI based on the label-map. For-example I have the voxels based on some label. then I need to find nearest neighbour voxels that dont belong to voxels identified through label.
I found this
import numpy
volume = array(‘Volume’)
label = array(‘Volume-label’)
points = numpy.where( label == 1 ) # or use another label number depending on what you segmented
values = volume[points] # this will be a list of the label values
values.mean() # should match the mean value of LabelStatistics calculation as a double-check
numpy.savetxt(‘values.txt’, values)
with above I can have voxel values defined by the label-map. how can I get voxel values in the nearest neighbor using these voxel values as reference. | https://discourse.slicer.org/t/select-nearest-neighbour-voxels-using-some-voxels-as-reference/16175 | CC-MAIN-2021-17 | en | refinedweb |
Decision Tree Machine Learning in Python KGP Talkie
For detailed theory read An introduction to Statistical Learning:
A
decision tree is a flowchart-like
tree structure where an
internal node represents feature,_0<<
Example
Why Decision Tree
- Decision tress often mimic the
human level thinkingso its so simple to understand the data and make some good interpretations.
- Decision trees actually make you see the
logicfor the data to interpret(not like black box algorithms like SVM,NN,etc..) .
How Decision Tree Works
- Select the best attribute using
Attribute Selection Measures(ASM)to split the records.
- Make that attribute a decision node and breaks the dataset into smaller subsets.
- Starts tree building by repeating this process
recursivelyfor each child until one of the condition will match:
- All the
tuplesbelong to the same attribute value.
- There are no more remaining
attributes.
- There are no more
instances.
Here couple of algorithms to build a decision tree, we only talk about a few which are:
CART (Classification and Regression Trees) → uses Gini Index(Classification) as metric.
ID3 (Iterative Dichotomiser 3) → uses Entropy function and Information gain as metrics.
Decision Making in DT with Attribute Selection Measures(ASM)
- Information Gain
- Gain Ratio
- Gini Index
Read Chapter 8:
Information Gain
In order to define information gain precisely, we begin by defining a measure commonly used in
information theory, called
entropy that characterizes the (im)purity of an arbitrary collection of examples.
Entropy
Entropy is the measure of the amount of
uncertainity in the data set.
where S = The current data set for which entropy is being calculated C – Set of classes in S C={yes , no} p(c) = The set S is perfectly classified
Information gain
Information gain calculates the
reduction in
entropy or
surprise from transforming a dataset in some way. It is the measure of the measure of the
difference in entropy from before to after the set
S is split on an attribute A. It is commonly used in the construction of
decision trees from a training dataset, by evaluating the information gain for each variable, and selecting the variable that maximizes the
information gain, which in turn minimizes the
entropy and best splits the dataset into groups for effective classification.
where
- H(S) = Entropy of set S
- T = The subsets created from splitting set S by attribute A
- H(t) = Entropy of subset t
- compute the
entropyfor data-set
- for every feature:
1.calculate
entropyfor all categorical values.
2.take average information
entropyfor the current attribute .
3.calculate gain for the current attribute. 4.pick the highest gain attribute
- Repeat until we get the tree we desired.
Gain Ratio,
Gini Index
Gini Index is a measurement of the likelihood of an incorrect classification of a new instance of a
random variable, if that new instance were randomly classified according to the distribution of class labels from the data set.
If our dataset is Pure then likelihood of incorrect classification is
0. If our sample is mixture of different classes then likelihood of incorrect classification will be
high.
Optimizing DT).
Recursive Binary Splitting
In this procedure all the features are considered and different
split points are tried and tested using a
Cost function. The split with the
best cost (or lowest cost) is selected.?
Pruning
The
performance of a tree can be further increased by
pruning. It involves removing the branches that make use of features having
low importance. This way, we reduce the complexity of tree, and thus increasing its predictive power by reducing
overfitting.
Decision Tree Regressor
import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline
from sklearn import datasets, metrics from sklearn.model_selection import train_test_split from sklearn.tree import DecisionTreeRegressor).
diabetes.feature_names
['age', 'sex', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']
diabetes.target[: 10]
array([151., 75., 141., 206., 135., 97., 138., 63., 110., 310.])
X = diabetes.data y = diabetes.target X.shape, y.shape
((442, 10), (442,))
df = pd.DataFrame(X, columns=diabetes.feature_names) df['target'] = y df.head()
Pairplot()
By default, this function will create a
grid of Axes such that each numeric variable in data will by shared in the y-axis across a single row and in the x-axis across a single column. The
diagonal Axes are treated differently, drawing a plot to show the
univariate distribution of the data for the variable in that column.
sns.pairplot(df) plt.show()
Decision Tree Regressor
Let’s see decision tree as regressor:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42)
regressor = DecisionTreeRegressor(random_state=42) regressor.fit(X_train, y_train) y_pred = regressor.predict(X_test)
The following plot shows predicted values of y and true values_6<<
Now, we will try to get the Root Mean Square Error of the data by using the function mean_squared_error().Let’s see the following code:
np.sqrt(metrics.mean_squared_error(y_test, y_pred))
70.61829663921893
y_test.std()
72.78840394263774
Decision Tree as a Classifier
Let’s see decision tree as classifier:
from sklearn.tree import DecisionTreeClassifier
Use iris data set:
iris = datasets.load_iris() iris.target_names
array(['setosa', 'versicolor', 'virginica'], dtype='<U10')
iris.feature_names
['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
X = iris.data y = iris.target df = pd.DataFrame(X, columns=iris.feature_names) df['target'] = y df.head()
sns.pairplot(df) plt.show()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 1, test_size = 0.2, stratify = y) clf = DecisionTreeClassifier(criterion='gini', random_state=1) clf.fit(X_train, y_train) y_pred = clf.predict(X_test) print('Accuracy: ', metrics.accuracy_score(y_test, y_pred))
Accuracy: 0.9666666666666667
Now we will evaluate the accuracy of the classifier by using
confusion marix. For this we will use function
confusion_matrix(). Each cell in the square box represents relateve or absolute ratios between
y_test and
y_pred .
Now let’s_8<<
Classification_report()
The classification_report function builds a text report showing the main classification metrics. Here is a small example with custom
target_names and inferred labels.Now we will use of this function in the following code:
print(metrics.classification_report(y_test, y_pred))
precision recall f1-score support 0 1.00 1.00 1.00 10 1 0.91 1.00 0.95 10 2 1.00 0.90 0.95 10 accuracy 0.97 30 macro avg 0.97 0.97 0.97 30 weighted avg 0.97 0.97 0.97 30 | https://kgptalkie.com/decision-tree-machine-learning-in-python-kgp-talkie/ | CC-MAIN-2021-17 | en | refinedweb |
Opened 7 years ago
Closed 7 years ago
#16270 closed task (duplicate)
Ipython notebook
Description
The ipython notebook is a very pleasant tool. I would like to use sage notebook methods to produce image in the ipython notebook?
In particular, I would like to know how the sage notebook do that.
At the end, I would like to use the ipython tools which try to find a *_repr_foo_* method.
from IPython.display import Image class myObject(SageObject): def _repr_png_(self): f = produce_a_file_with_the_sage_notebook_tools(self, ...) return Image(f).data
I made some tests:
from IPython.display import Image file = "/Users/elix/img.png" def _repr_png_(self): latex.eval(latex(self), locals(), filename=file) return Image(filename=file, format='png').data BinaryTree._repr_png_ = _repr_png_ BinaryTree([[],[[],[]]])
that produces
but
e = (1 - sqrt(1 - 4*x)) / (2*x) latex.eval(latex(e), locals(), filename=file)
will produce an error... so that is not the good way...
Change History (7)
comment:1 Changed 7 years ago by
- Milestone changed from sage-6.2 to sage-6.3
comment:2 Changed 7 years ago by
- Milestone changed from sage-6.3 to sage-6.4
comment:3 Changed 7 years ago by
comment:4 Changed 7 years ago by
- Dependencies set to #16444
comment:5 Changed 7 years ago by
- Milestone changed from sage-6.4 to sage-duplicate/invalid/wontfix
- Reviewers set to Jean-Baptiste Priez
- Status changed from new to needs_review
According to this comment by the reporter of this ticket, this is a duplicate of #16444. I'll let him confirm that.
comment:6 Changed 7 years ago by
- Status changed from needs_review to positive_review
comment:7 Changed 7 years ago by
- Resolution set to duplicate
- Status changed from positive_review to closed
Note: See TracTickets for help on using tickets.
Is this a duplicate of #16444? | https://trac.sagemath.org/ticket/16270 | CC-MAIN-2021-17 | en | refinedweb |
Contact Me!).
Fig. 1
From the menu, select Storage | Storage Account, as shown in Fig. 2.
Fig. 2
The "Create Storage Account" dialog with the "Basic" tab selected displays, as shown in Fig. 3..
Fig. 4
The important field on this tab is "Hierarchical namespace". Select the "Enabled" radio button at this field.
Click the [Review + Create] button to advance to the "Review + Create" tab, as shown in Fig... | https://www.davidgiard.com/default,date,2019-06-25.aspx | CC-MAIN-2021-17 | en | refinedweb |
This example shows how to build a C executable from MATLAB® code that implements a simple Sobel filter to perform edge detection on images. The executable reads an image from the disk, applies the Sobel filtering algorithm, and then saves the modified image.
The example shows how to generate and modify an example main function that you can use when you build the executable.
Create a Folder and Copy Relevant Files
Run the Sobel Filter on the Image
Generate and Test a MEX Function
Generate an Example Main Function for sobel.m
Copy the Example Main Files
Modify the Generated Example Main Function
Generate the Sobel Filter Application
Run the Sobel Filter Application
Display the Resulting Image
To complete this example, install the following products:
MATLAB
MATLAB Coder™
C compiler (for most platforms, a default C compiler is supplied with MATLAB). For a list of supported compilers, see Supported Compilers.
You can use
mex -setup to change the default compiler. See Change Default Compiler.
The files you use in this example are:
To copy the example files to a local working folder:
Create a local working folder. For example,
c:\coder\edge_detection.
Navigate to the working folder.
Copy the files
sobel.m and
hello.jpg from
the examples folder
sobel to your working
folder.
copyfile(fullfile(docroot, 'toolbox', 'coder', 'examples', 'sobel'))
Read the original image into a MATLAB matrix and display it.
im = imread('hello.jpg');
Display the image as a basis for comparison to the result of the Sobel filter.
image(im);
The Sobel filtering algorithm operates on grayscale images. Convert the color image to an equivalent grayscale image with normalized values (0.0 for black, 1.0 for white).
gray = (0.2989 * double(im(:,:,1)) + 0.5870 * double(im(:,:,2)) + 0.1140 * double(im(:,:,3)))/255;
To run the MATLAB function for the Sobel filter, pass the grayscale image matrix
gray and a threshold value to the function
sobel. This example uses 0.7 for a threshold value.
edgeIm = sobel(gray, 0.7);
To display the modified image, reformat the matrix
edgeIm with
the function
repmat so that you can pass it to the
image command.
im3 = repmat(edgeIm, [1 1 3]); image(im3);
To test that generated code is functionally equivalent to the original MATLAB code and that run-time errors do not occur, generate a MEX function.
codegen -report sobel
codegen generates a MEX function named
sobel_mex in the current working folder.
To run the MEX function for the Sobel filter, pass the grayscale image matrix
gray and a threshold value to the function
sobel_mex. This example uses 0.7 for a threshold value.
edgeImMex = sobel_mex(gray, 0.7);
To display the modified image, reformat the matrix
edgeImMex with
the function
repmat so that you can pass it to the
image command.
im3Mex = repmat(edgeImMex, [1 1 3]); image(im3Mex);
This image is the same as the image created using the MATLAB function.
Although you can write a custom main function for your application, an example main function provides a template to help you incorporate the generated code.
To generate an example main function for the Sobel filter:
Create a configuration object for a C static library.
cfg = coder.config('lib');
For configuration objects for C/C++ source code, static libraries, dynamic
libraries, and executables, the setting
GenerateExampleMain controls
generation of the example main function. The setting is set to
'GenerateCodeOnly' by default, which generates the example main
function but does not compile it. For this example, do not change the value of the
GenerateExampleMain setting.
Generate a C static library using the configuration object.
codegen -report -config cfg sobel
The generated files for the static library are in the folder
codegen/lib/sobel. The example main files are in the subfolder
codegen/lib/sobel/examples.
Contents of Example Main File main.c
Do not modify the files
main.c and
main.h in the
examples subfolder. If you do, when you regenerate code, MATLAB
Coder does not regenerate the example main files. It warns you that it detects
changes to the generated files.
Copy the files
main.c and
main.h from the folder
codegen/lib/sobel/examples to another location. For this example, copy
the files to the current working folder. Modify the files in the new location.
Modify the Initialization Function argInit_d1024xd1024_real_T
Write the Function saveImage
Modify the Function main_sobel
Modify the Function Declarations
Contents of Modified File
main.c
The example main function declares and initializes data, including dynamically allocated data, to zero values. It calls entry-point functions with arguments set to zero values, but it does not use values returned from the entry-point functions.
The C main function must meet the requirements of your application. This example modifies the example main function to meet the requirements of the Sobel filter application.
This example modifies the file
main.c so that the Sobel filter
application:
Reads in the grayscale image from a binary file.
Applies the Sobel filtering algorithm.
Saves the modified image to a binary file.
Modify the function
main to:
Accept the file containing the grayscale image data and a threshold value as input arguments.
Call the function
main_sobel with the address of the grayscale
image data stream and the threshold value as input arguments.
In the function
main:
Remove the declarations
void(argc) and
(void)argv.
Declare the variable
filename to hold the name of the binary
file containing the grayscale image data.
const char *filename;
Declare the variable
threshold to hold the threshold
value.
double threshold;
Declare the variable
fd to hold the address of the grayscale
image data that the application reads in from
filename.
FILE *fd;
Add an
if statement that checks for three arguments.
if (argc != 3) { printf("Expected 2 arguments: filename and threshold\n"); exit(-1); }
Assign the input argument
argv[1] for the file containing the
grayscale image data to
filename.
filename = argv[1];
Assign the input argument
argv[2] for the threshold value to
threshold, converting the input from a string to a numeric
double.
threshold = atof(argv[2]);
Open the file containing the grayscale image data whose name is specified in
filename. Assign the address of the data stream to
fd.
fd = fopen(filename, "rb");
To verify that the executable can open
filename, write an
if-statement that exits the program if the value of
fd is
NULL.
if (fd == NULL) { exit(-1); }
Replace the function call for
main_sobel by calling
main_sobel with input arguments
fd and
threshold.
main_sobel(fd, threshold);
Close the grayscale image file after calling
sobel_terminate.
fclose(fd);
In the example main file, the function
argInit_d1024xd1024_real_T
creates a dynamically allocated variable-size array (emxArray) for the image that you pass
to the Sobel filter. This function initializes the emxArray to a default size and the
elements of the emxArray to 0. It returns the initialized emxArray.
For the Sobel filter application, modify the function to read the grayscale image data from a binary file into the emxArray.
In the function
argInit_d1024xd1024_real_T:
Replace the input argument
void with the argument
FILE
*fd. This variable points to the grayscale image data that the function
reads in.
static emxArray_real_T *argInit_d1024xd1024_real_T(FILE *fd)
Change the values of the variable
iv2 to match the dimensions
of the grayscale image matrix
gray.
iv2 holds
the size values for the dimensions of the emxArray that
argInit_d1024xd1024_real_T creates.
static int iv2[2] = { 484, 648 };
MATLAB stores matrix data in column-major format, while C stores matrix data in row-major format. Declare the dimensions accordingly.
Define a variable
element to hold the values read in from the
grayscale image data.
double element;
Change the
for-loop construct to read data points from the
normalized image into
element by adding an
fread
command to the inner
for-loop.
fread(&element, 1, sizeof(element), fd);
Inside the
for-loop, assign
element as the
value set for the emxArray data.
result->data[b_j0 + result->size[0] * b_j1] = element;
Modified Initialization Function argInit_d1024xd1024_real_T
The MATLAB function
sobel.m interfaces with MATLAB arrays, but the Sobel filter application interfaces with binary
files.
To save the image modified by the Sobel filtering algorithm to a binary file, create a
function
saveImage. The function
saveImage writes
data from an emxArray into a binary file. It uses a construction that is similar to the
one used by the function
argInit_d1024xd1024_real_T.
In the file
main.c:
Define the function
saveImage that takes the address of
emxArray
edgeImage as an input and has output type void.
static void saveImage(emxArray_uint8_T *edgeImage) { }
Define the variables
b_j0 and
b_j1 like they
are defined in the function
argInit_d1024xd1024_real_T.
int b_j0; int b_j1;
Define the variable
element to store data read from the
emxArray.
uint8_T element;
Open a binary file
edge.bin for writing the modified image.
Assign the address of
edge.bin to
FILE
*fd.
FILE *fd = fopen("edge.bin", "wb");
To verify that the executable can open
edge.bin, write an
if-statement that exits the program if the value of
fd is
NULL.
if (fd == NULL) { exit(-1); }
Write a nested
for-loop construct like the one in the function
argInit_d1024xd1024_real_T.
for (b_j0 = 0; b_j0 < edgeImage->size[0U]; b_j0++) { for (b_j1 = 0; b_j1 < edgeImage->size[1U]; b_j1++) { } }
Inside the inner
for-loop, assign the values from the modified
image data to
element.
element = edgeImage->data[b_j0 + edgeImage->size[0] * b_j1];
After the assignment for
element, write the value from
element to the file
edge.bin.
fwrite(&element, 1, sizeof(element), fd);
After the
for-loop construct, close
fd.
fclose(fd);
In the example main function, the function
main_sobel creates
emxArrays for the data for the grayscale and modified images. It calls the function
argInit_d1024xd1024_real_T to initialize the emxArray for the
grayscale image.
main_sobel passes both emxArrays and the threshold
value of 0 that the initialization function
argInit_real_T returns to
the function
sobel. When the function
main_sobel
ends, it discards the result of the function
sobel.
For the Sobel filter application, modify the function
main_sobel
to:
Take the address of the grayscale image data and the threshold value as inputs.
Read the data from the address using
argInit_d1024xd1024_real_T.
Pass the data to the Sobel filtering algorithm with the threshold value
threshold.
Save the result using
saveImage.
In the function
main_sobel:
Replace the input arguments to the function with the arguments
FILE
*fd and
double threshold.
static void main_sobel(FILE *fd, double threshold)
Pass the input argument
fd to the function call for
argInit_d1024xd1024_real_T.
originalImage = argInit_d1024xd1024_real_T(fd);
Replace the threshold value input in the function call to
sobel
with
threshold.
sobel(originalImage, threshold, edgeImage);
After calling the function
sobel, call the function
saveImage with the input
edgeImage.
saveImage(edgeImage);
Modified Function main_sobel
To match the changes that you made to the function definitions, make the following changes to the function declarations:
Change the input of the function
*argInit_d1024xd1024_real_T to
FILE *fd.
static emxArray_real_T *argInit_d1024xd1024_real_T(FILE *fd);
Change the inputs of the function
main_sobel to
FILE
*fd and
double threshold.
static void main_sobel(FILE *fd, double threshold);
Add the function
saveImage.
static void saveImage(emxArray_uint8_T *edgeImage);
Modified Function Declarations
For input/output functions that you use in
main.c, add the header
file
stdio.h to the included files list.
#include <stdio.h>
main.c
Navigate to the working folder if you are not currently in it.
Create a configuration object for a C standalone executable.
cfg = coder.config('exe');
Generate a C standalone executable for the Sobel filter using the configuration object and the modified main function.
codegen -report -config cfg sobel main.c main.h
By default, if you are running MATLAB on a Windows® platform, the executable
sobel.exe is generated in the
current working folder. If you are running MATLAB on a platform other than Windows, the file extension is the corresponding extension for that platform. By
default, the code generated for the executable is in the folder
codegen/exe/sobel.
Create the MATLAB matrix
gray if it is not currently in your MATLAB workspace:
im = imread('hello.jpg');
gray = (0.2989 * double(im(:,:,1)) + 0.5870 * double(im(:,:,2)) + 0.1140 * double(im(:,:,3)))/255;
Write the matrix
gray into a binary file using the
fopen and
fwrite commands. The application reads
in this binary file.
fid = fopen('gray.bin', 'w'); fwrite(fid, gray, 'double'); fclose(fid);
Run the executable, passing to it the file
gray.bin and the
threshold value 0.7.
To run the example in MATLAB on a Windows platform:
system('sobel.exe gray.bin 0.7');
The executable generates the file
edge.bin.
Read the file
edge.bin into a MATLAB matrix
edgeImExe using the
fopen and
fread commands.
fd = fopen('edge.bin', 'r'); edgeImExe = fread(fd, size(gray), 'uint8'); fclose(fd);
Pass the matrix
edgeImExe to the function
repmat and display the image.
im3Exe = repmat(edgeImExe, [1 1 3]); image(im3Exe);
The image matches the images from the MATLAB and MEX functions. | https://www.mathworks.com/help/coder/ug/generate-and-modify-an-example-cc-main-function.html | CC-MAIN-2021-17 | en | refinedweb |
I've read through everything I can on the forums but I'm still having difficulty getting my universe to warm up using the History() method. In the attached algorithm, what I believe is happening is:
from clr import AddReference
AddReference("System")
AddReference("QuantConnect.Algorithm")
AddReference("QuantConnect.Indicators")
AddReference("QuantConnect.Common")
AddReference("QuantConnect.Algorithm.Framework")
from System import *
from QuantConnect import *
from QuantConnect.Data import *
from QuantConnect.Algorithm import *
from QuantConnect.Indicators import *
from System.Collections.Generic import List
class PublicHelp(QCAlgorithm):
def Initialize(self):
self.SetStartDate(2017,1,1) #Set Start Date
self.SetEndDate(datetime.now().date() - timedelta(1)) #Set End Date
#self.SetEndDate(2013,1,1) #Set End Date
self.SetCash(150000) #Set Strategy Cash
self.UniverseSettings.Resolution = Resolution.Hour
self.averages = { };
self.AddEquity("SPY", Resolution.Hour)
self.AddUniverse(self.CoarseSelectionFunction)
self.Schedule.On(self.DateRules.EveryDay("SPY"), self.TimeRules.At(9,31), self.BuyFunc)
I'm initializing and importing most of the basic stuff. I've set my Universe resolution to Hours, so the CoarseSelectionFunction should be run each hour, yeah?
So going forward, I have my coarse filter which filters down by volume and uses the EMACross tutorial code that you can find in the Universe selection portion of the Documentation.
#Universe Filter
# sort the data by volume and price, apply the moving average crossver, and take the top 24 sorted results based on breakout magnitude
def CoarseSelectionFunction(self, coarse):
filtered = [ x for x in coarse if (x.DollarVolume > 50000000) ]
# We are going to use a dictionary to refer the object that will keep the moving averages
for cf in filtered:
if cf.Symbol not in self.averages:
self.averages[cf.Symbol] = SymbolData(cf.Symbol)
# Updates the SymbolData object with current EOD price
avg = self.averages[cf.Symbol]
history = self.History(cf.Symbol, 16)
avg.WarmUpIndicators(history.iloc[cf.Symbol])
avg.update(cf.EndTime, cf.AdjustedPrice)
# Filter the values of the dict: we only want up-trending securities
values = list(filter(lambda x: x.is_uptrend, self.averages.values()))
# Sorts the values of the dict: we want those with greater difference between the moving averages
values.sort(key=lambda x: x.scale, reverse=True)
for x in values[:200]:
self.Log('symbol: ' + str(x.symbol.Value) + ' scale: ' + str(x.scale))
# we need to return only the symbol objects
return [ x.symbol for x in values[:200] ]
# this event fires whenever we have changes to our universe
def OnSecuritiesChanged(self, changes):
self.changes = changes
# liquidate removed securities
for security in changes.RemovedSecurities:
if security.Invested:
self.Liquidate(security.Symbol)
#EMA Crossover Class
class SymbolData(object):
def __init__(self, symbol):
self.symbol = symbol
self.fast = ExponentialMovingAverage(50)
self.slow = ExponentialMovingAverage(200)
self.is_uptrend = False
self.scale = None
def update(self, time, value):
if self.fast.Update(time, value) and self.slow.Update(time, value):
fast = self.fast.Current.Value
slow = self.slow.Current.Value
self.is_uptrend = (fast / slow) > 1.00
if self.is_uptrend:
self.scale = (fast - slow) / ((fast + slow) / 2.0)
def WarmUpIndicators(self, history):
for tuple in history.itertuples():
self.fast.Update(tuple.index, tuple.close)
self.slow.Update(tuple.index, tuple.close)
Here I'm running into my main problem. I don't know where to/how to apply the history loop to warmup each indicator as its added to the universe. In this example I'm getting the error:
Runtime Error: TypeError : object is not callable I've even attempted to constantly keep the indicators updated in the OnData method like so:
at CoarseSelectionFunction in main.py:line 20
TypeError : object is not callable (Open Stacktrace)
#OnData
def OnData(self, data):
'''OnData event is the primary entry point for your algorithm. Each new data point will be pumped in here.'''
'''Arguments:
data: Slice object keyed by symbol containing the stock data'''
#Constantly update the universe moving averages and if a slice does not contain any data for the security, remove it from the universe
if self.IsMarketOpen("SPY"):
if bool(self.averages):
for x in self.Securities.Values:
if data.ContainsKey(x.Symbol):
if data[x.Symbol] is None:
continue
avg = self.averages[x.Symbol]
avg.update(data[x.Symbol].EndTime, data[x.Symbol].Open)
else:
self.RemoveSecurity(x.Symbol)
However, this isn't working. Perhaps I don't understand entirely, but I thought each hour a slice of data of be pumped through the indicators stored in the averages dictionary and so after 200 hours the indicators would be warmed up. I've tried this on a slow indicator of only 5 hours assuming it would be ready by the final hour, but this isn't the case.
Could someone with a deeper understanding explain my errors please? I've attached the project, for what its worth. Thank you in advance. | https://www.quantconnect.com/forum/discussion/5951/universe-warmup-python/ | CC-MAIN-2021-17 | en | refinedweb |
In part 1 of this series, I cover important concepts about measuring the accuracy of time on Amazon EC2 instances . I discussed calculating ClockErrorBound (𝜀) and using its value as a range between which system time is accurate. In this part, I walk through the process of using Amazon CloudWatch to measure and monitor system time accuracy via an example exercise.
Measuring and monitoring system time
The following exercise walks you through the steps to measure and monitor time on your EC2 instances.
Prerequisites
- Account permissions to install packages on two EC2 instances.
- Account permissions to create custom metrics and alerts in CloudWatch.
- An Amazon Simple Notification Service (Amazon SNS) topic configured to deliver notifications.
- EC2 instance with AWS CLI configured with appropriate credentials.
The following example works on EC2 instances running Amazon Linux. You might need changes for your OS.
Step 1. Install chrony on an EC2 instance
A flexible implementation of NTP, chrony is a replacement for the Network Time Protocol (NTP) included in most Linux distributions. On Amazon Linux 2, the default configuration uses chrony and is configured to use the Amazon Time Sync Service.
If you are not using it already, start by replacing NTP on your EC2 Linux instance with chrony.
sudo yum erase ntp* sudo yum -y install chrony sudo service chronyd start
The instance now uses chronyd to sync local time with the Amazon Time Sync Service available at 169.254.169.123.
Run the following command to configure your instance to start the chrony service as part of the boot sequence.
sudo chkconfig chronyd on
By default, chrony polls the NTP servers every 32 to 1,024 seconds. To improve the clock accuracy on your instance, we recommend that you change the polling internal to 16 seconds. To do this, edit the chrony configuration file (
/etc/chrony.conf) on your instance and add the following line:
server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4
Amazon Time Sync service is available at the 169.254.169.123 IP address for any instance running in an Amazon Virtual Private Cloud (Amazon VPC).
The
minpoll and
maxpoll parameters configure the minimum and maximum time interval for polling. The values for this parameter are the number of seconds as a power of two. In this case, the parameter is set to 4 (2^4) and this sets the minimum and maximum polling time interval to 16 seconds. A lower and appropriate frequency of polling ensures lower ClockErrorBound values.
The Amazon CloudWatch dashboard in Figure 1 displays data from two instances. Chronyd on Instance 1 is configured with the server directive and has minpoll and maxpoll values of 4. Notice that the ClockErrorBound values are consistently less than 1 millisecond (ms) and therefore do not trigger the CloudWatch alarm.
For more information about setting the time on EC2 instances, check set the time for your Linux instance in the Amazon EC2 user guide for Linux instances.
Figure 1: Amazon CloudWatch dashboard displaying alarms and metrics
Step 2. Create a script to monitor the drift in system time
The chrony client output provides detailed metrics on differences between system time and reference time. You can query the client output to determine the time difference and report it as a custom metric to CloudWatch. Here is the chronyc client output:
[[email protected] ~]$ chronyc tracking Reference ID : A9FEA97B (169.254.169.123) Stratum : 4 Ref time (UTC) : Thu Feb 04 03:22:27 2021.000431 seconds Update interval: 16.0 seconds Leap status : Normal
Use the output to calculate the range within which system time is accurate. Three fields from the output, system time (local offset), root delay, and root dispersion, are used to calculate the time offset on the instance and the ClockErrorBound (𝜀).
ClockErrorBound(𝜀) = System time + (0.5 * Root delay) + Root dispersion)
In the preceding example, the clock error bound reported by chrony is:
𝜀 = 0.000000011 + 0.000431 + 0.5 x 0.000544 = 0.7 milliseconds
ClockErrorBound(𝜀) is a good proxy for the accuracy of system time because it gives us the bounds between which observed time (C(t)) is accurate. Reference time is between C(t) + 𝜀 and(t) – 𝜀. The following shell script (
timepublisher.sh) calculates the ClockErrorBound(𝜀) value on a Linux instance. The last line of the script uses the AWS CLI to create or update a CloudWatch custom metric, ClockErrorBound with the calculated value in milliseconds. A new custom metric is created if it doesn’t exist already.
#!/bin/bash SYSTEM_TIME="" ROOT_DELAY="" ROOT_DISPERSION="" INSTANCE_ID=`curl -s` output=$(chronyc tracking) while read -r line; do # look for "System time", "Root delay", "Root dispersion". if [[ $line == "System time"* ]] then SYSTEM_TIME=`echo $line | cut -f2 -d":" | cut -f2 -d" "` elif [[ $line == "Root delay"* ]] then ROOT_DELAY=`echo $line | cut -f2 -d":" | cut -f2 -d" " ` elif [[ $line == "Root dispersion"* ]] then ROOT_DISPERSION=`echo $line | cut -f2 -d":" | cut -f2 -d" " ` fi done <<< "$output" CLOCK_ERROR_BOUND=`echo "($SYSTEM_TIME + (.5 * $ROOT_DELAY) + $ROOT_DISPERSION) * 1000" | bc ` # create or update a custom metric in CW. aws cloudwatch put-metric-data --metric-name ClockErrorBound --dimensions Instance=$INSTANCE_ID --namespace "TimeDrift" --value $CLOCK_ERROR_BOUND
Step 3. Create a cron job to publish metrics automatically
Next, you create a cron job to run this script at a regular interval. The following cron entry runs the
timepublisher.sh script every five minutes.
*/5 * * * * $HOME/timepublisher.sh
The ClockErrorBound metrics are available in the CloudWatch console when they are published.
Open the Amazon CloudWatch console and from the left navigation pane, choose Metrics. The CloudErrorBound metrics published by the
timepublisher.sh script are grouped by EC2 instance ID in the TimeDrift namespace. In this example, there are 158 total metrics and your total metrics may differ. Two of them are the custom TimeDrift metrics being published by the
timepublisher.sh script running on the two instances.
Figure 2: Amazon CloudWatch metrics grouped into namespaces
To view the metric data, choose the TimeDrift link, and then choose the Instance link. Your metrics will be grouped by instance. Figure 3 shows two instances reporting metrics in the TimeDrift custom namespace. To view the data, choose the metrics from the list in Figure 3. The graph displays the ClockErrorBound(𝜀) values in milliseconds over a one-hour time period. In the next step, you will use these metrics to create a CloudWatch alarm.
Figure 3: List of metrics available in the TimeDrift namespace
Step 4. Create a ClockErrorBound CloudWatch alarm
Create a CloudWatch alarm to monitor the value of the ClockErrorBound metric created in the previous step and notify a recipient when the value exceeds a threshold. Use a tolerance of 1 ms drift in your example and set an alarm threshold for this value. Your time drift tolerance differs based on your workload, so choose the appropriate value for your environment. When this threshold is exceeded, the alarm is triggered, its state will change from OK to ALARM, and a notification will be sent based on the alarm configuration.
In the Amazon CloudWatch console, choose Alarms, and then choose Create alarm.
Figure 4: Alarms page in the Amazon CloudWatch console
Choose the TimeDrift metric that your alarm will be based on. The value of this metric will determine the state of the alarm.
Figure 5: Metric selection is the first step in the alarm creation process
Search for the ClockErrorBound custom metric and view a list of matching metrics available in CloudWatch. You can also navigate the metric tree to display and then choose these metrics. Select the first one in the list for your first alarm.
Figure 6: Choosing a metric from a list of all available metrics for the alarm
There are a few different ways to configure the alarm behavior. Metric name and instance values are populated from the metric selected earlier. The Statistic option defines how you want the metric value to be evaluated (Sum, Average, Max, Min, Sample Count, p90). You can use the default (Average) in this case. Set the frequency of alarm evaluation in the Period field. Because the shell script updates the metric every five minutes, you can keep five minutes for the evaluation period, too. A more frequent evaluation does not result in any benefit. Set a static threshold of 1 (ms) and configure the alarm to trigger when the CloudErrorBound value exceeds that threshold.
Figure 7: Specify metric conditions for the alarm
Now specify the threshold type and value conditions for the alarm trigger. You can either use a static (hardcoded) or dynamic threshold type. In this case, because we know the specific tolerance for acceptable time drive (1 ms), set the alarm to trigger whenever the value of the CloudErrorBound metric exceeds a static value of 1. In Additional configuration, you can configure options for datapoints in an alarm and missing data treatment.
Figure 8: Specify alarm thresholds and other conditions
There are actions associated with alarms. These actions are run when the alarm is triggered. In Amazon CloudWatch, there are five types of actions you can configure in response to an alarm. For example, an action can send an email, message, or mobile push notification through Amazon SNS. You can configure the alarm to send notification to a previously created
CW_Alarms SNS topic. (See Prerequisites.) Alarms can be configured to deliver notifications to multiple topics, which are useful if you want to group recipients for your environment.
Figure 9: Configure notifications for the alarm
Figure 10 shows the other available actions: Auto Scaling, EC2, Ticket, and Systems Manager OpsCenter. Depending on your use case and environment, these action types help with automation.
Figure 10: Configure other actions in response to alarm.
Now add a name and description for your alarm. CloudWatch displays the alarm and its configuration in a preview before activating it.
Figure 11: Add a name and description for the alarm
CloudWatch displays a success message and a list of alarms configured for the account.
Figure 12: Success banner
You can use the AWS CLI or SDK to automate the alarm creation process. Use the following command to create the alarm used in this example. Replace the SNS topic Amazon Resource Name (ARN) in the
--alarm-actions option with the ARN of your SNS topic or action. Each action is specified as an ARN. Use the ID for your instance in
InstanceId.
aws cloudwatch put-metric-alarm --alarm-name "Instance 1 - ClockErrorBound > 1 ms" \<br />--alarm-description "CloudErrorBound exceeds 1 ms. for Instance 1" \<br />--metric-name ClockErrorBound --namespace TimeDrift --statistic Average --period 300 \<br />--threshold 1 --comparison-operator GreaterThanThreshold \<br />--dimensions "Name=InstanceId,Value=INSTANCE_ID" --evaluation-periods 1 \<br />--alarm-actions arn:aws:sns:us-west-2:111222333:CW_Alarms
This example creates an alarm for each instance in your environment. Because customers can get alarm fatigue as their environment grows in size, Amazon CloudWatch offers a composite alarm capability that you can use to aggregate alarms, reduce alarm noise, and increase monitoring efficiency. Composite alarms aggregate multiple alarms into a single, higher-level alarm. You can use them to create logical conditions for the alarm triggers.
Create a CloudErrorBound alarm for at least one other instance in your environment. When you select two or more alarms, you can create a composite alarm based on these metric alarms.
Figure 13: Selecting the group of alarms for a new composite alarm
When you choose Create composite alarm, you can enter the logic for the alarm in an editor. Creating a composite alarm for this use case is helpful because you can create multiple metric alarms with no notifications and manage their notifications in the single composite alarm definition. For more information, see the Improve monitoring efficiency using Amazon CloudWatch composite alarms blog post.
Figure 14: Composite alarm conditions as logical evaluations
The composite alarm notification and action settings are the same as those for the metric alarms you created earlier.
Cleanup
To avoid ongoing charges to your account, delete the resources you created.
- Edit the crontab on your instances and remove the directive to run the
timepublisher.shscript.
- Open the Amazon CloudWatch console, navigate to the list of alarms, and delete the three alarms you created. Deleted the composite alarm first and then the two metric alarms.
- In the CloudWatch console, delete the dashboards you created in this exercise.
Conclusion
In this post, I showed how you can use CloudWatch to monitor time drift on EC2 instances. You can use these steps to monitor and alarm on any other system metrics. I installed chrony on an EC2 instance and then used the output of the chronyc client to calculate the value for ClockErrorBound(𝜀). It is a measure of the range in which the system time has drifted from the reference time. I published this value at a five-minute frequency as a CloudWatch metric through cron on my system. Finally, I created a CloudWatch alarm to alert me when the time drift exceeds 1 ms. For further reading, check the following documentation:
AWS CLI Reference for CloudWatch
Amazon CloudWatch user guide, including Creating a Composite Alarm
About the authors
Sanjay Bhatia is a Principal Technical Account Manager for Strategic Accounts at AWS. Based in the Bay Area, Sanjay works with a global team to help a strategic AWS customer operate their workloads efficiently on AWS. Sanjay has helped a diverse set of customers design and operate a broad variety of workloads using AWS Services and has a keen interest in Performance Management solutions.
Julien Ridoux is a Senior Software Engineer with AWS, where he focuses on continuously improving the health and availability of EC2. After an academic career and a focus on accurate clock synchronization, Julien now enjoys facing the challenges of building systems at Amazon scale. Outside of work, Julien can be found enjoying the many outdoor activities the Pacific North West region has to offer. | https://awsfeed.com/whats-new/management-tools/manage-amazon-ec2-instance-clock-accuracy-using-amazon-time-sync-service-and-amazon-cloudwatch-part-2 | CC-MAIN-2021-17 | en | refinedweb |
What's new in C# 9.0
C# 9.0 adds the following features and enhancements to the C# language:
- Records
- Init only setters
- Top-level statements
- Pattern matching enhancements
- Performance and interop
- Native sized integers
- Function pointers
- Suppress emitting localsinit flag
- Fit and finish features
- Target-typed
newexpressions
- static anonymous functions
- Target-typed conditional expressions
- Covariant return types
- Extension
GetEnumeratorsupport for
foreachloops
- Lambda discard parameters
- Attributes on local functions
- Support for code generators
- Module initializers
- New features for partial methods
C# 9.0 is supported on .NET 5. For more information, see C# language versioning.
You can download the latest .NET SDK from the .NET downloads page.
Record types
C# 9.0 introduces record types. You use the
record keyword to define a reference type that provides built-in functionality for encapsulating data. You can create record types with immutable properties by using positional parameters or standard property syntax:
public record Person(string FirstName, string LastName);
public record Person { public string FirstName { get; init; } public string LastName { get; init; } };
You can also create record types with mutable properties and fields:
public record Person { public string FirstName { get; set; } public string LastName { get; set; } };
While records can be mutable, they are primarily intended for supporting immutable data models. The record type offers the following features:
- Concise syntax for creating a reference type with immutable properties
- Behavior useful for a data-centric reference type:
- Support for inheritance hierarchies
You can use structure types to design data-centric types that provide value equality and little or no behavior. But for relatively large data models, structure types have some disadvantages:
- They don't support inheritance.
- They're less efficient at determining value equality. For value types, the ValueType.Equals method uses reflection to find all fields. For records, the compiler generates the
Equalsmethod. In practice, the implementation of value equality in records is measurably faster.
- They use more memory in some scenarios, since every instance has a complete copy of all of the data. Record types are reference types, so a record instance contains only a reference to the data.
Positional syntax for property definition
You can use positional parameters to declare properties of a record and to initialize the property values when you create an instance:
public record Person(string FirstName, string LastName); public static void Main() { Person person = new("Nancy", "Davolio"); Console.WriteLine(person); // output: Person { FirstName = Nancy, LastName = Davolio } }
When you use the positional syntax for property definition, the compiler creates:
- A public init-only auto-implemented property for each positional parameter provided in the record declaration. An init-only property can only be set in the constructor or by using a property initializer.
- A primary constructor whose parameters match the positional parameters on the record declaration.
- A
Deconstructmethod with an
outparameter for each positional parameter provided in the record declaration.
For more information, see Positional syntax in the C# language reference article about records.
Immutability
A record type is not necessarily immutable. You can declare properties with
set accessors and fields that aren't
readonly. But while records can be mutable, they make it easier to create immutable data models. Properties that you create by using positional syntax are immutable.
Immutability can be useful when you want a data-centric type to be thread-safe or a hash code to remain the same in a hash table. It can prevent bugs that happen when you pass an argument by reference to a method, and the method unexpectedly changes the argument value.
The features unique to record types are implemented by compiler-synthesized methods, and none of these methods compromises immutability by modifying object state.
Value equality
Value equality means that two variables of a record type are equal if the types match and all property and field values match. For other reference types, equality means identity. That is, two variables of a reference type are equal if they refer to the same object.
The following example illustrates value equality of record types:
public record Person(string FirstName, string LastName, string[] PhoneNumbers); public static void Main() { var phoneNumbers = new string[2]; Person person1 = new("Nancy", "Davolio", phoneNumbers); Person person2 = new("Nancy", "Davolio", phoneNumbers); Console.WriteLine(person1 == person2); // output: True person1.PhoneNumbers[0] = "555-1234"; Console.WriteLine(person1 == person2); // output: True Console.WriteLine(ReferenceEquals(person1, person2)); // output: False }
In
class types, you could manually override equality methods and operators to achieve value equality, but developing and testing that code would be time-consuming and error-prone. Having this functionality built-in prevents bugs that would result from forgetting to update custom override code when properties or fields are added or changed.
For more information, see Value equality in the C# language reference article about records.
Nondestructive mutation
If you need to mutate immutable properties of a record instance, you can use a
with expression to achieve nondestructive mutation. A
with expression makes a new record instance that is a copy of an existing record instance, with specified properties and fields modified. You use object initializer syntax to specify the values to be changed, as shown in the following example:
public record Person(string FirstName, string LastName) { public string[] PhoneNumbers { get; init; } } public static void Main() { Person person1 = new("Nancy", "Davolio") { PhoneNumbers = new string[1] }; Console.WriteLine(person1); // output: Person { FirstName = Nancy, LastName = Davolio, PhoneNumbers = System.String[] } Person person2 = person1 with { FirstName = "John" }; Console.WriteLine(person2); // output: Person { FirstName = John, LastName = Davolio, PhoneNumbers = System.String[] } Console.WriteLine(person1 == person2); // output: False person2 = person1 with { PhoneNumbers = new string[1] }; Console.WriteLine(person2); // output: Person { FirstName = Nancy, LastName = Davolio, PhoneNumbers = System.String[] } Console.WriteLine(person1 == person2); // output: False person2 = person1 with { }; Console.WriteLine(person1 == person2); // output: True }
For more information, see Nondestructive mutation in the C# language reference article about records.
Built-in formatting for display
Record types have a compiler-generated ToString method that displays the names and values of public properties and fields. The
ToString method returns a string of the following format:
<record type name> { <property name> = <value>, <property name> = <value>, ...}
For reference types, the type name of the object that the property refers to is displayed instead of the property value. In the following example, the array is a reference type, so
System.String[] is displayed instead of the actual array element values:
Person { FirstName = Nancy, LastName = Davolio, ChildNames = System.String[] }
For more information, see Built-in formatting in the C# language reference article about records.
Inheritance
A record can inherit from another record. However, a record can't inherit from a class, and a class can't inherit from a record.
The following example illustrates inheritance with positional property syntax:
public abstract record Person(string FirstName, string LastName); public record Teacher(string FirstName, string LastName, int Grade) : Person(FirstName, LastName); public static void Main() { Person teacher = new Teacher("Nancy", "Davolio", 3); Console.WriteLine(teacher); // output: Teacher { FirstName = Nancy, LastName = Davolio, Grade = 3 } }
For two record variables to be equal, the run-time type must be equal. The types of the containing variables might be different. This is illustrated in the following code); Person student = new Student("Nancy", "Davolio", 3); Console.WriteLine(teacher == student); // output: False Student student2 = new Student("Nancy", "Davolio", 3); Console.WriteLine(student2 == student); // output: True }
In the example, all instances have the same properties and the same property values. But
student == teacher returns
False although both are
Person-type variables. And
student == student2 returns
True although one is a
Person variable and one is a
Student variable.
All public properties and fields of both derived and base types are included in the
ToString output, as shown in the following); Console.WriteLine(teacher); // output: Teacher { FirstName = Nancy, LastName = Davolio, Grade = 3 } }
For more information, see Inheritance in the C# language reference article about records.
Init only setters
Init only setters provide consistent syntax to initialize members of an object. Property initializers make it clear which value is setting which property. The downside is that those properties must be settable. Starting with C# 9.0, you can create
init accessors instead of
set accessors for properties and indexers. Callers can use property initializer syntax to set these values in creation expressions, but those properties are readonly once construction has completed. Init only setters provide a window to change state. That window closes when the construction phase ends. The construction phase effectively ends after all initialization, including property initializers and with-expressions have completed.
You can declare
init only setters in any type you write. For example, the following struct defines a weather observation structure:
public struct WeatherObservation { public DateTime RecordedAt { get; init; } public decimal TemperatureInCelsius { get; init; } public decimal PressureInMillibars { get; init; } public override string ToString() => $"At {RecordedAt:h:mm tt} on {RecordedAt:M/d/yyyy}: " + $"Temp = {TemperatureInCelsius}, with {PressureInMillibars} pressure"; }
Callers can use property initializer syntax to set the values, while still preserving the immutability:
var now = new WeatherObservation { RecordedAt = DateTime.Now, TemperatureInCelsius = 20, PressureInMillibars = 998.0m };
An attempt to change an observation after initialization results in a compiler error:
// Error! CS8852. now.TemperatureInCelsius = 18;
Init only setters can be useful to set base class properties from derived classes. They can also set derived properties through helpers in a base class. Positional records declare properties using init only setters. Those setters are used in with-expressions. You can declare init only setters for any
class,
struct, or
record you define.
For more information, see init (C# Reference).
Top-level statements
Top-level statements remove unnecessary ceremony from many applications. Consider the canonical "Hello World!" program:
using System; namespace HelloWorld { class Program { static void Main(string[] args) { Console.WriteLine("Hello World!"); } } }
There's only one line of code that does anything. With top-level statements, you can replace all that boilerplate with the
using directive and the single line that does the work:
using System; Console.WriteLine("Hello World!");
If you wanted a one-line program, you could remove the
using directive and use the fully qualified type name:
System.Console.WriteLine("Hello World!");
Only one file in your application may use top-level statements. If the compiler finds top-level statements in multiple source files, it's an error. It's also an error if you combine top-level statements with a declared program entry point method, typically a
Main method. In a sense, you can think that one file contains the statements that would normally be in the
Main method of a
Program class.
One of the most common uses for this feature is creating teaching materials. Beginner C# developers can write the canonical "Hello World!" in one or two lines of code. None of the extra ceremony is needed. However, seasoned developers will find many uses for this feature as well. Top-level statements enable a script-like experience for experimentation similar to what Jupyter notebooks provide. Top-level statements are great for small console programs and utilities. Azure Functions is an ideal use case for top-level statements.
Most importantly, top-level statements don't limit your application's scope or complexity. Those statements can access or use any .NET class. They also don't limit your use of command-line arguments or return values. Top-level statements can access an array of strings named
args. If the top-level statements return an integer value, that value becomes the integer return code from a synthesized
Main method. The top-level statements may contain async expressions. In that case, the synthesized entry point returns a
Task, or
Task<int>.
For more information, see Top-level statements in the C# Programming Guide.
Pattern matching enhancements
C# 9 includes new pattern matching improvements:
- Type patterns match a variable is a type
- Parenthesized patterns enforce or emphasize the precedence of pattern combinations
- Conjunctive
andpatterns require both patterns to match
- Disjunctive
orpatterns require either pattern to match
- Negated
notpatterns require that a pattern doesn't match
- Relational patterns require the input be less than, greater than, less than or equal, or greater than or equal to a given constant.
These patterns enrich the syntax for patterns. Consider these examples:
public static bool IsLetter(this char c) => c is >= 'a' and <= 'z' or >= 'A' and <= 'Z';
With optional parentheses to make it clear that
and has higher precedence than
or:
public static bool IsLetterOrSeparator(this char c) => c is (>= 'a' and <= 'z') or (>= 'A' and <= 'Z') or '.' or ',';
One of the most common uses is a new syntax for a null check:
if (e is not null) { // ... }
Any of these patterns can be used in any context where patterns are allowed:
is pattern expressions,
switch expressions, nested patterns, and the pattern of a
switch statement's
case label.
For more information, see Patterns (C# reference).
For more information, see the Relational patterns and Logical patterns sections of the Patterns article.
Performance and interop
Three new features improve support for native interop and low-level libraries that require high performance: native sized integers, function pointers, and omitting the
localsinit flag.
Native sized integers,
nint and
nuint, are integer types. They're expressed by the underlying types System.IntPtr and System.UIntPtr. The compiler surfaces additional conversions and operations for these types as native ints. Native sized integers define properties for
MaxValue or
MinValue. These values can't be expressed as compile-time constants because they depend on the native size of an integer on the target machine. Those values are readonly at runtime. You can use constant values for
nint in the range [
int.MinValue ..
int.MaxValue]. You can use constant values for
nuint in the range [
uint.MinValue ..
uint.MaxValue]. The compiler performs constant folding for all unary and binary operators using the System.Int32 and System.UInt32 types. If the result doesn't fit in 32 bits, the operation is executed at runtime and isn't considered a constant. Native sized integers can increase performance in scenarios where integer math is used extensively and needs to have the fastest performance possible. For more information, see
nint and
nuint types
Function pointers provide an easy syntax to access the IL opcodes
ldftn and
calli. You can declare function pointers using new
delegate* syntax. A
delegate* type is a pointer type. Invoking the
delegate* type uses
calli, in contrast to a delegate that uses
callvirt on the
Invoke() method. Syntactically, the invocations are identical. Function pointer invocation uses the
managed calling convention. You add the
unmanaged keyword after the
delegate* syntax to declare that you want the
unmanaged calling convention. Other calling conventions can be specified using attributes on the
delegate* declaration. For more information, see Unsafe code and pointer types.
Finally, you can add the System.Runtime.CompilerServices.SkipLocalsInitAttribute to instruct the compiler not to emit the
localsinit flag. This flag instructs the CLR to zero-initialize all local variables. The
localsinit flag has been the default behavior for C# since 1.0. However, the extra zero-initialization may have measurable performance impact in some scenarios. In particular, when you use
stackalloc. In those cases, you can add the SkipLocalsInitAttribute. You may add it to a single method or property, or to a
class,
struct,
interface, or even a module. This attribute doesn't affect
abstract methods; it affects the code generated for the implementation. For more information, see
SkipLocalsInit attribute.
These features can improve performance in some scenarios. They should be used only after careful benchmarking both before and after adoption. Code involving native sized integers must be tested on multiple target platforms with different integer sizes. The other features require unsafe code.
Fit and finish features
Many of the other features help you write code more efficiently. In C# 9.0, you can omit the type in a
new expression when the created object's type is already known. The most common use is in field declarations:
private List<WeatherObservation> _observations = new();
Target-typed
new can also be used when you need to create a new object to pass as an argument to a method. Consider a
ForecastFor() method with the following signature:
public WeatherForecast ForecastFor(DateTime forecastDate, WeatherForecastOptions options)
You could call it as follows:
var forecast = station.ForecastFor(DateTime.Now.AddDays(2), new());
Another nice use for this feature is to combine it with init only properties to initialize a new object:
WeatherStation station = new() { Location = "Seattle, WA" };
You can return an instance created by the default constructor using a
return new(); statement.
A similar feature improves the target type resolution of conditional expressions. With this change, the two expressions need not have an implicit conversion from one to the other, but may both have implicit conversions to a target type. You likely won't notice this change. What you will notice is that some conditional expressions that previously required casts or wouldn't compile now just work.
Starting in C# 9.0, you can add the
static modifier to lambda expressions or anonymous methods. Static lambda expressions are analogous to the
static local functions: a static lambda or anonymous method can't capture local variables or instance state. The
static modifier prevents accidentally capturing other variables.
Covariant return types provide flexibility for the return types of override methods. An override method can return a type derived from the return type of the overridden base method. This can be useful for records and for other types that support virtual clone or factory methods.
In addition, the
foreach loop will recognize and use an extension method
GetEnumerator that otherwise satisfies the
foreach pattern. This change means
foreach is consistent with other pattern-based constructions such as the async pattern, and pattern-based deconstruction. In practice, this change means you can add
foreach support to any type. You should limit its use to when enumerating an object makes sense in your design.
Next, you can use discards as parameters to lambda expressions. This convenience enables you to avoid naming the argument, and the compiler may avoid using it. You use the
_ for any argument. For more information, see the Input parameters of a lambda expression section of the Lambda expressions article.
Finally, you can now apply attributes to local functions. For example, you can apply nullable attribute annotations to local functions.
Support for code generators
Two final features support C# code generators. C# code generators are a component you can write that is similar to a roslyn analyzer or code fix. The difference is that code generators analyze code and write new source code files as part of the compilation process. A typical code generator searches code for attributes or other conventions.
A code generator reads attributes or other code elements using the Roslyn analysis APIs. From that information, it adds new code to the compilation. Source generators can only add code; they aren't allowed to modify any existing code in the compilation.
The two features added for code generators are extensions to partial method syntax, and module initializers. First, the changes to partial methods. Before C# 9.0, partial methods are
private but can't specify an access modifier, have a
void return, and can't have
out parameters. These restrictions meant that if no method implementation is provided, the compiler removes all calls to the partial method. C# 9.0 removes these restrictions, but requires that partial method declarations have an implementation. Code generators can provide that implementation. To avoid introducing a breaking change, the compiler considers any partial method without an access modifier to follow the old rules. If the partial method includes the
private access modifier, the new rules govern that partial method. For more information, see partial method (C# Reference).
The second new feature for code generators is module initializers. Module initializers are methods that have the ModuleInitializerAttribute attribute attached to them. These methods will be called by the runtime before any other field access or method invocation within the entire module. A module initializer method:
- Must be static
- Must be parameterless
- Must return void
- Must not be a generic method
- Must not be contained in a generic class
- Must be accessible from the containing module
That last bullet point effectively means the method and its containing class must be internal or public. The method can't be a local function. For more information, see
ModuleInitializer attribute. | https://docs.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-9 | CC-MAIN-2021-17 | en | refinedweb |
Getting a result from a parallel task in Java
September 18, 2016 Leave a comment
In this post we saw how to execute a task on a different thread in Java. The examples demonstrated how to start a thread in the background without the main thread waiting for a result. This strategy is called fire-and-forget and is ideal in cases where the task has no return value.
However, that’s not always the case. What if we want to wait for the task to finish and return a result? Welcome to the future… or to the Future with a capital F.
Imagine that.
If we want to get the result from all 4 services then we can obviously call them one by one at first:
CalculationService adder = new AdditionService(); CalculationService subtractor = new SubtractionService(); CalculationService multiplier = new MultiplicationService(); CalculationService divider = new DivisionService(); int firstOperand = 10; int secondOperand = 5; //all on single thread Instant start = Instant.now(); int addResult = adder.calculate(firstOperand, secondOperand); int subtractResult = subtractor.calculate(addResult, secondOperand); int multplResult = multiplier.calculate(addResult, secondOperand); int divResult = divider.calculate(addResult, secondOperand); Instant finish = Instant.now(); Duration duration = Duration.between(start, finish); long seconds = duration.getSeconds();
As expected it takes 10 seconds to run all calculations. However, we can do a lot better. We can let all 4 operations execute on their own threads which will run in parallel. Then we wait for all of them to return their results. We’ll reuse what we learnt about the ExecutorService in the previous post.
One way to solve this problem is to implement the Callable of T interface as follows:
public class CalculationServiceTask implements Callable<Integer> { private final CalculationService calculationService; private final int firstOperand; private final int secondOperand; public CalculationServiceTask(CalculationService calculationService, int firstOperand, int secondOperand) { this.calculationService = calculationService; this.firstOperand = firstOperand; this.secondOperand = secondOperand; } @Override public Integer call() throws Exception { return calculationService.calculate(firstOperand, secondOperand); } }
The T type parameter declares the return type. Callable has a single function called call() where we implement what and how to return. The submit() method of the executor service can accept a single Callable and return a Future of T where the Future object holds the result of the operation. In our case though it makes more sense to run the invokeAll method which accepts a collection of Callables and let them execute in parallel. It returns a collection of Future objects. We can then iterate this collection and get each result one by one:
List<Callable<Integer>> calculationTasks = new ArrayList<>(); calculationTasks.add(new CalculationServiceTask(adder, firstOperand, secondOperand)); calculationTasks.add(new CalculationServiceTask(subtractor, firstOperand, secondOperand)); calculationTasks.add(new CalculationServiceTask(multiplier, firstOperand, secondOperand)); calculationTasks.add(new CalculationServiceTask(divider, firstOperand, secondOperand)); ExecutorService newCachedThreadPool = Executors.newCachedThreadPool(); try { List<Future<Integer>> invokeAll = newCachedThreadPool.invokeAll(calculationTasks); for (Future<Integer> future : invokeAll) { int result = future.get(); System.out.println(result); } } catch (InterruptedException | ExecutionException ex) { System.err.println(ex.getMessage()); } Instant finish = Instant.now(); Duration duration = Duration.between(start, finish); long seconds = duration.getSeconds();
This time it took 4 seconds to complete all tasks. That corresponds to the longest running task, i.e. division.
Like we saw with runnables in the previous post we can simplify our code with Lambdas in Java 8:
calculationTasks.add(() -> adder.calculate(firstOperand, secondOperand)); calculationTasks.add(() -> subtractor.calculate(firstOperand, secondOperand)); calculationTasks.add(() -> multiplier.calculate(firstOperand, secondOperand)); calculationTasks.add(() -> divider.calculate(firstOperand, secondOperand));
In that case we don’t need to implement the Callable interface at all.
View all posts related to Java here. | https://dotnetcodr.com/2016/09/18/getting-a-result-from-a-parallel-task-in-java/ | CC-MAIN-2021-17 | en | refinedweb |
Hi,
Since the sample project doesn't have the same issue, it would be very helpful if you could send us your project to debug, or a simplified version of it that still has the same problem. If you are not comfortable posting it in the forum, you can send it to the support email, mentioning this post and I will find it there.
As for CI integration, the only way we've found to work around this limitation in older versions of Unity is to modify the XcodeAPI project from Unity. You can find the project here:
If you want to go with this solution, we can share our version of the XcodeAPI that we use to build internally.
For 2017.2 and higher, it is possible to do the same thing using the API that is shipped with Unity. Please see the last post in this thread for an example on how to do this:
Best regards,
Alexandru
Unfortunately my company is very secretive so I don't think sending you this project right now would be acceptable but I can look into it. I'll see what I can come up with further investigation, I am now in the process of basically stripping the project back until it works again and I'll see what I can find out.
Using the modified xcodeapi sounds like a fine solution, if you would share it, it would at least speed up my testing considerably. :) Can you share it through here?
I managed to fix the compile/missing library issue, it was due to I guess a shonky update of the libWikitude2UnityBridge meta file when it auto-updated from 5.4 to 5.6, meaning the library wasn't being fully included into the iOS build. Regenerating the meta file seems to have fixed those woes.
Would still love to see your implementation of xcodeapi for automatically adding the embedded framework at export though.
Cheers,
Jon
Hi,
I've attached our version of the Xcode API project to this post.
Alongside the usual files, you will find the XcodeUpdater.cs script, that demonstrates how the API can be used to modify a Unity Xcode project to work with Wikitude without any modifications.
It is configured to run as a PosProcessBuild method, but it also provides a way to manually trigger it from a menu command, as long as you change the hardcoded path to the Xcode project in the TestXcode method. I've found this useful for testing.
All the code is in the Wikitude.Xcode namespace, but feel free to modify that to your needs.
If you have any issues integrating this, please let me know.
Best regards,
Alexandru
Also, please keep in mind that the code assumes that it is modifying a new (clean) build from Unity and it is not appending to an existing one. If you try to run it on an existing Xcode project, it will try to add the framework multiple times.
Best regards,
Alexandru
Jonathan Murphy
Hi there,!
I'm currently evaluating Wikitude for it's Markerless AR functionality for my company and I have managed to get an existing prototype working on an Android device (we were previously using ARCore) but now I am having issues getting the plugin to work properly with our iOS implementation. I have two issues I'm struggling with right now.
1. I get a crash when the scene initialises that looks like this:
#0 0x00000001018c1874 in ::-[WTWikitude2UnityBridge initWithLicenseKey:andTrackerManagerName:andUnityGraphics:andUnityGraphicsMetal:](NSString *, NSString *, IUnityGraphics *, IUnityGraphicsMetal *) at /Users/emperor/Development/Tools/Jenkins/Master/Instance/jobs/native_sdk_builder/workspace/repositories/unity_plugin/src/ios/Wikitude2UnityBridge/WTWikitude2UnityBridge.mm:98
#1 0x00000001018bf640 in ::UnityWikitudeBridge_InstantiateWikitudeNativeSDK(const char *, const char *) at /Users/emperor/Development/Tools/Jenkins/Master/Instance/jobs/native_sdk_builder/workspace/repositories/unity_plugin/src/ios/Wikitude2UnityBridge/WTGlobalUnityBridge.mm:88
#2 0x000000010088dd9c in ::iOSBridge_UnityWikitudeBridge_InstantiateWikitudeNativeSDK_m2397249553(Il2CppObject *, String_t *, String_t *, const MethodInfo *) at /Users/jonathanmurphy/Documents/Development/psm-prototype/Build/Classes/Native/Bulk_WikitudeUnityPlugin_0.cpp:7993
#3 0x000000010088ed38 in ::iOSBridge_Wikitude_IPlatformBridge_InstantiateWikitudeNativeSDK_m2737054127(iOSBridge_t3713850486 *, String_t *, String_t *, const MethodInfo *) at /Users/jonathanmurphy/Documents/Development/psm-prototype/Build/Classes/Native/Bulk_WikitudeUnityPlugin_0.cpp:8838
#4 0x00000001008a1a08 in InterfaceActionInvoker2<String_t*, String_t*>::Invoke(unsigned int, Il2CppClass*, Il2CppObject*, String_t*, String_t*) at /Users/jonathanmurphy/Documents/Development/psm-prototype/Build/Classes/Native/GeneratedInterfaceInvokers.h:69
#5 0x00000001008acf88 in ::WikitudeCamera_Awake_m535637355(WikitudeCamera_t2517845841 *, const MethodInfo *) at /Users/jonathanmurphy/Documents/Development/psm-prototype/Build/Classes/Native/Bulk_WikitudeUnityPlugin_0.cpp:19612
#6 0x0000000100ca1c2c in RuntimeInvoker_Void_t1841601450(MethodInfo const*, void*, void**) at /Users/jonathanmurphy/Documents/Development/psm-prototype/Build/Classes/Native/Il2CppInvokerTable.cpp:1506
I am able to get the Sample project compiling fine but I'm just unsure how my Project is setup differently, I have the license key entered correctly and the settings for Graphics rendering seem the same (Metal, OpenGLES3, OpenGLES2).
2. Every time I export from Unity I need to manually add WikitudeNativeSDK.Framework to the Embedded Binaries section of the build settings. This is breaking our Jenkins CI as I can't really find an automated way of doing this in Unity 5.6.3f1. This may be fixable with some more work but are there any existing to this?
I would love to recommend this package for our companies needs but right now it's hard to recommend. Any help would be much appreciated. | https://support.wikitude.com/support/discussions/topics/5000085661 | CC-MAIN-2021-17 | en | refinedweb |
Statistical plotting recipes for Plots.jl
This package is a drop-in replacement for Plots.jl that contains many statistical recipes for concepts and types introduced in the JuliaStats organization.
It is thus slightly less lightweight, but has more functionality. Main documentation is found in the Plots.jl documentation ().
Initialize:
#]add StatsPlots # install the package if it isn't installed using StatsPlots # no need for `using Plots` as that is reexported here gr(size=(400,300))
Table-like data structures, including
DataFrames,
IndexedTables,
DataStreams, etc... (see here for an exhaustive list), are supported thanks to the macro
@dfwhich allows passing columns as symbols. Those columns can then be manipulated inside the
plotcall, like normal
Arrays:
julia using DataFrames, IndexedTables df = DataFrame(a = 1:10, b = 10 .* rand(10), c = 10 .* rand(10)) @df df plot(:a, [:b :c], colour = [:red :blue]) @df df scatter(:a, :b, markersize = 4 .* log.(:c .+ 0.1)) t = table(1:10, rand(10), names = [:a, :b]) # IndexedTable @df t scatter(2 .* :b)
Inside a
@dfmacro call, the
colsutility function can be used to refer to a range of columns:
@df df plot(:a, cols(2:3), colour = [:red :blue])
or to refer to a column whose symbol is represented by a variable:
s = :b @df df plot(:a, cols(s))
cols()will refer to all columns of the data table.
In case of ambiguity, symbols not referring to
DataFramecolumns must be escaped by
^():
julia df[:red] = rand(10) @df df plot(:a, [:b :c], colour = ^([:red :blue]))
The
@dfmacro plays nicely with the new syntax of the Query.jl data manipulation package (v0.8 and above), in that a plot command can be added at the end of a query pipeline, without having to explicitly collect the outcome of the query first:
using Query, StatsPlots df |> @filter(_.a > 5) |> @map({_.b, d = _.c-10}) |> @df scatter(:b, :d)
The
@dfsyntax is also compatible with the Plots.jl grouping machinery:
using RDatasets school = RDatasets.dataset("mlmRev","Hsb82") @df school density(:MAch, group = :Sx)
To group by more than one column, use a tuple of symbols:
@df school density(:MAch, group = (:Sx, :Sector), legend = :topleft)
To name the legend entries with custom or automatic names (i.e.
Sex = Male, Sector = Public) use the curly bracket syntax
group = {Sex = :Sx, :Sector}. Entries with
=get the custom name you give, whereas entries without
=take the name of the column.
The old syntax, passing the
DataFrameas the first argument to the
plotcall is no longer supported.
A GUI based on the Interact package is available to create plots from a table interactively, using any of the recipes defined below. This small app can be deployed in a Jupyter lab / notebook, Juno plot pane, a Blink window or in the browser, see here for instructions.
import RDatasets iris = RDatasets.dataset("datasets", "iris") using StatsPlots, Interact using Blink w = Window() body!(w, dataviewer(iris))
using RDatasets iris = dataset("datasets","iris") @df iris marginalhist(:PetalLength, :PetalWidth)
using RDatasets iris = dataset("datasets","iris") @df iris marginalscatter(:PetalLength, :PetalWidth)
x = randn(1024) y = randn(1024) marginalkde(x, x+y)
levels=Ncan be used to set the number of contour levels (default 10); levels are evenly-spaced in the cumulative probability mass.
clip=((-xl, xh), (-yl, yh))(default
((-3, 3), (-3, 3))) can be used to adjust the bounds of the plot. Clip values are expressed as multiples of the
[0.16-0.5]and
[0.5,0.84]percentiles of the underlying 1D distributions (these would be 1-sigma ranges for a Gaussian).
This plot type shows the correlation among input variables. The marker color in scatter plots reveal the degree of correlation. Pass the desired colorgradient to
markercolor. With the default gradient positive correlations are blue, neutral are yellow and negative are red. In the 2d-histograms the color gradient show the frequency of points in that bin (as usual controlled by
seriescolor).
gr(size = (600, 500))
then
julia @df iris corrplot([:SepalLength :SepalWidth :PetalLength :PetalWidth], grid = false)or also:
julia @df iris corrplot(cols(1:4), grid = false)
A correlation plot may also be produced from a matrix:
M = randn(1000,4) M[:,2] .+= 0.8sqrt.(abs.(M[:,1])) .- 0.5M[:,3] .+ 5 M[:,3] .-= 0.7M[:,1].^2 .+ 2 corrplot(M, label = ["x$i" for i=1:4])
cornerplot(M)
cornerplot(M, compact=true)
import RDatasets singers = dataset("lattice", "singer") @df singers violin(string.(:VoicePart), :Height, linewidth=0) @df singers boxplot!(string.(:VoicePart), :Height, fillalpha=0.75, linewidth=2) @df singers dotplot!(string.(:VoicePart), :Height, marker=(:black, stroke(0)))
Asymmetric violin or dot plots can be created using the
sidekeyword (
:both- default,
:rightor
:left), e.g.:
singers_moscow = deepcopy(singers) singers_moscow[:Height] = singers_moscow[:Height] .+ 5 @df singers violin(string.(:VoicePart), :Height, side=:right, linewidth=0, label="Scala") @df singers_moscow violin!(string.(:VoicePart), :Height, side=:left, linewidth=0, label="Moscow") @df singers dotplot!(string.(:VoicePart), :Height, side=:right, marker=(:black,stroke(0)), label="") @df singers_moscow dotplot!(string.(:VoicePart), :Height, side=:left, marker=(:black,stroke(0)), label="")
Dot plots can spread their dots over the full width of their column
mode = :uniform, or restricted to the kernel density (i.e. width of violin plot) with
mode = :density(default). Horizontal position is random, so dots are repositioned each time the plot is recreated.
mode = :nonekeeps the dots along the center.
The ea-histogram is an alternative histogram implementation, where every 'box' in the histogram contains the same number of sample points and all boxes have the same area. Areas with a higher density of points thus get higher boxes. This type of histogram shows spikes well, but may oversmooth in the tails. The y axis is not intuitively interpretable.
a = [randn(100); randn(100) .+ 3; randn(100) ./ 2 .+ 3] ea_histogram(a, bins = :scott, fillalpha = 0.4)
AndrewsPlots are a way to visualize structure in high-dimensional data by depicting each row of an array or table as a line that varies with the values in columns.
using RDatasets iris = dataset("datasets", "iris") @df iris andrewsplot(:Species, cols(1:4), legend = :topleft)
using Distributions plot(Normal(3,5), fill=(0, .5,:orange))
dist = Gamma(2) scatter(dist, leg=false) bar!(dist, func=cdf, alpha=0.3)
The
qqplotfunction compares the quantiles of two distributions, and accepts either a vector of sample values or a
Distribution. The
qqnormis a shorthand for comparing a distribution to the normal distribution. If the distributions are similar the points will be on a straight line.
x = rand(Normal(), 100) y = rand(Cauchy(), 100)
plot( qqplot(x, y, qqline = :fit), # qqplot of two samples, show a fitted regression line qqplot(Cauchy, y), # compare with a Cauchy distribution fitted to y; pass an instance (e.g. Normal(0,1)) to compare with a specific distribution qqnorm(x, qqline = :R) # the :R default line passes through the 1st and 3rd quartiles of the distribution )
groupedbar(rand(10,3), bar_position = :stack, bar_width=0.7)
This is the default:
groupedbar(rand(10,3), bar_position = :dodge, bar_width=0.7)
The
groupsyntax is also possible in combination with
groupedbar:
ctg = repeat(["Category 1", "Category 2"], inner = 5) nam = repeat("G" .* string.(1:5), outer = 2)
groupedbar(nam, rand(5, 2), group = ctg, xlabel = "Groups", ylabel = "Scores", title = "Scores by group and category", bar_width = 0.67, lw = 0, framestyle = :box)
using RDatasets iris = dataset("datasets", "iris") @df iris groupedhist(:SepalLength, group = :Species, bar_position = :dodge)
@df iris groupedhist(:SepalLength, group = :Species, bar_position = :stack)
using Clustering D = rand(10, 10) D += D' hc = hclust(D, linkage=:single) plot(hc)
The
branchorder=:optimaloption in
hclust()can be used to minimize the distance between neighboring leaves:
using Clustering using Distances using StatsPlots using Random
n = 40
mat = zeros(Int, n, n)
create banded matrix
for i in 1:n last = minimum([i+Int(floor(n/5)), n]) for j in i:last mat[i,j] = 1 end end
randomize order
mat = mat[:, randperm(n)] dm = pairwise(Euclidean(), mat, dims=2)
normal ordering
hcl1 = hclust(dm, linkage=:average) plot( plot(hcl1, xticks=false), heatmap(mat[:, hcl1.order], colorbar=false, xticks=(1:n, ["$i" for i in hcl1.order])), layout=grid(2,1, heights=[0.2,0.8]) )
Compare to:
# optimal ordering hcl2 = hclust(dm, linkage=:average, branchorder=:optimal) plot( plot(hcl2, xticks=false), heatmap(mat[:, hcl2.order], colorbar=false, xticks=(1:n, ["$i" for i in hcl2.order])), layout=grid(2,1, heights=[0.2,0.8]) )
using Distances using Clustering using StatsBase using StatsPlots
pd=rand(Float64,16,7)
dist_col=pairwise(CorrDist(),pd,dims=2) hc_col=hclust(dist_col, branchorder=:optimal) dist_row=pairwise(CorrDist(),pd,dims=1) hc_row=hclust(dist_row, branchorder=:optimal)
pdz=similar(pd) for row in hc_row.order pdz[row,hc_col.order]=zscore(pd[row,hc_col.order]) end nrows=length(hc_row.order) rowlabels=(1:16)[hc_row.order] ncols=length(hc_col.order) collabels=(1:7)[hc_col.order] l = grid(2,2,heights=[0.2,0.8,0.2,0.8],widths=[0.8,0.2,0.8,0.2]) plot( layout = l, plot(hc_col,xticks=false), plot(ticks=nothing,border=:none), plot( pdz[hc_row.order,hc_col.order], st=:heatmap, #yticks=(1:nrows,rowlabels), yticks=(1:nrows,rowlabels), xticks=(1:ncols,collabels), xrotation=90, colorbar=false ), plot(hc_row,yticks=false,xrotation=90,orientation=:horizontal) )
Population analysis on a table-like data structures can be done using the highly recommended GroupedErrors package.
This external package, in combination with StatsPlots, greatly simplifies the creation of two types of plots:
Some simple summary statistics are computed for each experimental subject (mean is default but any scalar valued function would do) and then plotted against some other summary statistics, potentially splitting by some categorical experimental variable.
Some statistical analysis is computed at the single subject level (for example the density/hazard/cumulative of some variable, or the expected value of a variable given another) and the analysis is summarized across subjects (taking for example mean and s.e.m), potentially splitting by some categorical experimental variable.
For more information please refer to the README.
A GUI based on QML and the GR Plots.jl backend to simplify the use of StatsPlots.jl and GroupedErrors.jl even further can be found here (usable but still in alpha stage).
MultivariateStats.jlcan be plotted as scatter plots.
using MultivariateStats, RDatasets, StatsPlots
iris = dataset("datasets", "iris") X = convert(Matrix, iris[:, 1:4]) M = fit(MDS, X'; maxoutdim=2)
plot(M, group=iris.Species)
PCA will be added once the API in MultivariateStats is changed. See and.
A 2×2 covariance matrix
Σcan be plotted as an ellipse, which is a contour line of a Gaussian density function with variance
Σ.
covellipse([0,2], [2 1; 1 4], n_std=2, aspect_ratio=1, label="cov1") covellipse!([1,0], [1 -0.5; -0.5 3], showaxes=true, label="cov2") | https://xscode.com/JuliaPlots/StatsPlots.jl | CC-MAIN-2021-17 | en | refinedweb |
Coordinate
#include <Coordinate.h>
Detailed Description
Represents a coordinate with the properties of a name and coordinates.
Definition at line 23 of file Coordinate.h.
Constructor & Destructor Documentation
Constructor.
Definition at line 19 of file Coordinate.cpp.
Member Function Documentation
Provides access to the altitude (meters) of the coordinate.
Bearing (in degree) to the given coordinate.
Definition at line 82 of file Coordinate.cpp.
Change the altitude of the coordinate.
Definition at line 65 of file Coordinate.cpp.
Distance (in meter) to the given coordinate.
Definition at line 75 of file Coordinate.cpp.
Provides access to the latitude (degree) of the coordinate.
Provides access to the longitude (degree) of the coordinate.
Change the altitude of the coordinate.
Definition at line 59 of file Coordinate.cpp.
Change all coordinates at once.
Definition at line 70 of file Coordinate.cpp.
Change the latitude of the coordinate.
Definition at line 48 of file Coordinate.cpp.
Change the longitude of the coordinate.
Definition at line 37 of file Coordinate.cpp.
The documentation for this class was generated from the following files:
Documentation copyright © 1996-2021 The KDE developers.
Generated on Fri Apr 9 2021 23:20:03 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006
KDE's Doxygen guidelines are available online. | https://api.kde.org/marble/html/classCoordinate.html | CC-MAIN-2021-17 | en | refinedweb |
- Training Library
- Containers
- Courses
- Deploying A Cloud Native Application into Kubernetes
K8s Cluster DNS Resolution Testing'll quickly demonstrate how DNS works internally within the Cluster. And, as an example, how we can test DNS resolution for the registered services deployed as part of our sample application.
For starters, let's view the system pods within the cluster which are used for DNS. Within the terminal, I'll run the following command: kubectl get pods --namespace= kube-system -l, for label, k8s- app=kube-dns. Now, this results in the following two CoreDNS pods which are used for DNS and have been deployed in the kube-system namespace. Next, I'll launch the following pod for DNS testing purposes which is based on the tutum/dnsutils image. This will give us the ability to run the dig utility which allows us to resolve various DNS names currently registered within our cluster.
Okay, we'll now attempt to resolve the mongo.cloudacademy.svc.cluster.local service name using the dig utility like so. Here, we can see that this results in the answer section containing the following three A records, one for each of the mongo pods where the IP address is the address assigned to the pod. This is designed like so, since the mongo service was deployed as a headless service where the ClusterIP property was set to None. Next, we'll attempt to resolve the api.cloudacademy.svc.cluster.local service record. And here we can see that it resolves differently. In this case, the answer section contains a single A record containing the VIP address, 10.101.151.37 which the cluster registered and assigned to the API service when it was deployed.
And finally, we'll query and resolve the frontend.cloudacademy.svc.cluster.local service record. And again, we can see that the answer section contains a single A record containing the VIP address, 10.107.216.59, which the cluster registered and assigned to the frontend service when it was deployed. Let's now exit this testing pod and run a quick check on the services that are currently deployed within the cloudacademy namespace. I'll run the command: kubectl get svc, for service. And as expected the frontend and api services have the ClusterIP addresses that we've just seen when we performed DNS resolution on the cluster-registered service names. And finally, notice how the ClusterIP is set to None for the mongo service. It is this property that makes it a headless service and changes the behavior of DNS for the equivalent service record as seen earlier.
Okay that concludes this brief DNS review and testing demonstration. The key takeaway from this lecture is knowing how to query and resolve DNS names within the cluster.
>. | https://cloudacademy.com/course/deploying-cloud-native-app-into-kubernetes/k8s-cluster-dns-resolution-testing/ | CC-MAIN-2021-17 | en | refinedweb |
Introduction: Controlling Arduino With Gamepad
Lately I've been curious about befriending Arduino or any other microcontroller with a gamepad in order to have physical interaction with the things I make, but there seemed to be no fast or cheap way to do so.
Most solutions involved:
- Completely dismantling your game controller and bypassing USB logic with some weird contraption made of wires, protoboards and a microcontroller acting as an UART gate, which then passes messages to HC-05 bluetooth module.
- Making your own joystick/gamepad based on the above principles
- Buying a microcontroller with USB-host functionality and writing a ton of code for USB driver to have a “puppy on a leash”
- Using a bunch of third-party software, like Input remapping programs, Processing IDE and Python to do this one simple thing
For quick testing of remote-controlled the prototype (most likely right on the desk or workbench) we need a simple solution with minimal expenses. This is why I've decided to do a little research in this topic and implement small, but somewhat useful software solution to this problem.
Over the course of development I found out that this material will not only be useful in this one particular application, but can also serve as a foundation for much wider range of applications, like data logging systems, PC-based flight control, remote sensor data acquisition etc. etc. etc.
Step 1: INTRODUCTION
The original article is published on my website. This is still work in progress and requires lots of fine-tuning, but that's what DIY is all about - continuous improvement!
The original amount of material I wrote is a bit too big for this Instructable, so in order to save you some time and save myself from repeating the same task over again I will skip some of the stuff and provide a link to an appropriate resource instead.
General concept of my project consists of the following:
- We are going to use a wired/wireless gamepad connected to PC
- We will implement a lightweight software written in C++ in order to read the current state of XInput Device(gamepad)
- If necessary, we can transform current gamepad state into short useful data sequence (button state, axis position etc.), which will be sent over UART to our microcontroller.
- Optionally, we can read some data back from microcontroller, like Force-Feedback triggers for gamepad, or plain-simple sensor data.
These principles will also help us to develop the basis for a two-way communication between Arduino(or any other MCU) and a PC, which we can use, for example, for a low-resolution serial camera feed or almost real-time sensor information update.
The main advantages of this method are:
- It does not require any hardware modifications, like torturing the gamepad
- It will not cost you a penny, given that you have a computer and some means of serial communication( like USB-UART interface, HC-05/06 module etc.)
- In this specific situation it will work on any Windows-powered PC with any XInput compatible gamepad (which includes cheap rumblepad/sixaxis clones)
However, it requires at least some basic C++/Arduino programming skills and a little bit of technical know-how.
Step 2: LEARNING SERIAL COMMUNICATION
Before we dive into development process, I'd like you to go over some preliminary reading in order to understand what we are trying to do. I've already compiled a simple tutorial on serial communication (second link), so once you are done, we can start developing a fully functional program to suit our purposes.
READING MATERIALS:
We will start with creating 2 simple functions, which will allow to open and close UART connection.
For this you'll need MS Visual C++, pair of hands and caffeine-infused brain.
The COM port initialization is a straightforward process: first we create port configuration portDCB, which contains all the communication settings, and then we assign the port handle. Notice, that port is initialized with CreateFile() function call, and just like with conventional files we can use ReadFile() and WriteFile() to exchange data.
Then we assign the new configuration with SetCommState() function call. If at any step of this process we encounter an error, we will print the appropriate message and return FALSE.
Otherwise, we return TRUE and as a result of execution of UART_Init(), port variable will now point to a serial port handle.
For the purpose of flexibility we will provide the COM port name and its baud rate as arguments of this function. Default settings are set to 8 bit transmission length with 1 stop bit. Parity, error correction and any type of flow control are disabled by default.
/* * UART_Init() * Opens the com port with ID "portName" at baud rate "baud" * HANDLE *port becomes a pointer to an active COM port connection * Returns whether the connection is successful or not. */ BOOL UART_Init(HANDLE *port, LPCWSTR portName, DWORD baud) { DCB portDCB; // _DCB struct for serial configuration bool result = FALSE; // Return value COMMTIMEOUTS comTOUT; // Communication timeout *port = CreateFile(portName, GENERIC_READ | GENERIC_WRITE, 0, NULL, OPEN_EXISTING, FILE_FLAG_WRITE_THROUGH, NULL); // Try opening port communication if(*port==INVALID_HANDLE_VALUE) { wprintf(L"ERROR: Cannot open port %s\n",portName); return FALSE; } // NEW SETTINGS portDCB.DCBlength = sizeof(DCB);// Setup config length GetCommState(*port, &portDCB); // Get default port state portDCB.BaudRate = baud; // Set baud rate portDCB.fBinary = TRUE; // Enable Binary mode portDCB.fParity = FALSE; // Disable parity portDCB.fOutxCtsFlow = FALSE; // No CTS portDCB.fOutxDsrFlow = FALSE; // No DSR portDCB.fDtrControl = DTR_CONTROL_DISABLE; // No DTR portDCB.fDsrSensitivity = FALSE; // No DSR sensitivity portDCB.fTXContinueOnXoff = TRUE; // TX on XOFF portDCB.fOutX = FALSE; // No XON/XOFF portDCB.fInX = FALSE; // portDCB.fErrorChar = FALSE; // No error correction portDCB.fNull = FALSE; // Keep NULL values portDCB.fRtsControl = RTS_CONTROL_DISABLE; // Disable RTS portDCB.fAbortOnError = FALSE; // Disable abort-on-error portDCB.ByteSize = 8; // 8-bit frames portDCB.Parity = NOPARITY; // Parity: none portDCB.StopBits = ONESTOPBIT; // StopBits: 1 // Try reconfiguring COM port if (!SetCommState (*port, &portDCB)) { wprintf(L"ERROR: Cannot configure port %s\n",portName); return FALSE; } /// Communication timeout values result = GetCommTimeouts(*port, &comTOUT); comTOUT.ReadIntervalTimeout = 10; comTOUT.ReadTotalTimeoutMultiplier = 1; comTOUT.ReadTotalTimeoutConstant = 1; /// Set new timeout values result = SetCommTimeouts(*port, &comTOUT); return TRUE; }
Closing COM port is very easy. All we need to do is release the handle (line 2) and set *port pointer to NULL, so we accidentally don’t access the old handle.
UART_Close() function returns FALSE if we are trying to close an uninitialized or previously closed port handle.
BOOL UART_Close(HANDLE *port) { if (*port == NULL) return FALSE; CloseHandle(*port); *port = NULL; return TRUE; }
As you've already guessed, the next logical step will be implementing functions to send/receive UART messages. The key moment of this part is that we will use communication events, described in MSDN article mentioned earlier.
BOOL UART_Send(HANDLE port, char *Buffer) { DWORD bytesTransmitted; if(!WriteFile(port,Buffer, strlen(Buffer), &bytesTransmitted, NULL)) { DWORD Errors; COMSTAT Status; ClearCommError(port,&Errors,&Status); printf("ERROR: Unable to send data.\n"); return FALSE; } else { return TRUE; } }
Assuming that our arduino might be occupied at the time of transmission and could not provide a proper response, we want to wait for EV_RXCHAR event to occur every time RX has incoming data. To address this problem we will set up a communications mask and wait for our event before reading the next byte.
BOOL UART_Receive(HANDLE port, char *Buffer) { DWORD bytesTransmitted = 0; // Byte counter DWORD status = EV_RXCHAR; // transmission status mask memset(Buffer, 0, BUFFER_SIZE); // Clear input buffer SetCommMask (port, EV_RXCHAR); // Set up event mask WaitCommEvent(port, &status, 0); // Listen for RX event if(status & EV_RXCHAR) // If event occured { DWORD success=0; char c = 0; do { if(!ReadFile(port,&c, 1, &success, NULL)) // Read 1 char { // If error occured, print the message and exit DWORD Errors; COMSTAT Status; ClearCommError(port,&Errors,&Status); // Clear errors memset(Buffer, 0, BUFFER_SIZE); // Clear input buffer printf("ERROR: Unable to receive data.\n"); // Print error message return FALSE; } else { Buffer[bytesTransmitted]=c; // Add last character bytesTransmitted++; // Increase trans. counter } } while((success==1) && (c!='\n')); // do until the end of message } return TRUE; }
These four functions should be enough to handle basic UART communication between Arduino and your PC.
Now, let's evaluate the functionality of our code with a simple UART loopback test. We need to finish the program's _tmain() function first:
int _tmain(int argc, _TCHAR* argv[]) { HANDLE port; char Buffer[BUFFER_SIZE] = "TEST MESSAGE\n"; // Unable to open? exit with code 1 if (!UART_Init(&port, L"COM8:", CBR_115200)) { system("PAUSE"); return 1; } // : continue execution else { // Here we send the string from buffer and print the response. // Our Arduino loopback should return the same string int msgs = 0; // reset # of messages while((port!=INVALID_HANDLE_VALUE) && (msgs<100)) // Send/Receive 100 messages { printf("Sending: %s\n", Buffer); UART_Send(port, Buffer); // Send data to UART port if(UART_Receive(port, Buffer)) // Receive data printf("Received: %s\n", Buffer); PurgeComm(port, PURGE_RXCLEAR | PURGE_TXCLEAR); // Flush RX and TX msgs++; // Increment # of messages } UART_Close(&port); // Close port } system("PAUSE"); return 0; }
This code initializes port COM8, which is my USB-UART cable (don't forget to change that part to your port #). Then, it sends 100 messages over UART and prints both original message and response. Implementing the communication event listener earlier really paid off at the end. If you look at this program carefully, you'll see that we have only used about a dozen lines of effective code to make it work!
Now, let's setup our Arduino to work as UART loopback device. We will also implement an event-driven UART communication in order to be able to do some other stuff while not transmitting.
Open up your Arduino IDE and use this code as an example:
String buffer = ""; // a string to hold incoming data void setup() { buffer.reserve(255); // Reserve 255 chars Serial.begin(115200); // Initialize UART } void loop() { // NOP } // SerialEvent occurs every time we receive RX interrupt void serialEvent() { while (Serial.available()) { char c = (char)Serial.read(); // Read character buffer += c; // Add it to buffer // If end-of-line, reset buffer and send back the data if (c == '\n') { Serial.print(buffer); // Loopback buffer = ""; // Clear buffer } } }
Now you can upload the sketch to Arduino, compile the C++ project and test it!
Step 3: GETTING INPUT FROM GAMEPAD
Now, that we know how to send the information to Arduino, we only need to learn how to acquire input from the XBOX gamepad.
In this section we will learn the basics of XInput and write a very simple program, which displays current gamepad state in the console output. We will also learn some important aspects of pre-processing the input values to avoid problematic thumbstick input ranges ("dead zones").
THE BASICS
XInput API provides the means of getting input from XBOX 360 controllers and includes a variety of tools to set the controller effects(farce feedback), process audio input/output for gaming headsets and do other cool stuff.
XInput supports up to 4 controllers, but in our situation only controller #0 will be used as default.
In order to update the current state of the gamepad we will use XInputGetState() function. It takes 2 parameters: the gamepad ID(which is 0 in most cases) and the pointer to XInput state variable. The return value of XInputGetState can be used to check the availability of the gamepad. The value of ERR_SUCCESS means that the gamepad is on, and XInput state now has it's current state.
XINPUT_STATE consists of the following elements:
typedef struct _XINPUT_STATE { DWORD dwPacketNumber; XINPUT_GAMEPAD Gamepad; } XINPUT_STATE;
dwPacketNumber indicates whether the gamepad state has changed.
Gamepad is a data type, which represents current gamepad state, including thumbstick positions, trigger values, D-pad and button flags.
typedef struct _XINPUT_GAMEPAD { WORD wButtons; BYTE bLeftTrigger; BYTE bRightTrigger; SHORT sThumbLX; SHORT sThumbLY; SHORT sThumbRX; SHORT sThumbRY; } XINPUT_GAMEPAD;
sThumbLX, sThumbLY, sThumbRX and sThumbRY are 16-bit signed integers, which take values from −32,768 to 32,767. These correspond to current thumbstick positions.
bLeftTrigger and bRightTrigger take values in 0..255 range.
wButtons represents the state of all buttons on an XBox controller, where each bit corresponds to current state of each individual button. If we want to check whether the X button was pressed we need to perform the following operations:
XINPUT_STATE gpState; // Create state variable memset(&gpState,0,sizeof(XINPUT_STATE)); // Reset state DWORD res = XInputGetState(0,&gpState); // Get new state if(gpState.wButtons & 0x4000 ) { printf("Xplosive kick!\n"); }
The following list shows all buttons and their corresponding bitmasks:
XINPUT_GAMEPAD_DPAD_UP 0x0001 XINPUT_GAMEPAD_DPAD_DOWN 0x0002 XINPUT_GAMEPAD_DPAD_LEFT 0x0004 XINPUT_GAMEPAD_DPAD_RIGHT 0x0008 XINPUT_GAMEPAD_START 0x0010 XINPUT_GAMEPAD_BACK 0x0020 XINPUT_GAMEPAD_LEFT_THUMB 0x0040 // These are thumbstick buttons XINPUT_GAMEPAD_RIGHT_THUMB 0x0080 XINPUT_GAMEPAD_LEFT_SHOULDER 0x0100 // Left bumper XINPUT_GAMEPAD_RIGHT_SHOULDER 0x0200 // Right bumper XINPUT_GAMEPAD_A 0x1000 XINPUT_GAMEPAD_B 0x2000 XINPUT_GAMEPAD_X 0x4000 XINPUT_GAMEPAD_Y 0x8000
LET'S PRACTICE
At this point we have all the tools we need to write our first program with XInput. It will be laughably simple, but it will help to understand how this process works and which elements of XInput we need.
#include "stdafx.h" #include <Windows.h> #include <XInput.h> #pragma comment(lib, "XInput.lib") // required for linker int _tmain(int argc, _TCHAR* argv[]) { XINPUT_STATE gpState; // Gamepad state int player = -1; // Gamepad ID // Polling all 4 gamepads to see who's alive for(int i=0;i<4;i++) { DWORD res = XInputGetState(i,&gpState); // Getting state if(res==ERROR_SUCCESS) // If alive - print message { printf("Controller #%d is ON!\n",i+1); player = i; // Assign last alive gamepad as active } } if(player<0) // If player==-1 in other words... { printf("Haven't found any gamepads...\n"); } else { while(true) { system("CLS"); // Clear screen memset(&gpState,0,sizeof(XINPUT_STATE)); // Reset state DWORD res = XInputGetState(0,&gpState); // Get new state printf("LX\tLY\tRX\tRY\tLTrig\tRTrig\tButtons\n"); // Print header // Thumbstick values are divided by 256 for better consistency printf("%d\t%d\t%d\t%d\t%d\t%d\t%d\n", gpState.Gamepad.sThumbLX/256, gpState.Gamepad.sThumbLY/256, gpState.Gamepad.sThumbRX/256, gpState.Gamepad.sThumbRY/256, gpState.Gamepad.bLeftTrigger, gpState.Gamepad.bRightTrigger, gpState.Gamepad.wButtons); } } system("PAUSE"); return 0; }
Once you build the solution and run your program, you will see the output changing when you move thumbsticks, or push buttons on your gamepad. We will send this data to Arduino board in the next section of this tutorial.
Right now I want you to pay attention to the output, when you are not doing anything. Values of LX, RX, LY and RY are not equal to 0, as we expect them to. This happens for a number of reasons, but what matters most is that we are aware of this phenomena!
These fluctuations and inconsistencies in values are called "dead zones". To get rid of this nasty anomaly we need to find the lowest marginal value at which we can consider that the thumbstick is actually pushed in some direction.
To do so we need to define a deadzone threshold and compare it to current values. Check out MSDN reference for more info.
Meanwhile, use this sample code to correct these values:
float LX = gpState.Gamepad.sThumbLX; // Get LX float LY = gpState.Gamepad.sThumbLY; // Get LY magnitude = sqrt(LX*LX+LY*LY); // Calculate the radius of current position if(magnitude < XINPUT_GAMEPAD_LEFT_THUMB_DEADZONE) // Inside dead zone? { // Set all to 0 LX=0.0; LY=0.0; } // Do the same for RX and RY
There are also predefined dead zone values for left and right thumbsticks and triggers. You can use these, or define your own thresholds(in my case ~6500 worked for left and right stick), but remember that these values largely depend on how beat-up your gamepad is!
#define XINPUT_GAMEPAD_LEFT_THUMB_DEADZONE 7849 #define XINPUT_GAMEPAD_RIGHT_THUMB_DEADZONE 8689 #define XINPUT_GAMEPAD_TRIGGER_THRESHOLD 30
ADDITIONAL READING
The only additional resource I'm going to mention is this one: XInput Game Controller APIs
It has everything you need to know about XInput, including complete reference and helpful programming guide.
That's it for this part. Now, let's try to combine our skills in programming gamepads and serial connections to control Arduino remotely!!! ...in case you forgot where we started...
Step 4: PUTTING PIECES TOGETHER
If you fully understood all the materials in previous steps, you should be able to write an XBox controller-to-UART interface or implement PC-Arduino communication on your own.
As a final example I will use a very simple contraption: an Arduino with few LEDs and the buzzer.
Originally I wanted to build a small RC car, but due to lengthy delays in parts delivery I won't be able to do it at least for another week or so... If you have a pair of EasyDriver boards you could connect direction pin instead of yellow LED, and motor step pin instead of red LEDs(see schematic above). A piezo-buzzer connected to pin D3 reacts on any button press on your controller.
The entire functional description boils down to this:
- Read the XBox controller state and transform it into a short, but well defined string.
In my case I'm only sending motor speeds, direction and button state, so the message looks like this:
LLL RRR D BBBB
where LLL is the Left Motor speed, RRR is the Right Motor speed, and BBBB represents button state.
D is a motor direction, which takes only two values: 1 for forward and 0 for backward.
Both LLL and RRR will be normalized for deadzones and scaled to smaller values (under 255)
Alternatively you can send raw XINPUT data to arduino and process it on the microcontroller itself.
- Next, we send this message over UART to Arduino
- Set all motor speeds to acquired values and check button state to determine additional actions
- Send some data back to PC (I'm just sending motor speeds for debugging)
- Acquired data is processed into some visual representation. Use anything you like, be it a simple text output in console window, or GUI-based output, like progress bars, graphs, flowcharts or even OpenGL rendering.
We've already learned how to read UART messages using events, so we don't really need to worry about proper timing. PC-side code can be further improved with such cool things, like multi-threading and asynchronous communication, but we won't be doing that today.
So, let's start with Arduino.
Attached is a simple sketch for our RC car. Nothing special, just setting motor speeds depending on thumbstick position.
sqrt(LX*LX+LY*LY) sets the magnitude of motor speed
LY sign(- or +) controls the movement direction (forward / backward).
Based on the value of LX we set the difference between Left and Right motor speed. If LX is positive, then Left motor is set to current speed value, and Right motor uses(128-LX). If LX is negative, we assign values the opposite way.
On PC side I've created a small class, called XBoxUart, which combines all the things we've learned previously in a single program.
Please, use links below to download the source code for PC and Arduino side.
First, upload the arduino sketch. You can test if it's working by opening a Serial Monitor at 115200 bod and sending data manually. For example message "100 100 1 0" is an equivalent of moving forward 100 steps(left and right motors) with no buttons pressed. In response you should get strings, shown in the screenshot above.
NOTE: Don't forget to change the COM port name to whatever your Arduino CDC is. If you have an HC-05 module, you can connect it directly to Arduino RX and TX pins, if you wanna try it without wires.
Now you can compile the C++ code in Visual Studio, start your XBox controller, launch the program on your PC and see how LEDs(or motors) change their behavior with movement of the thumbstick! Pressing any button will trigger the buzzer. The output in the console window will be similar to what you see on the last screenshot.
Step 5: AFTERWORDS
Thank you for your attention and I hope this material will help you in the future projects.
This article is a simple and very crude example of how to control things over UART, but in the near future I will try to make a more flexible solution.
You can check my blog for updates:
If you like this article, please cast your vote for my "Coded Creations" entry.
Regards,
Anton.
9 Discussions
I was working in something like that just controlling with frecuency on D+ or D- trying to uart chip receiver....for this year good luck man!! eagleboysbc"gmail.com
Thax
But how to connect the gampad with pc?
Through usb?
I'm using a wireless XBOX360 controller with a Wireless PC receiver, but it should work with any XInput compatible gamepad (which is almost any USB gamepad).
If you have an unusual device, like a Wiimote or some other motion sensor for example, you can emulate and remap it as an XInput device using GlovePie or x360ce.
We may be competing for the same thing but I'm digging this. I was just about to start researching how to do this. So you got my vote.
Thx! Seen your instructable on edge detection. Few years ago I did a coursework in steganography with the use of similar technique (Canny edge detector + simple clustering for data encoding).
Just what I was looking for!
WOW! This is great stuff. Thank you. | http://www.instructables.com/id/Controlling-Arduino-with-Gamepad/ | CC-MAIN-2018-30 | en | refinedweb |
Dumpers not working on OSX. :(
Hi,
I was trying to get the Dumpers to work on OSX Yosemite based on the information found here:
However for some reason they aren't working, and there seems to be no error.
Here's my test code:
#include <iostream>
using namespace std;
struct vec3 {
vec3(const float& x = 0.0f,
const float& y = 0.0f,
const float& z = 0.0f)
: x(x)
, y(y)
, z(z)
{}
float x, y, z;
};
int main(int, char**) {
vec3 v;
cout << v.x << ' ' << v.y << ' ' << v.z << endl;
return 0;
}
And here's my dumpers:
#!/usr/bin/python
def qdump__vec3(d, value):
x = value["x"]
y = value["y"]
z = value["z"]
d.putValue('(%.3f, %.3f, %.3f)' % (x, y, z))
d.putType("vec3")
d.putAddress(value.address)
d.putNumChild(0)
I tried to load the dumpers by putting
python execfile('/path/to/file/DebugHelpers.py')
in both ~/.gdbinit and "Additional Startup Commands".
In both cases the debugger log prints:
GdbStartupCommands: python execfile('/path/to/file/DebugHelpers.py') (default: ) ***
But the type is not printed according to the qdump__vec3.
Am I doing something wrong here?
Thank you for your time.
System: I installed Qt5.4.1 and which comes with QtCreator 3.4. Also installed X-Code 6.3.1. | https://forum.qt.io/topic/54328/dumpers-not-working-on-osx | CC-MAIN-2018-30 | en | refinedweb |
We’re happy to announce the availability of our newest free ebook, Introducing Windows Server 2016 (ISBN 9780735697744), by John McCabe and the Windows Server team. Enjoy!.
Introduction related.
About the author.
When will be Kindle?
Today is 4.10 and there are not 🙁
import it into calibre, export it as a mobi or azw. Or docx. or ….
Send the pdf file to email of your device @amazon.com and syncronize… after this your ebook will be displayed in collection.
Except it will still handle and look like a pdf which can be troublesome when converting to any of those formats. Text wont fit right, menu might not work, etc.
If they had released it like epub then it could easily be converted to any other format…
Try to use Kindle DX (9 inch screen) PDFs are there more welcome 🙂
Ok
My whatsapp is not working
Great eBook! Let’s read.
Thank you John !
What a great book. Timely released. More resources in less volume.
Thanks John.
thanks for this ebook! i will take a look
Great ebook! Important informations…
Por favor, editen una versión en español.
Thank you for update news.
Thanks
will be help
Thanks
Thanks, for ebook and all my wishes to Microsoft for great deals
Thanks John.
thank you
Where is the download link??
Thanks John
Thank you John.
Thank You, John McCabe and the Windows Server team for this book. Thanks Kim for publishing this on the portal.
Thank you for the great ebook – do i Need Windows 10 Enterprise to read it or is Windows 10 Pro working too?! 😀
When I am going to download the ebook, there is an ERROW:
“There is a problem with this website’s security certificate. This organization’s certificate has been revoked.”
Teşekkürler
its great
thanks for this ebook! i will take a look
Thank you Guys for keeping the technology rolling we always need to improve!
THANK YOU
Thanks for sharing this eBook.
Aspetto che lo pubblichiate in italiano
Seria de gran ayuda e interesante que enviaran estos correos en Español
muchas gracias
Gostei
Thanks in advance.
Nice one, thank you.
Thanks, just what I needed !
Very pleased to get this ebook. thank you John
I need it. Thank you so much.
Bem que poderia ser em portugues.
It’s can not be download.
Send me the ebook copy server 2016
this version of Windows Server, help Surface RT to prepare for Windows Server 2016 and give the means to develop and design. A path to introduce Windows Server 2016 into the RT environment for full advantage of what is to come.
Available now on Kindle.
Great
Thanks John
As a former engineer whose epileptic condition is visibly in recession its nice to touch base with old haunts. Realize that looking back for former models of what developing, the older the analogy, the more dependable the resource.
need ebook microsoft server2016
Thanks
well,its god but it lacks the installation and upgrade step by step practical guideline needed. try to include practical step-by step guidelines that can be implemented in the test lab right away next time. thanks
Isn’t that what TechNet is for?
Thanks for the book!
Looking for ePub version
how to buy the microsoft ebook?
Thanks!
Thank you very much
For waht it’s worth, I’d like to see the ebook released in epub and mobi formats as well. I have a Kindle Oasis I bought specifically to take backpacking with me. It’s smaller and lighter than any tablet and has at least a months of battery life… versus hours of battery life with tablets.
I agree with Marlon, pdf files on Kindle devices, especially the ‘page white’ Kindles, are not a good combination. I think the only advantage a pdf file has is the original publisher intended formatting is preserved which includes an annoyingly fat border between the text and the edges of the screen. A mobi version of the ebook would allow the text to reflow to the edges of the screen and allow me to choose a smaller font size to get more text on the screen. Both of these would result in fewer ‘page turns’ (screen refreshes) which, in turn, results in longer battery life. Since page white Kindles only use power refresh the screen, the fewer times I have to change pages, the longer the battery lasts.
Thank you.
I am Abhishek kumarRao/sarojkumar/a geethamarani x2000x2016-17.com
Thanks, for a book.
Thanks
When will the ePub and mobi versions of John McCabe and the Windows Server team
be released: Introducing Windows Server 2016 so that it read with Apple iPad iBooks.
We are unable to offer the ePub and Mobi formats of this book at this time.
Well, very nice that eBook.
Great information.
i do not know
I need this ebook. thanks
Webroot Support has become a necessity for the clients, as it is not easy to manage all the attributes of it. Hence if a user wants to enjoy Webroot services to the fullest then he must take the help of Webroot technical support team.
Thank you so much for sharing this e-book with us. I really like it, there are lots of new things into about the new technology.
I Like It
Thank You, John McCabe and the Windows Server team for this book. Thanks Kim for publishing this on the portal.
Thank You, John McCabe and the Windows Server team for this book. Thanks Kim for publishing this on the portal. gREAT rEAD UP OF COURSE | https://blogs.msdn.microsoft.com/microsoft_press/2016/09/26/free-ebook-introducing-windows-server-2016/ | CC-MAIN-2018-30 | en | refinedweb |
Programming With XML Using Visual Basic 9
Bill Burrows
myVBProf.com
August 2007
Applies to:
XML
Microsoft Visual Basic 9.0
Microsoft Visual Studio 2008
Summary: In this paper, using a realistic application, we will look at the new features and capabilities that are available in Microsoft Visual Basic 9.0 that relate to programming with XML. (18 printed pages)
Contents
Introduction
Application Overview
LINQ to XML in Visual Basic 9.0: Key Features
Getting Home Sales Data from Windows Live Expo
Office_Open_XML_File_Format
Using XML Literals and Embedded Expressions to Create the New Worksheet
Modifying the Office Excel Workbook Container
Putting It All Together
Productivity Enhancements Within Visual Basic 9.0 and Visual Studio 2008
Conclusion
Introduction
Programming with XML has traditionally been done using the document object model (DOM) API, XSLT, and XPath. Using this approach, the developer not only must understand XML, but also must become proficient in these additional technologies in order to be productive.
Microsoft Microsoft Visual Studio 2008 and provides a distinctive approach to programming with XML. In addition, enhanced user experience and debugging support within Visual Studio helps improve a developer's productivity while working with XML.
In this paper, using a realistic application, we will look at the new features and capabilities available in Visual Basic 9.0 that relate to programming with XML.
Application Overview
We will be looking at an application where information on homes for sale is extracted from an RSS feed and placed into a Microsoft Office Excel workbook. This application runs on a server, builds the Office Excel workbook, and then makes it available for downloading to the client. Online listings are fine, but sometimes you want to just download all that into Office Excel, so that you can do some more processing of your own offlinesuch as which homes you've visited, tracking attributes that interest you, or performing such analyses as computing the average cost per square foot.
In this application, we will be using a LINQ to XML query to extract XML from Microsoft Windows Live Expo (expo.live.com) using the Windows Live Expo API and the Expo Web service that exposes an HTTP/GET request interface with XML response. (Click here for information on getting started with Windows Live API.) The user will provide a ZIP code as the basis of what home information to extract. They will also provide a distance (in miles) to define the area to be searched for houses for sale. After this information is extracted from the Expo Web service, it will be used to create a new Office Excel workbook on the server by creating an Office Open XML file. This process will demonstrate the use of embedded expressions to insert the data into the workbook. Finally, the user will be given the option to download the workbook for use on the client computer.
LINQ to XML in Visual Basic 9.0: Key Features
XML literals. In Visual Basic 9.0, one can treat XML as a literal value. The developer experience is enhanced when working with XML literals with autocompletion and outlining. You can create a variable and associate it to an XML literal by either typing the literal or, more likely, pasting some XML directly into the code editor. This feature is unique to Visual Basic 9.0. In the sample application, we will create an XML literal by pasting Office Open XML that represents an Office Excel worksheet.
XML literals can also contain embedded expressions. An embedded expression is evaluated at run time and provides the means to modify the XML dynamically, based on values obtained while the program is executing. In our sample application, we will be using embedded expressions to insert values taken from an RSS feed, and inserting them into the XML that represents an Office Excel worksheet.
Figure 1 shows an XML literal and embedded expressions.
Figure 1. XML literal with an LINQ to XML query and embedded expressions
XML axis properties. XML axis properties, also known simply as XML properties, are used to reference and identify child elements, attributes, and descendent elements. These provide a shorter, more readable syntax than using equivalent LINQ to XML methods. Table 1 provides a summary of XML properties. Intellisense is provided for axis properties when an appropriate XML schema is added to the project. Details on this feature will be explored in the last section of this article.
Table 1. XML axis properties
XML namespaces. To define a namespace that can be used in XML literals or a LINQ to XML query or embedded expression, one uses the Imports statement. In our application, we will have to define a specific namespace that relates to the RSS feed that is supplied by the Windows Live Expo API. To define this namespace within our code, we will use the following Imports statement:
Imports <xmlns:
This statement creates an identifier named expo that can be used to qualify an element within queries and literals. The following line of code returns all the category child elements in the expo namespace:
Item.<expo:category>
To define a default namespace, one also uses the Imports statement, but no identifier is provided. An example definition of a default namespace is the follow:
Imports <
By default, the empty namespace is the "default" namespace. If you define another namespace to be the default and you need the use of the empty namespace, you can create a prefix for the empty namespace as in the following example.
Imports <xmlns:
Type inference. In previous versions of Visual Basic, dropping the As clause from a variable declaration resulted in the variable being typed as the Object type, where late binding was used to deal with the contents of the variable. In Visual Basic 9.0, the type of local variables is "inferred" by the type of the initialize expression on the right-hand side. For instance, in the following statement:
Dim x = 1
the x variable would be inferred as Integernot Object, as in earlier versions of Visual Basic. If you look at the code in Figure 1, you will see that type inference is being used for the sheetTemplate variable. Because the variable is being assigned an XML literal, its type is implied as XElement, because that is the type of the XML literal on the right-hand side. Visual Basic 9.0 supports a new option named Option Infer. This option is used to turn type inference on or off, and is on by default. It is important to note that type inference applies only to local variables. Type inference does not apply to class-level and module-level variables. This means that in the class definition that follows, the status variable will be an Object type, not a String type.
Public Class DemoClass
Dim status = "Default"
End Class
Getting Home Sales Data from Windows Live Expo
We are using the XML over HTTP interface named to get the data in the form of an RSS feed. The syntax of this request is the following:
When making the call to the service, the user may provide a number of parameters. For this application, the parameters that will be passed to the service are the following:
Using this Web service, we obtain the RSS feed and store it as an XElement type. The code to do this is straightforward and is shown in Figure 2. Notice that in defining the feed variable, we are using "type inference," as described earlier.
Private Function GetSheetXML() As XDocument
' get the "xml over http" rss feed url
Dim url As String = BuildURL()
' get the xml feed - note that type inference is used here
Dim feed = XElement.Load(url)
Figure 2. Getting the RSS feed
The application uses a helper function named BuildURL to define the service call and its parameters. This function is shown in Figure 3. Note that there is little error checking in this application. This is not because error checking is not needed, but instead because the error checking could make it harder to focus on the technologies that are the subject of this article. Also, be aware that MyAppID is a numerical identifier that is available to developers from the Windows Live Expo API site.
Figure 3. Building the Web service call URL (Click on the picture for a larger image)
The XML that is returned from the Web service is in the form of an RSS feed. A sample of this XML is shown in Figure 4.
Figure 4. The RSS feed from expo.live.com
An important attribute in the <rss> element is the namespace definition:
xmlns:classifieds=""
We will have to define this namespace in our code in order to identify correctly elements in the document that are qualified by the namespace (such as the <classifieds:totalListings> element in Figure 4).
The key elements for our application in the RSS feed are the <item> elements. Figure 4 shows one <item> element; but, in reality, there are many returned from the service. A closer look at an <item> element reveals the elements and attributes with which we will be working. Figure 5 shows the XML.
Figure 5. Details of the <item> element (Click on the picture for a larger image)
We are interested in data about the homes for sale including the price ; the ZIP code (postcode) ; and details on the home, including number of bedrooms and bathrooms, the year the home was built, and the size of the home . Note that the element named <classifieds:LOT_SIZE> is actually storing the square footage of the home.
You can see how we will be using these elements in the worksheet that is shown in Figure 6.
Figure 6. Spreadsheet created from the RSS feed (Click on the picture for a larger image)
The other thing that is noted in the <item> element in Figure 5 is the classifieds:transactionType attribute .
We will now look at the code that will process the XML RSS feed data. The RSS feed includes homes both for sale and for rent, so the first thing that we must do is to get only elements that are for sale. To do this, we must query the RSS XElement holding the feed; therefore, we use a LINQ to XML query, as seen in Figure 7 (which is the continuation of the GetSheetXML() function that is shown in Figure 2). Again, note the use of type inference in the definition of itemList.
Figure 7. LINQ to XML query to get selected <item> elements (Click on the picture for a larger image)
In addition to this code, namespaces must be defined. This is done at the beginning of the code using Imports statements, as shown in Figure 8. We will discuss these namespaces as they are used in the code.
' define an expo.live namespace
Imports <xmlns:
' define an empty namespace
Imports <xmlns:
' define the default namespace
Imports <
Figure 8. Defining the namespace
Let's look at the LINQ to XML query in Figure 7 in some detail. The From clause identifies an iterator named itemElement that refers to the element that is in scope for each iteration. The expression feed...<empty_ns:item> identifies the IEnumerable(Of XElement) "feed" reference and uses the "descendants axis" (...) to get all the <item> elements within the feed reference, no matter how deeply they occur. The XML axis property must be qualified with the empty namespace. Otherwise, the default namespace—in this case,—would be applied to the axis property. The Where clause filters these <item> elements to only those that have "For Sale" in the <category> item's transactionType attribute. Note the use of the namespace identifier (expo) and attribute axis (@) in the query syntax.
Because our ultimate objective is to place data from each "For Sale" item into a separate row of our worksheet, we need a way to identify easily the row in the worksheet in which each selected item will be stored. In this application, we do this by converting the itemList—which is an IEnumerable—into a List(Of T), so that we can later use the list's IndexOf() method to get each <item> index and use it to determine a row number in the worksheet.
Office Open XML File Format
Now that we have our XML extracted from the Live Expo site, we must get it into an Office Excel 2007 worksheet. To do this, we will need to understand the Office Open XML File Format. The specification for this format is quite extensive; we will touch only the surface, as far as our understanding is concerned. In addition, an excellent reference that is specific to the Office Open Excel File format is available on the Web. (Please see Standard ECMA-376 Office Open XML File Formats.) Note that when we talk about Office Excel in this article, we are referring to Office Excel 2007.
An Office Excel file (.xlsx) is a container file or package that is actually in an industry-standard ZIP file format. Each file comprises a collection of parts, and this collection defines the document. For an Office Excel file, these various parts and their relationships are shown in Figure 9.
Figure 9. The various parts and their relationships in an Office Excel document (Click on the picture for a larger image)
You can see these parts if you open an Office Excel document using a ZIP application. Figure 10 shows such a view. Notice the paths that are shown in the figure; they give you a sense of the file and directory structure within the document. To work with an Office Excel document as a ZIP archive, just change the file extension from ".xlsx" to ".zip".
Figure 10. Files (parts) stored within the ZIP container of an XML document (Click on the picture for a larger image)
In our application, we will store an existing Office Excel document file (named baseWorkbook.xlsx) on the server. We will then build a new worksheet (such as sheet1.xml, shown in Figure 10) using the XML <item> elements we extracted from the RSS feed. We will then delete the existing worksheet from the Office Excel document and then add our new one. Finally, we will offer the user the opportunity to download the newly modified workbook with the newly added worksheet.
We are ready to write the code to create the new worksheet using the <item> elements from the RSS feed. It must be restated that we have just touched the surface of Office Open XML Format. In fact, there are things that we might want to add to our workbook that might cause "issues" when we open the workbook. These issues deal with the many parts of the document and their relationships. If you add XML that does not include all the relationships, an informational dialog box might be displayed indicating that there are issues that must be resolved. You are given the option to have Office Excel try to fix these issueswhich really means that it attempts to resolve the references, update shared-value tables, and so on.
Using XML Literals and Embedded Expressions to Create the New Worksheet
The code that we will see next is a continuation of the GetSheetXML() function that was shown earlier. We will be working with a large XML literal that was created initially by copying and pasting the complete XML definition of the worksheet from an existing Office Excel workbook. This XML literal is then modified by placing embedded expressions at the appropriate locations.
We begin by looking at a few lines of code that define some parameters for modifying the XML literal that represents the new worksheet. This code (again, a continuation of the GetSheetXML() function) is shown in Figure 11.
Figure 11. Continuation of the GetSheetXML() function that creates the new worksheet (Click on the picture for a larger image)
The code in Figure 11 determines what the last row will be by first determining the number of <item> elements in the RSS feed. Because there will be one new row in the worksheet for each <item> element, we can determine the last row (because the first rowthe headingsare in row 2, we calculate the last row by adding 2 to the number of <item> elements). The last line of code in Figure 11 sets the value of a String variable that defines the cell range for our worksheet.
Now, we start working with the XML literal. Figure 12 shows the first few lines of the literal. We started by writing the code:
Dim sheetTemplate = _
and then just pasting in the XML definition from the existing Office Excel worksheet. This is one of the great features in Visual Basic 9.0: Instead of having to create a document using the DOM API, we just take the XML that we want to manipulate and paste it into our code as an XML literal. Then, we replace the original "ref" attribute value from:
ref="B2:H3"
to
ref=<%= cellRange %>
This embedded expression uses the previously defined cellRange variable to take into account the new rows of data to be added.
' finally we go into the actual XML literal and insert the new range
' and the appropriate data from the RSS feed
Dim sheetTemplate = _
<?xml version="1.0"?>
<worksheet>
<dimension ref=<%= cellRange %>/>
Figure 12. First part of the XML literal
Also note that the object reference named "sheetTemplate"—used to store the XML literalis defined using type inference. In this case, the type will become an XDocument, as opposed to an XElement. The difference between the two is that XDocuments may contain processing instructions (PI) and comments before the root element definition, while XElement types cannot. The following special XML declaration:
<?xml version="1.0"?>
in our document will be seen by the type-inference engine and, therefore, will type "sheetTemplate" as an XDocument.
The final step that we must perform is to define the rows using values from each <item> element in the RSS feed. Figure 13 shows this code. (There is additional XML in the literal between the <dimension> element in Figure 12 and the start of the embedded expression in Figure 13. See the code download for this article to view this XML.) There is a lot going on in these lines of code. First, note that we have a LINQ to XML query that queries across the List(Of XElement) RSS feed named rssItems:
<%= From item In rssItems Let rowNum = rssItems.IndexOf(item) + startRow _
As mentioned earlier, we must identify the row number for the row that we are inserting; we use the IndexOf method of the List to compute this. This computed value is stored in a local variable named rowNum that will be computed for each iteration of the query.
For each item in the collection, we select a number of values and use them in embedded expressions. These embedded expressions are fairly straightforward.
Figure 13. Creating new rows in the worksheet using the RRS feed data (Click on the picture for a larger image)
We are accessing specific data items from the feed. For example, in column D of each row, we are adding the value of the YEAR_BUILT element (item.<expo:details>.<expo:YEAR_BUILT>.Value). Note that we have used the namespace that we defined in Figure 8. As you will see in a later section, XML Intellisense is a big help in entering element references. We can easily create the formula found in column H by using an embedded expression that includes string constants, row values, and string concatenation. Something that Visual Basic programmers must remember is that XML is case sensitive; this means that XML properties, like attribute names, are case sensitive.
The issue of namespaces and how they are applied within the XML literals and XML axis properties can be confusing, and it is helpful to review what we have done here. Figure 14 summarizes what is happening by showing the namespace definitions and Tool Tip–enhanced segments of code. In this figure, we see <expo:category>, which resolves to fully qualified name:
{}category
This resolution is the result of applying the <expo> prefix. Similarly, we see <row>, which resolves to fully qualified name:
{}row
This resolution is a result of the fact that the default namespace is being applied. Finally, <empty_ns:item> resolves to item, because the empty namespace is applied (the empty_ns prefix).
Figure 14. Namespaces applied to code (Click on the picture for a larger image)
Finally note the entire new XDocument contents are returned by the function. In the next step, we will take this XDocument and place it into our workbook document container.
Modifying the Office Excel Workbook Container
As mentioned previously, the Office Excel workbook is a container stored in the standard ZIP archive format. Microsoft introduced a new API, known as the Packaging API, with the introduction of Microsoft .NET 3.0. This API, which is found in WindowsBase.dll, must be added as a reference to the application in order to get access to the packaging API.
For this application, a small class named SpreadSheet has been created to manage the workbook. It includes a constructor that opens the package and establishes a reference to the workbook part (sheet1.xml) that will be replaced with the new worksheet that is stored in the XDocument object. The original worksheet must be removed, so there is a RemoveOldSheet method. The new worksheet must then be added, so an AddNewSheet method is included for the class.
The complete code for this SpreadSheet class is available in the code download for this article.
Putting It All Together
With the background of the code presented previously, it is time to look at the main Web application and code that orchestrates the fetching of the RSS feed, converts it to a new Office Open XML file, and then replaces an existing worksheet with the newly created one.
Figure 15 shows the user interface for the application prototype. The user supplies a ZIP code as the center of the home search and a distance in miles around that ZIP code. A simple click event is defined for the Get Spreadsheet button.
Figure 15. The Web interface for the prototype application
When the click event finishes executing, a hyperlink pointing to the newly modified Office Excel workbook is made visible as shown in Figure 16. This allows the user to download the workbook to the client machine.
Figure 16. The Web interface with the worksheet download link active
The code for the Web application and the first part of the click event is shown in Figure 17. Note the Imports of the necessary namespaces. Regarding the system-derived namespaces, this application was built using Beta 2 of Visual Studio 2008; as later betas and release candidates are released, there might be a need to use a different set of Imports. Also note the Import statements that define the XML namespaces used within the Expo Live RSS feed, an empty namespace, and the default namespace used within the Office Open Excel worksheet XML document.
Figure 17. The Imports and first few lines of the Web application (Click on the picture for a larger image)
Note As of Beta 2, the Web application template does not include the reference to System.Xml.Linq.dll. In Beta 2, this reference is located at C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5.
We looked at the GetSheetXML() function in Figures 2, 7, and 11 through 13. This function got the RSS feed from Expo Live by using the user-supplied parameters; then, it took the RSS feed, and used an XML literal and embedded expressions to build the new worksheet.
The final steps involve using the SpreadSheet class and the Packaging API to replace an existing worksheet with our new one. Figure 18 shows this code as the continuation of the click event.
Figure 18. Continuation of the click-event code (Click on the picture for a larger image)
Note that the first part of the code works with getting some configuration settings from the web.config file. These settings define the location of the template workbook as well as the relative location of the workbook part that will be replaced (sheet1.xml). The relevant section of web.config is shown in Figure 19.
Figure 19. Application settings from the web.config file (Click on the picture for a larger image)
The new worksheet first must be saved, because the Packaging API can add only a part from a file. Following this, a new SpreadSheet object is created and used to remove the old worksheet part and replace it with the new worksheet part. Finally, the hyperlink that points to the updated workbook is made visible.
This concludes the in-depth description of our sample application. It shows how Visual Basic 9.0, with its LINQ to XML and embedded XML features, provides an extremely powerful way to work with XML. Note in particular how important the use of XML literals and XML axis properties—which are unique to Visual Basic 9.0—were to the application solution. Next, we look at some of the productivity features in Visual Basic 9.0 that make it much easier for the developer, working within Visual Studio, to use the new language features.
Productivity Enhancements Within Visual Basic 9.0 and Visual Studio 2008
When dealing with XML literals, there are two important features that are available within Visual Basic 9.0. The first is autocompletion. With autocompletion, when an opening element is entered into the code, the closing element is entered automatically. In addition, if you change the spelling of the opening tag, the system will automatically change the spelling of the matching closing tag. The second is outlining where the literal can easily be collapsed or expanded based on parent/child relationships.
In addition to outlining and autocompletion, XML literals are checked for syntax. Figure 20 shows an XML literal that contains two errors. In the top image, an attribute value is shown without being enclosed in quotes. When that error is corrected, the lack of the closing ">" character in the </phone> element is highlighted.
Figure 20. Syntax error in an XML literal (Click on the picture for a larger image)
Arguably, however, the most significant productivity enhancement in Visual Basic 9.0 deals with Intellisense and XML axis properties. If a schema is available within the project for the XML, the information from the schema is used within the context of Intellisense to provide the developer with a set of choices while entering LINQ queries and embedded expressions.
For our application, we do not have a schema available, so we must create one. Fortunately, Visual Studio provides a tool to do this, if we have the XML available. To get the XML, open the URL that was created in the application (see Figure 3) in Visual Studio by using the File menu and selecting Open. Visual Studio will open the XML editor with the query result (you might want to use the Save As command to shorten the file name). You can now create the schema using the XML menu item and selecting Create Schema. For the RSS feed in our application, three schema files will be created. Be sure to save these schemas and add them to the project. You can see these schema files in the Solution Explorer that is shown in Figure 21.
Figure 21. Solution Explorer highlighting the new schema files
You can then see the namespaces defined in the RSS feed schema in the Imports statement. Figure 22 shows the Intellisense (which is called schema discovery).
Figure 22. Schema-enhanced Intellisense on Imports
Now that we have imported the namespace that is backed up with the schema information, we can see the enhanced Intellisense when we enter code. The first code that we will enter is the LINQ query shown in Figure 7. Figure 23 shows the Intellisense list that is displayed as possible values for the descendant's axis of the feed list variable.
Figure 23. Intellisense applied within a LINQ query
Note that there are many choices for descendant attributes and the Intellisense engine cannot identify which ones are known with certainty. Items in which the XSD type is not known with certainty are placed in what is called the "candidate" list. To indicate that an item is in this candidate list, a question-mark glyph is added to the item. Figure 24 shows this glyph.
Figure 24. Glyph to indicate item is in the "candidate" list
Figure 25 shows the Intellisense list within an embedded expression. Intellisense matches not only on the prefix, but also on the local names of the element or attribute. Looking at Figure 25, you can start typing "cou...", and the match on "expo:country" will be found.
Figure 25. Intellisense applied within an embedded expression
The choices available as descendants of the <location> element are well defined in the schema and thus are added to what is called the "matched" list by Intellisense. The fact that a choice is a member of the matched list is indicated by the use of the green check-mark glyph as shown in Figure 26.
Figure 26. Glyph to indicate item is in the "matched" list
In addition to the great Intellisense and compile-time support for XML, Visual Basic 9.0 also supports enhanced information while debugging. Figure 27 shows the Locals window while the application is in Break mode. The breakpoint has been set right after the following statement:
Dim rssItems As List(Of XElement) = itemList.ToList
has been executed. This statement causes the LINQ query to be executed. In Figure 27, we are looking at the in-memory results of the query for the itemList variable. You can see how the contents of this variable can be expanded to see the XML elements that are returned from the query. The value column shows the XML content of the variable which makes it extremely easy to examine the results.
Figure 27. Run-time information available for debugging (Click on the picture for a larger image)
Conclusion
In this article, we have seen a number of new features that are available in Visual Basic 9.0 and Visual Studio 2008. The processing of XML has been improved significantly with the addition of LINQ to XML, XML literals, XML axis properties, and improved Intellisense and debugging support. With these new features, Visual Basic 9.0 has raised the bar, as far as processing XML is concerned. The realistic prototype application demonstrates the value of these new features, in addition to a brief look at the new Office Open XML format. | https://msdn.microsoft.com/en-us/library/bb687701.aspx | CC-MAIN-2018-30 | en | refinedweb |
0
this is a code from the book "the C programming language" by dennis ritchie, but for some reason its not working!
#include <stdio.h> #define IN 1 /* inside a word */ #define OUT 0 /* outside a word */ /* count lines, words, and characters in input */); }
the output shows "nw" (for NewWord) as 0 all the time..
cant understand why this is so... | https://www.daniweb.com/programming/software-development/threads/434986/word-count-problem | CC-MAIN-2018-30 | en | refinedweb |
1
Hi everyone I have a issue where I have written the entire program but couple little tweaks are needed for me to get the program done.
I have my super class here "Pet". I have created couple objects in Main, "myRobin" , "myCow", "myBlackMamba" and passed some arguments to them. Im now trying to send those parameters and have them printed to the output via the toString method I have. Im learning my way around Java ..so Im wondering what am I missing that could make this possible....
Thanks everyone.
public class Pet { //office wants to store info regarding the animal it treats /* * Done */ ///variables /*Diet * Nocturnal * poisionous * ability to fly */ /* * Done */ //Classes /*SuperClass pet * subClass: * robin, cow, black mamaba */ /* * Done */ ///Characteristics /*Num. of Legs * wings or no * feathers , skin or fur * nocturnal */ private int numOfLegs; private String wings; private String skin; private String fur; private String nocturnal; private String diet; private String poison; public Pet(int l , String w , String s, String n, String d, String p) { numOfLegs = l; wings = w; skin = s; nocturnal = n; diet = d; poison = p; } //will allow me to create subclasses public Pet() { } //main start public static void Main(String[] args) { //return what each different class does Robin myRobin = new Robin(2, "Two Wings" , "feathers" , "No I sleep" , "berries" , "I am not poisonous!" ); myRobin.toString(); Cow myCow = new Cow(4,"No Wings", "little fur" , "I sleep, moo!", "I love to eat hay and grass", "Im not poisionous :("); myCow.toString(); BlackMamba myBlackMamba = new BlackMamba(0, "No Wings","skin","awake at night","I love eating rodents","Of course Im poisonous"); myBlackMamba.toString(); Pet p = new Pet(); } //might delete this public void Report() { System.out.println("This is the information for the pet."); } ///Generate getters and setters to access private data fields public String getWings() { return wings; } public void setWings(String wings) { this.wings = wings; } public String getSkin() { return skin; } public void setSkin(String skin) { this.skin = skin; } public String getFur() { return fur; } public void setFur(String fur) { this.fur = fur; } public String getNocturnal() { return nocturnal; } public void setNocturnal(String nocturnal) { this.nocturnal = nocturnal; } public String getDiet() { return diet; } public void setDiet(String diet) { this.diet = diet; } public String getPoison() { return poison; } public void setPoison(String poison) { this.poison = poison; } public void setNumOfLegs(int numOfLegs) { this.numOfLegs = numOfLegs; } public int getNumOfLegs() { return numOfLegs; } ////end getters and setters public String toString() { return("Robin:" + wings ); } ///Test all subclasses. //start with }
and then my other three classes "Robin, Cow, BlackMamba" . Since these three subclasses are pretty much similar except for the names I will post the Robin class here:
package com.Java.CS300; public class Robin extends Pet { ///Determine whether this constructor is needed public Robin(int l , String w , String s, String n, String d, String p) { } //myPet is of type "Pet" public Pet myPet; ///list the characteristics here and return a message which concludes what the animal does //these are the methods supplied to the animal public void Legs(Pet p) { myPet = p; } public Pet doIFly() { return myPet; } public Pet mySkin() { return myPet; } public Pet poison() { return myPet; } public Pet nocturnal() { return myPet; } public Pet diet() { return myPet; } //method for animal to report its characteristics ////might delete this public void infoRobin(int l , String w , String f, String s, String n, String d, String p) { //call from main Robin object } // end of report on Robin } | https://www.daniweb.com/programming/software-development/threads/446528/attempting-to-print-object-to-the-tostring-method-and-send-all-the-argument | CC-MAIN-2018-30 | en | refinedweb |
OpenCV Error: LNK1181: cannot open input file 'opencv_core2410.dll.lib'
- Todd Morehouse
I am new to the Qt ide, and am still learning C++. So I apologize in advance if I do not quite understand something.
I am trying to setup OpenCV to use inside the Qt IDE for my senior project.
I have gone through the grueling process of attempting to follow the steps by many tutorials (Steps were easy, errors were the hard part).
I somehow managed to get through all of these errors, and now I am at my final error (hopefully) and hoping somebody here can help :), here it goes.
I compiled OpenCV using minGW to c:\opencv\build\install. (OpenCV version 2.4.10)
This seems OK.
I set the proper environment variables for this as well, and set what seems to be the proper includes in my .pro file.
Once I try to run the application, I get the following error.
:-1: error: LNK1181: cannot open input file 'opencv_core2410.dll.lib'
Inside the lib folder, everything is named .dll.a, I am not sure if these are supposed to be .dll.lib, or if the .dll.lib files are somewhere else and or missing them.
# # Project created by QtCreator 2015-03-30T22:25:22 # #------------------------------------------------- QT += core QT -= gui TARGET = untitled4 CONFIG += console CONFIG -= app_bundle TEMPLATE = app SOURCES += main.cpp INCLUDEPATH += C:\\opencv\\build\\install\\include LIBS += -LC:\\opencv\\build\\install\\x64\\mingw\\lib \ -lopencv_core2410.dll \ -lopecv_highgui2410.dll \ -lopencv_imgproc2410.dll \ -lopencv_features2d2410.dll \ -lopencv_calib3d2410.dll
main.cpp
#include <opencv2/highgui/highgui.hpp> int main(){ cv::Mat image = cv::imread("img.jpg"); cv::namedWindow("My Image"); cv::imshow("My Image", image); cv::waitKey(5000); return 1; }
Thank you in advance for the help, it is much appreciated :)!
LIBS += -LC:\opencv\build\install\x64\mingw\lib
-lopencv_core2410
-lopecv_highgui2410
-lopencv_imgproc2410
-lopencv_features2d2410
-lopencv_calib3d2410
If your *.lib in "C:\opencv\build\install\x64\mingw\lib" and add the "C:\opencv\build\install\x64\mingw\bin" ( should containing your DLLs ) in your Project-Enviroment Path
- Todd Morehouse
I still come out to the same error. | https://forum.qt.io/topic/52741/opencv-error-lnk1181-cannot-open-input-file-opencv_core2410-dll-lib | CC-MAIN-2018-30 | en | refinedweb |
j'ai exaaactement le même problème . Help?
Tried 2 diff languages, JS and PHP, same algorithm, but got 2 diff results. In JS I wasn't able to pass 91%, in PHP I got 100%. After submitting the JS version it keep failing step 02.
Hi, I'm stuck in this problem with C#..I never have to optimize my code like that
I don't know why it's too slow, can you help me ? Edit: Solved
P.S I ll delete my code after an hint
What's wrong with :
int[] array = new int[N]; <-- You don't need to resize it anymore after that[...]Array.Sort(array);
thank you, I thinked we can't do that.I still have an error but not because my sort is too long, I ll search why ^^
on the last test my answer is always 70 .. What did I do wrong?
actuel= temp should be outside the if statement
It's driving me crazy. Java implementation with TreeSet and single "for" loop, still does not validate test #6 (Horses in disorder). Anyone with a clue about this?
Use a TreeSet to store the Integer values. TreeSets are ordered automatically by their natural ordering. Furthermore, TreeSets do not allow duplicate entries; the add() method returns false if the value is already present in the set, which makes the rest of the process unneccessary (as equal Integers have a difference of 0 ).
Regarding the use of a regular for-loop to compare values: You might consider using a ListIterator as all the Pi[n] elements are in the for-loop scope until the end of the loop. Although I must admit that I haven't tested the difference.
Good luck!
Hi Everyone ,
I am stuck on this puzzle and since it can't access its varibles and other stuff, I am stuck at a whoop 90% coverage
Any help will appreciated.
Thank Suraj
Hey, i can't pass the Horses in disorder test too and I can't find anything wrong with my code. Any help is appreciated.Python 3:
n = int(input())
s = set()
for i in range(n):
p = int(input())
s.add(p)
ss = sorted(s)
if len(ss) > 1:
min = ss[1] - ss[0]
for i in range(2,len(ss)):
if ss[i] - ss[i-1] < min:
min = ss[i] - ss[i-1]
print(min)
else:
print(0)
I'm getting the same thing.
You defined your initial input collection as a set() and not a list(). What are the two main differences between a set and a list?
I covered the case where there are only horses with the same power with if len(ss) > 1, I can't think of any other case where having 2 or more horses with the same power would make a difference. Obviously I am missing something, but I have no idea what it is.
If you had 2 or more horses with the same power, what would their difference be? How would they appear in your set() before getting sorted?
len(ss) is going to tell you how many horses there are in the set you have converted to a sorted list. But it doesn't tell you how many horses there are with the same value, if any.
Try creating a custom test case with a list of horses like 5, 6,8,6,3,1. What should the answer be?
Why can't I see the solutions from other members, they are all locked?
You need to solve the puzzle first.
Hey guys any suggestions on how I can improve this code to make it run faster ? Thank you much appreciated !
import java.util.*;import java.io.*;import java.math.*;
/** * Auto-generated code below aims at helping you parse * the standard input according to the problem statement. **/class Solution {
public static void main(String args[]) {
Scanner in = new Scanner(System.in);
int N = in.nextInt();
int [] array= new int[N];
int D=100000,sub=0;
for (int i = 0; i < N; i++) {
array[i] = in.nextInt();
}
for (int j=0; j<(N-1);j++){
for (int k=1; k<N;k++){
if (array[j]>array[k])
sub=array[j]-array[k];
else
sub=array[k]-array[j];
if ((sub<D)&&(sub!=0))
D=sub;
}
}
System.out.println(D);
}
}
Sort the array so you don't have 2 for loop.
Hello everyone
I'm in need of a little help.
I'm writing this in Java.
At first I essentially used a brute force algorithm. first 2 cases are fine.
Second case of course will time out. because i'm comparing every possible combination pair. So this leads to n + n+1 +n+2... (geometric sequence)=1/2n(n+1)=(n^2+n)/2 so is O(n)=n^2.
with n=99999 will yield a very large number.. ok. no big deal
2nd method is to write pretty much a "quick sort" recursive algorithm function that sorts the data in order in average case n*log(n) time then there is only n-1 (hence linear time which is FAST) amount of pairs.
first 2 cases are fine. now this time there is an out of memory heap error. I use 2 arrays (Not ArrayLists) where the function returns L * Pivot * R such that L and R are arrays and Pivot is one inger where the base cases are size 2 and 1 that swap positions. If I was writing this in C++ I would be able to "deallocate" my temporary Left and Right Arrays. However Java prides itself of no having to worry about heap memory and have been trouble trying to explicitly call the "garbage collector" at the end of the recursive function.
Could anyone who was successful in java be so kind briefly help or tell me how they did it. Thank you in advance.
I think your solution might be too complex. Remember this is an easy puzzle. There is a O(n) solution. 5 lines will be sufficient. | http://forum.codingame.com/t/horse-racing-duals-puzzle-discussion/38?page=10 | CC-MAIN-2018-30 | en | refinedweb |
Easily download, build, install, upgrade, and uninstall Python packages
Project description
Installing and Using Setuptools
Table of Contents
- Installing and Using Setuptools
- CHANGES
- 7.0
- 6.1
- 6.0.2
- 6.0.1
- 6.0
- 5.8
- 5.7
- 5.6
- 5.5.1
- 5.5
- 5.4.2
- 5.4.1
- 5.4
- 5.3
- 5.2
- 5.1
- 5.0.2
- 5.0.1
- 5.0
- 3.7.1 and 3.8.1 and 4.0.1
- 4.0
- 3.8
- 3.7
- 3.6
- 3.5.2
- 3.5.1
- 3.5
- 3.4.4
- 3.4.3
- 3.4.2
- 3.4.1
- 3.4
- 3.3
- 3.2
- 3.1
- 3.0.2
- 3.0.1
- 3.0
- 2.2
- 2.1.2
- 2.1.1
- 2.1
- 2.0.2
- 2.0.1
- 2.0
- 1.4.2
- 1.4.1
- 1.4
- 1.3.2
- 1.3.1
- 1.3
- 1.2
- 1.1.7
- 1.1.6
- 1.1.5
- 1.1.4
- 1.1.3
- 1.1.2
- 1.1.1
- 1.1
- 1.0
- 0.9.8
- 0.9.7
- 0.9.6
- 0.9.5
- 0.9.4
- 0.9.3
- 0.9.2
- 0.9.1
- 0.9
- 0.8
- 0.7.8
- 0.7.7
- 0.7.6
- 0.7.5
- 0.7.4
- 0.7.3
- 0.7.2
- 0.7.1
- 0.7
- 0.7b4
- 0.6.49
- 0.6.48
- 0.6.47
- 0.6.46
-
-.
CHANGES
7.0
Issue #80, Issue #209: Eggs that are downloaded for setup_requires, test_requires, etc. are now placed in a ./.eggs directory instead of directly in the current directory. This choice of location means the files can be readily managed (removed, ignored). Additionally, later phases or invocations of setuptools will not detect the package as already installed and ignore it for permanent install (See #209).
This change is indicated as backward-incompatible as installations that depend on the installation in the current directory will need to account for the new location. Systems that ignore *.egg will probably need to be adapted to ignore .eggs. The files will need to be manually moved or will be retrieved again. Most use cases will require no attention.
6.1
- Issue #268: When resolving package versions, a VersionConflict now reports which package previously required the conflicting version.
6.0.2
- Issue #262: Fixed regression in pip install due to egg-info directories being omitted. Re-opens Issue #118.
6.0.1
- Issue #259: Fixed regression with namespace package handling on single version, externally managed installs.
6.0
Issue #100: When building a distribution, Setuptools will no longer match default files using platform-dependent case sensitivity, but rather will only match the files if their case matches exactly. As a result, on Windows and other case-insensitive file systems, files with names such as ‘readme.txt’ or ‘README.TXT’ will be omitted from the distribution and a warning will be issued indicating that ‘README.txt’ was not found. Other filenames affected are:
- README.rst
- README
- setup.cfg
- setup.py (or the script name)
- test/test*.py
Any users producing distributions with filenames that match those above case-insensitively, but not case-sensitively, should rename those files in their repository for better portability.
Pull Request #72: When using single_version_externally_managed, the exclusion list now includes Python 3.2 __pycache__ entries.
Pull Request #76 and Pull Request #78: lines in top_level.txt are now ordered deterministically.
Issue #118: The egg-info directory is now no longer included in the list of outputs.
Issue #258: Setuptools now patches distutils msvc9compiler to recognize the specially-packaged compiler package for easy extension module support on Python 2.6, 2.7, and 3.2.
5.8
- Issue #237: pkg_resources now uses explicit detection of Python 2 vs. Python 3, supporting environments where builtins have been patched to make Python 3 look more like Python 2.
5.7
- Issue #240: Based on real-world performance measures against 5.4, zip manifests are now cached in all circumstances. The PKG_RESOURCES_CACHE_ZIP_MANIFESTS environment variable is no longer relevant. The observed “memory increase” referenced in the 5.4 release notes and detailed in Issue #154 was likely not an increase over the status quo, but rather only an increase over not storing the zip info at all.
5.6
- Issue #242: Use absolute imports in svn_utils to avoid issues if the installing package adds an xml module to the path.
5.5.1
- Issue #239: Fix typo in 5.5 such that fix did not take.
5.5
- Issue #239: Setuptools now includes the setup_requires directive on Distribution objects and validates the syntax just like install_requires and tests_require directives.
5.4.2
- Issue #236: Corrected regression in execfile implementation for Python 2.6.
5.4.1
- Python #7776: (ssl_support) Correct usage of host for validation when tunneling for HTTPS.
5.4
- Issue #154: pkg_resources will now cache the zip manifests rather than re-processing the same file from disk multiple times, but only if the environment variable PKG_RESOURCES_CACHE_ZIP_MANIFESTS is set. Clients that package many modules in the same zip file will see some improvement in startup time by enabling this feature. This feature is not enabled by default because it causes a substantial increase in memory usage.
5.3
- Issue #185: Make svn tagging work on the new style SVN metadata. Thanks cazabon!
- Prune revision control directories (e.g .svn) from base path as well as sub-directories.
5.2
- Added a Developer Guide to the official documentation.
- Some code refactoring and cleanup was done with no intended behavioral changes.
- During install_egg_info, the generated lines for namespace package .pth files are now processed even during a dry run.
5.1
- Issue #202: Implemented more robust cache invalidation for the ZipImporter, building on the work in Issue #168. Special thanks to Jurko Gospodnetic and PJE.
5.0.2
- Issue #220: Restored script templates.
5.0.1
- Renamed script templates to end with .tmpl now that they no longer need to be processed by 2to3. Fixes spurious syntax errors during build/install.
5.0
- Issue #218: Re-release of 3.8.1 to signal that it supersedes 4.x.
- Incidentally, script templates were updated not to include the triple-quote escaping.
3.7.1 and 3.8.1 and 4.0.1
- Issue #213: Use legacy StringIO behavior for compatibility under pbr.
- Issue #218: Setuptools 3.8.1 superseded 4.0.1, and 4.x was removed from the available versions to install.
4.0
- Issue #210: setup.py develop now copies scripts in binary mode rather than text mode, matching the behavior of the install command.
3.8
- Extend Issue #197 workaround to include all Python 3 versions prior to 3.2.2.
3.7
- Issue #193: Improved handling of Unicode filenames when building manifests.
3.6
- Issue #203: Honor proxy settings for Powershell downloader in the bootstrap routine.
3.5.2
- Issue #168: More robust handling of replaced zip files and stale caches. Fixes ZipImportError complaining about a ‘bad local header’.
3.5.1
- Issue #199: Restored install._install for compatibility with earlier NumPy versions.
3.5
- Issue #195: Follow symbolic links in find_packages (restoring behavior broken in 3.4).
- Issue #197: On Python 3.1, PKG-INFO is now saved in a UTF-8 encoding instead of sys.getpreferredencoding to match the behavior on Python 2.6-3.4.
- Issue #192: Preferred bootstrap location is now (mirrored from former location).
3.4.4
- Issue #184: Correct failure where find_package over-matched packages when directory traversal isn’t short-circuited.
3.4.3
- Issue #183: Really fix test command with Python 3.1.
3.4.2
- Issue #183: Fix additional regression in test command on Python 3.1.
3.4.1
- Issue #180: Fix regression in test command not caught by py.test-run tests.
3.4
- Issue #176: Add parameter to the test command to support a custom test runner: –test-runner or -r.
- Issue #177: Now assume most common invocation to install command on platforms/environments without stack support (issuing a warning). Setuptools now installs naturally on IronPython. Behavior on CPython should be unchanged.
3.2
- Pull Request #39: Add support for C++ targets from Cython .pyx files.
- Issue #162: Update dependency on certifi to 1.0.1.
- Issue #164: Update dependency on wincertstore to 0.2.
3.1
- Issue #161: Restore Features functionality to allow backward compatibility (for Features) until the uses of that functionality is sufficiently removed.
3.0.1
- Issue #157: Restore support for Python 2.6 in bootstrap script where zipfile.ZipFile does not yet have support for context managers.
3.0
- Issue #125: Prevent Subversion support from creating a ~/.subversion directory just for checking the presence of a Subversion repository.
- Issue #12: Namespace packages are now imported lazily. That is, the mere declaration of a namespace package in an egg on sys.path no longer causes it to be imported when pkg_resources is imported. Note that this change means that all of a namespace package’s __init__.py files must include a declare_namespace() call in order to ensure that they will be handled properly at runtime. In 2.x it was possible to get away without including the declaration, but only at the cost of forcing namespace packages to be imported early, which 3.0 no longer does.
- Issue #148: When building (bdist_egg), setuptools no longer adds __init__.py files to namespace packages. Any packages that rely on this behavior will need to create __init__.py files and include the declare_namespace().
- Issue #7: Setuptools itself is now distributed as a zip archive in addition to tar archive. ez_setup.py now uses zip archive. This approach avoids the potential security vulnerabilities presented by use of tar archives in ez_setup.py. It also leverages the security features added to ZipFile.extract in Python 2.7.4.
- Issue #65: Removed deprecated Features functionality.
- Pull Request #28: Remove backport of _bytecode_filenames which is available in Python 2.6 and later, but also has better compatibility with Python 3 environments.
- Issue #156: Fix spelling of __PYVENV_LAUNCHER__ variable.
2.2
- Issue #141: Restored fix for allowing setup_requires dependencies to override installed dependencies during setup.
- Issue #128: Fixed issue where only the first dependency link was honored in a distribution where multiple dependency links were supplied.
2.1.2
- Issue #144: Read long_description using codecs module to avoid errors installing on systems where LANG=C.
2.1.1
- Issue #139: Fix regression in re_finder for CVS repos (and maybe Git repos as well).
2.1
- Issue #129: Suppress inspection of *.whl files when searching for files in a zip-imported file.
- Issue #131: Fix RuntimeError when constructing an egg fetcher.
2.0.2
- Fix NameError during installation with Python implementations (e.g. Jython) not containing parser module.
- Fix NameError in sdist:re_finder.
2.0.1
- Issue #124: Fixed error in list detection in upload_docs.
2.0
- Issue #121: Exempt lib2to3 pickled grammars from DirectorySandbox.
- Issue #41: Dropped support for Python 2.4 and Python 2.5. Clients requiring setuptools for those versions of Python should use setuptools 1.x.
- Removed setuptools.command.easy_install.HAS_USER_SITE. Clients expecting this boolean variable should use site.ENABLE_USER_SITE instead.
- Removed pkg_resources.ImpWrapper. Clients that expected this class should use pkgutil.ImpImporter instead.
1.4.2
- Issue #116: Correct TypeError when reading a local package index on Python 3.
1.4.1
Issue #114: Use sys.getfilesystemencoding for decoding config in bdist_wininst distributions.
Issue #105 and Issue #113: Establish a more robust technique for determining the terminal encoding:
1. Try ``getpreferredencoding`` 2. If that returns US_ASCII or None, try the encoding from ``getdefaultlocale``. If that encoding was a "fallback" because Python could not figure it out from the environment or OS, encoding remains unresolved. 3. If the encoding is resolved, then make sure Python actually implements the encoding. 4. On the event of an error or unknown codec, revert to fallbacks (UTF-8 on Darwin, ASCII on everything else). 5. On the encoding is 'mac-roman' on Darwin, use UTF-8 as 'mac-roman' was a bug on older Python releases. On a side note, it would seem that the encoding only matters for when SVN does not yet support ``--xml`` and when getting repository and svn version numbers. The ``--xml`` technique should yield UTF-8 according to some messages on the SVN mailing lists. So if the version numbers are always 7-bit ASCII clean, it may be best to only support the file parsing methods for legacy SVN releases and support for SVN without the subprocess command would simple go away as support for the older SVNs does.
1.4
- Issue #27: easy_install will now use credentials from .pypirc if present for connecting to the package index.
- Pull Request #21: Omit unwanted newlines in package_index._encode_auth when the username/password pair length indicates wrapping.
1.3
- Address security vulnerability in SSL match_hostname check as reported in Python #17997.
- Prefer backports.ssl_match_hostname for backport implementation if present.
- Correct NameError in ssl_support module (socket.error).
1.2
- Issue #26: Add support for SVN 1.7. Special thanks to Philip Thiem for the contribution.
- Issue #93: Wheels are now distributed with every release. Note that as reported in Issue #108, as of Pip 1.4, scripts aren’t installed properly from wheels. Therefore, if using Pip to install setuptools from a wheel, the easy_install command will not be available.
- Setuptools “natural” launcher support, introduced in 1.0, is now officially supported.
1.1.7
- Fixed behavior of NameError handling in ‘script template (dev).py’ (script launcher for ‘develop’ installs).
- ez_setup.py now ensures partial downloads are cleaned up following a failed download.
- Distribute #363 and Issue #55: Skip an sdist test that fails on locales other than UTF-8.
1.1.6
- Distribute #349: sandbox.execfile now opens the target file in binary mode, thus honoring a BOM in the file when compiled.
1.1.2
1.1.1
1.1
- Issue #71 (Distribute .9.7
0.9.5
- Python #17980: Fix security vulnerability in SSL certificate validation.
0.9.4
0.9.1
- Distribute #386: Allow other positional and keyword arguments to os.open.
- Corrected dependency on certifi mis-referenced in 0.9.
0.7.8
- Distribute #375: Yet another fix for yet another regression.
0.7.7
- Distribute #375: Repair AttributeError created in last release (redo).
- Issue #30: Added test for get_cache_path.
0.7.6
- Distribute #375: Repair AttributeError created in last release.
0.7.5
- Issue #21: Restore Python 2.4 compatibility in test_easy_install.
- Distribute #375: Merged additional warning from Distribute 0.6.46.
- Now honor the environment variable SETUPTOOLS_DISABLE_VERSIONED_EASY_INSTALL_SCRIPT in addition to the now deprecated DISTRIBUTE_DISABLE_VERSIONED_EASY_INSTALL_SCRIPT.
0.7.3
0.7.2.49
- Move warning check in get_cache_path to follow the directory creation to avoid errors when the cache path does not yet exist. Fixes the error reported in Distribute #375.
0.6.46
- Distribute #375: Issue a warning if the PYTHON_EGG_CACHE or otherwise customized egg cache location specifies a directory that’s group- or world-writable.
0.6.45
- Distribute #379: distribute_setup.py now traps VersionConflict as well, restoring ability to upgrade from an older setuptools version.
0.6.43
- Distribute #378: Restore support for Python 2.4 Syntax (regression in 0.6.42).
0.6.42
- External links finder no longer yields duplicate links.
- Distribute #337: Moved site.py to setuptools/site-patch.py (graft of very old patch from setuptools trunk which inspired PR #31).
0.6.41
- Distribute #27: Use public api for loading resources from zip files rather than the private method _zip_directory_cache.
- Added a new function easy_install.get_win_launcher which may be used by third-party libraries such as buildout to get a suitable script launcher.
0.6.40
- Distribute #376: brought back cli.exe and gui.exe that were deleted in the previous release.
0.6.39
- Add support for console launchers on ARM platforms.
- Fix possible issue in GUI launchers where the subsystem was not supplied to the linker.
- Launcher build script now refactored for robustness.
- Distribute #375: Resources extracted from a zip egg to the file system now also check the contents of the file against the zip contents during each invocation of get_resource_filename.
0.6.38
- Distribute #371: The launcher manifest file is now installed properly.
0.6.37
- Distribute #143: Launcher scripts, including easy_install itself, are now accompanied by a manifest on 32-bit Windows environments to avoid the Installer Detection Technology and thus undesirable UAC elevation described in this Microsoft article.
0.6.36
- Pull Request #35: In Buildout .
- Distribute #278: Restored compatibility with distribute 0.6.22 and setuptools 0.6. Updated the documentation to match more closely with the version parsing as intended in setuptools 0.6.
0.6.34
- Distribute #341: 0.6.33 fails to build under Python 2.4.
0.6.33
- Fix 2 errors with Jython 2.5.
- Fix 1 failure with Jython 2.5 and 2.7.
- Disable workaround for Jython scripts on Linux systems.
- Distribute #336: setup.py no longer masks failure exit code when tests fail.
- Fix issue in pkg_resources where try/except around a platform-dependent import would trigger hook load failures on Mercurial. See pull request 32 for details.
- Distribute #341: Fix a ResourceWarning.
0.6.32
- Fix test suite with Python 2.6.
- Fix some DeprecationWarnings and ResourceWarnings.
- Distribute #335: Backed out setup_requires superceding installed requirements until regression can be addressed.
0.6.31
Distribute #303: Make sure the manifest only ever contains UTF-8 in Python 3.
Distribute #329: Properly close files created by tests for compatibility with Jython.
Work around Jython #1980 and Jython #1981.
Distribute .
Distribute
- Distribute #328: Clean up temporary directories in distribute_setup.py.
- Fix fatal bug in distribute_setup.py.
0.6.29
- Pull Request #14: Honor file permissions in zip files.
- Distribute #327: Merged pull request #24 to fix a dependency problem with pip.
- Merged pull request #23 to fix.
- If Sphinx is installed, the upload_docs command now runs build_sphinx to produce uploadable documentation.
- Distribute #326: upload_docs provided mangled auth credentials under Python 3.
- Distribute #320: Fix check for “createable” in distribute_setup.py.
- Distribute #305: Remove a warning that was triggered during normal operations.
- Distribute #311: Print metadata in UTF-8 independent of platform.
- Distribute #303: Read manifest file with UTF-8 encoding under Python 3.
- Distribute #301: Allow to run tests of namespace packages when using 2to3.
- Distribute #304: Prevent import loop in site.py under Python 3.3.
- Distribute #283: Reenable scanning of *.pyc / *.pyo files on Python 3.3.
- Distribute .
- Distribute #306: Even if 2to3 is used, we build in-place under Python 2.
- Distribute #307: Prints the full path when .svn/entries is broken.
- Distribute #313: Support for sdist subcommands (Python 2.7)
- Distribute #314: test_local_index() would fail an OS X.
- Distribute #310: Non-ascii characters in a namespace __init__.py causes errors.
- Distribute #218: Improved documentation on behavior of package_data and include_package_data. Files indicated by package_data are now included in the manifest.
- distribute_setup.py now allows a –download-base argument for retrieving distribute from a specified location.
0.6.28
- Distribute #294: setup.py can now be invoked from any directory.
- Scripts are now installed honoring the umask.
- Added support for .dist-info directories.
- Distribute .
- Distribute #231: Don’t fiddle with system python when used with buildout (bootstrap.py)
0.6.26
- Distribute #183: Symlinked files are now extracted from source distributions.
- Distribute #227: Easy_install fetch parameters are now passed during the installation of a source distribution; now fulfillment of setup_requires dependencies will honor the parameters passed to easy_install.
0.6.25
- Distribute #258: Workaround a cache issue
- Distribute #260: distribute_setup.py now accepts the –user parameter for Python 2.6 and later.
- Distribute #262: package_index.open_with_auth no longer throws LookupError on Python 3.
- Distribute #269: AttributeError when an exception occurs reading Manifest.in on late releases of Python.
- Distribute #272: Prevent TypeError when namespace package names are unicode and single-install-externally-managed is used. Also fixes PIP issue 449.
- Distribute #273: Legacy script launchers now install with Python2/3 support.
0.6.24
- Distribute #249: Added options to exclude 2to3 fixers
0.6.23
- Distribute #244: Fixed a test
- Distribute #243: Fixed a test
- Distribute #239: Fixed a test
- Distribute #240: Fixed a test
- Distribute #241: Fixed a test
- Distribute #237: Fixed a test
- Distribute #238: easy_install now uses 64bit executable wrappers on 64bit Python
- Distribute #208: Fixed parsed_versions, it now honors post-releases as noted in the documentation
- Distribute #207: Windows cli and gui wrappers pass CTRL-C to child python process
- Distribute #227: easy_install now passes its arguments to setup.py bdist_egg
- Distribute #225: Fixed a NameError on Python 2.5, 2.4
0.6.21
- Distribute #225: FIxed a regression on py2.4
0.6.20
- Distribute #135: Include url in warning when processing URLs in package_index.
- Distribute #212: Fix issue where easy_instal fails on Python 3 on windows installer.
- Distribute #213: Fix typo in documentation.
0.6.19
- Distribute #206: AttributeError: ‘HTTPMessage’ object has no attribute ‘getheaders’
0.6.18
- Distribute #210: Fixed a regression introduced by Distribute #204 fix.
0.6.17
- Support ‘DISTRIBUTE_DISABLE_VERSIONED_EASY_INSTALL_SCRIPT’ environment variable to allow to disable installation of easy_install-${version} script.
- Support Python >=3.1.4 and >=3.2.1.
- Distribute #204: Don’t try to import the parent of a namespace package in declare_namespace
- Distribute #196: Tolerate responses with multiple Content-Length headers
- Distribute #205: Sandboxing doesn’t preserve working_set. Leads to setup_requires problems.
0.6.16
- Builds sdist gztar even on Windows (avoiding Distribute #193).
- Distribute #192: Fixed metadata omitted on Windows when package_dir specified with forward-slash.
- Distribute #195: Cython build support.
- Distribute #200: Issues with recognizing 64-bit packages on Windows.
0.6.15
- Fixed typo in bdist_egg
- Several issues under Python 3 has been solved.
- Distribute #146: Fixed missing DLL files after easy_install of windows exe package.
0.6.14
- Distribute #170: Fixed unittest failure. Thanks to Toshio.
- Distribute #171: Fixed race condition in unittests cause deadlocks in test suite.
- Distribute #143: Fixed a lookup issue with easy_install. Thanks to David and Zooko.
- Distribute #174: Fixed the edit mode when its used with setuptools itself
0.6.13
- Distribute #160: 2.7 gives ValueError(“Invalid IPv6 URL”)
- Distribute #150: Fixed using ~/.local even in a –no-site-packages virtualenv
- Distribute #163: scan index links before external links, and don’t use the md5 when comparing two distributions
0.6.12
- Distribute #149: Fixed various failures on 2.3/2.4
0.6.11
- Found another case of SandboxViolation - fixed
- Distribute #15 and Distribute #48: Introduced a socket timeout of 15 seconds on url openings
- Added indexsidebar.html into MANIFEST.in
- Distribute #108: Fixed TypeError with Python3.1
- Distribute #121: Fixed –help install command trying to actually install.
- Distribute #112: Added an os.makedirs so that Tarek’s solution will work.
- Distribute #133: Added –no-find-links to easy_install
- Added easy_install –user
- Distribute #100: Fixed develop –user not taking ‘.’ in PYTHONPATH into account
- Distribute #134: removed spurious UserWarnings. Patch by VanLindberg
- Distribute #138: cant_write_to_target error when setup_requires is used.
- Distribute #147: respect the sys.dont_write_bytecode flag
0.6.10
- Reverted change made for the DistributionNotFound exception because zc.buildout uses the exception message to get the name of the distribution.
0.6.9
- Distribute #90: unknown setuptools version can be added in the working set
- Distribute #87: setupt.py doesn’t try to convert distribute_setup.py anymore Initial Patch by arfrever.
- Distribute #89: added a side bar with a download link to the doc.
- Distribute #86: fixed missing sentence in pkg_resources doc.
- Added a nicer error message when a DistributionNotFound is raised.
- Distribute #80: test_develop now works with Python 3.1
- Distribute #93: upload_docs now works if there is an empty sub-directory.
- Distribute #70: exec bit on non-exec files
- Distribute #99: now the standalone easy_install command doesn’t uses a “setup.cfg” if any exists in the working directory. It will use it only if triggered by install_requires from a setup.py call (install, develop, etc).
- Distribute #101: Allowing os.devnull in Sandbox
- Distribute #92: Fixed the “no eggs” found error with MacPort (platform.mac_ver() fails)
- Distribute #103: test_get_script_header_jython_workaround not run anymore under py3 with C or POSIX local. Contributed by Arfrever.
- Distribute #104: remvoved the assertion when the installation fails, with a nicer message for the end user.
- Distribute #100: making sure there’s no SandboxViolation when the setup script patches setuptools.
0.6.8
- Added “check_packages” in dist. (added in Setuptools 0.6c11)
- Fixed the DONT_PATCH_SETUPTOOLS state.
0.6.7
- Distribute #58: Added –user support to the develop command
- Distribute .
- Distribute #21: Allow PackageIndex.open_url to gracefully handle all cases of a httplib.HTTPException instead of just InvalidURL and BadStatusLine.
- Removed virtual-python.py from this distribution and updated documentation to point to the actively maintained virtualenv instead.
- Distribute #64: use_setuptools no longer rebuilds the distribute egg every time it is run
- use_setuptools now properly respects the requested version
- use_setuptools will no longer try to import a distribute egg for the wrong Python version
- Distribute #74: no_fake should be True by default.
- Distribute #72: avoid a bootstrapping issue with easy_install -U
0.6.6
- Unified the bootstrap file so it works on both py2.x and py3k without 2to3 (patch by Holger Krekel)
0.6.5
- Distribute #65: cli.exe and gui.exe are now generated at build time, depending on the platform in use.
- Distribute Distribute #52.
- Added an upload_docs command to easily upload project documentation to PyPI’s. This close issue Distribute Old Setuptools #39.
- Added option to run 2to3 automatically when installing on Python 3. This closes issue Distribute #31.
- Fixed invalid usage of requirement.parse, that broke develop -d. This closes Old Setuptools #44.
- Fixed script launcher for 64-bit Windows. This closes Old Setuptools #2.
- KeyError when compiling extensions. This closes Old Setuptools #41.
bootstrapping
- Fixed bootstrap not working on Windows. This closes issue Distribute #49.
- Fixed 2.6 dependencies. This closes issue Distribute #50.
- Make sure setuptools is patched when running through easy_install This closes Old Setuptools #40.
0.6.1
setuptools
- package_index.urlopen now catches BadStatusLine and malformed url errors. This closes Distribute #16 and Distribute #18.
- zip_ok is now False by default. This closes Old Setuptools #33.
- Fixed invalid URL error catching. Old Setuptools #20.
- Fixed invalid bootstraping with easy_install installation (Distribute Distribute #10.
0.6
setuptools
- Packages required at build time where not fully present at install time. This closes Distribute #12.
- Protected against failures in tarfile extraction. This closes Distribute #10.
- Made Jython api_tests.txt doctest compatible. This closes Distribute #7.
- sandbox.py replaced builtin type file with builtin function open. This closes Distribute #6.
- Immediately close all file handles. This closes Distribute #3.
- Added compatibility with Subversion 1.6. This references Distribute Distribute #13.
- Allow to find_on_path on systems with tight permissions to fail gracefully. This closes Distribute #9.
- Corrected inconsistency between documentation and code of add_entry. This closes Distribute #8.
- Immediately close all file handles. This closes Distribute #3.
easy_install
- Immediately close all file handles. This closes Distribute #3.
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/setuptools/7.0/ | CC-MAIN-2018-30 | en | refinedweb |
How do you decide if a change you made to your webpage is getting more customers to sign up? How do you know if the new drug you invented cures more people than the current market leader? Did you make a groundbreaking scientific discovery?
All these questions can be answered using a branch of statistics called hypothesis testing. This post explains the basics of hypothesis testing.
The first question everyone has is: did it work? How do you know if what you are seeing is due to chance or skill? To answer this you need to know: how often would you declare victory just because of random variations in your data sample? Luckily you can choose this number! This is what p-values do for you.
But before diving into more details let's set up a little toy experiment to work with and illustrate the different concepts.
measured.
def two_samples(difference, N=6500, delta_variance=0.): As = np.random.normal(6., size=N) Bs = np.random.normal(6. + difference, scale=1+delta_variance, size=N) return As, Bs
What does this look like then? We will create two samples with the same mean and 100 observations in each.
a = plt.axes() As, Bs = two_samples(0., N=100) _=a.hist(As, bins=30, range=(2,10), alpha=0.6) _=a.hist(Bs, bins=30, range=(2,10), alpha=0.6) print "Mean for sample A: %.3f and for sample B: %.3f"%(np.mean(As), np.mean(Bs))
Mean for sample A: 5.946 and for sample B: 6.093
You can see that the mean of neither of the two samples is exactly six, nor are the two values the same. Looking at the histogram of the two samples they do look kind of similar. If we did not know the truth about how these samples were made, would we conclude that they are different? If we did, would we be right?
This is where p-values and hypothesis testing come in. To do hypothesis testing you need two hypotheses which you would can pit against each other. The first one is called the Null hypothesis or $H_0$ and the other one is often referred to as "alternate" or $H_1$. It is important to remember that hypothesis testing can only answer the following question: should I abandon $H_0$?
In order to get started with your hypothesis testing you need to assume that $H_0$ is true, so the test can never tell you whether or not this assumption is a good one to make. All it can do is tell you that there is overwhelming evidence against your null hypothesis. It also does not tell you whether $H_1$ is true or not.
The p-value is often used (and abused) to decide if a result is "statistically significant". The p-value is nothing more than the probability that you observed a result as extreme (far away from $H_0$) or more extreme than the one you did by chance alone assuming that $H_0$ is true.
Let's stick with the example of us wanting to know if our changes to our website improved the conversion rate or not. The p-value is the probability for the mean in the second sample being bigger than the mean in the first sample due to nothing else but chance. In this case you can calculate the p-value by using Student's t-test. It is implemented in
scipy so let's reveal it:. print "P-value: %.5f, the smaller the less likely it is that the means are the same"%(p) one_sided_ttest(As, Bs)
P-value: 0.15576, the smaller the less likely it is that the means are the same
Common practice is to decide below which value the p-value has to be in order for this result to be statistically significant or not before looking at the data. By choosing a smaller value you are less likely to incorrectly conclude that your changes improved the conversion rate. Common choices are 0.05 or 0.01. Meaning you only make a mistake 1 in 20 or 1 in 100 times.
Let us repeat the experiment and look at another p-value:
As2, Bs2 = two_samples(0., N=100) one_sided_ttest(As2, Bs2)
P-value: 0.00285, the smaller the less likely it is that the means are the same
What happened here? The p-value is different! Not only is it different but it is also below 0.01, our changes worked! Actually we know that the two samples have the same mean, so how can this test be telling us that we found a statistically significant difference? This must be one of the cases where there is no difference but the p-value is small and we incorrectly conclude that there is a difference.
Let's repeat the experiment a few more times and keep track of all the p-values we see:
def repeat_experiment(repeats=10000, diff=0.): p_values = [] for i in xrange(repeats): A,B = two_samples(diff, N=100) t,p = stats.ttest_ind(A, B, equal_var=True) if t < 0: p /= 2. else: p = 1 - p/2. p_values.append(p) plt.hist(p_values, range=(0,1.), bins=20) plt.axvspan(0., 0.1, facecolor="red", alpha=0.5) repeat_experiment()
The p-value depends on the outcome of your experiment, that is which particular values you have for your observations. Therefore it is different everytime you repeat the experiment. You can see that roughly 10% of all experiments ended up in the red shaded area, they have p-values below 0.1. These are the cases where you observe a significant difference in the means despite there being none. A false positive.
What happens if there is a difference between the means of the two samples?
repeat_experiment(diff=0.05)
Now you get a p-value less than 0.1 more often than 10% of the time. This is exactly what you would expect as the Null hypothesis is not true.
An important thing to realize is that by choosing your p-value threshold to be say 0.05, you are choosing to be wrong 1 in 20 times. Keep in mind: This is true if you judged a lot of copies of this experiment. For each individual experiment you do, you are either right or wrong. The trouble is you do not know which one of the two it is.
The smaller a value you choose for your p-value threshold, the smaller the chance of being wrong when you decide to switch to the new webpage. Nobody likes being wrong so why not always choose a very, very small threshold?
The price you pay for choosing a lower threshold is that you will end up missing out on opportunities to improve your conversion rate. By lowering the p-value threshold you will conclude that the new version did not improve things when it actually did.
def keep_or_not(improvement, threshold=0.05, N=100, repeats=1000): keep = 0 for i in xrange(repeats): A,B = two_samples(improvement, N=N) t,p = stats.ttest_ind(A, B, equal_var=True) if t < 0: p /= 2. else: p = 1 - p/2. if p <= threshold: keep += 1 return float(keep)/repeats improvement = 0.05 thresholds = (0.01, 0.05, 0.1, 0.15, 0.2, 0.25) for thresh in thresholds: kept = keep_or_not(improvement, thresh)*100 plt.plot(thresh, kept, "bo") plt.ylim((0, 45)) plt.xlim((0, thresholds[-1]*1.1)) plt.grid() plt.xlabel("p-value threshold") plt.ylabel("% cases correctly accepted")
<matplotlib.text.Text at 0x106ede550>
From this you can see that the times you accept the new webpage (which we know to be better by 5%) is smaller if you choose your p-value lower. Missing out on these opportunities is the price you pay for being wrong less often.
For a fixed p-value threshold, you correctly decide to change your webpage more often if the effect is larger:
improvements = np.linspace(0., 0.4, 9) for improvement in improvements: kept = keep_or_not(improvement)*100 plt.plot(improvement, kept, "bo") plt.ylim((0, 100)) plt.xlim((0, improvements[-1]*1.1)) plt.grid() plt.xlabel("Size of the improvement") plt.ylabel("% cases correctly accepted") plt.axhline(5)
<matplotlib.lines.Line2D at 0x10711b910>
This makes sense. If the difference between your two onversion rates is larger, then it should be easier to detect. As a result you correctly choose to change your webpage in a higher fraction of cases. In other words the larger the difference, the more often you correctly reject the Null hypothesis.
The horizontal blue line marks the p-value threshold of 5%. You can see for the left most point at 0% improvement, we reject the Null hypothesis in 5% of cases and change our webpage. In reality the new webpage does no better than what we had before.
Similarly, the larger your p-value threshold the more often you correctly decide to reject the Null hypothesis. This comes at a price though, because the larger your p-value threshold, the higher the chance of you incorrectly deciding to change the website.
What we have called "% cases correctly accepted" is known in statistics as the power of a statistical test. The power of a test depends on the p-value threshold, the size of the effect you are looking for and the size of your sample.
For a given p-value threshold and improvement your chances of correctly detecting that there is an improvement depend on how many observations you have. If a change increases the conversion rate by a whopping 10% that is much easier to detect (you need to watch less people) than if a change only increases the conversion rate by 0.5%.
improvements = (0.005, 0.05, 0.1, 0.3) markers = ("ro", "gv", "b^", "ms") for improvement, marker in zip(improvements, markers): sample_size = np.linspace(10, 5000, 10) kept = [keep_or_not(improvement, N=size, repeats=10000)*100 for size in sample_size] plt.plot(sample_size, kept, marker, label="improvement=%g%%"%(improvement*100)) plt.legend(loc='best') plt.ylim((0, 100)) plt.xlim((0, sample_size[-1]*1.1)) plt.grid() plt.xlabel("Sample size") plt.ylabel("% cases correctly accepted")
<matplotlib.text.Text at 0x10aed9190>
As you can see from this plot, for a given sample size you are more likely to correctly decide to switch to the new webpage for larger improvements. For increases in conversion rate of 10% or more you can see that you do not need a sample with more than 2000 observations or so to gurantee you will decide to switch if there is an effect. For very small improvements you see that you need very large samples to be sure to actually detect the small improvement.
Now you know about hypothesis testing, p-values and how to use them to decide if you should switch, and you know that p-values are not all there is. The power of your test, the probability to actually detect an improvement if it is there is just as important as p-values. The beauty is that you can calculate a lot of these numbers before you ever start running an A/B test or the likes.
This post started life as a ipython notebook, download it or view it online. | http://betatim.github.io/posts/when-to-switch/ | CC-MAIN-2018-30 | en | refinedweb |
Java has a built-in sort function for integers, just use that.
Small challenge: solve it without sorting!
Hi everyone!! I'm a beginner programmer and curently I'm stuck in my solution. It's writtent in C#. I have this error
Unhandled Exception: System.IndexOutOfRangeException: Index was outside the bounds of the array.at Solution.Main on line 29
Can anyone can help me why it shows up:(
class Solution{ static void Main(string[] args) { int N = int.Parse(Console.ReadLine()); int[] horses = new int[N]; for (int i = 0; i < N; i++) { horses[i] = int.Parse(Console.ReadLine()); } Array.Sort(horses); int minDifference = horses[1] - horses[0]; int currentDifference =0; for ( int i = 0; i< horses.Length; i++) {
currentDifference = horses[i+1] - horses[i];
if (minDifference > currentDifference)
{
minDifference = currentDifference;
}
}
// Write an action using Console.WriteLine()
// To debug: Console.Error.WriteLine("Debug messages...");
Console.WriteLine(minDifference);
}
}
Hi
for ( int i = 0; i< horses.Length; i++)
horses[i+1]
for ( int i = 0; i< horses.Length; i++)
horses[i+1]
i+1 is outside for the last interation of the loop.
First, when I've submitted my code, they say that my algorithm fails with tie horses.Ok, so I've managed to correct it and I've entered tie horses in the personalized test and failed.Then I've submitted the algorithm anyway and the tie horses case is success.
Anyway I just want to wish good luck to those who haven't managed to resolve the puzzle yet. A little advice : It's not because simplest solution isn't the best that you'll have to search way too far.
can you tell me how to do it please? i'm a newbie of java and i can't pass this
Hello everyone!I'm trying to resolve this puzzle in C++, but I have some problems.In summary, the output of my code doesn't match with the output of the test case.I think that the following image summarizes a little bit better the problem:
Thanks in advance!
Use cerr to display debug ouput like "The difference between...". Use cout only for the expected answer (1)
I resolved That was the problem Now, I have to understand the fails in the validators panel !
Hi, I have found a little issue, I hope i am mistaking , when I've printed the numbers for input I have found out that the first pw data is missing. This issue is in JAVA. I have manage to pass 91% of validation. Example of error data
input: 3 5 8 9output: 5 8 9
code used: int N = in.nextInt(); for (int i = 0; i < N; i++) { int pi = in.nextInt(); System.err.println(pi); }
Hi,I have realized the bash code thanks for the previous replies which have helped me.To help you to do it, here are some reminders and some tricks :
The best way to assign a value is ((diff=$x- $y)) instead of diff=expr $x - $yTo get an absolute value, you just have to do it : abs=${diff#-} instead of any if instruction.
expr $x - $y
Good luck.
Well, just got a bash achievement, took me a while.
The trick is, that if you extract (i+1)th element from array before (i)th element, it somehow takes extremely more time, so the program needs more time to work than given time limit.
Hello,
I'm coding using python, and I'm stuck at 90%, the test number 6 - horses in disorder - does not pass, while the same test in the IDE passes. I read the hint but I don't know what to perform to optimize the algorithm. When I try to add some test the code become to slow with a lot of horses. My code is after the post
Any help appreciated to help me perform the 100%.
Thanks
import sys
import math
# Auto-generated code below aims at helping you parse
# the standard input according to the problem statement.
numbers = []
difference = None
temp = int(raw_input())
if 1 < temp < 100000:
n = temp
for i in xrange(n):
pi = int(raw_input())
if 0 < pi <= 10000000:
numbers.append(pi)
del(n)
# Write an action using print
# To debug: print >> sys.stderr, "Debug messages..."
numbers.sort()
for i,n in enumerate(numbers):
difference_current = abs(numbers[i] - numbers[i-1])
if not difference or (difference_current < difference):
difference = difference_current
print difference
Bonjour tout le monde !
Je suis vraiment embêté avec le test de "Chevaux dans le désordre" aussi. ça passe parfaitement coté IDE mais le validateur me rejette et comme il n'est pas verbeux je ne sais pas si le problème vient du fait que les chevaux ne sont pas triés ou que le temps de traitement est trop long (sachant que je passe les tests "Nombreux chevaux").
J'ai testé avec la fonction sort (en JS), tri fusion, tri rapide, tri par insertion à l'initialisation du tableau, mais rien n'y fait je suis toujours bloqué sur ce test coté validation (il passe toujours coté IDE)
En vous remerciant !
[quote="baptistemm, post:235, topic:38"]
for i,n in enumerate(numbers):
difference_current = abs(numbers[i] - numbers[i-1])
[/quote]
what will be your first i? what do you hold in numbers[first i - 1]?
Bonjour,J'échoue au deuxième test de validation (plus petite dirfférence sur les deux premiers chevaux), et je n'arrive pas à voir pourquoi.Est-ce qu'il y a un moyen de voir les input output de ces tests?Merci
Est-ce qu'il y a un moyen de voir les input output de ces tests?
si tu veux voir du debug, le plus simple comme détaillé dans le code c'est d'écrire sur la stderr.
Hello bvs23bkv33
actually I surrounded the instruction with a condition if i>0:, I don't know why it disappeared. However even when I had back the condition the test #6 still fails.
if i>0:
I really need your help with this puzzle. Logically, my code would work perfectly for all test cases, but for some reason it always prnts out a negative value!
My code is here.
By the way, I am using Python 3
I would really appreciate for any ideas on why my code is wrong | http://forum.codingame.com/t/horse-racing-duals-puzzle-discussion/38?page=11 | CC-MAIN-2018-30 | en | refinedweb |
First thing: it's poor practice to post full source code, it used to be really frowned upon in the past.
Second thing: Python's a[-1] means the last element of a. So, if a is sorted, a[0] - a[-1] quickly becomes negative and then all other differences would be positive, so greater than this. Hope this is hint enough to fix your problem.
a[-1]
a
a[0] - a[-1]
Bonjour,je suis débutante sur codingGame et java. J'ai essayé ce jeu et n'ayant pas réussi, j'ai cherché des solution et en ai trouvé une avec int answer = Integer.MAX_VALUE;mais je ne comprends pas le lien avec MAX_VALUE?Pourriez - vous m'expliquez SVP.Merci d'avance
Quand tu tentes de minimiser une valeur (la différence de puissance entre les chevaux), il faut que tu l'initialises à quelques choses de très grand, c'est pour cela qu'il utilise Integer.MAX_VALUE, qui est la plus grande valeur qu'un int peut prendre (2 147 483 647).
Ensuite, je suppose que dans son code il fait quelque chose comme
if (myNewValue < answer)
answer=myNewValue
Ok, I dont know if someone will healp me, but there I go...
Can someone help me with this, I don't knot why it is not working:
#include <iostream>
#include <string>
#include <vector>
#include <algorithm>
using namespace std;
void getUnitaryDif(int N, int arrayT[],int arrayDif[]);
int difOfEntries (int arrayUnitaryDif[],int startI,int endI);
int module(int input);
int minimum (int inputA, int inputB);
/**
* Auto-generated code below aims at helping you parse
* the standard input according to the problem statement.
**/
int main()
{
int N;
int arrayT[N];
cin >> N; cin.ignore();
for (int i = 0; i < N; i++) {
int Pi;
cin >> Pi; cin.ignore();
arrayT[i] = Pi;
}
int arrayUnitaryDif[N-1];
getUnitaryDif(N, arrayT, arrayUnitaryDif);
int result(arrayUnitaryDif[0]);
for (int i = 1; i < N-1; i++)
{
result = minimum(result, module(arrayUnitaryDif[i]));
}
for (int i = 1; i < N; i++)
{
for (int j = i+1; j <= N; j++)
{
result = minimum(result, module(difOfEntries(arrayUnitaryDif, i, j)));
}
}
cout << result << endl;
}
void getUnitaryDif(int N, int arrayT[],int arrayDif[])
{
for (int i = 0; i < N-1; i++)
{
arrayDif[i] = arrayT[i+1] - arrayT[i];
}
}
int difOfEntries (int arrayUnitaryDif[],int startI,int endI)
{
int dif(0);
for (int i = startI-1; i < endI; i++)
{
dif += arrayUnitaryDif[i];
}
return dif;
}
int module(int input)
{
if (input < 0)
return -input;
else
return input;
}
int minimum (int inputA, int inputB)
{
if (inputA < inputB)
return inputA;
else
return inputB;
}
The ideia is to get an array (arrayUnitDif[]) that has the diference between horse N and horse N+1. Then any other diference of N and N+x would be as simple as summing arrayUnitDif[N] to arrayUnitDif[N+x-1].
Can someone help me, please. Ive been trying this for hours.
What's the problem? Does it not compile? Does it not run? Does it give a wrong result? Every time? Sometimes?
Hi everyone. I only got 90% score for this puzzle. This result shows "Shorter difference on the first 2 horses" not resolve. Anybody has any idea?
How do you calculate the shortest difference? Are you sure your algorithm will work for any valid input? Are you sure the algorithm implemented correctly? I suggest to recheck everything
Hi everyone!I'm a bit disappointed : I just got an error after submit, on the "01 Simple Case". All the other tests are OK. Must be something really stupid, but I can't find what... Any idea ?Thanks,
Edit: Ok, I re-coded this from scratch, using simple and brutal sorting this time. Way more simple than the custom sorting based on dichotomy I used before, and much more efficient... I still don't have any idea why the simple case was wrong. But the new code scored 100% easily, so I won't search for long...
Hi i'm having some issue resolving this puzzle Any kind of help would be appreciated, thanks.
Here is my code (C++ )
#include <iostream>
#include <string>
#include <vector>
#include <algorithm>
using namespace std;
/**
* Auto-generated code below aims at helping you parse
* the standard input according to the problem statement.
**/
int main()
{
int N;
int D=10000000;
vector<int> liste (N,0);
cin >> N; cin.ignore();
for (int i = 0; i < N; i++) {
int Pi;
cin >> Pi; cin.ignore();
liste[i]=Pi;
}
sort(liste.begin(),liste.end());
for (int k = 0; k<N-1;k++){
if (abs(liste[k] - liste[k+1]) < D){
D=abs(liste[k] - liste[k+1]);
}
}
// Write an action using cout. DON'T FORGET THE "<< endl"
// To debug: cerr << "Debug messages..." << endl;
cout << D << endl;
}
I get this : ErrorsSegmentation fault.at Answer.cpp. function main () on line 32FailureFound: NothingExpected: 1
I think there's a problem with the way i create my vector ?
try liste.push_back(Pi);
thanks it worked
could you tell me what goes wrong with liste[i]=Pi ?
The list is initially empty (you just say that you plan to add N elements, so the vector class can already allocate that memory and avoid moving bytes while extending).So, before you can set/overwrite the element at index x, you first have to add it with push_back.
i dont know why my solution doesnt works in javascript for many horses, please everyone can help me?
/**
* Auto-generated code below aims at helping you parse
* the standard input according to the problem statement.
**/
var aux;
var menor=10000;
var N = parseInt(readline());
var cadena = new Array(100001);
for (var i = 0; i < N; i++){
cadena[i]=parseInt(readline());
}
cadena.sort(function(a, b){return a-b});
for (var i = 0; i < N; i++) {
if (i===0){
aux=cadena[0];
}
if (i>0){
if ((cadena[i]-aux)<menor){
menor=(cadena[i]-aux);
aux=cadena[i];
}
}
}
// Write an action using print()
// To debug: printErr('Debug messages...');
print(menor);
The problem is that you store your variable "aux" only if your condition is true, so if it's false you keep all time the same value and your condition can't be true anymore
Using Javascript I had trouble with Horses in Disorder test but solved it. In case this helps anyone with a similar problem,I originally had a variable D which held the lowest difference between currently checked values, but I checked if D was undefined before every comparison. I changed D to be a very high initial value 99999999 (which I should have done in the first place if I'd noticed the constraints section 0 < Pi ≤ 10000000)
var temp= pi[i+1]-pi[i] //pi is my array of horse power
var D;
if(!D || temp < D){
D = temp;
}
//became
var D=99999999
if(temp < D){
D = temp;
}
just changing this made it efficient enough to pass.
Good Luck!
There is a weird bug when solving this puzzle in PHP.
If there is an extra line break at the end of the source code, the last test in the validation tests will not pass (Many horses).This happens even if all the IDE test pass and all the other validation tests pass.
Please fix this.
To me it's not a bug but just how PHP works while the interpretor of PHP consider that all outside of the php tag is just text, normally to write html. So if the PHP tag is closed you can write hardcoded value and CG will consider that like the STDOUT of the file.Except if they scan the file and remove all outside any php tag there is no issue of this, and i don't think we really need that
I don't think the disorderly is solvable in pure bash (I mean not calling external utilities). Anybody did it??
I am really not good in bash (never did any bash except for this puzzle) but yeah i did it
Could you share your code?? You can mail me if you want:
[No, you shouldn’t ask for this here.] | http://forum.codingame.com/t/horse-racing-duals-puzzle-discussion/38?page=12 | CC-MAIN-2018-30 | en | refinedweb |
It's not the same without you
Join the community to find out what other Atlassian users are discussing, debating and creating.
When I run a script in Bamboo using the Script plugin, all of the standard out from the script is reported as an error in the Bamboo log ang is reported in the Error Summary. Is there a way to get it to ignor standard out for logging? I even tried to redirect the std.out to a file to see if I could hide it, but that didn't work. The script that I'm running is nosetests with the xunit output parameter. Here's a sample of the standard out that I get that it shows an error:
----------------------------------------------------------------------
Ran 0 tests in 0.088s
OK
Shows like this in the log:
error 18-Jun-2014 11:44:30 error 18-Jun-2014 11:44:30 ---------------------------------------------------------------------- error 18-Jun-2014 11:44:30 Ran 0 tests in 0.088s error 18-Jun-2014 11:44:30 error 18-Jun-2014 11:44:30 OK
Hi Marc,
Thank you for your question.
There are a few things I would suggest you on doing:
nosetest p1.py > text.txt 2>&1
Or
nosetests --processes 4 &> output.txt nosetests --with-xunit --xunit-file=test_output.xml
Or
The option to make it as quiet as possible (-q for all of them)
The option to turn off output capture (-s works for both pytest and nose, not needed for unittest)
nosetests -q -s test_fixtures:TestSkip
Kind regards,
Rafael
The first option worked great. It still confuses me as to *why* a standard out print statement is interpreted as an error in Bamboo. Shouldn't it me more of an informational item and not an error?
You can always specify what results are successful and what are failures. I've got another post on here somewhere where I go into some detail on how to structure that. My examples are based on the documentation for error levels on robocopy which you can google and find. Robocopy always returns a non-zero return code which is only a "failure" when the number is above 7. So when you script with that you have to force the return to 0 so that the build task reads as successful. As long as you can test for success or failure in your script you can return 0 or return (non-zero) and force the script to succeed or fail based on your criteria instead of the. | https://community.atlassian.com/t5/Bamboo-questions/Running-a-script-using-the-Script-plugin-reports-all-standard/qaq-p/336983 | CC-MAIN-2018-30 | en | refinedweb |
Javascript Multiline Regular Expressions Don't Include Carriage Returns In IE
In a Regular Expression (RegEx) pattern, the ^ and $ characters typically match the start and end of an entire string. However, if you run a regular expression pattern in "Multiline" mode, the ^ and $ characters should match the start and end of each individual line, respectively. This is a pattern construct that I typically use on the server-side for data file parsing. On the client side, however, I very rarely use it. And, because of this seldom usage, I tend to forget that client-side support for multiline patterns is not universally consistent.
Case in point, last week I discovered some buggy behavior in my jQuery Template Markup Language (JTML) project. In the underlying rendering engine, JTML compiles down to an executable Javascript function in which each line of the JTML template is written to an output buffer in order to reduce string concatenation costs. The individual template lines were extracted using a multiline regular expression. This worked perfectly in Firefox, but created unterminated string constant Javascript errors in IE.
At first, debugging this problem was very frustrating because it appeared that both Firefox and IE supported multiline regular expressions. And, in fact, they do. But, they do not support these pattern constructs in the same capacity. After much alert()'ing and console.log()'ing, I finally figured out what the difference was - Internet Explorer (IE) does not include carriage returns (\r) in its multiline match delimiters. As such, those \r characters were being compiled down into mid-string line breaks, which is what was causing the unterminated string errors.
To see this in action, I am going to loop over the lines of a given Script tag using a multiline regular expression:
- <!DOCTYPE HTML>
- <html>
- <head>
- <title>Javascript Multline Regular Expression</title>
- </head>
- <body>
- <h1>
- Javascript Multline Regular Expression
- </h1>
- <!-- This is our input data. -->
- <script id="template" type="text/jtml">
- This data
- is spread across
- multiple lines.
- </script>
- <!-- This is our output element. -->
- <form>
- <textarea
- id="output"
-
- </textarea>
- </form>
- <script type="text/javascript">
- // Grab the HTML of the template node.
- var jtml = document.getElementById( "template" ).innerHTML;
- // Grab the FORM output.
- var output = document.getElementById( "output" );
- // Create a counter for the number of lines found.
- var lineCount = 0;
- // Iterate over the JTML content in MULTILINE mode; this
- // should match the
- jtml.replace(
- new RegExp( "^(.*)$", "gm" ),
- function( $0 ){
- // Append mached line to output.
- output.value += $0;
- // Increment line count.
- lineCount++;
- }
- );
- // Append line count to output.
- output.value += lineCount;
- </script>
- </body>
- </html>
As you can see, as I am matching the individual lines in the Script tag, I am outputting them to the Textarea output and incrementing my line count. When I run this in Firefox, I get the following page output:
As you can see, Firefox found 5 individual lines in the Script tag. And, since it used both the carriage return and the new line characters as multiline delimiters, the resultant textarea has no hard line breaks.
On the other hand, when we run the above code in Internet Explorer (IE), we get the following page output:
This is a very different story. As you can see, Internet Explorer also found multiple, individual lines; but, it found 11 lines rather than just 5. This is because it did not include the carriage return (\r) character in the multiline pattern delimiter. As such, the resultant textarea does contain hard line breaks as well as lines consisting of just the \r character (hence the additional line count).
NOTE: Some of the line count in IE can be reduced by using the (+) qualifier rather than the (*) qualifier in the matching regular expression.
I've had multiline problems before. But, as I was saying, I don't use multiline regular expressions very often in Javascript. Hopefully, this time, I'll remember that even in the most modern browsers, they are not quite supported consistently enough for quite alarming. I've used regular expressions to seperate lines before.
Gotta take a deeper look into that...
Thanks for pointing it out Ben!
@Martin,
Yeah, this is frustrating stuff. There are some other odd Javascript RegExp differences in the other browsers, specifically with looping and exec(). This seems like the kind of thing that should be pretty universal.
Hi,
Nice post, Please tell me what of the both i can use for begin (the ^ and $)?
Thank you for this answer.
Sincerely
IE is now complying if DOCTYPE is 1st in streeming. Without it, same old bug. | http://www.bennadel.com/blog/1917-javascript-multiline-regular-expressions-don-t-include-carriage-returns-in-ie.htm?_rewrite | CC-MAIN-2016-30 | en | refinedweb |
resque-retry
A Resque plugin. Requires Resque ~> 1.25 & resque-scheduler ~> 4.0.
This gem provides retry, delay and exponential backoff support for resque jobs.
- Redis backed retry count/limit.
- Retry on all or specific exceptions.
- Exponential backoff (varying the delay between retrys).
- Multiple failure backend with retry suppression & resque-web tab.
- Small & Extendable - plenty of places to override retry logic/settings.
Install & Quick Start
To install:
$ gem install resque-retry
If you're using Bundler to manage your dependencies, you should add
gem 'resque-retry' to your
Gemfile.
Add this to your
Rakefile:
require 'resque/tasks' require 'resque/scheduler/tasks'
The delay between retry attempts is provided by resque-scheduler. You'll want to run the scheduler process, otherwise delayed retry attempts will never perform:
$ rake resque:scheduler
Use the plugin:
require 'resque-retry' class ExampleRetryJob extend Resque::Plugins::Retry @queue = :example_queue @retry_limit = 3 @retry_delay = 60 def self.perform(*args) # your magic/heavy lifting goes here. end end
Then start up a resque worker as normal:
$ QUEUE=* rake resque:work
Now if you ExampleRetryJob fails, it will be retried 3 times, with a 60 second delay between attempts.
For more explanation and examples, please see the remaining documentation.
Failure Backend & Resque Web Additions
Lets say you're using the Redis failure backend of resque (the default). Every time a job fails, the failure queue is populated with the job and exception details.
Normally this is useful, but if your jobs retry... it can cause a bit of a mess.
For example: given a job that retried 4 times before completing successful. You'll have a lot of failures for the same job and you wont be sure if it actually completed successfully just by just using the resque-web interface.
Failure Backend
MultipleWithRetrySuppression is a multiple failure backend, with retry
suppression.
Here's an example, using the Redis failure backend:
require 'resque-retry' require 'resque/failure/redis' # require your jobs & application code. Resque::Failure::MultipleWithRetrySuppression.classes = [Resque::Failure::Redis] Resque::Failure.backend = Resque::Failure::MultipleWithRetrySuppression
If a job fails, but can and will retry, the failure details wont be logged in the Redis failed queue (visible via resque-web).
If the job fails, but can't or won't retry, the failure will be logged in the Redis failed queue, like a normal failure (without retry) would.
Resque Web Additions
If you're using the
MultipleWithRetrySuppression failure backend, you should
also checkout the
resque-web additions!
The new Retry tab displays delayed jobs with retry information; the number of attempts and the exception details from the last failure.
Configuring and running the Resque-Web Interface
Using a Rack configuration:
One alternative is to use a rack configuration file. To use this, make sure you
include this in your
config.ru or similar file:
require 'resque-retry' require 'resque-retry/server' # Make sure to require your workers & application code below this line: # require '[path]/[to]/[jobs]/your_worker' # Run the server run Resque::Server.new
As an example, you could run this server with the following command:
rackup -p 9292 config.ru
When using bundler, you can also run the server like this:
bundle exec rackup -p 9292 config.ru
Using the 'resque-web' command with a configuration file:
Another alternative is to use resque's built-in 'resque-web' command with the additional resque-retry tabs. In order to do this, you must first create a configuration file. For the sake of this example we'll create the configuration file in a 'config' directory, and name it 'resque_web_config.rb'. In practice you could rename this configuration file to anything you like and place in your project in a directory of your choosing. The contents of the configuration file would look like this:
# [app_dir]/config/resque_web_config.rb require 'resque-retry' require 'resque-retry/server' # Make sure to require your workers & application code below this line: # require '[path]/[to]/[jobs]/your_worker'
Once you have the configuration file ready, you can pass the configuration file to the resque-web command as a parameter, like so:
resque-web [app_dir]/config/resque_web_config.rb
Retry Options & Logic
Please take a look at the yardoc/code for more details on methods you may wish to override.
Customisation is pretty easy, the below examples should give you some ideas =), adapt for your own usage and feel free to pick and mix!
Here are a list of the options provided (click to jump):
- Retry Defaults
- Custom Retry
- Sleep After Requeuing
- Exponential Backoff
- Retry Specific Exceptions
- Fail Fast For Specific Exceptions
- Custom Retry Criteria Check Callbacks
- Retry Arguments
- Job Retry Identifier/Key
- Expire Retry Counters From Redis
- Try Again and Give Up Callbacks
- Ignored Exceptions
- Debug Plugin Logging
Retry Defaults
Retry the job once on failure, with zero delay.
require 'resque-retry' class DeliverWebHook extend Resque::Plugins::Retry @queue = :web_hooks def self.perform(url, hook_id, hmac_key) heavy_lifting end end
When a job runs, the number of retry attempts is checked and incremented in Redis. If your job fails, the number of retry attempts is used to determine if we can requeue the job for another go.
Custom Retry
class DeliverWebHook extend Resque::Plugins::Retry @queue = :web_hooks @retry_limit = 10 @retry_delay = 120 def self.perform(url, hook_id, hmac_key) heavy_lifting end end
The above modification will allow your job to retry up to 10 times, with a delay of 120 seconds, or 2 minutes between retry attempts.
You can override the
retry_delay method to set the delay value dynamically.
Sleep After Requeuing
Sometimes it is useful to delay the worker that failed a job attempt, but
still requeue the job for immediate processing by other workers. This can be
done with
@sleep_after_requeue:
class DeliverWebHook extend Resque::Plugins::Retry @queue = :web_hooks @sleep_after_requeue = 5 def self.perform(url, hook_id, hmac_key) heavy_lifting end end
This retries the job once and causes the worker that failed to sleep for 5 seconds after requeuing the job. If there are multiple workers in the system this allows the job to be retried immediately while the original worker heals itself. For example failed jobs may cause other (non-worker) OS processes to die. A system monitor such as monit or god can fix the server while the job is being retried on a different worker.
@sleep_after_requeue is independent of
@retry_delay. If you set both, they
both take effect.
You can override the
sleep_after_requeue method to set the sleep value
dynamically.
Exponential Backoff
Use this if you wish to vary the delay between retry attempts:
class DeliverSMS extend Resque::Plugins::ExponentialBackoff @queue = :mt_messages def self.perform(mt_id, mobile_number, ) heavy_lifting end end
Default Settings
key: m = minutes, h = hours 0s, 1m, 10m, 1h, 3h, 6h @backoff_strategy = [0, 60, 600, 3600, 10800, 21600] @retry_delay_multiplicand_min = 1.0 @retry_delay_multiplicand_max = 1.0
The first delay will be 0 seconds, the 2nd will be 60 seconds, etc... Again, tweak to your own needs.
The number of retries is equal to the size of the
backoff_strategy array,
unless you set
retry_limit yourself.
The delay values will be multiplied by a random
Float value between
retry_delay_multiplicand_min and
retry_delay_multiplicand_max (both have a
default of
1.0). The product (
delay_multiplicand) is recalculated on every
attempt. This feature can be useful if you have a lot of jobs fail at the same
time (e.g. rate-limiting/throttling or connectivity issues) and you don't want
them all retried on the same schedule.
Retry Specific Exceptions
The default will allow a retry for any type of exception. You may change it so
only specific exceptions are retried using
retry_exceptions:
class DeliverSMS extend Resque::Plugins::Retry @queue = :mt_messages @retry_exceptions = [NetworkError] def self.perform(mt_id, mobile_number, ) heavy_lifting end end
The above modification will only retry if a
NetworkError (or subclass)
exception is thrown.
You may also want to specify different retry delays for different exception
types. You may optionally set
@retry_exceptions to a hash where the keys are
your specific exception classes to retry on, and the values are your retry
delays in seconds or an array of retry delays to be used similar to exponential
backoff.
class DeliverSMS extend Resque::Plugins::Retry @queue = :mt_messages @retry_exceptions = { NetworkError => 30, SystemCallError => [120, 240] } def self.perform(mt_id, mobile_number, ) heavy_lifting end end
In the above example, Resque would retry any
DeliverSMS jobs which throw a
NetworkError or
SystemCallError. If the job throws a
NetworkError it
will be retried 30 seconds later, if it throws
SystemCallError it will first
retry 120 seconds later then subsequent retry attempts 240 seconds later.
Fail Fast For Specific Exceptions
The default will allow a retry for any type of exception. You may change
it so specific exceptions fail immediately by using
fatal_exceptions:
class DeliverSMS extend Resque::Plugins::Retry @queue = :mt_divisions @fatal_exceptions = [NetworkError] def self.perform(mt_id, mobile_number, ) heavy_lifting end end
In the above example, Resque would retry any
DeliverSMS jobs that throw any
type of error other than
NetworkError. If the job throws a
NetworkError it
will be marked as "failed" immediately.
Custom Retry Criteria Check Callbacks
You may define custom retry criteria callbacks:
class TurkWorker extend Resque::Plugins::Retry @queue = :turk_job_processor @retry_exceptions = [NetworkError] retry_criteria_check do |exception, *args| if exception. =~ /InvalidJobId/ false # don't retry if we got passed a invalid job id. else true # its okay for a retry attempt to continue. end end def self.perform(job_id) heavy_lifting end end
Similar to the previous example, this job will retry if either a
NetworkError (or subclass) exception is thrown or any of the callbacks
return true.
You can also register a retry criteria check with a Symbol if the method is already defined on the job class:
class AlwaysRetryJob extend Resque::Plugins::Retry retry_criteria_check :yes def self.yes(ex, *args) true end end
Use
@retry_exceptions = [] to only use your custom retry criteria checks
to determine if the job should retry.
NB: Your callback must be able to accept the exception and job arguments as
passed parameters, or else it cannot be called. e.g., in the example above,
defining
def self.yes; true; end would not work.
Retry Arguments
You may override
retry_args, which is passed the current job arguments, to
modify the arguments for the next retry attempt.
class DeliverViaSMSC extend Resque::Plugins::Retry @queue = :mt_smsc_messages # retry using the emergency SMSC. def self.retry_args(smsc_id, mt_message) [999, mt_message] end self.perform(smsc_id, mt_message) heavy_lifting end end
Alternatively, if you require finer control of the args based on the exception
thrown, you may override
retry_args_for_exception, which is passed the
exception and the current job arguments, to modify the arguments for the next
retry attempt.
class DeliverViaSMSC extend Resque::Plugins::Retry @queue = :mt_smsc_messages # retry using the emergency SMSC. def self.retry_args_for_exception(exception, smsc_id, mt_message) [999, mt_message + exception.message] end self.perform(smsc_id, mt_message) heavy_lifting end end
Job Retry Identifier/Key
The retry attempt is incremented and stored in a Redis key. The key is built
using the
retry_identifier. If you have a lot of arguments or really long
ones, you should consider overriding
retry_identifier to define a more precise
or loose custom retry identifier.
The default identifier is just your job arguments joined with a dash
'-'.
By default the key uses this format:
'resque-retry:<job class name>:<retry_identifier>'.
Or you can define the entire key by overriding
redis_retry_key.
class DeliverSMS extend Resque::Plugins::Retry @queue = :mt_messages def self.retry_identifier(mt_id, mobile_number, message) "#{mobile_number}:#{mt_id}" end self.perform(mt_id, mobile_number, message) heavy_lifting end end
Expire Retry Counters From Redis
Allow the Redis to expire stale retry counters from the database by setting
@expire_retry_key_after:
class DeliverSMS extend Resque::Plugins::Retry @queue = :mt_messages @expire_retry_key_after = 3600 # expire key after `retry_delay` plus 1 hour def self.perform(mt_id, mobile_number, ) heavy_lifting end end
This saves you from having to run a "house cleaning" or "errand" job.
The expiary timeout is "pushed forward" or "touched" after each failure to ensure it's not expired too soon.
Try Again and Give Up Callbacks
Resque's
on_failure callbacks are always called, regardless of whether the
job is going to be retried or not. If you want to run a callback only when the
job is being retried, you can add a
try_again_callback:
class LoggedJob extend Resque::Plugins::Retry try_again_callback do |exception, *args| logger.info("Received #{exception}, retrying job #{self.name} with #{args}") end end
Similarly, if you want to run a callback only when the job has failed, and is
not retrying, you can add a
give_up_callback:
class LoggedJob extend Resque::Plugins::Retry give_up_callback do |exception, *args| logger.error("Received #{exception}, job #{self.name} failed with #{args}") end end
You can register a callback with a Symbol if the method is already defined on the job class:
class LoggedJob extend Resque::Plugins::Retry give_up_callback :log_give_up def self.log_give_up(ex, *args) logger.error("Received #{exception}, job #{self.name} failed with #{args}") end end
You can register multiple callbacks, and they will be called in the order that
they were registered. You can also set callbacks by setting
@try_again_callbacks or
@give_up_callbacks to an array where each element
is a
Proc or
Symbol.
class CallbackJob extend Resque::Plugins::Retry @try_again_callbacks = [ :call_me_first, :call_me_second, lambda { |*args| call_me_third(*args) } ] def self.call_me_first(ex, *args); end def self.call_me_second(ex, *args); end def self.call_me_third(ex, *args); end end
Warning: Make sure your callbacks do not throw any exceptions. If they do, subsequent callbacks will not be triggered, and the job will not be retried (if it was trying again). The retry counter also will not be reset.
Ignored Exceptions
If there is an exception for which you want to retry, but you don't want it to
increment your retry counter, you can add it to
@ignore_exceptions.
One use case: Restarting your workers triggers a
Resque::TermException. You
may want your workers to retry the job that they were working on, but without
incrementing the retry counter.
class RestartResilientJob extend Resque::Plugins::Retry @retry_exceptions = [Resque::TermException] @ignore_exceptions = [Resque::TermException] end
Reminder:
@ignore_exceptions should be a subset of
@retry_exceptions.
Debug Plugin Logging
The inner-workings of the plugin are output to the Resque Logger
when
Resque.logger.level is set to
Logger::DEBUG.
Contributing/Pull Requests
- Yes please!
- Fork the project.
- Make your feature addition or bug fix.
- Add tests for it.
- In a seperate commit, update the HISTORY.md file please.
- Send us a pull request. Bonus points for topic branches.
- If you edit the gemspec/version etc, please do so in another commit. | http://www.rubydoc.info/github/lantins/resque-retry/master/frames | CC-MAIN-2016-30 | en | refinedweb |
PDL::Stats - a collection of statistics modules in Perl Data Language, with a quick-start guide for non-PDL people.
Loads modules named below, making the functions available in the current namespace.
Properly formated documentations online at
use PDL::LiteF; # loads less modules use PDL::NiceSlice; # preprocessor for easier pdl indexing syntax use PDL::Stats; # Is equivalent to the following: use PDL::Stats::Basic; use PDL::Stats::GLM; use PDL::Stats::Kmeans; use PDL::Stats::TS; # and the following if installed; use PDL::Stats::Distr; use PDL::GSL::CDF;
Enjoy PDL::Stats without having to dive into PDL, just wet your feet a little. Three key words two concepts and an icing on the cake, you should be well on your way there.
The magic word that puts PDL::Stats at your disposal. pdl creates a PDL numeric data object (a pdl, pronounced "piddle" :/ ) from perl array or array ref. All PDL::Stats methods, unless meant for regular perl array, can then be called from the data object.
my @y = 0..5; my $y = pdl @y; # a simple function my $stdv = $y->stdv; # you can skip the intermediate $y my $stdv = stdv( pdl @y ); # a more complex method, skipping intermediate $y my @x1 = qw( y y y n n n ); my @x2 = qw( 1 0 1 0 1 0 ) # do a two-way analysis of variance with y as DV and x1 x2 as IVs my %result = pdl(@y)->anova( \@x1, \@x2 ); print "$_\t$result{$_}\n" for (sort keys %result);
If you have a list of list, ie array of array refs, pdl will create a multi-dimensional data object.
my @a = ( [1,2,3,4], [0,1,2,3], [4,5,6,7] ); my $a = pdl @a; print $a . $a->info; # here's what you will get [ [1 2 3 4] [0 1 2 3] [4 5 6 7] ] PDL: Double D [4,3]
PDL::Stats puts observations in the first dimension and variables in the second dimension, ie pdl [obs, var]. In PDL::Stats the above example represents 4 observations on 3 variables.
# you can do all kinds of fancy stuff on such a 2D pdl. my %result = $a->kmeans( {NCLUS=>2} ); print "$_\t$result{$_}\n" for (sort keys %result);
Make sure the array of array refs is rectangular. If the array refs are of unequal sizes, pdl will pad it out with 0s to match the longest list.
Tells you the data type (yes pdls are typed, but you shouldn't have to worry about it here*) and dimensionality of the pdl, as seen in the above example. I find it a big help for my sanity to keep track of the dimensionality of a pdl. As mentioned above, PDL::Stats uses 2D pdl with observation x variable dimensionality.
*pdl uses double precision by default. If you are working with things like epoch time, then you should probably use pdl(long, @epoch) to maintain the precision.
Come back to the perl reality from the PDL wonder land. list turns a pdl data object into a regular perl list. Caveat: list produces a flat list. The dimensionality of the data object is lost.
This is not a function, but a concept. You will see something like this frequently in the pod:
stdv Signature: (a(n); float+ [o]b())
The signature tells you what the function expects as input and what kind of output it produces. a(n) means it expects a 1D pdl with n elements; [o] is for output, b() means its a scalar. So stdv will take your 1D list and give back a scalar. float+ you can ignore; but if you insist, it means the output is at float or double precision. The name a or b or c is not important. What's important is the thing in the parenthesis.
corr Signature: (a(n); b(n); float+ [o]c())
Here the function corr takes two inputs, two 1D pdl with the same numbers of elements, and gives back a scalar.
t_test Signature: (a(n); b(m); float+ [o]t(); [o]d())
Here the function t_test can take two 1D pdls of unequal size (n==m is certainly fine), and give back two scalars, t-value and degrees of freedom. Yes we accommodate t-tests with unequal sample sizes.
assign Signature: (data(o,v); centroid(c,v); byte [o]cluster(o,c))
Here is one of the most complicated signatures in the package. This is a function from Kmeans. assign takes data of observasion x variable dimensions, and a centroid of cluster x variable dimensions, and returns an observation x cluster membership pdl (indicated by 1s and 0s).
Got the idea? Then we can see how PDL does its magic :)
Another concept. The first thing to know is that, threading is optional.
PDL threading means automatically repeating the operation on extra elements or dimensions fed to a function. For a function with a signature like this
gsl_cdf_tdist_P Signature: (double x(); double nu(); [o]out())
the signatures says that it takes two scalars as input, and returns a scalar as output. If you need to look up the p-values for a list of t's, with the same degrees of freedom 19,
my @t = ( 1.65, 1.96, 2.56 ); my $p = gsl_cdf_tdist_P( pdl(@t), 19 ); print $p . "\n" . $p->info; # here's what you will get [0.94231136 0.96758551 0.99042586] PDL: Double D [3]
The same function is repeated on each element in the list you provided. If you had different degrees of freedoms for the t's,
my @df = (199, 39, 19); my $p = gsl_cdf_tdist_P( pdl(@t), pdl(@df) ); print $p . "\n" . $p->info; # here's what you will get [0.94973979 0.97141553 0.99042586] PDL: Double D [3]
The df's are automatically matched with the t's to give you the results.
An example of threading thru extra dimension(s):
stdv Signature: (a(n); float+ [o]b())
if the input is of 2D, say you want to compute the stdv for each of the 3 variables,
my @a = ( [1,1,3,4], [0,1,2,3], [4,5,6,7] ); # pdl @a is pdl dim [4,3] my $sd = stdv( pdl @a ); print $sd . "\n" . $sd->info; # this is what you will get [ 1.2990381 1.118034 1.118034] PDL: Double D [3]
Here the function was given an input with an extra dimension of size 3, so it repeates the stdv operation on the extra dimenion 3 times, and gives back a 1D pdl of size 3.
Threading works for arbitrary number of dimensions, but it's best to refrain from higher dim pdls unless you have already decided to become a PDL wiz / witch.
Not all PDL::Stats methods thread. As a rule of thumb, if a function has a signature attached to it, it threads.
Essentially a perl shell with "use PDL;" at start up. Comes with the PDL installation. Very handy to try out pdl operations, or just plain perl. print is shortened to p to avoid injury from exessive typing. my goes out of scope at the end of (multi)line input, so mostly you will have to drop the good practice of my here.
PDL::Impatient
~~~~~~~~~~~~ ~~~~~ ~~~~~~~~ ~~~~~ ~~~ `` ><(((">
All rights reserved. There is no warranty. You are allowed to redistribute this software / documentation as described in the file COPYING in the PDL distribution. | http://search.cpan.org/dist/PDL-Stats/Stats.pm | CC-MAIN-2016-30 | en | refinedweb |
Object IDentifier. A 'dot' delimited hierarchically structured node and leaf namespace. Although used in many disciplines, in system administration they are commonly used in SNMP MIBs, and for naming attributes in x.500 directories and x.509 certificate attributes.
OIDs seen by system administrators "in the wild" are typically under the IANA "Private Enterprises" hierarchy, and take the form
1.3.6.1.4.1.enterprise.<etc>, where
enterprise is an IANA-assigned Private Enterprise Number, and
<etc> is an internally defined hierarchy..
For additional information on OIDs, consult the Wikipedia article. | http://serverfault.com/tags/oid/info | CC-MAIN-2016-30 | en | refinedweb |
Problem Statement.
Design (state: draft)
The OData protocol already has the constructs necessary to express navigation property URIs in the form of standard <atom:link> elements. For example, the following is the atom representation for an Order Entry which has a navigation property Order_Details expressed as an <atom:link>.
Following this pattern, the URI needed to address the relationship between entities can also be expressed as a link element in the following form:
<link rel=”<relationshipPropertyName>” type=”application/ xml” Title=”< relationshipPropertyName >” ” href=” relatedLinksURI” />
Where:
– relationshipPropertyName is a navigation property name as defined in CSDL associated with the datat service.
– relatedLinksURI is URI which identifies the relationship between the Entry represented by the parent <entry> and another Entry (or group of Entries) as identified by the navigation property
The example below shows an Order entry with a link element (as described above) that represents the orders’ relationship with OrderDetails as a response to the following GET query ””.
For JSON we would use “associationuri” to hold the relationship URI. As shown in the example below the associationuri would be a sibling of the uri property on a navigation property.
In the case of expanded relationships (using data services $expand operator) the associationuri would be placed before the “results” of the expanded related entities. For example the following GET query request “ /Orders(1)?$expand=Order_Details&$format=json” would produce the following server payload: Moustafa
Program Manager, WCF Data Services
This post is part of the transparent design exercise in the Astoria Team. To understand how it works and how your feedback will be used please look at this post.
I like the idea of encoding the relationship information w/o resorting to URI convention. But the suggested use of the rel and title attributes is, I suspect, going to cause trouble for state machine clients.
I’d like to see something like:
<link rel="relationship" href="{uri-to-details}" />
IOW, offer a <link /> that points the user agent to a resource that contains the details you are trying to encode into your REL and TITLE attributes.
This will give the most flexibility, prevent OData from encoding resource-specific information into relation links, and prevent OData from overriding the use of TITLE.
I am just looking at the Data Service Provider toolkit and I notice several things.
1.) You need a huge amount of code to expose the most simple classes like
public Class B
{
public string NoIdProperty {get;set;}
}
public Class A
{
public B MyB { get;set; }
public int MyIDProperty { get; set; }
}
2.) You do really messy things like in GetQueryRootForResourceSet(ResourceSet) function
3.) The whole System.Data.Services.Providers Namespace is as good as undocumented
4.) The whole namespace thing smells like entity framework to me
5.) The whole error reporting/debugging is not even close to real life requirements. I definately need more details that just "Something blew up somewhere".
I really love the idea of a WCF Service that exposes objects and I love the whole basic concept behind data services.
Maybe I am just too stupid, but so far for me it looks the current ADO .NET Data Service implementation will probably never match my expectations.
Maybe you should think about dumping the current implementation and start a rewrite that is focused on exposing objects (not just entity framework) from the scratch. Trying to fix the current code base seems to be a big waste of your and my time. But at least you should do your homework in regards to documentation before thinking about extensions.
Cheers,
Tobias
Tobias,
Answering your points in turn…
1) There is no doubt that the most simple path – the one we originally optimized for – is EF.
However the reflection provider is pretty simple too, and is designed to make it easy for you to take custom classes and expose then as a Data Service.
The first step is to create a ‘Context’ class to represent the model of you Data Service, something like this for example:
public class Context
{
private List<A> _As; // TODO: Initialize some how.
private List<B> _Bs;
public IQueryable<A> As {
return _As.AsQueryable();
}
public IQueryable<B> Bs {
return _Bs.AsQueryable();
}
}
And then on your classes Identify the Key properties using the [DataServiceKeyAttribute] like this:
[DataServiceKeyAttribute(“NoIdProperty”)]
public Class B
{
public string NoIdProperty {get;set;}
}
At this point you can expose your classes as a DataService simply by using
public MyDataService: DataService<Context> {…}
The service will be readonly, but you if your Context class also implements IDataServiceUpdateProvider (like the L2S example we posted on CodeGallery) then you have a readwrite service.
Finally if the reflection provider doesn’t give you enough flexibility then you need to implement a full Data Service Provider, which is admittedly not trivial. But then it is a very advanced API, which we don’t expect many developer to need to implement.
2) See above
3) Both My Blog [] and the OData SDK – DSP toolkit [] cover writing custom providers. I hear your frustration though, it is clear we need more guidance around the reflection provider, we’ll work towards that soon, in the meantime feel free to contact us again if you need help.
4) Namespace is a pretty platform and technology agnostic thing, it is simply a way of avoiding naming conflicts while keeping simple names. The web is built on the same principle. There must be millions of about.html pages out there – simple name – disambiguated by the rest of URL.
5) Have you seen DataService<>.HandleException(..) & DataServiceConfiguration.UseVerboseErrors which you can turn on in your InitializeService method? Both of these simplify tracking down issues in your DataService.
Hopefully my answers to both (1) and (2) cause you to rethink your assessment. I personally have spent a lot of time writing DSPs etc, and can say that there exist a lot of opportuntities to make things even better, and the beautiful thing is that most of the interfaces and APIs are public so you / others don’t necessarily have to wait for Microsoft to take the lead.
Please let me know if you have any more questions
Alex James | https://blogs.msdn.microsoft.com/odatateam/2010/04/05/relationship-links/ | CC-MAIN-2016-30 | en | refinedweb |
The D compiler performs stability computations for each of the probe descriptions and action statements in your D programs. You can use the dtrace -v option to display a report of your program's stability. The following example uses a program written on the command line:
You may also wish to combine the dtrace -v option with the -e option, which tells dtrace to compile but not execute your D program, so that you can determine program stability without having to enable any probes and execute your program. Here is another example stability report:
Notice that in our new program, we have referenced the D variable curthread, which has a Stable name, but Private data semantics (that is, if you look at it, you are accessing Private implementation details of the kernel), and this status is now reflected in the program's stability report. Stability attributes in the program report are computed by selecting the minimum stability level and class out of the corresponding values for each interface attributes triplet.
Stability attributes are computed for a probe description by taking the minimum stability attributes of all specified probe description fields according to the attributes published by the provider. The attributes of the available DTrace providers are shown in the chapter corresponding to each provider. DTrace providers export a stability attributes triplet for each of the four description fields for all probes published by that provider. Therefore, a provider's name may have a greater stability than the individual probes it exports. For example, the probe description:
fbt:::
indicating that DTrace should trace entry and return from all kernel functions, has greater stability than the probe description:
fbt:foo:bar:entry
which names a specific internal function bar() in the kernel module foo. For simplicity, most providers use a single set of attributes for all of the individual module:function:name values that they publish. Providers also specify attributes for the args[] array, as the stability of any probe arguments varies by provider.
If the provider field is not specified in a probe description, then the description is assigned the stability attributes Unstable/Unstable/Common because the description might end up matching probes of providers that do not yet exist when used on a future Solaris OS.
Stability attributes are computed for most D language statements by taking the minimum stability and class of the entities in the statement. For example, the following D language entities have the following attributes:
If you write the following D program statement:
x += curthread->t_pri;
then the resulting attributes of the statement are Stable/Private/Common, the minimum attributes associated with the operands curthread and x. The stability of an expression is computed by taking the minimum stability attributes of each of the operands.
Any D variables you define in your program are automatically assigned the attributes Stable/Stable/Common. In addition, the D language grammar and D operators are implicitly assigned the attributes Stable/Stable/Common. References to kernel symbols using the backquote (`) operator are always assigned the attributes Private/Private/Unknown because they reflect implementation artifacts. Types that you define in your D program source code, specifically those that are associated with the C and D type namespace, are assigned the attributes Stable/Stable/Common. Types that are defined in the operating system implementation and provided by other type namespaces are assigned the attributes Private/Private/Unknown. The D type cast operator yields an expression whose stability attributes are the minimum of the input expression's attributes and the attributes of the cast output type.
If you use the C preprocessor to include C system header files, these types will be associated with the C type namespace and will be assigned the attributes Stable/Stable/Common as the D compiler has no choice but to assume that you are taking responsibility for these declarations. It is therefore possible to mislead yourself about your program's stability if you use the C preprocessor to include a header file containing implementation artifacts. You should always consult the documentation corresponding to the header files you are including in order to determine the correct stability levels. | http://docs.oracle.com/cd/E19253-01/817-6223/chp-stab-4/index.html | CC-MAIN-2016-30 | en | refinedweb |
@Target(value=TYPE) @Retention(value=RUNTIME) public @interface SecondaryTable
This annotation is used to specify a secondary table for the annotated entity class. Specifying one or more secondary tables indicates that the data for the entity class is stored across multiple tables.
If no
SecondaryTable annotation is specified,
it is assumed that all persistent fields or properties of the
entity are mapped to the primary table. If no primary key join
columns are specified, the join columns are assumed to reference
the primary key columns of the primary table, and have the same
names and types as the referenced primary key columns of the
primary table.
Example 1: Single secondary table with a single primary key column. @Entity @Table(name="CUSTOMER") @SecondaryTable(name="CUST_DETAIL", pkJoinColumns=@PrimaryKeyJoinColumn(name="CUST_ID")) public class Customer { ... } Example 2: Single secondary table with multiple primary key columns. @Entity @Table(name="CUSTOMER") @SecondaryTable(name="CUST_DETAIL", pkJoinColumns={ @PrimaryKeyJoinColumn(name="CUST_ID"), @PrimaryKeyJoinColumn(name="CUST_TYPE")}) public class Customer { ... }
public abstract.
Copyright 2007 Sun Microsystems, Inc. All rights reserved. Use is subject to license terms. | http://docs.oracle.com/javaee/5/api/javax/persistence/SecondaryTable.html | CC-MAIN-2016-30 | en | refinedweb |
Tweaking the above code produces the following times when running test program above on LinuxTime to read 2400 packets of 64 bytes from native USB only: 1 second (or less)Time to write 2400 packets of 64 bytes to native USB only: 1 second (or less)Time to read 2400 packets of 32 bytes from native USB only: 1 second (or less)Time to write 2400 packets of 32 bytes to native USB only: 1 second (or less)
Time to write 2400 packets of 64 bytes to programming port: 13 secondsTime to read 2400 packets of 64 bytes from programming port: 13 seconds
Time to write 2400 packets of 32 bytes to programming port: 77 secondsTime to read 2400 packets of 32 bytes from programming port: 77 seconds
I've tried changing port speed to 230400. It makes no difference whatsoever.
I would use the native USB port for both read and write but as soon as I start doing async read/write against the native USB port, I'm seeing extreme data corruption and data loss.
paul@preston:/tmp/t > ./test Succfully opened /dev/ttyACM0 for readingTotal time to run test with 2400 packets of size 64 was 13 seconds
paul@preston:/tmp/t > ./test Succfully opened /dev/ttyACM0 for readingTotal time to run test with 2400 packets of size 32 was 81 seconds
#include <fcntl.h>#include <stdio.h>#include <errno.h>#include <time.h>#include <stdlib.h>#include <string.h>#include <unistd.h>#include <termios.h>#define PKTS 2400#define PKT_SIZE 32#define SER_PORT_IN "/dev/ttyACM0" // On my system this is programming #define SER_PORT_OUT "/dev/ttyACM1" // On my system this is native USB #define SER_PORT_SPEED 115200#define TEST_READ 1#define TEST_WRITE 0int main() { time_t deltaT = 0; int i = 0, bytes = 0, totalBytes = 0, fdIn = 0, fdOut = 0; char buf[PKT_SIZE]; struct termios t;#if !(TEST_READ || TEST_WRITE) printf("Nothing to test, change TEST_READ to 1 or TEST_WRITE to 1\n"); return(0);#endif#if TEST_READ // this stty command does not seem to set the baud rate in some cases snprintf(buf, sizeof(buf)-1, "stty -F %s sane raw -echo %d", SER_PORT_IN, SER_PORT_SPEED); errno = 0; fdIn = open(SER_PORT_IN, O_RDWR | O_NOCTTY); if (fdIn < 0) { printf("ERROR: Unable to open port %s, err(%d): %s\n", SER_PORT_IN, errno, strerror(errno)); return(-1); } // using the termios functions after the port is open is very reliable tcgetattr(fdIn, &t); cfsetispeed(&t, B115200); tcsetattr(fdIn, TCSANOW, &t); printf("Succfully opened %s for reading\n", SER_PORT_IN);#endif#if TEST_WRITE snprintf(buf, sizeof(buf)-1, "stty -F %s sane raw -echo", SER_PORT_OUT); system(buf); // Time for device to reset and native USB dev node to show up again sleep(4); errno = 0; fdOut = open(SER_PORT_OUT, O_RDWR | O_NOCTTY); if (fdOut < 0) { printf("ERROR: Unable to open port %s, err(%d): %s\n", SER_PORT_OUT, errno, strerror(errno)); return(-1); } printf("Succfully opened %s for writing\n", SER_PORT_OUT);#endif deltaT = time(NULL); for (i = 0; i < PKTS; i++) {#if TEST_WRITE // Ensure full packet is delivered bytes = 0; totalBytes = 0; while (totalBytes < sizeof(buf)) { errno = 0; bytes = write(fdOut, buf+totalBytes, sizeof(buf) - totalBytes); if (bytes < 0) { printf("ERROR: Unable to write to port %s, err(%d): %s\n", SER_PORT_OUT, errno, strerror(errno)); return(-1); } totalBytes += bytes; }#endif#if TEST_READ bytes = 0; totalBytes = 0; while (totalBytes < sizeof(buf)) { errno = 0; bytes = read(fdIn, buf+totalBytes, sizeof(buf) - totalBytes); if (bytes < 0) { printf("ERROR: Unable to read from port %s, err(%d): %s\n", SER_PORT_IN, errno, strerror(errno)); return(-2); } //printf("read %d\n", bytes); totalBytes += bytes; }#endif } printf("Total time to run test with %d packets of size %d was %d seconds\n", PKTS, (int)sizeof(buf), (int)(time(NULL) - deltaT)); return 0;}
Does just sending data TO the PC using SerialUSB.print at speeds 115200 or higher work flawlessly?
// USB Serial Transmit Bandwidth Test// Written by Paul Stoffregen, paul@pjrc.com// This benchmark code is in the public domain.//// Within 5 seconds of opening the port, this program// will send a message as rapidly as possible, for 10 seconds.//// To run this benchmark test, use serial_read.exe (Windows) or// serial_listen (Mac, Linux) program can read the data efficiently// without saving it.// You can also run a terminal emulator and select the option// to capture all text to a file. However, some terminal emulators// may limit the speed, depending upon how they update the screen// and how efficiently their code processes the imcoming data. The// Arduino Serial Monitor is particularly slow. Only use it to// verify this sketch works. For actual benchmarks, use the// efficient receive tests above.//// Full disclosure: Paul is the author of Teensyduino. //// Results can vary depending on the number of other USB devices// connected. For fastest results, disconnect all others.//#define USBSERIAL Serial // for Leonardo, Teensy, Fubarino#define USBSERIAL SerialUSB // for Due, Maplevoid setup(){ USBSERIAL.begin(115200);}void loop(){ // wait for serial port to be opened while (!USBSERIAL) ; // give the user 5 seconds to enable text capture in their // terminal emulator, or do whatever to get ready for (int n=5; n; n--) { USBSERIAL.print("10 second speed test begins in "); USBSERIAL.print(n); USBSERIAL.println(" seconds."); if (!USBSERIAL) break; delay(1000); } // send a string as fast as possible, for 10 seconds unsigned long beginMillis = millis(); do { USBSERIAL.print("USB Fast Serial Transmit Bandwidth Test, capture this text.\r\n"); } while (millis() - beginMillis < 10000); USBSERIAL.println("done!"); // after the test, wait forever doing nothing, // well, at least until the terminal emulator quits while (USBSERIAL) ;}
Please enter a valid email to subscribe
We need to confirm your email address.
To complete the subscription, please click the link in the
Thank you for subscribing!
Arduino
via Egeo 16
Torino, 10131
Italy | http://forum.arduino.cc/index.php?topic=173049.msg1286768 | CC-MAIN-2016-30 | en | refinedweb |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.